| ID | Date | Author | Subject |
|
550
|
Sun Apr 2 03:32:30 2017 |
DK, OH | Pulser walkthrough | 11:32 Pulser walkthrough, 90k down to 10k, 10k steps, 1 min per step
R26
11:41 stop run R26
11:44 Pulser walkthrough, 10k to 2k in 2k steps, 90 seconds per step
R27
11:53 stop run R27
11:56 Pulser walkthrough, 2k to 10k in 2k steps (90 sec each) & 10k to 40k in 10k steps (60 sec per step)
R28
12:05 Attachment 1 shows biases ("bias_2_a.png")
12:07 stop run R28
12:15 background run, R29
12:39 stop run R29. |
|
275
|
Sun Jun 5 01:20:39 2016 |
DK, AE, GL, PJW | Merger wants first SYNC | Trying to start the AIDA DAQ now that the RI beam is (nearly) ready. However, we cannot get merging to work. The merger status shows "Merge State = GOing : paused : xfer enabled : want first SYNC", but the last part ought to show "output paused" or, after merging begins, "ready for SYNC". We restarted the DAQ according to the procedure several times, but this part of the message does not change and the merging does not go live. |
|
312
|
Sun Jun 12 19:19:22 2016 |
DK | Machine Time End | 3:13 operators call to tell us the machine time is over. No more EURICA!
AIDA62_23 ought to be the last file with legitimate physics data
We will leave the DAQ running overnight as a b/g run. |
|
322
|
Tue Jul 5 09:39:27 2016 |
DK | Rly16 is not online | AIDA was relocated, and now I want to power it on to do some tests. However, I cannot get the Rly16 service running.
Specifically, after powering up the entire AIDA system, from the aidas1 PC I cannot connect to the Raspberry Pi (nnrpi1) via a web browser at
http://nnrpi1:8015/AIDA/Rly16/
The following elog may be relevant: https://elog.ph.ed.ac.uk/AIDA/58
Now I am ssh'd to nnrpi1 via aidas1
% dmesg | grep -A 3 USB0
[ 8.970098] usb 1-1.3: pl2303 converter now attached to ttyUSB0
[ 9.012479] usb 1-1.2.2: Detected FT232RL
[ 9.178208] usb 1-1.2.2: FTDI USB Serial Device converter now attached to ttyUSB1
[ 9.348288] ftdi_sio 1-1.2.3:1.0: FTDI USB Serial Device converter detected
So, I will try Patrick's workaround in elog #58
I tried it, though it's hard to know if it is done correctly. Which item is which? Or more importantly, which one runs Rly16?
[ 8.107290] usb 1-1.2.3: USB disconnect, device number 8
[ 8.261338] usbserial: USB Serial support registered for FTDI USB Serial Device
[ 8.541012] ftdi_sio 1-1.2.2:1.0: FTDI USB Serial Device converter detected
[ 8.712409] usb 1-1.2.2: Detected FT232RL
[ 8.847955] usb 1-1.2.2: FTDI USB Serial Device converter now attached to ttyUSB0
[ 12.252576] EXT4-fs (mmcblk0p2): re-mounted. Opts: (null)
[ 12.728966] EXT4-fs (mmcblk0p2): re-mounted. Opts: (null)
[ 23.695562] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
[ 25.212043] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0xC1E1
[ 32.372015] Adding 102396k swap on /var/swap. Priority:-1 extents:2 across:2134012k SSFS
[ 109.219022] usb 1-1.2.3: new full-speed USB device number 9 using dwc_otg
[ 109.335538] usb 1-1.2.3: New USB device found, idVendor=0403, idProduct=6001
[ 109.335579] usb 1-1.2.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 109.335601] usb 1-1.2.3: Product: USB <-> Serial
[ 109.335619] usb 1-1.2.3: Manufacturer: FTDI
[ 109.354420] ftdi_sio 1-1.2.3:1.0: FTDI USB Serial Device converter detected
[ 109.354761] usb 1-1.2.3: Detected FT232BM
[ 109.356174] usb 1-1.2.3: FTDI USB Serial Device converter now attached to ttyUSB1
[ 163.939476] usb 1-1.3: new full-speed USB device number 10 using dwc_otg
[ 164.042348] usb 1-1.3: New USB device found, idVendor=067b, idProduct=2303
[ 164.042424] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 164.042446] usb 1-1.3: Product: USB-Serial Controller D
[ 164.042468] usb 1-1.3: Manufacturer: Prolific Technology Inc.
[ 164.096379] usbcore: registered new interface driver pl2303
[ 164.098786] usbserial: USB Serial support registered for pl2303
[ 164.099085] pl2303 1-1.3:1.0: pl2303 converter detected
[ 164.106666] usb 1-1.3: pl2303 converter now attached to ttyUSB2
On ttyUSB0, ttyUSB1, and ttyUSB2 we now have:
FT232RL
FT232BM
pl2303
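To answer "which item is which", the dmesg lines above can be reduced to a small port map with a pipeline like this (a sketch; the sed pattern assumes the exact dmesg format pasted above):

```shell
# Reduce "now attached to ttyUSB..." kernel messages to: tty  usb-path  converter
map_ports() {
  grep 'now attached to ttyUSB' \
    | sed 's/^.*\] usb \([0-9.-]*\): \(.*\) now attached to \(ttyUSB[0-9]*\)$/\3 \1 \2/'
}

# Example with a line copied from the log (on the Pi one would run: dmesg | map_ports)
echo '[  109.356174] usb 1-1.2.3: FTDI USB Serial Device converter now attached to ttyUSB1' | map_ports
```

On nnrpi1 itself, `ls -l /sys/bus/usb-serial/devices/` should also give the same tty-to-USB-path mapping directly, without parsing dmesg.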
% lsusb
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
Bus 001 Device 004: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB
Bus 001 Device 010: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
Bus 001 Device 005: ID 046d:c016 Logitech, Inc. Optical Wheel Mouse
Bus 001 Device 007: ID 04f2:0402 Chicony Electronics Co., Ltd Genius LuxeMate i200 Keyboard
Bus 001 Device 006: ID 0403:6001 Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC
Bus 001 Device 009: ID 0403:6001 Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC
N.B. The USB-serial link to the FEE64 is not connected to anything on the serial side. What is the FEE64 Console? See Attachment 1. It goes to nothing! Should it go somewhere? |
|
323
|
Tue Jul 5 10:26:11 2016 |
DK | AIDA relocation pictures | Here are some pictures after AIDA's relocation.
In principle, it can stay here, but probably after the tests we will push it further against the wall. However, power and ethernet cabling cannot reach that far.
Attachment 1: Shows EURICA (original position), the BRIKEN moderator (near the old AIDA spot), and AIDA (pushed downstream, rotated 90 degrees counterclockwise)
Attachment 2: Zoom in on AIDA from above, nominally from the beam-left downstream perspective. The power supply is also visible on the BRIKEN moderator table.
Attachment 3: Further zoom of the above, near the FEE power-supply rack
Attachment 4: More upstream view, still on beam left, showing tube with SSDs as well as water cooler in background
Attachment 5: Beam right view
Attachment 6: Beam right view from more upstream perspective, also showing the full tower of electronics. |
|
329
|
Wed Jul 20 18:41:55 2016 |
DK | AIDA topped up on water | I topped off the chiller with water tonight, as I am returning to Scotland (and then off to CERN) within hours.
I added 4 liters around July 5th, moving the marker from a little under the halfway mark to above it. After 2.x weeks, it was still sitting only a bit under the halfway mark. This means that even in a hot, humid Japanese month like July, AIDA evaporates no more than something like 2 liters per week.
As it's now topped off, I think it's good to go for several months.
I also note that I used a faucet in the old Nishina Center to fetch the water; it's a bit more of a hike, but it avoids going through radiation control.
Anyway, the point of this elog post is that the AIDA chiller will not run out of water any time soon, probably not before October, though local members at RIKEN might check it each month or on request. I left the bucket and the large-volume container sitting right by it. I still don't know where the funnel is, so I borrowed ("robbed") a kind of 1-liter handled jug with a beaker from somewhere, and returned it. |
|
439
|
Tue Nov 8 22:04:21 2016 |
DK | NP1412 R10 | Put the variable Al degraders all back in for safety (was 4.4 mm, now 5.4 mm)
Takechi team switches the condition: F5 empty; F7 CH2; F11 empty
7:00 Reset Run Control, run Setup, everything looks good. Start the run
Data rate around 10 MB/s
7:04 Checking the online data rate with MIDAS, the event rate is nearly zero...but it comes back (presume Takechi team was playing with the beam condition)
Events come back...
F11 Plastic rate is 1.2kHz
Here is a sample of the offline analysis of event rates with all Al degraders in:
*** DSSSD # 1 count: 640 old count: 321 dt: 1.03 s HEC rate: 309.51 Hz
*** DSSSD # 2 count: 551 old count: 277 dt: 1.03 s HEC rate: 265.85 Hz
*** DSSSD # 3 count: 773 old count: 452 dt: 1.03 s HEC rate: 311.45 Hz
*** DSSSD # 4 count: 633 old count: 371 dt: 1.03 s HEC rate: 254.20 Hz
*** DSSSD # 5 count: 211 old count: 140 dt: 1.03 s HEC rate: 68.89 Hz
*** DSSSD # 6 count: 6 old count: 4 dt: 1.03 s HEC rate: 1.94 Hz
7:11 Remove 1 mm of Al variable degrader (back to 4.4 mm total before AIDA)
Offline rates from R10_3:
*** DSSSD # 1 count: 2596 old count: 2447 dt: 6.52 s HEC rate: 398.43 Hz
*** DSSSD # 2 count: 2193 old count: 2067 dt: 6.52 s HEC rate: 336.58 Hz
*** DSSSD # 3 count: 2641 old count: 2485 dt: 6.52 s HEC rate: 405.34 Hz
*** DSSSD # 4 count: 2626 old count: 2470 dt: 6.52 s HEC rate: 403.04 Hz
*** DSSSD # 5 count: 1940 old count: 1825 dt: 6.52 s HEC rate: 297.75 Hz
*** DSSSD # 6 count: 852 old count: 801 dt: 6.52 s HEC rate: 130.76 Hz
7:18 Leak currents shown as attachments 1 and 2
7:20 Notice that the data rate has gone to zero...
Follow along with https://elog.ph.ed.ac.uk/AIDA/303 to fix it. This looks good and the merger seems to come back successfully.
Trying to STOP on Run Control gave:
STATE for nnaida16 returned with an error
error: SOAP http transport timed out after 20000 ms
NONE
error: SOAP http transport timed out after 20000 ms
while executing
"$transport $procVarName $url $req"
(procedure "::SOAP::invoke" line 18)
invoked from within
"::SOAP::invoke ::SOAP::_DataAcquisitionControlClient__GetState"
("eval" body line 1)
invoked from within
"eval ::SOAP::invoke ::SOAP::_DataAcquisitionControlClient__GetState $args"
(procedure "DataAcquisitionControlClient__GetState" line 1)
invoked from within
"DataAcquisitionControlClient__GetState"
Try a Reset on Run Control (this is the only option at present)
nnaida16 showed "going" but after Reset shows "reset" like all the rest.
Now we get another error
JavaScript error occurred!
Error description: SyntaxError: missing ) after argument list
Page address: http://localhost:8015/DataAcquisitionControl/DataAcquisitionControl.tml
Line number: 203
nnaida16 shows error
STATE for nnaida16 returned with an error
connect failed
NONE
connect failed
while executing
"::http::geturl http://nnaida16:8015/DataAcquisitionControlServer -headers {} -type text/xml -timeout 20000 -query {<?xml version="1.0" encoding="UTF-8..."
("eval" body line 1)
invoked from within
"eval [list ::http::geturl $url] $args"
(procedure "::http::geturl_followRedirects" line 4)
invoked from within
"::http::geturl_followRedirects http://nnaida16:8015/DataAcquisitionControlServer -headers {} -type text/xml -timeout 20000 -query {<?xml version="1.0"..."
("eval" body line 1)
invoked from within
"eval ::http::geturl_followRedirects [list $url] -headers [list $local_headers] -type text/xml -timeout $timeout -query [list $request] $local_pro..."
(procedure "::SOAP::Transport::http::xfer" line 61)
invoked from within
"$transport $procVarName $url $req"
(procedure "::SOAP::invoke" line 18)
invoked from within
"::SOAP::invoke ::SOAP::_DataAcquisitionControlClient__GetState"
("eval" body line 1)
invoked from within
"eval ::SOAP::invoke ::SOAP::_DataAcquisitionControlClient__GetState $args"
(procedure "DataAcquisitionControlClient__GetState" line 1)
invoked from within
"DataAcquisitionControlClient__GetState"
GET OPTION for nnaida16 returned with an error
connect failed
NONE
connect failed
while executing
"::http::geturl http://nnaida16:8015/DataAcquisitionControlServer -headers {} -type text/xml -timeout 60000 -query {<?xml version="1.0" encoding="UTF-8..."
("eval" body line 1)
invoked from within
"eval [list ::http::geturl $url] $args"
(procedure "::http::geturl_followRedirects" line 4)
invoked from within
"::http::geturl_followRedirects http://nnaida16:8015/DataAcquisitionControlServer -headers {} -type text/xml -timeout 60000 -query {<?xml version="1.0"..."
("eval" body line 1)
invoked from within
"eval ::http::geturl_followRedirects [list $url] -headers [list $local_headers] -type text/xml -timeout $timeout -query [list $request] $local_pro..."
(procedure "::SOAP::Transport::http::xfer" line 61)
invoked from within
"$transport $procVarName $url $req"
(procedure "::SOAP::invoke" line 18)
invoked from within
"::SOAP::invoke ::SOAP::_DataAcquisitionControlClient__GetOption HistEnable"
("eval" body line 1)
invoked from within
"eval ::SOAP::invoke ::SOAP::_DataAcquisitionControlClient__GetOption $args"
(procedure "DataAcquisitionControlClient__GetOption" line 1)
invoked from within
"DataAcquisitionControlClient__GetOption "$AcqGetOption""
GET OPTION for nnaida16 returned with an error
connect failed
NONE
connect failed
while executing
"::http::geturl http://nnaida16:8015/DataAcquisitionControlServer -headers {} -type text/xml -timeout 60000 -query {<?xml version="1.0" encoding="UTF-8..."
("eval" body line 1)
invoked from within
"eval [list ::http::geturl $url] $args"
(procedure "::http::geturl_followRedirects" line 4)
invoked from within
"::http::geturl_followRedirects http://nnaida16:8015/DataAcquisitionControlServer -headers {} -type text/xml -timeout 60000 -query {<?xml version="1.0"..."
("eval" body line 1)
invoked from within
"eval ::http::geturl_followRedirects [list $url] -headers [list $local_headers] -type text/xml -timeout $timeout -query [list $request] $local_pro..."
(procedure "::SOAP::Transport::http::xfer" line 61)
invoked from within
"$transport $procVarName $url $req"
(procedure "::SOAP::invoke" line 18)
invoked from within
"::SOAP::invoke ::SOAP::_DataAcquisitionControlClient__GetOption Xfer2Enable"
("eval" body line 1)
invoked from within
"eval ::SOAP::invoke ::SOAP::_DataAcquisitionControlClient__GetOption $args"
(procedure "DataAcquisitionControlClient__GetOption" line 1)
invoked from within
"DataAcquisitionControlClient__GetOption "$AcqGetOption""
TapeServer was still running, so maybe that was a mistake...?
Power cycle... |
|
479
|
Sun Nov 27 02:46:54 2016 |
DK | Offline analysis of R18_2 | Run R18_2: 3 mm Pb, 0.5 mm W (+1 mm Al) and 0.3 mm W (+1 mm Al)
*** data items: 261088000 ( 105923.42 Hz)
*** ADC events: 34184359 ( 13868.60 Hz)
*** time warps: 0 ( 0.00 Hz)
*** DSSSD # 1 count: 5644505 old count: 5634720 dt: 289.12 s LEC rate: 19523.08 Hz
*** DSSSD # 2 count: 5319467 old count: 5310562 dt: 289.12 s LEC rate: 18398.85 Hz
*** DSSSD # 3 count: 2800065 old count: 2795460 dt: 289.12 s LEC rate: 9684.80 Hz
*** DSSSD # 4 count: 2458077 old count: 2454093 dt: 289.12 s LEC rate: 8501.94 Hz
*** DSSSD # 5 count: 1903393 old count: 1900323 dt: 289.12 s LEC rate: 6583.41 Hz
*** DSSSD # 6 count: 3534753 old count: 3528835 dt: 289.12 s LEC rate: 12225.92 Hz
*** DSSSD # 1 count: 19 old count: 19 dt: 289.12 s HEC rate: 0.07 Hz
*** DSSSD # 2 count: 17 old count: 17 dt: 289.12 s HEC rate: 0.06 Hz
*** DSSSD # 3 count: 89 old count: 89 dt: 289.12 s HEC rate: 0.31 Hz
*** DSSSD # 4 count: 195 old count: 195 dt: 289.12 s HEC rate: 0.67 Hz
*** DSSSD # 5 count: 290 old count: 289 dt: 289.12 s HEC rate: 1.00 Hz
*** DSSSD # 6 count: 266 old count: 265 dt: 289.12 s HEC rate: 0.92 Hz
*** ENTRY finish ends
S O R T S T O P P E D ..... Sun Nov 27 11:42:52 2016
|
|
489
|
Mon Nov 28 05:38:39 2016 |
DK | Offline analysis R28_14 | Conditions: Two Pb walls with empty windows. 3mm Pb degrader between MUSIC 1 and MUSIC 2.
Variable degraders: 1.0 mm W (+1 mm Al) and 0.3 mm W (+1 mm Al)
*** DSSSD # 1 count: 5901917 old count: 5892055 dt: 286.62 s LEC rate: 20591.57 Hz
*** DSSSD # 2 count: 5564387 old count: 5555277 dt: 286.62 s LEC rate: 19413.94 Hz
*** DSSSD # 3 count: 2995556 old count: 2990474 dt: 286.62 s LEC rate: 10451.38 Hz
*** DSSSD # 4 count: 2453137 old count: 2448680 dt: 286.62 s LEC rate: 8558.90 Hz
*** DSSSD # 5 count: 1926370 old count: 1923142 dt: 286.62 s LEC rate: 6721.03 Hz
*** DSSSD # 6 count: 3550119 old count: 3544196 dt: 286.62 s LEC rate: 12386.23 Hz
*** DSSSD # 1 count: 76 old count: 76 dt: 286.62 s HEC rate: 0.27 Hz
*** DSSSD # 2 count: 94 old count: 94 dt: 286.62 s HEC rate: 0.33 Hz
*** DSSSD # 3 count: 118 old count: 118 dt: 286.62 s HEC rate: 0.41 Hz
*** DSSSD # 4 count: 74 old count: 74 dt: 286.62 s HEC rate: 0.26 Hz
*** DSSSD # 5 count: 48 old count: 48 dt: 286.62 s HEC rate: 0.17 Hz
*** DSSSD # 6 count: 17 old count: 17 dt: 286.62 s HEC rate: 0.06 Hz
*** ENTRY finish ends
S O R T S T O P P E D ..... Mon Nov 28 14:20:29 2016
See attached histograms |
|
490
|
Mon Nov 28 06:49:04 2016 |
DK | Offline analysis R27_1 | Conditions: One Pb wall with empty windows. 3mm Pb degrader closer to AIDA, on the front of PE wall
Variable degraders: 1.0 mm W (+1 mm Al) and 0.3 mm W (+1 mm Al)
*** data items: 261088000 ( 110875.14 Hz)
*** ADC events: 39427988 ( 16743.72 Hz)
*** time warps: 0 ( 0.00 Hz)
*** DSSSD # 1 count: 5617273 old count: 5603383 dt: 327.21 s LEC rate: 17167.27 Hz
*** DSSSD # 2 count: 6007495 old count: 5992656 dt: 327.21 s LEC rate: 18359.85 Hz
*** DSSSD # 3 count: 3154280 old count: 3146595 dt: 327.21 s LEC rate: 9639.98 Hz
*** DSSSD # 4 count: 2713744 old count: 2706934 dt: 327.21 s LEC rate: 8293.63 Hz
*** DSSSD # 5 count: 1977642 old count: 1972985 dt: 327.21 s LEC rate: 6043.99 Hz
*** DSSSD # 6 count: 2943065 old count: 2935788 dt: 327.21 s LEC rate: 8994.47 Hz
*** DSSSD # 1 count: 128 old count: 128 dt: 327.21 s HEC rate: 0.39 Hz
*** DSSSD # 2 count: 174 old count: 174 dt: 327.21 s HEC rate: 0.53 Hz
*** DSSSD # 3 count: 236 old count: 236 dt: 327.21 s HEC rate: 0.72 Hz
*** DSSSD # 4 count: 158 old count: 158 dt: 327.21 s HEC rate: 0.48 Hz
*** DSSSD # 5 count: 80 old count: 80 dt: 327.21 s HEC rate: 0.24 Hz
*** DSSSD # 6 count: 29 old count: 29 dt: 327.21 s HEC rate: 0.09 Hz
*** ENTRY finish ends
S O R T S T O P P E D ..... Mon Nov 28 15:32:15 2016
See attached histograms |
|
509
|
Thu Dec 1 00:21:08 2016 |
DK | Offline analysis of R42_7 | Checking that the implantation is still reasonable after the LN2 fill recovery.
The degrader condition is 8. + 0.2 mm Al on the upstream side of the PE wall.
All three variable degraders are in (0.5 mm W; 0.3 mm W; 1.0 mm W, each on a 1 mm Al backing).
Confirmed the degrader settings at 10:11 (right before a change was made after this run...)
*** data items: 261088000 ( 125355.46 Hz)
*** ADC events: 32885891 ( 15789.41 Hz)
*** time warps: 0 ( 0.00 Hz)
*** DSSSD # 1 count: 5369750 old count: 5351200 dt: 283.23 s LEC rate: 18958.97 Hz
*** DSSSD # 2 count: 4958223 old count: 4941299 dt: 283.23 s LEC rate: 17505.99 Hz
*** DSSSD # 3 count: 2872405 old count: 2862650 dt: 283.23 s LEC rate: 10141.59 Hz
*** DSSSD # 4 count: 2367926 old count: 2359880 dt: 283.23 s LEC rate: 8360.43 Hz
*** DSSSD # 5 count: 2063809 old count: 2056797 dt: 283.23 s LEC rate: 7286.69 Hz
*** DSSSD # 6 count: 3582944 old count: 3570809 dt: 283.23 s LEC rate: 12650.29 Hz
*** DSSSD # 1 count: 61 old count: 61 dt: 283.23 s HEC rate: 0.22 Hz
*** DSSSD # 2 count: 82 old count: 82 dt: 283.23 s HEC rate: 0.29 Hz
*** DSSSD # 3 count: 111 old count: 111 dt: 283.23 s HEC rate: 0.39 Hz
*** DSSSD # 4 count: 71 old count: 71 dt: 283.23 s HEC rate: 0.25 Hz
*** DSSSD # 5 count: 42 old count: 42 dt: 283.23 s HEC rate: 0.15 Hz
*** DSSSD # 6 count: 9 old count: 9 dt: 283.23 s HEC rate: 0.03 Hz
*** ENTRY finish ends
S O R T S T O P P E D ..... Thu Dec 1 09:11:23 2016
See attached histograms
Figure 5 shows low energy front and back. Comparing to the R41_2 offline analysis (i.e. https://elog.ph.ed.ac.uk/AIDA/507 ), we see more fluctuation, seemingly of light ions. R41_2 seems to show mostly noise (anti-correlation), whereas here we see real ions (positive correlation). This is confirmed in our Figure 4, where the statistics of DSSD#3 appear roughly doubled. The event rate per unit time in R41_2 shows a relatively flat-top behavior characteristic of noise domination, whereas the present run shows more jitter that could be characterized by light-ion production and transport variation.
|
|
510
|
Thu Dec 1 02:25:41 2016 |
DK | Offline analysis of R42_32 | Offline analysis of AIDA R42_32.
The F11 degrader condition has the same stopping power as e.g. R42_7 (see https://elog.ph.ed.ac.uk/AIDA/509).
5 mm Al between MUSIC 1 and MUSIC 2
3. + 0.2 = 3.2 mm Al on the upstream side of the PE wall
The Pb wall gap between MUSIC 1 and MUSIC 2 was reduced from 7.5 cm to ~6.5 cm (beam-right side pushed in 1 cm, so asymmetric)
(Variable degrader system remains unchanged with 1,2,3 in: 0.5 mm W + 0.3 mm W + 1.0 mm W (+3 mm Al backings))
*** data items: 261088000 ( 118111.81 Hz)
*** ADC events: 36442207 ( 16485.84 Hz)
*** time warps: 0 ( 0.00 Hz)
*** DSSSD # 1 count: 5340742 old count: 5338849 dt: 304.98 s LEC rate: 17511.91 Hz
*** DSSSD # 2 count: 4962731 old count: 4960982 dt: 304.98 s LEC rate: 16272.43 Hz
*** DSSSD # 3 count: 2885062 old count: 2884042 dt: 304.98 s LEC rate: 9459.91 Hz
*** DSSSD # 4 count: 2336560 old count: 2335778 dt: 304.98 s LEC rate: 7661.41 Hz
*** DSSSD # 5 count: 2003149 old count: 2002489 dt: 304.98 s LEC rate: 6568.18 Hz
*** DSSSD # 6 count: 3645726 old count: 3644468 dt: 304.98 s LEC rate: 11954.07 Hz
*** DSSSD # 1 count: 56 old count: 56 dt: 304.98 s HEC rate: 0.18 Hz
*** DSSSD # 2 count: 62 old count: 62 dt: 304.98 s HEC rate: 0.20 Hz
*** DSSSD # 3 count: 87 old count: 87 dt: 304.98 s HEC rate: 0.29 Hz
*** DSSSD # 4 count: 60 old count: 60 dt: 304.98 s HEC rate: 0.20 Hz
*** DSSSD # 5 count: 37 old count: 37 dt: 304.98 s HEC rate: 0.12 Hz
*** DSSSD # 6 count: 9 old count: 9 dt: 304.98 s HEC rate: 0.03 Hz
*** ENTRY finish ends
S O R T S T O P P E D ..... Thu Dec 1 11:06:21 2016
See attached histograms.
Figure 1: The time structure of low-energy events seems more normal, with a flat top and similar intensity to R41_2 (see https://elog.ph.ed.ac.uk/AIDA/507 ) rather than to R42_7, though one wonders if this is a real effect or if R42_7 was somehow unusual.
Figure 2: We can confirm that the number of real events corresponding to light ions seems reduced compared to R42_7 Fig. 4.
Figure 5: The XY distribution of low-gain [heavy ion / high energy] events is still weighted to the right-hand side as usual, but seems slightly more homogeneous than previous cases such as R42_7. This is corroborated by the scalers pasted above, where the implant rate in DSSD#3 is about 0.3 Hz rather than the 0.4 Hz of R42_7; this is probably a real decrease and not merely a statistical effect, as R41_2 also shows around 0.4 Hz. I conclude that the heavy-ion implant rate is reduced by about 1/3 from R42_7, most probably from the Pb bricks cutting some fraction of the beam (though in theory it could be from shifting 5 mm of the Al degrader upstream by ~2 m and the angular straggling of the heavy ions in the plate).
A specific comparison of the implant rates shows we are dropping 0.1 cps in DSSD#2, 0.1 cps in DSSD#3, and maybe 0.05 Hz (is that a meaningful number, or consistent with zero?) in DSSD#4, which could give an idea of what ion species are being cut out if we looked at the BigRIPS PID correlated with AIDA implantation. |
|
515
|
Fri Dec 2 06:41:25 2016 |
DK | Benchmarks for compression | Used R45_17 as the test file; it was closed at 9:02 AM on Friday, December 2nd, at the end of the official parasitic machine time on Fallon et al.
===Summarized results===
Initial file is 2.0 GB of AIDA data
LZMA: 862M; Time: 23 min
BZ2: 1.3G; Time: 6.5 min
GZ: 1.3G; Time: 3 min
The LZMA result is about as I expected: fantastic compression, but it takes a very long time to pack the data.
The BZ2 result is a bit surprising; in my experience it is usually about 10-15% smaller than GZ for this kind of data.
I conclude that we should use GZ (which we are already doing, but now we have confirmed that it is the best
trade-off of time versus disk space usage).
15:49 We are continuing to compress the data; presently we are somewhere around R39_20, sequentially. I
estimate that there are 578 runs remaining and naively take each as 2 GB (some at the end of each RXX_ series may be smaller).
Compression at 3 minutes each then takes about 29 hours, so it should be finished by tomorrow evening, say around
20:00 or a little later, depending on the fluctuation.
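The 29-hour figure is just runs times minutes-per-run; a quick sketch of the arithmetic (the run count and per-file time are the naive estimates above):

```shell
# Naive ETA for the remaining compression backlog
runs=578          # estimated files remaining
min_per_run=3     # gzip time for a ~2 GB file, from the benchmark in this entry
total_min=$((runs * min_per_run))
echo "${total_min} min = ~$((total_min / 60)) h"   # 1734 min, i.e. just under 29 h
```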
===Full details of the test===
npg@aidas1 ~/benchmarks % ls -altr
total 2048024
-rw-r--r--. 1 npg npgstaff 2097152000 Dec 2 13:58 R45_17
drwxrwxr-x. 55 npg users 4096 Dec 2 14:01 ..
drwxr-xr-x. 2 npg npgstaff 4096 Dec 2 14:01 .
First test with lzma
npg@aidas1 ~/benchmarks % time tar cvf R45_17.lzma --lzma R45_17
R45_17
tar cvf R45_17.lzma --lzma R45_17 1388.10s user 15.63s system 100% cpu 23:13.94 total
npg@aidas1 ~/benchmarks % ls -altrh
total 2.8G
-rw-r--r--. 1 npg npgstaff 2.0G Dec 2 13:58 R45_17
drwxrwxr-x. 55 npg users 4.0K Dec 2 14:08 ..
-rw-r--r--. 1 npg npgstaff 862M Dec 2 14:31 R45_17.lzma
npg@aidas1 ~/benchmarks %
As expected, the compression ratio is very good (>50% reduction), but this is much too slow to be practical.
Next we attempt bz2:
npg@aidas1 ~/benchmarks % time tar cvjf R45_17.tar.bz2 R45_17
R45_17
tar cvjf R45_17.tar.bz2 R45_17 375.22s user 6.68s system 97% cpu 6:31.74 total
npg@aidas1 ~/benchmarks % ls -altrh
total 4.1G
-rw-r--r--. 1 npg npgstaff 2.0G Dec 2 13:58 R45_17
-rw-r--r--. 1 npg npgstaff 862M Dec 2 14:31 R45_17.lzma
-rw-r--r--. 1 npg npgstaff 737 Dec 2 14:31 results.txt
drwxrwxr-x. 55 npg users 4.0K Dec 2 14:32 ..
-rw-------. 1 npg npgstaff 12K Dec 2 14:33 .results.txt.swp
drwxr-xr-x. 2 npg npgstaff 4.0K Dec 2 14:34 .
-rw-r--r--. 1 npg npgstaff 1.3G Dec 2 14:40 R45_17.tar.bz2
npg@aidas1 ~/benchmarks % time tar cvzf R45_17.tar.gz R45_17
R45_17
tar cvzf R45_17.tar.gz R45_17 188.01s user 6.53s system 100% cpu 3:13.98 total
npg@aidas1 ~/benchmarks % ls -altrh
total 5.4G
-rw-r--r--. 1 npg npgstaff 2.0G Dec 2 13:58 R45_17
-rw-r--r--. 1 npg npgstaff 862M Dec 2 14:31 R45_17.lzma
-rw-r--r--. 1 npg npgstaff 737 Dec 2 14:31 results.txt
drwxrwxr-x. 55 npg users 4.0K Dec 2 14:32 ..
-rw-r--r--. 1 npg npgstaff 1.3G Dec 2 14:40 R45_17.tar.bz2
drwxr-xr-x. 2 npg npgstaff 4.0K Dec 2 14:44 .
-rw-r--r--. 1 npg npgstaff 1.3G Dec 2 14:47 R45_17.tar.gz
-rw-------. 1 npg npgstaff 12K Dec 2 14:50 .results.txt.swp
Conclusion: gzip is the best in terms of time efficiency. |
|
517
|
Fri Dec 2 09:47:45 2016 |
DK | F8 targets | F8 targets, for posterity.
Plastic: 39~40 mm
Carbon: 20. mm |
|
519
|
Fri Dec 2 09:58:54 2016 |
DK | Simple PID with LISE++ | I wrote this LISE++ file based on the Takechi .lpp modified by AE for 77Ni, 78Ni, 76Ni.
Basically, I changed the primary beam to 48Ca, reduced the production target thickness to 1 mm (I think that may
be correct), inserted the carbon target at F8, set the RIB to 41Al, and had it calculate the optics. Then I
hand-modified the D7 and D8 Brho settings to about 6.32 Tm, which is near the correct value (for plastic,
but I forget for carbon; slightly different, perhaps?)
Then I made a fairly arbitrary dE-ToF plot (attachment 1), without any concern for what data it thinks is filling
the plot, and ran the Monte Carlo simulation.
This may give a user some kind of naive picture of what sorts of ions we might expect, with a basic clue about
their relative intensities or how improbable they are to detect.
I did this work in literally about 5 minutes total, so do not take it too seriously. It is definitely not
right, but it is not total nonsense either. You can play with the .lpp file (attachment 2) if you want to
improve it. You should definitely get the degraders used within BigRIPS correct; I imagine they are much thicker, because D6 at 8.42 Tm is much too high (it was more like
7.5 Tm or so?). The other things you can (and should) add are items at F11 such as our degraders, AIDA, and so on. I heard Gabor might do this at some point, but he may have become
very busy with other tasks.
Attachment 3 shows some relative intensities numerically. Again, you should not really trust this.
However, let's take some known experimental values.
I recall that at the BigRIPS F11 plastic scaler, when we did the 41Al unreacted-beam measurement, the rate was about 30 cps = 108000 counts per hour.
I think we had several counts per hour of 40Mg reaching F11 in the best case.
That means the 41Al to 40Mg ratio is of order 10^5.
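A quick check of that ratio arithmetic (taking "several" as ~3 counts per hour of 40Mg, which is an assumption):

```shell
# 30 cps at the F11 plastic scaler -> counts per hour of 41Al
al41_per_hour=$((30 * 3600))    # 108000 / h
mg40_per_hour=3                 # "several" counts per hour of 40Mg (assumed)
echo "ratio ~ $((al41_per_hour / mg40_per_hour))"   # a few x 10^4, consistent with order 10^5 given the uncertainty
```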
Surprisingly, this is nearly the same as the results of this quick LISE++ simulation (attachment 3 has numbers) |
|
545
|
Wed Mar 29 17:49:36 2017 |
DK | cshrc hacking w/ MIDAS | The ~/.cshrc was modified to change the MIDAS path for a convenient analysis method, but this should really be done in a shell script instead, as it pushes the MIDAS base path to all instances
of MIDAS, such as the Run Control, etc.
As we did not want to log out, and the X session is somehow setting these globally (unexpected), I made a temporary hack.
The directory we did not want, MIDAS@aidas.240215 was moved to MIDAS@aidas.240215.hacking
Then a symbolic link called MIDAS@aidas.240215 was created to point at MIDAS@aidas
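The hack amounts to a move plus a symlink; reproduced here against a scratch directory (directory names from this entry, real paths not shown):

```shell
# Rehearse the symlink swap in a throwaway directory
demo=$(mktemp -d) && cd "$demo"
mkdir 'MIDAS@aidas' 'MIDAS@aidas.240215'
mv 'MIDAS@aidas.240215' 'MIDAS@aidas.240215.hacking'   # park the unwanted tree
ln -s 'MIDAS@aidas' 'MIDAS@aidas.240215'               # the old name now points at the live tree
ls -l 'MIDAS@aidas.240215'
```

Undoing it later is the reverse: remove the symlink and restore the `.hacking` directory to its original name.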
We should remember to undo this later and fix the situation. |
|
551
|
Sun Apr 2 04:40:00 2017 |
DK | BigRIPS Summary Info | It is all saved locally under file:///homes/npg/Documents/2017Ca/
A little more hacking is required, though, to get the view.html webpage to let you click through each run.
Whether or not it is useful I don't know, but it was generated by a shell script,
/homes/npg/Documents/bigrips_summary.sh, which can be modified and reused for future experiments. |
|
640
|
Tue Jun 6 15:59:19 2017 |
DK | Wed 7 June night shift | 7 June 2017
00:05 All system-wide checks are passed
Sync pulses near 44 x 10^6
Memory 35k ~ 38k
Biases uploaded as 33.png, 34.png - attachments 1 & 2
Entered into spreadsheet
SSD 3 bias is going up slowly since this isotope setting began
Around "2 to 4" Hz in SSDs 3 and 4 -> 50 to 80 pps implant rate
See attachment 3, 35.png
All rates under 100k except nnaida 13, 21
See attachment 4, 36.png
FEE temperatures shown as 37.png, attachment #5
00:30 BRIKEN number 32, AIDA R6_299, BigRIPS 3032
01:01 Timewarps look okay (only 2 MBS seen in FEE 6)
See R6_307 - attachment #6
01:30 BRIKEN and BigRIPS runs stopped. AIDA R6_317 at that time.
01:31 AIDA R6_318, BRIKEN run #33, BigRIPS #3033
All system-wide checks passed.
SYNC pulses 46 x 10^6
Memory 35 ~ 39k
02:08 Biases shown as 38.png, attachment #7
02:34 AIDA run R6_337 when BRIKEN and BigRIPS runs are stopped.
AIDA passes all system-wide checks.
02:37 AIDA just begins R6_338. BRIKEN run #34, BigRIPS #3034
Timewarps look okay, see attachment #8, R6_336
Statistics consistent, see attachment #9, 39.png
Implant rate around 75 cps
03:31 AIDA run R6_354 at the stop of BRIKEN and BigRIPS runs
03:34 AIDA R6_355, BRIKEN run #35, BigRIPS #3035
AIDA passed all system checks. Merger shows low rate for nnaida8, but this is consistent for tonight.
4:03 Biases shown as 40.png, attachment #10.
4:08 Stats shown as 41.png, attachment #11
As before, only nnaida13 and nnaida21 show over 100k, and nnaida8 is quite low.
4:30 AIDA run R6_372 at the close of BRIKEN and BigRIPS runs.
4:32 AIDA R6_373 with new BRIKEN run #36 and BigRIPS #3036.
All system-wide checks okay.
5:38 AIDA R6_393 at the stop of BRIKEN and BigRIPS runs
All system-wide checks okay.
5:39 AIDA still R6_393, BRIKEN run #37 and BigRIPS run #3037
6:00 Leak currents shown as 42.png, attachment #12
6:16 Beam stopped so we can check the signal of one Ge detector that became noisy
R6_404 at that time
Added some Al foil to the Ge signal cables
6:28 Beam is back R6_408
6:29 BRIKEN run #41, BigRIPS #38
7:28 AIDA R6_426 at stop of other runs
System-wide checks are okay.
7:30 AIDA R6_427. BRIKEN #42, BigRIPS #39
Implant rates 60 ~ 70 cps in AIDA
One MBS timewarp in R6_426, as attachment #13
7:57 Leak currents shown as attachment #14, 43.png |
|
651
|
Sat Jun 10 02:40:18 2017 |
DK | Sat 10 June 8:00 ~ 16:00 | 10:40 Put lead wall in front of AIDA during beam tuning.
12:30 BNC PB-4 Pulser OFF
AIDA file RIBF128/R7_998
12:31 E17/F11 ambient temperature +24.7 deg C, RH 42.2%, d.p. +10.7 deg C
Julabo FL11006 set point +20 deg C, actual +20.0 deg C, water level OK (c. 70%)
12:53 analysis RIBF128/R7_1002 <0.5% dead-time, zero ts & MBS timewarps - see attachment 1
13:12 R7_1006 is the last run with 100% no pulser. Pulser turned on around 400 MB into R7_1007 |
|