PIRE-GEMADARC - Shared screen with speaker view
Litao Yang
There are many cuts, including basic ones such as pedestal cuts and correlations between amplitudes from different channels. Beyond those, the most important cut is the bulk event/surface event selection, which suppresses the background level significantly. You can find more details in the following papers: https://doi.org/10.1088/1674-1137/42/2/023002 https://doi.org/10.1016/j.nima.2017.12.078 https://doi.org/10.1103/PhysRevLett.120.241301
Julieta Gruszko (she/her)
The surface/bulk event selection that works so well near threshold was what I was most curious about, thank you!
Litao Yang
OK. You can find the details of bulk/surface selection in the second paper. Thank you!
Dongming Mei
After this session, before we take a break, please turn on your camera; we will take a photo of all participants. If you don’t wish to be in the photo, you don’t need to turn on your camera. Thanks.
Harris, Harlan R
John, that "not quite dead" layer that shows some signal because of diffusion, I assume you can see the impact of different time constants? Does Debye length impact that?
Joel Sander
The Education group would appreciate young members’ input on mentorship. We are considering different ways to provide you resources, and we want your input on what form of mentorship would be most useful to you.
John Wilkerson (he/him)
Rusty, it is difficult to give a short answer regarding diffusion and its impact on the signal. If there is an interaction in the diffusion layer, then yes, the charge collection will be slower the closer it is to the true “dead” layer. However, the time scale of the diffusion is much longer than for a typical pulse in the active bulk. We should probably schedule a discussion at one of our upcoming monthly research meetings.
Zhenyu Zhang (Tsinghua Univ.)
I have a question. Do you treat the waveforms as normal pictures in the process? Or is there some special treatment? Thanks!
Aobo Li
We normalize the waveform by dividing by the maximum amplitude, and then feed it into a 1D convolutional layer, so it’s not a picture but a 1D sequence per event
Esteban Leon
I actually treat the waveforms as vectors (3404 samples per observation). Then the autoencoder performs several 1-D convolutions along with other operations to reconstruct the values of the waveform.
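For readers following along, here is a minimal sketch of the preprocessing both answers describe: treat the waveform as a 1D vector (the 3404-sample length comes from the chat), divide by the peak amplitude, and apply a 1D convolution as the building block of a convolutional layer. The pulse shape and kernel here are hypothetical illustrations, not collaboration code.

```python
import numpy as np

def normalize_waveform(wf):
    """Scale a raw waveform so its peak amplitude is 1
    (divide by the maximum amplitude, as described above)."""
    peak = np.max(np.abs(wf))
    return wf / peak if peak > 0 else wf

# A toy digitized pulse: 3404 samples per observation, as in the chat.
t = np.arange(3404)
raw = 250.0 * np.exp(-((t - 1200) / 300.0) ** 2)  # arbitrary ADC units

wf = normalize_waveform(raw)
print(wf.shape, wf.max())  # (3404,) 1.0

# One 1D convolution with a hypothetical 8-sample kernel — the basic
# operation a 1D convolutional layer applies many of in parallel.
kernel = np.ones(8) / 8.0
smoothed = np.convolve(wf, kernel, mode="same")
print(smoothed.shape)  # (3404,)
```

An autoencoder would stack several such convolutions (with learned kernels) to compress and then reconstruct the 3404 values.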
Zhenyu Zhang (Tsinghua Univ.)
Oh OK, so there is normalization. I missed that point just now. Thanks!
Zhenyu Zhang (Tsinghua Univ.)
Because if you also consider the amplitude as a feature, and the waveforms have the same Y-axis range and time range, it seems reasonable too
Julieta Gruszko (she/her)
Yes, that is another approach one could take. We normalize waveforms because we’d prefer that our clustering not be by energy
Julieta Gruszko (she/her)
If you give the autoencoder amplitude information, there’s a good chance that feature will overwhelm all others, and you’ll just create groupings based on energy
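A small numerical illustration of that point (hypothetical pulses, not collaboration data): two events with identical pulse shape but different energies are far apart in raw-amplitude space, and indistinguishable after dividing each by its peak.

```python
import numpy as np

t = np.arange(1000)
shape = np.exp(-((t - 400) / 80.0) ** 2)  # one fixed pulse shape

low_e = 50.0 * shape    # low-energy event
high_e = 800.0 * shape  # same shape, higher energy

# Without normalization, the distance between the events is
# dominated by the amplitude (energy) difference:
raw_dist = np.linalg.norm(high_e - low_e)
print(raw_dist)  # large

# After dividing by the peak amplitude, the energy information is
# removed and only the pulse shape remains:
norm = lambda wf: wf / np.max(np.abs(wf))
norm_dist = np.linalg.norm(norm(high_e) - norm(low_e))
print(norm_dist)  # 0.0
```

So an autoencoder fed the raw waveforms would cluster by energy first, which is exactly what the normalization avoids.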
Aobo Li
If you do it that way, then you might fall into the regime of sparse images, and that might affect the performance of the CAE
Aobo Li
Because you only get 1 value per Y
Zhenyu Zhang (Tsinghua Univ.)
Good point, thanks!
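The sparsity point above can be made concrete: rasterizing a waveform into a 2D image lights up exactly one pixel per time column, so almost the entire image is empty. This is a hypothetical illustration with made-up sizes, not code from the talks.

```python
import numpy as np

t = np.arange(256)
wf = np.exp(-((t - 100) / 30.0) ** 2)  # normalized pulse, values in [0, 1]

# Rasterize into a 64-row image: each time sample maps its amplitude
# to one row, so each column contains exactly one nonzero pixel.
n_rows = 64
rows = np.clip((wf * (n_rows - 1)).astype(int), 0, n_rows - 1)
img = np.zeros((n_rows, t.size))
img[n_rows - 1 - rows, t] = 1.0

# Fill fraction is 1/n_rows regardless of pulse shape:
print(img.mean())  # 1/64, i.e. ~98.4% of pixels are empty
```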
Joel Sander
Reminder to students and postdocs: PLEASE fill out the mentorship survey to help us know how to help you: https://docs.google.com/forms/d/e/1FAIpQLSfyGrZ2rO9HfjqMHbKDlMuDn-MCq_C9EDJ1JHHlll9rA9Trrw/viewform?usp=sf_link
Nader Mirabolfathi
Very nice set of talks.