Dr. Marta Babicz

Particle Physicist • Neutrinos • Dark Matter • AI Methods

Particle Physicist

Rare-event searches

I’m a physicist specializing in experimental particle physics, currently working at the University of Zurich (UZH) as a postdoctoral researcher. I explore some of the most elusive pieces of our universe: neutrinos and dark matter. My scientific journey began at AGH University of Science and Technology in Krakow, where I first set out to become an engineer. During those studies I met Dr. Tomasz Plazak, whose lectures in cosmology and particle physics were so inspiring and energizing that I shifted toward applied physics, with a clear goal of working at CERN and investigating the particles we understood the least.

Along the way I was guided by generous mentors whose passion was contagious. When the opportunity to begin a PhD arose, I didn’t hesitate. The path wasn’t always easy and it certainly brought doubts, but it taught me invaluable lessons (well beyond physics) for which I’m deeply grateful.

Throughout my career I have also had the privilege of spending time at leading particle-physics laboratories, including CERN, Fermilab, the Laboratori Nazionali del Gran Sasso (LNGS), and J-PARC. These experiences shaped how I design experiments, handle ultra-rare signals, and turn tiny fluctuations into robust evidence.

Today I collaborate with fantastic colleagues on searches for neutrinoless double-beta decay and dark-matter candidates, while also exploring how AI-assisted methods can help us design better analyses, accelerate discovery, and even become a research topic in their own right. I am passionate about integrating AI thoughtfully into research and education to enhance how we learn, teach, and do science, without replacing our most human abilities.

Outside the lab, sport is my recharge: I’m passionate about almost any discipline, but these days body combat and weightlifting empower me most, and I start every day with yoga.

Selected talks

  • (*Invited) Advancing the Search for Neutrinoless Double-Beta Decay with LEGEND; FPD Seminar, SLAC National Accelerator Laboratory, USA; 06 May 2025
  • Enhancing event discrimination in LEGEND-200 with transformer; Neutrino Physics & Machine Learning, ETH Zurich, Switzerland; 26 Jun 2024
  • (*Invited) Probing the Secrets of Matter Creation with LEGEND; MIT LNS Seminar, Cambridge, USA; 05 Mar 2024
  • Calibration System Mechanics; Conceptual Design Review of LEGEND, LNGS, Italy; 10–12 Jan 2024
  • Advancing Liquid Xenon Dark Matter Detection with the DARWIN Observatory; LIDINE 2023, Madrid, Spain; 20 Sep 2023
  • Status of the LEGEND experiment; Annual Meeting of the Swiss Physical Society, Basel, Switzerland; 07 Sep 2023
  • (*Invited) Design & Optimization of the SiPM Array for Event Reconstruction in DARWIN; IPMU/ICRR/ILANCE Seminars, Tokyo, Japan; 05 Jul 2023
  • (*Invited) Event filtering and mitigation of simulation biases; IPA-ML Workshop, ETH Zurich, Switzerland; 20–22 Mar 2023
  • The light detection system of the ICARUS detector in the SBN program; LIDINE 2022, Warsaw, Poland; 21–23 Sep 2022
  • The ICARUS detector for the SBN experiment at Fermilab; 28th Cracow Epiphany Conference, Krakow, Poland; 10–14 Jan 2022
Portrait of Dr. Marta Babicz — outreach talk
Speaking & outreach are a big part of my work.

Awards & grants

  • Forschungskredit (UZH); University of Zurich research grant, 2025.
  • NSF HDR ML Challenge; A3D3 Gravitational Wave category; 3rd place (team: Marta Babicz & Saul Alonso-Monsalve), 2024. Challenge organized by Imageomics, A3D3, and iHarp; result featured Mar 12, 2025.
  • GRC Short Grant; CHF 2,300 to organize the “Pulse Shape Simulation Workshop for the LEGEND Experiment,” UZH, 2025.
  • Fulbright Program; Distinguished Reviewer Award for the Polish-U.S. Fulbright Program, 2024–25.
  • ETH Zurich Interdisciplinary Symposium; 1st prize for talk “From LEGEND to Reality: Cracking the Code of Creation,” 2023.
  • PRELUDIUM 17 (National Science Centre, Poland); pre-doctoral research grant; PI; budget 115,767 PLN; ranked 1st, 2019.
  • AGH Diamonds; Best Thesis Award; 3rd place, AGH University, 2018.

Selected publications

  1. J. Aalbers et al., “Model-independent searches of new physics in DARWIN with a semi-supervised deep learning pipeline,” 2024. arXiv:2410.00755
  2. M. Adrover et al., “Cosmogenic background simulations for neutrinoless double beta decay with the DARWIN observatory at various underground sites,” Eur. Phys. J. C 84, 88 (2024). DOI:10.1140/epjc/s10052-023-12298-w
  3. A. A. Abud et al., “Separation of track- and shower-like energy deposits in ProtoDUNE-SP using a convolutional neural network,” Eur. Phys. J. C 82 (10), 903 (2022). DOI:10.1140/epjc/s10052-022-10791-2
  4. M. Babicz et al., “Adversarial methods to reduce simulation bias in neutrino interaction event filtering at Liquid Argon Time Projection Chambers,” Phys. Rev. D 105, 112009 (2022). APS / DOI
  5. M. Babicz et al., “Neutrino interaction event filtering at liquid argon time projection chambers using neural networks with minimal input model bias,” ICRC 2021. DOI:10.22323/1.395.1075
  6. M. Babicz et al., “Study of Light Production With A Fifty Liter Liquid Argon TPC,” J. Phys. Conf. Ser. 2374 (2022) 012165. DOI:10.1088/1742-6596/2374/1/012165
  7. R. Acciarri et al., “Cosmic Ray Background Removal With Deep Neural Networks in SBND,” Front. Artif. Intell. 4 (2021) 649917. DOI:10.3389/frai.2021.649917
  8. B. Ali-Mohammadzadeh, M. Babicz et al. (ICARUS), “Design and implementation of the new scintillation light detection system of ICARUS T600,” JINST 15 (10) T10007 (2020). DOI:10.1088/1748-0221/15/10/T10007
  9. M. Babicz et al., “A measurement of the group velocity of scintillation light in liquid argon,” JINST 15 (09) P09009 (2020). DOI:10.1088/1748-0221/15/09/P09009
  10. B. Ali-Mohammadzadeh, M. Babicz et al., “Measurement of Liquid Argon Scintillation Light Properties by means of an Alpha Source placed inside the CERN 10-PMT LAr Detection System,” JINST 15 (06) C06042 (2020). DOI:10.1088/1748-0221/15/06/C06042
  11. M. Babicz et al., “A particle detector that exploits Liquid Argon scintillation light,” NIM A (2020). DOI:10.1016/J.NIMA.2019.162421
  12. M. Babicz et al., “Scintillation light DAQ and trigger system for the ICARUS T600 experiment at Fermilab,” NIM A (2019). DOI:10.1016/j.nima.2018.11.111
  13. M. Babicz et al., “Linearity and saturation properties of Hamamatsu R5912-MOD photomultiplier tube for the ICARUS T600 light detection system,” NIM A (2019). DOI:10.1016/j.nima.2018.10.113
  14. M. Babicz et al., “Test and characterization of 400 Hamamatsu R5912-MOD photomultiplier tubes for the ICARUS T600 detector,” JINST 13 (10) P10030 (2018). DOI:10.1088/1748-0221/13/10/P10030
  15. M. Babicz et al., “Timing properties of Hamamatsu R5912-MOD photomultiplier tube for the ICARUS T600 light detection system,” NIM A (2018). DOI:10.1016/j.nima.2017.11.062
  16. T. Cervi, M. Babicz et al., “Characterization of SiPM arrays in different series and parallel configurations,” NIM A (2018). DOI:10.1016/j.nima.2017.11.038
  17. T. Cervi, M. Babicz et al., “Comparison between large area PMTs and SiPM arrays deployed in a Liquid Argon Time Projection Chamber at CERN,” NIM A (2018). DOI:10.1016/J.NIMA.2017.10.069

Books & movies I recommend

Books

  • Surely You’re Joking, Mr. Feynman! (Richard P. Feynman): Playful curiosity, rigorous thinking, and great lab stories.
  • A Brief History of Time (Stephen Hawking): Cosmology basics from singularities to the arrow of time.
  • Brief Answers to the Big Questions (Stephen Hawking): Short, accessible takes on humanity’s biggest physics questions.
  • The Disordered Cosmos (Chanda Prescod-Weinstein): Particle physics meets culture, equity, and the future of science.
  • Black Hole Blues (Janna Levin): Inside LIGO’s journey to the first gravitational-wave detection.
  • The Demon-Haunted World (Carl Sagan): A timeless toolkit for skepticism and scientific literacy.
  • The Making of the Atomic Bomb (Richard Rhodes): Sweeping history of physics, ethics, and world-changing tech.
  • An Introduction to Particle Dark Matter (Cheng, Li, Zhang, eds.): Survey from theory to detection for motivated readers.
  • The 5 AM Club (Robin Sharma): Simple, disciplined habits for carving out focused, creative time.
  • Sapiens: A Brief History of Humankind (Yuval Noah Harari): A sweeping narrative of how cognition, stories, and tech shaped our species.

Movies

  • Arrival (2016): A meditation on language, causality, and inference.
  • Hidden Figures (2016): Brilliant mathematicians vs. systemic barriers; a true story.
  • Oppenheimer (2023): Ambition, responsibility, and physics that changed history.
  • The Martian (2015): Engineering, grit, and “science the problem” energy.
  • Good Will Hunting (1997): Genius, mentorship, and finding purpose; heart + brains.
  • Tenet (2020): Time inversion, entropy, and high-concept physics wrapped in action.
  • Interstellar (2014): Relativity, love, and the vast unknown; science meets cinema.
Lab work at UZH — detector handling at a cryogenic setup

Neutrinos

Tiny masses · Big questions

Neutrinos are neutral, almost-massless particles that rarely interact. They’re made wherever nuclear reactions or decays happen: in your backyard (radioactive decays), in Earth’s interior (geoneutrinos), in the Sun and stars (fusion), in the sky (cosmic-ray air showers), in cataclysms (supernovae, black-hole jets), from the first seconds of the universe (the relic background), and by us (reactors and accelerator beams). Catch the few that do interact in giant, quiet detectors and you get a live news feed from the tiniest scales of matter to the largest structures in the universe.

They kept leaving clues

Beta decay looked “wrong.” Early experiments showed electrons coming out with a spread of energies instead of a single value. If a neutron simply turned into a proton + electron, energy should be sharp. Something else was slipping away with the missing energy.

Pauli’s rescue (1930). In a famous letter, Wolfgang Pauli suggested a tiny, neutral particle that carried off the extra energy and momentum: a desperate but elegant fix. Enrico Fermi built the theory around it and coined the name neutrino.

From guess to catch (1956). Clyde Cowan and Fred Reines placed a detector next to a nuclear reactor and finally saw antineutrinos: flashes in coincidence, exactly as the theory predicted. The “ghost” was real.

Why do some atoms spit out an electron? (beta decay)
  • Chasing stability: Nuclei with too many neutrons can lower energy by turning a neutron → proton.
  • Weak-force swap: A down quark flips to an up quark; a W− boson turns into an electron + antineutrino.
  • Bookkeeping works: The antineutrino carries off the “missing” energy and spin so conservation laws hold.
  • Why only sometimes: It happens only when the final nucleus is lower in energy than the initial one.
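The bookkeeping can be made concrete: the energy released in neutron beta decay is just the rest-mass difference between the initial and final particles. A quick sketch, with rest masses rounded to three decimals from standard tables:

```python
# Energy bookkeeping for neutron beta decay, n -> p + e- + anti-nu_e.
# Rest masses in MeV/c^2 (standard values, rounded).
M_NEUTRON = 939.565
M_PROTON = 938.272
M_ELECTRON = 0.511

# Q-value: the energy released, shared between the electron and the antineutrino.
# A positive Q is exactly the "final nucleus lower in energy" condition above;
# because the antineutrino takes a variable share, the electron energies spread out.
q_value = M_NEUTRON - M_PROTON - M_ELECTRON
print(f"Q = {q_value:.3f} MeV")  # ~0.782 MeV
```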

The Sun’s missing neutrinos

Another clue. If fusion powers the Sun, it should flood Earth with solar neutrinos. Ray Davis’s underground experiment saw only about a third of the expected rate, the “solar neutrino problem.”

The twist: neutrino oscillations. Decades later, Super-Kamiokande and SNO showed that neutrinos change identity in flight: electron ↔ muon ↔ tau. That solved the solar puzzle and proved neutrinos have mass (small, but not zero).

What “oscillations” really mean

Neutrinos are produced and detected in flavors (e, μ, τ), but they travel as a quantum mix of three mass states. As the mixed waves propagate, they slip out of step, and the flavor you’re likely to see beats with distance and energy. Matter can tip the balance too. It’s quantum interference written across planets and stars.

Oscillation quick guide
  • L/E: distance traveled (L, in km) divided by energy (E, in GeV). The wiggles depend on this ratio.
  • sin²(2θ): sets the depth of the wiggles (how strongly flavors mix).
  • Δm²: sets the spacing of the wiggles in L/E (the mass-squared difference).
  • 1.27: the constants wrapped up so the formula works with km/GeV/eV².
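The four bullets above are the pieces of the standard two-flavor formula, P(νμ→νe) ≈ sin²(2θ) · sin²(1.27 Δm² L / E). A minimal sketch putting them together; the baseline, energy, and parameter values are illustrative round numbers, not fits:

```python
import math

def p_appearance(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavor vacuum oscillation probability P(nu_mu -> nu_e)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative numbers only: a ~300 km baseline, sub-GeV energy,
# the atmospheric mass splitting, and maximal mixing for simplicity.
P = p_appearance(L_km=295, E_GeV=0.6, sin2_2theta=1.0, dm2_eV2=2.5e-3)
print(f"P = {P:.3f}")  # close to 1: this L/E sits near an oscillation maximum
```

Changing L or E moves you along the wiggles; changing sin²(2θ) or Δm² changes their depth and spacing, just as the bullets describe.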

What we still don’t know

Oscillations told us neutrinos have mass and swap identities, but several big pieces are missing:

How heavy are neutrinos, exactly? Oscillations only measure the gaps between the masses, not the masses themselves. Experiments “weigh” neutrinos by looking at the very edge of beta-decay electrons, and cosmology constrains how much neutrinos can weigh without smearing out early-universe structure. Why it matters: pins down neutrinos’ role in shaping the cosmos and sets the true energy scale of the neutrino sector.

Which mass comes first? (mass ordering) We don’t yet know whether the lightest pair is (ν₁, ν₂) with ν₃ heavier (“normal”) or the reverse (“inverted”). Matter changes oscillations slightly, and that pattern reveals the order. Why it matters: the ordering feeds into how supernova signals unfold and how precisely long-baseline beams can measure other parameters.

Are neutrinos their own antiparticles? If neutrinos are Majorana, a nucleus could decay by turning two neutrons into two protons and two electrons with no neutrinos escaping (neutrinoless double-beta decay). Why it matters: it would show that lepton number isn’t sacred and point to mechanisms that may have helped the universe prefer matter over antimatter.

Do neutrinos violate CP symmetry? CP violation means neutrinos and antineutrinos don’t oscillate the same way. The phase δCP encodes this difference. Why it matters: a clear asymmetry could be a missing ingredient in explaining why any matter exists at all.

Are there extra, “sterile” neutrinos? Some short-baseline hints could be explained by neutrinos that don’t feel the weak force, only gravity and mixing. Why it matters: would be new physics beyond the Standard Model and could connect to dark-sector ideas.

How exactly do neutrinos interact with nuclei? In the few-GeV range, nuclear effects (multi-nucleon knockout, final-state re-scattering) blur our view of the neutrino’s true energy. Why it matters: getting this right is essential for precision oscillation measurements, otherwise systematics swamp the signal.

What do neutrinos do in explosions? A nearby supernova would send a 10-second neutrino burst. The time/energy pattern would probe neutrino self-interactions, the mass ordering, and how matter affects mixing at extreme densities. Why it matters: a once-in-a-lifetime lab for both particle physics and stellar physics.

Answering these questions turns “neutrinos are odd” into “neutrinos explain things”: from why the universe looks the way it does to how stars live and die.

Experiments I work(ed) on — and why they matter

  • LEGEND: ultra-quiet germanium detectors searching for neutrinoless double-beta decay. Tests whether neutrinos are Majorana (their own antiparticles) and probes the effective mass scale.
  • ICARUS: liquid-argon imaging in the Short-Baseline program. Checks hints of possible sterile neutrinos and advances LArTPC reconstruction, purity, and calibration.
  • DUNE: long-baseline beam to giant underground argon detectors. Targets δCP (CP violation), the mass ordering via matter effects, supernova-burst neutrinos, and precision mixing, using near+far detectors to tame systematics.
  • T2K: off-axis ν beam from J-PARC to Super-Kamiokande. Precision νμ→νe appearance to constrain δCP and reduce cross-section/systematic uncertainties.
LEGEND-200 detector hardware at LNGS (courtesy Michael Willers via LBL); ICARUS T600 cryostat at Fermilab; protoDUNE interior at CERN (Mark Benecke); Super-Kamiokande interior (Kamioka Observatory, ICRR, Univ. of Tokyo).

What’s next

Now that we know neutrinos have mass and mix, the field is moving from knowing that they do to understanding how and why, with a slate of new detectors and clever analyses aimed straight at the open questions.

  • Pin down the mass ordering and CP violation. Long-baseline “beam-to-underground” experiments will compare neutrinos with antineutrinos and watch how matter affects oscillations: DUNE (near+far argon detectors) and Hyper-Kamiokande/T2K pursue the CP-violating phase (δCP) and the mass order.
  • Weigh neutrinos directly and indirectly. Precision beta-decay endpoint measurements (KATRIN now; Project 8 next) aim for sub-eV sensitivity, while cosmology (CMB + galaxy surveys) squeezes the sum of masses from how neutrinos blur early-universe structure.
  • Are neutrinos their own antiparticles? Ultra-quiet searches for neutrinoless double-beta decay, including LEGEND, nEXO, and CUPID, target lepton-number violation and the Majorana question head-on.
  • Chase (or close) the “sterile” hints. Short-baseline detectors that image every track in liquid argon, ICARUS with its SBN partners, test anomalies with decisive systematics control.
  • Make interactions crystal-clear. Near-detector upgrades and dedicated beams improve neutrino–nucleus modeling (multi-nucleon effects, final-state re-scattering) so oscillation results aren’t limited by cross-section uncertainties.
  • Open new windows on the universe. Next-gen neutrino telescopes, like IceCube-Gen2, KM3NeT, and radio arrays for the ultra-high-energy frontier, turn neutrinos into astronomical messengers for black-hole jets, tidal disruptions, and more.
  • Catch the next nearby supernova in stereo. A 10-second burst will be seen in dozens of detectors across energies and flavors (coordinated by SNEWS), letting us watch flavor conversion in extreme matter and test the mass ordering in real time.
  • New regimes at very low energy. Coherent elastic neutrino–nucleus scattering (CEvNS) and low-energy solar measurements sharpen our picture of the weak force and open doors to new physics at tiny momentum transfers.

Taken together, these efforts turn today’s puzzles (mass scale, ordering, CP asymmetry, Majorana nature) into tomorrow’s measurements, with neutrinos doubling as both a particle-physics probe and a telescope for the most extreme places in the cosmos.

References & image credits
  1. General neutrino properties & oscillations. Particle Data Group, “Neutrino Mass, Mixing, and Oscillations” (2024). pdglive.lbl.gov
  2. Oscillations in matter (MSW effect). L. Wolfenstein, “Neutrino oscillations in matter,” Phys. Rev. D 17, 2369 (1978); S.P. Mikheyev & A.Yu. Smirnov, “Resonance enhancement of oscillations,” Sov. J. Nucl. Phys. 42, 913 (1985). APS · INSPIRE
  3. Pauli’s 1930 letter proposing the neutrino. Paul Scherrer Institute (Pauli Archive) overview. psi.ch
  4. First detection (reactor antineutrinos). C.L. Cowan Jr. & F. Reines, “Detection of the Free Neutrino: A Confirmation,” Science 124, 103 (1956). science.org · ADS
  5. Solar neutrino problem (Homestake). R. Davis Jr. Nobel Lecture & historical overview (2002). nobelprize.org
  6. Flavor change in solar neutrinos. SNO Collaboration: (CC vs ES) PRL 87, 071301 (2001) arXiv; (NC) PRL 89, 011301 (2002) APS.
  7. Atmospheric neutrino oscillations. Super-Kamiokande Collaboration, “Evidence for Oscillation of Atmospheric Neutrinos,” PRL 81, 1562 (1998). APS · arXiv
  8. IceCube (cosmic neutrinos). Instrumentation overview: JINST 12, P03012 (2017) PDF; first astrophysical flux: PRL 113, 101101 (2014) APS.
  9. Geoneutrinos (Earth’s interior). KamLAND first observation: Nature 436, 499–503 (2005) PDF; Borexino observation: Phys. Lett. B 687, 299–304 (2010) Elsevier.
  10. Absolute mass from β-decay (endpoint method). KATRIN Collaboration, “Direct neutrino-mass measurement based on 259 days of KATRIN data,” Science (2025). 90% CL limit: m(νe) < 0.45 eV. PubMed
  11. Cosmology (Σmν constraint). Planck Collaboration, “Planck 2018 results. VI. Cosmological parameters,” A&A 641, A6 (2020). With BAO: Σmν < 0.12 eV (95% CL). arXiv:1807.06209
  12. Why cosmology is sensitive (free-streaming suppresses small-scale structure). J. Lesgourgues & S. Pastor, “Neutrino mass from Cosmology,” Adv. High Energy Phys. 2012 (2012) 608515. arXiv:1212.6154
  13. LEGEND. Official pages: legend-exp.org · legend.ornl.gov
  14. ICARUS (LArTPC). Fermilab ICARUS overview. icarus.fnal.gov
  15. DUNE (long-baseline CPV, ordering, supernovae). Official site. dunescience.org
  16. T2K (off-axis J-PARC → Super-K). KEK overview. kek.jp
  17. Supernova bursts (10-s neutrino signal). Kamiokande-II: PRL 58, 1490 (1987); IMB: PRL 58, 1494 (1987). APS (K-II) · APS (IMB)
  18. DUNE physics program (mass ordering & CPV). DUNE Collaboration, “Technical Design Report, Volume II: DUNE Physics,” arXiv:2002.03005 (2020). CDS / arXiv
  19. Hyper-Kamiokande design report. Hyper-K Collaboration, “Hyper-Kamiokande Design Report,” arXiv:1805.04163 (2018). arXiv
  20. T2K constraint on δCP. T2K Collaboration, “Constraint on the matter–antimatter symmetry-violating phase in neutrino oscillations,” Nature 580, 339–344 (2020). Nature
  21. KATRIN direct mass limit. KATRIN Collaboration, “Direct neutrino mass measurement with improved sensitivity,” arXiv:2406.13516 (2024). arXiv
  22. Project 8 (CRES roadmap). Project 8 Collaboration, overview & sensitivity plans (e.g. NuFact 2022 proceedings), arXiv:2203.07349 (and refs therein). arXiv
  23. Cosmological limits on Σmν. Planck Collaboration, 2018 results, cosmological parameters (A&A 641, A6, 2020). ESA Planck overview
  24. nEXO pre-conceptual design. nEXO Collaboration, “nEXO Pre-Conceptual Design Report,” arXiv:1805.11142 (2018). arXiv
  25. CUPID concept & sensitivity. CUPID Collaboration, “CUPID pre-CDR,” arXiv:1907.09376 (2019). arXiv
  26. Short-Baseline Neutrino Program (ICARUS, SBND, MicroBooNE). The SBN Collaboration proposal and overview (systematics & sterile-ν tests), arXiv:1503.01520 (2015). arXiv
  27. T2K ND280 Upgrade TDR. T2K Collaboration, arXiv:1901.03750 (2019). arXiv
  28. DUNE Near Detector CDR. DUNE Collaboration, arXiv:2103.13910 (2021). arXiv
  29. IceCube-Gen2 white paper. IceCube-Gen2 Collaboration, “The Next-Generation Neutrino Observatory,” arXiv:2008.04323 (2020). arXiv
  30. KM3NeT 2.0 Letter of Intent. KM3NeT Collaboration, J. Phys. G 43 (2016) 084001; arXiv:1601.07459. arXiv
  31. RNO-G (radio UHE neutrinos). The Radio Neutrino Observatory in Greenland, arXiv:2010.12279 (2020) and instrument papers thereafter. arXiv
  32. SNEWS 2.0. SNEWS Collaboration, “SNEWS 2.0: a next-generation supernova early warning system,” arXiv:2008.09044 (2021). arXiv
  33. CEvNS observation. COHERENT Collaboration, “Observation of Coherent Elastic Neutrino–Nucleus Scattering,” Science 357 (2017) 1123–1126. Science · PDF
  34. Low-energy solar (CNO) neutrinos. Borexino Collaboration, “Experimental evidence of neutrinos produced in the CNO fusion cycle in the Sun,” Nature 587, 577–582 (2020). Nature

Image credits are already noted in each figure caption (IceCube/NSF; PSI; LBNL; Fermilab; Benecke; Kamioka Observatory; ESA).

South Pole station under aurora with the IceCube neutrino telescope buried in the ice below.
Some neutrinos arrive with extreme energies from far-off cosmic engines; IceCube listens from under the Antarctic ice. Yuya Makino / IceCube / NSF
Portrait of Wolfgang Pauli, 1930s.
Pauli’s bold idea: an invisible neutral particle to balance beta-decay books.
Neutrino flavor oscillations (schematic). Two-flavor vacuum oscillation probabilities vs L/E (km/GeV): solid, νμ→νe appearance; dashed, νμ survival. P(νμ→νe) ≈ sin²(2θ) · sin²(1.27 Δm² · L / E), with L in km, E in GeV, and Δm² in eV².
Beta decay (schematic). A neutron transforms into a proton, emitting an electron and an electron antineutrino: n → p + e⁻ + ν̄ₑ. Quark-level picture: a down quark in the neutron flips to an up quark via the weak force, d → u + W⁻, and the W⁻ decays to e⁻ + ν̄ₑ.
LEGEND-200 detector hardware at LNGS.
LEGEND: ultra-low-background germanium searching for neutrinoless double-beta decay. Michael Willers / LBNL
ICARUS T600 at Fermilab.
ICARUS liquid-argon TPC at Fermilab. Fermilab
Inside the protoDUNE detector at CERN.
DUNE / protoDUNE: technology path to giant underground detectors. Mark Benecke
Interior of Super-Kamiokande.
T2K’s far detector: Super-Kamiokande in Kamioka, Japan. Kamioka Observatory / ICRR
Supernova remnant N103B.
Next big flash: a nearby supernova would light up every neutrino detector on Earth. ESA
Bullet Cluster (1E 0657−56) composite: pink = hot gas (X-ray), orange/white = galaxies (optical), blue = mass map from gravitational lensing.

Dark matter

Why WIMPs? How dual-phase xenon detectors look for them

Dark matter is an unseen mass component inferred from multiple lines of evidence: galaxies rotate too fast for their visible mass alone, galaxy clusters bend light more than stars and gas can explain, and the cosmic microwave background requires non-luminous matter to match its pattern. Whatever it is, it hardly interacts with light, yet its gravity shapes structure across the universe.

Why WIMPs?

WIMPs (Weakly Interacting Massive Particles) are a well-motivated class of candidates: massive enough to supply the missing mass and interacting so feebly that they would slip through ordinary matter. The “WIMP miracle” notes that a particle with weak-scale interactions naturally freezes out from the early universe with about today’s dark-matter abundance, an elegant coincidence, not proof, but strong motivation to look.

How do you see something that barely interacts?

Direct-detection experiments seek a single nuclear recoil when a dark-matter particle bumps a nucleus. The recoil energy is tiny, typically a few keV (thousand electronvolts), but detectable with the right sensor.
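The “few keV” scale follows from two-body kinematics: the maximum recoil energy in an elastic collision is E_max = 2μ²v²/m_N, with μ the WIMP–nucleus reduced mass. A back-of-envelope sketch with illustrative numbers (a hypothetical 40 GeV WIMP at a typical galactic speed), not a real analysis:

```python
# Back-of-envelope maximum recoil energy for an elastic WIMP-nucleus collision:
# E_max = 2 * mu^2 * v^2 / m_N, with mu the WIMP-nucleus reduced mass.
# Natural units with c = 1; masses in GeV. All numbers are illustrative.
m_wimp = 40.0          # hypothetical WIMP mass, GeV
m_xe = 122.0           # xenon nucleus (~131 u), GeV
v = 230e3 / 3e8        # typical galactic speed (~230 km/s) as a fraction of c

mu = m_wimp * m_xe / (m_wimp + m_xe)        # reduced mass, GeV
e_max_gev = 2 * mu**2 * v**2 / m_xe
print(f"E_max ~ {e_max_gev * 1e6:.1f} keV")  # a few keV, as quoted above
```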

Dual-phase xenon TPCs (e.g., XENON, LZ, PandaX)

Large detectors filled with liquid xenon act as both target and sensor. In a dual-phase time-projection chamber:

  • S1 (prompt light): the recoil makes xenon scintillate; arrays of PMTs record the flash.
  • S2 (delayed light): freed electrons drift upward, are extracted into the gas, and create a second, amplified flash. The S1–S2 delay gives the depth; the S2 pattern pinpoints the x–y position, yielding a full 3D tag.
  • Signal vs background: nuclear recoils (WIMP-like) and electronic recoils (radioactivity) have different S2/S1, enabling powerful discrimination and a clean inner “fiducial” region.
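The depth reconstruction in the S2 bullet is simply drift time multiplied by a known electron drift velocity. A toy sketch; the velocity here is an assumed round number of the right order for liquid xenon, while real detectors measure it in situ with calibrations:

```python
# Sketch of depth reconstruction in a dual-phase TPC.
# Assumed order-of-magnitude drift velocity; the true value depends on
# the applied drift field and is calibrated in the real detector.
DRIFT_VELOCITY_MM_PER_US = 1.5   # liquid xenon, illustrative

def depth_mm(s1_time_us, s2_time_us):
    """Depth below the liquid surface from the S1-S2 time difference."""
    return (s2_time_us - s1_time_us) * DRIFT_VELOCITY_MM_PER_US

# An event whose S2 arrives 400 us after S1 sits ~600 mm (60 cm) down;
# the x-y position would come from the S2 light pattern on the top PMT array.
print(depth_mm(0.0, 400.0))  # -> 600.0
```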

Beating backgrounds

To suppress false signals, these experiments run deep underground, use a water-Cherenkov muon veto and a dedicated neutron veto, build from ultra-screened materials, and continuously purify xenon to remove traces of ⁸⁵Kr and ²²²Rn. Liquid xenon also self-shields, so events near the edges can be rejected and the core kept exceptionally quiet.

What would a discovery look like?

A statistically significant cluster of nuclear-recoil events in the expected energy range, with the right S1/S2 behavior, stable in time and cross-checked by calibrations, ideally corroborated by other detectors with different backgrounds and systematics.

Where we are now

Searches have not yet seen WIMPs. The strongest published limits reach ~2 × 10⁻⁴⁸ cm² for a 30–40 GeV WIMP (spin-independent coupling), with complementary results from several xenon-TPC experiments that probe overlapping mass ranges.

If not WIMPs…

That’s still progress: tighter limits guide theory and push new ideas, like axions/ALPs, sterile-neutrino dark matter, ultralight fields, and broader “dark-sector” models, along with new detector concepts. Either way, each run teaches us how the universe is (or isn’t) built.

References & image credits
  1. Bullet Cluster (1E 0657−56) composite. X-ray: NASA/CXC/M. Markevitch et al.; Optical: NASA/STScI; Magellan/U. Arizona/D. Clowe et al.; Lensing map: NASA/STScI; ESO WFI; Magellan/U. Arizona/D. Clowe et al. Official image & explainer: Chandra Photo: 1E 0657−56 (Bullet Cluster) .
  2. Direct empirical proof via lensing–gas offset. D. Clowe et al., ApJ Letters 648, L109 (2006) — “A Direct Empirical Proof of the Existence of Dark Matter.” DOI: 10.1086/508162; open preprint: arXiv:astro-ph/0611496.
  3. Early weak-lensing reconstruction. D. Clowe, A. Gonzalez, M. Markevitch (2003), arXiv:astro-ph/0312273 .
  4. Galaxy rotation curves (review). Y. Sofue & V. Rubin, ARA&A 39, 137 (2001). DOI
  5. Gravitational lensing in the Bullet Cluster. D. Clowe et al., “A Direct Empirical Proof of the Existence of Dark Matter,” Chandra/press materials + paper
  6. CMB requires non-luminous matter. Planck Collaboration, “2018 results. VI. Cosmological parameters,” arXiv:1807.06209
  7. WIMP overview & “WIMP miracle”. M. Schumann, “Direct Detection of WIMP Dark Matter: Concepts and Status,” J. Phys. G 46, 103003 (2019). arXiv:1903.03026
  8. Direct detection experiments (review). T. Marrodán Undagoitia & L. Rauch, J. Phys. G 43, 013001 (2016). arXiv:1509.08767
  9. XENON1T detector (dual-phase TPC; S1/S2; muon veto; self-shielding). E. Aprile et al., “The XENON1T Dark Matter Experiment,” EPJ C 77, 881 (2017). arXiv:1708.07051
  10. Material screening & intrinsic backgrounds (⁸⁵Kr, ²²²Rn). E. Aprile et al., “Material radioassay and selection for XENON1T,” EPJ C 77, 890 (2017). arXiv:1705.01828
  11. LZ detector & veto systems (incl. neutron veto). D. S. Akerib et al., “The LUX-ZEPLIN (LZ) Experiment,” Nucl. Instrum. Meth. A 953, 163047 (2020). arXiv:1910.09124
  12. LZ 4.2 t·yr SI limit. D. S. Akerib et al. (LZ), “Results from the LZ 2021–2024 WIMP Search with 4.2 t·yr Exposure,” arXiv:2410.17036
  13. XENONnT WIMP search (2025). E. Aprile et al. (XENONnT), Phys. Rev. D 111, 092010 (2025). APS
  14. PandaX-4T full-exposure results. Y. Meng et al. (PandaX-4T), Phys. Rev. Lett. 132, 201002 (2025). APS
Abstract visualization of AI models applied to detector data

AI Methods

From denoising to discovery

AI is everywhere

We live in a world where AI (Artificial Intelligence) quietly powers cameras, maps, translation, medical imaging, and yes, modern physics experiments. Using these tools isn’t a fad; it’s part of scientific progress. But if we want results we can trust, we have to understand what the models are doing, how they’re trained, and where they can go wrong.

The good news is that AI is just math. A model is a function f(x; θ) with parameters θ. Training chooses the parameters that make f good at a task by minimizing a loss on data (no magic). Because of that, the final performance depends on three levers we control:

The algorithm (model). Pick an architecture suited to your data; capacity sets how complex a pattern it can learn.
  • Match to data: images → CNN; sequences/text → transformers/RNNs; tabular → MLP/GBM; graphs → GNN.
  • Capacity = depth/width/params: too small → underfit (misses patterns); too big with weak regularization → overfit (memorizes noise).
  • Mini-example: XOR isn’t linearly separable → needs a hidden layer or another non-linear model.
The objective (loss + regularization). Pick a loss that matches what you truly care about; add regularization to keep it from cheating or overfitting.
  • Imbalanced classes: If positives are 1%, plain accuracy rewards “always negative.” Use class-weighted cross-entropy or focal loss so missed positives cost more.
  • Noisy labels / outliers (regression): MSE overreacts to a few bad points → averages/blurry results. Use MAE or Huber to be robust.
  • When recall is the priority: e.g., a neutrino pre-filter where missing signal is costly. Use a recall-weighted loss or select models by PR-AUC/recall@fixed-FPR, not plain accuracy.
  • Calibrated probabilities: If you need well-scaled scores (for thresholds or downstream physics), add label smoothing and use temperature scaling on the validation set.
  • Uncertainty or bounds: Want ranges, not just a mean? Train with quantile (pinball) loss to learn the 10th/90th percentiles directly.
  • Geometry matters: For boxes/masks, MSE on coordinates can be misaligned with quality. Prefer IoU/GIoU-style losses so optimizing the loss improves overlap you actually care about.
  • Regularization = guardrails: L2/weight decay (simpler functions), dropout (robust features), data augmentation (invariances), early stopping (stop before memorizing), and—in sim→data settings—domain-adversarial training to avoid learning simulation quirks.
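
To see why plain accuracy misleads at 1% positives, here is a minimal plain-Python sketch (the 100-event toy set and the weight w_pos=99 are illustrative choices, not values from any experiment):

```python
import math

def cross_entropy(y_true, p, w_pos=1.0, w_neg=1.0):
    """Class-weighted binary cross-entropy, averaged over examples."""
    total = 0.0
    for y, q in zip(y_true, p):
        w = w_pos if y == 1 else w_neg
        q = min(max(q, 1e-12), 1 - 1e-12)  # avoid log(0)
        total += -w * (y * math.log(q) + (1 - y) * math.log(1 - q))
    return total / len(y_true)

# Toy set: 1% positives (1 signal event among 99 background).
y = [1] + [0] * 99
p_always_neg = [0.01] * 100   # a lazy model that effectively always says "negative"

accuracy = sum((q >= 0.5) == (t == 1) for t, q in zip(y, p_always_neg)) / len(y)
plain = cross_entropy(y, p_always_neg)                 # barely penalizes the miss
weighted = cross_entropy(y, p_always_neg, w_pos=99.0)  # missed positive costs 99x more

print(f"accuracy={accuracy:.2f} plain CE={plain:.3f} weighted CE={weighted:.3f}")
# accuracy=0.99 plain CE=0.056 weighted CE=4.569
```

The lazy model scores 99% accuracy while missing every signal event; the weighted loss exposes it. Setting w_pos to roughly the negative-to-positive ratio is a common starting heuristic.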
The training data. The model learns what you show it: coverage, labels, and bias decide what it can generalize.
  • Coverage: include the real variety (lighting/angles, energies/geometries). Missing regions → blind spots.
  • Labels & noise: noisy/mislabeled data teaches mistakes—clean labels or use robust losses/augmentations.
  • Bias & shift: train≠test (simulation quirks, sampling bias) → drops in performance; fix with balanced sampling, domain adaptation, calibration.
  • Mini-example: Train cats only in daylight → model thinks “dark = no cat.” Add night images or brightness jitter.
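
The XOR mini-example from above can be made concrete with hand-picked weights (no training involved); a sketch of a one-hidden-layer ReLU network that computes XOR exactly:

```python
def relu(v):
    return max(0.0, v)

def xor_net(x1, x2):
    """XOR via one hidden ReLU layer: h1 = ReLU(x1 + x2), h2 = ReLU(x1 + x2 - 1),
    output = h1 - 2*h2. No single linear threshold on (x1, x2) can do this,
    which is exactly why XOR needs a hidden layer (or another non-linear model)."""
    h1 = relu(x1 + x2)
    h2 = relu(x1 + x2 - 1.0)
    return h1 - 2.0 * h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 0 -> 0.0, 0 1 -> 1.0, 1 0 -> 1.0, 1 1 -> 0.0
```

In practice gradient descent would find weights like these on its own, but the hand-built version shows what the extra layer buys: a bend in the decision surface that a linear model cannot express.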

Two more ingredients matter in practice: the training procedure (optimizer, learning rate, augmentations) and validation (train/val/test splits, calibration, and uncertainty checks).

To make this concrete, let’s start with the simplest useful case, a feed-forward artificial neural network (ANN), and see exactly how the forward pass, loss, and learning work before scaling up to the event-classification model used in my research.

How neural networks work

Feed-forward neural network: input layer, hidden layers, output layer
Think of each layer as a set of tiny “dials.” The network learns which dials to turn so inputs land on the right output.
Intuitive explanation (follow this first)
  • Neurons as mini calculators. Each circle takes numbers in, multiplies by weights (the dials), adds a bias, and passes the result through a squashing function (the activation).
  • Layers build meaning. Early layers learn simple bits (edges, pulses, peaks); deeper layers combine them into “this looks like a signal, not a cosmic.”
  • Learning = feedback. We show examples with the correct label, measure how wrong we are (the loss), and nudge the dials to be a little better. Do this thousands of times → skill emerges.
  • Generalize, don’t memorize. Regularization (dropout, weight-decay), data augmentation, and a proper validation split help the model perform on new data, not just the training set.
The math in four lines

Forward pass (layer ℓ=1…L):

a⁽0⁾ = x
z⁽ℓ⁾ = W⁽ℓ⁾ a⁽ℓ−1⁾ + b⁽ℓ⁾
a⁽ℓ⁾ = σ⁽ℓ⁾(z⁽ℓ⁾)          (ReLU, GELU, …)
ŷ = softmax(z⁽L⁾)           (for multi-class)

Loss (classification):

L(θ) = −∑ᵢ yᵢ log ŷᵢ  +  λ‖θ‖²

Update (SGD/Adam):

θ ← θ − η ∇θL(θ)

Here θ={W,b}. Backprop uses the chain rule to get all the gradients efficiently.
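
The update rule in action, on a one-parameter toy loss (a deliberately trivial stand-in for the real, high-dimensional L(θ)):

```python
# Minimize L(θ) = (θ − 3)², whose gradient is ∇L = 2(θ − 3),
# using the rule θ ← θ − η ∇L(θ).
theta, eta = 0.0, 0.1
for _ in range(100):
    grad = 2.0 * (theta - 3.0)   # analytic gradient (backprop computes this automatically)
    theta -= eta * grad          # step downhill
print(round(theta, 4))           # → 3.0 (converges to the minimum)
```

Every neural-network optimizer is this same loop at heart; backprop just supplies the gradient for millions of parameters at once, and Adam adapts the step size η per parameter.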

One 30-second numerical pass (what the network actually computes)

Imagine 2 inputs → 2 hidden neurons (ReLU) → 2 outputs (softmax).

x = [0.6, 0.2]

W¹ = [[ 1.0, -0.5],
      [ 0.3,  0.8]]     b¹ = [0.1, -0.2]

z¹ = W¹x + b¹ = [ 1.0*0.6 + (-0.5)*0.2 + 0.1,
                  0.3*0.6 +  0.8*0.2 - 0.2 ] = [0.6, 0.14]
a¹ = ReLU(z¹) = [0.6, 0.14]

W² = [[ 1.2, -0.4],
      [-0.7,  0.9]]     b² = [0.0, 0.05]

z² = W²a¹ + b² = [1.2*0.6 + (-0.4)*0.14 + 0.0,
                  -0.7*0.6 + 0.9*0.14 + 0.05] = [0.664, -0.244]
ŷ  = softmax(z²) ≈ [0.71, 0.29]   ← predicts class 1 with ~71% confidence

Training moves the numbers in W and b so the right class gets higher probability on average.
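
You can check the arithmetic yourself: the same forward pass in plain Python, with the weights and input copied from the toy example above:

```python
import math

def relu(v):
    return [max(0.0, u) for u in v]

def matvec(W, x, b):
    """Compute W·x + b for a small dense layer."""
    return [sum(w * u for w, u in zip(row, x)) + bb for row, bb in zip(W, b)]

def softmax(z):
    m = max(z)                              # subtract max for numerical stability
    e = [math.exp(u - m) for u in z]
    s = sum(e)
    return [u / s for u in e]

x  = [0.6, 0.2]
W1 = [[1.0, -0.5], [0.3, 0.8]];  b1 = [0.1, -0.2]
W2 = [[1.2, -0.4], [-0.7, 0.9]]; b2 = [0.0, 0.05]

a1 = relu(matvec(W1, x, b1))                # hidden activations
y  = softmax(matvec(W2, a1, b2))            # class probabilities
print([round(p, 2) for p in y])             # → [0.71, 0.29]
```

Four tiny functions, no framework: that really is all a forward pass is.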

Why model, loss, and data change the outcome
  • Model (capacity & inductive bias): Convs are good at local patterns; graphs at relationships; transformers at long-range context. Pick the wrong tool and you cap performance.
  • Loss & regularization: Optimize what you care about. Cross-entropy ≈ accuracy; focal loss helps when classes are imbalanced; weight-decay improves generalization.
  • Data (coverage & bias): If training data misses certain angles/energies or includes simulation quirks, the network learns those quirks. Validation on held-out (ideally real) data is crucial.

In our neutrino work, this basic ANN scales up: we swap dense layers for a sparse 3D CNN (great for PMT/voxelized signals) and add a domain-adversarial branch to reduce simulation→data bias, then use it for event classification (neutrino vs cosmic).

Where AI helps in my work

Event classification · Anomaly detection · Denoising & deconvolution · Simulation surrogates · Triggering · Systematics modeling

Modern ML speeds up reconstruction, finds rare signals, and helps us understand uncertainties so physics results stay robust.

Selected demos & notes

Noise2Noise for sensors

Self-supervised denoising on detector waveforms.

Graph nets for tracks

Hit clustering and track building in LArTPCs.

Fast simulation

Diffusion & normalizing flows as GEANT surrogates.

Training demo: how loss minimization moves the boundary

What this shows. A tiny classifier learning on a 2-D toy dataset. You’ll see the loss fall, the final boundary, and an animation of how the boundary moves during training.

Why it matters. It turns “minimize the loss” into a visual story: each gradient step nudges parameters to reduce mistakes, and the decision boundary slides into the low-error corridor between classes.

Line plot of training loss vs epochs, steadily decreasing with small fluctuations.
Training loss over epochs. Loss measures “wrongness.” Big drops early; smaller refinements later. Mini-batch noise causes small wiggles.
Scatter of two classes in 2D with a smooth probability background; the p=0.5 curve separates them.
Final decision boundary. Background shows the model’s probability for class 1; the p=0.5 contour is where the model is on the fence.
Decision boundary and probability field evolving during training from random to stable.
How learning looks. As the loss is minimized, the boundary moves into the low-error region between the two classes and stabilizes.
The mechanics (short)

We learn a function f(x; θ) that outputs a probability. Parameters θ are updated to minimize cross-entropy loss. Gradient descent follows the downhill direction: θ ← θ − η ∇θℒ(θ). With mini-batches, each step uses a small, noisy slice of data—fast and effective.
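
A self-contained version of the same idea, assuming two Gaussian blobs as the toy dataset and full-batch gradient descent on a logistic classifier (the animated demo on this page may differ in details):

```python
import math, random

random.seed(0)
# Toy 2-D dataset: class 0 near (-1, -1), class 1 near (+1, +1).
data = [([random.gauss(-1, 0.5), random.gauss(-1, 0.5)], 0) for _ in range(100)] + \
       [([random.gauss(+1, 0.5), random.gauss(+1, 0.5)], 1) for _ in range(100)]

w, b, eta = [0.0, 0.0], 0.0, 0.1

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))        # sigmoid probability of class 1

def loss():
    eps = 1e-12
    return -sum(y * math.log(predict(x) + eps) +
                (1 - y) * math.log(1 - predict(x) + eps)
                for x, y in data) / len(data)

loss_start = loss()
for epoch in range(200):                     # each step nudges the boundary
    gw, gb = [0.0, 0.0], 0.0
    for x, y in data:
        err = predict(x) - y                 # dL/dz for sigmoid + cross-entropy
        gw[0] += err * x[0]; gw[1] += err * x[1]; gb += err
    n = len(data)
    w[0] -= eta * gw[0] / n; w[1] -= eta * gw[1] / n; b -= eta * gb / n
loss_end = loss()
print(f"loss: {loss_start:.3f} -> {loss_end:.3f}")  # cross-entropy falls as the boundary settles
```

The learned boundary is the line w·x + b = 0, i.e. the p = 0.5 contour from the figures: as the loss falls, that line slides into the gap between the blobs.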

Case study: event classification that resists simulation bias

In liquid-argon detectors like ICARUS, the hardest part is telling rare neutrino interactions from ever-present cosmic-ray backgrounds, and doing it fast, before expensive reconstruction. In our work (Phys. Rev. D 105, 112009), we train a neural network on the scintillation-light pattern from the detector’s PMTs to flag likely neutrino events.

  • Input: quick “light maps” from the PMTs (no full track reconstruction needed).
  • Model: a 3D sparse convolutional network that handles point-like inputs efficiently.
  • Result: in ICARUS-like simulations, the filter cuts cosmic background by up to 76.3% while keeping >98.9% of neutrino interactions.
  • Bias guard: to reduce “simulation-specific” quirks leaking into the classifier, we add domain-adversarial training so the network learns features that work even when data and simulation aren’t a perfect match.
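
A scalar caricature of the gradient-reversal idea (toy quadratic losses, not the actual ICARUS network): the reversal layer leaves the forward pass unchanged but flips the sign of the domain-loss gradient reaching the shared features, so the feature extractor is pushed to *confuse* the domain classifier rather than help it.

```python
lam, eta = 0.5, 0.05               # λ = reversal strength, η = learning rate

def dl_label(f):  return 2.0 * (f - 1.0)   # gradient of a toy label loss  (f − 1)²
def dl_domain(f): return 2.0 * (f - 4.0)   # gradient of a toy domain loss (f − 4)²

f_plain = f_grl = 0.0
for _ in range(400):
    # Plain multitask update: both gradients point toward their own minima.
    f_plain -= eta * (dl_label(f_plain) + lam * dl_domain(f_plain))
    # With gradient reversal: the domain gradient enters with a flipped sign.
    f_grl   -= eta * (dl_label(f_grl)   - lam * dl_domain(f_grl))
print(round(f_plain, 3), round(f_grl, 3))   # → 2.0 -2.0
```

Without reversal the shared feature settles at a compromise between the two optima (f = 2), making it useful to the domain classifier; with reversal it settles on the opposite side (f = -2), where the domain loss is large, i.e. where "data vs simulation" is hard to tell apart from the feature.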

Why it matters: the filter runs before heavyweight reconstruction, so it saves computing, keeps signal, and stays robust when reality differs from the Monte Carlo, which is exactly what precision oscillation measurements need.

Event classification (ICARUS)

Adversarial methods to reduce simulation bias — Phys. Rev. D 105, 112009 (2022)

Domain-adversarial event classifier: sparse input → feature extractor → label predictor (neutrino vs cosmic) and domain classifier (data vs simulation) via a gradient-reversal layer; arrows show forward pass and reversed gradients.
Event classification with a domain-adversarial branch: the network learns to separate neutrino from cosmic while being discouraged from encoding “data vs simulation” quirks, so it transfers better to real data.

Thoughtful AI augments, rather than replaces, our physics insight. I focus on methods that are transparent, validated, and uncertainty-aware.

References & sources
  1. Neural-network fundamentals (forward pass, losses, regularization). I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, 2016). deeplearningbook.org
  2. CNNs for images. A. Krizhevsky, I. Sutskever, G. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” NeurIPS 2012. ACM
  3. Transformers for sequences/text. A. Vaswani et al., “Attention Is All You Need,” NIPS 2017. arXiv:1706.03762
  4. LSTM (sequence modeling). S. Hochreiter, J. Schmidhuber, “Long Short-Term Memory,” Neural Computation 9(8):1735–1780 (1997). PDF
  5. GNNs for graphs. T. Kipf, M. Welling, “Semi-Supervised Classification with Graph Convolutional Networks,” ICLR 2017. arXiv:1609.02907
  6. Tabular data — gradient boosted trees. T. Chen, C. Guestrin, “XGBoost: A Scalable Tree Boosting System,” KDD 2016. arXiv:1603.02754
  7. Focal loss (class imbalance). T.-Y. Lin et al., “Focal Loss for Dense Object Detection,” ICCV 2017. arXiv:1708.02002
  8. Class-balanced loss. Y. Cui et al., “Class-Balanced Loss Based on Effective Number of Samples,” CVPR 2019. arXiv:1901.05555
  9. Why PR-AUC/recall@FPR for imbalance. T. Saito, M. Rehmsmeier, “The Precision-Recall Plot Is More Informative than the ROC Plot on Imbalanced Data,” PLOS ONE 10:e0118432 (2015). PLOS ONE
  10. Temperature scaling (post-hoc calibration). C. Guo et al., “On Calibration of Modern Neural Networks,” ICML 2017. PDF
  11. Label smoothing & calibration. C. Szegedy et al., “Rethinking the Inception Architecture for Computer Vision,” CVPR 2016 (introduces label smoothing); arXiv:1512.00567. T. Müller et al., “When Does Label Smoothing Help?” NeurIPS 2019. arXiv:1906.02629
  12. Huber (robust) loss. P. J. Huber, “Robust Estimation of a Location Parameter,” Annals of Mathematical Statistics 35(1):73–101 (1964). Project Euclid
  13. Quantile (pinball) loss. R. Koenker, G. Bassett Jr., “Regression Quantiles,” Econometrica 46(1):33–50 (1978). JSTOR
  14. GIoU (box/IoU-aligned loss). H. Rezatofighi et al., “Generalized Intersection over Union,” CVPR 2019. arXiv:1902.09630
  15. Dropout. N. Srivastava et al., “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” JMLR 15 (2014). PDF
  16. Early stopping. L. Prechelt, “Early Stopping — But When?” in Neural Networks: Tricks of the Trade (Springer, 1998). PDF
  17. Data augmentation (survey). T. Shorten, T. Khoshgoftaar, “A Survey on Image Data Augmentation for Deep Learning,” J. Big Data 6, 60 (2019). SpringerOpen
  18. Domain-adversarial training (GRL/DANN). Y. Ganin, V. Lempitsky, “Unsupervised Domain Adaptation by Backpropagation,” ICML 2015 (arXiv:1409.7495); arXiv:1505.07818. Y. Ganin et al., “Domain-Adversarial Training of Neural Networks,” JMLR 17(59):1–35 (2016). JMLR
  19. Sparse 3D convs (voxel/point clouds). C. Choy et al., “4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks,” CVPR 2019. arXiv:1904.08755
  20. Noise2Noise (self-supervised denoising). J. Lehtinen et al., ICML 2018. arXiv:1803.04189
  21. Generative surrogates (diffusion & flows). J. Ho, A. Jain, P. Abbeel, “Denoising Diffusion Probabilistic Models,” NeurIPS 2020 — arXiv:2006.11239; D. Rezende, S. Mohamed, “Variational Inference with Normalizing Flows,” ICML 2015 — arXiv:1505.05770
  22. Case study (ICARUS; adversarial training; performance numbers). M. Babicz et al., “Adversarial methods to reduce simulation bias in neutrino interaction event filtering at Liquid Argon TPCs,” Phys. Rev. D 105, 112009 (2022). APS · CERN CDS (details & figures)