
Lattice Field Theories

 

Non-abelian gauge theories show the surprising property that the strength of the interaction decreases at short distances (a phenomenon called asymptotic freedom). This seemingly simple fact underlies a very rich phenomenology. At short distances particles interact very weakly and accurate predictions are possible thanks to a perturbative approach. At large distances, on the other hand, the theory becomes strongly coupled, and a non-perturbative framework is needed in order to make any sensible prediction.

Lattice field theory provides a first-principles computational approach to field theory. It is based on discretizing space and time on a hypercubic grid of points. Although not particularly useful from a purely analytic point of view, it is tremendously appealing because it can be simulated on a computer.

Sitting at the crossroads of particle and computational physics, lattice field theory aims at making first-principles predictions. These predictions guide our understanding of the rich phenomenology of non-perturbative field theories and also provide precise input for interpreting many experimental results in high-energy physics.

 

* Lattice QCD:

Lattice QCD makes it possible to relate the properties of the fundamental quarks and gluons, such as their masses or the strength of their interactions, to the spectrum of QCD (i.e. the masses of hadrons such as the proton). It is also the only known method that allows first-principles computations of the interactions between hadrons.

Experiments that seek evidence for new physics are being carried out worldwide. Lattice QCD is a key element in the interpretation of these experimental data, since precise predictions of hadronic interactions are usually needed. A precise determination of the strong coupling constant is key to interpreting the LHC experimental results, and many searches for new physics require precise predictions of meson decay constants and form factors.

Researchers at IFIC are involved in different aspects of this task. We develop new methods to determine the fundamental parameters of the Standard Model (i.e. quark masses and the strong coupling) in terms of hadronic quantities. One of the main focuses of our group is the study of interactions of light hadrons using lattice methods; an example that we are exploring is the weak decay of a kaon into two pions.

We also participate in the development of techniques that connect quantities from lattice simulations to hadronic scattering observables. We work on the scattering of two and three pions, and another research line is the three-body problem in finite volume. Phenomenologically interesting applications of these developments are the study of the CP-violating decay of a kaon into three pions, and the three-pion contribution to the hadronic vacuum polarization entering the anomalous magnetic moment of the muon.


* Computational aspects:

Discretizing spacetime and putting all our particles on a lattice makes it possible to use a computational approach to quantum field theory; nonetheless, the number of required operations is enormous, even for simple theories and small lattices. Lattice field theory relies on computational techniques known as importance sampling and Markov chain Monte Carlo to compute quantities of physical interest.

Currently, the best-suited algorithm in lattice QCD is a variant of Hamiltonian Monte Carlo (also called Hybrid Monte Carlo). Proposed more than 30 years ago for lattice QCD calculations, it nowadays has numerous applications across many areas of science.
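To illustrate the structure of the algorithm, the following is a minimal sketch of Hamiltonian Monte Carlo for a toy one-dimensional lattice phi^4 theory (the theory, lattice size and couplings are illustrative choices, not those of any production lattice-QCD code): each trajectory consists of a momentum refresh, a leapfrog integration of the molecular-dynamics equations, and a Metropolis accept/reject step.

import numpy as np

rng = np.random.default_rng(0)

N = 32                 # lattice sites (1D, periodic); illustrative choice
m2, lam = -1.0, 1.0    # bare mass squared and quartic coupling (hypothetical values)

def action(phi):
    # Euclidean lattice action: nearest-neighbour kinetic term + phi^4 potential
    kin = 0.5 * np.sum((np.roll(phi, -1) - phi) ** 2)
    pot = np.sum(0.5 * m2 * phi ** 2 + 0.25 * lam * phi ** 4)
    return kin + pot

def grad_action(phi):
    # dS/dphi, the molecular-dynamics force
    lap = np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)
    return -lap + m2 * phi + lam * phi ** 3

def hmc_step(phi, n_md=20, dt=0.05):
    # one trajectory: momentum refresh, leapfrog integration, Metropolis accept/reject
    pi = rng.normal(size=phi.shape)
    H_old = 0.5 * np.sum(pi ** 2) + action(phi)
    phi_new = phi.copy()
    pi_new = pi - 0.5 * dt * grad_action(phi_new)         # initial half step for the momentum
    for _ in range(n_md - 1):
        phi_new += dt * pi_new                            # full step for the field
        pi_new -= dt * grad_action(phi_new)               # full step for the momentum
    phi_new += dt * pi_new
    pi_new -= 0.5 * dt * grad_action(phi_new)             # final half step for the momentum
    H_new = 0.5 * np.sum(pi_new ** 2) + action(phi_new)
    if rng.random() < np.exp(H_old - H_new):              # exact accept/reject step
        return phi_new, True
    return phi, False

phi = np.zeros(N)
accepted = 0
for _ in range(1000):
    phi, acc = hmc_step(phi)
    accepted += acc
print("acceptance rate:", accepted / 1000)

The Metropolis step at the end corrects the finite-step-size error of the leapfrog integration, so the Markov chain samples the exact lattice distribution; in real lattice QCD the expensive ingredient is the fermion determinant, which is absent from this toy example.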

However, a well-known problem is the difficulty of correctly sampling the space of configurations close to the continuum limit (the infamous topology-freezing problem).

One of the research lines in our group is the study and development of new algorithms which outperform Hamiltonian Monte Carlo, are able to sample quantities related to the topological charge correctly and, more generally, generate lattice configurations at a lower computational cost.

 

* The early universe:

In the early universe there are various phenomena characterised by non-perturbative collective field interactions, typically leading to non-linear dynamics. Such phenomena cannot be captured by perturbative coupling expansions, not even if the couplings involved are small. In general, the details of non-linear (typically out-of-equilibrium) dynamics are difficult, if not impossible, to capture with analytic calculations. In order to fully understand the non-linearities developed in the real-time evolution of a field theory, the use of numerical lattice techniques becomes mandatory.

The numerical results arising from the non-linear dynamics of early-universe high-energy phenomena provide important guidance for determining the best observational strategies to probe the unknown physics of this era. It is therefore crucial to develop numerical techniques, as efficient and robust as possible, to simulate these phenomena. Numerical algorithms developed for this purpose must satisfy a number of physical constraints (e.g. energy conservation) and keep the numerical integration errors under control. It is also useful to develop as many techniques as possible, in order to validate and double-check the results of simulations. Only in this way will we achieve robustness in the predictions of the potentially observable implications of these high-energy phenomena.

In our group we work on developing techniques for studying the non-linear dynamics of fields in many problems of the early universe, including the dynamics of post-inflationary preheating or first-order phase transitions, and their emission of gravitational waves. We also work on the creation, evolution and annihilation of cosmic defects, on axion-like field dynamics, on moduli dynamics, and in general on gravitational-wave emission from any such phenomena. We study both the theoretical and the observational aspects of these processes.

In our group we have participated in the development of CosmoLattice (CL), a modern package publicly released in 2020/21 for lattice simulations of the dynamics of interacting scalar and gauge fields in an expanding universe. CL incorporates a series of features that make it very versatile and powerful: i) it is written in C++, fully exploiting the object-oriented programming paradigm, with a modular structure and a clear separation between the physics and the technical details; ii) it is MPI-based and uses a discrete Fourier transform parallelized in multiple spatial dimensions, which makes it suitable for probing physical problems with well-separated scales, running very high-resolution simulations, or simply very long ones; iii) it introduces its own symbolic language, defining field variables and operations over them, so that one can introduce differential equations and operators in a manner as close as possible to the continuum; and iv) it includes a library of numerical algorithms, ranging from O(dt^2) to O(dt^{10}) methods, suitable for simulating global and gauge theories on an expanding grid, including the case of self-consistent expansion sourced by the fields themselves. See:

The Art of Simulating the Early Universe -- Part I, JCAP in press, arXiv:2006.15122
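As a rough illustration of the kind of field evolution such codes perform (and not of CosmoLattice's actual interface), the following is a minimal kick-drift-kick, O(dt^2) update of a single self-interacting scalar field on a periodic 3D lattice in a fixed, radiation-like FLRW background; the potential, lattice parameters and scale factor are illustrative assumptions.

import numpy as np

N, dx, dt = 16, 0.5, 0.01     # grid points per dimension, lattice spacing, time step (illustrative)
lam = 1.0e-3                  # quartic self-coupling, V(phi) = lam/4 phi^4 (hypothetical)

def laplacian(f):
    # nearest-neighbour 3D Laplacian with periodic boundary conditions
    out = -6.0 * f
    for axis in range(3):
        out += np.roll(f, 1, axis=axis) + np.roll(f, -1, axis=axis)
    return out / dx ** 2

def dV(phi):
    return lam * phi ** 3

def a_of(t):
    return (1.0 + t) ** 0.5   # fixed radiation-like background, a ~ t^(1/2)

rng = np.random.default_rng(1)
phi = 1.0 + 1e-3 * rng.standard_normal((N, N, N))  # homogeneous mode plus small fluctuations
pi = np.zeros_like(phi)                            # conjugate momentum, pi = a^3 dphi/dt
t = 0.0
for _ in range(200):
    a = a_of(t)
    pi += 0.5 * dt * (a * laplacian(phi) - a ** 3 * dV(phi))   # half kick
    phi += dt * pi / a_of(t + 0.5 * dt) ** 3                   # drift
    a = a_of(t + dt)
    pi += 0.5 * dt * (a * laplacian(phi) - a ** 3 * dV(phi))   # half kick
    t += dt
print("volume-averaged field:", phi.mean())

Production codes such as CosmoLattice generalize this pattern to gauge fields, higher-order integrators and self-consistent expansion sourced by the fields, and parallelize the grid with MPI.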

 

Cosmology

 

Dark Matter and Dark Energy

Unprecedented precision measurements of Cosmic Microwave Background (CMB) temperature and polarization anisotropies from the Planck satellite, together with Type Ia Supernova (SNeIa) luminosity distance measurements and measurements of the clustering of the large-scale structure (LSS) of our Universe from a number of galaxy redshift surveys, have provided a remarkably accurate description of the Universe. The combination of these measurements points towards the so-called concordance ΛCDM model, describing a spatially flat Universe where most of the energy content takes the form of two dark components: Dark Matter and Dark Energy. The first represents about 25% of the total energy budget of the Universe, while visible matter (baryons) accounts for 5%. The remaining ∼70% of the total energy is associated with a mysterious component, which does not behave like matter and is commonly referred to as Dark Energy. The Dark Energy component is responsible for the current accelerated expansion of the Universe, and thus constitutes a fundamental key towards understanding its fate. Yet, its nature and gravitational properties remain largely unknown. Relaxing the hypothesis that it is due to a cosmological constant, or in other words letting the Dark Energy contribution to the Universe's budget evolve with time, could alleviate the required fine-tuning. A wide variety of Dark Energy models alternative to the cosmological constant, featuring either new fundamental particles and fields or modifications of the gravitational sector, have been proposed by members of the SOM group.
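For orientation, in the spatially flat ΛCDM model the energy budget described above enters the expansion history through the standard Friedmann equation

\[ H^2(z) = H_0^2 \left[ \Omega_m (1+z)^3 + \Omega_r (1+z)^4 + \Omega_\Lambda \right], \qquad \Omega_m + \Omega_r + \Omega_\Lambda \simeq 1 , \]

with \(\Omega_m \simeq 0.3\) (dark plus baryonic matter), \(\Omega_\Lambda \simeq 0.7\) (the cosmological constant) and the radiation contribution \(\Omega_r\) negligibly small today.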

Recent work concerning dark energy and its properties can be found in the references below:

https://arxiv.org/abs/2010.02230
https://arxiv.org/abs/2009.12620
https://arxiv.org/abs/2005.02062
https://arxiv.org/abs/1910.09853
https://arxiv.org/abs/1908.04281
https://arxiv.org/abs/1906.11697

Dark matter is one of the most interesting open problems in Cosmology and Particle Physics, pointing towards the existence of new elementary particles and theories beyond the Standard Model of Particle Physics. Unlike baryonic matter, dark matter does not absorb, emit or reflect light (it is dark), and its existence is inferred through its gravitational effects on baryonic matter, with plenty of evidence arising from different sources. The flatness of the rotation curves of galaxies requires a significant dark matter component to account for the inferred dynamical galactic mass, whereas the temperature and polarization anisotropies of the Cosmic Microwave Background (CMB) radiation indicate a dark matter component that accounts for about 27% of the present energy balance of the Universe. On the scale of galaxy clusters, the Bullet Cluster provides one of the most convincing pieces of evidence for the existence of dark matter, clearly showing the separation between non-luminous and baryonic matter. Over the past few years, researchers have come up with different theoretical models to describe dark matter and have proposed several experiments to detect dark matter candidates directly and indirectly, but so far the origin and nature of dark matter remain unknown. Hence, the goal of SOM members is to continue to elaborate models to understand dark matter, taking into account new data and constraints, in order to shed light on the dark side of the Universe.

 

Fig: Bullet Cluster, probing the distribution of Dark Matter.

Fig: Composition of the universe: Dark Energy (DE), Dark Matter (DM), ...

 

Reionization Epoch: 21 cm Cosmology


The appearance of the first generation of galaxies, when the universe was a few hundred million years old, led to the end of the so-called dark ages of the universe. The ultraviolet (UV) photons emitted by these galaxies gradually ionized the neutral hydrogen in a process known as reionization. While awaiting future cosmological measurements of the 21 cm transition line, it is important to exploit our current knowledge of the evolution of the total ionized fraction at late times to test Dark Matter properties. 21 cm cosmology aims to measure accurately the Epoch of Reionization (EoR) through the 21 cm neutral hydrogen hyperfine transition, tracing the baryon overdensities in the z > 6 redshift range. Current 21 cm radio interferometers aim to measure the power spectrum of the 21 cm signal. Next-decade high-redshift 21 cm experiments include SKA (Square Kilometre Array) and HERA (Hydrogen Epoch of Reionization Array). 21 cm cosmology is an emerging field, and one of our foremost objectives is to exploit these future measurements to improve the present cosmological constraints on Dark Matter. SOM member Olga Mena is a member of the SKA science team, and she is deeply involved in the Spanish participation in this experiment.
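For orientation, the quantity targeted by these interferometers is the differential brightness temperature of the 21 cm line relative to the CMB, which schematically (omitting the standard cosmological prefactors) scales as

\[ \delta T_b \propto x_{\rm HI}\,(1+\delta_b)\left(1-\frac{T_{\rm CMB}}{T_S}\right)\sqrt{\frac{1+z}{10}} , \]

where \(x_{\rm HI}\) is the neutral hydrogen fraction, \(\delta_b\) the baryon overdensity and \(T_S\) the spin temperature of the gas.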

Please see the references below:

https://arxiv.org/abs/2004.00013
https://arxiv.org/abs/1906.07735
https://arxiv.org/abs/1811.02716

           

Neutrino properties from cosmology

A thermal relic neutrino component in the universe modifies both the expansion history and the growth of structure, having a cosmological impact on: a) the Big Bang nucleosynthesis epoch, through its effect on the expansion rate, so that measurements of the primordial abundance of light elements can constrain Neff (the effective number of neutrino species); b) the epoch of matter-radiation equality, leaving an imprint on the CMB anisotropies; and c) the recent universe, where, after neutrinos become non-relativistic, they suppress the growth of matter density fluctuations and galaxy clustering.
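For reference, the radiation energy density of the universe is conventionally written in terms of Neff as

\[ \rho_{\rm rad} = \rho_\gamma \left[ 1 + \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_{\rm eff} \right], \]

with \(\rho_\gamma\) the photon energy density; the Standard Model prediction for Neff is slightly above 3.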

Members of the SOM group were part of the BOSS galaxy survey and have led in the past some of the analyses of the neutrino mass constraints from this galaxy survey, combining its measurements with a number of other cosmological data sets and leading to the most constraining bounds on the neutrino masses and on the properties of other cosmic relics (e.g. axions). Our goals are to continue improving the cosmological bounds on neutrino properties as new data become available, and to combine such constraints with neutrino oscillation data, which will help to identify the underlying dynamics responsible for neutrino masses. Special attention will be devoted to the issue of the neutrino mass hierarchy (normal versus inverted), in combination with future bounds from the long-baseline DUNE (Deep Underground Neutrino Experiment). The optimization of future galaxy surveys is also crucial for extracting the neutrino mass from cosmological observations.

Our recent work on neutrino cosmology can be found in the references below:

https://arxiv.org/abs/2006.11237

https://arxiv.org/abs/1806.11051

 


Early Universe, Inflation and gravitational waves

The inflationary paradigm successfully explains the flatness problem, the horizon problem and the origin of the perturbations which seeded the structures that we observe today in our universe. The smoking gun of inflation would be the detection of a stochastic background of primordial gravitational waves. Such a primordial signature is characterized by its amplitude, parametrized via the tensor-to-scalar ratio r. The 2018 analyses of the Planck CMB data release have presented the tightest bounds to date on r using temperature and polarization measurements. Although current Planck constraints are perfectly compatible with a vanishing tensor-to-scalar ratio, there is still enough room for many theoretical possibilities.

See e.g. https://arxiv.org/abs/1802.04290

In the early universe there are also various phenomena characterised by non-perturbative collective field interactions, such as post-inflationary (p)reheating and strong first-order phase transitions. Soon after the end of inflation, the field responsible for cosmic inflation, the inflaton, is typically expected to be in the form of a homogeneous condensate oscillating around the minimum of its potential. Each time the inflaton crosses zero, particle species sufficiently strongly coupled to the inflaton are created in energetic bursts. For bosonic species this leads to an exponential growth of the energy transfer within a few oscillations of the inflaton. This particle-production phenomenon is a non-perturbative effect, which cannot be captured by perturbative coupling expansions, not even if the couplings involved are small. In the case of strong first-order phase transitions (after the completion of reheating), the situation is similar, as the dynamics can only be captured on a lattice: no analytical or perturbative technique allows us to describe properly the 3-dimensional spatial profiles of the symmetry-breaking field involved in the phase transition (nor of the fields/plasma coupled to it), unless we simulate the non-linear interactions numerically. See Lattice Field Theories for further clarification.
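For illustration, in the simplest case of a quadratic inflaton potential and a coupling \(\tfrac{1}{2} g^2 \phi^2 \chi^2\) to another scalar field \(\chi\) (and neglecting the expansion of the universe), the Fourier modes of \(\chi\) obey a Mathieu-type equation,

\[ \ddot{\chi}_k + \left[ k^2 + g^2 \Phi^2 \sin^2(m t) \right] \chi_k = 0 , \]

with \(\Phi\) the oscillation amplitude and \(m\) the inflaton mass; modes lying within the resonance bands of this equation grow exponentially, which is the non-perturbative particle production described above.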

Furthermore, both preheating and first-order phase transitions create gravitational waves (GWs). GW astronomy has emerged as an exciting new field, triggered by the detection of GWs by the LIGO/Virgo network. So far, GW experiments have impacted astrophysics and provided new tests of gravity, but it is clear that they also offer unprecedented opportunities for breakthroughs in High Energy Physics (HEP) and Early Universe Cosmology. The connection between HEP and GWs represents an emerging interdisciplinary field, with the potential to address major puzzles in fundamental physics, e.g. the state of the Early Universe during inflation and just afterwards, the origin of the baryon asymmetry of the Universe (BAU), or the nature of dark matter. Furthermore, the prospect of a GW experimental program spanning different frequencies is emerging, and hence dedicated studies on the complementarity and cross-connection among different detectors, optimizing new-physics searches, are pivotal for a coordinated planning of GW observations. For example, in a very strong first-order phase transition around the EW scale (be it the EW symmetry breaking in beyond-SM scenarios or simply a transition in a dark sector), the GW background produced is expected to fall in the observational window of the first space-based GW detector, LISA, expected to start operating in the mid-2030s. This will allow us to test particle-physics theories with GWs, in a manner completely independent of, and orthogonal to, particle colliders.

 

Fig: reheating

 

Furthermore, the areas of inflationary reheating and axions can be related as follows, thanks to the requirement of radiative stability, which plays a crucial role in the consistency of inflationary model building. A simple way to ensure the radiative stability of a scalar potential, such as that of an inflaton, is to assume a (softly broken) shift symmetry, that is, the invariance of the theory under arbitrary constant shifts of the field, which is then a type of axion. The operator of lowest dimensionality allowed by a shift symmetry is a linear coupling between an axion (pseudo-scalar) field and the Pontryagin (topological) density F-Fdual of a U(1) gauge field. Axion-inflation scenarios are models where gauge fields are coupled to a pseudo-scalar inflaton via such an axionic coupling. When this is assumed, the gauge fields are excited to highly occupied states via non-perturbative (tachyonic) excitations, both during inflation and during post-inflationary preheating. Towards the last stages of inflation the system becomes non-linear due to the large excitation of the gauge fields. The latter significantly back-react on the inflaton dynamics, affecting the expansion rate. More importantly, the large amplification of the gauge fields leads to a very efficient generation of GWs and scalar density perturbations, both with non-Gaussian statistics. If the amplitude of the scalar perturbations sourced by the gauge fields is large enough, primordial black holes (PBHs) may also form later through gravitational collapse.
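Schematically (normalization conventions vary), this lowest-dimensional shift-symmetric operator takes the form

\[ \mathcal{L}_{\rm int} = \frac{\alpha}{4 f}\, \phi\, F_{\mu\nu}\tilde{F}^{\mu\nu} , \]

where \(\phi\) is the axion-like field, \(f\) its decay constant, \(\alpha\) a dimensionless coupling, and \(\tilde{F}^{\mu\nu}\) the dual field strength of the U(1) gauge field.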

Selected works on early Universe by SOM members:

JHEP 04 (2018) 026, arXiv:1707.09967 [hep-ph]
JCAP 06 (2019) 002, arXiv:1812.03132 [astro-ph.CO]
PRD 79, 063531 (2009), arXiv:0812.4624 [hep-ph]
JHEP 10 (2019) 142, arXiv:1904.11892 [hep-th]
PRL 110, 101302 (2013), arXiv:1212.5458 [astro-ph]
PRD 102 (2020) 10, 103516, arXiv:2007.03337 [astro-ph]
JCAP 02 (2017) 001, arXiv:1609.05197 [astro-ph.CO]
JCAP 10 (2017) 057, arXiv:1707.04533 [astro-ph.CO]
JCAP 11 (2018) 034, arXiv:1806.02819 [astro-ph.CO]
 

 

Neutrinos

 

HINTS OF 'BSM' PHYSICS FROM NEUTRINOS


The definitive discovery of a non-zero neutrino mass represents a strong experimental hint of physics beyond the Standard Model (BSM). The neutrino, with its unknown mass and its uncertain Dirac or Majorana nature, is waiting to be placed into a new model capable of describing it correctly. A strong interplay between particle physics, cosmology and astroparticle physics is needed to take steps forward in this field, and theoretical and experimental efforts have to be tightly connected in the search for new-physics signatures.

 

MODEL BUILDING

The Standard Model of Particle Physics is a very successful theory, but it lacks an explanation for neutrino masses, dark matter and the baryon asymmetry of the Universe. Several members of SOM work on proposing and studying scenarios of physics beyond the Standard Model that address these unsolved issues, and on studying their phenomenological implications, which can be tested against experimental results. In particular, we are experts on models of neutrino mass generation, such as low-scale seesaws and radiative models. In some of these scenarios, dark matter and/or the baryon asymmetry can also be explained. The experimental results expected in the next years will be crucial in narrowing down the plethora of possibilities. In particular, in the neutrino sector, the mass ordering and CP violation will be measured. In the context of dark matter, if a signal of a weakly interacting massive particle (WIMP) is not observed in direct or indirect detection, the WIMP paradigm may lose part of its motivation. Other options, such as asymmetric dark matter, are also of interest to our group.

 

DIRAC VS MAJORANA NEUTRINOS (neutrinoless double beta decay)

Determining whether massive neutrinos are Dirac or Majorana fermions is crucial to pin down the mechanism that gives them mass. The most promising way to investigate the nature of massive neutrinos is to search for neutrinoless double beta decay, a nuclear transition forbidden in the Standard Model because it explicitly violates total lepton number. Many experiments, exploiting different techniques (NEXT, CUORE, ...), are currently searching for this process.

 

SUPERNOVAE NEUTRINOS (neutrino mass ordering and neutrino mass bounds)

Core-collapse Supernovae provide a precious signal from our Universe that can help us constrain neutrino properties. Most of the energy released by the explosion is emitted through neutrinos and antineutrinos of all flavors, with mean energies of tens of MeV. Their detection, exploiting their different interaction channels with matter, offers many interesting possibilities, such as determining the neutrino mass ordering (normal or inverted) and imposing new bounds on the neutrino mass, together with the development of more detailed models of the Supernova explosion mechanism.
A new generation of neutrino detectors is ready to observe the next Supernova explosion (IceCube), and a future generation of detectors is under design and construction (Hyper-Kamiokande, DUNE). Thanks to the enhanced flavor sensitivity and statistics, the next observed core-collapse burst will lead to an important step forward in the understanding of core-collapse mechanisms and neutrino properties.

 

The DUNE experiment

The future long-baseline facility DUNE (Deep Underground Neutrino Experiment) consists of a beam of muon neutrinos (or muon antineutrinos) that will travel from Fermilab to the far detector, located at the Sanford Underground Research Facility in Lead, South Dakota, 1,300 kilometers downstream of the source. This experiment aims to extract the sign of the atmospheric mass splitting and the CP-violating phase in the neutrino mixing sector. However, great physics opportunities also arise for atmospheric neutrinos, supernova neutrinos, and neutrinos from dark matter particle annihilations. Some members of the SOM group, although theoretical physicists, are also part of the DUNE collaboration; see some related work in:


https://arxiv.org/abs/1905.03589

 

 
