Cape Town, South Africa, 1 – 6 February 2010

Editors

H V Klapdor-Kleingrothaus, Heidelberg, Germany
I V Krivosheina, Heidelberg, Germany and Nishnij Novgorod, Russia
R Viollier, University of Cape Town, South Africa

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI
Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
PHYSICS BEYOND THE STANDARD MODEL OF PARTICLES, COSMOLOGY AND ASTROPHYSICS
Proceedings of the Fifth International Conference — Beyond 2010

Copyright © 2011 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN-13 978-981-4340-85-4 ISBN-10 981-4340-85-5
Printed in Singapore.
Preface
The Fifth International Conference on Physics Beyond the Standard Models of Particle Physics, Cosmology and Astrophysics (BEYOND 2010) took place during February 1-6, 2010, in Cape Town, South Africa. It was the second conference of this series (after Oulu, Finland, in 2002) to be held outside Germany; the first conferences were held at Castle Ringberg, Tegernsee, Germany, in 1997, 1999 and 2003. Traditionally, the scientific program of the BEYOND conferences covers most of the prominent topics of modern particle physics. This conference too, with 87 participants and 73 presentations, gave a broad view of the status and future of the field.
Fig. 1. Geography of the Heidelberg BEYOND conferences, from 1997 to 2010.
Although the meeting took place just before the LHC entered a new energy region at 7 TeV in the centre of mass, it allowed the presentation of some first results from 2009, and a look at what will come next. The latter included the discovery potential of the ATLAS experiment, challenges for the CMS detector in the search for a fourth generation of quarks and for exotic partners of the top quark, and the potential of the LHCb detector for B physics. On the theoretical side, the role of approximate conformal symmetry in strongly coupled theories, on which the LHC will
begin to shed light, flavour physics in warped extra dimensions, and hidden sectors and hidden extra dimensions were discussed, as well as the expected production rates at the LHC of long-lived superparticles.

Other highlights centred on areas beyond the Standard Model which are under investigation in non-accelerator experiments. They ranged from neutrinos (double beta decay, tritium decay and solar neutrinos) to Q-balls, and from cosmological connections, including searches for and theories of dark matter and of dark energy, to the µ → eγ experiment; among them are the two prominent observations of physics beyond the Standard Model, one of which is also being discussed as a possible key to the understanding of dark energy. One of the highlights was a presentation of the rejuvenated Hubble Space Telescope and some early results; others included expectations for AMANDA and ANTARES and for an extended Auger Observatory, as well as observations of very high-energy gamma rays from supernova remnants interacting with molecular clouds, which seem to be a new way to reveal cosmic-ray accelerators. The present status and future of the search for superheavy elements was covered, as well as the status and future of the search for gravitational waves, expectations for the new aSPECT spectrograph at Grenoble for beyond-Standard-Model physics from neutron decay, and candidates for future high-energy accelerators, to mention just a few of the topics discussed.

An overview of the topics of the conference has recently been published in CERN Courier, October 2010 (pp. 15-18). We add this report at the end of this preface, and thank the Editor of CERN Courier for generous permission. The program and all presentations (transparencies) given at the conference can be viewed at www.klapdor-k.de/Conferences/Conferences.htm.

We are confident that the Proceedings of this conference will provide a useful overview of this exciting field of research, its current status and its future prospects. We are convinced that this book may also become a useful handbook for students. In conclusion, the lively, enthusiastic and highly stimulating atmosphere of BEYOND 2010 raises the expectation of an exciting future for particle physics and cosmology beyond their standard models.

The organizers thank all of the speakers and participants who made this meeting an unusually successful one. Our thanks go to the Physics Department of the University of Cape Town for financial support. We are grateful to the National Institute of Theoretical Physics (NITheP), led by its Director Prof. Frederick Scholtz at the Stellenbosch Institute of Advanced Studies, for contributing substantially to the funding of these Proceedings. We thank all people who contributed in one way or another to the organisation of the conference, and to creating a pleasant and inspiring atmosphere during the conference. We are indebted in particular to Drs. Neven Bilic (Cape Town) and Irina Krivosheina (Heidelberg and Nishnij Novgorod), and to the conference secretary Mrs. Joan Parsons and her husband Mr. Derek Parsons, for their untiring assistance in preparing this conference. We are grateful to Mrs. Dorly Viollier for the nice organisation of the social program of the conference. Finally, we are indebted to Dr. Irina Krivosheina for her invaluable work as Scientific Secretary of the conference and for editing these Proceedings.
Last but not least, one of the Chairmen would like to give his personal thanks to his friend Prof. Raoul Viollier, local host of the conference, for making this beautiful event possible, while the latter would like to thank Prof. Hans V. Klapdor-Kleingrothaus, Chairman of the Beyond Conferences, for choosing Cape Town as the conference site for this first BEYOND conference in the Southern Hemisphere.
Hans V. Klapdor-Kleingrothaus (Heidelberg, Germany) and Raoul D. Viollier (Cape Town, South Africa)
Chairmen of the BEYOND 2010 Conference
September 2010

Irina Vladimirovna Krivosheina (Heidelberg, Germany, and Nishnij Novgorod, Russia)
Scientific Secretary of the BEYOND 2010 Conference
September 2010
Fig. 1. 1. Malcolm Bowen Niedner 2. Probir Roy 3. Peter McIntyre 4. Thomas Appelquist 5. Raoul D. Viollier 6. Hans Volker Klapdor-Kleingrothaus 7. Norma Susana Mankoč Borštnik 8. Ignatios Antoniadis 9. Jihn E. Kim 10. James Byrne 11. Irina Vladimirovna Krivosheina 12. John Kelley 13. Emmanuel Moulin 14. Federico Urban 15. Matthias Neubert 16. Yosuke Takubo 17. Gertrud Konrad 18. Manfred Leubner 19. Georg Wolschin 20. Marta A. Losada 21. Asantha Cooray 22. Alexander Osipowicz 23. Walter Kutschera 24. Zeeshan Ahmed 25. Silvia Capelli 26. Naba Mondal 27. Ji-Haeng Huh 28. Jouni Suhonen 29. Lino Miramonti 30. Osamu Yasuda 31. Andrew W. Beckwith 32. Paolo Desiati 33. Max Richter 34. Fabrice Feinstein 35. Shin-Ted Lin 36. Claude Guyot 37. Alan David Bross 38. Andrea Giuliani 39. Manfred Lindner 40. Tommy Ohlsson 41. Jacopo Nardulli 42. Clemens P. Kiessig 43. Ewan Stewart 44. Rachid Mazini 45. Silvia Costantini 46. Stefan Antusch 47. Roland Allen 48. Raymond R. Volkas 49. Fedor Šimkovič 50. Mikhail Shaposhnikov 51. Sandy S.C. Law 52. Neven Bilic 53. Gerard J. Stephenson Jr. 54. Luca Stanco 55. Felix A. Aharonian
Contents
Preface
v
Physics at New and Future Colliders (LHC, Muon-Facility, ILC, ...)

TeV Physics and Conformality T. Appelquist
3
Searches for New Heavy Quarks With the CMS Detector at the LHC S. Costantini (on behalf of the CMS Collaboration)
17
ATLAS Discovery Prospects for Few 100 pb−1 C. Guyot (on behalf of the ATLAS Collaboration)
26
Search For Dark Matter Candidates With the ATLAS Detector at the LHC R. Mazini (for the ATLAS Collaboration)
43
B-physics at the LHC J. Nardulli (on behalf of the LHCb Collaboration)
54
Long-Lived Superparticles at the LHC A.V. Gladyshev, D.I. Kazakov and M.G. Paucar
60
Heavy Ions at the LHC: Selected Predictions G. Wolschin
69
Exploring Physics Beyond the Standard Model With a Muon Accelerator Facility A.D. Bross
82
Measurement of Little Higgs Parameters at International Linear Collider Y. Takubo, M. Asano, T. Kusano et al.
91
PETAVAC: Boson-Boson Colliding Beams at 100 TeV in the SSC Tunnel P. McIntyre and A. Sattarov
100
Accelerator-Driven Thorium-Cycle Fission: Green Nuclear Power for the New Millennium P. McIntyre and A. Sattarov
112
Leptogenesis

Recent Issues in Leptogenesis M. Losada
122
Electromagnetic Leptogenesis in a Nutshell S.S.C. Law
135
Neutrino Decay into Fermionic Quasiparticles in Leptogenesis C.P. Kießig, M. Plümacher and M.H. Thoma
146
New Interactions, Inflationary and Quantum Cosmology

Anomaly Driven Signatures of Extra U(1)'s I. Antoniadis, A. Boyarsky and O. Ruchayskiy
155
An Adjustable Cosmological Constant J.E. Kim
166
Cosmic Inflation Meets Particle Physics S. Antusch, J.P. Baumann, K. Dutta and P.M. Kostka
177
Resummed Quantum Gravity and Planck Scale Cosmology B.F.L. Ward
188
SUSY/SUGRA Phenomenology, Fundamental Symmetries

Supersymmetric SO(N) From a Planck-Scale Statistical Picture R.E. Allen
199
SUSY Lepton Flavor Violation: Radiative Decays and Collider Searches R. Rückl
211
New Physics Without New Energy Scale M. Shaposhnikov
219
Neutrinos (Double Beta Decay, ν-Oscillations, Solar and Astrophysical Neutrinos, Tritium Decay)

Double Beta Decay and Beyond Standard Model Particle Physics H.V. Klapdor-Kleingrothaus and I.V. Krivosheina
231
LUCIFER: An Experimental Breakthrough in the Search for Neutrinoless Double Beta Decay I. Dafinei, F. Ferroni, A. Giuliani et al.
256
KATRIN the Karlsruhe Tritium Neutrino Project A. Osipowicz (on behalf of KATRIN Collaboration)
261
Neutrinoless Double EC and Rare Beta Decays as Tools to Search for the Neutrino Mass J. Suhonen and M.T. Mustonen
267
Detecting of Relic Neutrinos and Measuring of Fundamental Properties of Neutrinos With Atomic Nuclei F. Šimkovič
276
Neutrinoless Double Beta Decay With TeO2 Bolometers: Past and Future S. Capelli (on behalf of CUORE and CUORICINO Collaborations)
286
The Magic of Four Zero Neutrino Yukawa Textures P. Roy
293
Sensitivity to Sterile Neutrino Mixings and the Discovery Channel at a Neutrino Factory O. Yasuda
300
Searching For the Mixing Angle θ13 With Reactor Neutrinos P. Novella
314
Status of the Double Chooz Experiment T. Kawasaki (on behalf of the Double Chooz Collaboration)
318
Neutrino Oscillations With Long-Base-Line Beams (Past, Present and Very Near Future) L. Stanco
325
Status of the T2K Experiment A. Bravar
339
Looking for High Energy Astrophysical Neutrinos: the ANTARES Experiment V. Flaminio (for the ANTARES Collaboration)
347
Low Energy Solar Neutrino Spectroscopy: Results from the BOREXINO Experiment D. d’Angelo (on behalf of the Borexino Collaboration)
362
Neutrino Astrophysics and Galactic Cosmic Ray Anisotropy in IceCube P. Desiati (for the IceCube Collaboration)
376
Lepton-Flavour Violation, Superstrings, Magnetic Monopoles and Search for Exotics

A Framework for Domain-Wall Brane Model Building R.R. Volkas
393
Search for Lepton Flavour Violation With the µ+ → e+ γ Decay: First Results from the MEG Experiment G. Signorelli (on behalf of the MEG Collaboration)
406
Searches for Magnetic Monopoles and Beyond L. Patrizii, G. Giacomelli and Z. Sahnoun
417
Daemon Decay and Cosmic Inflation E.M. Prodanov
432
Cosmological Parameters, Dark Matter and Dark Energy

Chromodynamics, Vacuum Structure and Cosmology F.R. Urban
441
Determining Dark Energy C. Clarkson
455
Interacting Majorana Fermions and Cosmic Acceleration G.J. Stephenson Jr., P.M. Alsing, T. Goldman et al.
471
Nonextensivity in a Dark Maximum Entropy Landscape M.P. Leubner
482
Deceleration Parameter Q(Z) in 4D and 5D Geometries, and Implications of Graviton Mass in Mimicking Dark Energy in Both Geometries A.W. Beckwith
491
Neutrinos from KALUZA-KLEIN Dark Matter Annihilations in the Sun T. Ohlsson
496
Cosmological k-Essence Condensation N. Bilic, G.B. Tupper and R.D. Viollier
503
Signals from the Dark Universe: New Results from DAMA/LIBRA R. Bernabei, P. Belli, F. Montecchia et al.
511
Recent Results from WIMP-Search Analysis of CDMS-II Data A. Zeeshan (for the CDMS-II Collaboration)
530
Low Energy Neutrino and Dark Matter Physics With Sub-keV Germanium Detector S.-T. Lin and H.T. Wong (for the TEXONO Collaboration)
537
The “Approach Unifying Spin and Charges” Predicts the Fourth Family and a Stable Family Forming the Dark Matter Clusters N.S. Mankoč Borštnik
543
High-Energy Gamma Rays, Cosmic Rays, Status and Explanations of the PAMELA/ATIC Anomaly

Search for Dark Matter Through Very High Energy Gamma-Rays E. Moulin
557
The Pierre Auger Observatory: Recent Results and Future Plans J.L. Kelley (for the Pierre Auger Collaboration)
571
Supernova Remnants Interacting With Molecular Clouds: A New Way to Reveal Cosmic Rays F. Feinstein and A. Fiasson
579
A Test for the Dark Matter Interpretation of the PAMELA Positron Excess With the FERMI Telescope M. Regis
588
Minimal SUSY Dark Matter for Fermi-LAT/PAMELA Cosmic-Ray Data J.-H. Huh
597
Hubble Space Telescope Early Scientific Results and Future Prospects for the Rejuvenated HUBBLE Space Telescope M.B. Niedner
605
Archeology and Physics

A Physicist's View - the Disk of Nebra W. Schlosser
625
Exotic Archaeology: Searching for Superheavy Elements in Nature and Dating Human DNA with the 14C Bomb Peak W. Kutschera, F. Dellinger, J. Liebl et al.
633
Neutron Beta Decay

The Crucial Role of Neutron β-Decay Experiments in Establishing the Fundamental Symmetries of the (V-A) Description of Weak Interactions J. Byrne
647
Impact of Neutron Decay Experiments on Non-Standard Model Physics G. Konrad, W. Heil, S. Baeßler et al.
660
Superheavy Elements

Study of SHE at GSI - Status and Perspectives for the Next Decade F.P. Hessberger
675
Synthesis and Study of Superheavy Elements A.G. Popeko
689
General Relativity

On the Threshold of Gravitational Wave Astronomy P. Aufmuth
707
Spherical Accretion of Relativistic Fluid Onto Supermassive Black Hole Including Back-Reaction M.C. Richter, G.B. Tupper and R.D. Viollier
720
List of Participants
727
Authors Index
743
PART I Physics at New and Future Colliders (LHC, Muon-Facility, ILC, ...)
TeV PHYSICS AND CONFORMALITY

THOMAS APPELQUIST
Department of Physics, Sloane Laboratory, Yale University, New Haven, Connecticut 06520, USA

In this lecture, I will describe some recent work on the application of lattice-based simulations to strongly coupled gauge theories that might play a role in describing physics beyond the standard model. I will first discuss the exploration of conformal and near-conformal behavior in these theories employing a definition of the running coupling derived from the Schrödinger functional of the theory. I will then review some recent work on the chiral properties of gauge theories as the fermion number is adjusted to approach the critical value at which infrared conformal behavior replaces confinement and chiral symmetry breaking.
1. Introduction

Experiments at the Large Hadron Collider will soon begin revealing new physics at the TeV scale. A possibility is that this physics will involve new strong interactions in some form. These forces could describe electroweak symmetry breaking or perhaps some new sector not directly related to electroweak breaking. If any of this comes to pass, it will be very important for theorists to bring strong-coupling methods to bear on these new phenomena. Now is the time to begin these theoretical studies.

Lattice gauge theory has been very successful in deepening our understanding of the strong nuclear interactions. During the past two years, stimulated to some extent by the start-up of the Large Hadron Collider, interest has been growing in applying lattice methods to new, strongly interacting theories that could play a role in extending the standard model. In this lecture, I will describe the use of lattice methods to study strongly coupled gauge theories with possible application to models of dynamical electroweak symmetry breaking. My special focus will be on gauge theories that exhibit approximate conformal symmetry in the infrared.
2. The Conformal Window and Walking

2.1. Perturbative RG flow in generalized Yang-Mills theories

Consider a Yang-Mills theory with local gauge symmetry group SU(N_c), coupled to N_f massless Dirac fermion flavors:
\mathcal{L}_{YM} = -\frac{1}{4g^2}\sum_{a=1}^{N_c^2-1} F^a_{\mu\nu}F^{a,\mu\nu} + \sum_{i=1}^{N_f}\bar{\psi}_i\,(i\slashed{D})\,\psi_i \, ,    (1)

Table 1. Casimir invariants and dimensions of some common representations of SU(N): fundamental (F), two-index symmetric (S2), two-index antisymmetric (A2), and adjoint (G).

Representation    dim(R)        T(R)        C2(R)
F                 N             1/2         (N^2 - 1)/(2N)
S2                N(N+1)/2      (N+2)/2     (N+2)(N-1)/N
A2                N(N-1)/2      (N-2)/2     (N-2)(N+1)/N
G                 N^2 - 1       N           N
with the fermions in a representation R of the gauge group. The scale dependence of the renormalized coupling g = g(µ) is determined by the β-function, which can be expanded perturbatively:

\beta(\alpha) \equiv \frac{\partial\alpha}{\partial(\log\mu^2)} = -\beta_0\,\alpha^2 - \beta_1\,\alpha^3 - \beta_2\,\alpha^4 - \ldots \, ,    (2)

with \alpha(\mu) \equiv g(\mu)^2/4\pi. The universal values for the first two coefficients are

\beta_0 = \frac{1}{4\pi}\left[\frac{11}{3}N_c - \frac{4}{3}T(R)\,N_f\right] ,    (3)

\beta_1 = \frac{1}{(4\pi)^2}\left[\frac{34}{3}N_c^2 - \left(4\,C_2(R) + \frac{20}{3}N_c\right)T(R)\,N_f\right] ,    (4)
where T(R) and C_2(R) are the trace normalization and quadratic Casimir invariant of the representation R, respectively. The Casimir invariants for a few commonly-used representations of SU(N) are shown in Table 1. So long as N_f/N_c < 11/(4T(R)), so that β_0 > 0, the theory is asymptotically free. One may continue on to higher order in this expansion, at the cost of specifying a renormalization scheme; in the commonly used MS-bar scheme, the next two coefficients are known.1 For present purposes, a key step in studying the RG flow of the coupling constant is to identify any fixed points. Keeping the first two terms in the β-function, we see that in addition to the trivial ultraviolet fixed point α = 0, there is a second, infrared-stable solution located at

\alpha^{(2L)} = -\frac{\beta_0}{\beta_1} \, ,    (5)
describing the limit µ → 0. If the fixed-point coupling is sufficiently weak, the theory is perturbative at all scales. This condition will be satisfied if Nf is near the value 11Nc /4T (R) at which asymptotic freedom is lost.2,3 Since confinement and spontaneous breaking of chiral symmetry require strong coupling, they are absent in a theory that is perturbative at all scales.
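For orientation, Eqs. (2)-(5) are simple enough to evaluate directly. The short script below is an illustrative sketch of my own (it is not part of the lecture, and the function names are arbitrary); it locates the two-loop infrared zero for SU(3) with N_f fundamental flavors:

```python
# Sketch: two-loop beta-function coefficients, Eqs. (3)-(4), and the
# resulting infrared fixed point alpha* = -beta0/beta1 of Eq. (5).
# Illustrative only; naming and structure are not from the original lecture.
import math

def beta_coefficients(Nc, Nf, T_R=0.5, C2_R=None):
    """Return (beta0, beta1) for Nf Dirac flavors in a representation R."""
    if C2_R is None:                       # default: fundamental representation of SU(Nc)
        C2_R = (Nc**2 - 1) / (2.0 * Nc)
    beta0 = (11.0/3.0 * Nc - 4.0/3.0 * T_R * Nf) / (4.0 * math.pi)
    beta1 = (34.0/3.0 * Nc**2 - (4.0 * C2_R + 20.0/3.0 * Nc) * T_R * Nf) / (4.0 * math.pi)**2
    return beta0, beta1

def two_loop_fixed_point(Nc, Nf):
    """alpha* = -beta0/beta1, or None if there is no perturbative IR zero."""
    beta0, beta1 = beta_coefficients(Nc, Nf)
    if beta0 <= 0 or beta1 >= 0:           # need asymptotic freedom and beta1 < 0
        return None
    return -beta0 / beta1

if __name__ == "__main__":
    for Nf in range(6, 17):
        print(f"Nf = {Nf:2d}: alpha* = {two_loop_fixed_point(3, Nf)}")
```

For SU(3) with fundamental fermions this reproduces the familiar pattern: no perturbative infrared zero at small N_f, and a fixed-point coupling that weakens as N_f approaches 16.5, where asymptotic freedom is lost.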
As N_f is decreased, the value of the fixed-point coupling increases, at some point reaching a critical value N_f^c at which the infrared behavior changes from conformal to confining. Theories within the range

N_f^c < N_f < \frac{11 N_c}{4 T(R)}    (6)

are said to lie in the conformal window, due to the approximate restoration of conformal symmetry in the infrared. Perturbation theory is a priori unreliable for describing physics in the vicinity of the infrared fixed point when N_f is near the transition point N_f^c, so in order to determine the location of the transition, some non-perturbative estimate is required.
2.2. Infrared conformality and walking behavior

Theories that lie inside the conformal window, although interesting in their own right, are not generally useful in describing electroweak symmetry breaking due to the lack of chiral symmetry breaking (although it is possible to trigger symmetry breaking even in the conformal window by explicit construction.4) A theory outside the window (N_f < N_f^c), but close to the transition, would break the electroweak symmetry, but in a way that could address some important problems.

The idea is as follows:5-7 suppose that there is some (scheme-dependent) critical coupling α_c, which when exceeded will trigger the spontaneous breaking of chiral symmetry. Now consider a theory with a beta-function such that the coupling is approaching a somewhat supercritical fixed point α_* > α_c. When α_c is exceeded, confinement and chiral symmetry breaking set in. The fermions which were responsible for the existence of the fixed point develop masses and are screened out of the theory, causing the coupling to run as in the N_f = 0 theory below the generated mass scale. This idea, known as walking, results in a separation of scales between the ultraviolet physics, where the coupling runs perturbatively, and the infrared scale at which confinement sets in.

This dynamical scale separation is what one needs to address the FCNC problem in extended technicolor. The conflict there was between trying to simultaneously match the standard model particle masses and suppress FCNC-generating effects, both of which are tied to the same ETC scale Λ_i. With walking, the ultraviolet-sensitive condensate can pick up a large additional contribution from the scales between Λ_TC and Λ_ETC, allowing recovery of standard model masses without violation of precision electroweak experimental bounds.

While walking technicolor offers a solution to the difficulties with technicolor models, the existence of such a theory is speculative. The onset of walking is a strong-coupling effect, so that perturbative methods are unlikely to be useful. The search for walking is closely linked to more general questions about the location and nature of the conformal transition at N_f = N_f^c.
3. Lattice Studies of the Conformal Transition

3.1. Overview

Lattice field theory provides an ideal way to study the conformal transition, and more generally the properties of Yang-Mills theories as we vary N_f. Lattice simulations are truly non-perturbative, although the continuum limit must be taken carefully to recover information about continuum physics (the ability to take this limit being made possible by the asymptotic freedom of the theories in which we are interested). Lattice simulations allow broad investigation; a large number of different observables can be computed simultaneously on a single set of gauge configurations.

3.2. Running coupling

One method for studying the conformal transition as a function of N_f is the direct computation of an appropriately defined running coupling. There are a number of such choices possible, including the standard extraction of the static potential from Wilson loops, the Schrödinger functional,8-12 the twisted Polyakov loop scheme,13 and constructions using ratios of Wilson loops.14,15 Regardless of the definition, the goal is to map out the evolution of the coupling over a large range of distance scales R. If we work at a fixed lattice spacing a, then the range of available R at which we can measure the coupling is small. The problem can be exacerbated in a theory with large N_f, where the β-function can be small; to go from weak to strong coupling, a change in scale of many orders of magnitude is often required. To achieve our goal, then, we must find some way to match together lattice measurements of the running coupling taken at different lattice spacings and combine them into an overall measurement of continuum evolution.

A technique known as step scaling16,17 provides a systematic approach. Step scaling is simply a lattice-based renormalization-group (RG) procedure, describing the evolution of the coupling g(R) as the scale changes from R → sR, where s is a numerical scaling factor. Its implementation begins with the choice of some initial value for the running coupling g^2(R). Several ensembles at different a/R are then generated, tuning the lattice bare coupling β = 2N_c/g_0^2 so that on each ensemble one obtains the chosen value of g^2(R). Then one generates a second ensemble at each β, but measures the coupling at a longer scale R → sR. One can then extrapolate a/R → 0 and recover the continuum value g^2(sR). Taking g^2(sR) to be the new starting value, one repeats the procedure, mapping g^2(R) → g^2(sR) → ... → g^2(s^n R), until we have sampled the coupling over a large range of R values.

An efficient approach to step scaling is to measure g^2(R) for a wide range of values in β and R/a, and then to generate an interpolating function. Step scaling may then be done analytically using the interpolated values. Such an interpolating function should reproduce the perturbative relation g^2(R) = g_0^2 + O(g_0^4) at weak coupling, but otherwise its form is not strongly constrained. One possible choice which has worked quite well in some studies is an expansion of the inverse coupling
1/g^2(β, R/a) as a set of polynomial series in the bare coupling g_0^2 = 2N_c/β at each R/a:

\frac{1}{g^2(\beta, R/a)} = \frac{\beta}{2N_c}\left[1 - \sum_{i=1}^{n} c_{i,R/a}\left(\frac{2N_c}{\beta}\right)^{i}\right].    (7)

The order n of the polynomial is arbitrary, and can be varied as a function of R/a to achieve the optimal fit to the available data.

There is a natural caveat on the step-scaling procedure, especially in the context of studying theories with infrared fixed points. The procedure as outlined above depends crucially on the ability to take the limit a/R → 0. If we hold g^2(R) fixed and take the limit a/R → 0, it is important that the bare coupling g_0^2(a/R), which depends on the short-distance behavior of the theory, does not become strong enough to trigger a bulk phase transition. This is satisfied automatically if the short-distance behavior is determined by asymptotic freedom, in which case g_0^2(a/R) vanishes as 1/log(R/a). However, if we work in a theory with an infrared fixed point and measure values of g^2(R) lying above the fixed point at g_*^2, then g_0^2(a/R) will increase as a → 0, with no evidence that it remains bounded and therefore that a continuum limit exists. Even so, we can extrapolate to small enough values of a/R to render lattice artifact corrections negligible, providing that g_0^2(a/R) is kept small enough to avoid triggering a bulk transition into a strong-coupling phase.

Our work11,12 has relied on one definition of a running coupling based on the Schrödinger functional (SF). The SF running coupling is defined through the response of a system to variation in strength of a background chromoelectric field. It is a finite-volume method, with the coupling strength defined at the spatial box size L, so that we identify R = L and can discard finite-volume corrections. Formally, the Schrödinger functional describes the quantum mechanical evolution of some system from a given state at time t = 0 to another given state at time t = T, in a spatial box of size L with periodic boundary conditions.8,9,18 The temporal extent T is fixed proportional to L, so that the Euclidean box size depends only on a single parameter. The initial and final states are described as Dirichlet boundary conditions which are imposed at t = 0 and t = T, and for measurement of the coupling constant are chosen such that the minimum-action configuration is a constant chromo-electric background field of strength O(1/L). This can be implemented both in the continuum8 and on the lattice.10 One can represent the Schrödinger functional as the path integral
Z[W, \zeta, \bar\zeta;\, W', \zeta', \bar\zeta'] = \int [DA\, D\psi\, D\bar\psi]\; e^{-S_G(W,W') - S_F(W,W',\zeta,\bar\zeta,\zeta',\bar\zeta')} ,    (8)

where A is the gauge field and ψ, ψ̄ are the fermion fields. W and W' are the boundary values of the gauge fields, and ζ, ζ̄, ζ', ζ̄' are the boundary values of the fermion fields at t = 0 and t = T, respectively. The fermionic boundary values are
subject only to multiplicative renormalization,19 and as such are generally taken to be zero in order to simplify the calculation. The gauge boundary fields W, W' are chosen to give a constant chromo-electric field in the bulk, whose strength is of order 1/L and is controlled by a dimensionless parameter η.20 The Schrödinger functional (SF) running coupling is then defined by the response of the action to variation of η:

\frac{k}{g^2(L,T)} = -\left.\frac{\partial}{\partial\eta}\,\log Z\right|_{\eta=0} ,    (9)

where (with the standard choice of gauge boundary fields for SU(3)) the normalization factor k is

k = 12\left(\frac{L}{a}\right)^2\left[\sin\!\left(\frac{2\pi a^2}{3LT}\right) + \sin\!\left(\frac{\pi a^2}{3LT}\right)\right].    (10)

The presence of k ensures that g^2(L,T) is equal to the bare coupling g_0^2 at tree level in perturbation theory. In general, g^2(L,T) can be thought of as the response of the system to small variations in the background chromo-electric field.

For most fermion discretizations, at this point one can take T = L in order to define the running coupling as a function of a single scale, g^2(L). However, if staggered fermions are used (as they often are, in order to offset the cost of simulating additional fermion flavors), then an additional complication arises which can be envisioned geometrically. The staggered approach to fermion discretization can be formulated as splitting the 16 spinor degrees of freedom available over a 2^4 hypercubic sublattice. Clearly, such a framework requires an even number of lattice sites in all directions. If all boundaries are periodic or anti-periodic, then setting T = L can be done so long as L is even. However, with Dirichlet boundaries in the time direction, the site t = T is no longer identified with t = 0, so that a total of T/a + 1 lattice sites must exist. In order to accommodate staggered fermions, T/a must be odd. Thus when using staggered fermions, the closest we can come to the desired choice of T is T = L ± a. In the continuum limit, the relation T = L is recovered. However, at finite lattice spacing O(a) lattice artifacts are introduced into observables. This is undesirable, since staggered fermion simulations contain bulk artifacts only at O(a^2) and above. But simulating at the choices T = L ± a and averaging over the results has been shown to eliminate the induced O(a) bulk artifact in the running coupling.21 We define g^2(L) through the average

\frac{1}{g^2(L)} = \frac{1}{2}\left[\frac{1}{g^2(L, L-a)} + \frac{1}{g^2(L, L+a)}\right].    (11)

I will now discuss simulation data and results from a Schrödinger-functional running-coupling study of the Nf = 8 and 12 theories with the fermions in the fundamental representation of the SU(3) gauge group, using staggered fermions.12 I begin with the Nf = 8 theory, for which data was gathered in the range 4.55 ≤ β ≤ 192 on lattice volumes given by L/a = 6, 8, 12, 16. The lower limit on β was
determined to keep the lattice coupling too weak to trigger a bulk phase transition. A selection of the data, together with interpolating function fits of the form Eq. (7), is shown in Fig. 1. Note that at any fixed value of β, the coupling strength g^2(L) increases with L/a, showing no evidence of the “backwards” running that we would expect to observe in a theory with an infrared fixed point.
Β Fig. 1. Measured values g 2 (L) versus β for Nf = 8. The interpolating curves shown represent the best fit to the data, using the functional form Eq. (7). The errors are statistical.
Although Fig. 1 is indicative that the Nf = 8 theory lies outside the conformal window, it is possible for results at fixed β to be misleading; we must take the continuum limit in order to recover information about the continuum theory. We apply the step-scaling procedure detailed above to extract the continuum behavior, by extrapolation of a/L → 0 with each doubling of the scale L. Our results depend on the choice of continuum extrapolation. As this is a staggered fermion study, the leading bulk lattice artifacts are expected to be O(a2 ), but there are additional boundary artifacts of O(a) which are only partially cancelled by subtraction of their perturbative values. However, in this case the a/L dependence is weak, with the associated systematic error dominated by the statistical errors on the points, so that a constant extrapolation (i.e. weighted average of the two points) is used. The resulting continuum running of g 2 (L) for Nf = 8 is shown in Fig. 2. L0 is an arbitrary length scale here defined by the condition g2 (L) = 1.6, anchoring the step-scaling curve at a relatively weak value. The points shown correspond to repeated doubling of the scale L relative to L0 . Derivation of statistical errors uses
a bootstrap technique.12 Perturbative running at two and three loops is also shown for comparison up through g 2 (L) ≈ 10, beyond which the accuracy of perturbation theory is expected to degrade. The coupling measured in this simulation follows the perturbative curve closely up through g2 (L) ≈ 4, and then begins to increase more rapidly, reaching values that exceed typical estimates of the coupling strength needed to induce spontaneous chiral symmetry breaking. As there is no evidence for an infrared fixed point, or even for an inflection point in the running of g 2 (L), this study supports the assertion that the Nf = 8 theory lies outside the conformal window.
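Schematically, the recursion that produces the continuum running of Fig. 2 looks as follows. This is my own sketch: `continuum_step` stands in for the full lattice procedure (tuning β, remeasuring at the doubled scale on several lattice spacings, extrapolating a/L → 0 and propagating bootstrap errors), and the toy function used at the end is not a real measurement:

```python
# Schematic step-scaling recursion: g2(L0) -> g2(s L0) -> g2(s^2 L0) -> ...
# Illustrative only; a real analysis replaces continuum_step with lattice data.
import math

def step_scale(g2_start, n_steps, s, continuum_step):
    """Return the list [g2(L0), g2(s L0), ..., g2(s^n L0)]."""
    history = [g2_start]
    g2 = g2_start
    for _ in range(n_steps):
        g2 = continuum_step(g2, s)      # one continuum-extrapolated step L -> sL
        history.append(g2)
    return history

if __name__ == "__main__":
    def toy_step(g2, s):
        # crude one-loop-like toy evolution, standing in for real measurements
        b0 = 0.036
        return g2 / (1.0 - b0 * g2 * math.log(s**2))
    print(step_scale(1.6, 6, 2.0, toy_step))
```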
Fig. 2. Continuum running for Nf = 8. Purple points are derived by step-scaling using the constant continuum extrapolation. The error bars shown are purely statistical. Two-loop and three-loop perturbation theory curves are shown for comparison.
Data and interpolating fits for the Nf = 12 theory are shown in Fig. 3. Here, simulations were performed on lattice extents of L/a = 10, 20, in addition to the values of L/a = 6, 8, 12, 16 of the Nf = 8 case. The data and fits shown in Fig. 3 show striking qualitative differences with their counterparts at Nf = 8 (Fig. 1). In particular, there are some hints of a “crossover” phenomenon taking place, in which the order of the curves in L/a from weak to strong coupling is inverted at small β. Such a crossover is indicative of a region in which the coupling decreases towards the infrared, and is thus a signature of an infrared fixed point. However, it should be emphasized again that it is important to go through the full step-
scaling procedure in order to extract meaningful continuum physics, and any result indicated by working at fixed β should not be taken as definitive. As above, we choose a constant continuum extrapolation, i.e. weighted average of the three points.
Fig. 3. Measured values g 2 (L) versus β, Nf = 12. The interpolating curves shown represent the best fit to the data, using the functional form of Eq. (7).
Results for continuum running, again from the starting value g^2(L_0) = 1.6, are shown in Fig. 4. Two-loop and three-loop perturbative curves are shown again for reference. The figure clearly shows the running coupling tracking towards an infrared fixed point, whose exact value lies within the statistical error band and which is consistent with the value predicted by three-loop perturbation theory. It should be noted that the error bars of Fig. 4 are highly correlated, with correlation approaching 100% near the fixed point, due to the use of an underlying interpolating function. This causes the error bars to approach a stable value asymptotically, even as we increase the number of steps towards infinity. The infrared fixed point here also governs the infrared behavior of the theory for values of g^2(L) which lie above the fixed point. As discussed previously, we cannot naively apply the step-scaling procedure in this region, since we can no longer approach the ultraviolet fixed point at zero coupling strength in order to take the continuum limit. Instead, we can restrict our attention to finite but small values of a/L, small enough to keep lattice artifacts small and yet large enough so that g_0^2(a/L) does not trigger a bulk phase transition for g^2(L) near (but above)
the fixed point. With these caveats in mind, the step-scaling procedure can then be applied and leads to the running from above the fixed point shown in Fig. 4. The observation of this “backwards-running” region is crucial to distinguishing theories with true infrared fixed points from walking theories, in which the β-function may become vanishingly small before turning over and confining.
Fig. 4. Continuum running for Nf = 12. Results shown for running from below the infrared fixed point (purple triangles) are based on g 2 (L0 ) ≡ 1.6. Also shown is continuum backwards running from above the fixed point (light blue squares), based on g 2 (L0 ) ≡ 9.0. Error bars are again purely statistical, although strongly correlated due to the underlying interpolating functions. Two-loop and three-loop perturbation theory curves are shown for comparison.
Having shown evidence for the existence of an infrared fixed point in the Nf = 12 theory and demonstrated its absence up to strong coupling at Nf = 8, we have constrained the edge of the conformal window for the case of Nc = 3 with fermions in the fundamental representation: 8 < N_f^c < 12. Similar measurements at other values of Nf can allow us to further constrain N_f^c. Furthermore, if a walking theory exists just below the transition value, a lattice measurement of the scale dependence of the coupling could directly reveal the expected plateau behavior and resulting separation between infrared and ultraviolet scales. In addition, the non-perturbative β-function can be used in conjunction with additional lattice measurements to extract the anomalous dimension γ_m of the mass operator.22
3.3. Spectral and chiral properties

I will next discuss the evolution with Nf of various observables on the broken side of the conformal transition. In order to meaningfully compare any quantity between theories with different Nf, we must first identify a physical scale to hold fixed. A natural choice is the Goldstone-boson decay constant F. However, the extraction of F from lattice simulations can be challenging. The rho meson mass mρ is much more easily determined, due to the lack of a chiral logarithm at next-to-leading order (NLO) in a χPT-derived fit.25 However, in the end we are more interested in the evolution of physics with respect to F than mρ. In QCD the two scales are connected, mρ ∼ 2πF, but it is not known a priori whether this connection will persist near the edge of the conformal window. The Sommer scale r_0,26 associated with the scale of confinement, is another possible choice with similar advantages and drawbacks to mρ. In the present discussion we will assume that these scales do not evolve with respect to each other, so that holding any one constant with Nf is sufficient. This assumption is supported by available data, but the choice of scale is an open question going forward.

As these lattice simulations are necessarily performed at finite mass, while we are interested in the behavior of theories in the chiral limit, extrapolation of results m → 0 is crucial. Chiral perturbation theory provides a consistent way to carry out this extrapolation. The familiar expressions for the Goldstone boson mass M_m, decay constant F_m and chiral condensate ⟨ψ̄ψ⟩_m (with the subscript denoting evaluation at finite quark mass m) are easily generalized to theories with arbitrary Nf ≥ 2 by inclusion of known counting factors. The next-to-leading order (NLO) expressions for a theory with 3 colors are:27

M_m^2 = \frac{2m\langle\bar\psi\psi\rangle}{F^2}\left[1 + zm\left(\alpha_M + \frac{1}{N_f}\log(zm)\right)\right] ,    (12)

F_m = F\left[1 + zm\left(\alpha_F - \frac{N_f}{2}\log(zm)\right)\right] ,    (13)

\langle\bar\psi\psi\rangle_m = \langle\bar\psi\psi\rangle\left[1 + zm\left(\alpha_C - \frac{N_f^2-1}{N_f}\log(zm)\right)\right] ,    (14)
November 26, 2010
19:21
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.01˙Appelquist
14
fermion masses m as Nf is increased, in order to keep the NLO terms small enough relative to the leading order so that χPT is trustworthy.
0.4
r-1 , M,m 0,m
0.3
Nf=2 M,m Nf=6 M,m Nf=2 r-1 0,m Nf=6 r-1 0,m
0.2
0.1
0 0
0.01
m
0.02
0.03
−1 Fig. 5. From Ref. 29 Linear chiral extrapolations of Mρ,m and the Sommer scale r0,m , in lattice units, based on the (solid) points at mf = 0.01 − 0.02. Both show agreement within error between Nf = 2 and Nf = 6 in the chiral limit.
The goal is to search for enhancement of the condensate relative to the scale F. One way to proceed is to construct the ratio ⟨ψ̄ψ⟩_m/F_m^3 and extrapolate directly m → 0; however, as noted above, the presence of the contact term can make such an extrapolation difficult to carry out precisely. By making use of the additional quantity M_m^2 and the Gell-Mann-Oakes-Renner (GMOR) relation M_m^2 F_m^2 = 2m⟨ψ̄ψ⟩_m, incorporated into the NLO formulas shown above, we can construct other ratios at finite m which will also extrapolate to ⟨ψ̄ψ⟩/F^3 in the chiral limit: the other two possibilities are M_m^2/(2mF_m) and (M_m^2/2m)^{3/2}/⟨ψ̄ψ⟩_m^{1/2}. Due to the contact term in ⟨ψ̄ψ⟩_m, M_m^2/(2mF_m) should have the mildest chiral extrapolation of the three ratios.

A lattice study of the type outlined here, investigating the evolution from Nf = 2 to Nf = 6 in the SU(3) fundamental case, has been carried out by the Lattice Strong Dynamics (LSD) collaboration.29 The physical scales chosen to be matched are the rho mass mρ and the Sommer scale r_0^{-1}; each observable was first measured in the Nf = 6 case, and then matched by tuning the bare lattice coupling at Nf = 2. The resulting chiral extrapolation of these quantities is shown in Fig. 5, and shows good agreement, so that the lattice cutoffs are well matched between Nf = 2 and Nf = 6. Determination of the presence or absence of condensate enhancement is done through comparison of the quantity ⟨ψ̄ψ⟩/F^3 between the Nf = 6 and Nf = 2
Fig. 6. From Ref. 29: R_m ≡ [M_m^2/2mF_m]_{6f}/[M_m^2/2mF_m]_{2f}, versus m ≡ (m(2f) + m(6f))/2, showing enhancement of ⟨ψ̄ψ⟩/F^3 at Nf = 6 relative to Nf = 2. The open symbol at m = 0.005 denotes the presence of possible systematic errors.
theories, by way of the equivalent ratio M_m^2/(2mF_m). We can directly construct a “ratio of ratios”

R_m \equiv \frac{\left[M_m^2/2mF_m\right]_{N_f=6}}{\left[M_m^2/2mF_m\right]_{N_f=2}} ,    (15)
A value of R_m > 1 then implies enhancement of the condensate as Nf increases. The result is shown in Fig. 6, and indicates that R_m ≳ 1.5 in the chiral limit, barring a downturn in R_m, an unlikely outcome, as the curvature of the NLO logarithm is naturally upwards in the chiral expansion of R_m itself. The magnitude of R_m is significant and larger than expected; an MS-bar perturbation theory estimate of the enhancement from Nf = 2 to 6, obtained by integrating the anomalous dimension of the mass operator γ_m, leads to an expected increase on the order of 5-10%. Some care must be taken in comparing this value to our lattice result, since the condensate ⟨ψ̄ψ⟩, and by extension ⟨ψ̄ψ⟩/F^3, depends on the renormalization scheme chosen. The conversion factor between the lattice-cutoff scheme with domain wall fermions and MS-bar is known from Ref. 30. From that reference, for this simulation the required factor to convert R_m is Z_6^{MS-bar}/Z_2^{MS-bar} = 1.449(29)/1.227(11) = 1.18(3). This increases the perturbative estimate of the expected enhancement to the order of 20-30%, so the observed R_m ≳ 1.5 is still significantly larger than anticipated.
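For concreteness, the construction of the condensate-enhancement ratio from the GMOR relation can be sketched as follows. This is my own illustration with placeholder numbers (not the LSD data), and the jackknife/bootstrap error propagation of the actual analysis is omitted:

```python
# Build M_m^2/(2 m F_m), which extrapolates to <psibar psi>/F^3 in the chiral
# limit via the GMOR relation, and the "ratio of ratios" R_m of Eq. (15).
# All numerical inputs here are placeholders, for illustration only.

def condensate_over_F3(m, M, F):
    """M_m^2/(2 m F_m) = <psibar psi>_m / F_m^3, using M_m^2 F_m^2 = 2 m <psibar psi>_m."""
    return M**2 / (2.0 * m * F)

def R_m(m, M_6f, F_6f, M_2f, F_2f):
    """Eq. (15): ratio of the Nf = 6 and Nf = 2 determinations at matched mass m."""
    return condensate_over_F3(m, M_6f, F_6f) / condensate_over_F3(m, M_2f, F_2f)

# placeholder example at a single fermion mass (lattice units)
print(R_m(m=0.01, M_6f=0.25, F_6f=0.035, M_2f=0.23, F_2f=0.031))
```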
A direct computation of the S-parameter is also important to the study of general Yang-Mills theories with a focus on technicolor, and is well within the reach of existing lattice techniques. Some results have been reported,31,32 and more work is underway by the LSD collaboration.

References

1. T. van Ritbergen, J. A. M. Vermaseren and S. A. Larin, Phys. Lett. B400, 379 (1997).
2. W. E. Caswell, Phys. Rev. Lett. 33, 244 (1974).
3. T. Banks and A. Zaks, Nucl. Phys. B196, 189 (1982).
4. M. A. Luty, JHEP 04, 050 (2009).
5. B. Holdom, Phys. Lett. B150, 301 (1985).
6. K. Yamawaki, M. Bando and K.-i. Matumoto, Phys. Rev. Lett. 56, 1335 (1986).
7. T. W. Appelquist, D. Karabali and L. C. R. Wijewardhana, Phys. Rev. Lett. 57, 957 (1986).
8. M. Lüscher, R. Narayanan, P. Weisz and U. Wolff, Nucl. Phys. B384, 168 (1992).
9. S. Sint, Nucl. Phys. B421, 135 (1994).
10. A. Bode, P. Weisz and U. Wolff, Nucl. Phys. B576, 517 (2000); Erratum: ibid. B608, 481 (2001).
11. T. Appelquist, G. T. Fleming and E. T. Neil, Phys. Rev. Lett. 100, 171607 (2008).
12. T. Appelquist, G. T. Fleming and E. T. Neil, Phys. Rev. D79, 076010 (2009).
13. E. Bilgici et al. (2009).
14. E. Bilgici et al., Phys. Rev. D80, 034507 (2009).
15. Z. Fodor, K. Holland, J. Kuti, D. Nogradi and C. Schroeder, Phys. Lett. B681, 353 (2009).
16. M. Lüscher, P. Weisz and U. Wolff, Nucl. Phys. B359, 221 (1991).
17. S. Caracciolo, R. G. Edwards, S. J. Ferreira, A. Pelissetto and A. D. Sokal, Phys. Rev. Lett. 74, 2969 (1995).
18. A. Bode et al., Phys. Lett. B515, 49 (2001).
19. R. Sommer (2006).
20. M. Lüscher, R. Sommer, P. Weisz and U. Wolff, Nucl. Phys. B413, 481 (1994).
21. U. M. Heller, Nucl. Phys. B504, 435 (1997).
22. F. Bursa, L. Del Debbio, L. Keegan, C. Pica and T. Pickup, Phys. Rev. D81, 014505 (2010).
23. X.-Y. Jin and R. D. Mawhinney, PoS LAT2009, 049 (2009).
24. L. Del Debbio, B. Lucini, A. Patella, C. Pica and A. Rago, Phys. Rev. D80, 074507 (2009).
25. D. B. Leinweber, A. W. Thomas, K. Tsushima and S. V. Wright, Phys. Rev. D64, 094502 (2001).
26. R. Sommer, Nucl. Phys. B411, 839 (1994).
27. J. Gasser and H. Leutwyler, Phys. Lett. B184, 83 (1987).
28. J. Bijnens and J. Lu, JHEP 11, 116 (2009).
29. T. Appelquist, A. Avakian, R. Babich, R. C. Brower, M. Cheng, M. A. Clark, S. D. Cohen, G. T. Fleming, J. Kiskis, E. T. Neil, J. C. Osborn, C. Rebbi, D. Schaich and P. Vranas, Phys. Rev. Lett. 104, 071601 (2010).
30. S. Aoki, T. Izubuchi, Y. Kuramashi and Y. Taniguchi, Phys. Rev. D67, 094502 (2003).
31. E. Shintani et al., Phys. Rev. Lett. 101, 242001 (2008).
32. P. A. Boyle, L. Del Debbio, J. Wennekers and J. M. Zanotti, Phys. Rev. D81, 014504 (2010).
SEARCHES FOR NEW HEAVY QUARKS WITH THE CMS DETECTOR AT THE LHC

S. COSTANTINI (on behalf of the CMS Collaboration)
Department of Physics and Astronomy, University of Ghent, Ghent, Belgium
E-mail: [email protected]
Postal address: CERN-PH-EP 354-2002, CH-1211 Geneva 23, Switzerland

We review the capability of the CMS experiment to address the experimental searches for New Heavy Quarks at the LHC. In particular, we concentrate on the first year(s) of LHC operations, since new physics at the TeV scale may manifest itself even in modest data samples of the order of a few hundred pb−1. A few example searches for New Heavy Quarks are discussed, with emphasis on processes characterized by clean final states with electrons and muons.

Keywords: LHC, CMS, luminosity, physics reach, searches, new physics, beyond Standard Model, New Heavy Quarks
1. Introduction

The possible existence of new heavy fermions is going to be fully tested at the Large Hadron Collider. Already with the first data it will be possible to entirely explore the interesting range allowed for their mass values, from the existing experimental bounds up to the limits set by unitarity conditions. Here we present two independent scenarios analyzed by the CMS experiment: new physics with a fourth generation of elementary quarks, b' and t', and with exotic partners of the top quark. In both cases a significance above three standard deviations can be reached at the LHC with integrated luminosities between 100 pb−1 and 1 fb−1. Stringent limits can also be set with early data.

This note is structured as follows: after an introduction on the Large Hadron Collider (Section 2), Section 3 contains a description of the CMS detector, while Section 4 is dedicated to the CMS performance with the first data. The searches for fourth generation quarks are addressed in Section 5. Finally, Section 6 deals with the searches for exotic partners of the top quark.

2. The Large Hadron Collider

The Large Hadron Collider (LHC) became operational in 2009. High-energy physics runs are taking place in 2010,1 with proton-proton collisions at a center-of-
mass energy of 7 TeV and peak values of the instantaneous luminosity that will soon reach 10³⁰-10³¹ cm⁻² s⁻¹. The design energy of 14 TeV and the design luminosity of 10³⁴ cm⁻² s⁻¹ are expected to be attained after a few years of operation. Each LHC experiment will collect an integrated luminosity of up to about 1 fb−1 under the initial conditions, and up to hundreds of fb−1 per year when the design luminosity is reached. Six experiments are currently operating at the LHC: two so-called omni-purpose detectors, ATLAS2 and CMS,3,8 which are performing a general research program; two dedicated detectors, ALICE4 and LHCb,5 specifically designed for heavy-ion physics and b-physics, respectively; and two special purpose experiments, TOTEM6 and LHCf.7
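For orientation, the luminosity figures quoted above translate into integrated luminosities roughly as in the sketch below (my own illustration; the assumed 10^7 s of effective data-taking per year is a common rule of thumb, not a number taken from this note):

```python
# Rough conversion from instantaneous to integrated luminosity per year.
# Assumes ~1e7 s of effective data-taking per year (illustrative assumption).

def integrated_luminosity_fb(inst_lumi_cm2_s, seconds_per_year=1e7):
    """Integrated luminosity in fb^-1 (1 fb^-1 = 1e39 cm^-2)."""
    return inst_lumi_cm2_s * seconds_per_year / 1e39

for L in (1e30, 1e31, 1e32, 1e34):
    print(f"L = {L:.0e} cm^-2 s^-1  ->  about {integrated_luminosity_fb(L):g} fb^-1 per year")
```

With these assumptions, 10³¹ cm⁻² s⁻¹ corresponds to roughly 0.1 fb−1 per year, while the design value of 10³⁴ cm⁻² s⁻¹ gives of order 100 fb−1 per year, consistent with the figures quoted in the text.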
3. The CMS detector

The central feature of the Compact Muon Solenoid (CMS) detector8 is a superconducting solenoid, of 6 m internal diameter, providing a field of 3.8 T. Within the field volume are the silicon pixel and strip tracker, the lead-tungstate crystal electromagnetic calorimeter (ECAL), and the brass/scintillator hadron calorimeter (HCAL). Muons are measured in gas-ionization detectors embedded in the steel return yoke. In addition to the barrel and endcap detectors, CMS has extensive forward calorimetry, assuring very good hermeticity with pseudorapidity coverage up to high values (|η| < 5). CMS has an overall length of 22 m, a diameter of 15 m, and weighs 12 500 tonnes.

The electromagnetic calorimeter (ECAL) contains 75 848 lead tungstate (PbWO4) crystals (25.8 X0 long in the barrel, 24.7 X0 long in the endcaps). Scintillating crystals are the most precise calorimeters for energy measurements and they provide excellent energy resolution over a wide range, as well as high detection efficiency for low energy electrons and photons. The ECAL has an energy resolution of better than 0.5% above 100 GeV. The 15K-channel HCAL, when combined with the ECAL, measures jets with a resolution ∆E/E ∼ 100%/√E ⊕ 5%.

Muons with pseudorapidity in the range |η| < 2.4 are measured with detection planes made of three technologies: Drift Tube chambers (DT), Cathode Strip Chambers (CSC) and Resistive Plate Chambers (RPC). The readout has nearly 1 million electronic channels. Matching the muons to the tracks measured in the silicon tracker should result in a transverse momentum (pT) resolution between 1 and 5%, for pT values up to 1 TeV/c.

The inner tracker measures charged particles within the |η| < 2.5 pseudorapidity range. It consists of 1440 silicon pixel and 15 148 silicon strip detector modules, chosen for their radiation hardness and small amount of material, corresponding to about 30% of the radiation length X0. The tracking system provides an impact parameter resolution of the order of 5 µm and a transverse momentum resolution of about 1.5% for 100 GeV/c particles.

The first level (Level-1) of the CMS trigger system, composed of custom hard-
ware processors, is designed to select the most interesting events in about 1 µs using information from the calorimeters and muon detectors. The High Level Trigger (HLT) processor farm further decreases the event rate from up to 100 kHz to 100 Hz (the initial DAQ system runs at 50 kHz), before data storage. On the Worldwide LHC Computing Grid (WLCG), some 50k cores dedicated to CMS run more than 2M lines of source code.

4. Detector Performance with Data and Prospects for Searches

The data collected during the first proton-proton collisions have shown that the performance of the CMS detector is according to design expectations and the first data distributions agree well with Monte Carlo simulation.9 As an example, Figure 1 shows the K0S and Λ invariant mass distributions, in agreement with the PDG10 values at the 10−4 level.

Fig. 1. Left: π+π− invariant mass distribution, obtained with CMS 900 GeV and 2.36 TeV data. The reconstructed K0S mass is in agreement with the PDG value at the 10−4 level: m_K/m_PDG = 1 − (0.7 ± 1.4) · 10−4. Right: pπ− (+ c.c.) invariant mass distribution, from 900 GeV and 2.36 TeV data. The reconstructed Λ mass is m_Λ/m_PDG = 1 + (1.9 ± 0.9) · 10−4.

5. Searches for Fourth Generation b' Quarks

In this Section we consider two benchmark channels for the search for heavy bottom-like fourth generation quark pairs in proton-proton collisions with the CMS detector, pp → b' b̄':

(1) Searches for light b' quarks
(2) Searches for heavy b' quarks, above the tW threshold.

A wider discussion can be found in Refs. 11 and 12. The center-of-mass energy assumed in those analyses is 10 TeV.

The existence of a fourth generation of elementary fermions, a new replica of the known three generations of chiral matter, may provide a sufficiently large CP
violation and may account as well for the asymmetry between matter and antimatter. Provided the mass difference between the fourth generation quarks t′ and b′ is lower than the W mass, their existence is not excluded by precision electroweak measurements. Furthermore, within the framework of the Standard Model, the t′ and b′ masses are constrained to be below approximately 550 GeV/c2 by unitarity conditions. The possible phenomenology of fourth generation quarks is extensively discussed in Ref. 13. The present b′ and t′ mass limits have been obtained by the CDF experiment,14 assuming, in the case of a light b′ quark, a 100% decay branching fraction into the Flavor Changing Neutral Current (FCNC) decay channel b′ → bZ. For the b′ quark, mass values below 268 GeV/c2 and 325 GeV/c2, respectively, are excluded at 95% confidence level by the light b′ and heavy b′ analyses. For the t′ quark, the current CDF limit is 311 GeV/c2. The integrated luminosities considered in those analyses are between 1.1 and 2.7 fb−1. Searches for fourth generation quarks at the LHC will benefit from the higher center-of-mass energy, providing larger possible b′ (or t′) production cross sections, ranging from ∼1 pb for masses of about 500 GeV/c2 to ∼100 pb for masses of 200 GeV/c2.
5.1. Light b′
For b′ mass values lower than the tW mass threshold the decay b′ → tW is kinematically suppressed. The leading charged current process is the doubly Cabibbo-suppressed b′ → cW, which suffers from high background contamination. For this reason, this analysis considers the possibility, for one of the two pair-produced b′ quarks, of a sizable FCNC decay channel b′ → bZ (an electroweak penguin loop process) with a branching ratio BR between 5% and 20%. With this assumption the signal is relatively clean and one can fully reconstruct the b′ in the leptonic decay channel of the vector bosons. The process b′b̄′ → bZ cW, followed by leptonic decays of the Z and W bosons, gives rise to a tri-leptonic final state plus two jets. The main background is represented by Z+jets, WZ+jets and tt̄ events. Further background rejection is achieved by requiring the presence of exactly one Z and one W and isolation between jets and lepton candidates. The results are shown in Figure 2. Assuming a BR(b′ → bZ) = 10%, b′ mass values up to about 190 GeV can be excluded with 200 pb−1 of data. With 1 fb−1 of data, we can exclude light b′ quark masses up to 235 GeV/c2.
5.2. Heavy b′
In this analysis the mass of the b′ quark is assumed to be above the tW threshold, i.e. above approximately 255 GeV. The dominant decay mode is expected to be b′b̄′ → tW− t̄W+, hence producing a four W boson plus two b-jets final state.
Fig. 2. b′ cross section as a function of the b′ mass, for different values of BR(b′ → bZ) = 5%–20%. Upper limits at 95% C.L. are provided for 200 pb−1 and 1 fb−1. The center-of-mass energy is 10 TeV.
Each W boson can decay either leptonically (W → lν) or hadronically (W → dijet). Among the possible final states of the four W boson decay chain, the ones with low Standard Model background are selected, i.e. trilepton and same-sign dilepton processes, with multijets. The event selection requires at least one energetic and isolated lepton with transverse momentum pT > 35 GeV/c and at least one hard jet with pT > 85 GeV/c. For the same-sign dileptonic channel, exactly two same-charge leptons (either electrons or muons) and at least four jets are required. For the trileptonic channel, events with three leptons are selected, with two or more jets. Lepton-jet separation is required to suppress additional leptons from jets. In addition, background from doubly reconstructed muons or electrons, where a radiative photon is reconstructed as a lepton candidate of the same charge, is rejected by requiring lepton-lepton isolation. Finally, the invariant mass of two muons or electrons of any charge should not be within a window of 10 GeV/c2 around the Z-boson mass. The above selection criteria are optimized assuming a b′ → tW signal with a b′ mass of 400 GeV/c2. The analysis results are summarized in Figure 3, assuming an integrated luminosity of 200 pb−1 at 10 TeV for the search reach and the exclusion limits. The main background sources are tt̄, tt̄ + W/Z + jets and W/Z + jets events. Three typical benchmark points are discussed in this analysis, corresponding to b′ masses of 300, 400, and 500 GeV/c2, and production cross sections at leading order (Ref. 17) in pp collisions at √s = 10 TeV of 13.6 pb, 2.80 pb, and 0.78 pb, respectively.
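As a rough illustration of why the trilepton and same-sign dilepton channels are singled out, the fraction of four-W events falling into each category can be estimated from the W branching fractions alone. The sketch below assumes each W decays to eν or µν with a combined probability of about 0.21 (leptons from τ and b decays, as well as detector effects, are ignored); the numbers are illustrative only and are not taken from the analyses cited above.

# Illustrative estimate of leptonic final-state fractions for the
# four-W system in b' pair production (two W+ and two W-).
# Assumption: each W decays to e/mu + nu with probability p_lep ~ 0.21;
# tau leptons, b decays and detector effects are neglected.

p_lep = 0.21          # assumed BR(W -> e nu or mu nu)
p_had = 1.0 - p_lep   # everything else treated as "hadronic"

# Same-sign dilepton: both same-charge W's decay leptonically,
# the two opposite-charge W's decay hadronically.
# Two choices of the same-sign pair (W+W+ or W-W-).
f_same_sign_dilepton = 2 * p_lep**2 * p_had**2

# Trilepton: exactly three of the four W's decay leptonically.
f_trilepton = 4 * p_lep**3 * p_had

print(f"same-sign dilepton fraction ~ {f_same_sign_dilepton:.3f}")   # ~0.055
print(f"trilepton fraction          ~ {f_trilepton:.3f}")            # ~0.029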
Fig. 3. b′ cross-section upper limits as a function of the b′ mass, for 60 pb−1 and 200 pb−1. The center-of-mass energy assumed in this analysis is 10 TeV. For comparison, the PYTHIA leading-order production cross section (Ref. 17) is shown as a function of the b′ mass.
With a data set of 200 pb−1, evidence of a b′b̄′ → tt̄WW signal can be obtained with a significance of 3.7 standard deviations for a b′ mass of 400 GeV/c2. If no signal is observed in the data, b′ quarks with a mass less than 485 GeV/c2 can be excluded at the 95% confidence level.
6. Searches for Exotic Partners of the Top Quark
In this Section we address the searches for heavy fermionic partners of the top quark and refer to Ref. 15 for a detailed discussion. The center-of-mass energy considered in this analysis is 10 TeV. Natural, non-supersymmetric solutions of the hierarchy problem generally require fermionic partners of the top quark, with masses not much heavier than 500 GeV/c2, i.e. in the mass range accessible with early LHC data. This analysis searches for pair production of the two top partners with electric charge Q = 5/3 (the T5/3) and Q = −1/3 (the B), which are predicted in models16 where the Higgs particle is a pseudo-Goldstone boson. Both kinds of new fermions decay to Wt, leading to tt̄W+W−. With the subsequent decay of the top quarks to bW, as shown in Figure 4, the final state is bb̄WWWW. For this study T5/3 and B are assumed to be degenerate in mass. The golden channels for this analysis are the semi-leptonic channels where two of the W bosons in Fig. 4 decay into same-sign leptons and the other two decay into jets. The event selection requires at least five jets with transverse momentum above 30 GeV/c, including a leading jet with a pT of more than 100 GeV/c, and
two same-sign leptons (two electrons, two muons or one electron and one muon), with transverse momentum larger than 50 GeV/c and 25 GeV/c, respectively. A 10 GeV/c2 veto around the Z mass is applied for the two-electron channel. The presence of same-sign dileptons distinguishes this process from tt̄, which represents the main Standard Model background, and allows its contribution to be reduced. Other background processes, like ttWW, ttW, WWW, and WW, have much smaller cross sections. Due to instrumental effects, QCD multi-jets and Z+jets also contribute to the total background.
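A minimal sketch of this same-sign dilepton selection, written as a plain Python function operating on a simple event record; the field names (jets, leptons, pt, charge, flavour, m_ll) are hypothetical placeholders and do not correspond to the CMS reconstruction software.

# Minimal sketch of the same-sign dilepton selection described in the text.
# An "event" is assumed to be a dict with lists of jets and leptons (each a
# dict with pt in GeV/c, charge and flavour) and the dilepton invariant mass
# m_ll in GeV/c^2.  All field names are illustrative assumptions.

Z_MASS = 91.2  # GeV/c^2

def passes_same_sign_dilepton(event):
    jets = [j for j in event["jets"] if j["pt"] > 30.0]
    if len(jets) < 5:                                  # at least five jets
        return False
    if max(j["pt"] for j in jets) < 100.0:             # leading-jet requirement
        return False
    leps = sorted(event["leptons"], key=lambda l: l["pt"], reverse=True)
    if len(leps) != 2:                                 # exactly two leptons
        return False
    if leps[0]["pt"] < 50.0 or leps[1]["pt"] < 25.0:
        return False
    if leps[0]["charge"] != leps[1]["charge"]:         # same-sign requirement
        return False
    if leps[0]["flavour"] == leps[1]["flavour"] == "e":
        if abs(event["m_ll"] - Z_MASS) < 10.0:         # Z veto, ee channel only
            return False
    return True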
Fig. 4. Pair production of T5/3 (left) and B (right) quarks, followed by examples of their decays to same-sign dilepton final states. The figures are taken from Ref. 16.
As the same-sign dileptons come from different B's, no full mass reconstruction is possible for the B quarks. However, the T5/3 mass can be reconstructed in the fully hadronic decay chain, as the dileptons come from the decay of the same heavy fermion. The T5/3 mass peak is shown in Fig. 5. An integrated luminosity of about 1.6 fb−1 at 10 TeV is needed for a 5σ observation of the tW peak. Figure 6 shows, as a function of the integrated luminosity, the 95% upper limit on the production cross section (multiplied by the branching ratio into same-sign dileptons) and the discovery potential in terms of signal significance. T5/3 and B expectations are combined. In the absence of any observed excess over the expected background, stringent limits can be set at the LHC with early data. Heavy exotic quarks with masses up to 400 GeV/c2 can be excluded with 80 pb−1, while 340 pb−1 are needed for masses of 500 GeV/c2. For the observation of heavy top partners of mass 400 GeV/c2, ∼115 pb−1 of data are needed for a 5σ observation significance and about 50 pb−1 of integrated luminosity for 3σ evidence. For a heavy top partner of mass 500 GeV/c2 these numbers increase to about 600 pb−1 and 220 pb−1, respectively.
Fig. 5. Invariant mass distribution of the reconstructed tW for a signal of 500 GeV/c2.
Fig. 6. Left: 95% C.L. upper limit on the cross section times BR(same-sign dileptons) as a function of integrated luminosity. Right: signal significance as a function of integrated luminosity.
7. Conclusions
Evidence of New Physics could be obtained by the CMS experiment already during the low luminosity period of the LHC. A few searches for new heavy quarks have been discussed, which have been performed assuming a center-of-mass energy of 10 TeV. About 200 pb−1, corresponding to the first year of data taking at low luminosity, will allow the observation of new heavy quarks up to masses of the order of 400 GeV/c2.
Acknowledgments
Acknowledgments of support for all of CMS are given in Ref. 8. I would like to thank the organizers for their invitation.
References
1. The Large Hadron Collider home page: http://lhc.web.cern.ch/lhc/ contains also general and outreach information. The LHC schedule can be found here: http://lhccommissioning.web.cern.ch/lhc-commissioning/.
2. A Toroidal LHC Apparatus, Technical Proposal, CERN/LHCC 94-43 (1994).
3. The Compact Muon Solenoid, Technical Proposal, CERN/LHCC 94-38 (1994).
4. ALICE Collaboration, F. Carminati et al., "A Large Ion Collider Experiment at CERN LHC", J. Phys. G: Nucl. Part. Phys. 30, 1517 (2004), and references therein.
5. The Large Hadron Collider beauty experiment, Technical Proposal, CERN/LHCC 98-04 (1998).
6. The TOTEM Collaboration, "The TOTEM Experiment at the CERN Large Hadron Collider" (total cross section, elastic scattering and diffraction dissociation at the LHC), http://iopscience.iop.org/1748-0221/3/08/S08007.
7. The LHCf Collaboration, "The LHCf detector at the CERN Large Hadron Collider" (forward production of neutral particles in proton-proton collisions at extremely low angles), http://iopscience.iop.org/1748-0221/3/08/S08006.
8. CMS Collaboration, "The CMS Experiment at the CERN LHC", 2008 JINST 3 S08004, http://iopscience.iop.org/1748-0221/3/08/S08004; CMS Physics TDR Volume 1, Detector Performance and Software, CERN/LHCC 2006-001.
9. CMS public Physics Results are available here: http://cms-physics.web.cern.ch/cmsphysics/CMS Physics Results.htm. An overview of results obtained with the 2009 run can be found here: http://indico.cern.ch/conferenceOtherViews.py?view=standard&confId=73860.
10. C. Amsler et al. (Particle Data Group), Phys. Lett. B667, 1 (2008) and the 2009 web edition on http://pdg.lbl.gov/.
11. CMS Collaboration, "Search for Low Mass b' Production in CMS", CMS-PAS-EXO-08-013, http://cdsweb.cern.ch/record/1194506.
12. CMS Collaboration, "Search for A Fourth Generation b' Quark in tW Final State at CMS in pp Collisions at sqrt(s) = 10 TeV", CMS-PAS-EXO-09-012, http://cdsweb.cern.ch/record/1195747.
13. G. D. Kribs, T. Plehn, M. Spannowsky and T. M. P. Tait, "Four Generations and Higgs Physics", Phys. Rev. D 76 (2007) 075016; P. H. Frampton, P. Q. Hung and M. Sher, "Quarks and Leptons Beyond the Third Generation", Phys. Rep. 330 (2000) 263; P. Q. Hung and M. Sher, "Experimental constraints on fourth generation quark masses", Phys. Rev. D 77 (2007) 037302.
14. Public CDF results and notes on New or Excited Fermions can be found on http://www-cdf.fnal.gov/physics/exotic/ and in http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.3264v1.pdf.
15. CMS Collaboration, "Search for Exotic Top Partners with the CMS Experiment", CMS-PAS-EXO-08-008, http://cdsweb.cern.ch/record/11954505.
16. R. Contino and G. Servant, "Discovering the top partners at the LHC using same-sign dilepton final states", JHEP 06 (2008) 026, http://arxiv.org/abs/0801.1679.
17. T. Sjöstrand, S. Mrenna and P. Skands, "PYTHIA 6.4 physics and manual", JHEP 05 (2006) 026, arXiv:hep-ph/0603175.
ATLAS DISCOVERY PROSPECTS FOR FEW 100 PB−1
GUYOT CLAUDE (on behalf of the ATLAS Collaboration)
SPP/IRFU, C.E. Saclay, 91191 Gif-sur-Yvette Cedex, France
[email protected]
As the LHC provided its first collisions at an energy of 7 TeV in the centre of mass in early spring 2010 and is expected to deliver an integrated luminosity of 1 fb−1 per experiment over the years 2010 and 2011, the physics discovery prospects for the first few 100 pb−1 (typically what is expected in 2010) are reviewed, following detailed simulation studies with the ATLAS detector. The status of the detector for the first collisions is also presented.
Keywords: particle physics, LHC, ATLAS, first results
1. The LHC and Its Experiments
The Large Hadron Collider (LHC) now in operation at CERN (Geneva, Switzerland) is going to open a new kinematic regime in high energy proton-proton collisions, allowing searches for new physics beyond the Standard Model and for the missing piece of the Standard Model, the Higgs boson. It has been designed to achieve proton-proton collisions at 14 TeV centre-of-mass (c-o-m) energy (7 TeV per beam) at a luminosity of L = 10^34 cm−2 s−1. Four experiments have been installed at the four interaction points: two multipurpose experiments, ATLAS and CMS, and two dedicated experiments, LHCb for B physics and ALICE for heavy-ion collisions.
1.1. Expectations for 2010
After the September 2008 incident, which occurred one week after the first events had been collected in the experiments, the LHC restarted its operation in the fall of 2009, and the first collision event at the injection energy (450 GeV per beam) was recorded on 23 November 2009 in ATLAS. The data taking at this injection energy continued until mid-December and the collected data have been used for commissioning the detector (see Section 2.2), in addition to the extensive use of cosmic muons already registered in 2008 and 2009. A short run at 2.36 TeV centre-of-mass energy was also achieved in December 2009.a
a On 30 March, the first LHC collisions at 7 TeV centre-of-mass energy were registered in all detectors and, by the time of writing of this paper, about 300 µb−1 of high-quality data had been collected in ATLAS.
Following the study of the faulty electrical connections between LHC magnets, it has been decided to run the LHC accelerator at reduced energy (3.5 TeV per beam) during the years 2010 and 2011, before a long shutdown foreseen in 2012 for a complete repair of the connections. The integrated luminosity in 2010 is expected to be of the order of 100 to 200 pb−1, which corresponds to the physics prospects discussed in this paper. The integrated luminosity for 2011 is expected to be as large as 1 fb−1 per experiment.
1.2. Impact of the reduced beam energy
At the time of this paper, most simulations have been done assuming √s = 14 TeV and some with √s = 10 TeV. The expectations for 7 TeV can be inferred from the studies at 14 TeV by looking at the ratios of parton luminosities at different center-of-mass energies, calculated as a function of the mass of the object to be produced in pp (or ppbar) collisions. The calculation of these ratios has been performed by several authors (Refs. 1, 2) and some results are shown in Fig. 1. For instance, assuming
Fig. 1. Ratio of expected production rates at different collision energies derived from the ratio of so-called parton luminosities as a function of the mass of the object to be produced. The left figure compares the rates at different LHC energies (7, 10, 14 TeV) for ttbar and heavy gauge boson W' production (source: J. Stirling). The right figure compares the W' production rates at the Tevatron and the LHC at 7 TeV (source: C. Quiggs).
that the bulk of ttbar production proceeds via gluon-gluon fusion, one expects a reduction of the production rate by a factor of 5 when running at 7 TeV instead of 14 TeV for a given luminosity. The loss is larger for a heavier object like an extra gauge boson W' (a factor of 10 for MW' = 1.5 TeV). Compared to the Tevatron
at Fermilab, the gain for W' production is still rather large (a factor of 150 for MW' = 1 TeV), despite the advantage of ppbar collisions, where the production proceeds mainly via qqbar interactions.
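In practice, this rescaling amounts to dividing an expected event yield from a 14 TeV study by a parton-luminosity ratio read off from curves such as those in Fig. 1. A minimal sketch follows; the ratio values are the approximate factors quoted in the text (∼5 for ttbar, ∼10 for a 1.5 TeV W'), taken as external inputs rather than computed from parton distribution functions, and the 14 TeV yields are purely illustrative assumptions.

# Minimal sketch: rescale an expected event yield from a 14 TeV study
# to 7 TeV using a production-rate ratio taken from parton-luminosity
# curves (e.g. Fig. 1).  The ratios and 14 TeV yields below are
# illustrative assumptions, not a PDF calculation.

def yield_at_7tev(n_events_14tev, rate_ratio_14_to_7):
    """Expected yield at 7 TeV for the same integrated luminosity."""
    return n_events_14tev / rate_ratio_14_to_7

print(yield_at_7tev(1000, 5))   # ttbar: rate lower by ~5  -> ~200 events
print(yield_at_7tev(50, 10))    # 1.5 TeV W': lower by ~10 -> ~5 events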
2. The ATLAS Detector
2.1. Detector description and status
ATLAS3 is a general purpose detector designed to exploit the full physics program of the LHC. It has full azimuthal coverage and extends over most of the polar angle θ. Surrounding the interaction point and the beam pipe there is a tracking system consisting of silicon pixel and strip detectors surrounded by a straw tube tracker with transition radiation detection capability for improved electron identification. It is located inside a strong solenoidal magnetic field of 2 T, with coverage in pseudorapidity extending up to |η| = 2.5. Outside the tracking system, electromagnetic and hadronic calorimeter systems are found, extending the coverage up to |η| < 5. These are surrounded by a dedicated muon detection system containing a separate toroidal magnet system with coverage up to about |η| < 2.7. A complex multi-level trigger system provides the necessary reduction of the bunch crossing rate, while keeping excellent efficiency for signals of interest. More details on the sub-detector performances can be found in Fig. 2. Expected global detector performances and physics reach for √s = 14 TeV are presented in Ref. 4.
Fig. 2. Overview of the ATLAS detector with a short description of the sub-detectors.
The detector status at the start of the collision data taking in November and December 2009 is very satisfactory, as for most sub-detectors the fraction of functioning channels stays above 99%.
2.2. First results from collisions in 2009
During the first collision runs in December 2009, about 500 000 minimum bias events at √s = 900 GeV have been collected with stable beam conditions, which already allows a rather extensive test of the inner tracker and calorimeter performances with low momentum hadrons. Only a handful of low momentum muons could traverse the calorimeters (see Fig. 3 for an event display of a candidate di-muon). Some of the results of the analysis of these first runs at 900 GeV centre-of-mass energy are shown in Fig. 4, including a comparison with the simulation of minimum bias events. It shows that, at least in the very low energy range, the jet production and the missing transverse energy measurement are already under control. The inner tracker is also performing well (see the K0S peak), with an alignment of the silicon components already close to the target value, especially for the barrel part, thanks to the extensive studies with cosmic muons.
Fig. 3. Display of a candidate di-muon event in the forward region (left) and of a two-jet event recorded during the short run at √s = 2.36 TeV (right).
3. Physics Goals for 2010
A threefold approach (not fully sequential) is considered for the analysis of the first data at √s = 7 TeV. The following activities will proceed somewhat in parallel:
• Detector and reconstruction understanding with collision data. Going beyond the extensive commissioning with cosmics, it will use well-known physics samples like:
Fig. 4. First results from the 900 GeV collision runs. The data are compared with the simulation of minimum bias events (normalized to the same number of entries). The top left figure shows the spectrum of the "uncorrected" (electromagnetic scale) jet transverse energy (jets made from clusters with the anti-kt algorithm with D = 0.4). The top right figure displays the invariant mass distribution of two-track secondary vertices showing the K0S peak. The bottom figures show the distribution of missing transverse energy (left) and the resolution as a function of the total transverse energy (right).
– Z → ee, µµ for inner tracker studies, electromagnetic calorimeter (ECAL) calibration, muon chamber calibration and alignment, etc.
– ttbar → blν bjj for the jet scale from W → jet-jet, b-tag performance, etc.
• "Re-discovery" of the Standard Model (Section 4):
– Establish what pp collisions really look like at the LHC
– Followed later on by precision measurements (e.g. top and W mass)
• Search for new physics beyond the Standard Model (Section 6).
To get a flavour of what can be achieved with an integrated luminosity of only 100 pb−1, the following table gives the number of expected events (including trigger efficiency and analysis cuts) for various physics channels for √s = 10 TeV. At 7 TeV, one has to scale down these numbers by a factor between 2 and 4, depending on the mass of the produced object.
4. Standard Model "Rediscovery"
These studies will be the first to be done with the early data (less than 10 pb−1 of integrated luminosity). One can list the following studies, ordered with roughly
increasing amount of required integrated luminosity:
• Minimum bias (MB) properties: The first MB studies, e.g. track multiplicity (see Fig. 5) and pT spectrum, will only need a few 10 µb−1, but the absolute cross section measurements will have to wait for a good estimate of the actual collider luminosity, which may take a few months.
Fig. 5. Expected evolution of the central charged particle multiplicity in ppbar interactions as a function of the colliding energy for different Monte-Carlo simulators. The existing data points used to tune the simulators are also shown for comparison.
• Underlying event structure: Somewhat connected to the MB studies, this analysis of the proton debris in hard scattering processes is very important for the control of the analysis cuts in most physics studies.
• Jet production: Already with a few pb−1, ATLAS will collect enough statistics to make significant contributions on the following topics:
– Di-jet differential cross-section as a function of jet pT
– Di-jet mass and angular distribution, jet shapes
The challenge is the determination of the jet energy scale, which is the key for understanding the large-pT behaviour of the exponentially falling di-jet cross-section. A departure from the SM prediction at large pT could be a sign of new physics (e.g. quark sub-structure). This determination, mainly based on the study of pT balance in γ-jet events, may need to wait for a larger integrated luminosity before reaching the required level of accuracy.
• Drell-Yan lepton pair production
– The study of low mass resonances (J/ψ, Υ) will start with the first pb−1.
• W/Z production (see Fig. 6).
Fig. 6. Left: Invariant mass distribution of the two electrons in di-electron events, showing the Z peak (simulation for 50 pb−1 at 10 TeV c-o-m energy). Right: Transverse mass distribution (built out of the muon pT and the missing transverse energy) of events with a single µ + ETmiss.
When the integrated luminosity reaches ∼10 pb−1, the Z → ee and Z → µµ events will be extensively used for the detector calibration and alignment. On the physics side, the following topics will be studied:
• µ+/µ− asymmetry distribution (which provides a handle on the u, d quark parton distribution functions (pdf))
• W mass, with an accuracy σ ∼ 150 MeV for an integrated luminosity of 200 pb−1
• Di-boson production for the study of gauge boson self-couplings
• Top quark production:
The top signal (it will be the first to be observed in Europe!) will be observable in the early days even with no b-tagging and with a simple analysis. For the golden-channel lepton-jets (ttbar → bW bW → blν bjj, see Fig. 7), one expects ∼350 events in the µ channel (total detection efficiency ∼4%) with 100 pb−1 at √s = 10 TeV (250 pb−1 at 7 TeV), which is comparable to the Tevatron sample in 2009. The top quark events contain all relevant signatures (e, µ, jets, ETmiss, b-jets) and they form an excellent sample for e.g.:
• commissioning the b-tagging,
• setting the jet energy scale using the W → jj peak.
The cross-section determination will make use of the more background-free di-lepton channel.
Fig. 7. Left: Topology of ttbar events in the single lepton channel. The analysis cuts are given on the figure. Right: Three-jet invariant mass distribution for the sample of candidate ttbar events.
5. Higgs Search
The Tevatron experiments have excluded a domain of Higgs masses around mH = 165 GeV with ∼5 fb−1 per experiment5 using the most sensitive channel in this mass range, H → WW → lνlν. They expect to collect about 8-9 fb−1 by the end of 2010. ATLAS will profit from a much larger cross section (e.g. a factor ∼30 for gg → H at MH = 170 GeV) and a better signal/background ratio. Preliminary studies show that to reach the Tevatron sensitivity with √s = 7 TeV p-p collisions, ATLAS needs at least ∼500 pb−1 of data (∼200 pb−1 at 10 TeV). Regarding standard Higgs boson discovery in this channel (Fig. 8), a discovery in the MH range [140-180 GeV] with √s = 10 TeV could be obtained with a 5σ significance with at least 2 fb−1 (e.g. 1 fb−1 per experiment). With √s = 7 TeV, the required luminosity is about 4 fb−1. Hence such a discovery is highly improbable in 2010 and even 2011.
Fig. 8. Higgs boson discovery reach with 1 fb−1 as a function of the p-p collision energy.
6. New Physics Beyond the Standard Model?
Finding a deviation from SM predictions may be easy. Proving that it is real new physics is much harder. We shall need to care about:
(1) The detector response: Is it really understood? This requires a good control of efficiencies, fake rates, energy/momentum scales, non-Gaussian resolution, dead channel effects . . .
(2) Is the Standard Model background under control? This requires a good control of cross-sections, kinematic distributions, underlying event, . . .
As much information as possible should be obtained from the data themselves. Here are examples of topics for the first few 100 pb−1 which will be discussed in this chapter:
• Compositeness (from the di-jet ET spectrum). This analysis will not be limited by statistics but requires a very good control of the jet energy scale over the full pT range.
• Supersymmetry
• New gauge bosons and high mass resonances (di-leptons, di-jets, ttbar . . . )
• Extra dimensions, black hole search
6.1. High mass di-lepton resonances
A narrow Z' resonance decaying into two leptons is predicted in many GUT extensions of the SM (for a recent review on present bounds see Refs. 6, 7). Such resonances are also
predicted in models with extra dimensions (e.g. KK excitations of vector bosons in Randall-Sundrum models).
Fig. 9. Left: Expected di-electron mass spectrum with √s = 14 TeV collisions (Z'χ model). Right: Invariant µµ mass distribution for several misalignment scenarios.
Expected signals for a Z'χ of mass MZ' = 1 TeV with √s = 14 TeV collisions and 100 pb−1 integrated luminosity are shown in Fig. 9 for the di-electron and di-muon channels, including the dominant Drell-Yan background in the former case. The peak in the di-muon channel is mainly affected by the quality of the muon spectrometer alignment (a 100 µm precision has already been achieved, with a final goal of 40 µm). The discovery reach for various models is shown in Fig. 10, which gives as a function of the Z' mass the luminosity required to get a signal with 5σ significance.
Fig. 10. Z' discovery potential (5σ) for √s = 14 TeV and for different theoretical models.
Even with √s = 10 TeV and 200 pb−1, di-lepton resonances can be discovered up to a mass of 1.5 TeV (1 TeV with ∼400 pb−1 at √s = 7 TeV), beyond the Tevatron exclusion reach.
6.2. W' discovery potential
Heavy charged bosons W' often arise in models with extra SU(2) gauge group(s). They can also arise in Kaluza-Klein models with an SU(2) gauge sector in the bulk. The differential production cross-sections as a function of the transverse mass (after analysis cuts) are shown in Fig. 11 for the electron and muon channels and for two possible W' masses (in a minimal SSM model and with √s = 14 TeV).
Fig. 11. Left: W’ differential production cross-section (after cuts) for the electron channel. Right: W’ differential production cross-section (after cuts) for the muon channel.
Fig. 12. Required luminosity for W' discovery with 5σ significance as a function of the W' mass.
The required luminosity as a function of the W' mass for a 5σ discovery at √s = 14 TeV is given in Fig. 12, including systematic uncertainties (generators (higher-order effects, pdf), energy scale and resolution of leptons and jets). The discovery potential
is reduced for smaller √s: one expects a reach of only ∼1.5 TeV with 500 pb−1 at 7 TeV.
6.3. Supersymmetry
R-parity conserving SUSY could be found rather quickly if it is actually at the ∼1 TeV mass scale. For instance, due to the rather large cross-section for squark-squark, gluino-squark and gluino-gluino production predicted for the benchmark SU3 SUGRA point, for an integrated luminosity of 200 pb−1 (with √s = 10 TeV) one expects ∼100 events in the spectacular golden di-lepton channel (Fig. 13).
Fig. 13. Cascade decays of a neutralino with a two-lepton final state (top left). Invariant mass spectrum of the two leptons, linked to the neutralino mass difference (bottom left). Event display with a 2e final state (right).
Although no longer favored by many theorists, mSUGRA (minimal SUperGRAvity) is a convenient framework to account for SUSY breaking (it has only 4 free parameters + 1 sign) and for assessing the discovery potential for R-conserving SUSY with the neutralino χ01 as the Lightest Supersymmetric Particle (LSP, Dark Matter candidate). From the common scalar mass m0 and gaugino mass m1/2 at the GUT scale, Renormalization Group equations can be used to predict the evolution of these masses down to the electroweak scale (Fig. 14, left). The search strategy is largely motivated by the cosmological constraints (e.g. from WMAP) on the DM relic density.8 Taking into account LEP and Tevatron results, benchmark points are defined in the (m0, m1/2) plane as shown in Fig. 14, right. Other SUSY breaking scenarios lead to a different phenomenology at the electroweak scale. For example, Gauge Mediated SUSY Breaking may lead to models:
• with a gravitino LSP and a χ01 NLSP (Next-to-LSP)
• (or) with a gravitino LSP and a meta-stable slepton NLSP
Fig. 14. Left: Renormalization Group evolution of scalar and gaugino masses in a typical SUGRA. Right: Allowed region (in green) in the (m0 ,m1/2 ) plane of mSUGRA theories from cosmological constraints on the DM relic density and location of the benchmark study points SUn.
Heavy meta-stable particles with low velocity can be produced, which appear in the detector as penetrating tracks with high pT and low β. Experimentally, many channels (n jets + m leptons) are being investigated, in particular:
• the jets + ETmiss channel, which has potentially the highest reach
• the 1-lepton + jets + ETmiss channel, which is more robust against background uncertainties
The study requires a very good understanding of backgrounds, in particular fake missing transverse energy coming from instrumental effects (noise, cracks, ...).
Fig. 15. 5σ discovery reach for 200 pb−1 in the mSUGRA (m0, m1/2) plane for two values of tanβ.
When interpreted within the mSUGRA model and assuming a luminosity of 200 pb−1 and √s = 10 TeV, the 5σ discovery reach in the (m0, m1/2) plane is shown in Fig. 15 for two values of tanβ.
The discovery reach extends to masses of ∼750 GeV. With √s = 7 TeV, it still goes well beyond the expected exclusion region at the Tevatron (∼400 GeV).
7. Extra-Dimension Models
7.1. Large extra dimensions
In this class of models (e.g. the ADD model9), only gravity can propagate in n Large flat Extra-Dimensions (LED), which could be as large as a few µm. SM particles are restricted to a 3D brane (Fig. 16, left). The fundamental scale is not planckian: MS = MPl(4+n) ∼ TeV. From present experimental constraints:
• 2 < n < 7
• MS > 1 TeV from the Tevatron
Fig. 16. Left: quark/gluon scattering in LED models with SM particles restricted to a 3D brane. Right: Expected di-photon invariant mass distribution for ADD models with various numbers n of LED (simulation for √s = 14 TeV).
On the experimental side, the existence of LED could be inferred from the following signatures:
• Real graviton emission, in association with a vector boson V or a jet (Fig. 16, top left):
– It appears in the detector as (mono)jets + missing ET, or V (e.g. gluon) + missing ET.
– Not for the first few 100 pb−1
• Deviations in virtual graviton exchange (Fig. 16, bottom left):
– e.g. an excess above the di-photon or di-lepton continuum (Fig. 16, right).
– MS discovery reach for 100 pb−1 at √s = 14 TeV: 3.0 - 4.3 TeV
7.2. Universal Extra-Dimensions
In the class of UED models,10 the SM fields can propagate into the 4 + d dimensions, the extra dimensions being compactified at a scale 1/R > 300 GeV. Similarly to SUSY, each SM field has a Kaluza-Klein partner (but with the same spin). Momentum conservation in the universal dimensions implies the conservation of a KK quantum number (the KK parity). Similarly to R-parity conserving SUSY, KK particles are produced in pairs, with a Lightest KK Particle (LKP, Dark Matter candidate). The experimental analysis, which is similar to that for R-parity conserving SUSY, leads to the expected discovery reach for 200 pb−1 displayed in Fig. 17, assuming d = 1. A signal of Universal Extra Dimensions could be discovered if 1/R < 700 GeV.
Fig. 17. Discovery potential of UED with d = 1 for the three experimental channels studied in ATLAS, assuming an integrated luminosity of 200 pb−1 and √s = 10 TeV.
7.3. Micro black holes
In models with Large Extra Dimensions or with a strongly warped ED, the fundamental scale of gravity can be as low as the electroweak scale. If the Planck scale is low enough, black holes (BH) could be produced at the LHC,11 leading to spectacular events with very high multiplicity (challenging for the high level trigger). Black holes would decay democratically to all particles of the SM and are characterized by a large number of high energy and high transverse momentum objects. The primary SM background comes from states with high multiplicity or high energy jets. The analysis cuts for selecting the black hole signal are as follows: Σ|pT| > 2.5 TeV, 1 lepton with p > 50 GeV.
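These two cuts are simple enough to write down explicitly; the sketch below assumes the event is summarized by a list of reconstructed-object transverse momenta and a list of lepton momenta (all in GeV), which is a hypothetical data layout rather than the ATLAS event model.

# Minimal sketch of the black-hole candidate selection quoted in the text:
# scalar sum of transverse momenta above 2.5 TeV and at least one lepton
# with momentum above 50 GeV.  The event layout is an illustrative assumption.

def is_black_hole_candidate(object_pts, lepton_momenta):
    sum_pt = sum(abs(pt) for pt in object_pts)           # GeV
    has_hard_lepton = any(p > 50.0 for p in lepton_momenta)
    return sum_pt > 2500.0 and has_hard_lepton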
Assuming a flat ADD extra dimension scenario with a Planck scale MS = 1 TeV, the required integrated luminosity (with √s = 14 TeV) for a BH discovery is shown in Figure 18 as a function of the BH mass threshold. An integrated luminosity of 1 fb−1 would allow a discovery to be made even if the production threshold was 8 TeV, although the accessible mass threshold would be largely reduced for √s = 7 TeV (work in progress).
Fig. 18. BH discovery potential expressed as the required luminosity as a function of the black hole mass threshold. Error bars reflect statistical uncertainties only. Study done for √s = 14 TeV.
8. Conclusions
The first LHC physics runs will take place in 2010 with a c-o-m energy √s = 7 TeV and an integrated luminosity of a few hundred pb−1 (and hopefully 1 fb−1 in 2011). ATLAS is ready and well prepared to exploit these initial data thanks to an extensive commissioning (e.g. using muons from cosmic rays). A threefold approach to initial data analysis has been chosen, with the following activities proceeding somewhat in parallel:
• refine the detector understanding with collision data
• establish the properties of pp collisions at 7 TeV – 're-discovery' of the Standard Model
• search for new phenomena and surprises
If nature is kind to us and the LHC luminosity matches our hopes, discoveries are possible beyond the present and future Tevatron exclusion limits for the following physics topics:
• R-parity conserving SUSY
• High mass di-lepton resonances and extra gauge bosons
• Extra-dimension and black hole searches.
References
1. J. Stirling, http://projects.hepforge.org/mstwpdf/plots/plots.html
2. C. Quiggs, arXiv:0908.3660v2 [hep-ph], 8 Sep 2009.
3. G. Aad et al. (ATLAS Collaboration), JINST 3 (2008) S08003. H. Müller and B.D. Serot, Phys. Rev. C52 (1995) 2072.
4. G. Aad et al. (ATLAS Collaboration), CERN-OPEN-2008-020, arXiv:0901.0512 [hep-ex].
5. T. Aaltonen et al. (CDF and DØ Collaborations) (2010), arXiv:1001.4162.
6. E. Salvioni et al., arXiv:0909.1320v1 [hep-ph], 7 Sep 2009.
7. T. Aaltonen et al., Phys. Rev. Lett. 102 (2009) 091805 [arXiv:0811.0053].
8. J. Ellis et al., hep-ph/0303043.
9. N. Arkani-Hamed et al., Phys. Lett. B 429 (1998) 263 [hep-ph/9803315].
10. T. Appelquist et al., Phys. Rev. D 64 (2001) 035002.
11. S. Hossenfelder, arXiv:hep-ph/0412265.
SEARCH FOR DARK MATTER CANDIDATES WITH THE ATLAS DETECTOR AT THE LHC
RACHID MAZINI, for the ATLAS Collaboration
Institute of Physics, Academia Sinica
CERN, CH-1211 Geneva 23, Switzerland
∗ E-mail:
[email protected]
Supersymmetric (SUSY) models with R-parity conservation provide a perfect candidate for Dark Matter searches: the Lightest Supersymmetric Particle, or LSP, which will be actively searched for with the ATLAS detector at the Large Hadron Collider (LHC). Several SUSY scenarios have been tested and simulation-based results with the first few fb−1 of ATLAS data are presented. Extension of the ATLAS discovery reach to Dark Matter searches shows that such measurements can be used to constrain the underlying SUSY LSP scenarios and extract Dark Matter properties.
Keywords: ATLAS; LHC; Supersymmetry; Dark Matter; LSP; WIMP.
1. Introduction
Astronomical observations have hinted at the existence of non-baryonic matter in the Universe, the so-called Dark Matter. They have shown that it constitutes about 90% of the matter density of the universe, while baryonic matter contributes only around 10%. Phenomena implying the presence of Dark Matter include the rotational speeds of galaxies, gravitational lensing of background objects by galaxy clusters and the behaviour of the Bullet cluster. The latter provides strong evidence that Dark Matter must be a Weakly Interacting Massive Particle (WIMP). Furthermore, precision measurements of the power spectrum fluctuations in the cosmic microwave background from WMAP1 strongly disfavour warm Dark Matter. All these observations do not answer many fundamental questions about Dark Matter. Do fundamental particles comprise the bulk of the Dark Matter? If so, is there a symmetry from which these particles originate? How and when were these particles produced? Many experiments are attempting to answer these questions. Astrophysical experiments attempt to detect Dark Matter by searching for Dark Matter annihilation processes in the galaxy using land-based gamma ray telescopes or space-based satellites. Particle physics experiments aim to create and study Dark Matter in the laboratory. Combining measurements from all these experiments could finally help to explain the greatest mystery of modern physics. New particle physics experiments have started taking data at the Large Hadron Collider (LHC) at CERN, in Geneva. Since March 2010, the LHC has been delivering
proton-proton collisions at a center-of-mass (CM) energy of 7 TeV, with the ultimate aim of doubling this energy in the coming years. The beam intensity will also be gradually increased to reach the design luminosity of 10^34 cm−2 s−1. Thus, the LHC offers a unique opportunity to investigate physics at the TeV scale and test the well-established Standard Model (SM) of fundamental interactions beyond the current limits. Unanswered questions concerning the SM, such as the mechanism of electroweak symmetry breaking, the origin of particle masses and their hierarchy, etc., would be addressed with new experimental data. New physics that can manifest itself at the TeV scale could be discovered. One of the most popular extensions of the Standard Model is Supersymmetry. It postulates a new fundamental symmetry between bosons and fermions and offers solutions to several problems of the SM. This new symmetry leads to new particles, not yet observed, as all existing SM particles would have partners called sparticles. The most obvious feature of SUSY is that none of the sparticles have been discovered yet, and hence SUSY must be a broken symmetry with the masses of the super-partners much larger than those of their SM counterparts. The mechanism chosen for the spontaneous breaking of SUSY defines the SUSY model and its phenomenology. A common phenomenological approach is to assume that SUSY exists in nature as the Minimal Supersymmetric Standard Model (MSSM). This model breaks the symmetry by including soft mass terms for the SUSY particles in the MSSM multiplets. These terms contain a vast number of free parameters (∼ 100) that weaken the predictive power of the model. To obtain phenomenologically viable points in the large MSSM parameter space, constrained models such as minimal supergravity (mSUGRA) are used. In this model, SUSY breaking is mediated by gravitational interactions and the SUSY phase space can be described by five soft parameters at the unification scale:2 the Higgs field mixing µ, the universal scalar mass m0, the universal gaugino mass m1/2, the universal trilinear coupling A0 and tan β = υ1/υ2, the ratio between the vacuum expectation values of the two Higgs doublets. From these parameters, the mass spectrum of the superpartners, the cross section of their production as well as the branching ratios of their decays can be calculated. SUSY models also introduce a new quantum number, R-parity, in order to conserve baryonic and leptonic quantum numbers and protect the proton lifetime. Under R-parity, SM particles are even and SUSY particles are odd. In this proceeding, only models with exact R-parity conservation are considered. This has two important phenomenological consequences. First, sparticles can only be produced in pairs, and second, the lightest SUSY particle (LSP) is stable. Since no exotic strong or electromagnetic bound states (isotopes) have been observed, the LSP should be neutral and colorless, making it a natural candidate for a WIMP. The detector signature of such an LSP is similar to that of a heavy neutrino. It would escape direct detection, resulting in the characteristic feature expected for SUSY events: an imbalance of the transverse energy measured in the detector (ETmiss). This proceeding will show the discovery potential of MSSM predictions with the ATLAS detector at the LHC. It will also present as an example how measurements
at the LHC can be used to calculate the relic density. Different regions of the MSSM parameter space have different LSPs, with possible candidates including the gluino, sneutrino, gravitino and the lightest neutralino. The last particle in this list is the subject of the majority of studies, as it presents a well defined signature and relatively low SM background. The main problem is to disentangle the underlying model from the observations.
2. SUSY Signatures and Search Strategy in ATLAS
At the LHC, squarks and gluinos are produced via strong processes, hence their production will have a large cross section. For example, with 100 pb−1 of data, one could expect about 100 events with squarks of 1 TeV mass. Direct production of charginos, neutralinos and sleptons occurs via electroweak processes, hence the production cross sections are much smaller. They are produced much more abundantly in squark and gluino decays. The strongly interacting sparticles (squarks, gluinos), which dominate the production, are much heavier than the weakly interacting and SM particles, giving long decay chains to the LSP and large mass differences between SUSY states. Consequently, searches for Supersymmetry at the LHC concentrate on cascade decays, which will produce events with many jets, leptons and a large ETmiss, making it relatively easy to extract SUSY signals from the SM background.
In ATLAS, the choice of the SUSY benchmark points in the mSUGRA model was motivated by the cosmological constraints. Table 1 summarizes the points relevant for the discussion in Ref. 3. Benchmark points SU1 and SU8.1 are two different variants of the Coannihilation region, where the χ̃01 annihilates with a near-degenerate slepton l̃. SU2 is the Focus point region near the boundary where µ2 < 0. This is the only region in mSUGRA where the χ̃01 has a high higgsino component, thereby enhancing the annihilation cross-section for processes such as χ̃01 χ̃01 → WW. SU3 is the Bulk region point, where LSP annihilation happens through the exchange of light sleptons. SU4 has been dubbed the Low mass point. It lies in the Bulk region but is characterized by the lowest allowed SUSY masses, near to the Tevatron bound.6 Finally, SU6 is in the Funnel region, where 2mχ̃01 ≈ mA. Since tan β ≫ 1, the width of the pseudoscalar Higgs boson A is large and its decay is dominated by the τ lepton channel.

Table 1. Parameters of mSUGRA benchmark points chosen by ATLAS.

point    m0 [GeV]  m1/2 [GeV]  A0 [GeV]  tan β  sign µ  σLO [pb]
SU1         70        350         0       10      +       8.15
SU2       3550        300         0       10      +       5.17
SU3        100        300      -300        6      +      20.85
SU4        200        160      -400       10      +     294.46
SU6        320        375         0       50      +       4.47
SU8.1      210        360         0       40      +       6.48
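For reference, one row of Table 1 can be encoded directly; the sketch below is only a convenient illustrative container for the SU3 parameters, not part of any ATLAS software.

# A simple container for one mSUGRA benchmark point from Table 1.
# Values are the SU3 row of the table; the class itself is purely an
# illustrative way to carry the mSUGRA parameters.

from dataclasses import dataclass

@dataclass
class MSUGRAPoint:
    name: str
    m0: float           # universal scalar mass [GeV]
    m12: float          # universal gaugino mass m_1/2 [GeV]
    a0: float           # universal trilinear coupling A_0 [GeV]
    tan_beta: float     # ratio of Higgs vacuum expectation values
    sign_mu: int        # sign of the Higgs mixing parameter mu
    sigma_lo_pb: float  # leading-order SUSY cross section [pb]

su3 = MSUGRAPoint("SU3", m0=100, m12=300, a0=-300,
                  tan_beta=6, sign_mu=+1, sigma_lo_pb=20.85)

# Expected number of SUSY events for 1 fb^-1 (= 1000 pb^-1),
# before any acceptance or efficiency:
print(su3.sigma_lo_pb * 1000)   # ~ 2.1e4 events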
The search strategy in ATLAS starts with looking at deviations from SM predictions for some SUSY-like signatures. If an excess of events with respect to the SM is observed and is compatible with R-parity conserving SUSY, the procedure is to confirm such a model and to study the relation to Dark Matter according to the following:
• Confirm deviations from the Standard Model with different signatures, for example in the multi-jet plus ETmiss plus variable number of leptons channels.
• Is it SUSY? If so, establish the SUSY mass scale using inclusive variables, e.g. effective mass distributions defined as
Meff = Σ_{i=1}^{n} pT^{jet,i} + Σ_{i=1}^{m} pT^{lept,i} + ETmiss,
where n and m are the numbers of jets and leptons in the event.
This is relevant to Dark Matter searches as it allows one to verify whether the discovered signal provides a possible Dark Matter candidate, and to use a model-independent calculation of the LSP mass and compare it with direct searches.
• Which SUSY flavor is it? Determine model parameters from particular decay chains and use kinematics to determine mass combinations. Extracted parameters could be used in a model-dependent calculation of the Dark Matter relic density.
3. The ATLAS Detector at the LHC
The LHC accelerating complex is designed to collide proton beams at a CM energy of 14 TeV and an instantaneous luminosity of 10^34 cm−2 s−1. LHC bunches spaced by 25 ns will contain ≈ 10^11 protons each. In the initial phase, however, it is planned to operate the LHC at reduced CM energy and beam intensities. In the years 2010 and 2011, the LHC is scheduled to operate at 7 TeV CM energy and to deliver ∼1 fb−1 of integrated luminosity per experiment.
ATLAS4 is one of the two general purpose experiments at the LHC. It provides tracking, particle identification and hermetic calorimetry in the 4π solid angle. Charged particle tracking is provided by the Inner Detector (ID), covering the pseudorapidity range |η| < 2.5 and immersed in a solenoidal magnetic field of 2 T. Looking from inside out, the ID consists of three subsystems: the silicon pixel detector, the silicon strip detector (SCT) and the straw drift tube tracking device (TRT). The latter is equipped with an additional radiator allowing for e/π separation via detection of transition radiation. The ID provides highly efficient and precise tracking over the full η coverage with a momentum resolution of σ/pT ∼ 3.4 × 10^-4 × pT (GeV) ⊕ 0.015. ATLAS is equipped with Pb-LAr accordion electromagnetic calorimeters with e/γ identification and triggering capabilities and an energy resolution of ∼1% at 100 GeV and 0.5% at 1 TeV. These are surrounded by the Fe/scintillator (central region) and Cu/W/LAr (forward regions) hadronic calorimeters. They provide hermeticity, trigger, jet reconstruction and missing transverse energy (ETmiss) measurements down to |η| < 5 with an energy resolution σ/ET ∼ 50%/√E ⊕ 0.003.
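The ⊕ symbol in the resolution parametrizations above denotes addition in quadrature. A short sketch evaluating the two quoted formulas at a few representative values follows; the parametrizations are taken directly from the text and are used here purely numerically.

# Evaluate the resolution parametrizations quoted above; "⊕" means
# addition in quadrature.  Coefficients are those given in the text.

import math

def id_pt_resolution(pt_gev):
    """Inner-detector relative momentum resolution: 3.4e-4 * pT (+) 0.015."""
    return math.hypot(3.4e-4 * pt_gev, 0.015)

def calo_et_resolution(et_gev):
    """Calorimeter relative energy resolution: 50%/sqrt(E) (+) 0.003."""
    return math.hypot(0.50 / math.sqrt(et_gev), 0.003)

for pt in (10.0, 100.0, 1000.0):
    print(f"sigma(pT)/pT at {pt:6.0f} GeV: {id_pt_resolution(pt):.3f}")
for et in (20.0, 100.0, 500.0):
    print(f"sigma(ET)/ET at {et:6.0f} GeV: {calo_et_resolution(et):.3f}")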
Muon trigger and momentum measurement with a resolution < 10% up to Eµ ≈ 1 TeV is assured by large air-core toroids with gas-based drift chambers covering the solid angle of |η| < 2.7, surrounding the calorimeter system. ATLAS features three levels of trigger (only the first one being hardware-based), which reduce the initial 40 MHz bunch crossing rate down to ∼200 Hz of recorded physics events.
4. SUSY Searches in ATLAS
The ATLAS inclusive SUSY search strategy was developed using a twofold approach. Detailed studies have been used to define inclusive search channels using specific SUSY benchmark points: SU3, SU4, etc. These benchmark points and all relevant Standard Model background processes were passed through a detailed simulation of the detector. The various search channels are exclusive with respect to each other, simplifying the procedure of combining the results. The insight gained from these detailed studies has been applied to several scans over subsets of the SUSY parameter space. Given the large number of signal points, a fast, parametrized simulation has been used. The goal is to verify that the inclusive search channels provide sensitivity to a wide range of SUSY models.
4.1. Inclusive searches
All inclusive search channels are based on the generic SUSY detector signature: ETmiss + several high transverse momentum (pT) jets + a certain number of leptons (electrons or muons). The main ATLAS inclusive SUSY search modes are classified by the lepton requirement as follows:
• Zero-lepton mode: The presence of multiple jets together with large ETmiss is the least model-dependent SUSY signature. At least four jets are demanded to reduce the background from QCD and W/Z+jets processes.
• One-lepton mode: Requiring one lepton in addition to multiple jets and ETmiss greatly reduces the QCD multi-jet background and gives better control over the remaining backgrounds (tt̄ and W+jets production).
• Two-lepton mode: Demands dileptons + multiple jets and ETmiss. Opposite-sign leptons can arise from neutralino decays. They should be of the same flavor, since neutralinos should not induce significant rates of µ → eγ and other lepton-flavor violating interactions at one loop. Same-sign dileptons can be common in SUSY because the gluino is a self-conjugate Majorana fermion. In Standard Model processes, however, the rate for same-sign dileptons is small.
• Three-lepton mode: The trilepton signal can arise from direct gaugino production or from squark and gluino decays. Two scenarios have been studied: 3 leptons + 1 very high-pT jet, and 3 leptons + ETmiss.
• Tau mode: SUSY models generically violate lepton universality. In some models τ decays can dominate. This search mode selects hadronic τ decays,
since leptonic decays are indistinguishable from prompt leptons. In addition, four jets and ETmiss are required.
• b-jet mode: Light b̃ and t̃, together with enhanced heavy-flavor production due to the Higgsino coupling, lead to many b quarks in SUSY decay chains. This feature can be exploited to suppress the QCD background. This search mode requires four jets, out of which at least two are b-tagged, and ETmiss.
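Tying these search modes to the effective-mass variable defined in Section 2, the sketch below computes Meff from lists of jet and lepton transverse momenta and the missing transverse energy, and applies a schematic zero-lepton requirement; the numerical thresholds are placeholders and not the thresholds of the ATLAS analyses.

# Sketch: effective mass as defined in Sec. 2 and a schematic zero-lepton
# preselection (>= 4 jets, no leptons).  All thresholds are placeholders.

def effective_mass(jet_pts, lepton_pts, et_miss):
    """M_eff = sum of jet pT + sum of lepton pT + ETmiss (all in GeV)."""
    return sum(jet_pts) + sum(lepton_pts) + et_miss

def passes_zero_lepton_mode(jet_pts, lepton_pts, et_miss,
                            jet_pt_min=50.0, et_miss_min=100.0):
    hard_jets = [pt for pt in jet_pts if pt > jet_pt_min]
    return len(hard_jets) >= 4 and not lepton_pts and et_miss > et_miss_min

# Example event (illustrative numbers only):
jets, leptons, met = [320.0, 180.0, 95.0, 60.0], [], 240.0
print(effective_mass(jets, leptons, met))           # 895.0 GeV
print(passes_zero_lepton_mode(jets, leptons, met))  # True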
Fig. 1. Effective mass distribution for 0-lepton (left) and 1-lepton (right) search modes for the SUSY SU3 benchmark point. Open circles represent the SUSY signal and the different SM background contributions are shown according to the legend. The shaded area shows the total SM background.
Extracting SUSY signals with the ATLAS detector for each of these modes requires strict selection criteria, which are discussed in Ref. 5. Figure 1 shows the effective mass distribution for the zero-lepton and one-lepton modes as estimated from the mSUGRA SU3 benchmark point. A clear signal could be extracted with 1 fb−1 of integrated luminosity at high effective mass values. Figure 2 shows the Meff distribution for all ATLAS mSUGRA benchmark points for both zero-lepton and one-lepton search modes. It should be noted that channels with leptons will have a smaller signal, but better signal-to-background conditions, providing a more robust discovery potential, especially in early data when uncertainties on the backgrounds are large. In most cases, a noticeable excess of events is observable at high effective mass values with 1 fb−1 of integrated luminosity. Only the SU2 benchmark point is not accessible in this channel. Scans over the parameter space of several models for SUSY breaking, all R-parity conserving, have been used to sample a wider range of possible signal signatures and to estimate the discovery reach. Figure 3 shows, for an integrated luminosity of 1 fb−1, the 5σ discovery reach on two mSUGRA parameter grids for some of the search modes discussed above. Additionally, parameter scans over non-universal-Higgs models and also gauge mediated SUSY breaking models have been performed. All these Monte Carlo studies include systematic uncertainties on the background. The results of the scans indicate that ATLAS should discover signals for R-parity conserving SUSY with gluino and squark masses less than O(1 TeV) after having
Fig. 2. Effective mass (Meff) from various SUn mSUGRA benchmark points and total SM background (shaded area) for zero-lepton (left) and one-lepton (right) search modes.
Fig. 3. (m0, m1/2) contour plot of 5σ discovery for the 4-jet + ETmiss and various lepton requirements for the mSUGRA scenario with 1 fb−1 integrated luminosity. The left plot is for tanβ = 10 and the right one is for tanβ = 50. The horizontal and curved gray lines indicate gluino and squark mass contours respectively, in steps of 500 GeV.
If the SUSY mass scale is in the sub-TeV range, early LHC data will likely be sufficient to claim a discovery of new physics, although new physics does not strictly mean SUSY, as other scenarios may have similar features and properties. To distinguish between different scenarios and to determine the full set of model parameters within one of them, as many measurements of the newly observed phenomena as possible are needed. This includes precise measurements of the masses, spins and CP properties of the newly observed particles.
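As a rough illustration of the kind of selection variable and counting significance discussed here, the following Python sketch builds an effective-mass variable and a naive significance estimate. The Meff definition (scalar sum of the four leading jet pT values plus the missing transverse energy), the flat 50% background systematic and all numerical inputs are assumptions chosen for illustration only; they are not the ATLAS analysis.

    # Illustrative sketch only: a simplified effective-mass variable and a naive
    # significance estimate.  The Meff definition and all numbers are assumptions
    # for illustration, not the ATLAS selection itself.
    import math

    def effective_mass(jet_pts, met):
        """Scalar sum of the four hardest jet pT's and the missing ET (GeV)."""
        return sum(sorted(jet_pts, reverse=True)[:4]) + met

    def significance(n_signal, n_background, rel_sys=0.5):
        """Naive S / sqrt(B + (rel_sys*B)^2) with a flat background systematic."""
        return n_signal / math.sqrt(n_background + (rel_sys * n_background) ** 2)

    # Hypothetical event and yields at high Meff for 1 fb^-1:
    print(effective_mass([420.0, 310.0, 150.0, 90.0, 40.0], met=280.0))  # 1250.0 GeV
    print(round(significance(n_signal=60, n_background=20), 2))          # ~5.5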
4.2. Mass measurement and parameter determination
Once a signature consistent with SUSY has been established, the experimental focus will be to reconstruct the sparticle mass spectrum and to constrain the model parameters. In R-parity conserving models, sparticle decay chains cannot be fully resolved since the LSPs escape detection. As a consequence, edge positions rather than mass peaks are measured and fitted in invariant mass distributions. An example of a suitable sparticle decay is q̃_L → χ̃⁰₂ q (→ l̃± l∓ q) → χ̃⁰₁ l⁺ l⁻ q in events containing two opposite-sign electrons or muons, hard jets and ETmiss. These characteristics ensure a large signal-to-background ratio. The kinematic endpoint in the dilepton invariant mass distribution is a function of the masses of the sparticles involved. If the sleptons are heavier than the χ̃⁰₂, the decay proceeds through the three-body channel χ̃⁰₂ → χ̃⁰₁ l⁺ l⁻. In this case the invariant mass distribution is non-triangular in shape7,8 with an endpoint equal to the mass difference of the two neutralinos, m_{ll}^{edge} = m_{χ̃⁰₂} − m_{χ̃⁰₁}. If at least one of the sleptons is lighter than the χ̃⁰₂, the two-body decay channel χ̃⁰₂ → l̃± l∓ → χ̃⁰₁ l⁺ l⁻ dominates. The dilepton invariant mass distribution is then triangular with a sharp edge at the endpoint

m_{ll}^{edge} = m_{\tilde\chi^0_2}\,\sqrt{1 - \left(\frac{m_{\tilde\ell}}{m_{\tilde\chi^0_2}}\right)^2}\;\sqrt{1 - \left(\frac{m_{\tilde\chi^0_1}}{m_{\tilde\ell}}\right)^2}\; .
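The endpoint formula above is easy to evaluate numerically. The short sketch below does so for a hypothetical mass spectrum; the masses are illustrative inputs, not values from the ATLAS benchmark points.

    # Minimal numerical check of the two-body dilepton endpoint formula quoted above.
    # Masses are hypothetical, in GeV.
    import math

    def mll_edge(m_chi2, m_slep, m_chi1):
        """Kinematic endpoint of the dilepton invariant mass for the
        two-body chain chi2^0 -> slepton l -> chi1^0 l l."""
        return m_chi2 * math.sqrt(1.0 - (m_slep / m_chi2) ** 2) \
                      * math.sqrt(1.0 - (m_chi1 / m_slep) ** 2)

    # Example spectrum (hypothetical): m(chi2)=218, m(slepton)=155, m(chi1)=118 GeV
    print(round(mll_edge(218.0, 155.0, 118.0), 1))   # ~99 GeV endpoint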
A measurement of the dilepton endpoint thus gives a handle on the masses of the two lightest neutralinos and of any sleptons that are lighter than the χ̃⁰₂. Figure 4 shows the dilepton invariant mass distribution for two SUSY benchmark points. The Standard Model background has been reduced by subtracting opposite-flavour lepton pairs.
Fig. 4. Distribution of the dilepton invariant mass after flavor subtraction for various benchmark points. The line histogram is the Standard Model contribution while the points are the sum of the Standard Model and SUSY contributions. The fitting function is superimposed and the expected position of the endpoint is indicated by a dashed line. Left: two-body decay with χ̃⁰₂ decaying to right-sleptons, for an integrated luminosity of 1 fb−1 (SU3). Right: two-body decay with χ̃⁰₂ decaying to both left- and right-sleptons, for 18 fb−1 (SU1).
Other sparticle decay chains give further kinematic endpoints. Having measured multiple mass differences using various final states, it is possible to perform a global χ² fit to the SUSY parameters. The procedure yields the most likely SUSY scenario and hence the sparticle mass spectrum.

5. Connecting ATLAS Measurements to Dark Matter Searches
Assuming neutralinos are the only component of Dark Matter, their density at a certain time in the expansion of the early universe must have become low enough for annihilation to cease, leaving relic cold Dark Matter. Inflationary models of the universe, together with astronomical data, can be used to put limits on the rates of neutralino production and annihilation. There are four main mechanisms that can terminate annihilation:
(1) Slepton exchange, which is suppressed unless the slepton masses are < 200 GeV.
(2) Annihilation to vector bosons, which occurs when the neutralino LSP acquires a significant wino or higgsino component.
(3) Co-annihilation with light sleptons, which happens when there are suitable mass degeneracies in the particle spectrum.
(4) Annihilation to third-generation fermions, which is enhanced when the heavy Higgs boson A has a mass about twice that of the LSP.
To reproduce the observed relic density, the model parameters must ensure efficient annihilation of neutralinos in the early universe. This is possible in the mSUGRA scenario within restricted regions of the parameter space where annihilation is enhanced either by a significant higgsino component in the lightest neutralino or through mass relationships. There are various strategies for determining the relic density; the one presented in Refs. 9,10 targets the weak-scale parameters relevant to the relic density calculation. Endpoints are used to constrain sparticle masses, which are then used to constrain the neutralino mixing matrix, obtaining tan β dependent values of the mixing parameters. The slepton sector is constrained using a ratio of branching fractions that is sensitive to the τ̃ mixing parameters: BR(χ̃⁰₂ → l̃_R l)/BR(χ̃⁰₂ → τ̃₁ τ). Finally, constraints on the Higgs sector are also considered, even though the chosen benchmark point lies in a region in which the LHC is expected to produce only the lightest (SM-like) Higgs boson. A relic density distribution as a function of mA is extracted. The measurement is improved by placing a lower limit of 300 GeV on mA, motivated by its non-observation in cascade decays. This assumption provides better control over the relic density, leading to a value of

\Omega_\chi h^2 = 0.108 \pm 0.01\,(\mathrm{stat+sys})\; ^{+0.00}_{-0.002}\,(m(A))\; ^{+0.01}_{-0.011}\,(\tan\beta)\; ^{+0.002}_{-0.005}\,(m(\tilde\tau_2)) .
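As a simple illustration of how the quoted uncertainty components could be combined, the sketch below adds them in quadrature. Treating the asymmetric parametric errors as uncorrelated and Gaussian is an assumption made here for illustration; the analysis referenced above may treat them differently.

    # Quadrature combination of the uncertainty components quoted above for
    # Omega_chi h^2 = 0.108.  Uncorrelated, Gaussian treatment is an assumption.
    import math

    central = 0.108
    upper = [0.01, 0.00, 0.01, 0.002]     # (stat+sys), m(A), tan(beta), m(stau2)
    lower = [0.01, 0.002, 0.011, 0.005]

    up = math.sqrt(sum(e * e for e in upper))
    down = math.sqrt(sum(e * e for e in lower))
    print(f"Omega h^2 = {central} +{up:.3f} -{down:.3f}")   # roughly a 10-15% precision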
6. Conclusion
The ATLAS experiment has performed detailed studies of its discovery potential over a wide range of models and parametrizations. The complete documentation can
be found on the public page of the SUSY working group.11 The results presented in this report show that a wide range of the SUSY parameter space can be explored at the LHC nominal CM energy of 14 TeV with an integrated luminosity of 1 fb−1. It has also been shown that ATLAS SUSY results may have a direct impact on Dark Matter searches, as they can be used to test different hypotheses on its nature and origin. Dark Matter WIMP candidates can be observed in inclusive SUSY channels, and the relic density may be determined with a precision of ≈10%, depending on the underlying SUSY model. Last year, an update of the studies discussed above was performed for a scenario where the LHC would provide 200 pb−1 of data at a CM energy of 10 TeV over one year. The corresponding mSUGRA discovery limit contours are shown in Figure 5.
Fig. 5. m0-m1/2 contour plot of the 5σ discovery reach for the 4-jet + ETmiss mode with various lepton requirements, for the mSUGRA scenario at tan β = 10 (left) and tan β = 50 (right). Results are shown for 200 pb−1 of integrated luminosity and an LHC CM energy of 10 TeV.
Except for the small far focus-point region, the discovery reach is dominated by the 4-jet plus 0- and 1-lepton analyses. Extrapolation of the derived discovery potential to lower energies is non-trivial, as the proton-proton cross-sections fall steeply, especially for the production of heavy objects. Not only is the statistical significance reduced, but important systematic uncertainties also depend on the amount of collected data. The drop of the LHC CM energy from 10 TeV to 7 TeV should reduce the statistical significances by roughly a factor of two. With an integrated luminosity five times larger than assumed in the quoted result at 10 TeV, ATLAS should reach a similar sensitivity to SUSY signals with the data collected at 7 TeV by the end of the 2011 run.

References
1. D. N. Spergel et al., Astrophys. J. Suppl. 148, 173 (2003).
2. S. Dawson, NATO Adv. Study Inst. Ser. B Phys. 365, 33 (1997).
3. J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, Phys. Lett. B 565, 176 (2003).
4. The ATLAS Collaboration, The ATLAS Experiment at the CERN Large Hadron Collider, J. Instrum. 3, S08003 (2008).
5. The ATLAS Collaboration, Expected Performance of the ATLAS Experiment: Detector, Trigger and Physics, Volume III, CERN-OPEN-2008-020, Geneva, 2008.
6. CDF Collaboration, http://www-cdf.fnal.gov/physics/exotic/exotic.html; D0 Collaboration, http://www-d0.fnal.gov/Run2Physics/np.
7. M. M. Nojiri and Y. Yamada, Phys. Rev. D 60, 015006 (1999).
8. U. De Sanctis et al., Eur. Phys. J. C 52, 743 (2007).
9. G. Polesello and D. R. Tovey, J. High Energy Phys. 05, 071 (2004).
10. M. M. Nojiri, G. Polesello and D. R. Tovey, J. High Energy Phys. 063 (2006).
11. ATLAS Collaboration SUSY Working Group, SUSY Searches Public Results, https://twiki.cern.ch/twiki/bin/view/Atlas/SusyPublicResults.
B-PHYSICS AT THE LHC
J. NARDULLI
On behalf of the LHCb collaboration
Science and Technology Facilities Council, Rutherford Appleton Laboratory, Didcot OX11 0QX, United Kingdom.
E-mail:
[email protected]
A set of key measurements of B decays which have the potential to uncover New Physics at the LHC is discussed. Together with the general-purpose detectors ATLAS and CMS, the LHCb detector, which is devoted to B physics, can study these effects precisely. Due to the large bb̄ cross section at 14 TeV, LHCb has access to 10^12 B meson decays per year. This allows significant measurements of even very rare B decays and, in particular, the precision study of the B system.
Keywords: LHCb, B decays, CP violation, Rare B decays
1. Introduction
The Large Hadron Collider1 at CERN is a proton-proton collider which will operate at a centre-of-mass energy of 14 TeV, with a maximum luminosity of 10^34 cm−2 s−1. Several experiments are installed at the LHC: ATLAS (A Toroidal LHC ApparatuS) and CMS (Compact Muon Solenoid) are general-purpose experiments searching, amongst other things, for the Higgs boson and for supersymmetric particles, while ALICE (A Large Ion Collider Experiment) is a heavy-ion experiment studying the behavior of nuclear matter at very high energies and densities. LHCb2 is a dedicated B-physics experiment which will exploit the unprecedented quantity of B hadrons produced at the LHC to over-constrain the CKM matrix and search for New Physics (NP) in the flavour sector. It will take data at a luminosity of 2 × 10^32 cm−2 s−1. The LHC bb̄ cross section of approximately 500 µb means that LHCb will have a statistical reach unmatched by any previous B-physics experiment, while the centre-of-mass energy of 14 TeV gives it unique access to all flavors of B mesons and baryons. In addition, the produced B mesons are highly boosted. LHCb will achieve a typical lifetime resolution of 40 fs, allowing for precision measurements of time-dependent CP asymmetries in the neutral B sector. In these proceedings six key measurements concerning CP asymmetries and rare B decays are discussed.
2. Prospects For Rare Decays At The LHC
Flavour Changing Neutral Currents (FCNC) play an important role in the search for new physics (NP). In the Standard Model (SM) they appear only at loop level and are suppressed by the GIM mechanism. New particles can contribute in the loops and in some cases produce effects similar in size to, or larger than, those of the SM. The large number of bb̄ pairs to be recorded at the LHC opens the door to exploring NP via FCNC in the B sector. Among the most promising rare B decays are B → K* µ+ µ−, Bs → µ+ µ− and radiative decays such as Bs → φγ. Some of these studies have already been performed at the B factories and the Tevatron, and no discrepancy with the SM has been found so far. The large amount of data to be collected at the LHC will enlarge the range over which to search for NP.

2.1. B → K* µ+ µ−
The B → K* µ+ µ− decay, whose BR is 1.1 × 10−6, proceeds mostly via an electroweak penguin diagram. Some NP scenarios predict a muon distribution different from the SM one. The differences are usually expressed through the forward-backward asymmetry (AFB) of the muon with respect to the B direction in the di-muon rest frame, as a function of the di-muon mass. The value of the di-muon mass squared at which the asymmetry vanishes is predicted3 in the SM to be 4.36 +0.33 −0.31 GeV². For this channel the signal selection benefits from the low muon misidentification rate, the excellent K/π separation provided by the RICH detector and the good invariant mass resolution (approximately 14 MeV), which lead to a yield of 7k events in 2 fb−1 with a background-to-signal ratio (B/S) of 0.2 in a 50 MeV window around the B mass. After the selection, the remaining background events are semi-leptonic B decays. BaBar and Belle have measured the branching ratio of B → K* µ+ µ− and AFB versus the di-muon mass with a precision comparable to what can be obtained by LHCb with less than 0.1 fb−1. The precision on the zero-crossing point of the asymmetry is expected to be 0.8 GeV² with 0.5 fb−1 and 0.5 (0.3) GeV² with 2 (10) fb−1.4 Performance studies in ATLAS and CMS are ongoing.

2.2. Bs → µ+ µ−
The decay Bs → µ+ µ− has been identified as a process to which NP can contribute significantly. This very rare decay proceeds mostly via electroweak (and Higgs) penguin diagrams; the box diagrams mediated by W bosons are suppressed. The SM branching ratio is predicted to be (3.35 ± 0.32) × 10−9 with a small theoretical error.5 In the Minimal Supersymmetric Standard Model (MSSM), the BR is proportional to tan^6 β.6 In Ref. 7 the authors fitted the present experimental results on electroweak and B-physics precision observables, taking into account the current limits from direct Higgs boson searches, within a particular realization of the
MSSM; they found a best fit (driven mostly by the result of the g−2 experiment) corresponding to tan β ≈ 30 and MA ≈ 350 GeV, which would predict a Bs → µ+ µ− BR of ≈ 10−8. LHCb is well suited to the search for this channel despite the huge background level. The decay is triggered very efficiently thanks to the muon triggers. The efficiency to identify muons is 95%, with a very low misidentification rate of pions as muons (0.5%). The excellent vertexing capabilities of the LHCb detector allow the secondary vertex formed by the two muons to be well separated from the primary vertex, further reducing the combinatorial background. Finally, the excellent invariant mass resolution of 20 MeV further reduces the background under the mass peak. The total selection efficiency, computed on a MC signal sample and including the detector acceptance, trigger and selection efficiencies, is 10%. The analysis is based on three variables: the invariant mass and two likelihood variables, one compiling the particle identification information and the other the geometrical information of the decay (impact parameter of the muons, Bs proper time, etc.). The space defined by these variables is divided into bins. The estimated background and the expected signal events for a given BR in each of these bins are used to compute the exclusion and discovery potential of LHCb via the determination of the confidence levels of the signal and the background, according to the method described in Ref. 8. The main background has been identified as combinatorial semi-leptonic B decays, while exclusive backgrounds such as B(s) → h+ h− (where h stands for a hadron) and Bc+ → J/ψ µ+ ν have been shown to be negligible. In 2 fb−1 LHCb expects 22 signal events for the SM BR and 180 background events in the most sensitive region of the 3D space. The current limit on the BR (established by CDF with 2 fb−1 of data12) is 3.3 × 10−8 at 90% CL. With no signal events observed in 0.1 fb−1 of integrated luminosity, LHCb will set a 90% CL limit at 1.3 × 10−8. With 0.5 fb−1 and no observed signal, LHCb should exclude a BR above the SM value at 90% CL. To obtain evidence for (an observation of) the SM BR, 2 (6) fb−1 will be needed. The ATLAS and CMS experiments instead use a cut-and-count approach: the geometrical and particle ID information are used to select the events by applying cuts. With 10 fb−1, corresponding for ATLAS and CMS to one year of data taking at nominal luminosity, ATLAS expects 5.7 signal events and 14 background events, while CMS expects 6.1 signal and 14 background events.
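The quoted LHCb yields allow a quick back-of-the-envelope scaling exercise, sketched below. The linear scaling of the signal with branching ratio and luminosity, and the naive S/sqrt(S+B) figure of merit, are simplifying assumptions made here for illustration; they do not reproduce the binned confidence-level treatment used in the actual analysis.

    # Scaling of the LHCb yields quoted above (22 signal, 180 background events in
    # 2 fb^-1 at the SM BR) to another luminosity and branching ratio.  Illustrative only.
    import math

    def expected_yields(lumi_fb, br, s_ref=22.0, b_ref=180.0, lumi_ref=2.0, br_sm=3.35e-9):
        s = s_ref * (lumi_fb / lumi_ref) * (br / br_sm)   # signal scales with BR and luminosity
        b = b_ref * (lumi_fb / lumi_ref)                  # background scales with luminosity only
        return s, b

    s, b = expected_yields(lumi_fb=2.0, br=1.0e-8)        # MSSM-like enhanced BR
    print(round(s, 1), round(b, 1), round(s / math.sqrt(s + b), 2))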
2.3. Radiative Decays
Among the most valuable probes of NP models are the FCNC transitions b → sγ. The measured inclusive rate9 for the Bd meson is in agreement with the SM expectation and imposes stringent constraints on a variety of NP models.10 The inclusive BR cannot be measured at LHCb, but several exclusive measurements, such as Bs → φγ, are well suited to the LHCb detector capabilities. The SM predicts a left-polarized photon in b → sγ transitions.
In several extensions of the SM, the photon can acquire an appreciable right-handed component11 without affecting the SM branching ratio. The photon helicity can be probed via mixing-induced CP-violation studies. LHCb expects to collect and select 1.1k (80k) events of Bs → φγ (B → K*γ) in 2 fb−1, with a B/S of 0.5 (0.7).4 With 2 fb−1 of integrated luminosity, the coefficient related to the fraction of wrongly polarized photons can be measured with a 20% statistical uncertainty.

3. Prospects For CP Violation Measurements At The LHC
Although knowledge of the CKM matrix, which describes CP violation in the Standard Model (SM), has improved significantly in recent years, several of its parameters remain poorly constrained by direct measurements. Most important among these are the CKM angle γ and the Bs mixing phase φs. The CKM Fitter13 average of γ from direct measurements is

γ = (73^{+22}_{−25})° .   (1)

φs is predicted in the SM to be very small,

φs = −0.0369 ± 0.0018 rad ,   (2)

while the tightest constraint on its value comes from the D0 collaboration,14

φs = −0.57^{+0.24}_{−0.30} (stat) ^{+0.07}_{−0.02} (syst) .   (3)
The results concerning the extraction of the CKM angle γ reported in the following sections concern only LHCb; performance studies in ATLAS and CMS are ongoing.

3.1. Measuring γ From B → hh Decays
The B(s) → h+ h− family^a of decays have decay rates with non-negligible contributions from penguin diagrams, which make them sensitive to NP. The dependence on γ comes from the time-dependent CP asymmetries in the Bs → K+ K− and Bd → π+ π− decays. However, these asymmetries also depend on the ratio of the penguin and tree decay amplitudes, d_hh e^{iθ_hh}, as well as on the mixing phases φd and φs. Since there are four asymmetries and seven unknown parameters, it is necessary to employ U-spin symmetry,15 which leads to d_ππ = d_KK and θ_ππ = θ_KK. This allows γ to be measured when combined with external constraints on φd and φs. LHCb will reconstruct4 72k Bs → K+ K− and 59k Bd → π+ π− decays with 2 fb−1 of data taking, with B/S ratios of 0.5 for both decays. These yields allow γ to be determined^b with a precision of 10°.

^a Where B stands for a Bd or Bs and h stands for a π or a K meson.
^b Allowing for a 20% level of U-spin breaking.
3.2. Measuring γ from B± → D0 K± decays
The charmed decays of charged B mesons proceed through tree-level diagrams and enable a direct SM measurement of γ. First measurements of this kind have already been made at the B factories and are the inputs to the global average on γ quoted above. Different strategies exist for measuring γ, depending on the final state into which the D0 decays. In the GLW16 strategy, the D0 decays into a CP eigenstate, and the sensitivity to γ comes from the interference between dominant and doubly color-suppressed decays. The ADS17 strategy combines color-suppressed B decays with color-favoured D decays (and vice versa), thus increasing the interference effects. The GGSZ18 strategy uses a Dalitz analysis of D0 → KS ππ decays to extract γ together with the strong phases in the D0 decay. It is expected4 that with these methods, when combined, LHCb will measure γ to approximately 5° with 2 fb−1 of data taking at nominal luminosity.
3.3. Measuring γ from Bs,d → Ds,d (K, π) decays
The time-dependent CP asymmetries in the tree-level decays Bs → Ds± K∓ and Bd → D± π∓ can be used to measure the SM value of γ + φs,d. The measured value will, in principle, suffer from an eightfold ambiguity. Together with the measurement of γ from charged B decays, these channels will provide a baseline SM measurement of γ; this will allow any NP effects in the measurement of γ from B(s) → h+ h− decays to be constrained. A yield4 of 14k Bs → Ds± K∓ events is expected with 2 fb−1 of data taking, with a B/S of 0.3, leading to a statistical precision of 10° on γ.
3.4. Measuring the Bs mixing phase φs
Although the Bs mixing phase φs is very small in the SM, it can receive sizable NP contributions through box diagrams involving top-quark exchange. The golden mode for measuring φs is Bs → J/ψφ, which is expected to yield 120k events in LHCb4 with 2 fb−1 of data taking, with a B/S of 2.1. ATLAS and CMS expect 105k and 109k signal events respectively, with a B/S of 0.3 for both experiments. The higher value of B/S in LHCb is due to a lifetime-unbiased selection which places no cuts on the impact parameters of the daughters. This causes a higher background, which can however be easily identified in the φs fitting procedure. The measurement of φs from the time-dependent decay rate asymmetries is complicated by the fact that Bs → J/ψφ is not a pure CP mode. The CP-even and CP-odd contributions can be separated by studying their distributions in the transversity angle. A precision of 0.03 rad can be achieved on φs with 2 fb−1 of data taking at LHCb. CMS expects a sensitivity on φs of 0.06 with 10 fb−1; sensitivity studies are ongoing in ATLAS.
4. Conclusion
A set of key measurements of B decays which have the potential to uncover New Physics at the LHC has been discussed. The results of six key measurements, concerning CP asymmetries and rare B decays, from both the general-purpose detectors ATLAS and CMS and from the LHCb detector, which is devoted to B physics, have been presented.

References
1. The LHC Study Group, CERN-AC/95-05 (1995).
2. A. Alves et al. [LHCb Collaboration], JINST 3, S08005 (2008).
3. M. Beneke et al., Eur. Phys. J. C 41, 173 (2005), hep-ph/0412400.
4. B. Adeva et al., arXiv:0912.417 (2009).
5. M. Blanke et al., TUM-HEP-626/06, J. High Energy Phys. 0610, 003 (2006), hep-ph/0604057v5.
6. S. R. Chaudhury et al., Phys. Lett. B 451, 86 (1999).
7. J. Ellis et al., arXiv:0709.0098 (2007).
8. A. L. Read, CERN Yellow Report 2000-005.
9. E. Barberio et al. [HFAG Collaboration], arXiv:0808.1297 [hep-ex].
10. T. Hurth, Frascati Phys. Ser. 41, 325 (2006).
11. H. E. Haber et al., Phys. Rep. 117, 75 (1985).
12. CDF Collaboration, CDF public note 9892.
13. CKMfitter Group (J. Charles et al.), Eur. Phys. J. C 41, 1-131 (2005), hep-ph/0406184.
14. R. Fleischer is cited below; D0 Collaboration, arXiv:0802.2255 [hep-ex].
15. R. Fleischer, Phys. Lett. B 459, 306 (1999).
16. M. Gronau and D. London, Phys. Lett. B 253, 483 (1991); M. Gronau and D. Wyler, Phys. Lett. B 265, 172 (1991).
17. D. Atwood, I. Dunietz and A. Soni, Phys. Rev. Lett. 78, 3257 (1997).
18. A. Giri et al., Phys. Rev. D 68, 054018 (2003).
LONG-LIVED SUPERPARTICLES AT THE LHC
A. V. GLADYSHEV1,2, D. I. KAZAKOV1,2 and M. G. PAUCAR1
1 Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 6 Joliot-Curie, 141980 Dubna, Moscow Region, Russian Federation
2 Institute of Theoretical and Experimental Physics, 25 Bolshaya Cheremushkinskaya, 117218 Moscow, Russian Federation
We consider the possibility of obtaining relatively light long-lived supersymmetric particles within the framework of the MSSM with gravity-mediated SUSY breaking. It is shown that for particular choices of parameters this possibility can be realized in the co-annihilation region with light staus, in the region with large negative trilinear scalar coupling A distinguished by light stops, and in the focus-point region where light charginos may be long-lived. This requires fine-tuning of parameters; the situation can, however, occur in the constrained MSSM along the LSP–NLSP border lines. The phenomenology of long-lived superparticles at the LHC is discussed.
Keywords: Minimal Supersymmetric Standard Model, superpartners, LHC
1. Introduction
Searches for supersymmetric particles at colliders usually proceed from the assumption that all of them are relatively heavy (a few hundred GeV), with masses determined by the soft supersymmetry breaking mass parameters m0, m1/2, A, and that they are short-lived. Being heavier than the Standard Model particles, they decay rapidly into ordinary particles plus missing energy carried away by the neutral, stable lightest supersymmetric particle (LSP), the neutralino. This situation holds almost everywhere in the parameter space of the Minimal Supersymmetric Standard Model (MSSM) for various mechanisms of supersymmetry breaking.1–3 There are, however, regions in the MSSM parameter space where the LSP is not the usual neutralino, but a slepton (mainly the stau), the relatively light superpartner of the t-quark (the stop), or the lightest chargino. These regions are obviously considered forbidden. However, at the border of these regions staus, stops and charginos become next-to-lightest superparticles (NLSP), heavier than the neutralino and thus unstable. One of the important constraints on the parameter space of the MSSM is the relic density constraint. Given the amount of dark matter from the WMAP experiment,4,5 one is left with a narrow allowed band which runs along the stau border line (the co-annihilation region), then along the Higgs limit line, and then along the radiative electroweak symmetry breaking line (the focus-point region). All three
regions are consistent with WMAP data. We found that in this narrow band, at the border of the forbidden regions, staus, stops and charginos may be rather stable, with lifetimes long enough to traverse the detector or to produce secondary decay vertices inside it. Due to their relatively small masses, the production cross-section of long-lived next-to-lightest superparticles at the LHC may reach a few per cent of a pb for staus and charginos, while stop production cross-sections can be as large as tens or even hundreds of pb.

2. Long-Lived Tau-Sleptons in the Co-annihilation Region
The co-annihilation region is shown qualitatively in the m0–m1/2 plane in Fig. 1(a). The dark triangle shows the region where the stau is the LSP; to the right of it the neutralino is the LSP. The WMAP constraint runs along the border of the LSP triangle and is shown as a straight line. Though the overlap of the LSP-region boundary with the WMAP allowed band is very narrow, its position depends on the value of tan β. In Fig. 1(b) we also show how the LSP triangle grows with tan β. Hence, even if it is very difficult to land precisely in this narrow band, by changing tan β one actually sweeps out a wide area. The boundary region is a transition region from the stau-LSP to the neutralino-LSP. In this very narrow zone the lifetime of the stau changes rapidly from infinity to almost zero, passing through the tiny interval (smeared by the variation of tan β) where the stau is a long-lived particle. When the stau mass becomes larger than that of the neutralino, the stau decays as τ̃ → χ̃⁰₁ τ. The lifetime depends crucially on the mass difference between τ̃ and χ̃⁰₁ and decreases quickly as one moves away from the boundary line. Neglecting mixing in the stau sector, one has for the decay width6

\Gamma(\tilde\tau \to \tilde\chi^0_1 \tau) = \frac{1}{2}\,\alpha_{em}\,(N_{11} - N_{12}\tan\theta_W)^2\, m_{\tilde\tau}\left(1 - \frac{m^2_{\tilde\chi^0_1}}{m^2_{\tilde\tau}}\right)^{\!2} ,
Fig. 1. (a) The LSP constraint in the m0 − m1/2 plane. (b) The tan β dependence of the LSP allowed region. The value of tan β increases from left to right.
Fig. 2. (a) The lifetime of stau as a function of m0 near the border line for tan β = 50; m1/2 increases from left to right. (b) The cross-sections for pair slepton production at LHC in pb as functions of m0 for various values of tan β in the co-annihilation region.
where N11 and N12 are elements of the matrix diagonalizing the neutralino mass matrix. In Fig. 2(a) we show the lifetime of the stau as a function of m0 for different values of m1/2 and tan β = 50. Consider now how these long-lived staus can be produced at the LHC. The main process is quark-antiquark annihilation. To calculate the stau mass and the production cross-section, we choose a set of points along the LSP border line for various values of tan β = 10–50. One can see that for a small stau mass the cross sections are large enough for staus to be produced at the LHC with an integrated luminosity of around 100 pb−1. They may well be long-lived and traverse the detector or decay at secondary vertices, though the precise lifetime is very sensitive to the point in parameter space and hence cannot be predicted with high accuracy. Still, this leaves a very interesting possibility of producing a heavy charged long-lived spinless particle.7,8
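A minimal numerical sketch of the width formula above, and the corresponding lifetime, is given below. The neutralino mixing elements N11, N12 and the mass values are hypothetical; note also that the formula neglects the tau mass, so it does not describe the near-threshold region (mass splittings below about 1.8 GeV), where the lifetimes shown in Fig. 2(a) become macroscopically long.

    # Stau decay width and lifetime from the formula quoted above (stau mixing and
    # the tau mass neglected).  All inputs are hypothetical illustration values.
    import math

    ALPHA_EM = 1.0 / 128.0      # electromagnetic coupling at the weak scale (assumed)
    HBAR_GEV_S = 6.582e-25      # hbar in GeV*s
    SIN2_THETA_W = 0.231

    def stau_width(m_stau, m_chi1, n11=0.95, n12=-0.05):
        tan_tw = math.sqrt(SIN2_THETA_W / (1.0 - SIN2_THETA_W))
        if m_stau <= m_chi1:
            return 0.0          # decay kinematically closed
        return 0.5 * ALPHA_EM * (n11 - n12 * tan_tw) ** 2 * m_stau \
               * (1.0 - (m_chi1 / m_stau) ** 2) ** 2

    for dm in (2.0, 5.0, 20.0):                 # mass splittings in GeV
        gamma = stau_width(300.0 + dm, 300.0)
        print(dm, f"{HBAR_GEV_S / gamma:.2e} s")  # lifetime drops steeply with dm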
3. Long-Lived Top Squarks
Another interesting region of parameter space is the one distinguished by light stops. It appears only for a large negative trilinear soft supersymmetry breaking parameter A0. On the border of this region, in full analogy with the stau co-annihilation region, the top squark becomes the LSP, and near this border one may get long-lived stops. Projected onto the m0–m1/2 plane, the position of this region depends on the values of tan β and A. When |A| is large enough, the squarks of the third generation, and first of all the lightest stop t̃1, become relatively light. This happens via the see-saw mechanism when diagonalizing the stop mass matrix

\begin{pmatrix} \tilde m^2_{t_L} & m_t (A_t - \mu\cot\beta) \\ m_t (A_t - \mu\cot\beta) & \tilde m^2_{t_R} \end{pmatrix} ,
where

\tilde m^2_{t_L} = \tilde m^2_Q + m_t^2 + \frac{1}{6}\,(4M_W^2 - M_Z^2)\cos 2\beta ,
\tilde m^2_{t_R} = \tilde m^2_U + m_t^2 - \frac{2}{3}\,(M_W^2 - M_Z^2)\cos 2\beta .

The off-diagonal terms increase with A, become large for large mq (which is why this concerns the third generation) and give a negative contribution to the lightest top squark mass, defined by the minus sign in

\tilde m^2_{1,2} = \frac{1}{2}\left[\, \tilde m^2_{t_L} + \tilde m^2_{t_R} \pm \sqrt{\left(\tilde m^2_{t_L} - \tilde m^2_{t_R}\right)^2 + 4\, m_t^2\, (A_t - \mu\cot\beta)^2 }\,\right] .
Hence, increasing |A| one can make the lightest stop as light as one likes, and even make it the LSP. The situation is similar to that of the stau, which becomes the LSP for small m0 and large m1/2; for the stop it takes place at small m0 and small m1/2. One actually gets a border line along which the stop becomes the LSP, and the region below this line is forbidden. It exists only for large negative A; for small A it is completely ruled out by the LEP Higgs limit.8,9
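The effect of the off-diagonal entry on the stop spectrum can be checked directly by diagonalizing the 2x2 matrix above. The sketch below does this for hypothetical soft parameters; the values of the diagonal entries, µ and tan β are illustrative assumptions.

    # Stop mass eigenvalues from the matrix quoted above: the off-diagonal term
    # m_t (A_t - mu*cot(beta)) pushes the lighter eigenvalue down for large negative A_t.
    import math

    def stop_masses(m_tl2, m_tr2, a_t, mu, tan_beta, m_t=173.0):
        off = m_t * (a_t - mu / tan_beta)                       # off-diagonal entry
        avg = 0.5 * (m_tl2 + m_tr2)
        rad = 0.5 * math.sqrt((m_tl2 - m_tr2) ** 2 + 4.0 * off ** 2)
        m1_sq, m2_sq = avg - rad, avg + rad                     # lighter / heavier
        return math.sqrt(max(m1_sq, 0.0)), math.sqrt(m2_sq)

    # Hypothetical point: diagonal entries ~ (600 GeV)^2, mu = 400 GeV, tan(beta) = 10
    for a_t in (0.0, -800.0, -1500.0):
        print(a_t, [round(m, 1) for m in stop_masses(600.0**2, 600.0**2, a_t, 400.0, 10.0)])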
Fig. 3. Allowed region of the mSUGRA parameter space for A0 = −800, −1500, −2500, −3500 GeV and tan β = 10. To the left of the border the stau is the LSP; below the border the stop is the LSP. The dotted line is the LEP Higgs mass limit. Also shown are the contours where the various stop decay modes emerge.
It should be noted that in this region one gets not only a light stop but also a light Higgs boson, since the radiative correction to the Higgs mass is proportional to the logarithm of the stop mass. The stop mass boundary is close to the Higgs one, and they may overlap for intermediate values of tan β. We show the projection of the parameter space onto the m0–m1/2 plane in Fig. 3 for different values of A and fixed tan β. One can see that when |A| decreases the border line moves down and finally disappears. On the contrary, increasing |A| one gets a larger forbidden area, and the value of the stop mass at the border increases. Changing tan β does not affect the stop border line; the only effect is a shift of the stau border line. Since the stops are relatively light in our scenario, the production cross sections are quite large and may reach tens or even hundreds of pb for mt̃ < 150 GeV. The cross sections and their dependence on the stop mass for different |A| are shown in Fig. 4. As expected, they fall rapidly as the stop mass increases. The range of each curve corresponds to the region in the m0–m1/2 plane where the light stop is the next-to-lightest SUSY particle and the Higgs and chargino mass limits are satisfied. One may notice that even for very large values of |A|, when the stops become heavier than several hundred GeV, the cross sections are of the order of a few per cent of a pb, which is still enough for detection at the LHC. Once produced, the stops decay. There are several decay modes, depending on the stop mass. If the stop is heavy enough, it decays to a bottom quark and the lightest chargino (t̃ → b χ̃±₁). However, for large |A0|, namely A0 < −1500 GeV, the region where this decay takes place shrinks and eventually disappears due to the mass inequality mt̃ < mb + mχ̃±₁. In this case the dominant decay mode is the decay to a top quark and the lightest neutralino (t̃ → t χ̃⁰₁). A light stop decays to a charm quark and the lightest neutralino (t̃ → c χ̃⁰₁); this decay, though loop-suppressed, then has a branching ratio of 100%.
Fig. 4. (a) Cross sections for pair stop production as a function of the stop mass; the different curves correspond to different values of the A0 parameter (A0 = −800, −1500, −2500, −3500 GeV). (b) The cross-sections for pair slepton production at the LHC in pb as functions of m0 for various values of tan β in the co-annihilation region.
4. Focus-Point Region and Long-Lived Charginos
In this section we explore yet another region of parameter space, a narrow band along the line where radiative electroweak symmetry breaking fails (the focus-point region). On the border of this region the Higgs mixing parameter µ tends to zero. In this case the lightest chargino (χ±₁) and the two lightest neutralinos (χ⁰₁,₂) are almost degenerate and have a mass of the order of µ. The mass terms are non-diagonal and read

L_{Gaugino-Higgsino} = -\frac{1}{2} M_3\, \bar\lambda^a \lambda^a - \frac{1}{2}\,\bar\chi M^{(0)} \chi - \left(\bar\psi M^{(c)} \psi + h.c.\right) .   (1)

At tree level the neutralino mass matrix is

M^{(0)} = \begin{pmatrix}
M_1 & 0 & -M_Z\cos\beta\,\sin\theta_W & M_Z\sin\beta\,\sin\theta_W \\
0 & M_2 & M_Z\cos\beta\,\cos\theta_W & -M_Z\sin\beta\,\cos\theta_W \\
-M_Z\cos\beta\,\sin\theta_W & M_Z\cos\beta\,\cos\theta_W & 0 & -\mu \\
M_Z\sin\beta\,\sin\theta_W & -M_Z\sin\beta\,\cos\theta_W & -\mu & 0
\end{pmatrix} ,   (2)

while for the charginos one has

M^{(c)} = \begin{pmatrix} M_2 & \sqrt{2}\,M_W\sin\beta \\ \sqrt{2}\,M_W\cos\beta & \mu \end{pmatrix} .   (3)
The physical neutralino and chargino masses are obtained as the eigenvalues of these matrices after diagonalization. The mass matrices receive radiative corrections, which are known at leading order and are typically of the order of a few per cent. When µ is small, which happens in the focus-point region near the border line of radiative electroweak symmetry breaking, the lightest chargino (χ±₁) and the two lightest neutralinos (χ⁰₁,₂) are almost degenerate and have a mass of the order of µ. In this case all of them are predominantly higgsinos. Fig. 5 shows how the masses of the lightest neutralino and the lightest chargino depend on µ.
Fig. 5. The masses of the lightest chargino and neutralino as functions of µ, with the remaining parameters fixed. The value of M2 is taken to be 600, 500 and 400 GeV, and tan β = 10, 50, respectively. The dark (red) lower band shows the experimental limit on the chargino mass.
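For concreteness, the tree-level matrices (2) and (3) can be diagonalized numerically as sketched below. The parameter values are hypothetical and the few-per-cent radiative corrections mentioned in the text are ignored.

    # Tree-level neutralino and chargino masses from Eqs. (2) and (3), diagonalized
    # with numpy.  Parameter values are hypothetical illustration inputs.
    import numpy as np

    MZ, MW, SW = 91.19, 80.40, np.sqrt(0.231)
    CW = np.sqrt(1.0 - SW**2)

    def neutralino_masses(m1, m2, mu, tan_beta):
        b = np.arctan(tan_beta)
        sb, cb = np.sin(b), np.cos(b)
        m = np.array([[m1, 0.0, -MZ*cb*SW,  MZ*sb*SW],
                      [0.0, m2,  MZ*cb*CW, -MZ*sb*CW],
                      [-MZ*cb*SW,  MZ*cb*CW, 0.0, -mu],
                      [ MZ*sb*SW, -MZ*sb*CW, -mu, 0.0]])
        return np.sort(np.abs(np.linalg.eigvalsh(m)))

    def chargino_masses(m2, mu, tan_beta):
        b = np.arctan(tan_beta)
        m = np.array([[m2, np.sqrt(2.0)*MW*np.sin(b)],
                      [np.sqrt(2.0)*MW*np.cos(b), mu]])
        return np.sort(np.sqrt(np.linalg.eigvalsh(m @ m.T)))   # singular values of M^(c)

    # Small mu (focus-point-like): chi_1^0, chi_2^0 and chi_1^+- come out nearly degenerate
    print(neutralino_masses(m1=300.0, m2=600.0, mu=150.0, tan_beta=10.0))
    print(chargino_masses(m2=600.0, mu=150.0, tan_beta=10.0))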
The degeneracy of the masses mχ⁰₁, mχ⁰₂, mχ±₁ occurs for any choice of the other parameters. However, since the value of µ is not arbitrary in this approach but is fixed by the requirement of electroweak symmetry breaking, one has to find the region of parameter space where it is small. In Fig. 5 this region is just above the chargino LEP limit, in the bottom right corner of the plots. One can see that the masses are degenerate and that the value of µ there is of the order of 150–200 GeV, depending
Fig. 6. Allowed region of the mSUGRA parameter space for A0 = 0, −800, −3500 GeV and tan β = 10, 50, respectively. Dark (blue) areas show theoretically forbidden regions. Along the narrow green curve the amount of Dark Matter corresponds to the WMAP data, Ωh² = 0.09 ± 0.04. Also shown are the experimental limits on the Higgs and chargino masses.
on the value of tan β. There is also a slight dependence on M2 (that is, on m1/2); however, this dependence only shows how far one may go along these lines while keeping the masses degenerate. It is clearly seen that the larger M2, the larger the allowed values of µ. The mass of χ⁰₂ is not shown; it almost coincides with the χ±₁ mass. In Fig. 6 we show the projection of the SUSY parameter space onto the m0–m1/2 plane for different A and tan β. One can see that for small values of A0 the Dark Matter line does not follow the electroweak symmetry breaking border but deviates from it, thus not allowing small values of µ. For large negative A0, on the contrary, the two lines almost coincide, the more so the larger the value of tan β. Note that although the region of small µ looks very fine-tuned, and indeed is very sensitive to all input parameters, in the whole four-dimensional parameter space (assuming universality) it still sweeps out a wide area and can easily be reached. The accuracy of the fine-tuning defines the accuracy of the mass degeneracy and hence the lifetime of the NLSP, which is the lightest chargino. When the parameters are chosen such that the lightest chargino and the lightest neutralino are degenerate in mass, one again has a long-lived NLSP. Its mass is typically in the 100 GeV range and the production cross-section at the LHC is considerable. Since the three states are almost degenerate, there is also co-production, which has to be taken into account. On average the cross-sections reach a few tenths of a pb and vary slightly with tan β. The cross-sections mainly depend on µ: the larger the value of µ, the smaller the cross-section.10,11 In Fig. 7 we show the lifetime of the lightest chargino as a function of the mass difference between the lightest chargino (NLSP) and the lightest neutralino (LSP). It appears that in order to get reasonably "large" lifetimes one has to go very far along the focus-point region. Keeping µ small, one can then obtain lifetimes of the order of 10−10 s for a practically degenerate LSP and NLSP. When the mass difference increases, the lifetime falls. However, if the degeneracy is within a few GeV, the charginos are long-lived.
Fig. 7. The lifetime of the lightest chargino as a function of the mass difference between the lightest chargino (NLSP) and the lightest neutralino (LSP).
5. Conclusions
We have shown that within the framework of the Minimal Supersymmetric Standard Model with a soft supersymmetry breaking mechanism it is possible to obtain long-lived superpartners of the tau-lepton, the top-quark and the charged Higgs (or W boson), which might be produced at the Large Hadron Collider. The production cross-sections depend crucially on a single parameter, the mass of the superparticle, and for light staus can reach a few per cent of a pb. The stop production cross-section can reach even hundreds of pb. The light-stop and light-chargino NLSP scenarios require large negative values of the soft trilinear supersymmetry breaking parameter A0. The events would have an unusual signature and produce a noticeable signal rather than pure missing energy carried away by the lightest neutralino. The options are:
• staus / stops / charginos traverse the detector,
• staus / stops / charginos produce a secondary vertex when they decay inside the detector,
• stops can form so-called R-hadrons (bound states of SUSY particles) if their lifetime is longer than the hadronization time.
The experimental Higgs and chargino mass limits as well as the WMAP relic density limit can easily be satisfied in our scenario; however, strong fine-tuning is required. Moreover, it is worth mentioning that light stops are favoured by the baryon asymmetry of the Universe. Our stau/stop/chargino NLSP scenarios differ from the gauge-mediated supersymmetry breaking scenario, where the next-to-lightest supersymmetric particles typically live longer.
Acknowledgments
Financial support from RFBR grant # 08-02-00856-a and grant of the Ministry of Education and Science of the Russian Federation # 3810.2010.2 is acknowledged.
References
1. H. P. Nilles, Phys. Rept. 110, 1 (1984).
2. H. E. Haber and G. L. Kane, Phys. Rept. 117, 75 (1985).
3. A. V. Gladyshev and D. I. Kazakov, Phys. Atom. Nucl. 70, 1553 (2007).
4. C. L. Bennett et al., Astrophys. J. Suppl. 148, 1 (2003).
5. D. N. Spergel et al., Astrophys. J. Suppl. 148, 175 (2003).
6. A. Bartl, W. Majerotto, B. Mosslacher and N. Oshimo, Z. Phys. C 52, 677 (1991).
7. A. V. Gladyshev, D. I. Kazakov and M. G. Paucar, Mod. Phys. Lett. A 20, 3085 (2005).
8. A. V. Gladyshev, D. I. Kazakov and M. G. Paucar, Long-living superpartners in the MSSM, in: Proc. of the 15th Int. Conf. on Supersymmetry and the Unification of Fundamental Interactions, eds. W. de Boer, I. Gebauer, p. 338, arXiv:0710.2322 [hep-ph].
9. A. V. Gladyshev, D. I. Kazakov and M. G. Paucar, arXiv:0704.1429 [hep-ph], to appear in J. Phys. G.
10. A. V. Gladyshev, D. I. Kazakov and M. G. Paucar, J. Phys. G 36, 125009 (2009).
11. A. V. Gladyshev, D. I. Kazakov and M. G. Paucar, Nucl. Phys. B (Proc. Suppl.) 198, 104-107 (2010).
HEAVY IONS AT THE LHC: SELECTED PREDICTIONS
GEORG WOLSCHIN∗
Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, Germany
∗ E-mail:
[email protected]
wolschin.uni-hd.de
Baryon and charge transport in relativistic heavy-ion collisions are investigated within a nonequilibrium-statistical Relativistic Diffusion Model (RDM) and using a QCD-based gluon saturation model. The theoretical results are compared with Pb + Pb data at SPS and Au + Au data at RHIC energies. Predictions are made for charged-hadron, net-baryon and net-kaon rapidity distributions in central Pb + Pb collisions at CERN LHC energies of up to √sNN = 5.5 TeV, both at forward rapidities and in the central rapidity region where data will soon be available.
Keywords: Relativistic heavy-ion collisions; Net-baryon and net-kaon distributions; Relativistic Diffusion Model; Saturation model; Predictions at LHC energies
1. Introduction With the CERN Large Hadron Collider LHC now in operation for proton-proton collisions at center-of-mass energies of up to 7 TeV 1,2 , the advent of heavy-ion collisions at the LHC is in sight towards the end of this year 2010, and corresponding theoretical predictions will then be tested. In this note such predictions for several selected observables will be presented. A phenomenological nonequilibrium-statistical model that had been formulated earlier is used to predict pseudorapidity distributions of produced charged hadrons as well as net-baryon, or net-proton (p − p¯), rapidity distributions. In a complementary approach that is based on quantum chromodynamics and in particular, on the gluon saturation model 3–6 , the same observables—with an emphasis on net-baryon rapidity distributions—are then obtained on a microscopic basis, with similar results for net baryons in the midrapidity region 7–9 . Several predictions for charged-hadron and net-proton rapidity distributions within the Relativistic Diffusion Model (RDM)10 are summarized in Sec. 2. The calculations for net-baryon rapidity distributions within a microscopic QCD-based model are reviewed in Sec. 3, and results at LHC energies are given in Sec. 4. In Sec. 5 the analysis is concentrated on the midrapidity region where data will soon be available. Conclusions are drawn in Sec. 6.
2. Relativistic Diffusion Model
In the Relativistic Diffusion Model,10 both charged-hadron and net-baryon rapidity distributions in relativistic heavy-ion collisions emerge from a superposition of beam-like nonequilibrium components that are broadened in rapidity space through diffusion due to soft (hadronic, low p⊥) collisions and particle creations, and a near-equilibrium (thermal) component at midrapidity that arises—among other processes—from hard (partonic, high p⊥) processes and may indicate local quark-gluon plasma (QGP) formation. The time evolution of the distribution functions is governed by a Fokker-Planck equation (FPE) in rapidity space11–17

\frac{\partial}{\partial t}\,[R(y,t)]^{\mu} = -\frac{\partial}{\partial y}\Big[\, J(y)\,[R(y,t)]^{\mu}\,\Big] + D_y\,\frac{\partial^2}{\partial y^2}\,[R(y,t)]^{\nu}   (1)
with the rapidity y = 0.5 · ln((E + p)/(E − p)). Here we use µ = 1 (due to norm conservation) and ν = 1 corresponding to the standard FPE, and a drift function J(y) = (yeq − y)/τy such that the model is linear. The rapidity diffusion coefficient Dy that contains the microscopic physics accounts for the broadening of the rapidity distributions due to interactions and particle creations, and it is related to the drift term J(y) by means of a dissipation-fluctuation theorem (Einstein relation) which is used to actually calculate Dy in the weak-coupling limit 11,12 . Collective expansion then leads to a further broadening of the distribution functions, with the expansion velocities obtained as proposed in18 from a comparison to the data. The drift J(y) determines the shift of the mean rapidities towards the central value, and besides the linear function also nonlinear forms have been discussed. @AD
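A rough numerical illustration of the linear FPE (1), with the drift J(y) = (yeq − y)/τy and constant Dy, is sketched below using an explicit finite-difference scheme for a single beam-like component. Grid, interaction time and parameter values are arbitrary choices for illustration and are not the fitted RDM parameters.

    # Explicit finite-difference sketch of the linear FPE (1) for one beam-like
    # component.  All parameter values are illustrative, not fitted RDM values.
    import numpy as np

    def evolve_rdm(y_beam, y_eq=0.0, tau_y=1.0, d_y=0.4, t_int=0.3, ny=801, dt=1e-4):
        y = np.linspace(-12.0, 12.0, ny)
        dy = y[1] - y[0]
        r = np.exp(-0.5 * ((y - y_beam) / 0.1) ** 2)     # narrow source at beam rapidity
        r /= np.trapz(r, y)
        drift = (y_eq - y) / tau_y
        for _ in range(int(t_int / dt)):
            dflux = np.gradient(drift * r, dy)            # d/dy [J(y) R]
            diff = d_y * np.gradient(np.gradient(r, dy), dy)
            r = r + dt * (-dflux + diff)
        return y, r

    y, r = evolve_rdm(y_beam=8.68, t_int=0.3)             # LHC-like beam rapidity
    print(round(np.trapz(r, y), 3), round(y[np.argmax(r)], 2))  # norm ~1, drifted peak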
1000 800 600 400 200 0
LHC
RHIC @DD
1800 dN dΗ 1500
@CD
1200
@AD
900
@BD
600 300 0 -10 -8
-6
-4
-2
0 Η
2
4
6
8
10
(b)
(a) Fig. 1. Rapidity distributions of central Pb + Pb and Au + Au in the Relativistic Diffusion Model. (a) Produced charged hadrons at RHIC (0.2 TeV: data from19 ) and LHC (5.5 TeV) energies. Model assumptions A-D are discussed in20 . (b) Net protons from SPS to LHC energies10,21,22 .
November 22, 2010
10:29
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.07˙Wolschin
71
Two representative results of the RDM approach including predictions for central √ Pb + Pb at the maximum LHC energy of sN N = 5.52 TeV are displayed in Fig. 1(a) for pseudorapidity distributions of produced charged hadrons ,20 and in Fig. 1(b) for rapidity distributions of net protons p − p¯ .10
3. Rapidity Distributions for Net Baryons Based on QCD Gluon saturation has been the focal point of important and interesting particlephysics investigations since many years. Its observation would allow to access a new regime of quantum chromodynamics where high-density gluons form a coherent state. In regions of large parton densities the physics is governed by a single hard scale Qs ΛQCD which increases with energy and thus allows the use of smallcoupling techniques 3–6 . In this regime gluon recombination starts to compete with the exponentially increasing gluon splitting, and the gluon distribution function is expected to saturate. At the HERA, some evidence for gluon saturation in the proton had been found in deep inelastic e + p collisions at high energy and low values of Bjorken-x, but the results are still open to interpretation23 . The existence of geometric scaling as predicted by the color glass theory as an approach to saturation physics had indeed been confirmed, constituting the most important evidence for saturation so far24 . Since the saturation scale is enhanced by a factor A1/3 in heavy ions as compared to protons, it is natural to investigate saturation in relativistic heavy-ion collisions, as has been done by many authors25 . Most theoretical investigations concentrate on charged-hadron production from inclusive gluon interactions, and in the central rapidity region a reasonable understanding has been achieved in the color glass condensate framework6,26–28 through inclusive gluon production29,30 . However, the valence-quark scattering off the gluon condensate as an observable in net-baryon distributions7,8 is expected to provide interesting new information on gluon saturation, and on geometric scaling24 . Here the most promising effects arise at very forward angles, and correspond√ ingly large values of the rapidity y ' 5 − 8 at LHC energies of sN N = 5.5 TeV for Pb + Pb, with a beam rapidity of 8.68. For symmetric systems, two symmetric fragmentation peaks are expected to be present in the net-baryon distributions at forward/backward rapidities, as was shown in the RDM-calculations in the previous section. In particular, we have discussed in7 that it is in principle possible to determine the growth of the saturation-scale, λ ≡ d ln Qs /dyb , with the beam rapidity yb from the position of the fragmentation peak in rapidity space. In this region of relatively large values of Feynman-x ' 0.1 and correspondingly large rapidities y the valence-quark parton distribution in the projectile is wellknown, and can hence be used to access the gluon distribution at small x in the other nucleus where saturation is expected to occur due to the competition of gluon recombination with the exponentially increasing gluon splitting 26–28,31 . For the next years of LHC operation, however, experimental investigations of
November 22, 2010
10:29
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.07˙Wolschin
72
central heavy-ion collisions with particle identification will concentrate on the midrapidity region y ≤ 2 and hence, we focus our predictions for net-baryon distributions at LHC energies to this region in Sec. 5. It turns out that here the saturationscale exponent λ can not be determined from net-baryon distributions because the dependence of the distribution function on λ is too weak near midrapidity. The differential cross section for valence quark production in a high-energy nucleus-nucleus collision is calculated from32–34 dN 1 1 = x1 qv (x1 , Qf ) ϕ (x2 , pT ) , (2π)2 p2T T dy
d2 p
(2)
where pT is the transverse momentum of the produced quark, and y its rapidity. The longitudinal momentum fractions carried, respectively, by the valence quark √ in the projectile and the soft gluon in the target are x1 = pT / s exp(y) and √ x2 = pT / s exp(−y). The factorization scale is usually set equal to the transverse momentum, Qf ≡ pT . We have discussed the gluon distribution ϕ(x, pT ) and details of the overall model in7 . The contribution of valence quarks in the other beam nucleus is added incoherently by changing y → −y. The valence quark distribution of a nucleus, qv ≡ q − q¯, is given by the sum of valence quark distributions qv,N of individual nucleons, qv ≡ Aqv,N , where A is the atomic mass number. Assuming that the rapidity distribution for net baryons is proportional to the valence-quark rapidity distribution up to a constant factor C, one obtains by integrating over pT , Z 2 C d pT dN = x1 qv (x1 , Qf ) ϕ (x2 , pT ) . (3) dy (2π)2 p2T It turns out that this is indeed a good approximation at sufficiently high energy, in particular, when comparing to Au + Au data from RHIC, and it is expected to be valid at LHC as well7,8 . The gluon distribution is related to the forward dipole scattering amplitude N (x, rT ), for a quark dipole of transverse size rT , through the Fourier transform Z 2 ϕ(x, pT ) = 2πpT rT drT N (x, rT )J0 (rT pT ). (4) In the fragmentation region of the projectile the valence quark parton distribution function (PDF) is dominated by large values of x1 . We integrate out the fragmentation function such that the hadron rapidity distribution is proportional to the parton distribution. The overall constant C depends on the nature of the produced hadron. One important prediction of the color glass condensate theory is geometric scaling: the gluon distribution depends on x and pT only through the scaling variable p2T /Q2s (x), where Q2s (x) = A1/3 Q20 x−λ , A is the mass number and Q0 sets the dimension. This has been confirmed experimentally at HERA24 . The fit value λ = 0.2 − 0.3 agrees with theoretical estimates based on next-to-leading order
November 22, 2010
10:29
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.07˙Wolschin
73
Balitskii-Fadin-Kuraev-Lipatov (BFKL) results35,36 . To show that the net-baryon distribution reflects the geometric scaling of the gluon distribution, the following change of variables has been proposed in7 : x ≡ x1 , x2 ≡ x e−2y , p2T ≡ x2 s e−2y .
(5)
Thus, Eq. (3) for the rapidity distribution can be rewritten as Z 1 dN C dx (τ ) = xqv (x) ϕ(x2+λ eτ ), dy 2π 0 x
(6)
where τ = ln(s/Q20 ) − ln A1/3 − 2(1 + λ) y is the corresponding scaling variable. Hence, the net-baryon multiplicity in the peak region is only a function of a single scaling variable τ , which relates the energy dependence to the rapidity and mass number dependence. In the fragmentation region, the valence quark distribution is only very weakly dependent on Qf . From the equation for the isolines, τ = const, one gets the evolution of the position of the fragmentation peak in the forward region with respect to the variables of the problem 1 ypeak = ybeam − ln A1/6 + const, (7) 1+λ √ where ybeam = 1/2 · ln[(E + pL )/(E − pL)] ' ln s/m is the beam rapidity at beam energy E and longitudinal momentum pL with the nucleon mass m.
dN∆ B /dy
100 80
Pb+Pb NA49
17.3 GeV
Au+Au BRAHMS
62.4 GeV
Au+Au BRAHMS
200 GeV
60 40 20 0 -6
-4
-2
0
2
4
6
-6
-4
-2
0
2
4
6
-6
-4
-2
0
2
4
6
Fig. 2. Rapidity distribution of net baryons in central (0–5%) Pb + Pb collisions at SPS energies of √sNN = 17.3 GeV (left frame). The theoretical calculation for λ = 0.2 and Q0² = 0.04 GeV² is compared with NA49 results that have been extrapolated from the net-proton data (open circles21). Black diamonds are more recent preliminary NA49 data points.37 At RHIC energies of √sNN = 62.4 GeV (middle frame, 0–10%) and 200 GeV (right frame, 0–5%) for central Au + Au, our corresponding theoretical results are compared with BRAHMS net-baryon data (circles).22,38 At 200 GeV, triangles are preliminary scaled BRAHMS net-proton data points for 0–10%.39 The full lines correspond to the no-fragmentation hypothesis with Q0² = 0.04 GeV² and the dashed lines include fragmentation with Q0² = 0.1 GeV². Arrows indicate the beam rapidities. From Mehtar-Tani and Wolschin.8
To take into account saturation effects in the target we choose the Golec-Biernat-Wüsthoff model40 for the forward dipole scattering amplitude N, leading to (cf. Eq. (4) and Ref. 34)

\varphi(x, p_T) = 4\pi\, \frac{p_T^2}{Q_s^2(x)}\, \exp\!\left(-\frac{p_T^2}{Q_s^2(x)}\right) .   (8)

The valence quark parton distribution of the nucleus is taken to be equal to the valence quark PDF of a nucleon times the number of participants in the nucleus. We focus here on the forward rapidity region and interpolate to mid-rapidity, where small-x quarks are dominant, by matching the leading-order distributions41 to the Regge trajectory, x qv ∝ x^{0.5}, at x = 0.01.42 To account for large-x effects in the gluon distribution, we multiply the distribution function by (1 − x2)^4, cf.30 Mass effects are taken into account through the replacement pT → √(pT² + m²). Our results for net-baryon rapidity distributions in central Pb + Pb and Au + Au collisions are shown in Fig. 2. Solid curves are for λ = 0.2 and Q0² = 0.04 GeV² without consideration of fragmentation; dashed curves (Q0² = 0.1 GeV²) include a fragmentation function for valence quarks to net protons.43 See Mehtar-Tani and Wolschin8 for a detailed discussion. We compare with SPS NA49 Pb + Pb results at √sNN = 17.3 GeV21 and BRAHMS Au + Au data at 62.4 GeV and 200 GeV.22,38,39,44 The estimated numbers of participants are 390, 315 and 357 for √sNN = 17.3, 62.4 and 200 GeV, respectively.38,44 The centrality dependence of the net-baryon distribution has also been investigated. Formally, the dependence of the rapidity distribution on the mass number A through the saturation scale is found to be Qs ∝ A^{1/6}. The centrality dependence of particle production is essentially determined by the number of participants and hence we replace A → Npart. Larger values of A or Npart correspond to an increase in stopping: the fragmentation peaks shift towards mid-rapidity. The agreement with SPS and RHIC data is good, see8.

4. Predictions for Net Baryons at LHC Energies
A prediction for central Pb + Pb at the maximum LHC energy of 5.52 TeV is shown in Fig. 3(a) for λ = 0, 0.15 and 0.3. At LHC energies the mid-rapidity region is almost baryon free; we obtain dN/dy(y = 0) ≈ 4 for net baryons at 5.52 TeV. The position of the fragmentation peak is very sensitive to the value of λ, with a difference of about 1.5 units of rapidity between the λ = 0 and 0.3 cases. It is possible that the full scaling regime with λ approaching 0.3 can be reached at or beyond LHC energies, but at present none of the LHC experiments is capable of measuring identified protons or neutrons from central Pb + Pb collisions in the region of the fragmentation peaks. This would be a relevant proposal for future extensions of the detector capabilities at the LHC.
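A schematic numerical evaluation of Eq. (3) with the Golec-Biernat-Wüsthoff form (8) is sketched below. The valence-quark distribution is a toy parametrization rather than the leading-order PDF matched to the Regge form used in the text, and the overall constant C is set to one, so only the qualitative shape (a nearly baryon-free midrapidity region and forward fragmentation peaks) should be read from the output.

    # Schematic evaluation of Eq. (3) with the GBW gluon distribution, Eq. (8).
    # Toy valence-quark distribution and C = 1: relative units only.
    import numpy as np

    def gbw_phi(x2, pt2, q0_sq=0.04, lam=0.2, mass_number=208):
        qs_sq = mass_number ** (1.0 / 3.0) * q0_sq * x2 ** (-lam)
        return 4.0 * np.pi * (pt2 / qs_sq) * np.exp(-pt2 / qs_sq)

    def toy_xqv(x):
        return 3.0 * np.sqrt(x) * (1.0 - x) ** 3          # toy x*q_v(x), not a fitted PDF

    def dndy(y, sqrt_s_nn, n_pt=400, pt_max=10.0, norm=1.0):
        pt = np.linspace(0.05, pt_max, n_pt)
        x1 = np.clip(pt / sqrt_s_nn * np.exp(y), 1e-9, 1.0)
        x2 = np.clip(pt / sqrt_s_nn * np.exp(-y), 1e-9, 1.0)
        # d^2 p_T = 2 pi p_T dp_T; integrand of Eq. (3) up to the constant C
        integrand = (2.0 * np.pi / pt) * toy_xqv(x1) * gbw_phi(x2, pt ** 2)
        return norm / (2.0 * np.pi) ** 2 * np.trapz(integrand, pt)

    for y in (0.0, 4.0, 6.0, 7.5):                        # midrapidity vs forward region
        print(y, f"{dndy(y, sqrt_s_nn=5520.0):.3e}")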
Fig. 3. Central collisions of Pb + Pb and Au + Au. (a) Rapidity distribution of net baryons in central Pb + Pb collisions at the LHC energy of √s_NN = 5.52 TeV. Theoretical distributions are shown over the full rapidity range including the fragmentation peaks for λ = 0 (dashed), λ = 0.15 (solid), and λ = 0.3 (dotted curve). (b) The mean rapidity loss ⟨δy⟩ as obtained from our theoretical results is plotted as a function of beam rapidity y_b (solid curve). The star at y_beam = 8.68 is our prediction for central Pb + Pb at the LHC energy of √s_NN = 5.52 TeV with λ = 0.2 and Q_0² = 0.04 GeV². Analysis results from AGS Au + Au data (E917, E802/E866, triangles, Ref. 45), SPS Pb + Pb data (NA49, square, Ref. 21), and RHIC Au + Au data (BRAHMS, dots, with triangles as lower and upper limits, Refs. 22, 38) are compared with the calculations. From Mehtar-Tani and Wolschin (Refs. 7, 8).
With increasing energy (such as from RHIC to LHC) the peaks move apart; the solutions behave like travelling waves in rapidity space, which can be probed experimentally at distinct values of the beam energy, or the corresponding beam rapidity. According to Eq. (7), the peak position as a function of the beam rapidity is y_peak = v y_b + const with the peak velocity v = 1/(1 + λ). The position of the peak in rapidity space as a function of the beam energy can in principle be determined experimentally, or at least estimated (RHIC). Theoretically, its evolution with energy provides a measure of the saturation-scale exponent λ. Hence, a precise determination of the net-baryon fragmentation peak position as a function of beam energy would also provide detailed information about the gluon saturation scale.

The mean rapidity loss ⟨δy⟩ = y_b − ⟨y⟩ is shown in Fig. 3(b); our result is in agreement with the experimental values of baryon stopping that have been obtained at AGS and SPS energies (Refs. 21, 45). Assuming that the mean rapidity evolves similarly to the peak position, ⟨y⟩ ≡ y_peak + const, the mean rapidity loss increases linearly at large y_b, ⟨δy⟩ = λ/(1 + λ) y_b + const; the slope is related to λ. Hence, the mean rapidity loss that accompanies the energy loss in the course of the slow-down of baryons also provides a potential measure for λ and thus a test of saturation physics.

The grey band in Fig. 3(b) reflects the uncertainty of how to place the remaining baryons that are missing in the present model. It amounts to ∆N, with N ≡ N_part: the upper limit corresponds to the case where the missing baryons sit at the mean rapidity, roughly about the peak rapidity. Then the corrected mean rapidity loss is equivalent to the theoretical one, ⟨δy⟩_corr ≡ ⟨δy⟩. The lower limit corresponds to the case where they sit at the beam rapidity, ⟨δy⟩_corr ≡
(1 − ∆N/N)⟨δy⟩. The full line is the mean value of the two calculations, and it is in reasonable agreement with the upper limit of the data given by BRAHMS. Our result for the mean rapidity loss as a function of beam rapidity, or center-of-mass energy, emphasizes the importance of a detailed measurement at LHC energies to allow more definite conclusions about the value of λ from net-baryon distributions in relativistic heavy-ion physics.

5. Net Baryons and Kaons in the Midrapidity Region

Since net-baryon data in central heavy-ion collisions at large rapidities will not be available in the next years, one first has to concentrate on the midrapidity region. This section summarizes the corresponding predictions (Ref. 9). The LHC physics program starts with center-of-mass proton-proton energies of 7 and 10 TeV; the corresponding energies for Pb + Pb (scaling with Z/A) are √s_NN = 2.76 and 3.94 TeV. Predictions for the highest attainable Pb + Pb energy of 5.52 TeV are also shown. Since experimental results will be available for net protons, we calculate these at the highest LHC energy in the midrapidity valley |y| < 2 instead of net baryons, and also include a prediction for net kaons (K⁺ − K⁻) since these carry part of the valence quarks.

The gluon distribution that appears in the expression for the rapidity distribution of net baryons (Eq. 2) is peaked at q_T = Q_s, or x_1 = exp(−τ/2 + λ), with the saturation momentum squared Q_s² = A^{1/3} Q_0² x_2^{−λ}, the saturation-scale exponent λ, and the scaling variable τ = ln(s/Q_0²) − ln A^{1/3} − 2(1 + λ) y that has been introduced in Ref. 7. Here A is the nucleon number and Q_0 sets the dimension. The peak at q_T = Q_s reflects the fact that most of the gluons sit at this value. Therefore, we expect dN/dy ∼ x_1 q(⟨x_1⟩), with ⟨x_1⟩ ≡ ⟨Q_s⟩/√s exp(y). With x_2 = x_1 exp(−2y) we can solve this equation for ⟨x_1⟩, yielding

⟨x_1⟩ = (A^{1/6} Q_0/√s)^{1/(1+λ/2)} exp( 2(1+λ)/(2+λ) y ).   (9)

In the region of small x_1 and x_2 corresponding to the midrapidity valley (y ∼ 0) away from the peaks, the valence quark distribution behaves as x q_v ∝ x^∆, where the intercept ∆ has been calculated in the saturation picture (Ref. 42), leading to

∆ = 1 − sqrt( 2 α_s C_F / (π(1 − λ)) )   (10)

with C_F = (N_C² − 1)/(2N_C), N_C = 3. The value of ∆ had been fitted to the old preliminary BRAHMS data in Ref. 42, with ∆ ≈ 0.47, leading to a strong-coupling constant α_s ≈ 0.3. Therefore, in the midrapidity valley Eq. (3) becomes

(1/A) dN/dy ∝ (A^{1/6} Q_0/√s)^{∆/(1+λ/2)} cosh( 2∆ (1+λ)/(2+λ) y ),   (11)
which reduces to Eq. (80) of Ref. 42 for the special case λ = 0.
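As a quick cross-check of Eq. (10), the short sketch below (an illustrative calculation, not part of the original analysis) inverts the relation to obtain α_s from an intercept ∆; with the values ∆ ≈ 0.575 (λ = 0) and ∆ ≈ 0.509 (λ = 0.3) extracted from the fits discussed below, it returns α_s ≈ 0.2, as quoted in the text.

```python
import math

CF = (3 ** 2 - 1) / (2 * 3)   # C_F = (N_C^2 - 1)/(2 N_C) with N_C = 3

def delta_from_alphas(alpha_s, lam):
    """Intercept Delta of Eq. (10): Delta = 1 - sqrt(2 alpha_s C_F / (pi (1 - lam)))."""
    return 1.0 - math.sqrt(2.0 * alpha_s * CF / (math.pi * (1.0 - lam)))

def alphas_from_delta(delta, lam):
    """Inverse of Eq. (10): alpha_s = pi (1 - lam) (1 - Delta)^2 / (2 C_F)."""
    return math.pi * (1.0 - lam) * (1.0 - delta) ** 2 / (2.0 * CF)

for delta, lam in [(0.575, 0.0), (0.509, 0.3)]:
    print(f"Delta = {delta}, lambda = {lam}: alpha_s = {alphas_from_delta(delta, lam):.2f}")
```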
Fig. 4. (a) The rapidity distribution of net protons in central (0–5%) Au + Au collisions at the RHIC energy of √s_NN = 0.2 TeV as measured by BRAHMS (Ref. 22, black dots) is fitted with our theoretical formula using a χ²-minimization to fix the parameters for the predictions at LHC energies. The data point at y = 2.9 is neglected in the minimization. The grey band in the lower part of the figure shows our predictions for central Pb + Pb collisions at the LHC energy of √s_NN = 2.76 TeV (corresponding to 7 TeV in p + p) with λ = 0.3 (upper bound), λ = 0.2 (solid curve), and λ = 0 (lower bound), using Eq. (12). (b) The net-kaon rapidity distribution in central (0–5%) Au + Au collisions at the RHIC energy of √s_NN = 0.2 TeV as measured by BRAHMS (Ref. 22, black dots) is fitted with our theoretical formula using a χ²-minimization. The grey band in the lower part of the figure shows our predictions for central Pb + Pb collisions at the LHC energy of √s_NN = 2.76 TeV (corresponding to 7 TeV in p + p) with λ = 0.3 (upper bound), λ = 0.2 (solid curve), and λ = 0 (lower bound). From Mehtar-Tani and Wolschin (Ref. 9).
Fig. 5. Rapidity distributions of central Pb + Pb in the gluon saturation model. (a) Rapidity distributions of net protons in 0–5% central Pb + Pb collisions at LHC energies of √s_NN = 2.76, 3.94, and 5.52 TeV. The theoretical distributions are shown for λ = 0.2. (b) Calculated rapidity distributions of net protons in 0–5% central Pb + Pb collisions at LHC energies of √s_NN = 2.76, 3.94, 5.52 TeV. Our result for central Au + Au collisions at the RHIC energy of 0.2 TeV is compared with BRAHMS data (Ref. 22) in a χ²-minimization as in Fig. 2. From Mehtar-Tani and Wolschin (Ref. 9).
The midrapidity values of the net-baryon or net-proton rapidity distributions at two different center-of-mass energies in the nucleon-nucleon system are related through

dN/dy (s) = (s_0/s)^{∆/(2+λ)} dN/dy (s_0).   (12)
We now use the analytical form dN/dy = a cosh(by) (cf. Eq. (11)) in a direct comparison with BRAHMS net-proton data in central Au + Au collisions at the RHIC energy of √s_NN = 0.2 TeV (Ref. 22) through a χ²-minimization of the two parameters a and b, where b = 2∆(1 + λ)/(2 + λ). Our comparison with the BRAHMS Au + Au data in the midrapidity region is shown in Fig. 4(a) for net protons. The fit parameters are a = 6.79 ± 0.59 and b = 0.575 ± 0.116 as results of the χ²-minimization; with 8 data points and 2 free parameters, χ²/dof = 0.028. With the energy dependence as expressed in Eq. (12), the grey band in the lower part of the figure shows our predictions for central Pb + Pb collisions at an LHC energy of √s_NN = 2.76 TeV with λ = 0.3 (upper bound), λ = 0.2 (solid curve), and λ = 0 (lower bound). The mass-number dependence is very weak, and we neglect it in the discussion (A_Pb/A_Au ≈ 1.056). For λ = 0 we have ∆ = b. Our value for b is slightly larger than, but within our error bars compatible with, the one fitted in Ref. 42. We extract a value of ∆ ≈ 0.575 for λ = 0, and ∆ ≈ 0.509 for λ = 0.3, leading to α_s ≈ 0.2.

Our result for the midrapidity distributions should be compared directly to the forthcoming ALICE net-proton data in central Pb + Pb collisions. The predicted midrapidity value at √s_NN = 2.76 TeV is dN/dy ≈ 1.93; it depends only slightly on the saturation-scale exponent λ and hence one cannot expect to determine the value of λ from midrapidity net-baryon data. From the overall accuracy of our prediction regarding the absolute value at midrapidity, and the shape of the net-proton rapidity distribution, we will, however, be able to draw conclusions regarding the validity of the gluon saturation picture.

Like net baryons, net charged mesons such as kaons and pions carry part of the valence quarks, and can thus be treated on the same footing. In particular, these can be used as a cross-check for the validity of our hypothesis that net-baryon and net charged-meson rapidity distributions essentially reflect the valence quark distributions, such that hadronization does not play a significant role. Here we study the net-kaon rapidity distribution since we do not have access to the full net charged-meson distribution.

In Fig. 4(b) we show the result for the net-kaon rapidity distribution in central Au + Au collisions at the RHIC energy of √s_NN = 0.2 TeV in comparison with BRAHMS data (Ref. 46) through a χ²-minimization of the two parameters, a = 2.087 ± 0.173 and b = 0.535 ± 0.031. Here the result of the minimization is χ²/dof = 0.540. The value of b for ∆K is compatible with the one extracted for net protons. This indicates that the rapidity distribution is primarily sensitive to the initial conditions of the collision, not to the hadronization process, since the slope does not depend on the species of the produced particles (protons or kaons).
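To illustrate how the fitted RHIC parameters propagate to the LHC prediction through Eq. (12), the following sketch scales the midrapidity value a = 6.79 from √s_NN = 0.2 TeV to 2.76 TeV. It is an illustrative calculation only; the value ∆ ≈ 0.53 used for λ = 0.2 is our own interpolation between the quoted λ = 0 and λ = 0.3 values and is therefore an assumption.

```python
def scale_midrapidity(dndy_0, sqrt_s0, sqrt_s, delta, lam):
    """Energy scaling of the midrapidity yield, Eq. (12):
    dN/dy(s) = (s0/s)^(Delta/(2+lambda)) dN/dy(s0)."""
    return dndy_0 * ((sqrt_s0 / sqrt_s) ** 2) ** (delta / (2.0 + lam))

a_rhic = 6.79          # fitted midrapidity value at sqrt(s_NN) = 0.2 TeV
delta  = 0.53          # assumed intermediate value for lambda = 0.2
lam    = 0.2

dndy_lhc = scale_midrapidity(a_rhic, 0.2, 2.76, delta, lam)
print(f"predicted dN/dy(y=0) at 2.76 TeV: {dndy_lhc:.2f}")   # about 1.9
```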
In Fig. 5(a) we display the energy dependence of our net-proton central Pb + Pb results near midrapidity for √s_NN = 2.76, 3.94, and 5.52 TeV. At y = 0 the corresponding values of dN/dy are 1.9, 1.7, and 1.4. A description of the net-proton rapidity distribution within the relativistic diffusion model (which is not based on QCD, but on nonequilibrium-statistical physics) had been developed in Ref. 10 and described in Sec. 1. There the predicted midrapidity value for central Pb + Pb at the LHC energy of √s_NN = 5.52 TeV is dN/dy ≈ 1–2.5, depending on the model parameters, and hence comparable to the present QCD-based result.

To emphasize how the midrapidity results are embedded into the overall shape of the rapidity distribution for net protons (baryons) in central relativistic Pb + Pb collisions at LHC energies, the total rapidity density distribution functions are shown for the BRAHMS Au + Au data (Ref. 22) at 0.2 TeV, and for the three LHC energies √s_NN = 2.76, 3.94, 5.52 TeV in Fig. 5(b). Here we have used for the mid-rapidity valley Eq. (11) (as in Figs. 4a and 4b), matched at the point x_2 = 0.01 with the parametrization (cf. Eqs. (7) and (8) in Ref. 41) of the valence quark distribution function which describes the larger rapidities. As is evident from the figure, the transition between the two regimes is fairly smooth. Both up- and down-quark parton distribution functions are considered.

Hadronization does not significantly influence the slope of the net-hadron rapidity distributions since net-proton and net-kaon rapidity distributions are related through a constant factor. Hence, net-baryon and net-charge transport provide a powerful tool to investigate initial-state dynamics in heavy-ion collisions. Finally, a value for the strong-coupling constant α_s ≈ 0.2 has been extracted both from net-proton and net-kaon rapidity distributions in Au + Au at RHIC energies.
6. Conclusion

To conclude, predictions for net-baryon (proton) and net-kaon rapidity distributions in central Pb + Pb collisions at LHC energies have been presented. The investigation started from a nonequilibrium-statistical Relativistic Diffusion Model (Ref. 10), but then turned to a complementary microscopic approach based on quantum chromodynamics. A transparent QCD-based model (Refs. 7, 8) with well-established parton distribution functions in the context of saturation physics allows one to calculate transverse momentum (Ref. 8) and rapidity distributions for net baryons and net kaons (Refs. 7-9). The underlying physical process is the scattering of valence quarks off the gluon condensate in the respective other nucleus.

In the forward rapidity region the position of the fragmentation peak is very sensitive to the gluon saturation scale, such that the saturation-scale exponent could in principle be determined from LHC data should these become accessible in the coming years. In the near future, net-baryon (proton) and net-kaon data in central Pb + Pb collisions at LHC energies will, however, only be available near midrapidity, |y| < 2. In this region detailed results have been presented, which will soon be compared to LHC heavy-ion data.
Acknowledgments

I am grateful to Yacine Mehtar-Tani (now at Departamento de Física de Partículas and IGFAE, Universidade de Santiago de Compostela, Spain) for the close collaboration within a DFG project. This work is supported by the ExtreMe Matter Institute, EMMI.
References

1. K. Aamodt et al. (ALICE Collaboration), Eur. Phys. J. C 65, 111 (2010).
2. V. Khachatryan et al. (CMS Collaboration), JHEP 02, 041 (2010).
3. L. V. Gribov, E. M. Levin and M. G. Ryskin, Phys. Rep. 100, 1 (1983).
4. A. H. Mueller and J. Qiu, Nucl. Phys. B268, 427 (1986).
5. J. P. Blaizot and A. H. Mueller, Nucl. Phys. B289, 847 (1987).
6. L. McLerran and R. Venugopalan, Phys. Rev. D 49, 2233 (1994).
7. Y. Mehtar-Tani and G. Wolschin, Phys. Rev. Lett. 102, 182301 (2009).
8. Y. Mehtar-Tani and G. Wolschin, Phys. Rev. C 80, 054905 (2009).
9. Y. Mehtar-Tani and G. Wolschin, Phys. Lett. B, in press; arXiv:1001.3617 (2010).
10. G. Wolschin, Prog. Part. Nucl. Phys. 59, 374 (2007).
11. G. Wolschin, Eur. Phys. J. A 5, 85 (1999).
12. G. Wolschin, Europhys. Lett. 47, 30 (1999).
13. W. M. Alberico, A. Lavagno and P. Quarati, Eur. Phys. J. C 12, 499 (2000).
14. M. Biyajima, M. Ide, T. Mizoguchi and N. Suzuki, Prog. Theor. Phys. 108, 559 (2002).
15. M. Rybczyński, Z. Wlodarczyk and G. Wilk, Nucl. Phys. B (Proc. Suppl.) 122, 325 (2003).
16. G. Wolschin, Phys. Lett. B569, 67 (2003).
17. G. Wolschin, M. Biyajima, T. Mizoguchi and N. Suzuki, Annalen Phys. 15, 369 (2006).
18. G. Wolschin, Europhys. Lett. 74, 29 (2006).
19. B. B. Back et al. (PHOBOS Collaboration), Phys. Rev. Lett. 93, 102301 (2003).
20. R. Kuiper and G. Wolschin, Annalen Phys. 16, 67 (2007).
21. H. Appelshäuser et al. (NA49 Collaboration), Phys. Rev. Lett. 82, 2471 (1999).
22. I. G. Bearden et al. (BRAHMS Collaboration), Phys. Rev. Lett. 93, 102301 (2004).
23. V. Šimák et al. (eds.), AIP Conf. Proc. 828, 339 (2006).
24. A. M. Staśto, K. Golec-Biernat and J. Kwieciński, Phys. Rev. Lett. 86, 596 (2001).
25. N. Armesto et al. (eds.), J. Phys. G 35, 054001 (2008).
26. I. Balitsky, Nucl. Phys. B463, 99 (1996).
27. J. Jalilian-Marian, A. Kovner, A. Leonidov and H. Weigert, Nucl. Phys. B504, 034008 (1999).
28. E. Iancu, A. Leonidov and L. D. McLerran, Nucl. Phys. A692, 583 (2001).
29. D. Kharzeev, E. Levin and M. Nardi, Nucl. Phys. A747, 609 (2005).
30. J. L. Albacete, Phys. Rev. Lett. 99, 262301 (2007).
31. J. Jalilian-Marian, A. Kovner and H. Weigert, Phys. Rev. D 59, 014015 (1998).
32. D. Kharzeev, Y. V. Kovchegov and K. Tuchin, Phys. Lett. B599, 23 (2004).
33. R. Baier, Y. Mehtar-Tani and D. Schiff, Nucl. Phys. A764, 515 (2006).
34. A. Dumitru, A. Hayashigaki and J. Jalilian-Marian, Nucl. Phys. A765, 464 (2006).
35. L. N. Lipatov, Sov. J. Nucl. Phys. 23, 338 (1976).
36. D. N. Triantafyllopoulos, Nucl. Phys. B648, 293 (2003).
37. C. Blume et al. (NA49 Collaboration), PoS (Confinement) 8, 110 (2008).
38. H. H. Dalsgaard et al. (BRAHMS Collaboration), Int. J. Mod. Phys. E 16, 1813 (2007).
39. R. Debbe et al. (BRAHMS Collaboration), J. Phys. G 35, 104004 (2008).
40. K. Golec-Biernat and M. Wüsthoff, Phys. Rev. D 59, 014017 (1998).
41. A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, Phys. Lett. B531, 216 (2002).
42. K. Itakura, Y. V. Kovchegov, L. D. McLerran and D. Teaney, Nucl. Phys. B730, 160 (2004).
43. S. Albino, B. A. Kniehl and G. Kramer, Nucl. Phys. B803, 42 (2008).
44. I. C. Arsene et al. (BRAHMS Collaboration), Phys. Lett. B677, 267 (2009).
45. F. Videbaek and O. Hansen, Phys. Rev. C 52, 2684 (1995).
46. I. G. Bearden et al. (BRAHMS Collaboration), Phys. Rev. Lett. 94, 162301 (2005).
EXPLORING PHYSICS BEYOND THE STANDARD MODEL WITH A MUON ACCELERATOR FACILITY

A. D. BROSS*
Fermi National Accelerator Laboratory, Batavia, IL 60510, USA
*For the Neutrino Factory and Muon Collider Collaboration
E-mail: [email protected]

An accelerator complex that can produce ultra-intense beams of muons presents many opportunities to explore new physics. This facility is unique in that it can present a physics program that can be staged and thus move forward incrementally, addressing exciting new physics at each step. An intense, cooled, low-energy muon source can be used to perform extraordinarily precise lepton-flavor-violating experiments, and these same muons can be accelerated to be used in a Neutrino Factory or energy-frontier Muon Collider. In this paper I will give an introduction to muon accelerator facilities and their physics capabilities and then discuss some of the limiting technologies that must be developed in order to make these concepts a reality.

Keywords: Neutrino Factory, Muon Collider, ionization cooling
1. Introduction

The physics potential of a high-energy lepton collider has captured the imagination of the high-energy physics community for some time now. Understanding the mechanism behind mass generation and electroweak symmetry breaking, searching for, and perhaps discovering, supersymmetric particles and confirming their supersymmetric nature, and hunting for signs of extra space-time dimensions and quantum gravity constitute some of the major physics goals of an energy-frontier lepton collider. In addition, experiments that can make very-high-precision measurements of Standard Model processes open windows on physics at energy scales far beyond any foreseeable direct reach. The Muon Collider provides a possible realization of a multi-TeV lepton collider, and hence a way to explore new physics beyond the capabilities of present colliders.

A muon accelerator facility also presents the unique opportunity to explore new physics within a number of distinct programs that can be brought online as the facility evolves. A schematic that shows the evolution of a muon accelerator complex which ultimately reaches a multi-TeV Muon Collider (Ref. 1) is given in Fig. 1. The front-end of the facility provides an intense muon source that can perhaps support both a Neutrino Factory and an energy-frontier Muon Collider. The muon source is designed to deliver O(10^21) low-energy muons per year within the acceptance of the accelerator
system, and consists of (i) a multi-MW proton source delivering a multi-GeV proton beam onto a liquid mercury-jet pion production target, (ii) a high-field target solenoid that radially confines the secondary charged pions, (iii) a long solenoidal channel in which the pions decay to produce positive and negative muons, (iv) a system of RF cavities in a solenoidal channel that capture the muons in bunches and reduce their energy spread (phase rotation), and (v) a muon ionization cooling channel that reduces the transverse phase space occupied by the beam by a factor of a few in each transverse direction. At this point the beam could be used for low-energy muon experiments and also will fit within the acceptance of an accelerator system for a Neutrino Factory. However, to obtain sufficient luminosity, a Muon Collider requires a great deal more muon cooling. In particular, the 6D phase-space must be reduced by O(10^6), which requires a longer and more complex cooling channel. Finally, after the cooling channel, the muons are accelerated to the desired energy and injected into decay rings for the Neutrino Factory or into a storage ring for the Muon Collider. In a Neutrino Factory, the ring has long straight sections in which the neutrino beam is produced by the decaying muons. In a Muon Collider, positive and negative muons are injected in opposite directions and collide for about 1000 turns before the luminosity becomes degraded due to the muon decays.
Fig. 1. Schematic of a muon accelerator complex (proton accelerator, pion production target, pion decay channel, muon cooling channel, muon accelerators, and neutrino-factory storage rings / Muon Collider).
2. Low-Energy Muon Physics

One of the first physics programs that a muon accelerator facility could support would be sensitive tests of charged lepton flavor violation (cLFV), such as what
could be explored with a µ → e conversion experiment. In the Standard Model this process occurs via neutrino mixing, but the rate is well below what is experimentally accessible. The rate (or limit on the rate) of this process puts very stringent constraints on physics beyond the Standard Model. For example, supersymmetric models predict the rate to be O(10^-15). The low-energy muon source of the muon accelerator facility provides a potential upgrade path (Ref. 2) for the next round of cLFV experiments currently being planned. This upgrade path could extend their sensitivity by upwards of two orders of magnitude, exploring a mass reach to 4 × 10^4 TeV.

3. The Neutrino Factory

In the Neutrino Factory (Ref. 3), the neutrino beam is generated from muons which decay along the straight section of a racetrack-like decay ring and, since the decay of the muon is well understood, the systematic uncertainties associated with a neutrino beam produced in this manner are very small. In addition, since the muon (anti-muon) decays produce both muon and anti-electron neutrinos (anti-muon and electron neutrinos), many oscillation channels are accessible at a Neutrino Factory and the reach in the neutrino oscillation parameter space is extended. The oscillation processes accessible at a Neutrino Factory are given in Table 1.
e+ ν
→ ¯µ eν ν¯µ → ν¯µ ν¯µ → ν¯e ν¯µ → ν¯τ νe → ν e νe → ν µ νe → ν τ
Oscillation processes in a Neutrino Factory. µ− → e− ν¯e νµ νµ → ν µ νµ → ν e νµ → ν τ ν¯e → ν¯e ν¯e → ν¯µ ν¯e → ν¯τ
disappearance appearance (challenging) appearance (atm. oscillation) disappearance appearance: “golden” channel appearance: “silver” channel
In the so-called “golden” channel listed in Table 1, the experimental signature in the neutrino detector is the presence of a muon with the“wrong” sign, a muon with the opposite sign to that which is stored in the decay ring. This requires that the neutrino detector be magnetized, but for Neutrino Factories with stored muons with energy of approximately 25 GeV, this represents standard neutrino detector technology.4 It has been shown5 that for a magic baseline of approximately 7500 km, the “golden” channel offers unprecedented sensitivity for the determination of the unknown mixing angle θ13 and the mass hierarchy. Adding a second detector at a baseline of approximately 4000 km adds sensitivity to the CP violating phase δ. Over the last decade there have been a number of studies6–9 that have explored the physics reach of Neutrino Factories to measure θ13 , determine the mass hierarchy and determine the CP violating phase, δ. The most recent study to be completed,10 the International Scoping Study of a future Neutrino Factory and super-beam facility (ISS), studied the physics capabilities of various future neutrino facilities: super-beam, β-beam and Neutrino Factory.
3.1. The ISS physics study

The ISS physics study (Ref. 11) set out as a goal to establish a strong physics case for the various proposed facilities and to find the optimum parameters of the accelerator facility and detector systems from a physics point of view. The study looked at super-beam facilities, a β-beam facility and the Neutrino Factory. The five facilities that were studied are:

• a 4 MW facility at CERN (SPL) pointing to a 1 Megaton water Cerenkov detector at a baseline of 130 km (super-beam);
• a 4 MW facility at JPARC (T2HK) pointing to a 1 Megaton water Cerenkov detector at a baseline of 295 km (super-beam);
• a 2 MW facility at FNAL (WBB) pointing to a 1 Megaton water Cerenkov detector at a baseline of 1300 km (super-beam);
• a high-energy β-beam facility (BB350) pointing at a 1 Megaton water Cerenkov detector at a baseline of 730 km;
• a 4 MW Neutrino Factory pointing to two 50 kton magnetized iron detectors at baselines of 4000 and 7500 km plus a 10 kton magnetized emulsion cloud chamber at 4000 km.

Representative results from the study are shown in Fig. 2(a), Fig. 2(b) and Fig. 2(c), where 3σ exclusion contours are shown for the discovery reach in θ_13, the determination of the mass hierarchy and the CP-violating phase δ, respectively.
[Fig. 2 panels (a)-(c): fraction of δ_CP versus the true value of sin² 2θ_13 (GLoBES 2007), with curves for SPL, T2HK, WBB, NF and BB350.]
Fig. 2. (a) NF exclusion plot for θ_13. (b) NF exclusion plot for the mass hierarchy. (c) NF reach in the CP phase δ.
3.2. Neutrino Factory design

The baseline Neutrino Factory design from the International Scoping Study is shown in Fig. 3(a). It consists of a 4 MW proton driver with a 2 ns bunch structure, a Hg-jet target for pion production, capture, drift and phase rotation sections, a muon ionization cooling channel sufficient to reduce the transverse emittance of the muon beam to a level consistent with the accelerator system's acceptance, the
accelerator system and two decay rings pointing to two detectors at baselines of approximately 4000 and 7500 km as mentioned above.
Fig. 3. (a) Neutrino Factory baseline design from the International Scoping Study. (b) A 2 TeV center-of-mass Muon Collider schematic (Fermilab site).
4. The Energy Frontier

The Muon Collider is the final step in the evolutionary process of the muon accelerator facility, and it provides a very attractive possibility for studying the details of Terascale physics after the initial running of the LHC. The Muon Collider can study the same physics that electron-positron linear colliders (ILC and CLIC) address, but compared to these machines a Muon Collider has a very small footprint. It can easily fit on the Fermilab site (see Fig. 3(b)), for example, and contains fewer complex components as a result. In addition, Muon Colliders may have a special role for precision measurements in that the machine potentially has a very small beam energy spread and thus allows for very precise energy scans without the beamstrahlung that exists at multi-TeV electron-positron linear colliders, which can limit their ultimate precision.

5. The Technological Challenges of a Muon Facility

Many of the key technologies and components for a muon accelerator facility are currently under study. The MERIT experiment (Ref. 12) has successfully tested the concept of the liquid Hg-jet target, and it has shown very promising results which indicate that this type of target system can operate at a power level of 4 MW and above. The Muon Ionization Cooling Experiment, MICE (Ref. 13), is preparing to perform
a demonstration and engineering test of 4D muon ionization cooling utilizing 200 MHz RF and liquid hydrogen absorbers. The MuCool program (Ref. 14) is investigating the operation of vacuum RF cavities in the presence of high magnetic fields, has made preliminary studies of liquid hydrogen absorbers, and will also be studying the use of LiH absorbers as an alternative to liquid hydrogen absorbers in the muon cooling channel. The MuCool program focuses on component R&D and, in addition to the capability to test RF components at high power, will have the capability to test cooling channel components with a high-intensity proton beam from the Fermilab linac. The Electron Model with Many Applications (EMMA) experiment (Ref. 15) will study the properties of FFAGs, which are a potential candidate for part of the acceleration system at a muon facility.

Of all the underlying accelerator technologies that are required for the complex, it can be argued that RF technology is the single most important limiting technology. It is of fundamental importance for these facilities in that it is needed in: 1. muon capture, bunching and phase rotation, 2. muon cooling, and 3. acceleration. Both normal-conducting RF (front-end, 1 and 2 above) and superconducting RF (acceleration) are required. A crucial challenge for the front-end design and cooling channels is the operation of high-gradient normal-conducting RF (NCRF) in the presence of a high magnetic field. This problem has been the primary focus of the MuCool program. What has been observed in MuCool is that the safe operating gradient limit degrades significantly when an NCRF cavity is operated in a magnetic field, dropping by approximately a factor of 2 at the B field needed for the cooling-channel lattice. There are a number of models that have been developed that attempt to describe this phenomenon, but all involve field emission from emitters (surface field enhancements in the regions of high gradient) in the cavity. The interaction of the field emission with the magnetic field can cause surface imperfections on the cavity to break off, which then produce a plasma under bombardment by the field-emission current. The plasma then initiates a breakdown. In order to address this problem, three approaches are being investigated: 1. processing the cavities with superconducting RF or atomic layer deposition (Ref. 16), 2. using different materials (such as Be) for the cavity walls, and 3. operating the cavities filled with high-pressure hydrogen gas in order to use the Paschen effect to inhibit breakdowns (Ref. 17).
5.1. 6D muon ionization cooling Intense 6D cooling for the Muon Collider is not yet under experimental study, but is being modeled, studied with theoretical and computation tools and an experimental program exploring some of the major component parts of the system is being developed. At this point in time, we do not have a full end-to-end simulation for the cooling needed for a muon collider, but a self-consistent end-to-end cooling scheme has been developed and is shown schematically in Fig. 4 (a).18 This scheme utilizes the initial transverse cooling scheme from the Neutrino Factory Study 2a
November 22, 2010
11:7
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.08˙Bross
88
(up through step 2 in Fig. 4), the Guggenheim ring-FOFO (Ref. 19), and final cooling with 50 T solenoids and liquid hydrogen absorbers.
Fig. 4. (a) Cooling scheme for the Muon Collider. (b) Schematic of the Guggenheim cooling channel.
The Guggenheim channel uses a large-pitch helical lattice (see Fig. 4(b)) in order to avoid the injection and extraction problems of a true ring-FOFO. The component parts of the Guggenheim are the same as those of the ring-FOFO and include vacuum 201 MHz normal-conducting RF cavities, liquid hydrogen absorbers and solenoids (all of which are components that will be under test in the MICE experiment). This approach has been under study for a number of years, but much work remains to be done in order to study the effects of tapering (changing the RF frequency along the channel), the implementation of realistic B fields and absorber parameters in the simulations, and the effects of windows.

6. US Roadmap to the Future

A proposal that presents an R&D program aimed at completing a Design Feasibility Study (DFS) for a Muon Collider and, with international participation, a Reference Design Report (RDR) for a muon-based Neutrino Factory has been submitted to the US Department of Energy as a joint proposal from the US Neutrino Factory and Muon Collider Collaboration and the Fermilab Muon Collider Task Force (Ref. 20). The goal of the R&D program is to provide the HEP community with detailed information on future facilities based on intense beams of muons and to give clear answers to the questions of the expected capabilities and performance of these muon-based facilities, while providing defensible estimates for their cost. This information,
together with the physics insights gained from the next-generation neutrino and LHC experiments, will allow the HEP community to make well-informed decisions regarding the optimal choice of new facilities.

With regard to muon ionization cooling (or, more generally speaking, the front end of the facility), the R&D plan will embark on both design-and-simulation and hardware efforts. The design and simulation work will study and optimize: 1. pion capture and decay, bunching and phase rotation, 2. precooling, and 3. 6D cooling and final cooling. A full end-to-end simulation of muon production and cooling (through final cooling), with all interfaces between cooling sections, will be a major component of the effort. The hardware effort on cooling has three main objectives: 1. establish the operational viability and engineering foundation for the concepts and components incorporated into the Muon Collider Design Feasibility Study and the Neutrino Factory Reference Design Report, 2. establish the engineering performance parameters of these components, and 3. provide the basis for a defensible cost estimate.

The most critical technical challenge for the Muon Collider is the demonstration of a viable cooling scenario. To this end, the R&D proposal will support the MICE experiment through all its phases and will strive to develop a single scheme for 6D cooling that is backed by rigorous component testing for this chosen scheme. We anticipate critical results from the RF tests in the first two years of our R&D program, at which time we will proceed with building a short cooling section for one cooling scheme. It is not envisioned that a 6D muon ionization cooling demonstration experiment will be performed within this program. Fig. 5 presents what we believe will be the Muon Collider technical foundation after a 5-year program is completed, relative to where we believe the technology is today.
Fig. 5. NF baseline design from the International Scoping Study.
Acknowledgments

I would like to thank my colleagues in the Neutrino Factory and Muon Collider Collaboration and the Fermilab Muon Collider Task Force for all their hard work and support over the years. I also want to acknowledge the tremendous work of all my colleagues who participated in the International Scoping Study. This work was supported by the Fermi National Accelerator Laboratory, which is operated by Fermi Research Alliance, under contract No. DE-AC02-07CH11359 with the U.S. Department of Energy.

References

1. M. Alsharo'a et al. (Muon Collider Collaboration), Phys. Rev. ST Accel. Beams 6, 081001 (2003).
2. C. Y. Yoshikawa et al. [Neutrino Factory and Muon Collider Collaboration], "Intense Stopping Muon Beams," FERMILAB-CONF-09-193-APC.
3. S. Geer, Phys. Rev. D57, 6989 (1998).
4. A. Cervera et al., Nucl. Phys. B579, 17 (2000).
5. P. Huber and W. Winter, Phys. Rev. D68, 037301 (2003).
6. S. Geer and H. Schellman (Eds.), hep-ex/0008064.
7. N. Holtkamp and D. Finley (Eds.), Fermilab-Pub-00/108-E.
8. B. Autin, A. Blondel and J. Ellis, CERN-99-02; A. DeRujula, M. Gavela and P. Hernandez, Nucl. Phys. B547, 21 (1999).
9. Y. Mori, J. Phys. G: Nucl. Part. Phys. 29 (2003).
10. J. S. Berg et al., Accelerator design concept for future neutrino facilities, JINST 4, P07001 (2009).
11. A. Bandyopadhyay et al., Physics at a future Neutrino Factory and super-beam facility, Rept. Prog. Phys. 72, 106201 (2009).
12. H. G. Kirk et al., "The MERIT High-Power Target Experiment at the CERN PS," Proc. 11th European Particle Accelerator Conference (EPAC 08), Magazzini del Cotone, Genoa, Italy, 23-27 June 2008, paper WEPP169.
13. L. Coney [MICE Collaboration], "MICE Overview," arXiv:0910.3479 [physics.ins-det].
14. D. Huang, "RF Studies at Fermilab MuCool Test Area," presented at the 2009 Particle Accelerator Conf. (PAC09), paper TU5PFP032.
15. C. Johnstone [EMMA Collaboration], "Hardware For A Proof-Of-Principle Electron Model Of A Muon FFAG," Nucl. Phys. Proc. Suppl. 155, 325 (2006).
16. J. Norem et al., "Results from Atomic Layer Deposition and Tunneling Spectroscopy for Superconducting RF Cavities," Proc. 11th European Particle Accelerator Conference (EPAC 08), Magazzini del Cotone, Genoa, Italy, 23-27 June 2008, paper WEPP099.
17. R. P. Johnson et al., "Gaseous Hydrogen and Muon Accelerators," AIP Conf. Proc. 671, 328 (2003).
18. R. B. Palmer et al., "A Complete Scheme of Ionization Cooling for a Muon Collider," Proc. 2007 Particle Accelerator Conf. (PAC07), p. 3193 (2007).
19. P. Snopok, G. Hanson and A. Klier, "Recent progress on the 6D cooling simulations in the Guggenheim channel," Int. J. Mod. Phys. A 24, 987 (2009).
20. http://apc.fnal.gov/groups2/MCCC/Muon5yearplanFinalR0.pdf
MEASUREMENT OF LITTLE HIGGS PARAMETERS AT INTERNATIONAL LINEAR COLLIDER

Y. TAKUBO*, M. ASANO, T. KUSANO, R. SASAKI, H. YAMAMOTO
Department of Physics, Tohoku University, Sendai, Miyagi 980-8578, Japan
*E-mail: [email protected]

E. ASAKAWA
Institute of Physics, Meiji Gakuin University, Yokohama, Kanagawa 244-8539, Japan
E-mail: [email protected]

K. FUJII
High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan
E-mail: [email protected]

S. MATSUMOTO
Department of Physics, University of Toyama, Toyama, Toyama 930-8555, Japan
E-mail: [email protected]

We investigate the possibility of precision measurements of the parameters of the Littlest Higgs model with T-parity at the International Linear Collider (ILC). The model predicts new gauge bosons (A_H, Z_H, and W_H), among which the heavy photon (A_H) is a candidate for dark matter. The masses of these new gauge bosons strongly depend on the vacuum expectation value that breaks a global symmetry of the model. Through Monte Carlo simulations of the processes e+e− → A_H Z_H and e+e− → W_H^+ W_H^-, we show how precisely the masses can be determined at the ILC for a representative parameter point of the model. We also discuss the determination of the Little Higgs parameters and its impact on the future measurement of the thermal abundance of the dark matter relics in our universe.

Keywords: Little Higgs model; Dark matter; Relic abundance.
1. Introduction

There is no doubt that the Higgs boson is the most important particle not only for the confirmation of the Standard Model (SM) but also for the exploration of physics beyond the SM. Quadratically divergent corrections to the Higgs mass term suggest that new physics should appear at the scale around 1 TeV. However, electroweak
precision measurements require that the scale be larger than O(10) TeV in order not to conflict with the measurements (Ref. 1). This problem is called the little hierarchy problem, and many people expect that new physics involves some mechanism to solve the problem.

In the Little Higgs scenario, the Higgs boson is regarded as a pseudo Nambu-Goldstone boson associated with a global symmetry at some higher scale. Though the symmetry is not exact, its breaking is specially arranged to cancel quadratically divergent corrections to the Higgs mass term at the 1-loop level. This is called the Little Higgs mechanism. As a result, the scale of new physics can be as high as 10 TeV without a fine-tuning of the Higgs mass term. Due to the symmetry, the scenario necessitates the introduction of new particles such as heavy gauge bosons and top partners. In this article, we focus on the Littlest Higgs model with T-parity (Refs. 2-4). Requiring T-parity, new particles are assigned to be T-odd (i.e. with a T-parity of −1), while the SM particles are T-even. Furthermore, the lightest T-odd particle is stable and provides a good candidate for dark matter. The heavy photon plays the role of dark matter in this model (Refs. 4-7).

In order to test the Little Higgs model, measurements of the heavy gauge boson masses are quite important. Since the heavy gauge bosons acquire mass terms through the breaking of the global symmetry mentioned above, precise measurements of their masses allow us to determine the most important parameter of the model, namely the vacuum expectation value of the breaking. Furthermore, because the heavy photon is a candidate for dark matter, the determination of its properties has a great impact not only on particle physics but also on astrophysics and cosmology.

The International Linear Collider (ILC) will provide an ideal environment to measure the properties of the heavy gauge bosons. The ILC is the future electron-positron linear collider for the next generation of high-energy frontier physics. At the ILC, electrons and positrons are accelerated by two opposing linear accelerators installed in an about 30 km long underground tunnel, and are brought into collision with a center-of-mass energy of 500 GeV-1 TeV. Heavy gauge bosons are expected to be produced in a clean environment at the ILC, which enables us to determine their properties precisely.

In this article, we study the sensitivity of the measurements to the Little Higgs parameters at the ILC based on a realistic Monte Carlo simulation. In addition, from the simulation results, we estimate the capability of the ILC to determine the thermal abundance of the dark matter (heavy photon) relics in our universe. In this article, we summarize the results shown in Ref. 8 and an update of the study.
2. Representative Point in Parameter Space

In order to perform a numerical simulation at the linear collider, we need to choose a representative point in the parameter space of the Littlest Higgs model with T-parity. Firstly, the model parameters should satisfy the current electroweak precision data.
Fig. 1. Diagrams for the signal processes e+e− → A_H Z_H and e+e− → W_H^+ W_H^-.
In addition, the cosmological observation of dark matter relics also gives important information. Thus, we consider not only the electroweak precision measurements but also the WMAP observation (Ref. 9) to choose a point in the parameter space. We have selected a representative point (f, m_h) = (580 GeV, 134 GeV), fitting a χ² function with the W boson mass (m_W = 80.412 ± 0.042 GeV), the effective leptonic weak mixing angle (sin²θ_eff^lept = 0.23153 ± 0.00016), the leptonic width of the Z boson (Γ_l = 83.985 ± 0.086 MeV) (Ref. 10), and the relic abundance of dark matter (Ω_DM h² = 0.119 ± 0.009) (Ref. 11). At the representative point, we have obtained an Ω_DM h² of 0.105. The masses of the heavy gauge bosons are 81.9 GeV, 368 GeV, and 369 GeV for A_H, W_H, and Z_H, respectively, where A_H, W_H, and Z_H are the Little Higgs partners of the photon, W boson, and Z boson, respectively. It can be seen that all the heavy gauge bosons are lighter than 500 GeV, which allows us to consider their pair production at the ILC. In this article, we have studied A_H Z_H production at √s = 500 GeV and W_H^+ W_H^- production at √s = 1 TeV. Feynman diagrams for the signal processes are shown in Fig. 1. Note that Z_H decays into A_H h, and W_H^± decays into A_H W^±, with almost 100% branching fractions.
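The χ² used to select the representative point can be written down schematically as below. This is only an illustrative sketch: the function that maps (f, m_h) to the model predictions for m_W, sin²θ_eff^lept, Γ_l and Ω_DM h² is left as a hypothetical placeholder, since it requires the full Littlest Higgs and relic-abundance calculation that is not reproduced here; the measured central values and errors are the ones quoted above.

```python
# Observed values used in the fit (central value, uncertainty), as quoted in the text.
OBSERVED = {
    "m_W":      (80.412,  0.042),     # GeV
    "sin2_eff": (0.23153, 0.00016),
    "Gamma_l":  (83.985,  0.086),     # MeV
    "Omega_h2": (0.119,   0.009),
}

def chi2(f, m_h, model_prediction):
    """Sum of squared pulls over the four observables.

    `model_prediction(f, m_h)` must return a dict with the same keys as
    OBSERVED; it stands in for the Littlest Higgs + relic-abundance
    calculation (a placeholder, not provided here).
    """
    pred = model_prediction(f, m_h)
    return sum(((pred[k] - val) / err) ** 2 for k, (val, err) in OBSERVED.items())

# A scan over (f, m_h) minimizing chi2 would then single out the
# representative point (f, m_h) = (580 GeV, 134 GeV) used in this study.
```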
3. Simulation Tools

We have used MadGraph (Ref. 12) to generate signal events of the Little Higgs model, while Standard Model events have been generated by Physsim (Ref. 13). We have ignored the finite crossing angle between the electron and positron beams. In both event generations, helicity amplitudes were calculated using the HELAS library (Ref. 14), which allows us to deal with the effect of gauge boson polarizations properly. Phase space integration and the generation of parton 4-momenta have been performed by BASES/SPRING (Ref. 15). Parton showering and hadronization have been carried out using PYTHIA 6.4 (Ref. 16), where final-state tau leptons are decayed by TAUOLA (Ref. 17) in order to handle their polarizations correctly. In the event generation, the initial state radiation, beamstrahlung, final state radiation, and beam energy spread are included correctly for W_H^+ W_H^- at √s = 1 TeV, whereas they are not included for A_H Z_H at √s = 500 GeV. The generated Monte Carlo events have been passed to a detector simulator called JSFQuickSimulator, which implements the GLD geometry and other detector-performance related parameters (Ref. 18).
4. Simulation Study

In this section, we present some results from our simulation study of heavy gauge boson production. The simulation has been performed at √s = 500 GeV for A_H Z_H production and at √s = 1 TeV for W_H^+ W_H^- production, with an integrated luminosity of 500 fb⁻¹ each.

4.1. A_H Z_H production

The heavy gauge bosons A_H and Z_H are produced with a cross section of 1.9 fb at a center-of-mass energy of 500 GeV. Since Z_H decays into an A_H and a Higgs boson, the signature is a single Higgs boson in the final state, mainly 2 jets from h → bb̄ (with a 55% branching ratio). We therefore define A_H Z_H → A_H A_H bb as our signal event. For background events, the contribution from light quarks was not taken into account, because such events can be rejected to a negligible level after requiring the existence of two b-jets, assuming a b-tagging efficiency of 80% for b-jets with a 15% probability to misidentify a c-jet as a b-jet. This b-tagging performance was estimated by the full simulation assuming a typical ILC detector. Signal and background processes considered in this analysis are summarized in Table 1.
Table 1. Signal and background processes considered in the A_H Z_H analysis.

Process                  Cross sec. [fb]   # of events   # of events after all cuts
A_H Z_H → A_H A_H bb     1.05              525           272
ννh → ννbb               34.0              17,000        3,359
Zh → ννbb                5.57              2,785         1,406
tt → WWbb                496               248,000       264
ZZ → ννbb                25.5              12,750        178
ννZ → ννbb               44.3              22,150        167
γZ → γbb                 1,200             600,000       45
The clusters in the calorimeters are combined to form a jet if the two clusters satisfy y_ij < y_cut. y_ij is defined as

y_ij = 2 E_i E_j (1 − cos θ_ij) / E_vis²,   (1)
where θ_ij is the angle between the momenta of the two clusters, E_i(j) are their energies, and E_vis is the total visible energy. All events are forced to have two jets by adjusting y_cut. We have selected events with the reconstructed Higgs mass in a window of 100-140 GeV. In order to suppress the ννh → ννbb background, the transverse momentum of the reconstructed Higgs boson (p_T) is required to be above 80 GeV. This is because the Higgs bosons coming from the WW-fusion process, which dominates the ννh → ννbb background, have p_T mostly below the W mass. Finally, multiplying by the efficiency of double b-tagging (0.8 × 0.8 = 0.64), we are left with 272 signal and 5,419 background events as shown in Table 1, which corresponds to a signal significance of 3.7 (= 272/√5419) standard deviations. An indication of the new physics signal can hence be obtained at √s = 500 GeV.
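The significance quoted above follows directly from the event counts in Table 1; the short sketch below (an illustrative re-computation, not part of the original analysis) reproduces it.

```python
import math

# Events after all cuts, from Table 1.
signal = 272
backgrounds = [3359, 1406, 264, 178, 167, 45]   # nu-nu-h, Zh, tt, ZZ, nu-nu-Z, gamma-Z

b_total = sum(backgrounds)                       # 5419
significance = signal / math.sqrt(b_total)       # simple S/sqrt(B) estimate
print(f"B = {b_total}, significance = {significance:.1f} sigma")   # about 3.7 sigma
```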
The A_H and Z_H boson masses can be estimated from the edges of the distribution of the reconstructed Higgs boson energies. This is because the maximum and minimum Higgs boson energies (E_max and E_min) are written in terms of these masses,

E_max = γ_ZH E_h* + β_ZH γ_ZH p_h*,   E_min = γ_ZH E_h* − β_ZH γ_ZH p_h*,   (2)

where β_ZH (γ_ZH) is the β (γ) factor of the Z_H boson in the laboratory frame, while E_h* (p_h*) is the energy (momentum) of the Higgs boson in the rest frame of the Z_H boson. Note that E_h* is given as (M_ZH² + M_h² − M_AH²)/(2 M_ZH).
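For orientation, the kinematic edges of Eq. (2) can be evaluated at the representative masses quoted in Sec. 2. The sketch below is an illustrative forward calculation only; the use of exact two-body A_H Z_H kinematics with no radiation is our assumption. It gives edges of roughly 170 GeV and 240 GeV, consistent with the energy range plotted in Fig. 2.

```python
import math

SQRT_S = 500.0                            # GeV
M_AH, M_ZH, M_H = 81.85, 368.2, 134.0     # GeV, representative point of Sec. 2

# Z_H energy and momentum in e+e- -> A_H Z_H (two-body kinematics).
E_ZH = (SQRT_S**2 + M_ZH**2 - M_AH**2) / (2.0 * SQRT_S)
p_ZH = math.sqrt(E_ZH**2 - M_ZH**2)
gamma, beta_gamma = E_ZH / M_ZH, p_ZH / M_ZH

# Higgs energy and momentum in the Z_H rest frame (Z_H -> A_H h).
E_h_star = (M_ZH**2 + M_H**2 - M_AH**2) / (2.0 * M_ZH)
p_h_star = math.sqrt(E_h_star**2 - M_H**2)

E_max = gamma * E_h_star + beta_gamma * p_h_star
E_min = gamma * E_h_star - beta_gamma * p_h_star
print(f"E_min = {E_min:.0f} GeV, E_max = {E_max:.0f} GeV")
```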
Fig. 2. (a) Energy distribution of the reconstructed Higgs bosons with remaining backgrounds after the selection cuts. (b) Energy distribution of the Higgs bosons after subtracting the backgrounds. The distribution is fitted by a line shape function determined with a high-statistics signal sample.
The energy distribution of the reconstructed Higgs bosons with remaining backgrounds is depicted in Fig. 2(a). The signal distribution after the backgrounds have been subtracted is shown in Fig. 2(b). The endpoints E_max and E_min have been estimated by fitting the distribution with a line shape determined by a high-statistics signal sample. The fit resulted in m_AH and m_ZH being 83.2 ± 13.3 GeV and 366.0 ± 16.0 GeV, respectively, which should be compared to their true values: 81.85 GeV and 368.2 GeV.

4.2. W_H^+ W_H^- production
W_H^+ W_H^- production has a large cross section (277 fb) at the ILC with √s = 1 TeV. Since W_H^± decays into A_H and W^± with a 100% branching ratio, the analysis procedure depends on the W decay modes. In this analysis, we have used 4-jet final states from hadronic decays of the two W bosons, W_H^+ W_H^- → A_H A_H qqqq. Signal and background processes considered in the analysis are summarized in Table 2.
Table 2. Signal and background processes considered in the W_H^+ W_H^- analysis.

Process                          Cross sec. [fb]   # of events   # of events after all cuts
W_H^+ W_H^- → A_H A_H qqqq       106.5             53,258        37,560
W^+ W^- → qqqq                   1773.5            886,770       306
e+e− W^+ W^- → e+e− qqqq         464.9             282,500       23
eν_e WZ → eν_e qqqq              25.5              12,770        3,696
Z_H Z_H → A_H A_H qqqq           99.5              49,741        3,351
νν̄ W^+ W^- → νν̄ qqqq            6.5               3,227         1,486
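The event counts in Table 2 (and Table 1) are simply the cross sections multiplied by the assumed integrated luminosity of 500 fb⁻¹; the following one-liner sketch (illustrative only) reproduces the signal entries.

```python
LUMINOSITY = 500.0   # fb^-1, integrated luminosity assumed in the text

def expected_events(cross_section_fb, lumi_fb=LUMINOSITY):
    """N = sigma * L, used to fill the '# of events' columns."""
    return cross_section_fb * lumi_fb

print(expected_events(106.5))   # ~53,250, cf. 53,258 in Table 2 (rounding aside)
print(expected_events(1.05))    # 525, the signal entry of Table 1
```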
All events have been reconstructed as 4-jet events by adjusting the cut on the y-values. In order to identify the two W bosons from the W_H^± decays, two jet pairs have been selected so as to minimize a χ² function,

χ² = (M_W1^rec − M_W^tr)²/σ_MW² + (M_W2^rec − M_W^tr)²/σ_MW²,   (3)
where M_W1(2)^rec is the invariant mass of the first (second) 2-jet system paired as a W candidate, M_W^tr is the true W mass (80.4 GeV), and σ_MW is the resolution for the W mass (4 GeV). We required χ² < 26 to obtain well-reconstructed events. Since the A_H bosons escape detection, resulting in a missing momentum, the missing transverse momentum (p_T^miss) of the signal peaks at around 175 GeV. We have thus selected events with p_T^miss above 84 GeV. The numbers of events after the selection cuts are shown in Table 2. Notice that e+e− W^+W^- events are reduced to a negligible level after imposing all the cuts. The number of remaining W^+W^-, eν_e WZ, Z_H Z_H, and νν̄ W^+W^- background events is much smaller than that of the signal.

As in the case of A_H Z_H production, the masses of the A_H and W_H bosons can be determined from the edges of the W energy distribution. The energy distribution of the reconstructed W bosons is depicted in Fig. 3(a). After subtracting the backgrounds from Fig. 3(a), the distribution has been fitted with a line shape determined by a high-statistics signal sample, as shown in Fig. 3(b). The fitted masses of the A_H and W_H bosons are 82.46 ± 1.18 GeV and 367.8 ± 0.83 GeV, respectively, which are to be compared to their input values: 81.85 GeV and 368.2 GeV. Figure 4 shows the probability contours for the masses of A_H and W_H at 1 TeV together with those of A_H and Z_H at 500 GeV. The mass resolution improves dramatically at √s = 1 TeV, compared to that at √s = 500 GeV.

5. Discussions

Since the masses of the heavy gauge bosons derive from the vacuum expectation value f, it is also possible to determine f by using the masses of the heavy gauge bosons obtained in the previous section. The parameter f is determined to be f = 576.0 ± 25.0 GeV from the process e+e− → A_H Z_H at √s = 500 GeV, while f = 579.8 ± 1.1 GeV from the process e+e− → W_H^+ W_H^- at √s = 1 TeV. Note that the input value of f is 580 GeV in our simulation study.

Since the Little Higgs model has a candidate for WIMP dark matter (Refs. 4, 5), the most important physical quantity relevant to the connection between cosmology and the ILC experiment is the thermal abundance of dark matter relics.
Fig. 3. (a) The energy distribution of the reconstructed W bosons with remaining backgrounds after the selection cuts. (b) The energy distribution of the W bosons after the subtraction of the backgrounds. The distribution is fitted by a line shape function determined with a high-statistics signal sample.
Fig. 4. Probability contours corresponding to (a) 1- and 2-σ deviations from the best-fit point in the A_H and Z_H mass plane, and (b) 1-, 3-, and 5-σ deviations in the A_H and W_H mass plane. The shaded area in (a) shows the unphysical region of m_AH + m_ZH > 500 GeV.
It is well known that the abundance is determined by the annihilation cross section of dark matter (Ref. 19). In the Little Higgs model, the cross section is determined by f and m_h in addition to well-known gauge couplings (Ref. 4). The Higgs mass m_h is expected to be measured very accurately at the ILC experiment (Refs. 20-23), so that it is quite important to measure f accurately to predict the abundance.

Figure 5 shows how accurately the relic abundance can be determined at the ILC with center-of-mass energies of 500 GeV and 1 TeV.
Fig. 5. The probability density of Ωh² at √s = 500 GeV and 1 TeV obtained from the results of our simulation study. The measurement accuracies of cosmological observations (WMAP and PLANCK) are also shown as shaded regions.
The probability density of Ωh², which is obtained from the results in the previous section, is depicted in the figure. As shown in the figure, the abundance will be determined with O(10%) accuracy even at √s = 500 GeV, which is comparable to the WMAP observation. At √s = 1 TeV, the accuracy will improve to the 1% level, which stands up to that expected for future cosmological observations such as from the PLANCK satellite (Ref. 24). The measurement accuracies of these cosmological observations are also shown in the figure in order to see the connection between the ILC experiment and cosmology.

6. Summary

The Littlest Higgs model with T-parity is one of the attractive candidates for physics beyond the Standard Model, for it solves both the little hierarchy and dark matter problems simultaneously. One of the important predictions of the model is the existence of new heavy gauge bosons, which acquire mass terms through the breaking of the global symmetry necessarily imposed on the model. In this article, we have performed Monte Carlo simulations in order to estimate the measurement accuracies of the masses at the ILC for a representative parameter point of the model.

At the ILC with √s = 500 GeV, it is possible to produce A_H and Z_H bosons with a signal significance at the 3.7σ level. Furthermore, by observing the energy distribution of the Higgs bosons from the Z_H decays, the masses of these bosons can be determined with accuracies of 16.2% for m_AH and 4.3% for m_ZH. Once the ILC energy reaches √s = 1 TeV, the process e+e− → W_H^+ W_H^- opens. Since the cross section of the process is large, the masses of W_H and A_H can be determined as accurately as 1.4% and 0.2%, respectively.

We have also investigated how accurately the Little Higgs parameters can be determined at the ILC. From the results obtained in our simulation study, it turns
November 22, 2010
14:7
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.09˙Takubo
99
out that the vacuum expectation value, f , can be determined with accuracies of √ 4.3% at s = 500 GeV and 0.2% at 1 TeV. Finally, we have discussed the connection between the ILC experiment and cosmology, focusing on the thermal abundance of dark matter relics, which is the most important physical quantity for the connection. We have found that the abundance √ can be determined with 10% and 1% levels at s = 500 GeV and 1 TeV, respectively. These accuracies are comparable to those of current and future cosmological observations for the cosmic microwave background, implying that the ILC experiment will play an essential role to understand the thermal history of our universe. Acknowledgments The authors would like to thank all the members of the ILC physics subgroup25 for useful discussions. They are grateful to the Minami-tateya group for the help extended in the early stage of the event generator preparation. This work is supported in part by the Creative Scientific Research Grant (No. 18GS0202) of the Japan Society for Promotion of Science and the JSPS Core University Program. References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25.
R. Barbieri and A. Strumia, Phys. Lett. B 433, 63 (1998). J. Hubisz and P. Meade, JHEP 0408, 061 (2004). I. Low, JHEP 0410, 067 (2004). J. Hubisz and P. Meade, Phys. Rev. D 71, 035016 (2005). M. Asano, S. Matsumoto, N. Okada and Y. Okada, Phys. Rev. D 75, 063506 (2007). A. Birkedal, A. Noble, M. Perelstein and A. Spray, Phys. Rev. D 74, 035002 (2006). M. Perelstein and A. Spray, Phys. Rev. D 75, 083519 (2007). E. Asakawa and et al., Phys. Rev. D 79, 075013 (2009). E. Komatsu and et. al. [WMAP Collaboration], arXiv:0803.0547 [astro-ph] . L. O. S. C. L. E. W. G. S. E. G. ALEPH, DELPHI and S. H. F. Group, Phys. Rept. 427, 257 (2006). R. R. d. Austri, R. Trotta and L. Roszkowski, JHEP 0605, 002 (2006). http://madgraph.hep.uiuc.edu. http://acfahep.kek.jp/subg/sim/softs.html. H. Murayama, I. Watanabe and K. Hagiwara, KEK-91-11 , 184 (1992). T. Ishikawa, T. Kaneko, K. Kato and S. Kawabata, Comp. Phys. Comm. . T. Sjostrand, ˙ Comp. Phys. Comm. . http://wasm.home.cern.ch/wasm/goodies.html. GLD Detector Outline Document, arXiv:physics/0607154. E. W. Kolb and M. S. Turner, The Early Universe (Addison-Wesley, Reading, MA, 1990). P. Garcia-Abia and W. Lohmann, Eur. Phys. J. direct C 2, 2 (2000). N. T. Meyer and K. Desch, Eur. Phys. J. C 35, 171 (2004). P. Garcia-Abia, W. Lohmann and A. Raspereza, arXiv:hep-ex/0505096 . F. Richard and P. Bambade, arXiv:hep-ph/0703173 . P. Collaboration, arXiv:astro-ph/0604069 . http://ww-jlc.kek.jp/subg/physics/ilcphys.
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
100
PETAVAC: BOSON-BOSON COLLIDING BEAMS AT 100 TEV IN THE SSC TUNNEL PETER McINTYRE
∗
and AKHDIYOR SATTAROV
Department of Physics & Astronomy, Texas A&M University, College Station, TX 77845, USA ∗ E-mail:
[email protected] www.tamu.edu Recent developments in accelerator physics and superconducting magnet technology make it reasonable to design a hadron collider with 90 TeV collision energy and > 1035 cm−2 s−1 to be located in the existing SSC tunnel. We present a conceptual design for Petavac, a 100 TeV collider in which a ring of double-bore 15 T magnets is located in the SSC tunnel. We discuss the physics reach of Petavac, in which boson-boson collisions should dominate for production of new gauge phenomena. We discuss ways to control synchrotron radiation, electron cloud effect, and beam-beam interactions Keywords: Style file; LATEX; Proceedings; World Scientific Publishing.
1. Introduction Proton-antiproton colliding beams1 have produced three generations of discovery for high energy physics: the discovery of the electroweak bosons2 , the discovery of the top quark3 , and currently the search for the Higgs boson4 and supersymmetry5 . The Large Hadron Collider (LHC) at CERN6 is now producing its first collisions and will become the new center of effort for discovery7 . It takes a decade to prepare the technology and physics case for a next generation of collider. Today, at the dawn of the LHC era, is the appropriate time to examine the options for such next-generation facilities, and for each option to examine its physics potentia l8 and its requirements for accelerator physics and related technology. Options that are being evaluated to date include the International Linear Collider9 , CLIC10 , a multi-TeV muon collider11 , Project X12 , and an LHC energy upgrade13 . Although most of these proposed facilities would provide one or another complementary look at the particles that are accessible at the energy scale that will be opened by LHC, none would make it possible to extend significantly beyond that scale. In what follows the case for a hadron collider of 90 TeV will be examined: its phys-ics potential, the technology for its 15 T magnet ring; control of synchrotron light, elec-tron cloud effect, and beam-beam tune shift for ultimate luminosity; and the present status of the tunnel in Waxahachie.
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
101
2. Context for High Energy Physics At asymptotically high energy collisions of a proton are well approximated by the scattering of their constituent quarks and gluons, in which the energy share of each constituent is described by the parton distribution functions f (x, Q2 ). Dutta14 has used the CTEQ6 distribution functions to calculate the parton luminosity τsˆ dL dτ √ for gg scattering for three cases: the Tevatron ( collisions at s = 2T eV ), LHC √ √ (pp collisions at s = 14T eV ), and Petavac (pp collisions at s = 100T eV ). His results are shown in Fig. 1(b). The ability to access signals from new physical processes scales directly with these parton luminosities. The signal cross section for any given conjectural process can be obtained by integrating its dynamical cross-section through either channel with the parton luminosity for that channel. From Fig. 1(b) one can conclude that the Petavac would provide the same factor increase of mass reach for new physics beyond LHC as LHC does beyond Tevatron. At Petavac collision energy it is reasonable to expect that the highest mass scales would be accessed through boson-fusion diagrams such as the one shown in Fig. 1(a); the proton would in this respect exhibit intrinsic weakness akin to its intrinsic charm and beauty. The Petavac would provide a platform to explore gauge fields to mass scales of ∼5 TeV, covering a wide domain of conjectures about supersymmetry, new extra dimensions, and string phenomenology.
(a)
(b)
Fig. 1. a) Dominant Feynman diagrams for new heavy particle production at Tevatron, LHC, and Petavac; b) parton luminosities through g-g vertex at all three colliders.
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
102
Fig. 2.
Wire critical current density vs. field in round-wire superconductors at 4.2 K 15 .
3. High-Field Dipole Technology The technology for a hadron collider is paced by the ring of dipoles that bends the beams in a circular orbit. The orbit radius depends upon the beam momentum p p and the dipole magnetic field B as ρ = eB . There is an obvious premium to utilize the highest feasible magnetic field strength, with correspondingly smaller ring radius. Field strength is in turn limited by the superconducting alloy itself and the control of stored energy and Lorentz stress within the coils. All colliders to date have utilized the superconductor NbTi. Fig. 2 shows the current density jc as a function of B for several practical superconductors at 4.2 K temperature. The practical limit of magnetic field strength for NbTi-based dipoles is ∼8 T (with cooling to 1.8 K), the field chosen for CERNs LHC. Higher field requires the use of N b3 Sn, an A15-phase alloy which can support fields up to ∼17 T in coils. For the past decade there has been a sustained program to develop highperformance multi-filament wire using N b3 Sn16 and to develop coil technology for using it in high-field dipoles17 . That program has resulted in excellent wire, with current density jc = 3000A/mm218 , which is available today in km piece length as a manufactured product. Dipoles and quadrupoles using N b3 Sn windings must operate with very high compressive stress and high stored energy. Unlike NbTi, which is a tough, malleable alloy, N b3 Sn is brittle and fractures under stress greater than ∼150 MPa. Since the Lorentz stress σ in windings increases ∼ B 2 , it is necessary to control such stress in order to operate dipoles beyond ∼12 T. A succession of innovations has led to the successful testing of a first 16 T short-model dipole, retaining 95% of the
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
103
short-sample performance in its wire19 . Techniques have been developed to control Lorentz stress20 , preload windings21,22 , control low-field instability23 , compact insulation24 , and suppress persistent-current mul-tipoles at injection field25 .
(b)
(a)
(c)
(d)
Fig. 3. a) 15 T dual dipole for Petavac; b) 1.5 T Superferric injector shown in same scale; c) Decoupling of Lo-rentz stress between inner and outer windings in one layer of the Petavac dipole; d) Magnetic field distribution in one quadrant of the dipole, showing suppression of multipoles by a horizontal flux plate.
The LHC Accelerator R&D Program (LARP) has worked for the past six years to develop long-length quadrupole magnets using N b3 Sn coils to provide enhanced focusing in the low-beta insertions at the collision points in LHC26 . They have recently success-fully fabricated and tested a 3.6 m-longN b3 Sn quadrupole27 . While much development remains to be done, the above work lays a solid foundation for
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
104
developing collider di-poles and quadrupoles with maximum field up to ∼16 Tesla. In order to explore the parameters of such magnets, a preliminary designs has been developed for a 15 Tesla dipole, shown in Fig. 3(a) and for a 400 T/m quadrupole. The main parameters of the dipole are given in Table 1. Note that the windings are arranged in block elements. A high-strength support ma-trix is integrated within the block coil structure to intercept Lorentz stress midway through the windings and transfer it to the surrounding pre-loaded flux return so that it cannot accumulate to crushing levels anywhere within the coil. This stress resulting stress distribution for a similar dipole design is shown in Fig. 3(b). The maximum stress within the coils of the 15 T dipole is 150 MPa, below the limit for strain degradation. Note also that the dipole incorporates a horizontal flux plate, shown in detail in Fig. 3(c). The thin plate of magnetic steel is unsaturated at injection field and so imposes a strong dipole boundary condition that suppresses the multipoles that would be produced by persistent currents in the filaments of the superconducting strands. This provision is important to provide for a ∼10:1 working range between injection and collision field strength without introducing beam growth instabilities. It is interesting to note that the total cross-sectional area of N b3 Sn superconductor required in the coil of the 15 Tesla dipole is only 20% more than that of NbTi supercon-ductor required in the LHC dipole. Currently N b3 Sn is ∼6 times more expensive than NbTi, but most of the difference is because NbTi is manufactured in large-billet processing for large-volume demand, while N b3 Sn wire is made in small R&D billets with small demand. Happily, ITER is currently procuring ∼400 tons of N b3 Sn wire, which is moving its manufacture from small- to large-billet basis. The 100 TeV storage ring would require ∼5000 tons of N b3 Sn wire, ten times more than ITER. Could one push to yet higher magnetic field? We have prepared a magnetic design that would use only N b3 Sn and operate at 16.5 T to yield 100 TeV collision energy, but it would require twice as much expensive superconductor as the 15 T design presented here. We also prepared a 24 T dipole design containing inner windings of Bi-2212 and outer windings of N b3 Sn. That magnet could provide a basis for an LHC Tripler in the LHC tunnel13 , but luminosity performance would be limited by the short straight section length in the LHC tunnel. Excellent progress has been made towards developing coil technology using Bi-221228 , but much work remains before it will be ready for long di-poles for future colliders.
4. 90 TeV Collider in the SSC Tunnel The boring of the SSC tunnel was ∼70% complete at the time when the project was can-celled. Although only 30% of the circumference was lined, the entire tunnel should be intact although water-filled; it could be completed at a modest fraction of the cost of the tunnel itself. The SSC was the first and only hadron collider for which the tunnel was designed from the outset to provide for high-luminosity multi-
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
105
TeV colliding beams. The straight sections and the flanking segments in a hadron collider must accommodate require long strings of magnet elements for the low-beta insertion and for injection and abort systems. The Tevatron tunnel was designed to accommodate a 400 GeV synchrotron, not a TeV/beam collider. It has 50 m-long straight sections, only 5% of the bend radius. LHC occupies the LEP tunnel, which was designed to accommodate a 100 TeV e+ e− collider, not a 7 TeV /beam collider. It has 500 m-long straight sections, 15% of bend radius.
(b)
(a)
Fig. 4. SSC tunnel: a) vault for collider experiment; layout of 4.2 m diameter tunnel with 15 T collider dipole and 1.5 T superferric injector
The SSC tunnel has 7 km straight sections, designed to accommodate a string of high-luminosity collision points in succession with ample room for injection and abort. The lattice of Petavac can be designed to accommodate the very highest luminosity collisions and the most effective methods for scraping, tune shift correction, and suppression of luminosity-limiting phenomena. An example Petavac lattice is shown in Fig. 5(a). The luminosity L is directly related to the total beam-beam tune shift ξ produced by head-on and long-range tune shifts: ξ = NIR
r0 Np 4π
L=
3γξ F (σl /β ∗ ) (Bf )Np ∗ β 1 + p¯/p
(1)
4.1. Correction of beam-beam tune shift The lens action of each beam upon the other produces a beam-beam tune shifts ξx,y . Because the beams have an approximately Gaussian profile, the tune of each proton is shifted differently according to its transverse displacement from equilibrium orbit each time it passes through the other beam. If the shifted tune approaches an
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
106
integer-ratio resonance in the plot of horizontal vs. vertical betatron tune, weak non-linear couplings would excite long-term emittance growth and even rapid blowup of the beams. This limitation has until now posed a hard limit to luminosity because each succeeding bunch ’n’ in a store sees different values of beam-beam tune shift ξnx,y as it traverses the collid-er, so that the tunes of the ensemble appear spread when plotted as in Fig. 5(b) even though each bunch occupies only a small region. Over the past decade an electron lens system has been developed that can be used to compensate ξnx,y for each bunch29 . A few-ampere electron beam of velocity βe c is brought into alignment with the circulating proton beam over a length Le , and produces a tune shift upon the co-moving beam that is comparable to that of the other beam: ξex,y = ±
(a)
βx,y Le rp 1 ∓ βe je ( ) 2γec βe
(2)
(b)
Fig. 5. a) Lattice for one quadrant of Petavac, showing betatron functions and dispersion; b) tune plot of Tevatron, in which the beam-beam tune shift is shown for each bunch through the duration of a store. Resonance lines up to 12th order are shown (from 29 ). The arrows show the tune corrections of example bunches.
In order to compensate the tune of each bunch, one must be able to measure the tune of each bunch. This has become possible in the past two years with the development of the a.c. dipole30 , a small dipole that is modulated at ∼100 Hz just enough to create a tiny amplitude of coherent betatron motion in the beam. This motion is readily detected and yields direct measurement of the tune of each bunch in each beam Two electron lenses can be located so that one is phased (βx >> βy ) to correct ξn x and the other is phased (βx << βy ) to correct ξn y. The electron current in each
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
107
electron lens can be time-modulated so that the correction of each bunch can be individually tuned29 . The importance of this can be seen in Fig. 5(b): each bunch in the 36-bunch train experiences a different beam-beam tune shift. The tune shift of each bunch can be separately measured, and separately corrected, so that all bunches can be maintained at a desired tune value. This ’tune cooling’ opens a new chapter in beam dynamics for future colliders. 4.2. Synchrotron radiation An obvious concern for a 90 TeV collider is synchrotron radiation. The power P radiated into synchrotron radiation and the critical energy Ec of its spectrum are P =
2 e2 c β 4 γ 4 3 4π0 ρ2
Ec =
3 γ 3 ~c 2 ρ
(3)
The power radiated in one 20 m dipole is 840 W! Were this to be intercepted on a cryo-genic surface, it would be unfeasible to sustain refrigeration. Table 1.
collision energy E luminosity bunch spacing Tb # interactions/ collision IR total length IR optics: βmin βmax beam-beam tune shift ξ low-β gradient bend radius ρ # dipoles particles/bunch: p (¯ p) emittance p store time Ts synchrotron radiation: power/magnet/bore critical energy energy loss/turn damping time: longitudinal transverse dipoles: operating temp central field bore radius length stored energy conductor area/bore max coil stress
Main parameters of hadron colliders. Tevatron
SSC
LHC
Petavac
2 3x1032 396 5 50
40 1033 16 1 7255
14 1034 25 11 528
100 4x1035 16 150 7255
TeV cm−2 s−1 ns
0.35 800 .0022 141 0.8 840 3 (0.8) 20 24
0.5 7700 .0037 230 10.2 2x3832 0.1 1 24
0.55 4400 .0024 200 2.8 1250 1.1 3.75 24
0.5 9600 .0078 450 10.2 3200 1 1 6
m m
1011 π10−6 m h
5 280 .1
6 44 .007
1700 3200 3.1
W eV MeV
13 25 NbTi 5 6.6 40 17 .07 24
26 52 NbTi 1.8 8.3 56 14.3 .26 39
1.1 2.2 Nb3 Sn 4.2 K 14.7 60 20
h h
NbTi 4.2 4 75 6 .08 19
50 150
m
T/m km
T mm m MJ/m cm2 MPa
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
108
(a)
(b)
Fig. 6. a) Photon stop to intercept synchrotron radiation at room temperature insertion between dipoles; b) beams-eye view sh owing photon stop retracted for injection, then inserted at high energy
The critical energy for LHC is 44 eV (hard UV); for the 45 TeV beams of Petavac it is 3.2 keV (X-rays). The harder spectrum is actually a good thing from the point of view of localizing collection of synchrotron light: UV has significant reflection coefficient from surfaces; X-rays do not reflect and tend to deposit their energy in a ¡¡ mm thick-ness of the surface material. Photon stops are used in synchrotron light sources and free-electron lasers to localize the interception of synchrotron light containing comparable power31 . A blade is extended into the aperture just after a bending magnet so that it intercepts the fan of synchrotron light that otherwise would spray upon the side wall of the next dipole. Fig. 6 shows a preliminary design for a photon stop suitable for absorbing the light emitted by the proton beam. Operation of the blade at room temperature requires a cold/warm/cold insertion between each pair of 20 m dipoles. The central portion of the device contains a double-blade which is hinged at each end and connected at its center to a linear actuator. When beams are injected to the collider, the actuators are moved to the out position so that the blades are retracted parallel to the wall and full aperture is available for the injected beams. As the beams are accelerated, the actuator is moved in, so that at collision energy the innermost hinged joint of the stop is spaced only ∼20σ from the axis. It then intercepts all of the synchrotron light radiated in the flanking pair of dipoles. The cold/warm/cold insertion presents a challenging design problem; it must limit heat load to a few W, but it must also provide a conductive wall for image currents to flow with the beams. In the design shown, each warm/cold transition section consists of a ceramic tube with a sputtered film of copper on its inner surface.
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
109
The skin depth of copper at room temperature and 1 MHz is r 2ρCu δ= = 50µm µ0 ω
(4)
Choosing this thickness for the flashed film on the ceramic tube, and overall dimen-sions 30 cm long x 6 cm dia., yields a conductive heat load of 3 W per transition, or 6 W/dipole. The conductive heat transfer dominates over the radiative transfer for this geometry. If the transition can be made to approach this limit, the aggregate heat load per dipole would be ∼0.3 W/m, about the same as for the synchrotron light in LHC. 4.3. Electron cloud effect The electron cloud effect is a regenerative feedback that originates when the beams ionize residual gas atoms in the beam tube to produce free electrons. The electrons are ionized with thermal velocities, so they do not reach the wall before another bunch passes. That bunch produces sufficient electric field for sufficient time to accelerate the electron towards the axis to keV kinetic energy. The electron then travels ballistically to the wall, where it may release multiple electrons by secondary emission, launching the regenerative instability. The fate of such electrons depends upon the bunch separation, the bunch current, and the secondary electron yield on the (Cu-coated) wall surfaces. The secondary yield from Cu can take values from 1 to 2, spanning the range from inconsequence to a dominant mechanism for beam growth. The effect has been studied extensively, and may pose challenges for high-luminosity upgrades of LHC.32 The most effective way to kill the electron cloud effect is to place a clearing elec-trode running along one side wall of the beam tube around the entire extent of dipoles. Accomplishing that raises several challenges: how to support the electrode; how to pro-vide continuity from dipole to dipole so that the image currents flow without interruption; how to provide warm transition at the photon stops. An interesting possibility is to run a strip electrode along the side wall of the beam tube towards the outside of the collider curvature, so that it passes through the cold/warm/cold insertion and connects electrically to the photon stop blades. If the elec-trode is biased positive from ground, it will provide a clearing of electron cloud and also a local clearing of electrons liberated by Compton scattering of the X-rays of synchrotron light. The necessary bias is determined by the ballistics of the electrons that must be cleared in a time less than the bunch spacing Tb : eV >
4mr2 ≈ 10eV Tb2
(5)
Thus a very modest bias voltage will suffice to clear the electron cloud effect so that a 16 ns bunch spacing can be used. At a luminosity of 1035 cm−2 s−1 , this corresponds to 150 interactions per bunch crossing, which will pose a major challenge for detector design.
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
110
5. Conclusions An ultimate-luminosity 90 TeV hadron collider in the SSC tunnel would offer a nextgeneration facility for discovery of new physics up to 5 TeV. The tunnel is optimum for pushing luminosity to ultimate limits. Much detailed investigation of each of the issues discussed above must be done to evaluate the impacts upon performance, cost, and risk. Acknowledgments We wish to thank Bhaskar Dutta (Texas A&M) for calculating the parton luminosities shown in Fig. 1(b). It is a pleasure to acknowledge helpful discussions with Al McIn-turff, Teruki Kamon, and Richard Arnowitt (Texas A&M), Sam Ting (MIT), and Fritz Caspers (CERN). This work was supported by U.S. DOE grant DE-FG0206ER41405. References 1. P. McIntyre, Workshop on Colliding Beam Possibilities at the Tevatron, Fermilab, Jan. 1976. 2. G. Arnison et al., Phys. Lett. B122 (1982) 103. 3. F. Abe et al., Phys. Lett. 74 (1995) 2626. 4. M. Carena et al., http://arxiv.org/abs/hep-ph/0010338. 5. V. Barger et al., http://arxiv.org/abs/hep-ph/0003154; C. Hays, SUSY 2008, Seoul, June 16-21, 2008. 6. M. Bajko et al., IEEE Trans. Appl. Superconductivity 17, 2 (2007) 1097. 7. P. Nath et al., Nuclear physics B 93.1 (2010) 217. 8. A. de Roeck et al., Euro. Phys. J. C66 3-4 (2010) 525. 9. ILC Reference Design Report, http://media.linearcollider.org/rdr draft v1.pdf (2010). 10. R. Corsini, CLIC R&D: Technology, test facilities and future plans, Nucl. Phys. B. Proc. Suppl. 154 (2006) 157. 11. D.B. Cline, Nucl. Phys. B Proc. Suppl. 155, 297 (2006); M.M. Alsharoa et al., Phys. Rev. ST AB 6, 081001 (2003). 12. Report on accelerator physics and technology, Project X, Fermilab report, 11/13/2007 http://beamdocs.fnal.gov/DocDB/0029/002970/001/ProjectXWorkshopReport.pdf 13. P. McIntyre and A. Sattarov, Proc. Intl. Conf. DARK2004, Springer, Heidelberg (2005), pp. 348-365; EuCARD-AccNet mini-workshop on a high-energy LHC http://indico.cern.ch/conferenceDisplay.py?confId=97971. 14. B. Dutta, private communication. 15. http://magnet.fsu.edu/ lee/plot/plot.htm 16. R.M. Scanlan, IEEE Trans. Appl. Superconductivity 11, 1, 2150 (2001). 17. A. Devred et al., IEEE Trans. Appl. Superconductivity 15, 2, 1192 (2005). 18. J.A. Parrell et al., Appl. Superconduct. Conf., Washington, DC, Aug. 1-6, 2010. 19. S. Mattafirri et al., IEEE Trans. Appl. Superconductivity 15, 2, 1156 (2005). 20. Diaczenko et al., Proc. 1997 Particle Accelerator Conf, Vancouver, May 12-16, 1997, p. 3443. 21. S.E. Bartlett et al., IEEE Trans. Appl. Superconductivity 15, 2, 1136 (2005). 22. A. McInturff et al., IEEE Trans. Appl. Superconductivity 17, 2, 1157 (2007). 23. A.K. Ghosh et al., IEEE Trans. Appl. Superconductivity 18, 2, 993 (2008).
November 22, 2010
11:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.10˙McIntyre
111
24. R. Blackburn et al., IEEE Trans. Appl. Superconductivity 18, 1391 (2008). 25. R. Blackburn et al., IEEE Trans. Appl. Superconductivity 13, 2, 1355, (2003). 26. P. Wanderer, to be presented at Appl. Superconductivity Conf., Chicago, Aug. 17-22, 2008. 27. G. Ambrosio et al., Appl. Superconduct. Conf., Washington, DC, Aug. 1-6, 2010. 28. A. Godeke et al., Supercond. Sci. Technol. 23, 034022 (2010). 29. V. Shiltsev et al., New J. of Phys. 10, 043042 (2008). 30. R. Miyamoto et al., Phys. Rev. ST Accel. and Beams 11, 084002 (2008). 31. E. Hoyer et al., Proc. 1995 Part. Accel. Conf., Dallas, May 1-5, 1995, p.1444. 32. F. Caspers et al. The 2008 SPS electron cloud transmission experiment: first results, presented at ILC Damping Ring Workshop, Cornell Univ., July 2008.
November 22, 2010
12:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.11˙Sattarov
112
ACCELERATOR-DRIVEN THORIUM-CYCLE FISSION: GREEN NUCLEAR POWER FOR THE NEW MILLENNIUM PETER McINTYRE
∗
and AKHDIYOR SATTAROV
Department of Physics & Astronomy, Texas A&M University, College Station, TX 77845, USA ∗ E-mail:
[email protected] www.tamu.edu In thorium-cycle fission, fast neutrons are used to transmute thorium to fissionable 233 U and then stimulate fission. In accelerator-driven thorium-cycle fission (ADTC) the fast neutrons are produced by injecting a symmetric pattern of 7 energetic proton beams into a Pb spallation zone in the core. The fast neutrons are adiabatically moderated by the Pb so that they capture efficiently on 232 Th, and fission heat is transferred via a convective Pb column above the core. The 7 proton beams are generated by a flux-coupled stack of isochronous cyclotrons. ADTC offers a green solution to the Earth’s energy needs: the core operates as a sub-critical pile and cannot melt down; it eats its own long-lived fission products; a GW ADTC core can operate with uniform power density for a 7-year fuel cycle without shuffling fuel pins, and there are sufficient thorium reserves to run man’s energy needs for the next 2000 years.
1. Thorium-Cycle Fission as a Green Energy Technology Thorium is the most abundant element beyond lead in the periodic table. It is found in abundance in a number of mineral deposits, notably as monazite sand filling entire deserts and beaches in India and Brazil. The dominant isotope 232 Th is stable against spontaneous fission, but it can be transmuted into the fissionable isotope 233 U by fast neutron capture (Fig. 1(a)). In 1950 Ernest Lawrence proposed that proton beams could be used to efficiently produce the needed fast neutron flux by spallation on lead (Fig. 1(b)) and thereby harness thorium as an abundant resource for fission power without the need for isotope separation1 . Such spallation is used today to produce fast neutron beams for research, but it has never yet been harnessed to drive practical fission cores for nuclear power generation. The estimated world reserves of thorium total 2.2 million tons2 ; the present world energy consumption could be provided by ADTC using ∼1,000 tons/year; so ADTC could provide the worlds energy needs for the next two millennia. In 1995 Carlo Rubbia proposed a concept for accelerator-driven thorium-cycle (ADTC) fission power in which a ∼800 MeV proton beam is injected into a fission core consisting of thorium fuel pins arranged in a molten lead bath.3
November 22, 2010
12:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.11˙Sattarov
113
(b) (a) Fig. 1. (a) Sequence of neutron capture, decay, and stimulated fission in ADTC. ( b) Spallation of an 800 MeV proton on a lead nucleus to produce ∼20 fast neutrons.
(a)
(b)
Fig. 2. a) Beam current and energy for present and planned high-power proton accelerators and for our ADTC design (7xIC); b) dependence of beam current upon number of orbits in the PSI isochronous cyclotron6 .
The Pb bath serves multiple functions: spallation target to produce fast neutrons, adiabatic moderator to gradually reduce the energy of each neutron in successive scatterings to enhance capture on the thorium fuel, and convective transfer to transfer heat from fission in the core to steam coils located above the core. Rubbia showed that the adiabatic energy steps by which neutrons slow as they scatter from Pb nuclei maintains equilibrium amongst the fission fragment species and prevents the accumulation of long-lived waste isotopes in the core. This feature is only possible when fission is driven by fast neutrons in a heavy-nucleus moderator, and eliminates a problem that plagues all thermal fission reactors. Although Rubbias concept demonstrated several elegant features of ADTC fission, it did not address several daunting challenges: • A 1 GW ADTC core requires ∼10 MW of continuous proton beam. No single accelerator has ever produced that much beam power (see Fig. 2(a)). • The fission products that accumulate in the fuel pins are significant absorbers of fast neutrons. If a single proton beam were injected on the central axis of
November 22, 2010
12:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.11˙Sattarov
114
a fission core, neutron absorption by fission products would turn off fission in the outer region of the core so fuel pins would have to be shuffled frequently to maintain operation. • Accelerators do not typically operate with the reliability that is required for a fission power reactor. Sudden cessation of neutron drive could thermally shock the fuel pin structures and complicate the interface of ADTC for electric power production. Since that time several accelerator designs have been proposed for ADTC, aimed to deliver the necessary 10 MW beam power in a single beam, some designs based upon superconducting linacs4 , and some upon cyclotrons5 . The current state of the art for high-power proton accelerators is the 650 MeV isochronous cyclotron (IC) at PSI6 , which produces 2.5 mA beam current, 1.6 MW continuous beam power. The highest-power superconducting linac is the 1.0 GeV SNS7 , which has a design current of 1.4 mA, currently operates with 0.87 mA, and is planning upgrades to ∼4 mA. Thus there is today no operating accelerator capable of generating a single beam with the power required for a GW ADTC power plant.
(a)
(b)
Fig. 3. GW ADTC design: a) flux-coupled stack of seven 800 MeV isochronous cyclotrons; b) cross section through Pb-moderated core with seven drive beams.
Fig. 2(b) illustrates an important aspect of the accelerator physics of cyclotrons: the attainable beam current scales as N−3 , where N is the number of orbits made by the beam from injection to extraction. This scaling arises from the shift of the betatron tune at injection (produced by the lens action of the space charge depression in the beam, which in turn is proportional to the local circulating charge density) and also to the separation of orbits at extraction. Both phenomena scale with separation between turns (∼N−1 ), and with beam current (∼N−1 ), and with the total dwell time of each proton in the IC (∼N−1 ). It is for this reason that, if
November 22, 2010
12:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.11˙Sattarov
115
one provides ∼4 MV/turn rf acceleration to limit N∼100, an IC can accelerate ∼3 mA of continuous proton beam. These considerations led us to an alternative design that solves all three problems that plagued Rubbias earlier PDTC concepts: deliver the necessary beam power using a flux-coupled stack of 7 isochronous cyclotrons (Fig. 3(a), Table 1), and inject the 7 beams in a 6-on-1 symmetric pattern into the ADTC core (Fig. 3(b), Table 2) to provide a nearly isotropic flood of spallation neutrons within the core. Table 1. stack.
Parameters of each IC in fluxcoupled
beam energy: injection extraction beam current orbit radius: injection extraction # orbits rf acceleration: # main cavities frequency acceleration per cavity # 3rd harmonic cavities magnetic field in sectors betatron tunes (horizontal, vertical) vacuum
Table 2.
200 800 3
MeV MeV mA
3.35 4.98 100
m m
4 48 1 2 1.7
Tesla
1.9 10-9
Torr
MV
Parameters of 7-beam-drive thorium core.
thermal power proton drive power Pb moderator heat xchgr: radius, height mass thorium core: radius, height fuel bundles: pin radius pin cladding thickness bundle size (flat to flat) bundles, pins/bundle: inner fuel region outer fuel region total fuel inventory fuel cycle between access
1.5 7x1.6
GW MW
2.6. 25 5700
m ton
1.5,1.5
m
3.55 0.55 18
mm mm cm
6x20,271 6x14,331 26.5 7
tons years
November 22, 2010
12:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.11˙Sattarov
116
2. Flux-Coupled Stack of Isochronous Cyclotrons Each IC in the flux-coupled stack produces 3 mA at 800 MeV, a modest extrapolation from the performance at 650 MeV in the PSI IC. For the ADTC stack configuration the IC design utilizes superconducting coils in the sector magnets and superconducting rf cavities of a novel design for efficient acceleration. The rationale of this approach is to utilize a mature accelerator design, improve it for efficiency and reliability, and replicate it to deliver the necessary power from a common footprint. The injection, rf acceleration, extraction, and beam transport are independent for each accelerator, so that if any one IC were to experience an operating fault the other 6 could continue delivering beam to the ADTC core without interruption. If the outage were short term (e.g. arc-down of injection or extraction septum), the missing IC could be restored to operation without interruption of ADTC operation. If a component required access for repair, the ADTC core could continue in service at 85% power until a scheduled downtime was arranged. The stack concept is motivated by the design of the sector magnets for the Riken superconducting ring cyclotron8 . In that design each sector magnet consists of a pair of cryogenic coil assemblies (superconducting coil supported on cold-iron pole piece) suspended within a warm-iron flux return yoke. We extend the same principle by stacking 8-coil assemblies to create 7 cyclotron apertures. The fluxcoupled magnetics9 is arranged so that all coil assemblies are approximately buoyant (Lorentz forces in balance above and below). The flux-coupled stack is suspended within the warm-iron yoke using a pattern of low-loss tension supports shown in Fig. 3(a).
(a)
(b)
Fig. 4. a) 400 kV copper cavity for PSI IC; b) field design of 1 MV dielectric-loaded superconducting cavity to fit in the gap between sectors in each IC.
In the flux-coupled stack each pole piece is excited by a square coil (6x10 cm 2 ) of Al-stabilized NbTi superconductor, operating at 4.5 K with an average current density of 30 A/mm2 (1% of the short-sample current in the strand). The fringe field is shaped using a correction winding and shielded from the superconducting cavities in the gap between sectors. RF acceleration poses a major design chal-
November 22, 2010
12:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.11˙Sattarov
117
lenge for the flux-coupled stack. In all conventional cyclotrons the rf-cavities are room-temperature empty copper resonators, whose size is determined by the desired frequency. Fig. 4(a) shows the 50 MHz, 400 kV rf cavity for the PSI IC. It is 2 m high, whereas the spacing between cyclotrons in our flux-coupled stack design is 40 cm! Fig. 4(b) shows one way to solve this problem. Each cavity is a dielectricloaded stub-line resonator, symmetric above and below the beam plane. The design shown resonates at 48 MHz and fits in a 40 cm vertical gap and in the shielded gap between sector magnets. The dielectric is high-purity rutile (ab = 107, c = 240, δ = 7x10−6 @ 77K). The cavity should produce an accelerating gradient voltage of 1 MV within breakdown limits. The walls are superconducting Nb (4K), the rutile slabs are maintained at 77K. Total heat load for all 7x4 cavities is 1.0 kW @ 4 K, 130 kW @ 80K. Both heat loads can be refrigerated using a commercial helium closed-cycle refrigerator, requiring ∼1.5 MW of mains power.
3. Multi-Beam Proton Drive of the Thorium Core Each proton beam traverses a beam tube into the core and is transversely modulated so that protons strike the side walls of the beam tube uniformly along a 50 cm length, providing a line source for spallation. The proton beam energy was chosen to be 800 MeV, providing a fast neutron yield of ∼20 n/p. Molten lead is used as moderator, heat transfer medium, and shielding and reflection of neutrons at the core boundaries. The outer radius of the lead is chosen to be large enough to contain the core neutrons. Fig. 5(a) side view and (b) top view sections of core, center axis at left, showing locations of proton beam tubes, inner and outer fuel bundles, and Pb reflector. The proton beams are introduced in a 6-on-1 pattern within the fuelmoderator configuration of the core10 . The fuel region is subdivided into an inner region and an outer region. The inner and outer fuel regions have the same size fuel bundles (and fuel pin size), but with different pitch between fuel pins (different number of pins per bundle). An oxide fuel composition of 90% 232 Th10% 233 U is assumed; the starting 233 U fraction provides ∼70% of peak power from the outset. Note that the transmutation from 232 Th to 233 U proceeds through an intermediary of 233 Pa, which decays with a half-life of 27 days. Fig. 6 shows the thermal power output, the inventories of these nuclei in the core, and the neutron gain k∼0.98 through a seven-year life cycle for a 1.5 GW core driven by seven 2 mA proton beams. The neutronics for a 1.5 GWth core was simulated using MCNPX11 and SCALE4.312 . We selected a neutron gain k=0.985 (subcritical operation with the neutron balance provided by the proton beams). Optimizing the fuel bundle size to maximize power density yields a bundle size of 18 cm. We then calculated the heat distribution within the core and modeled the convective heat transfer from the core through the molten Pb column to steam heat exchangers located above the core. The ADTC core can operate for seven years as a sealed system, requiring no access to the core for shuffling fuel rods or other purposes. By contrast, in all other
November 22, 2010
12:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.11˙Sattarov
118
Fig. 5. a) side view and b) top view sections of core, center axis at left, showing locations of proton beam tubes, inner and outer fuel bundles, and Pb reflector.
Fig. 6. Thermal power, neutron gain, and fuel inventory through life cycle of fuel filling for GW ADTC core.
reactor technologies the fuel bundles must be re-shuffled frequently during core lifetime, which complicates securing the core against theft or terrorism. We calculated what happens if one or all drive beams are lost. If only one beam is lost, the reactor can continue operation with a slightly higher value of k (∼0.987) and still maintain uniform power density. If all beams are lost, the fission reaction ceases but the 233 Pa inventory decays into 233 U over the next month, so that when the reactor is re-started the criticality is just under 1.00. Although k can also be reduced in that situation using B4 C absorber plates, we plan to reduce somewhat the design value for k to ∼0.980, so that the core would remain subcritical under all contingencies. The Pb moderator has sufficient thermal mass that, in the event of a sudden shutdown, meltdown from subsequent decay heat is impossible under any circumstances. Neutrons scattering from the Pb moderator lose energy adiabatically, so all fission products can capture fast neutrons at structure resonance energies. The abundances of all fission products are thereby maintained in equilibrium so that long-lived waste nuclides do not accumulate as they do in thermal reactors. 4. Conclusions Accelerator-driven thorium-cycle nuclear fission power could be operated using a flux-coupled stack of isochronous cyclotrons feeding seven beams symmetrically into a Pb-moderated thorium core. The required accelerator performance is routinely achieved today. The ADTC design would utilize superconducting magnets and rf to facilitate the stacked configuration, and its 17 MW of beam is sufficient to drive a 2 GW thorium core. The 6-on-1 drive configuration permits the core to be operated with uniform power density and relatively constant power output for seven years without any access to the core. This feature is unique and opens new possibilities for non-
November 22, 2010
12:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
01.11˙Sattarov
119
proliferation. The ADTC core can continue to operate if one beam fails without any interruption in core power output or shock to core components. The core can be shut down in the event of all drive beams failing, then it can be restarted without risk of core shock or criticality. There is no mode of component failure that could lead to meltdown. There are sufficient thorium reserves to provide the Earths energy needs for two millennia. Given these beneficial aspects, it is reasonable to conclude that ADTC should soon become a significant element of energy technology. Acknowledgments This work was supported in part by the U.S. Dept. of Energy, grant DE-FG0395ER40924 References 1. E.O. Lawrence, ’AEC R&D Report:Facilities for Electronuclear Program’, Report LWS-24-736, 1953. 2. ’Thorium fuel cycle potential benefits and challenges’, IAEA-TECDOC-1450, http: //www-pub.iaea.org/MTCD/publications/PDF/TE_1450_web.pdf 3. C. Rubbia et al, CERN/AT/95-44(ET) (1995). 4. J. Alessi, D. Raparia, and A.G. Ruggiero, Proc. 20th Intl. Linac Conf., Monterey, California, 21-25 Aug 2000. 5. T. Stammbach et al., Proc. 2nd OECD Workshop on Utilization and Reliability of High-Power Proton Accelerators, Aix-en-Provence, 22-24 Nov 1999. 6. W. Wagner et al., Nucl. Instr. And Meth. In Phys. Res. A600 (2009) 5. http://www1. psi.ch/www_gfa_hn/abe/ringcyc.html 7. http://neutrons.ornl.gov/facilities/SNS/ 8. T. Kawaguchi et al., Proc. 15th Conf. on Cyclotrons and their Applications, Caen, 14-19 June 1998. 9. G. Kim, D. May, P. McIntyre, and A. Sattarov, Proc. 16th Conf. on Cyclotrons and Their Applications, East Lansing, MI, 1317 May 2001, p. 437. 10. M. Adams et al., Proc. Global-2003: ANS/ENS Intl. Winter Meeting and Nuclear Technology Expo, New Orleans, LA Nov. 16-20, 2003. 11. MCNPXT M USERS MANUAL v. 2.1.5, ed. L Waters, Los Alamos National Lab. 12. SCALE4.3, Oak Ridge Natl Lab.
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 11, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
PART II
Leptogenesis
divided
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
123
RECENT ISSUES IN LEPTOGENESIS MARTA LOSADA∗ Centro de Investigaciones, Universidad Antonio Nari˜ no,Cra 3 Este No 47A-15, Bloque 4 Bogot´ a, Colombia ∗ E-mail:
[email protected] www.uan.edu.co In this talk I review recent issues in leptogenesis which are relevant for the correct quantitative calculation of the final baryon asymmetry of the Universe, focusing on diverse lepton flavour effects. I also briefly present a new model for leptogenesis based on color octets, whose main motivation is its testability at the LHC. Keywords: Leptogenesis; Neutrinos; Color octets.
1. Introduction For many years it has been shown that leptogenesis,1 ,2 is a viable mechanism that provides an answer to the observed matter-antimatter asymmetry of the Universe. In leptogenesis a final lepton asymmetry is produced by the asymmetric decay of a heavy particle to lepton and antileptons, which is then converted into a baryon asymmetry through sphaleron interactions. For this, the heavy particle must decay out-of-equilibrium in a CP-violating way, thus satisfying all of the conditions established long ago by Sakharov.3 One of the many advantages of leptogenesis is that it is occuring in the same framework that generates neutrino masses. Thus contributing simultaneously to the explanation of two important experimental observations. However, one of leptogenesis drawbacks is precisely that it is very hard to verify experimentally, although it may be easier to falsify. The generic characteristics that are required for the leptogenesis mechanism to work are: • to have mixing/interference between (at least) two different states. Typically flavour, CP eigenstates, etc. • in general all possible mechanisms of CP violation can be implemented • the careful consideration of the background dynamics of the expanding Universe • the relevance of B and B-L conserving processes to determine the asymmetric densities of the different species • the explicit incorporation of L, C and CP violation is model dependent.
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
124
Keeping these features in mind there is a plethora of possible models such as:2 • Standard Model (SM) + see-saw4 (with 3 hierarchical right-handed neutrinos); and the corresponding supersymmetric version. • Resonant leptogenesis • Soft Leptogenesis • Dirac Leptogenesis • Scalar/Fermion Triplet Leptogenesis • Electromagnetic Leptogenesis • ..... In the minimal model the Lagrangian is given by: LN P = λαi `α φNi + h.c. + Mi Ni Ni
(1)
An initial lepton asymmetry is generated in the out-of-equilibrium decays of heavy singlet Majorana neutrinos, and is then partially converted in a baryon asymmetry by anomalous sphaleron interactions.5 Heavy Majorana singlet neutrinos are also a fundamental ingredient of the seesaw model,4 that provides an elegant explanation of the suppression of neutrino masses with respect to all other Standard Model (SM) mass scales. In the case of SM + see-saw, a relevant bound is placed on the net CP-asymmetry produced in the decay of the lightest right-handed neutrino:10
|N1 | ≤
3 mN 1 mν 3 . 16π v 2
(2)
and the final lepton asymmetry is given by Y` = ηN1
(3)
where η is the efficiency parameter which is controlled by the washout regime. These are the two main parameters that determine the final value of the lepton asymmetry. 1.1. Spectator effects We now consider all processes in the thermal bath of the early Universe that conserve B−L. For a given temperature T , from chemical equilibrium considerations we solve the system of equations that are derived from requiring that: • • • • •
Total isospin, hypercharge and color must be zero Flavor changing interactions involving the quarks are in equilibrium Yukawa interactions are equilibrium Electroweak and QCD sphalerons are in equilibrium With SUSY additionally particles in same multiplet have φ˜ = φ
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
125
With this we can then solve for B, L, B − L in terms of a chemical potential, say `.
2. Lepton Flavour Issues Intuitively what should be important is the direction in flavour space into which N1 decays. Given that he , hµ , hτ are small and the lepton asymmetry is obtained from a trace in flavour space how can flavour matter? However, as flavour distinguishes mass eigenstates, this has some unforeseen consequences.7 So, consider temperatures where Γτ , Γµ H then it is cleat that the charged lepton Yukawas have no effect of the CP asymmetry but the dynamical process of right-handed neutrinos decay and production and the corresponding lepton asymmetry is distributed among distinguishable flavours. In this way, we see that charged lepton Yukawa interactions have the effect of projectors, asymmetries in each flavour are washed out differently, contributing with different weights in the expression for the total asymmetry. In table 2 we show the effect of correctly cconsidering the flavoured regime, for temperatures below T < 1012 GeV, on the different parameters that are relevant for leptogenesis and the corresponding bounds that can be inferred. Unflavoured , m ˜ 1 , M1 , m ¯ 2.
Flavoured α , m ˜ 1α m ˜ M2
1 1 1 (λλ† )11 M1 = 8πv Γ = 8π 2 m ˜1 Γ K ≡ H |T =M1 = m ˜∗ < 1 η(m ˜ 1, m ¯ 2) η∝H Γ 2.5×10−3 eV 2 1/2 9 M1 > ) ∼ 10 GeV ( ∆m2 atm
3 M1 ≤ max = 16π v2 m ¯ < 0.15eV
(∆m2atm +∆m2sol ) m3
1 Γα = 8π |λ1α |2 M1 ˜α Γα /Γ = Kα = K m m ˜∗ < 1 ηα (m ˜ 1α ) ηα ∝ ΓHα decreases bound M1 √
3M1 m ¯ α =≤ 8πv 2 increases m ¯
2.1. Purely flavour leptogenesis In this type of models it is possible to generate the baryon asymmetry even in the P case in which α α = 0 without lepton number violation in decay.12 THe main features for this scenario are: • The Boltzmann equations for lepton flavour asymmetries in general present difP ferent washouts in the different flavours so..... YB−L ∝ ηα α 6= 0 • We rely on the condition that the dynamics of the different lepton flavors are decoupled at the leptogenesis temperature. • Most importantly, CP violation can originate from the non leptonic sector/lepton number conserving loop
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
126
• A specific case is a model with lepton flavour violation (LFV). There is no need to have lepton number violation in the loop, lepton number violation only via the washout processes. 2.2. Lepton flavour equilibration Most extensions of the SM, and most noticeably among these a , the Supersymmetric Standard Model (SSM), include new sources of Lepton Flavor Violation (LFV). If leptogenesis occurs at temperatures when these new sources mediate reactions that are in chemical equilibrium, then there are no flavor effects in leptogenesis. With Lepton Flavor Equilibration (LFE),21 we refer to the effect of reactions that would bring the different lepton doublets `α (α = e, µ, τ ) into chemical equilibrium. The main points to keep in mind are: • In the presence of LFV, fast `α → `β transitions effectively eliminate all dynamical flavor effects. • If at TLG the LFV processes are in chemical equilibrium then there are no flavour effects and the one-flavour approximation correctly describes the production of the lepton asymmetry. • Must be included in the chemical equilibration/BEs and Spectator processes become more relevant. • In general, models of new physics have new sources of LFV, so it is not immediate that flavour effects will survive. We will use as a general and most interesting example the SSM where, in the basis in which the charged lepton Yukawa couplings are diagonal, a source of LFV from soft supersymmetry breaking masses is generally present: Lsoft ⊃ m ˜ 2αβ `˜†α `˜β .
(4)
Here `˜α are the superpartners of the SU (2) lepton doublets. The terms in 4 affect (int) ˜ vertex the flavor composition of the mass eigenstates, and as a result the `α `˜α G ˜ ˜ ˜ for the sleptons gauge eigenstates (where G = Wa , B represent a SU (2) or U (1) gaugino) involve a unitary rotation to the slepton mass eigenstates: `˜(int) = Rαβ `˜β , α
Rαβ ∼ δαβ + O(
m ˜ 2αβ h2α T 2
),
(5)
where hα > hβ is the relevant charged lepton Yukawa coupling that determines at leading order the (thermal) mass splittings of the sleptons. A supersymmetric model with soft supersymmetry-breaking mass terms m2αβ gives rise to new diagrams that could contribute to a lepton asymmetry via PFL a ` la soft leptogenesis. Figure 1 shows the relevant diagram. P • Flavour CP asymmetries which are enhanced as ∝ g 2 λ, but = α α = 0. a The
connclusions are generic and do not rely in having a supersymmetric model.
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
127
`α `˜β ˜B ˜ W, Nk
φ˜u φu
Fig. 1. Vertex diagram generating lepton flavor violating CP asymmetries in the decays of the heavy Majorana neutrinos Nk → `α φu . Similar diagrams appear for Nk → `˜α φ˜u and in the decays ˜k → `α φ˜u , `˜α φu . of the neutrinos superpartners N
• Ineffective because lepton flavour densities equilibrate very fast, so ...LG described by single flavour approximation, which cannot be sourced by the type of diagram above. 3. Color Octet Leptogenesis Recently, Fileviez Perez and Wise (FW) discovered several viable possibilities for generating Majorana neutrino masses at one-loop with new color octet scalar and fermion degrees of freedom.17 The main attraction of these scenarios is that these color octets may be (but are not required to be) at the electroweak scale, thus accessible to the LHC. Here we study leptogenesis in the FW model. The CP-violating decays are those of the octet fermions F or scalars S. As opposed to Eq. (2), the CP-asymmetries in the FW model are not constrained by the smallness of the light neutrino masses. In principle, leptogenesis is possible with electroweak-scale masses for F and S. In this model the enhancement of the CP-asymmetry does not rely on a hierarchy amongst the couplings constants that provide the necessary CP violating phases, nor mass degeneracies or three body decays. We consider in detail the simplest leptogenesis scenario in the FW model: where the CP-asymmetry F1 is driven by two-body decays of the lightest octet fermion F1 → S`. A significant difference with respect to the standard type I see-saw model of leptogenesis is the existence of the additional gauge interactions of the heavy decaying particle. It has been shown for different scenarios that the gauge interactions of the heavy decaying particle need not dilute the lepton asymmetry excessively 11,13–15 both in the weak and strong washout regimes. In our case the strong gauge interactions of the new color octet scalar and fermion degrees of freedom will permit them to easily obtain a thermal abundance, strongly reducing the dependence on initial conditions11 . Although the color octets will be kept closely in thermal equilibrium through gauge interactions, this does not necessarily preclude the generation of a significant lepton asymmetry if the decay rate is larger than the gauge annihilation rate. Studies of fermionic and scalar SU(2)L triplet leptogenesis found that a sizable lepton asymmetry could be generated, with η ∼ O(1), despite small departures from
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
128
thermal equilibrium due to electroweak gauge interactions.14,15 We defer to future work a computation of the efficiency factor η for leptogenesis in the FW model. We briefly comment below on the impact on the net efficiency. However, these strong gauge interactions are certainly beneficial from a phenomenological perspective in that they provide a mechanism to produce the color octets at colliders. Indeed, this may be the most experimentally accessible leptogenesis scenario proposed so far. FW model extends the Standard Model through the inclusion of the following additional fields: (i) NS scalar fields S with SU(3)C ×SU(2)L ×U(1)Y quantum numbers (8,2,1/2), and (ii) NF fermions F with quantum numbers (8,1,0) b . We couple these fields to the SM through the most general gauge invariant and renormalizable interaction Lagrangian (using two-component spinor notation) i† † j u j d − V (H, S) . (6) Lint = yiab Li Fa Sb + gijb ui† R Sb Q + gijb dR Sb Q + h.c.
The scalar potential V contains many terms;18 here, the only one of relevance is V ⊃ −λbc Sb† H Sc† H + h.c. .
(7)
Our notation is as follows: the SU(2)L doublets are Li = (νLi , eiL ), Qi = (uiL , diL ), H = (H + , H 0 ), and S = (S + , S 0 ); the indices i, j = 1, 2, 3 label generation, while indices a = 1..NF , b, c = 1..NS label the new fields; we have suppressed SU(3)C ×SU(2)L indices; and lastly, the antisymmetric tensor (with 12 = 1) acts on the SU(2)L isospin space. This scenario provides a new mechanism for generating neutrino masses.17 Following these authors, we assume that λ is diagonal: λbc ≡ λb δbc . The left-handed neutrino Majorana mass matrix, arising at one-loop order, is X 1 m2Sb + m2Fa log(m2Fa /m2Sb ) − 1 ν 2 Mij = yiab yjab λb v mFa . (8) 2 4π 2 m2 − m 2 ab
Fa
Sb
The eigenvalues and mixing angles of this matrix are constrained by neutrino oscillation experiments. The minimum field content needed to reproduce these constraints is either NS = 1 and NF = 2, or NS = 2 and NF = 1;17 in these cases, the lightest neutrino is massless. This scenario can accomodate three massive neutrinos for NS = NF = 2. Within the FW model, there are several different leptogenesis scenarios, depending on the number of octet fermions and scalars, and the hierarchy of their masses. In this section we consider the simplest case, with NS = 1 and NF = 2, and compute the relevant CP-violating asymmetry. The overall scale of the light neutrino masses does not constrain the CP-violating asymmetries. As we show below, the decay rate and the asymmetry are proportional to the coupling constants y, but do not involve λ. As far as neutrino masses are concerned, we can have y ∼ O(1),
b For simplicity, we focus on only one choice for the SU(2) ×U(1) L Y quantum numbers of S and F . Other options are also viable.17
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
129
Li
Li
S
F1
F2
F1
(a)
F1
F2 Lj
Lj
S
Li
S
S
S (c)
(b)
Fig. 2. Tree-level (a) and one-loop (b,c) amplitudes for F 1 → SLi decay. The interference between these amplitudes gives rise to the CP-asymmetry F1 .
with TeV-scale masses mF and mS , as long as we tune λ to be sufficiently small: λ . 10−10 . Let us consider the case with masses mF2 > mF1 > mS . We assume that the CPasymmetry is driven primarily by decays of F1 , shown in Fig. 2. This assumption is most valid when mF2 mF1 , so that F2 freezes out while F1 is in equilibrium; any CP-asymmetry generated by the decays of F2 will be washed out by processes ¯ LS ↔ F1 ↔ S † L. The tree-level decay rate is Γ(F1 → SLi )
tree
¯ i ) = Γ(F1 → S † L
= tree
|yi1 |2 (m2F1 − m2S )2 . 16π m3F1
(9)
Here, we have suppressed the label b for Sb , since b = 1 only. The CP-violating asymmetry, defined as P ¯ i) Γ(F1 → SLi ) − Γ(F1 → S † L F1 ≡ Pi † ¯ , i Γ(F1 → SLi ) + Γ(F1 → S Li )
(10)
is non-zero due to the interference between tree-level (Fig. 2a) and one-loop amplitudes (Fig. 2b,c). We find the asymmetry P ∗ ∗ 2 2 2 3 i,j Im[yi1 yi2 yj1 yj2 ] (mF1 − mS ) P F1 = f (mF1 , mF2 , mS ) , (11) 2 8π m3F1 mF2 i |yi1 |
where we have defined
f (mF1 , mF2 , mS ) ≡
2 m2F2 − m2F1 )(m2F1 − m2S )4
3(m2F2 n × m4F1 (m2F2 − m2F1 )(m2F1 + m2F2 − 2m2S ) 2 mF1 (m2F1 + m2F2 − 2m2S ) × log m2F1 m2F2 − m4S
+ (m2F1 − m2S )2 (2m4F1 + m4S − m2F1 m2F2 − 2 m2F1 m2S )
such that f (mF1 , mF2 , mS ) = 1 in the limit that mF2 mF1 .
(12)
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
130
We now show that the CP-violating phases that enter into the light-neutrino mixing matrix are not relevant to the CP-asymmetry that drives leptogenesis. When NS = 1 and NF = 2, the neutrino mass matrix has the form X Mijν = Ma yia yja , (13) a
where λ v 2 mFa m2S + m2Fa log(m2Fa /m2S ) − 1 Ma ≡ , 2 4π 2 m2 − m 2 Fa
(14)
S
and we have suppressed the b index, since b = 1 only. In this case, the most general form for y that gives the correct light neutrino masses and mixing angles is y = U ·X, where for the normal hierarchy 0 q 0 q mν 2 2 mν 2 M m η2 M12 mνν2 x η1 , (15) X≡ M1 − m ν 3 x 3 q mν 3 M1 2 x −η1 η2 M2 − M2 x and for the inverted hierarchy qm ν1 η1 M1 − X≡ x 0
mν 1 mν 2
x2
q
M1 m ν 1 x q M2 m ν 2 mν 2 M1 −η1 η2 M2 − M2
η2
0
x2 .
(16)
Here, x is an undetermined complex parameter, and ηi = ±1. The matrix U is the neutrino mixing matrix, containing the mixing angles and phases that are in principle observable through studies of light neutrinos. The CP-asymmetry is proportional to the factor X X ∗ ∗ ∗ ∗ Im[yi1 yi2 yj1 yj2 ]= Im[Xi1 Xi2 Xj1 Xj2 ], (17) i,j
i,j
where the right side follows by the unitarity of U . Therefore, we find that the CPviolating phases that drive leptogenesis are contained in X and are independent of the phases in U . It is not difficult to generalize this argument to any NS and NF . We now compute the (unflavored) CP-asymmetry numerically. Our results, described below, are shown in Fig. 3. We take three representative choices for the scalar and fermion octet masses: (0.1, 0.2, 0.5) TeV [dotted red] (mS , mF1 , mF2 ) = (1, 2, 5) (18) TeV [dashed green] (10, 20, 50) TeV [solid blue] Next, we perform a parameter scan over λ and x; according to Eqs. (15,16), these are the only remaining free parameters in y if we enforce consistency with the observed
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
131
ΕF1 HCP-asymmetryL
1 10-2 10-4 10-6 10-8 10-10
10-12 10-11 10-10 10-9 10-8 10-7 10-6 10-5 ÈΛÈ
ΕF1 HCP-asymmetryL
1 10-2 10-4 10-6 10-8 10-10
10-12 10-11 10-10 10-9 10-8 10-7 10-6 10-5 ÈΛÈ
Results of our parameter scan, showing the correlation between the CP-asymmetry F1 and the (S † H)2 coupling λ. Areas under curves denote regions of viable parameter space, consistent with perturbativity in y and with observed light neutrino masses and mixing angles for normal (left panel) and inverted (right panel) hierarchies. The multiple curves denote different choices of octet masses: (mS , mF1 , mF2 ) = (0.1, 0.2, 0.5) TeV (dotted red), (1, 2, 5) TeV (dashed green), and (10, 20, 50) TeV (solid blue).
Fig. 3.
light neutrino mass parameters combined with the normal or inverted hierarchy cases. We scan over the following range: 10−14 < |λ| < 10−4 ,
10−6 < |x| < 10 ,
0 < arg(x) < 2π .
(19)
Furthermore, we randomly choose the signs of λ and η1,2 . We √impose perturbativity in y by discarding parameter points for which max(yia ) > 4π. The upper bound on |x| is essentially arbitrary and has been chosen so that it is possible to saturate the perturbativity bound on y during our scan. On the other hand, in the limit that |x| → 0, the matrix X is real and the CP-violating asymmetry vanishes. In Fig. 3, we show regions of parameter space consistent with our numerical scans. In both panels, we plot the CP-asymmetry F1 as a function of |λ|. In the left panel, the area under the red dotted curve shows the region of viable parameter space given by our scan for octet masses (mS , mF1 , mF2 ) = (0.1, 0.2, 0.5) TeV, con-
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
132
10-2
efficiency
10-3 10-4 10-5 10-6 10-7 10-8 10-2
10-1
1
10
102
z
Fig. 4.
Efficiency parameter considering gauge annihilations for K=100.
sistent with all observed neutrino masses and mixing angles for the normal hierarchy. The upper, diagonal bound arises due to the neutrino masses, while the left bound arises due to perturbativity in y. The other curves show the corresponding regions of viable parameter space for heavier choices of octet masses: (mS , mF1 , mF2 ) = (1, 2, 5) TeV (dashed green) and (mS , mF1 , mF2 ) = (10, 20, 50) TeV (solid blue). In the right panel, we show the same results for the inverted hierarchy case. Compared to the normal case, the CP-asymmetry in the inverted case tends to be suppressed by two orders of magnitude; numerically, this occurs due to the neardegeneracy of mν1 ' mν2 . The main conclusion is that the CP-asymmetry can be as large as O(1) in this model for normal hierarchy and O(10−2 ) for the inverted hierarchy of the light neutrino masses. Assuming that washout processes are not severe, this appears promising for leptogenesis. For fixed |λ|, the CP-asymmetry can be increased for larger octet masses; on the other hand, for fixed octet masses, the CP-asymmetry can be increased by decreasing |λ|. In this model a detailed analysis of the dynamics must be performed to obtain the final values of the lepton asymmetry, considering the different washout processes. Amongst others, the dominant effect that will determine the size of the washout will be annihilation of the color octet fermions into Standard Model particles via gluons. This is similar to the case considered in the literature of scalar and fermion SU(2) triplets. We do not present a full study of the dynamics here but instead we show that there are regions in parameter space for which an out-of-equilibrium decay can occur including the annihilation effect of the octets inyo SU(3) gauge bosons. In figure 4 we plot the distributions function of the right-hanged neutrinos and the value of the effciency parameter for K = 100. The problem is that in the minimal model
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
133
that we have considered it is not possible to separate the values of K and , in a more extended model this is feasible. A detailed study is in progress. 4. Conclusions Extensions of the Standard Model can provide the necessary ingredients for successful implementation of the leptogenesis mechanism. A careful treatment of the lepton flavour basis in relevant for the final quantitative results. Although detailed quantitative results can be obtained the more difficult aspect of the leptogenesis mechanism is currently its testability. An extension of the SM which included fermionic and scalar color octets could in principle provide an alternative model with increased possibilities of being testable at the LHC. Acknowledgments I would like to thank the organizers of the Beyond 2010 conference for the excellent meeting. I also thank all of my collaborators on different aspects of leptogenesis including: A. Abada, S. Davidson, F.X. Josse-Michaux,A. Ibarra, A.Riotto, E. Nardi, D. Aristizabal and S. Tulin. References 1. M. Fukugita and T. Yanagida, Phys. Lett. B 174, 45 (1986). M. A. Luty, Phys. Rev. D 45, 455 (1992). 2. For recent reviews see: W. Buchmuller, R. D. Peccei and T. Yanagida, Ann. Rev. Nucl. Part. Sci. 55, 311 (2005) [arXiv:hep-ph/0502169]. S. Davidson, E. Nardi and Y. Nir, Phys. Rept. 466, 105 (2008); arXiv:0802.2962 [hep-ph]. A. Strumia, arXiv:hep-ph/0608347; E. Nardi, arXiv:hep-ph/0702033; Y. Nir, arXiv:hep-ph/0702199; M. C. Chen, arXiv:hep-ph/0703087; E. Nardi, arXiv:0706.0487 [hep-ph]; A. Pilaftsis, arXiv:0904.1182 [hep-ph]. 3. A. D. Sakharov, Pisma Zh. Eksp. Teor. Fiz. 5, 32 (1967) [JETP Lett. 5, 24 (1967)]. 4. P. Minkowski, Phys. Lett. B 67 421 (1977); T. Yanagida, in Proc. of Workshop on Unified Theory and Baryon number in the Universe, eds. O. Sawada and A. Sugamoto, KEK, Tsukuba, (1979) p.95; M. Gell-Mann, P. Ramond and R. Slansky, in Supergravity, eds P. van Niewenhuizen and D. Z. Freedman (North Holland, Amsterdam 1980) p.315; P. Ramond, Sanibel talk, retroprinted as hep-ph/9809459; S. L. Glashow, in Quarks and Leptons, Carg`ese lectures, eds M. L´evy, (Plenum, 1980, New York) p. 707; R. N. Mohapatra and G. Senjanovi´c, Phys. Rev. Lett. 44, 912 (1980). 5. V. A. Kuzmin, V. A. Rubakov and M. E. Shaposhnikov, Phys. Lett. B 155, 36 (1985). 6. A. Pilaftsis, Phys. Rev. D 56, 5431 (1997) [arXiv:hep-ph/9707235]. 7. A. Abada, S. Davidson, F. X. Josse-Michaux, M. Losada and A. Riotto, JCAP 0604 (2006) 004 [arXiv:hep-ph/0601083]. E. Nardi, Y. Nir, E. Roulet and J. Racker, JHEP 0601 (2006) 164 [arXiv:hep-ph/0601084]. A. Abada, S. Davidson, A. Ibarra, F. X. Josse-Michaux, M. Losada and A. Riotto, JHEP 0609 (2006) 010 arXiv:hep-ph/0605281. R. Barbieri, P. Creminelli, A. Strumia and N. Tetradis, Nucl. Phys. B 575, 61 (2000) (for the updated version of this paper see [arXiv:hep-ph/9911315]). 8. A. Pilaftsis and T. E. J. Underwood, Phys. Rev. D 72, 113001 (2005) [arXiv:hepph/0506107].
November 22, 2010
13:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.01˙Losada
134
9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22.
23.
T. Endoh, T. Morozumi and Z. h. Xiong, Prog. Theor. Phys. 111, 123 (2004) [arXiv:hep-ph/0308276]; K. Hamaguchi, H. Murayama and T. Yanagida, Phys. Rev. D 65, 043512 (2002) [arXiv:hep-ph/0109030]. S. Davidson and A. Ibarra, Phys. Lett. B 535, 25 (2002) [arXiv:hep-ph/0202239]. M. Plumacher, Z. Phys. C 74, 549 (1997) [arXiv:hep-ph/9604229]. D. Aristizabal Sierra, M. Losada and E. Nardi, Phys. Lett. B 659, 328 (2008) [arXiv:0705.1489 [hep-ph]]. J. Racker and E. Roulet, JHEP 0903, 065 (2009) [arXiv:0812.4285 [hep-ph]]. T. Hambye, Y. Lin, A. Notari, M. Papucci and A. Strumia, Nucl. Phys. B 695, 169 (2004) [arXiv:hep-ph/0312203]. T. Hambye, M. Raidal and A. Strumia, Phys. Lett. B 632, 667 (2006) [arXiv:hepph/0510008]. For a summary of the typical mechanisms that can be implemented see T. Hambye, Nucl. Phys. B 633, 171 (2002) [arXiv:hep-ph/0111089]. P. Fileviez Perez and M. B. Wise, arXiv:0906.2950 [hep-ph]. A. V. Manohar and M. B. Wise, Phys. Rev. D 74, 035009 (2006) [arXiv:hepph/0606172]. For review of neutrino properties, see C. Amsler et al. [Particle Data Group], Phys. Lett. B 667, 1 (2008). A. Ibarra and C. Simonetto, arXiv:0903.1776 [hep-ph]. D. Aristizabal Sierra, M. Losada and E. Nardi, arXiv:0905.0662 [hep-ph]. M. L. Brooks et al. [MEGA Collaboration], Phys. Rev. Lett. 83, 1521 (1999) [arXiv:hep-ex/9905013]. M. Ahmed et al. [MEGA Collaboration], Phys. Rev. D 65, 112002 (2002) [arXiv:hepex/0111030]. C. P. Burgess, M. Trott and S. Zuberi, arXiv:0907.2696 [hep-ph].
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
135
ELECTROMAGNETIC LEPTOGENESIS IN A NUTSHELL SANDY S. C. LAW Department of Physics, Chung Yuan Christian University, Chung-Li, Taiwan 320, Republic of China E-mail:
[email protected] In this paper, we explore the possibility of lepton number creation in the early universe via the effective electromagnetic dipole couplings which link the ordinary light neutrinos to their postulated heavy counterparts. Since the presence of these operators can provide an alternative way for the heavy neutrinos to decay, new sources of L and CP violation can be expected. It has been found that there exists a parameter space where leptogenesis of this type can give rise to the required cosmic baryon asymmetry, and that the resulting relationship between the scales of leptogenesis and light neutrino masses is very much akin to the situation as seen in the standard Yukawa-mediated version. Keywords: leptogenesis; neutrino dipole moments; baryon asymmetry.
1. Introduction One of the outstanding problem in cosmology is why there is more matter than antimatter in the universe. The size of this asymmetry is usually represented by the baryon-to-photon ratio which can be measured experimentally. For instance, using data from the WMAP,1 this ratio has been estimated to be about 6.3 × 10−10 . The challenge is then to understand how such an asymmetry may be generated dynamically during the evolution of the early universe. Following the basic conditions Sakharov2 wrote down many years ago, different baryogenesis models have been developed. Amongst them, the well-studied ones include: electroweak baryogenesis3 (B production via phase transition), GUTa Baryogenesis4 (from heavy particle decays), Baryogenesis via Leptogenesis5 (from heavy lepton decays) and Affleck-Dine models6 (from inflaton decays). There are also other more exotic possibilities that involve black holes and extra-dimensions. It should be noted that the Standard Model (SM) actually contains all of the ingredients as laid out by Sakharov, but unfortunately, the SM parameter space cannot give rise to the amount of asymmetry observed. As a result, any viable baryogenesis mechanism must involve some kind of beyond the SM physics. Our interest in the leptogenesis path in solving this comes from the fact that there is a need to explain why light neutrinos have a tiny but a GUT
means Grand Unified Theories.
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
136
nonzero mass. So, it is sensible (in our opinion) to look for a solution in the lepton sector. One popular way to generate the light neutrino masses is via the type-I seesaw mechanism. b It involves adding (usually three) heavy right-handed (RH) neutrinos (which are electroweak singlets) to the SM. The corresponding interaction Lagrangian for the neutrino sector is then 1 Lint = −`L h φ NR − NRc M NR + h.c. , 2
(1)
where `L and φ are the SM lepton and Higgs doublets respectively, NR denotes the postulated RH neutrino, while h, M are arbitrary coupling matrices. When the neutral Higgs field gains a vacuum expectation value (hφ0 i ' 174 GeV) after electroweak phase transition, the first term in (1) becomes a Dirac mass term for the neutrinos. Consequently, for a model with three additional RH neutrinos, one obtains six Majorana mass eigenstates (three light and three heavy) with the light ones having a mass given by the seesaw formula mlight ' −mD M −1 mTD ,
where mD ≡ hhφ0 i ,
(2)
whereas for the heavy eigenstates, mheavy ' M . In arriving at (2), we have assumed that all entries of M mD . c A very attractive consequence of this setup is that the Lagrangian of (1) can lead to a lepton asymmetry in the early universe when the RH neutrinos decay via the first (Yukawa) term. Eventually, this lepton excess can be partially converted to a baryon asymmetry by electroweak sphalerons, hence solving the baryogenesis problem. In this work, we would like to ask the question as to whether given this extension (SM plus RH neutrinos), there are other ways for the ordinary leptons to interact with the heavy neutrinos besides the Yukawa couplings in (1). If so, we would like to further ask • whether such new interaction terms can provide a viable alternative for achieving successful leptogenesis; • what is the resulting leptogenesis scale (which is approximately equal to the mass scale of the RH neutrinos); and • what are its implications on the link between the high- and low-energy parameters in general (e.g. light neutrino masses). In the following, we shall begin by highlighting all the essential features in the canonical version of leptogenesis,5 so that it can serve as the backbone for our later discussion on the electromagnetic (EM) variant which possesses a similar structure. b For
a brief review of different neutrino mass models, see for example Ref. 7 is what we meant by the RH neutrinos are “heavy”.
c This
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
137
2. Brief Review of Standard Leptogenesis According to Sakharov’s conditions,2 a successful baryogenesis model must contain interactions that can violate baryon number (B); can violate C and CP symmetry; and are out of thermal equilibrium (if CP T invariance is assumed). Thermal leptogenesis,5 however, solves the baryogenesis problem by first creating a lepton (L) excess and then having it converted to a baryon asymmetry via Standard Model electroweak sphalerons. Thus, reinterpreting the criteria of Sakharov, one in fact requires lepton number violation in the model. As hinted earlier, the addition of heavy RH neutrinos with Majorana mass terms to the neutrino sector predicts L violation. Therefore, it is natural to simply take the seesaw Lagrangian in (1) for this purpose because it will have the potential to solve two different problems simultaneously. Indeed, through the Yukawa term in (1), the heavy RH neutrinos can decay into a lepton and a Higgs (see Fig. 1a). Such process violates lepton number because NR is Majorana. Furthermore, CP will be violated if the interference between the tree-level graph and its loop corrections (see Fig. 1b and 1c) is nonzero.d It turns out that this is the case when the Yukawa coupling h is a general complex matrix, and the intermediate state particles in Fig. 1b, 1c are allowed to go on-shell.
Nk
Nk
(a)
φ
`j
`j
`j
(b)
`n , ` n
Nm
Nk
Nm φ
(c)
`n
φ
Fig. 1. Yukawa-mediated (a) tree-level decay of the RH neutrinos, and its one loop (b) selfenergy, and (c) vertex correction diagrams for standard leptogenesis. Here j = e, µ, τ and k = 1, 2, 3.
The amount of CP asymmetry arising from such interactions is typically parametrized by ε=
Γ(N → `φ† ) − Γ(N → `φ) . Γ(N → `φ† ) + Γ(N → `φ)
(3)
For the standard leptogenesis setup, explicit calculations of the leading contribution to this (for k = 1) in the hierarchical RH neutrino limit (M1 M2,3 ) results in X Im (h† h)21m M1 |εstd | ' , (4) π(h† h)11 Mm m6=1
where Mm denotes the mass of the mth RH neutrino. d Note e From
e
that C violation is satisfied automatically in the SM. this, it is clear that CP violation requires that there are at least two RH neutrinos.
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
138
The condition of thermal non-equilibrium can be satisfied when the expansion rate of the universe becomes larger than the NR decay rate, Γ(N → `φ† ) . H, where H is a function of temperature T . Whilst the L asymmetry is being created, the nonperturbative SM processes, known as sphalerons, which violates B + L (but conserves B − L) will partially convert any excess in L into an excess in B. By carefully studying the interactions of each particle species in the thermal plasma, one can conclude that |B| ' 0.55|L|. So, this two-step process can generate the matter-antimatter asymmetry of the universe. f Examining Eqs. (2) and (4), it is apparent that coupling matrix h plays a crucial role in controlling both the lepton asymmetry and light neutrino masses. It is because of such intricate connection, the mass Jtnscale M for the RH neutrinos must be & 109 GeV for most viable leptogenesis scenarios.8 Moreover, this link could potentially imply that the high- and low-energy CP phases are closely related. 3. Electromagnetic Leptogenesis In this section, we introduce the idea of electromagnetic (EM) leptogenesis and explain some of its major features. 3.1. Electromagnetic dipole moment couplings Beside the obvious Yukawa coupling, `L h φ NR , that can provide a link between the light and heavy neutrinos, another possibility is through the dim-5 effective transition moment operators of the form: 1 i ψ µ σ αβ ψ2 Fαβ , ψ d γ 5 σ αβ ψ2 Fαβ , (5) Λ 1 Λ 1 where µ, d are dimensionless couplings, Fαβ is the electromagnetic field strength tensor and Λ is the cutoff scale of our effective theory. We assume that these operators are generated by some unspecified new physics at energy beyond Λ. It is actually more convenient to express the lepton fields in chiral form so that the two terms in (5) combine to become what we shall refer to as the electromagnetic dipole moment (EMDM) 1 ν L λ σ αβ NR Fαβ , with λ ≡ µ + id . (6) Λ This kind of coupling terms will be the cornerstones of the electromagnetic leptogenesis mechanism.9 3.2. A dim-5 toy model The idea and viability of electromagnetic leptogenesis is perhaps best illustrated by a simplified model where SM gauge invariance is not enforced (we shall fix this in f The
choice of parameter space needed will be discussed later.
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
139
νj
νj Nk
Nk
(a)
γ
Nm
νn , ν n
(b)
νj Nk
γ
(c)
νn ,
Nm
νn
γ
Fig. 2. (a) Tree-level decay of RH neutrinos in the early universe mediated by dim-5 EMDM operator of Eq. 7; and its corresponding (b) self-energy , and (c) vertex corrections.
the next subsection). In this toy model, we simply introduce a new dim-5 coupling between the light and heavy neutrinos of the form of (6) in the SM interaction Lagrangian (rewritten in index form below): Ld5 = −
1 λjk ν Lj σ αβ PR Nk Fαβ + h.c. , Λ
(7)
where PR = (1 + γ 5 )/2, j = e, µ, τ and k = 1, 2, 3. Through this dim-5 term, the heavy RH neutrinos can now decay into a light neutrino and a photon in the early universe (see Fig. 2a). This decay is L-violating and has a reaction rate given by λ† λ kk Mk3 Γ(Nk → ν γ) = Γ(Nk → ν γ) = , (8) 4π Λ2 where we have already summed over final flavor j. In order to check whether CP violation is present, we study the interference term between the tree-level and oneloop diagrams of Fig. 2 in an analogous way to the standard version. Through direct computation of the diagrams, it can be shown that in the limit of Mk Mm , m 6= k the raw CP asymmetry of this model is given by 2 1 † (Mk /Λ)2 X (5) ∗ † √ (λ λ)km + (λ λ)mk Im λjk λjm , (9) εk,j ' 2π(λ† λ)kk 3 z z m6=k
2 where z ≡ Mm /Mk2 . From (9), it is clear that this asymmetry would be nonzero as long as λ contains complex phases that cannot be removed by redefinitions of the neutrino fields. Since λ is completely arbitrary, it is not difficult to see that (9) is in general nonzero. Hence, a lepton asymmetry can in principle be generated by this EMDM toy model.
3.3. A more realistic EM leptogenesis model The previous setup is relatively simple and can help us visualize how EMDM-like interactions can provide a viable alternative for standard leptogenesis. But in order to build a model that can potentially describe our universe, we must make sure that all interactions are compatible with the SM symmetries. It turns out that the most economical effective EMDM operators are of dimension six,10 and thus the Lagrangian of interest becomes h i 1 ejk φ σ αβ ~τ · W ~ αβ PR Nk + h.c. Ld6 = − 2 `j λjk φ σ αβ Bαβ + λ (10) Λ
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
140 `j
`j Nk
(a)
φ Bα
Nk
(b)
`j
φ, φ† `n , ` n
Nm
φ Bα
Nm
Nk
`n ,
φ
`n Bα
Fig. 3. (a) Tree-level decay induced by the first term of Eq. 10; (b) examples of the corresponding loop correction diagrams.
where φ is the SM Higgs doublet, Bαβ and Wαβ are the U (1)Y and SU (2)L field tensors respectively, and τi ’s are the generators of SU (2)L . Cutoff scale Λ is assumed to be beyond the electroweak scale. After spontaneous symmetry breaking, these operators will become the transition moments of N and ν. Again, we do not specify the new physics for which these effective operators are originated from. With (10), the heavy neutrinos can undergo a three-body decay, N → ` + φ + (Bµ or Wµi ) ,
(11)
in the early universe. For simplicity, let us consider only one of the channels, the one involving Bµ . Fig. 3 shows the Feynman diagrams for the tree-level process as well as examples of the loop corrections associated with this decay mode. Although the mathematics is more complicated, one can eventually derive useful expressions for the decay rate and CP asymmetry parameter which happen to take a similar form to those from the dim-5 toy model:g 2 2 M1 M1 (6) (5) Γ(N1 →ν γ) , |ε1 | ' |ε1 | , (12) Γ(N1 → ` φ Bα ) = 8πΛ 8πΛ where we have taken k = 1 and summed over flavor j. Consequently, it is quite easy to see that successful leptogenesis is possible for this setup. But unlike the dim-5 version, involvement of the Higgs in the coupling term will eventually lead to a connection between the high-energy sector and light neutrino masses (just as in the standard version). This is because even though one can switch off the Yukawa couplings by hand (i.e. no neutrino mass terms at the lowest order), radiative corrections can generically induce neutrino mass terms.10–12 Diagrams induced by the dim-6 EMDM operators that can contribute to neutrino Dirac and Majorana mass terms are displayed in Fig. 4a and 4b respectively. Without knowing the UV completion of the theory, there is no model independent way of calculating the exact size of these radiative contributions. However, using na¨ive dimensional analysis, we can deduce an estimate of them. For the Dirac mass contribution of Fig. 4a, we obtain mD ' g In
λ g0 hφi , 16π 2
fact, we studied the dim-5 toy model partly because we have expected this similarity.
(13)
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
141 hφi
hφi
(a) Fig. 4. mass.
hφi
ν
N
ν
ν
(b)
ν N
Contribution to (a) the neutrino Dirac mass, mD ; and (b) the light neutrino Majorana
where g0 is the relevant gauge coupling constant. For the direct contribution to light neutrino Majorana mass, we estimate 2 λ2 hφi2 M mν ' , (14) M 4πΛ where M generically denotes the mass scale of the RH neutrinos. Through the seesaw relation of (2), the induced Dirac mass term in (13) will also lead to a contribution to the light neutrino mass given by λ2 hφi2 g0 2 m0ν ' . (15) M 16π 2
These expressions, through the coupling matrix λ, highlight the typical interplay between baryon asymmetry generation and low-energy neutrino properties as observed in the standard case. Indeed, it is evident that there are many parallels amongst the two schemes. The relevant quantities for both the standard and EM versions are summarized in Table 1. Table 1. Comparison of key quantities in standard and electromagnetic leptogenesis for k = 1 and summed over j. These expressions assume a hierarchical RH neutrino spectrum. Standard (h† h)11 M1 16π X Im[(h† h)2 ] M1 1m |ε| ' π (h† h)11 Mm m6=1 Γ1 =
mν '
h2 hφi2 M
Electromagnetic 2 M12 (λ† λ)11 M1 2 4π 8π Λ X Im[(λ† λ)2 ] M1 M 2 2 1m 1 |ε| ' π (λ† λ)11 Mm 8π Λ2 m6=1 " # λ2 hφi2 g0 2 M 2 mν ' + M 16π 2 4πΛ Γ1 =
3.4. The EM leptogenesis parameter space Given that the key equations in EM leptogenesis are in the same form as those in the standard case, it is sensible to assume that their dependence on the parameter space would also be largely similar. Thus, it is convenient to draw upon many of the established results from standard leptogenesis. To this end, one needs to get
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
142
oriented amongst all the contributing factors that may affect the final asymmetry from the standard version. Consider the common case where the asymmetry produced is predominantly due to the decay of N1 , the lightest RH neutrinos, h the final baryon asymmetry produced can be expressed as ηB ' d ε κ = O(10−10 ) ,
(16)
where ηB is the required baryon-to-photon ratio; i d is a dilution factor which depends on factors such as photons production rate, the number of relativistic degrees of freedom and electroweak sphalerons; ε is the raw CP asymmetry which is acquired from direct computation of the interfering diagrams; and κ is the efficiency factor that encapsulates the interplay between N1 productions and washout of the L asymmetry produced. For most scenarios in standard leptogenesis, d is about O(10−2 ) while a good ballpark estimate of the maximum raw CP asymmetry (when assuming hierarchical light neutrino spectrum) is given by8 M1 |ε1 |max ≈ 10−6 . (17) 1010 GeV Therefore, |ε1 | ' 10−6 demands that the RH neutrino mass scale must be at least 1010 GeV. The efficiency factor (κ) is determined from solving a set of Boltzmann equations which governs the out-of-equilibrium behaviors of all relevant processes. Generically, it is a function of the decay parameter K1 defined as K1 ≡
Γ(N1 → `φ) . H(T = M1 )
(18)
Depending on the size of K1 , the system can be in the state of strong or weak washout, which in turn may lead to a different final asymmetry, as well as different sensitivity for initial conditions. Because K1 is a function of the coupling matrix h (through Γ1 ), light neutrino data may influence the typical size for κ. It was suggested that (see e.g. Ref. 13) for M1 ' 1010 to 1014 GeV, neutrino oscillation data favors the mildly strong washout regionj (K1 ' 0.1 to 10), which in turn implies that typically κ ' O(10−2 ). Using these numbers as a guide, we can obtain some quantitative understandings on EM leptogenesis. If we take |ε| ' 10−6 as a necessary condition for sufficient asymmetry generation, we may for instant arrive at this estimate M1 M1 EM −6 4 , βΛ ≡ , (19) |ε | ∼ 10 βΛ 109 GeV Λ h This is a natural consequence of having a hierarchical RH neutrino spectrum (M < M 1 2,3 ), because any L asymmetry created from the decays of N2,3 would be washed out by the N1 mediated L-violating processes in equilibrium. i In other words, the size of the baryon asymmetry. j This is not to say that weak washout is not possible in standard leptogenesis. It is just that one may require more fine-tuning.
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
143
where we have ignored the matrix structure of λ and made some assumptions about the light neutrino masses. Although (19) is not an universal result, it can still tell us two things: firstly, the typical EM leptogenesis scale must be above 109 GeV, secondly, because of the presence of factor βΛ , the RH neutrino mass hierarchy must not be too strong. To demonstrate that there exists a workable parameter space for successful EM leptogenesis, consider a moderate RH neutrino hierarchy, Λ ' 10M2,3 ' 20M1 , with the EMDM couplings λ = O(10). Then, |εEM | ∼ 10−6 can be assured. Furthermore, for M1 ' 5 × 1012 GeV, the decay parameter K1 is about 0.3. Hence, we have moderate washout with κ ∼ 10−2 . Since we are expecting the out-of-equilibrium behavior for the electromagnetic version to be similar to that of the standard case, these input parameters can ensure that a sufficient baryon asymmetry can be created by the EMDM operators. Finally, we need to check that these parameters do not violate any experimental constraints such as the scale of light neutrino masses. Applying the above inputs in (14) and (15), we obtain mν ' 1 × 10−1 eV and m0ν ' 4 × 10−2 eV respectively, which are within the current limits. Another quantity which is of interest is the dipole moment of ordinary neutrinos induced through two-loop diagrams involving the effective EMDM operators (like the one shown in Fig. 5).k Order of magnitude estimate of this gives µeff ∼ λ2 g00 /(256π 4 Λ). Using the input parameters above, we then have µeff ' 5 × 10−19 µB (where µB is the Bohr magneton), which is well below the present upper bound of O(10−11 µB ).14–16
ν ν
Fig. 5.
g0
ν
N
γ
An example of a two-loop graph that contributes to light neutrino dipole moment.
Therefore, we have shown that EM leptogenesis can be a consistent solution to the baryogenesis problem and its operating scale is similar to the standard setup with M1 > 109 GeV. 4. Conclusions In our opinion, leptogenesis is an elegant solution to the baryogenesis problem where the cosmic matter-antimatter asymmetry originates from the L- and CP -violating decays of the postulated heavy RH neutrinos. In this work, we have investigated an alternative type of L- and CP -violating decay for the RH neutrinos (given the k There
can also be contribution to the light neutrino dipole moments via light and heavy neutrino mixings. However, such effect will be sub-dominant.
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
144
same particle content as standard leptogenesis), one which is mediated by effective electromagnetic dipole moment couplings between the light and heavy neutrinos. By first analyzing a more straight forward dim-5 toy model, where the parallels with the standard version is quite transparent, and then extended it to a more realistic theory, we have discovered that EM leptogenesis is a viable mechanism in producing the observed baryon asymmetry. In addition, we have found that the EM case is largely similar to the standard case in many respects. Since the dim-6 EMDM operators used in the proper setup is coupled to the SM Higgs, it turns out that light neutrino masses can be generated radiatively. As a result, the intricate connection between leptogenesis and neutrino properties remains in this EM scenario. It is also because of this fact, EM leptogenesis could only work if the RH neutrino mass scale is well beyond 109 GeV. A feature that is shared by standard leptogenesis. It should be noted that while this link between the high- and low-energy sectors makes it impossible to test leptogenesis directly in experiments (at least in the foreseeable future), it is however an important hint that the CP -violating couplings in leptogenesis (standard and/or EM) may manifest itself in the low-energy sector in the form of CP -violating mass matrices. As a result, looking for CP violation in light neutrino oscillations is very much motivated. Acknowledgments The author would like to thank Nicole Bell (the University of Melbourne) and Boris Kayser (Fermilab) for a fruitful and enjoyable collaboration. He would also like to thank the organizers of the BEYOND 2010 conference for the invitation to give a plenary talk and their kind hospitality during his stay in Cape Town. Travel support was provided by the National Centre for Theoretical Sciences (North), Physics Division (LHC Physics focus group). References 1. E. Komatsu et al. [WMAP Collaboration], Astrophys. J. Suppl. 180, 330 (2009) [arXiv:0803.0547 [astro-ph]]. 2. A. D. Sakharov, Pisma Zh. Eksp. Teor. Fiz. 5, 32 (1967) [JETP Lett. 5, 24 (1967 SOPUA,34,392-393.1991 UFNAA,161,61-64.1991)]. 3. A. G. Cohen, D. B. Kaplan and A. E. Nelson, Ann. Rev. Nucl. Part. Sci. 43, 27 (1993) [arXiv:hep-ph/9302210]. 4. M. Yoshimura, Phys. Rev. Lett. 41, 281 (1978) [Erratum-ibid. 42, 746 (1979)]. 5. M. Fukugita and T. Yanagida, Phys. Lett. B 174, 45 (1986). 6. I. Affleck and M. Dine, Nucl. Phys. B 249, 361 (1985). 7. S. S. C. Law, arXiv:0901.1232 [hep-ph]. 8. S. Davidson and A. Ibarra, Phys. Lett. B 535, 25 (2002) [arXiv:hep-ph/0202239]. 9. N. F. Bell, B. Kayser and S. S. C. Law, Phys. Rev. D 78, 085024 (2008) [arXiv:0806.3307 [hep-ph]]. 10. N. F. Bell, V. Cirigliano, M. J. Ramsey-Musolf, P. Vogel and M. B. Wise, Phys. Rev. Lett. 95, 151802 (2005) [arXiv:hep-ph/0504134].
November 22, 2010
15:0
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.02˙Law
145
11. S. Davidson, M. Gorbahn and A. Santamaria, Phys. Lett. B 626, 151 (2005) [arXiv:hep-ph/0506085]. 12. N. F. Bell, M. Gorchtein, M. J. Ramsey-Musolf, P. Vogel and P. Wang, Phys. Lett. B 642, 377 (2006) [arXiv:hep-ph/0606248]. 13. W. Buchmuller, P. Di Bari and M. Plumacher, Annals Phys. 315, 305 (2005) [arXiv:hep-ph/0401240]. 14. J. F. Beacom and P. Vogel, Phys. Rev. Lett. 83, 5222 (1999) [arXiv:hep-ph/9907383]. 15. H. T. Wong et al. [TEXONO Collaboration], Phys. Rev. D 75, 012001 (2007) [arXiv:hep-ex/0605006]. 16. C. Arpesella et al. [The Borexino Collaboration], Phys. Rev. Lett. 101, 091302 (2008) [arXiv:0805.3843 [astro-ph]].
November 22, 2010
15:14
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.03˙Kiessig
146
NEUTRINO DECAY INTO FERMIONIC QUASIPARTICLES IN LEPTOGENESIS ¨ CLEMENS P. KIEßIG⋆ , MICHAEL PLUMACHER
♥
Max-Planck-Institut f¨ ur Physik (Werner-Heisenberg-Institut), F¨ ohringer Ring 6 D-80805 M¨ unchen, Germany ⋆ E-mail:
[email protected], ♥ E-mail:
[email protected] MARKUS H. THOMA† Max-Planck-Institut f¨ ur extraterrestrische Physik, Giessenbachstraße, D-85748 Garching, Germany † E-mail:
[email protected] We calculate the decay rate of the lightest heavy Majorana neutrino in a thermal bath using finite temperature cutting rules and effective Green’s functions according to the hard thermal loop resummation technique. Compared to the usual approach where thermal masses are inserted into the kinematics of final states, we find that deviations arise through two different leptonic dispersion relations. The decay rate differs from the usual approach by more than one order of magnitude in the temperature range which is interesting for the weak washout regime. This work summarizes the results of Ref. 1, to which we refer the interested reader. Keywords: Leptogenesis; Thermal field theory; Finite temperature field theory; Hard thermal loop; Plasmino.
1. Introduction Leptogenesis2 is an extremely successful theory in explaining the baryon asymmetry of the universe by adding three heavy right-handed neutrinos Ni to the standard model, ¯i ∂µ γ µ Ni − λν,iα N ¯i φ† ℓα − 1 Mi N ¯i N c + h.c. , δL = iN (1) i 2 with masses Mi at the scale of grand unified theories (GUTs) and Yukawa couplings λν,iα similar to the other fermions. This also solves the problem of the light neutrino masses via the see-saw mechanism without fine-tuning.3 The heavy neutrinos decay into lepton and Higgs boson after inflation, the decay is out of equilibrium since there are no gauge couplings to the standard model. If the CP asymmetry in the Yukawa couplings is large enough, a lepton asymmetry is created by the decays which is then partially converted into a baryon asymmetry by sphaleron processes. As temperatures are high, interaction rates and the CP
November 22, 2010
15:14
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.03˙Kiessig
147
asymmetry need to be calculated using thermal field theory4 rather than vacuum quantum field theory. 2. Hard Thermal Loops and Thermal Masses When using bare thermal propagators in TFT,5 one can encounter IR singularities and gauge dependent results. In order to cure this problem, the hard thermal loop (HTL) resummation technique has been invented.6,7 If g is the coupling to the thermal bath, then for soft momenta K . gT , resummed propagators are used. For a scalar field e.g. this reads i∆∗ = i∆ + i∆(−iΠ)i∆ + · · · =
i i = 2 . ∆−1 − Π Q − m20 − Π
(2)
The self energy Π ∼ gT then acts as a thermal mass m2th = Π and gives a correction m2tot := m20 + m2th . 3. Decay and Inverse Decay Rate Since we are interested in regimes where both the Higgs boson and the lepton momentum can be soft, we resum both propagators. The HTL resummation technique has been considered in Ref. 8 for the case of a Dirac fermion with Yukawa coupling. In order to calculate the interaction rate Γ of N ↔ ℓφ, we cut the N self energy and use the HTL resummation for the lepton and Higgs boson propagators. Since λν,iα ≪ 1, it is justified to neglect the coupling of the neutrino to the thermal bath. According to finite-temperature cutting rules,9,10 the interaction rate reads Γ(P ) = −
1 tr[(P/ + M ) Im Σ(P )]. 2p0
At finite temperature, the self-energy reads Z X d3 k Σ(P ) = −g 2 T PL S ∗ (K) PR D∗ (Q), (2π)3
(3)
(4)
k0 =i(2n+1)πT
where PL and PR are the projection operators on left- and right-handed states, Q = P − K and we have summed over neutrino and lepton spins. We also sum over the two components of the doublets, particles and antiparticles and the three lepton flavors, such that g 2 = 4(λ†ν λν )11 . The HTL-resummed Higgs boson propagator is D∗ (Q) = 1/(Q2 − m2φ ), where m2φ /T 2 = (3/16 g22 + 1/16 gY2 + 1/4 yt2 + 1/2 λ) is the thermal mass of the Higgs boson. The couplings denote the SU(2) coupling g2 , the U(1) coupling gY , the top Yukawa coupling yt and the Higgs boson self coupling λ, where we assume a Higgs boson mass of 115 GeV. The other Yukawa couplings can be neglected since they are much smaller than unity and the remaining couplings are renormalized at the first Matsubara mode 2πT as explained in Ref. 4.
November 22, 2010
15:14
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.03˙Kiessig
148
The effective lepton propagator in the helicity-eigenstate representation is given by11 1 ˆ · γ) + 1 ∆− (K)(γ0 + k ˆ · γ), S ∗ (K) = ∆+ (K)(γ0 − k (5) 2 2 where −1 m2ℓ ±k0 − k k0 + k ∆± (K) = −k0 ± k + ±1 − ln (6) k 2k k0 − k and m2ℓ /T 2 = (3/32 g22 + 1/32 gY2 ). The trace can be evaluated as
tr[(P/ + M )PL S ∗ (K)PR ] = ∆+ (p0 − pη) + ∆− (p0 + pη),
(7)
where η = p · k/pk is the angle between neutrino and lepton. We evaluate the sum over Matsubara frequencies by using the Saclay method.12 For the Higgs boson propagator, the Saclay representation reads Z β 1 {[1 + nB (ωq )]e−ωq τ + nB (ωq )eωq τ }, (8) D∗ (Q) = − dτ eq0 τ 2ωq 0 where β = 1/T , nB (ωq ) = 1/(eωq β − 1) is the Bose-Einstein distribution and ωq2 = q 2 + m2φ . For the lepton propagator it is convenient to use the spectral representation13 Z β Z ∞ ′ ′ k0 τ ′ ∆± (K) = − dτ e dω ρ± (ω, k)[1 − nF (ω)]e−ωτ , (9) 0
−∞
where nF (ω) = 1/(eωβ + 1) is the Fermi-Dirac distribution and ρ± the spectral density.11 The lepton propagator in Eq. (5) has two different poles for 1/∆± = 0, which correspond to two leptonic quasiparticles with a positive (∆+ ) or negative (∆− ) ratio of helicity over chirality.14–16 The spectral density ρ± has a contribution from the poles and a discontinuous part. We are interested in the pole contribution ω2 − k2 (δ(ω − ω± ) + δ(ω + ω∓ )), (10) 2m2ℓ where ω± are the dispersion relations for the two quasiparticles, i.e. the solutions for 1/∆ ± (ω± , k) = 0, shown in Fig. 1 (a). An analytical solution for ω± can be found in the appendix of Ref. 1. One can assign a momentum-dependent thermal mass m± (k)2 = ω± (k)2 − k 2 to the two modes as√ shown in Fig. 1 (b) and for very large momenta the heavy mode m+ approaches 2 mℓ , while the light mode becomes massless. After evaluating the sum over k0 , carrying out the integrations over τ and τ ′ and integrating over the pole part of ρ± in Eq. (10), we get 2 X ω ± − k 2 1 + nB − nF nB + nF 1 ∗ + T D ∆± = − 2ωq 2m2ℓ p0 − ω ± − ω q p0 − ω ± + ω q k0 (11) 2 ω∓ − k2 nB + nF 1 + nB − nF + , + 2m2ℓ p0 + ω ∓ − ω q p0 + ω ∓ + ω q ρpole ± (ω, k) =
November 22, 2010
15:14
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.03˙Kiessig
149
where nB = nB (ωq ) and nF = nF (ω± ) or nF (ω∓ ), respectively. The four terms in Eq. (11) correspond to the processes with the energy relations indicated in the denominator, i.e. the decay N → φℓ, the production N φ → ℓ, the production N ℓ → φ and the production of N ℓφ from the vacuum, as well as the four inverse reactions.9 We are only interested in the process N ↔ φℓ, where the decay and inverse decay are illustrated by the statistical factors 1 + nB − nF = (1 + nB )(1 − nF ) + nB nF , given by the first term of Eq. (11). For carrying out the integration over the angle η, we use Im
1 ωq = −πδ(p0 − ω± − ωq ) = −π δ(η − η± ). p0 − ω ± − ω q kp
(12)
After integrating over η we get XZ k 1 dk |M± (P, K)|2 [1 + nB (ωq± ) − nF (ω± )] Γ(P ) = 16πp0 p ± −1≤η± ≤1 ω± Z 1 dk˜ d˜ q (2π)4 δ 4 (P − K − Q) |M± (P, K)|2 [1 + nB − nF ], = 2p0
(13)
where ωq± = p0 − ω± , we only integrate over regions with −1 ≤ η ≤ 1, dk˜ = d3 k/((2π)3 2 k0 ) and d˜ q analogously and the matrix elements are |M± (P, K)|2 = g 2
2 ω± − k2 ω± (p0 ∓ pη± ) . 2m2ℓ
(14)
In order to compare our result to the conventional approximation,4 we do the ∗ / − mℓ ). same calculation for an approximated lepton propagator Sapprox (K) = 1/(K 2
This amounts to setting ω 2 = k 2 + m2ℓ , ωq = p0 − ω and we get |M|2 = g2 (M 2 + m2ℓ − m2φ ) as matrix element. This result resembles the zero temperature result with zero temperature masses mℓ , mφ . The missing factor 1 + nB − nF = (1 + nB )(1 − nF ) + nB nF accounts for 3
q
1.4
k 2 + m2ℓ ω+ ω− k
2.5
1 √ ω 2 − k2 /mℓ
2 ω/mℓ
m+ /mℓ
1.2
1.5 1
0.8 0.6 m− /mℓ
0.4
0.5
0.2 0
0 0
0.5
1
1.5
2
2.5
3
0
1
2
(a)
3
4
5
k/mℓ
k/mℓ
(b)
Fig. 1. (a) The two leptonic dispersion relations compared with the standard dispersion relation. 2 − k2 . (b) The momentum-dependent quasiparticle masses m2± = ω±
November 22, 2010
15:14
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.03˙Kiessig
150
the statistical distribution of the initial or final particles. As pointed out in more detail in Ref. 17, we have shown that the approach to treat thermal masses like zero temperature masses in the final state4 is justified since it equals the HTL treatment with an approximate lepton propagator. However this approach does not equal the full HTL result. 4. Decay Density The quantity which enters the Boltzmann equations is the decay density integrated over all neutrino momenta. In equilibrium it reads Z ∞ Z 1 d3 p eq eq eq f (E) Γ = dE E p fN ΓD , (15) γD = D (2π)3 N 2π 2 M eq where E = p0 , fN (E) = [exp(Eβ) − 1]−1 is the equilibrium distribution of the eq neutrinos and ΓD = [1 − fN (E)] Γ. In Fig. 2, we compare our result to the conventional approximation.4 We evaluate the decay rate for M1 = 1010 GeV and m˜1 = (λν λ†ν )11 v 2 /M1 = 0.06 eV, where v = 174 GeV is the vacuum expectation value of the Higgs field.
10−8
γ0 γ±
γ/(1010 GeV)4
10−9 10−10
γ−
10−11 γ+ 10−12 10−13 10−14
(+) (0) 0
0.5
1
1.5
(−) 2
z = T /M Fig. 2. The neutrino decay density with the one lepton mode approach γ0 and the two-mode treatment γ± for M1 = 1010 GeV and m ˜ 1 = 0.06 eV. The thresholds for the two modes (+), (-) and one mode (0) are indicated.
In the one-mode approach, the decay is forbidden when M < mℓ + mφ . Considering two modes, the phase space is reduced for the positive mode due to the larger quasi-mass and at M = m+ (∞) + mφ , the decay is only possible into leptons with small momenta, thus the rate drops dramatically. The decay into the negative, quasi-massless mode is suppressed due to its much smaller residue. However,
November 22, 2010
15:14
WSPC - Proceedings Trim Size: 9.75in x 6.5in
02.03˙Kiessig
151
the decay is possible up to M = mφ . These rates differ from the one mode approach by more than one order of magnitude in the interesting temperature regime of z = T /M & 1. 5. Conclusions As discussed in detail in Ref. 17, we have, by employing HTL resummation and finite temperature cutting rules, confirmed that treating thermal masses as kinematic masses as in Ref. 4 is a reasonable approximation. We have calculated the decay density of the lightest heavy Majorana neutrino and its behavior can be explained by considering the dispersion relations ω± of the lepton modes and assigning momentum-dependent quasi-masses to them. The thresholds for neutrino decay reported in Ref. 4 are shifted and the decay density shows deviations of more than an order of magnitude in the interesting temperature regime T /M ∼ 1. In order to arrive at a minimal consistent treatment, also the decay φ → N ℓ at high temperatures needs to be included as well as Higgs boson and neutrino CP asymmetries which are corrected for lepton modes. This contribution summarizes the results of an earlier work1 and we refer the interested reader to the more elaborate treatment there. Acknowledgements We thank Georg Raffelt, Florian Hahn-W¨ornle, Steve Blanchet, Matthias Garny, Marco Drewes, Wilfried Buchm¨ uller, Martin Spinrath and Philipp Kostka for fruitful and inspiring discussions. References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17.
C. P. Kießig, M. Pl¨ umacher and M. H. Thoma, arXiv:1003.3016 [hep-ph]. M. Fukugita and T. Yanagida, Phys. Lett. B 174, 45 (1986). P. Minkowski, Phys. Lett. B 67, 421 (1977). G. F. Giudice, A. Notari, M. Raidal, A. Riotto and A. Strumia, Nucl. Phys. B 685,89 (2004) [arXiv:hep-ph/0310123]. M. Le Bellac, Thermal field theory (Cambridge University Press, Cambridge, UK, 1996). E. Braaten and R. D. Pisarski, Nucl. Phys. B 337,569 (1990). E. Braaten and R. D. Pisarski, Nucl. Phys. B 339,310 (1990). M. H. Thoma, Z. Phys. C 66, 491 (1995). [arXiv:hep-ph/9406242]. H. A. Weldon, Phys. Rev. D 28, 2007 (1983). R. L. Kobes and G. W. Semenoff, Nucl. Phys. B 272, 329 (1986). E. Braaten, R. D. Pisarski and T. C. Yuan, Phys. Rev. Lett. 64, 2242 (1990). R. D. Pisarski, Nucl. Phys. B 309, 476 (1988). R. D. Pisarski, Physica A 158, 146 (1989). V. V. Klimov, Sov. J. Nucl. Phys. 33, 934 (1981) [Yad. Fiz. 33, 1734 (1981)]. H. A. Weldon, Phys. Rev. D 26, 2789 (1982). H. A. Weldon, Phys. Rev. D 40, 2410 (1989). C. P. Kießig and M. Pl¨ umacher, [arXiv:hep-ph/0910.4872].
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 11, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
PART III
New Interactions, Inflationary and Quantum Cosmology
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
155
ANOMALY DRIVEN SIGNATURES OF EXTRA U (1)’s IGNATIOS ANTONIADISa∗ , ALEXEY BOYARSKYb,c and OLEG RUCHAYSKIYb a CERN
PH-TH, CH-1211 Geneva 23, Switzerland Polytechnique F´ ed´ erale de Lausanne, FSB/ITP/LPPC, BSP, CH-1015, Lausanne, Switzerland b Ecole
c Bogolyubov
Institute for Theoretical Physics, Kiev 03680, Ukraine
Anomaly cancellation between different sectors of a theory may mediate new interactions between gauge bosons. Such interactions lead to observable effects both at precision laboratory experiments and at accelerators. Such experiments may reveal the presence of hidden sectors or hidden extra dimensions. Keywords: anomalies; additional U (1) fields; axion-like particles.
1. Introduction Theories in which fermions have chiral couplings with gauge fields are known to suffer from anomalies – a phenomenon of breaking of gauge symmetries of a classical theory at one-loop level. Anomalies make a theory inconsistent (in particular, its unitarity is lost). The only way to restore its consistency is to arrange for an exact cancellation of anomalies between the various chiral sectors. This happens, for example, in the Standard Model (SM), where the cancellation occurs between quarks and leptons within each generation.1 Another well studied example is the Green-Schwarz anomaly cancellation mechanism2 in string theory. In this case the cancellation arises between the anomalous contribution of chiral matter of the closed string sector with that of the open string. Formally, the Green-Schwarz anomaly cancellation occurs due to the anomalous Bianchi identity for the field strength of the closed 2-form. However, this modification of Bianchi identity arises from the 1-loop contribution of chiral fermions in the open string sector. A toy model, describing microscopically Green-Schwarz mechanism was studied e.g. in ref.3 Particles involved in anomaly cancellation may have very different masses. For example, the mass of top quark in the SM is much higher than the masses of all other fermions. However, gauge invariance should pertain at all energies, including those which are smaller than the mass of some particles involved in anomaly cancellation.
∗ On
´ leave from CPHT, UMR du CNRS 7644, Ecole Polytechnique, 91128 Palaiseau, France
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
156
The usual logic of field theory is that interactions, mediated by heavy fermions running in loops, are suppressed by the masses of these fermions.4 The case of anomaly cancellation presents a notable counterexample to this famous “decoupling theorem” – the contribution of a priori arbitrary heavy particles should remain unsuppressed at low energies. As it was pointed out in,5 this is possible because anomalous (i.e. gauge-variant) terms in the effective action have topological nature and therefore are scale independent. As a result, they are not suppressed even at energies much smaller than the masses of the particles producing these terms via loop effects. This gives a hope to see at low energies some signatures of new physics. 2. BSM Physics and Extra U (1) Fields As discussed above, non-trivial anomaly cancellation generically should involve at least one gauge field beyond the SM gauge sector. To reconcile this with existing experimental bounds, such an anomaly cancellation should take place between SM and “hidden” sector, with new particles appearing at relatively high energies. Here we concentrate on the case of one additional Abelian group. Extra U (1) fields appear in many extensions of the Standard Model (see e.g.6 and refs. therein). For example, additional U (1)s appear naturally in models in which SU (2) and SU (3) gauge factors of the SM arise as parts of unitary U (2) and U (3) groups (as e.g. in D-brane constructions of the SM7–10 ). A common feature of these models is a non-trivial cancellation of anomalies between various sectors of the theory. 3. Mixed Anomaly Involving Photon If the mixed anomaly cancellation between several groups of fermions involves the photon field Aµ , terms (often called Generalized Chern-Simons) can appear in the action κ Lcs = − µνλρ Xµ Aν Fλρ (1) 2 where Xµ is an extra U (1). Here κ is a dimensionless coupling constant. The ChernSimons-like interaction (1) appears in various models (see e.g.7,10–19 ). This term resembles an axion coupling to photon Laγγ =
1 µνλρ a µνλρ F µν F λρ = − (∂µ a) Aν Fλρ 4fa 2fa
(2)
under the identification of a derivatively coupled pseudo-scalar with the longitudinal part of the massive vector field Xµ , ∂µ a −→ mX Xµ , where mX is the mass of this new vector boson. An analog of Peccei-Quinn scale fa is played in this theory by the combination mX fa ↔ (3) κ Notice, that by making the coupling κ small, one can have fa mX .
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
157
The simplest model involving the interaction term (1) is given by the effective action (with the masses generated via the St¨ uckelberg mechanism): ! Z 1 2 1 2 m2X 2 m2γ 2 κ µνλρ 4 Xµ + A + Xµ Aν Fλρ (4) S = d x − Fµν − Xµν + 4 4 2 2 µ 2 Here Xµν = ∂µ Xν − ∂ν Xµ is the field strength of the massive vector field Xµ . The Chern-Simons-like term is not gauge invariant under the electromagnetic gauge transformation U (1)QED . To amend this drawback, we introduced a mass to the photon (mγ ), consistent with all existing restrictions (see e.g.20 for current bounds on the photon mass). Alternatively, one can impose an additional constraint in the theory: Fµν Xλρ µνλρ = 0.14 For optical experiments the phenomenology of the theory (4) is similar to that of the axion-like particles (ALPs) with (3) and (mX ↔ ma ). However, at higher energies the phenomenology of the theory (4) can get significantly different. Indeed, if a massive vector field couples to a conserved current, all the processes involving the longitudinal degree of freedom are suppressed at energies much greater than its mass mX . On the other hand, if the current is not conserved, for E mX the longitudinal polarization behaves as a derivatively coupled scalar (the so called Goldstone boson equivalence theorem 21 ). This is what may happen in theory (4). Although the theory can be written in a formally gauge invariant form under the U (1)X gauge symmetry by introducing a St¨ uckelberg field θX , the symmetry is realized by simultaneous gauge transformations of the X-field and θX -shifts. As a result, the field Xµ couples to a µ non-conserved current (jX ≡ δL/δXµ ) and therefore its longitudinal polarization behaves as an axion (for E mX ). However, the theory (4) is an effective field theory, valid up to a certain energy scale Λ. This scale Λ . mκX , as one can easily find by analyzing the unitarity bound in tree-level processes with outgoing longitudinally polarized X. It may naturally happen that for E & Λ the theory gets modified in such a way that the current µ jX becomes conserved. Then, all processes involving emission or absorption of the 2 longitudinal polarization of Xµ get suppressed as mEX . As we are interested in the situation where the field Xµ can be produced at laboratory energies (e.g. in laser experiments), its mass should be mX . Elab ∼ eV. The stringent constraints on ALPs come from stellar observations (see e.g. 22,23 ). The ALPs, created in stellar interior via the interactions (2), can significantly change burning cycles and life-times of stars (see22 ). To change the situation, as compared to a standard axion with energy-independent coupling, the scale of new physics Λ should be in the keV region Λ ∼ E? ∼ keV. The conservation of the current µ jX implies a suppression of emission of longitudinal vector boson by at least ∼ (Elab /E? )2 ∼ 10−6 . Taking into account the astrophysical constraints, one finds κ . 10−10 eV/mX . Thus, the theory with Chern-Simons (CS) interaction (4) does not resemble the theory of ALPs. In particular, the production of Xµ is strongly suppressed by the small value of the dimensionless CS coupling κ.
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
158
To illustrate this idea, assume that there is an additional fermion with mass Mf , interacting with the fields of the theory (4) and giving rise to an effective action of the following (schematic) form: κ 1 2 1 2 m2X κ L = − Fµν F F˜ − Xµν + (Dµ θX )2 + µνλρ Xµ Aν Fλρ + Mf2 θX −∂µ X µ 4 4 2 2 +Mf2 (5) where we introduced the notation F F˜ = 21 µνλρ Fµν Fλρ . For simplicity of the presentation we work with the non-local action (5), but one can find an example of a renormalizable field theory which shares these properties in Ref.15 Recall that we add to this theory the constraint µνλρ Fµν Xλρ = 0 to make it gauge invariant with respect to U (1)A transformations. Let us now demonstrate that this theory possesses the desired properties: At low energies (for E < Mf ) one obtains the action (4) (formally taking Mf → ∞). To analyze the theory at high energies (E Mf ), one can formally take Mf → 0 and neglect the interaction term proportional to θX in the action (5). As a result at high energies, the field Xµ couples to the conserved current µ jX =
∂µ κ µνλρ Aν Fλρ − κ (F F˜ ) . 2
(6)
Therefore, at energies E Mf the production of the longitudinally polarized Xµ field in theory (5) is suppressed. Of course, for E > Mf the current (6) should be computed directly in the microscopic theory producing the non-local terms in (5), containing additional particles, rather than in the non-local effective theory. However, the conclusion remains the same. The effect of decoupling of the longitudinal polarization of the vector boson at high energies, significantly changes the phenomenology. Most interestingly, it allows to reconcile the stellar constraints on ALPs (see e.g.22,23 ) with a possible signal in the high precision optical experiments, outside the standard axion parameter space (see e.g.24–29 ). Notice, that such a model requires fermions with masses Elab . Mf < E? , i.e. in the range from ∼ 1 eV to ∼ 1 keV. There are various restrictions on the charges qf of such fermions. First, laboratory bounds, coming from the contribution to the Lamb shift and invisible orthopositronium decay30 (based on the results of31 ) give qf < 10−4 . A stronger bound on millicharged fermions (qf < 10−6 ) with sub-eV masses comes from the requirement that such fermions do not distort the CMB spectrum too much.32 However, this restriction is not applicable in our case as the fermion masses are assumed to be above Mf > Elab ∼ 1 eV. Finally, the strongest bound (qf < 10−14 ) on the charges of fermions with mass below . 30 keV comes from limiting the contribution of these particles to the energy transfer in stars33 (see also22 ). To satisfy this bound, the vector field Xµ should be extremely light with mX ∼ 10−10 eV and κ ∼ 10−2815 (which is a possibility). However, these bounds can be avoided in our model because the paraphoton field Xµ acquires a kinetic mixing with the photon due to the loop corrections coming from
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
159
light fermions. Therefore, the mechanism of additional suppression of the coupling of fermions with the photon in stars, proposed in,34 is possible. The restriction then 2 E? becomes qf . 10−14 m , i.e. the stellar bound of30,33 is weakened by at least X six orders of magnitude, making the model compatible with existing observations (see15 for details). If a non-trivial anomaly cancellation involves the electromagnetic U (1) gauge group (c.f. e.g.14,15,35–41 ), observable effects may be present in optical experiments. Indeed, such high precision experiments (e.g. those measuring the change of polarization of light propagating in a strong magnetic field) could in principle see the ~ ·H ~ 6= 0. There exists a significant anomalous terms, proportional to F˜ · F = 4E experimental activity searching for such signals (see e.g.42–50 ), as various ALPs are expected to couple to F˜µν F µν and produce interesting signatures in parallel electric and magnetic fields. A different type of experiment using static fields, which may test effects caused by non-trivial anomaly cancellation in the electromagnetic sector, was suggested in.36 4. Mixed Anomalies Involving Z, W ± Bosons Let us turn our attention to a situation, where anomalous charges and therefore, anomaly-induced effects, are O(1). To this end we consider an additional UX (1) factor. As the SM fermions are chiral with respect to the EW group SU (2)W × UY (1), even choosing the charges for the UX (1) group so that the triangular UX (1)3 anomaly vanishes, still this may easily give rise to the appearance of mixed anomalies: UX (1)UY (1)2 , UX (1)2 UY (1), UX (1)SU (2)2 . In this work we are interested in the situation when only (some of these) mixed anomalies with the electroweak group SU (2)×UY (1) are non-zero. A number of works have already discussed such theories and their signatures (see e.g.10,13–15,19,51–53 ). The question of experimental signatures of such theories at LHC should be addressed differently, depending on whether or not the SM fermions are charged with respect to the UX (1) group: • If the SM fermions are charged with respect to UX (1) and the mass of the new X boson is in the TeV scale, one should be able to see the corresponding resonance in the forthcoming runs of LHC. In this case an important question is to distinguish between theories with non-trivial cancellation of mixed anomalies, and those which are anomaly free. • If the SM fermions are not charged with respect to the UX (1) group, the direct production of the X boson is impossible. Therefore, the question of whether an anomalous gauge boson with mass MX ∼ 1 TeV can be detected at LHC becomes especially interesting. A theory in which the cancellation of the mixed UX (1)SU (2)2 anomaly occurs between some heavy fermions and Green-Schwarz (i.e. tree-level gauge-variant) terms was considered in.53 The leading non-gauge invariant contributions from the
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
160
triangular diagrams of heavy fermions, unsuppressed by the fermion masses, cancels the Green-Schwarz terms. The triangular diagrams also produce subleading (gaugeinvariant) terms, suppressed by the mass of the fermions running in the loop. This leads to an appearance of dimension-6 operators in the effective action, having the 3 general form Fµν /Mf2 , where Fµν is the field strength of X, Z or W ± bosons. Such terms contribute to the XZZ and XW W vertices. As the fermions in the loops are heavy, such vertices are in general strongly suppressed by their mass. However, motivated by various string constructions,53 has assumed two things: (a) these additional massive fermions are above the LHC reach but not too heavy (e.g. have masses in tens of TeV); (b) there are many such fermions (for instance Hagedorn tower of states) and therefore the mass suppression can be compensated by the large multiplicity of these fermions. In16 another possible setup was considered, in which the anomaly cancellation occurs only within a high-energy sector (at scales not accessible by current experiments), but at low energies there remain terms XW W and XZZ unsuppressed by masses of heavy particles. Such a theory has unique experimental signatures and can be tested at LHC. At energies accessible at LHC and below the masses of the new heavy fermions, the theory in question is simply the SM plus a massive vector boson X: L = LSM −
1 M2 |FX |2 + X |DθX |2 + Lint 2 4gX 2
(7)
where θX is a pseudo-scalar field, charged under UX (1) so that DθX = dθX + X remains gauge invariant (St¨ uckelberg field). One can think about θX as being a phase of a heavy Higgs field, which gets “eaten” by the longitudinal component of the X boson. The interaction term Lint contains the vertices between the X boson and the Z, γ, W ± : Lint = c1 µνλρ
λρ † H † Dµ H λρ µνλρ HFW Dµ H D θ F + c Dν θ X ν X 2 Y |H|2 |H|2
(8)
The coefficients c1 , c2 in front of these terms are dimensionless and can have arbitrary values, determined entirely by the properties of the high-energy theory. We will often call the terms in eq. (8) as the D’Hoker-Farhi terms.5 The simplest possibility for the origin of these terms would be to add to the SM several heavy fermions, charged with respect to SU (2) × UY (1) × UX (1). An example of such a theory is provided in.16 The idea is to choose charges so that a group of fermions ψ should be vector-like with respect to the group U (1)Y and chiral with respect to the U (1)X and another group of fermions χ should be the other way around. The choice of charges is such that triangular anomalies [U (1)Y ]3 and [U (1)X ]3 cancel separately for the ψ and χ sector. It is also possible that the fermion masses are not generated via the Higgs mechanism, (e.g. coming from extra dimensions) and are not directly related to the masses of the gauge fields. In this case, the decoupling theorem may not hold and new terms can appear in a wide range of energies (see e.g.35,36 for discussion).
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
161
5. Phenomenology The terms in (8) in the EW broken phase generate three interaction vertices: XZZ, XZγ and XW + W − . The most important relevant fact to phenomenology is that the X boson is produced by and decays into SM gauge bosons. We shall discuss in turn the production mechanisms and the decay final states of the X boson and then estimate the discovery capability at colliders. Producing the X boson must proceed via its coupling to pairs of SM gauge bosons. One such mechanism is through vector-boson fusion, where two SM gauge bosons are radiated off initial state quark lines and fused into an X boson: pp → qq 0 V V 0 → qq 0 X or V V 0 → X for short,
(9)
where V V 0 can be W + W − , ZZ or Zγ. This production mechanism was studied in ref.53 One of the advantages is that if the decays of X are not much different than the SM, the high-rapidity quarks that accompany the event can be used as “tagging jets” to help separate signal from the background. This production mechanism is very similar to what has been exploited in the Higgs boson literature. A second class of production channels is through associated production: pp → qq 0 → V ∗ → XV 0
(10)
where an off-shell vector boson V ∗ and the final state V 0 can be any of the SM electroweak gauge bosons: XZ, XW ± or Xγ. It turns out that this production class has a larger cross-section than the vector boson fusion class. This is opposite to what one finds in SM Higgs phenomenology, where V V 0 → H cross-section is by O(102 ) greater than HV 0 associated production. The reason for this is that both vector bosons can be longitudinal when scattering into H, thereby increasing the V V 0 → H cross-section over HV 0 . This is not the case for the X boson production, in which only one longitudinal boson can be present at the vertex. This leads to a suppression √ by ∼ ( s/MV )2 of the process (9) as opposed to the similar process for the Higgs √ boson. For LHC energies ( s ∼ 10 TeV) this suppression is of the order 10−4 . Without special longitudinal enhancements, the two body final state XV 0 dominates over the three-body final state qq 0 X, which makes the associated production (10) about 2 orders of magnitude stronger than the corresponding vector-boson fusion. As we shall see below, the decays of the X boson are sufficiently exotic in nature that background issues do not change the ordering of the importance of these two classes of diagrams. Thus, we focus our attention on the associated production XV 0 to estimate collider sensitivities. In Fig. 1 (upper part) we plot the production cross-sections of XV for various √ V = W ± , Z, γ at s = 14 TeV pp LHC. 6. Collider Searches Combining the various production modes and branching fractions yields many permutations of final states to consider at high energy colliders. All permutations,
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
162
1
10
+
W- X WX ZX γX
0
10
-1
σ (pb)
10
-2
10
-3
10
-4
10
-5
10
-6
10 0
1000
2000
3000
4000
1 dσ σ dcosθ
MX (GeV)
500
400
300
200
100
0 −1.5
−1
−0.5
0
0.5
1
1.5 cosθ
√ Fig. 1. Upper Part: Production cross-section at s = 14 TeV LHC of XV 0 for various V 0 = ± W , Z, γ (in corresponding descending order from the left side) vs. the X boson mass with c 1 = c2 = 0.1. Down Part: The cos θl− distribution of the l− from X → γZ → γl+ l− decays for X being a scalar (middle-lower line) or vector (middle-upper line) particle. The angle is defined in the Z rest frame with respect to the Z boost direction, MX = 500 GeV.
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
163
after taking into account X decays, give rise to three vector boson final states such as ZZZ, W + W − γ, etc. The collider phenomenology associated with these kinds of final states is interesting, and we focus on a few aspects of it below. Our primary interest will be to study how sensitive the LHC is to finding this kind of X boson. After discovery is made a comprehensive study programme to measure all the final states, and determine production cross-sections and branching ratios would be a major endeavor by the experimental community. However, the first step is discovery. In this section we demonstrate one of the cleanest and most unique discovery modes to this theory. It turns out that the X → γZ decay mode is especially important for this kind of theory.10 Consulting the production crosssections results for LHC, we find that producing the X in association with W ± gives the highest rate. Thus, we focus our attention on discovering the X boson through XW ± production followed by X → γZ decay. The γZW ± signature is an interesting one since it involves all three electroweak gauge bosons. If the Z decays into leptons, it is especially easy to find the X boson mass through the invariant mass reconstruction of γl + l− . The additional W is also helpful as it can be used to further cut out background by requiring an additional lepton if the W decays leptonically, or by requiring that two jets reconstruct a W mass. In our analysis,54 we are very conservative and only consider the leptonic decays of the Z and the W . Thus, after assuming X → γZ decay, 1.4% of γZW ± turn into ± γl+ l− l0 plus missing ET events.a These events have very little background when cut around their kinematic expectations. For example, if we assume MX = 1 TeV we find negligible background while retaining 0.82 fraction of all signal events when making kinematic cuts η(γ, l) < 2.5, ml+ l− = mZ ± 5 GeV, pT (γ) > 50 GeV, pT (l+ , l− , l0 ) > 10 GeV, missing ET greater than 10 GeV and mγl+ l− > 500 GeV. Thus, for 10 fb−1 of integrated luminosity at the LHC, when ci = 1 (ci = 0.1) we get at least five events of this type, γl + l− l0 plus missing ET , if MX > 4 TeV (MX > 2 TeV). This would be a clear discovery of physics beyond the SM and would point to a new resonance, the X boson. After discovery, in addition to doing a comprehensive search over all possible final states, each individual final state will be studied carefully to see what evidence exists for the spin of the X boson. The topology of γZW ± exists within the SM for HW ± production followed by H → γZ decays. However, the rate at which this happens is very suppressed even for the most optimal mass range of the Higgs boson.55 A heavy resonance that decays into γZ would certainly not be a SM Higgs boson, but nevertheless a scalar origin would be possible. Careful studying of angular correlations among the final state particles can help answer the question of the X spin directly. For example, distinguishing between the a If the c coefficients are small, it may be helpful to analyze the more copious hadronic decays of i the W . The background is higher, but with additional cuts and analysis techniques one may be able to gain in significance.
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
164
scalar and vector spin possibilities of the X boson is possible by carefully analyzing the photon’s cos θγ distribution with respect to the X boost direction in X → γZ decays in the rest frame of the X. If X is a scalar its distribution is flat in cos θ, whereas if it is a vector it has a non-trivial dependence on cos θ. An even more discerning observable is the angular correlation of the leptons in subsequent Z → l + l− decays. This is because whether X is a scalar or a vector, the Z boson polarizations are different, and this information is transmitted to the decay products. The cos θl− distribution in the Z rest frame with respect to the Z boost direction carries this information. This is plotted in Fig. 1 (down panel) for the case of X being a scalar (middle-lower line) or a vector (middle-upper line) particle. With enough events (several hundred) this distribution can be filled in, and the spin of the X resonance can be discerned among the possibilities. Acknowledgments Work supported in part by the European Commission under the ERC Advanced Grant “MassTeV” ERC-2008-AdG 20080228 and the ITN contract “UNILHC” PITN-GA-2009-237920. References 1. D. J. Gross, and R. Jackiw, Phys. Rev. D6, 477–493 (1972); C. Bouchiat, J. Iliopoulos, and P. Meyer, Phys. Lett. B38, 519–523 (1972); H. Georgi, and S. L. Glashow, Phys. Rev. D6, 429 (1972). 2. M. B. Green, and J. H. Schwarz, Phys. Lett. B149, 117–122 (1984). 3. A. Boyarsky, J. A. Harvey, and O. Ruchayskiy, Annals Phys. 301, 1–21 (2002), hep-th/0203154. 4. T. Appelquist, and J. Carazzone, Phys. Rev. D11, 2856 (1975). 5. E. D’Hoker, and E. Farhi, Nucl. Phys. B248, 59 (1984); ibid. 77. 6. J. L. Hewett, and T. G. Rizzo, Phys. Rept. 183, 193 (1989). 7. I. Antoniadis, E. Kiritsis, and T. N. Tomaras, Phys. Lett. B486, 186–193 (2000), hep-ph/0004214. 8. L. E. Ibanez, F. Marchesano, and R. Rabadan, JHEP 11, 002 (2001), hep-th/0105155. 9. I. Antoniadis, E. Kiritsis, J. Rizos, and T. N. Tomaras, Nucl. Phys. B 660, 81–115 (2003). 10. P. Anastasopoulos, M. Bianchi, E. Dudas, and E. Kiritsis, JHEP 11, 057 (2006a), hep-th/0605225. 11. C. Coriano, N. Irges, and E. Kiritsis, Nucl. Phys. B746, 77–135 (2006), hep-ph/ 0510332. 12. C. Coriano, M. Guzzi, and S. Morelli (2008a), arXiv:0801.2949[hep-ph]. 13. R. Armillis, C. Coriano, and M. Guzzi (2007), 0711.3424[hep-ph]; 0709.2111 14. I. Antoniadis, A. Boyarsky, and O. Ruchayskiy (2006), hep-ph/0606306. 15. I. Antoniadis, A. Boyarsky, and O. Ruchayskiy, Nucl. Phys. B793, 246–259 (2008a), 0708.3001. 16. I. Antoniadis, A. Boyarsky, S. Espahbodi, O. Ruchayskiy, and J. D. Wells, Nucl. Phys. B824, 296–313 (2010), 0901.0639. 17. P. Anastasopoulos, et al., Phys. Rev. D78, 085014 (2008), 0804.1156. 18. J. A. Harvey, C. T. Hill, and R. J. Hill, Phys. Rev. D77, 085017 (2008), 0712.1230.
November 22, 2010
15:44
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.01˙Antoniadis
165
19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33. 34. 35. 36. 37. 38. 39. 40. 41. 42. 43. 44. 45. 46. 47. 48. 49. 50. 51. 52. 53. 54. 55.
E. Dudas, Y. Mambrini, S. Pokorski, and A. Romagnoni (2009), 0904.1745. C. Amsler, et al., Phys. Lett. B667, 1 (2008). J. M. Cornwall, D. N. Levin, and G. Tiktopoulos, Phys. Rev. D10, 1145 (1974). G. G. Raffelt, Stars as laboratories for fundamental physics, UofC Press, Chicago, USA, 1996. K. Zioutas, et al., Phys. Rev. Lett. 94, 121301 (2005), hep-ex/0411033. S.-J. Chen, H.-H. Mei, and W.-T. Ni, Mod. Phys. Lett. A22, 2815–2831 (2007), hep-ex/0611050. K. Ehret (2008), 0812.3495. P. Pugnat, et al., Phys. Rev. D78, 092003 (2008), 0712.3362. A. S. Chou, et al., Phys. Rev. Lett. 100, 080402 (2008), 0710.3783. A. Afanasev, et al., Phys. Rev. Lett. 101, 120401 (2008), 0806.2631. M. Fouche, et al., Phys. Rev. D78, 032013 (2008), 0808.2800. S. Davidson, S. Hannestad, and G. Raffelt, JHEP 05, 003 (2000a), hep-ph/0001179. T. Mitsui, et al., Phys. Rev. Lett. 70, 2265–2268 (1993). A. Melchiorri, A. Polosa, and A. Strumia, Phys. Lett. B650, 416–420 (2007), hep-ph/ 0703144. S. Davidson, B. Campbell, and D. C. Bailey, Phys. Rev. D43, 2314–2321 (1991). E. Masso, and J. Redondo, Phys. Rev. Lett. 97, 151802 (2006), hep-ph/0606163. A. Boyarsky, O. Ruchayskiy, and M. Shaposhnikov, Phys. Rev. D72, 085011 (2005a), hep-th/0507098. A. Boyarsky, O. Ruchayskiy, and M. Shaposhnikov, Phys. Lett. B626, 184–194 (2005b). H. Gies, J. Jaeckel, and A. Ringwald, Phys. Rev. Lett. 97, 140402–+ (2006), arXiv: hep-ph/0607118. M. Ahlers, H. Gies, J. Jaeckel, and A. Ringwald, Phys. Rev. D75, 035011 (2007), hep-ph/0612098. A. Ringwald (2005), hep-ph/0511184. J. Jaeckel, and A. Ringwald, Phys. Lett. B659, 509–514 (2008), 0707.2063. M. Ahlers, J. Jaeckel, J. Redondo, and A. Ringwald, Phys. Rev. D78, 075005 (2008), 0807.4143. G. Ruoso, et al., Z. Phys. C56, 505–508 (1992). R. Cameron, et al., Phys. Rev. D47, 3707–3725 (1993). E. Zavattini, et al., Phys. Rev. Lett. 96, 110406 (2006), hep-ex/0507107. S.-J. Chen, H.-H. Mei, and W.-T. Ni (2006), hep-ex/0611050. E. Zavattini, et al. (2007b), arXiv:0706.3419[hep-ex]. K. Ehret, et al. (2007), hep-ex/0702023. C. Rizzo, Laboratory and astrophysical tests of vacuum magnetism: the BMV project (2006), 2nd ILIAS-CAST-CERN Axion Training, http://cast.mppmu.mpg.de. C. Robilliard, et al. (2007), arXiv:0707.1296[hep-ex]. A. V. Afanasev, O. K. Baker, and K. W. McFarlane (2006), hep-ph/0605250. C. Coriano, N. Irges, and S. Morelli, JHEP 07, 008 (2007), hep-ph/0701010. C. Coriano, N. Irges, and S. Morelli, Nucl. Phys. B789, 133–174 (2008c), hep-ph/ 0703127. J. Kumar, A. Rajaraman, and J. D. Wells (2007), arXiv:0707.3488[hep-ph]. F. Maltoni, and T. Stelzer, JHEP 02, 027 (2003), hep-ph/0208156. A. Djouadi, V. Driesen, W. Hollik, and A. Kraft, Eur. Phys. J. C1, 163–175 (1998), hep-ph/9701342.
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
166
AN ADJUSTABLE COSMOLOGICAL CONSTANT JIHN E. KIM Department of Physics and Astronomy and Center for Theoretical Physics, Seoul National University, Seoul 151-747, Korea E-mail:
[email protected] There are three cosmological constant problems in particle physics. Here, I discuss Hawking’s idea of calculating the probability amplitude for our Universe to be at Λ = 0 and the initial inflationary period in a self-tuning model. I review what has been discussed on the Hawking type calculation with H 2 Lagrangian, present a (probably) correct way to calculate the amplitude, and show that the Kim-Kyae-Lee self-tuning model allows a finite range of parameters for the Λ = 0 to have a singularly large probability. Keywords: Cosmological constant; Self-tuning; Brane scenario; Probability amplitude.
1. Introduction The cosmological constant was introduced almost ninety-three years ago by Einstein. Since the spontaneous symmetry breaking is known, Veltman commented that the vacuum energy arising in spontaneous symmetry breaking adds to the cosmological constant,1 basically raising a question on the naturalness of setting it to zero. Even before considering the tree level cosmological constant (CC), the loop correction to the vacuum energy was a problem since the early days of quantum mechanics. In the CC discussion here, we will not rely on the anthropic arguments.2 So, we consider the cosmological constant generically, at the tree and also at loop levels unless the figures of Fig. 1 are forbidden. The LHS figure of Fig. 1 corresponds to 21 ~ω per mode and the RHS figure is the two-loop vacuum energy arising from the A-terms in supergravity. If a symmetry is present in changing Λ, one may try a scalar potential of Fig. 2 where the vertical axis corresponds to the CC. The vanishing CC is the point the arrow indicates and the vacuum is the point where the bullet is located. As Fig. 2 shows, in general the vanishing CC point does not corresponds to the bullet where the equation of motion is satisfied. So, a solution is not easily realizable in 4D. In addition, the CC problem must also take into account the onset of spontaneous symmetry breaking.1 The CC is a serious fine-tuning problem. In 4D, we do not find any symmetry such that the CC is forbidden. Note, however, recent tries of scale invariance and brane statistical search.3 Another question is at which energy scale, the CC is required to vanish. There
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
167
A∗ijk
Aijk
'N
m2B Λ2 8π2
'
Fig. 1.
1 (4π 2 )2
P ijk
Aijk A∗ijk Λ2
Loop corrections to the CC.
How is this chosen?
V
hFieldi
• Satisfied by Eq. of motion
Fig. 2.
A form of the potential energy in terms of CC.
exists a hierarchy of mass scales in particle physics: Planck scale : 2.44 × 1018 GeV
GUT scale : 2 × 1016 GeV
Inflation scale : ' 1016 GeV, down to the EW scale Axion scale : ' 1012 − 1011 GeV
Hidden sector scale : 1013 GeV EW scale : 100 GeV QCD scale : 1 GeV
Nuclear physics scale : 10 MeV electron mass scale : 0.5 MeV accelerating universe scale : (0.002 eV)4 Even though we suppose to have a CC solution at an EW scale, still 10−60 finetuning is required. 2. Probability Amplitude When we consider quantum mechanics, we talk in terms of the probability amplitude: The initial state |Ii to transform to a final state |F i. In this spirit, Baum4 and Hawking5 considered the Euclidian action, only with the Ricci scalar R and
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
168
V
How is this chosen? hFieldi
• Satisfied by Eq. of motion Fig. 3.
A form of the potential energy with multiple solutions.
the CC terms. Thus, the needed Euclidian action integral has the form ˜
2
e−IE = e3πMP /Λ .
(1)
So, for Λ = 0+ the action integral is dominated, which is interpreted as the probability for Λ = 0+ is close to 1. But, there are questions regarding to this Baum-Hawking solution. Hawking5 states, “My proposal requires that a variable effective CC be genarated in some manner. One possibility would be to include the values of the CC in the variables that are integrated over in the path integral.” Ref. [5] explicitly considered a scalar field without the kinetic energy term in terms of Aµνρ (or the field strength Hµνρσ ). In this scenario, the needed quantity to calculate is the action integral. Even if we understand the CC in this way, there exists another questions such as • How do we assign the initial state? • How does the needed primary inflation come about in this scenario? • How does it fit to the current dark energy? So the CC solution needs to explain the other two CC related questions also and furthermore needs an argument what was the proper initial condition of our Universe. The existing idea of Hawking in terms of Hµνρσ with no kinetic energy term cannot explain all the above questions. We must introduce the kinetic term with the potential shape given as that of Fig. 3 so that the point where the equation of motion is satisfied can be the point with Λ = 0. For example, Hµνρσ can achieve this but without the kinetic energy term in 4D we cannot realize in choosing the Λ = 0 point. If we want to use Hµνρσ field, we must work beyond 4D.
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
169
3. Self-Tuning Solution and the Initial State of the Universe In 2000, self-tuning solutions have been tried in the RS type models. Here, I just mention the initial try and its failure. In an RS-I type model, Ref. [6] tried to show that SM fields living at the SM brane located at x5 = 0 do not change, via the loop corrections at the brane, the CC solution of the bulk. Here, the bulk action is fixed with specific magnitude of coupling, Z √ 4 I5 = d5 x −g R − (∂φ)2 − Λeaφ 3 (2) Z √ + d4 x −g4 −V ebφ x5 =0 . In a sense, the vacuum energy of the SM brane is cured. However, the specific form of the above bulk action is arguable for a general CC solution. Even though we allow this procedure, still it has its own problem that a singularity is present at a point ys in the bulk. The singularity can be cured by inserting the Z2 symmetric branes at ±ys ,7 Z Z √ √ (3) I5 + d4 x −g4 −V+ eb+ φ x5 =+ys + d4 x −g4 −V− eb− φ x5 =−ys .
Then, to cancel the contribution of the SM brane of Eq. (3), one must fine-tune the CC contribution from the singularities of Eq. (3).7 Again a fine-tuning is needed: Λ4D = E0 + E+ + E− = 0, leading to a fine tuning between V+ , b+ , V− , and b− . Furthermore, there exists the no-go theorem under some plausible conditions such that one employs the usual kinetic energy term and assumes the existence of Lorentz symmetry and 4D gravity for a large distance separation.8 Here, we try to go beyond the above set-up. Namely, we do not specify the bulk action. Instead, we allow non-standard kinetic energy term to avoid the no-go theorem. In our discussion we will distinguish Λ’s, depending where it originates, the barred ones and the rest: Λ = obtained from gµν Λ = obtained from Tµν
(4)
In this spirit, there exists one self-tuning model by Kim, Kyae and Lee (KKL).9 The KKL model is worked out in the Randal-Sundrum II type model,10 with a nonstandard kinetic energy term of the antisymmetric field strength HM N P Q : ∼ 1/H 2 [9], Z Z Z 2 · 4! 1 √ 5 R(5) − − Λb − Λ1 δ(y) = dy d4 xE −IE = d xE g(5) 2 H2 (5) n o 1 2 · 4!Ψ4 4 2 3 00 2 0 2 4 − Ψ Λ1 δ(y) + RΨ + 4Ψ Ψ + 6Ψ (Ψ ) + − Ψ Λb 2 H2 where the metric is taken as ds2 = β 2 (y)ηµν dxµ dxν + dy 2 with the signature ηµν = diag.(−1, +1, +1, +1). The kinetic energy term with H 2 is not developing a VEV in the low energy theory, i.e. in the long wavelength limit (∂µ Aνρσ ) → 0. Fortunately,
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
170
however, we allow the bare CC in the bulk. So, even with hH 2 i = 0, H 2 can be moved to the denominator, 1 , with hH 2 i 6= 0. H2 The field equation and the Bianchi identity are satisfied with √ HMNP Q √ = 0, RM N P Q [ g(5) HM N P Q ] = 0 . ∂M g H4 With this Lagrangian, there exists a self-tuning solution,9 1/4 1 k β(y) = . a (4k|y| + c0 )1/4
(6)
(7)
But there are nearby dS and AdS solutions also. It is easy to show the existence of the nearby dS and AdS solutions by trying a small c(y) in Y ≡ β(y)4 = A[sech(ky + c0 ) + c(y)]. The equation satisfied by Y is √ 8 3 Λ1 2m2 k 2 1 Y − Y − δ(y)Y − Y 00 = 3Λ Y + 4 3 3h 3
(8)
from which one can fix for Λb = −m2 k 2 ,
3 16A2 2 (9) , k . 8 3h So, the question is, “How does one choose the flat one?” The above solution has a remarkable property as noted in [11] that the vanishing CC solution is not allowed for the parameter range of p (10) |Λ1 | ≥ −6Λb m2 =
since the boundary condition at the brane (β 0 /β)y=0+ = −Λ1 /6 cannot be satisfied. This situation is depicted in Fig. 4. Therefore, we argue that for a finite range of parameter of Λ1 satisfying (10) the inflation period continues. The particle physics action at the brane may change Λ1 such that it does not satisfy (10); in the central region of Fig. 4 all the possibilities are open, the flat, dS and AdS solutions. Then the inflationary period might end, but the important question is what is the probability to choose the flat solution. Here comes the probability calculation. 4. Calculation of Probability Amplitude in the KKL Model Hawking calculated the probability amplitude from Z hΛ|Ii ∝ d[g]e−IE [g]
(11)
and concluded that the volume integral is the dominant factor and concluded that the probability is the largest for Λ = 0+ .5 In Hawking’s case, it is not clear how the primordial inflation is taken into account. In our case, the primordial inflation
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
171
Λ
de Sitter
•
flat (dS, AdS)
•
anti de Sitter √ Fig. 4.
The initial inflation takes place for |Λ1 | >
−6Λb .
is interpreted as done in the previous section. Next, the initial state is given when Λ1 rolls into the flat space region of Fig. 4. This initial state that Λ1 sits in the flat space region is the quantum mechanical filtering process. So, to measure some state out of the state |Ii, we calculate the amplitude (11). If particle physics Lagrangian does not contribute to (11), it is sufficient to consider the CC term and the R term only as Hawking has done. Hawking’s basic argument was the size of the Euclidian volume. The dS space volume is finite, the flat space volume is infinite, and the AdS space volume is even more infinite. If we consider only the 1/ Λ term, the AdS wins in magnitude but the sign is opposite from the dS; thus the flat space is chosen. So, even if the AdS is considered, the flat space wins if we restricts only to 1/ Λ term. However, as in the self-tuning model of KKL, particle physics Lagrangian contributes to the final CC in general, and 2 the particle physics action has the 1/ Λ behavior which dominates the CC term contribution in the small Λ region. This may change Hawking’s view completely. If we consider the sizes of volumes, the AdS wins over the flat even though both are infinite. For the flat volume, we take the Λ = 0 limit of the dS case. For the AdS volume, we need to regularize the infinity to compare different cases of Λ’s. Here, I discuss in order what has been discussed on the H 2 Lagrangian in the Hawking type calculation, present a (probably) correct way to calculate the amplitude, and show that the KKL self-tuning model allows a finite range of parameters for which Λ = 0− has the singularly large probability.12
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
172
4.1. Hawking, Duff and Wu Hawking presented the first calculation, using H 2 term in the Lagrangian.5 Since then, there has been discussions on which value of the H 2 term must be used in the action integral. In this regard, a surface term µνρσ Hµνρσ has been noted, which does not change the equation of motion. It turned out that it amounts to changing the sign of H 2 term in the action integral,13 from which Duff concluded that the probability amplitude for Λ = 0 is least probable with Hµνρσ . Recently, Wu15 obtained the opposite result from that of Duff. In our case also, we can consider a surface term (due to x-independent Ψ8 /H 2 ), following the first order formalism of [14], Z Z Ψ4 2 2Ψ8 µνρσ 4 Hµνρσ − ρ −IE ⊃ dy d xE ρ 2 H 2
which does not affect the equation of motion. If we follow Duff’s method, it has the effect of changing the Rsign of 1/H 2 term inside theRaction integral with the surface √ √ term neglected, from d5 xE g(5) (2 · 4!Ψ4 /H 2 ) to d5 xE g(5) (−2 · 4!Ψ4 /H 2 ). So, both methods satisfy the equations of motion with the action integral, Duff
−IE =
Z
√ d5 xE g(5)
h1 2
↓ R(5) ± ↑
i 2 · 4! − Λ − Λ δ(y) b 1 H2
(12)
Wu Thus, it raises an important question, “Which method is correct?” 4.2. The α-vacuum To our view, the confusion arises from taking a specific vacuum in their calculation.13,15 As in the θ-vacuum of QCD, we have the α-vacuum of antisymmetric tensor field Hµνρσ . Duff took one extremum point corresponding to α = π and Wu took another vacuum corresponding to α = 0, and they obtained different results even though both satisfied equations of motion. As far as the α-vacuum is concerned, the discussion is parallel whether we use 2 H or 1/H 2 in the Lagrangian. So, for the notational brevity, we discuss α-vacuum with H 2 . For two antisymmetric indices from µ, ν, ρ, and σ, there are six (4 C2 = 6) independent second rank antisymmetric gauge functions, for which Aµνρ transforms as Aµνρ → Aµνρ − ∂µ Λνρ − ∂ν Λρµ − ∂ρ Λµν .
(13)
The gauge symmetry of the instanton solution is given by any six directions of Λ µν , three for the instanton (Λij ) and three for the anti-instanton (Λ0i ) and the instanton
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
173
action is
R
d5 x∂y µνρσ Hµνρσ . Namely, there exist maps of (14)
S3 → S 3 .
There exists only one type of solution, i.e. only one Pontryagin index. For d4 xHµνρσ to be finite, Hµνρσ should tend to r−4 for a large r. In the bulk, it arises from a 2D curl in 5D, Z Z Z 0 ~ ~ ~ dydx ∇ × A = d~s · A = dydx0 [∂y A0 − ∂0 A5 ] Z Z Z (15) = dydx0 ∂y A0 = dx0 A0 = d4 xHµνρσ
R
where A0 = A5 =
Z
Z
d3 x ∂0 Aijk = 3
d x ∂5 Aijk =
Z
Z
d3 x Hµνρσ (16) 3
d x H5ijk .
So a gauge invariant instanton of size ρ located at x0 takes the form Aαβγ ∝
αβγµ (x − x0 )µ , (r2 + ρ2 )2
r = |x − x0 |
(17)
so that Aµνρ is proportional to r −3 , and Hµνρσ is proportional to r −4 . The 4D integral of Hµνρσ is represented by a kind of Pontryagin integer n = ±1. Note, on the other hand, that the instanton field R of nonabelian gauge groups is of pure gauge form, so that the instanton action is d4 xTrF F˜ . In nonabelian gauge theories, there are many possible gauge configurations such that the irreducible instanton solution give many possible integers for the Pontryagin index. On the other hand, in our case at hand Hµνρσ instanton gives only ±1 for the Pontryagin index. Now, we construct a gauge invariant α-vacuum, following the θ-vacuum construction of QCD, |αi =
+∞ X
n=−∞
|nieinα
(18)
In the α-vacuum, after integrating out the H 2 field, what Duff chose is α = π and what Wu chose is α = 0. However, in the α vacuum, any value of α is allowed, i.e. not restricted to α = 0 and π. As in the θ-vacuum of QCD, any value of α is allowed in our case, and we go beyond what Duff and Wu considered. As commented above, this α-vacuum can be defined also with the 1/H 2 term. We calculate the action integral for α = 0 and π and for any α the action integral is between them. If one makes α a dynamical field as the QCD axion,16 then α is cosmologically settled to 0.
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
174
−IE
Λ
Fig. 5.
A schematic behavior of the probability amplitude in self-tuning models.
4.3. Parameters for the Λ = 0 dominance Now, we calculate the probability amplitude in the KKL model. For this, we use two Einstein equations of the bulk, 0 2 Λ β β 00 3 (µν) : − 3 2 + 3 +3 = −Λb − 2 · 4! β β β H2 (19) 0 2 6Λ β 1 (55) : − 2 + 6 = −Λb − 2 · 4! β β H2 We integrate out the 4D space x and the 5th space y. In this calculation the brane 2 tension Λ1 is also considered. For the coefficient of 1/Λ to be positive, the following condition on the parameters is required, k (20) F (c0 /k, dm ) 3 where F (c0 /k, dm ) is the result of the integration. Here, dm is the length scale defined from the parameters of [12]. If this condition is satisfied, the action integral −IE has the behavior shown in Fig. 5, and the vanishing CC is approached from the AdS side. In the gauge invariant α-vacuum, for the c0 independent part we obtain (3/8k)(π/2) and (9/2k)(π/2) for α = 0 and π, respectively. Therefore, it seems that for the parameters satisfying Eq. (20) we obtain the Λ = 0 dominance in the probability amplitude. tanh(c0 )sech2 (c0 ) ≤
4.4. The AdS volume Finally, we comment on our method of comparing the infinite volumes of the AdS spaces.
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
175
The n-dimensional Euclidian space metric is given by dr2 2 2 ds2 = a2 + r dΩ = a2 f 2 (η) dη 2 + η 2 dΩ2n−1 n−1 2 1 − kr
(21)
where k = 0, ±1 and in the second equation the Weyl transformation is used since it is simple because of the vanishing Weyl tensor in this space, √
dr = f (η)dη, 1 − kr2
f (η) =
r . η
(22)
Now, let us specify to the AdS space of k = −1. Then, Eq. (22) is integrated to give ln η = − sinh−1 (1/r), or f (η) =
2 r = . η 1 − η2
(23)
0 The Ricci scalar for the metric gµν = a2 f 2 (η)gµν is given by
R =a 0
−2 −2
f
2
R − 2(n − 1)∇ (ln f ) − (n − 1)(n − 2)
f0 f
2 !
.
(24)
0 0 or R0 = nΛ, we have Λ = −(n−1)/a2 in the n-dimensional Noting that Rµν = Λgµν Euclidian AdS. Using Eq. (22), the n-dimensional Euclidian AdS volume with the metric (21) is regularized to n n Z Z 1 2 2 n n n n−1 n VAdS = a d x =a dη η VS n−1 1 − η2 1 − η2 0 Z 1 1 1 dξ ξ n/2−1 (1 − ξ)−n = (2a)n VS n−1 B(n/2, 1 − n) (25) = (2a)n VS n−1 2 0 2 n/2 n/2 1 Γ(1 − n) 2π Γ(n/2)Γ(1 − n) 4(n − 1)π = (2a)n = . 2 Γ(n/2) Γ(1 − n/2) Γ(1 − n/2) |Λ|
Eq. (25) factored out the diverging Gamma functions, and we can compare the Λ dependences. For n = 4, it diverges as 1/|Λ|2 as Λ tends to zero. 5. Conclusion In conclusion, we observed that (1) the CC problem may be understandable in higher dimensions D > 4, (2) three CC problems should be addressed, and (3) the initial state of the Universe should be defined properly. A brane helps in solving the vanishing CC problem, since the loop effects of brane is not important to bulk physics. However, this idea is applicable only when there exists a self-tuning solution such as the one given in [9]. We noted that the action integral for a probability calculation is dominated from 2 the particle physics Lagrangian, and has the amplitude proportional to exp[#/ Λ ]. Near Λ = 0, the AdS space is preferred. But slightly outside Λ = 0, dS space is
November 22, 2010
15:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.02˙Kim
176
preferred. Also, we noted that the current acceleration should be addressed, which has not been discussed here. For this, the quintessential-axion idea may be useful.17 Our specific example presented here for the probability amplitude calculation uses the three index gauge field AM N P in the KKL model [9]. Here, one can consider an α-vacuum. Then, α becomes a parameter in the model and any value of α between 0 and π are permitted. It is like the θ parameter of QCD. We have shown that for any value of α, there exists a finite range of parameters such that Λ = 0− is chosen. If α is made dynamical as the QCD axion, then the probability amplitude choosing Λ = 0− is at α = 0. Depending on the scale of the Hµνρσ instanton size, there may be some cosmological interests. Acknowledgments: I am grateful to the organizers of BSM 2010, H. V. KlapdorKleingrothaus and R. D. Viollier for the hospitality during the conference, and thank K.-S. Choi, J.-H. Huh and B. Kyae for useful discussions. This work is supported in part by the Korea Research Foundation, Grant No. KRF-2005-084-C00001. References 1. M. T. Veltman, Cosmology and the Higgs mechanism, Phys. Rev. Lett. 34 (1975) 777. 2. S. Weinberg, Phys. Rev. Lett. 59 (1988) 2607; V. Agrawal, S. M. Barr, and J. Donoghue, Phys. Rev. D57, 5480 (1998). 3. M. Shaposhnikov and D. Zenh¨ ausern, Phys. Lett. B671 (2009) 162 [arXiv: 0809.3406]; F. K. Diakonos and E. N. Saridakis, J. Cos. Astropart. Phys. 02 (2009) 030 [arXiv: 0708.3143]. 4. E. Baum, Zero CC from minimum action, Phys. Lett. B133 (1983) 185. 5. S. Hawking, The CC is probably zero, Phys. Lett. B134 (1984) 403. 6. N. Arkani-Hamed, S. Dimopoulos, N. Kaloper, and R. Sundrum, Phys. Lett. B480 (2000) 193 [hep-th/0001197]; S. Kachru, M. B. Schulz, and E. Silverstein, Phys. Rev. D62, 045021 (2000) [hep-th/0001206] 7. S. Forste, Z. Lalak, S. Lavignac, and H. P. Nilles, Phys. Lett. B481 (2000) 360 [hepth/0002164]. 8. C. Csaki, J. Erlich, C. Grojean, and T. Hollowood, Nucl. Phys. B584 (2000) 359 [hep-th/0004133]. 9. J. E. Kim, B. Kyae, and H. M. Lee, Phys. Rev. Lett. 86 (2000) 4223 [hep-th/0011118]; Self-tuning solution of the CC problem with antisymmetric tensor field, Nucl. Phys. B613 (2001) 306 [hep-th/0101027]. 10. L. Randall and R. Sundrum, Phys. Rev. Lett. 83 (1999) 4690 [hep-th/9906064 ]. 11. J. E. Kim, J. High Energy Phys. 01 (2003) 042 [hep-th/0210117]. 12. J. E. Kim, CC is probably adjustable in brane worlds, arXiv: 0912.2733. 13. M. J. Duff, The CC is possibly zero, but the proof is probably wrong, Phys. Lett. B226 (1989) 36. 14. A. Aurilia, H. Nicolai and P. K. Townsend, Hidden constants: The θ parameter of QCD and the CC of N = 8 supergravity, Nucl. Phys. B176 (1980) 509. 15. Z. C. Wu, The CC is possibly zero, and the proof is possibly right, Phys. Lett. B659 (2008) 891. 16. J. E. Kim and G. Carosi, Rev. Mod. Phys. 82 (2010) 557 [arXiv: 0807.3125[hep-ph]]. 17. J. E. Kim and H. P. Nilles, Phys. Lett. B553 (2003) 1 [hep-ph/0210402]; J. Cos. Astropart. Phys. 05 (2009) 010 [arXiv: 0902.3610 [hep-th]].
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
177
COSMIC INFLATION MEETS PARTICLE PHYSICS S. ANTUSCH, J. P. BAUMANN, K. DUTTA and P. M. KOSTKA Max-Planck-Institut f¨ ur Physik (Werner-Heisenberg-Institut), F¨ ohringer Ring 6, 80805 M¨ unchen, Germany While there exists a large multitude of inflationary models, the connection of inflation to particle physics is still an unsolved puzzle. In particular, in supergravity theories, where the so-called η-problem tends to spoil slow-roll inflation, the construction of convincing and technically natural models of inflation is challenging. We discuss some recent developments regarding the quest for particle physics models of inflation in supergravity. One such development is provided by a new class of inflationary models, referred to as “tribrid inflation”, which is taylor-made for solving the η-problem by shift symmetry or Heisenberg symmetry. Based on this approach, it has recently been shown that inflation can be consistently realised with a gauge non-singlet inflaton field (residing e.g. in a GUT representation), with, simultaneously, the η-problem solved by a Heisenberg symmetry. Keywords: Inflation, Beyond the Standard Model, Supergravity
1. Introduction Inflation model building offers a multitude of possibilities for realizing inflation.1,2 Among the many classes of models, hybrid inflation3–5 is especially promising to make a connection between the inflationary paradigm and particle physics: The “waterfall” ending hybrid inflation may be associated with particle physics phase transitions such as the spontaneous breaking of the gauge group of a Grand Unified Theory (GUT) or, alternatively, a family symmetry.6 In local supersymmetry, i.e. supergravity (SUGRA), which provides a solution to the hierarchy problem associated with such new physics at high energies, the η-problem is well known to put inflation models under considerable pressure.7 In this talk, we discuss a new variant of hybrid inflation models within SUGRA, referred to as “tribrid inflation”,8 where in addition to the inflaton and waterfall field, the model contains a third “driving” field which contributes the large vacuum energy during inflation by its F-term. We discuss how the η-problem of SUGRA inflation can be solved in tribrid inflation using either a Heisenberg symmetry or a shift symmetry of the K¨ ahler potential. a We then turn to the possibility that the inflaton field is a gauge non-singlet (GNS) field that resides, e.g., in a GUT representation. a Symmetry solutions to the η-problem can also be applied to the class of large field chaotic inflation models, as has been shown for shift symmetry in Ref. 9 and for Heisenberg symmetry in Ref. 10. In “standard” SUSY hybrid inflation models7,11 the use of these symmetries is problematic.12,13
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
178
2. Tribrid Inflation In the following, we will discuss inflationary models of the “tribrid inflation” type, where the superpotential is given by W = κ S H 2 − M 2 + g(Φ, H) . (1)
We denote the chiral superfield containing the slow-rolling inflaton as scalar component by Φ and the one containing the waterfall field by H. The F-term of the field S contributes the vacuum energy that drives inflation in this class of models whereas inflation ends when the waterfall field acquires a vacuum expectation value (VEV) hHi ∼ M . For simplicity, we use only singlet fields. Using gauge multi¯ with H ¯ being another field in the conjugate plets, one would substitute H 2 → H H, representation. Note that in contrast to the “standard” SUSY hybrid inflation models,7,11 where the driving field is identical with the inflaton, in tribrid inflation each of the three main ingredients of the inflationary model is distributed to a separate fieldb . S stays at zero during inflation and only contributes the large vacuum energy by its F-term. A large mass stabilizing S at zero is typically generated by SUGRA effects from generic non-minimal K¨ ahler potentials. Φ is the flat inflaton direction which slow-rolls and stabilizes H via the term g(Φ, H) (e.g. g(Φ, H) = Mλ∗ Φ2 H 2 ) until Φ reaches a critical value and thus triggers the waterfall. This ends inflation due to the fact that H develops a tachyonic mass squared and quickly falls towards the true vacuum hHi ∼ M . With S = H = 0 during inflation, tribrid inflation satisfies W = WΦ = 0 during inflation. 3. Tribrid Inflation and Solutions to the η-Problem With a general expansion of the K¨ ahler potential in terms of fields over some cutoff scale and a suitable adjustment of the expansion parameters, it is always possible to “tune away” the η-problem in both the “standard” hybrid14 and the tribrid inflation scenarios.15 However, if one attempts to solve the η-problem by a fundamental symmetry in the K¨ ahler potential, this turns out to be extremely difficult to achieve in “standard” hybrid-type models.12,13 The reason for this is that, for example, the use of a shift symmetry typically lead to a tachyonic direction in the potential which can only be stabilized at the cost of some extra complications, for instance by using the couplings to additional moduli fields which themselves have stabilization problems and induce dangerous couplings to the inflaton via the SUGRA F-term scalar potential c i h ¯ ¯ − 3|W |2 , VF = eK K ij Di W D¯j W (2) where the derivative Di W ≡ Wi + W Ki has been introduced.
b Hence c We
the name tribrid inflation. use units where we set the reduced Planck scale MP ∼ 2.4 × 1018 GeV to one.
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
179
These problems do not arise if one combines the tribrid inflation scenario in Eq. 1 with a symmetry protecting the K¨ ahler potential.16,17 The main reason is that the vanishing of the inflationary superpotential Winf and its derivative with respect to the inflaton during inflation (see also Ref. 18) avoids the appearance of tachyonic directions in the potential. In addition, various potentially dangerous terms in the scalar potential, concerning for example couplings to moduli fields, are automatically absent in tribrid inflation compared to the “standard” hybrid case.19 We now discuss two realizations of tribrid inflation in supergravity where the ηproblem is solved naturally by either a Heisenberg symmetry or a shift symmetry of the K¨ ahler potential.16,17 3.1. Heisenberg symmetry solution As a specific realization of tribrid inflation, in Ref. 16 we have considered the superpotential in Eq. 1 with (c.f. Ref. 15) g(Φ, H) =
λ 2 2 Φ H , M∗
(3)
in combination with a Heisenberg symmetry invariant K¨ ahler potential of the form K = |H|2 + 1 + κS |S|2 + κρ ρ |S|2 + f (ρ) . (4)
The invariant combination under the non-compact Heisenberg group transformations is given by ρ = T + T ∗ − |Φ|2 . As an explicit example, we may take f (ρ) of no-scale form, i.e. f (ρ) = − 3 ln (ρ) .
(5)
The Heisenberg symmetry of the K¨ ahler potential, or in other words the fact that K depends on ρ only, together with the absence of kinetic mixing in the (ρ, Φ)basis, protects the potential Eq. 2 from obtaining tree-level SUGRA corrections to the mass of the inflaton |Φ|. We have shown that it is possible to stabilize the modulus ρ by the additional coupling κρ with the help of the vacuum energy during inflation (c.f. figure 1). While the Heisenberg symmetry solves the η-problem by keeping the tree-level potential exactly flat in |Φ|-direction, one-loop corrections due to the Heisenberg symmetry breaking operator in Eq. 3, with the waterfall sector fermions, scalars and pseudoscalars running in the loops, lift the flatness of the potential and generate the slope necessary for slow-roll inflationary dynamics. 3.2. Shift symmetry solution As another realization of the tribrid scenario, in Ref. 17 we have considered the superpotential in Eq. 1, again with the same function g(Φ, H) defined in Eq. 3, combined with the following Khler potential: 1 κΦ 2 κS 4 κSH (Φ + Φ∗ ) + 2 |S|2 |H|2 +. . . , (6) K = |H|2 +|S|2 + (Φ + Φ∗ ) + 2 |S|4 + 2 2 Λ 4Λ Λ
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
180
1.0
ρ
ρ, φ [MP ]
0.8
1
0.6
0.5 0.4 0
PSfrag replacements
0
1
2
3
4
5
0.2
φ 0.0
0
20
40
60
80
100
H t (' Ne for H t ≥ 5) Fig. 1: Evolution of the modulus field ρ (blue) and the inflaton field φ (red) as a function of H t. The inlay shows the behavior of the fields for the period H t ≤ 5 during which ρ settles to its minimum. ρ and φ are given in units of the reduced Planck mass.
where the ellipsis symbolize all possible similar terms of the same order and the suppressed higher order terms. For fairly generic values of the couplings in the K¨ ahler potential, it is possible to make all scalars in the theory except for the inflaton heavier than the Hubble scale during inflation. Due to the shift symmetry Φ → Φ√+ i µ in the K¨ ahler potential we obtain a tree-level flat inflaton direction φI = 2 Im(Φ) and hence evade the η-problem. Again, radiative corrections induced by the shift symmetry breaking term in Eq. 3 lift the flatness of the potential. For sufficiently large values of the parameter κSH , the loop-corrected potential can be of hilltop-form leading to a reduced spectral index consistent with best-fit values to the WMAP 7 year data. 4. Tribrid Inflation with a Gauge Non-Singlet (GNS) Inflaton Using a Heisenberg symmetry to solve the η-problem opens up the possibility to realise tribrid inflation in supergravity with a GNS inflaton which may, for instance, reside in a GUT representation.20 To illustrate the basic ideas, let us consider the following superpotential ¯ HH ¯ ¯ − M2 + ζ Φ Φ (7) W = κ S HH Λ where the superfield S is a singlet under some gauge group G, while the superfields ¯ as well as Φ and Φ ¯ reside in conjugate representations (reps) of G. The FH and H term of S provides the vacuum energy to drive inflation and the scalar components ¯ are waterfall fields. The latter take zero values during inflation but of H and H are switched on when the inflaton reaches some critical value, ending inflation and ¯ = M . Typically breaking the gauge group G at their global minimum hHi = hHi
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
181
¯ are the Higgs fields which break that G is identified as a GUT group and H, H 11 ¯ group. The GNS superfields Φ and Φ, which contain the inflaton direction couple to the Higgs superfields via a non-renormalizable coupling controlled by a dimensionless coupling constant ζ and a scale Λ.d Inflation is realized via slowly rolling scalar fields contained in the superfields Φ ¯ with the singlet field S staying fixed at zero during (and after) inflation. In a and Φ SUGRA framework, non-canonical terms for S in the K¨ ahler potential can readily provide a large mass for S such that it quickly settles at S = 0. On the other hand, ¯ using a Heisenberg large SUGRA mass contributions can be avoided for Φ and Φ 16 symmetry as discussed in section 3.1. In contrast, a shift symmetry cannot be used in combination with a GNS inflaton. While the singlet S field is held at a zero value by SUGRA corrections, the scalar ¯ denoted by φ and φ¯ in the following, having no such SUGRA components of Φ, Φ, corrections, are free to take non-zero values during the inflationary epoch. The nonzero φ, φ¯ field values provide positive mass squared contributions to all components ¯ during inflation, thus stabilizing them at zero by of the waterfall fields H and H the F-term potential from the second term in Eq. 7. As in standard SUSY hybrid inflation, the F-term of S yields a large vacuum energy density V0 = κ2 M 4 which drives inflation and breaks SUSY. Since φ, φ¯ are the only fields which are allowed to take non-zero values during inflation, they may be identified as inflaton(s) provided that their potential is sufficiently flat. Since both φ and φ¯ carry gauge charges under G, their VEVs break G already during inflation, thus, although φ and φ¯ are GNS fields under the original gauge group G, they are clearly gauge singlets under the surviving subgroup of G0 ⊂ G respected by inflation. This trivial observation will help to protect the φ and φ¯ masses against large radiative corrections, as we shall see later. Another key feature is that the quartic term in the φ and φ¯ potential arising from D-term gauge interactions is avoided in a D-flat valley in which the conjugate fields φ and φ¯ take equal VEVs. Let us assume that the potential of φ and φ¯ is sufficiently flat to enable them to be slowly rolling inflaton(s), and that the dominant contribution to the slope of the inflaton potential arises from quantum corrections due to SUSY breaking which make φ and φ¯ slowly roll towards zero. Then the waterfall mechanism which ends inflation works in a familiar way, as follows. Once a critical value of φ and φ¯ is reached, the negative mass squared contributions to the scalar components of ¯ dominate, destabilizing them to fall towards their true vacuum. In this H and H phase transition, the breaking of G is basically “taken over” by the Higgs VEVs ¯ ∗ i = hHi ∼ M and at the same time inflation ends due to a violation of the hH slow-roll conditions. The vacuum energy is approximately cancelled by the Higgs VEVs and SUSY is approximately restored at the global minimum. d For
illustrative purposes in this section we only consider the single operator contraction shown even though other distinct operators with different contractions are expected. A fully realistic inflation model of this type has recently been constructed in Ref. 20.
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
182
4.1. Simple Example with G = U (1) Let us now explicitly calculate the full global SUSY potential for the model in Eq. 7, assuming an Abelian gauge group G = U (1). Any SUSY gauge theory gives rise to a scalar potential V = VF + VD = Fi∗ F i +
1 a a D D . 2
(8)
For G = U (1) and equal charge for φ and H we find D = ¯ 2 + |H|2 − |H| ¯ 2 , where the index a has disappeared because a U (1) −g |φ|2 − |φ| has only one generator and g is the gauge coupling constant. Thus we obtain a D-term contribution (setting a possible Fayet-Iliopoulos term to zero) g2 ¯ 2 + |H|2 − |H| ¯ 2 2, |φ|2 − |φ| (9) 2 ¯ ∗ i = 0 obviously has a D-flat direction which in the inflationary trajectory hHi = hH ∗ hφi = hφ¯ i. Under the assumption that the D-term potential Eq. 9 has already VD =
stabilized the fields in the D-flat valley, the remaining potential is generated from the F-term part 2 2 ¯ − M 2 2 + ζ φ¯ (H H) ¯ + ζ φ (H H) ¯ VF = κ H H Λ Λ (10) 2 2 ζ ζ ¯ ¯ ¯ ¯ + κ S H + (φ φ) H + κ S H + (φ φ) H , Λ Λ
which can be calculated with the equations of motion F ∗ i = −∂W/∂φi . Plugging the D-flatness condition hφi = hφ¯∗ i into Eq. 10 and setting S = 0, the F-term potential reduces to 2 2 2 ¯ 2 + |ζ| |φ|4 |H|2 + |ζ| |φ|4 |H| ¯ 2 . (11) ¯ 2 + 2 |ζ| |φ|2 |H|2 |H| VF = κ2 M 2 − H H Λ2 Λ2 Λ2 The upper panel of Fig. 2 depicts the F-term scalar potential within the D-flat valley for all model parameters set to unity. Obviously, in the inflationary valley ¯ = 0 it has a flat inflaton direction |φ| and a tachyonic waterfall direction S=H=H below some critical value |φc |.
4.2. Topological defects One potential problem that arises if the waterfall is associated with the breaking of a non-Abelian unified gauge symmetry G is the possibility of copiously producing topological defects22 like magnetic monopoles in the waterfall transition at the end of inflation. For such topological defects to form it is necessary that at the critical value when the waterfall occurs several different vacuum directions have degenerate masses and none is favored over the other. If the same vacuum is chosen everywhere in space, no topological defects can form. In this respect, it is crucial to note that the VEV of the inflaton field already breaks the gauge symmetry G. Due to this
February 24, 2011
14:9
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03
183
2 1
V (φ, H) |φ|
1
0.5 0 0
1
0
-1
¯ + H), Im(H ¯ − H) Re(H
2 1
V (φ, H) |φ|
1
0.5 0 0
1
0
-1
¯ + H), Im(H ¯ − H) Re(H ¯ ∗ , without Fig. 2: Plot of the F-term hybrid inflation potential in the D-flat valleys φ = φ¯∗ , H = H deformations by higher-dimensional effective operators (upper plot). The lower plot displays the ¯ has been switched on. This term deformed potential where an effective superpotential term δ (H φ) ¯ = 0 that forces the field into the global minimum at positive M . gives rise to a slope at H = H
¯ m φp φ¯q can lead to a deforbreaking, effective operators containing terms like H n H mation of the potential which can force the waterfall to happen in a particular field direction everywhere in space, avoiding the production of potentially problematic topological defects. This is illustrated in the lower plot of figure 2 for the Abelian example (even though no monopoles can be created in this case; domain walls, however, can).
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
184
4.3. Radiative corrections The tree-level flat direction is only lifted radiatively due to inflaton-dependent, ¯ SUSY breaking waterfall masses. Diagonalizing the mass matrices in the (H, H)basis, the eigenvalues calculated from Eqs. 7 and 11 are 1 Dirac fermion with the squared mass m2F = |ζ|2 |φ|4 /Λ2 and 2 complex scalars with squared masses m2S = |ζ|2 |φ|4 /Λ2 ± |κ|2 M 4 . Yet another potential problem may arise when the inflaton is a gauge non-singlet. It is due to two-loop corrections (c.f. figure 3) to the inflaton potential which can induce a mass for the inflaton that is generically larger than the Hubble scale during inflation and would thus spoil slow-roll inflation.21 However, due to the breaking of the gauge symmetry during inflation these corrections to the inflaton potential are not problematic in our model since they get suppressed by powers of the large gauge boson masses induced by the inflaton VEV. PSfrag replacements ¯c H c, H c replacements ¯c ¯c PSfrag replacements PSfrag replacements PSfrag H c, H H φ ,H ¯c H c, H Aµ Aµ Aµ Aµ Aµ Aµ Aµ PSfrag replacements + δν φ φ φ φ ¯c φ H c, H δν − + δνH φ − Aµ δνH + ¯c δν H c, H H Aµ δν +
δν − − δνH
Aµ φ
ψH c δν +
λ
a
λa ψφ
φ
φ
Fig. 3: Two-loop diagrams contributing to the mass of the GNS inflaton. Due to the breaking of the gauge symmetry during inflation by the VEV of the inflaton these mass corrections are not problematic in tribrid GNS inflation.
4.4. Inflationary predictions Since the two-loop corrections turn out to be negligible, it is enough to consider the effective potential up to one-loop level when calculating predictions for the observable quantities. In particular for a single field model as in the case G = U (1), the relevant inflationary predictions are given in terms of the number of e-folds Ne , the amplitude PR as well as the spectral index ns and the running spectral index dns / d ln k of the power spectrum for the scalar metric perturbations and the
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
185
tensor-to-scalar ratio r giving the amplitude of the tensor metric perturbations. All of the predictions of these quantities can be calculated from the potential and its derivatives, especially from the slow-roll parameters given by 2 00 0 000 1 V0 V V V 2 = , η= , ξ = . (12) 2 V V V2 The number of e-folds can be calculated by Ne =
Zφi
φe
1 dφ √ , 2
(13)
where φe denotes the field value at the end of inflation and φi is the field value Ne efolds before the end when observable scales leave the horizon. The other observables in terms of the slow-roll parameters Eq. 12 read dns = 16 η − 24 2 − 2 ξ 2 , d ln k while the amplitude of the scalar power spectrum has the form ns = 1 − 6 + 2 η ,
r = 16 ,
(14)
1 V 3/2 1/2 PR = √ . (15) 2 3 π |V 0 | The class of tribrid inflation models typically features a small tensor-to-scalar ratio r 0.01 and a red-tilted scalar spectral index ns in agreement with present data. 4.5. Realisation within SO(10) GUTs One attractive feature of SO(10) GUTs is that all matter fields of a family, including right-handed neutrinos, are contained in one 16 representation of SO(10). If we furthermore consider a SUSY GUT, these fields are accompanied by their scalar superpartners. It is then tempting to try to realize inflation by one (or more) of the scalar fields belonging to such a 16 superfield. Let us now sketch a toy model in the SO(10) framework. The matter superfields will be denoted as Fi = 16i (i=1,...,4) and F¯ = 16. After inflation, one vectorlike combination will get a GUT scale mass whereas three 16s, containing the SM fermions, remain light. The “waterfall” Higgs fields are unified into the SO(10) ¯ = 16. An example superfield content with associated representations H = 16 and H symmetry assignments is displayed in table 1. Up to dimension seven operators, the allowed terms in the superpotential read hXi ¯ λij ¯H ¯ + γ F¯ F¯ HH + ζi Fi F¯ H H ¯ W =κ S HH − M 2 + Fi Fj H Λ Λ Λ Λ (16) hθi 2 ¯ ¯ hθi Fi h Fj + y˜ 3 h F h F + . . . , + yij Λ Λ where h = 10 contains the SM Higgs superfields. We assume that X has already acquired its large VEV hXi ∼ Λ before inflation has started. Furthermore we assume hθi = 0 during inflation which corresponds to the situation that the Yukawa
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
186
S X H ¯ H Fi F¯ h θ
SO(10) 1 1 16 16 16 16 10 1
R 1 0 0 0 1/2 1/2 0 0
10
0 7 1 2 3 4 4 0
2
+ + + + + + − −
Table 1: Example of SO(10) superfield content and associated symmetries.
couplings are generated after inflation when an additional family symmetry (not discussed here explicitly and only represented schematically by a “flavon field” θ) gets broken. The part of the superpotential of our model relevant for inflation has the form ¯H ¯ + γ F¯ F¯ HH + ζi Fi F¯ H H ¯ + . . . . (17) ¯ − M 2 + λij Fi Fj H Winf = κ S H H Λ Λ Λ
We assume that SO(10) is broken to GPS before inflation and then inflation, as well as the waterfall after inflation are realized as discussed in section 3.3 of Ref. 20. We would like to emphasize at this point that the minimalist field content and the choice of symmetries mainly serves the purpose of giving a proof of existence that GNS inflation can be realized in SO(10) GUTs. In a fully realistic model, which e.g. may also contain a full flavor sector, different symmetries may have to be chosen and the field content may have to be extended. 5. Summary and Conclusions
In summary, we have discussed how the η-problem of SUGRA inflation can be solved in a novel class of F-term inflation models, which we have referred to as tribrid inflation. When tribrid inflation is combined with a Heisenberg or shift symmetry invariant K¨ ahler potential, higher order operators from the SUGRA expansion that give rise to the η-problem are forbidden. Furthermore, due to the properties of W = WΦ = 0 during inflation, tribrid inflation avoids stability problems which appear when “standard” hybrid inflation models are combined with fundamental symmetries in the K¨ ahler potential. Therefore, we conclude that tribrid inflation is tailor-made for solving the η-problem by symmetries in the K¨ ahler potential. In addition, it also allows for attractive connections to particle physics: The righthanded sneutrino, for example, provides an interesting inflaton candidate in tribrid inflation. Furthermore, with the η-problem solved by a Heisenberg symmetry, the inflaton field can be a GNS field that resides in a GUT representation, e.g. in the 16-plet of a SO(10) GUT. GNS inflation with solved η-problem opens up new possibilities to construct particle physics models at GUT energies that not only aim at
November 22, 2010
16:16
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.03˙Antusch
187
explaining the unification of forces and the flavour structure of the known particles, but that can simultaneously realise a consistent history of the early universe. Acknowledgments The authors would like to thank Steve F. King and Mar Bastero-Gil for collaboration in part of the works presented here. The authors acknowledge partial support by the DFG cluster of excellence “Origin and Structure of the Universe”. References 1. A. H. Guth, Phys. Rev. D23 (1981), 347–356; A. D. Linde, Phys. Lett. B108 (1982), 389–393; A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett. 48 (1982), 1220–1223; for a review, see e.g.: D. H. Lyth and A. Riotto, Phys. Rept. 314 (1999) 1. 2. For textbook reviews on inflation see: A. R. Liddle and D. H. Lyth, “Cosmological inflation and large-scale structure,” Cambridge, UK: Univ. Pr. (2000) 400 p; A. D. Linde, “Particle Physics and Inflationary Cosmology,” [arXiv:hep-th/0503203]; V. Mukhanov, “Physical Foundations of Cosmology,” Cambridge, UK: Univ. Pr. (2005) 421 p. 3. A. D. Linde, Phys. Lett. B 259, 38 (1991); 4. A. D. Linde, Phys. Lett. B 249 (1990) 18; A. D. Linde, Phys. Lett. B 259 (1991) 38; A. D. Linde, Phys. Rev. D 49 (1994) 748; E. J. Copeland, A. R. Liddle, D. H. Lyth, E. D. Stewart and D. Wands, Phys. Rev. D 49 (1994) 6410; D. H. Lyth, hepph/9609431; A. D. Linde and A. Riotto, Phys. Rev. D 56, R1841 (1997). 5. A. D. Linde and A. Riotto, Phys. Rev. D 56, 1841 (1997) [arXiv:hep-ph/9703209]. 6. S. Antusch, S. F. King, M. Malinsky, L. Velasco-Sevilla and I. Zavala, Phys. Lett. B 666 (2008) 176 [arXiv:0805.0325 [hep-ph]]. 7. E. J. Copeland, A. R. Liddle, D. H. Lyth, E. D. Stewart and D. Wands, Phys. Rev. D 49, 6410 (1994). 8. S. Antusch, K. Dutta and P. M. Kostka, AIP Conf. Proc. 1200 (2010) 1007 [arXiv:0908.1694 [hep-ph]]. 9. M. Kawasaki, M. Yamaguchi and T. Yanagida, Phys. Rev. Lett. 85, 3572 (2000). 10. S. Antusch, M. Bastero-Gil, K. Dutta, S. F. King and P. M. Kostka, Phys. Lett. B 679 (2009) 428 [arXiv:0905.0905 [hep-th]]. 11. G. R. Dvali, Q. Shafi and R. K. Schaefer, Phys. Rev. Lett. 73, 1886 (1994). 12. P. Brax, C. van de Bruck, A. C. Davis and S. C. Davis, JCAP 0609, 012 (2006). 13. S. C. Davis and M. Postma, JCAP 0804, 022 (2008). 14. M. Bastero-Gil, S. F. King and Q. Shafi, Phys. Lett. B 651, 345 (2007). 15. S. Antusch, M. Bastero-Gil, S. F. King and Q. Shafi, Phys. Rev. D 71 (2005) 083519. 16. S. Antusch, M. Bastero-Gil, K. Dutta, S. F. King and P. M. Kostka, JCAP 0901, 040 (2009). 17. S. Antusch, K. Dutta and P. M. Kostka, Phys. Lett. B 677, 221 (2009). 18. E. D. Stewart, Phys. Rev. D51 6847-6853 (1995) [arXiv:hep-ph/9405389]. 19. S. Mooij and M. Postma, arXiv:1001.0664 [hep-ph]. 20. S. Antusch, M. Bastero-Gil, J. P. Baumann, K. Dutta, S. F. King and P. M. Kostka, arXiv:1003.3233 [hep-ph]. 21. G. R. Dvali, Phys. Lett. B 355 (1995) 78 [arXiv:hep-ph/9503375]. 22. T. W. B. Kibble, J. Phys. A 9, 1387 (1976); A. Vilenkin, Phys. Rept. 121 (1985) 263; A. Vilenkin and E. P. S. Shellard, Cosmic strings and other topological defects (Cambridge University Press, 1994)
November 22, 2010
16:26
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.04˙Ward
188
RESUMMED QUANTUM GRAVITY AND PLANCK SCALE COSMOLOGY B.F.L. WARD∗ Department of Physics, Baylor University, Waco, TX 76798, USA ∗ E-mail: BFL
[email protected] www.baylor.edu We show that, by using amplitude-based resummation techniques for Feynman’s formulation of Einstein’s theory, we get quantum field theoretic ’first principles’ predictions for the UV fixed-point values of the dimensionless gravitational and cosmological constants. We discuss our results in the framework of the phenomenological asymptotic safety analysis of Planck scale cosmology by Bonanno and Reuter. Keywords: Exact Amplitude-Based Resummation; Quantum General Relativity; Planck Scale Cosmology.
1. Introduction Sometime ago, Weinberg1 pointed-out that quantum gravity may be asymptotically safe in that the UV behavior of the theory corresponds to a UV-fixed point with a finite dimensional critical surface so that the S-matrix only depends on a finite number of dimensionless parameters. Recently, Bonanno and Reuter2,3 have shown, using a realization developed by Reuter4 of the idea via Wilsonian field space exact renormalization group methods, that one arrives at a purely Planck scale quantum mechanical formulation the inflationary cosmological scenario of Guth and Linde5,6 – this is very attractive as it opens the possibility of a deeper understanding of that scenario without the need of the hitherto unseen inflaton scalar field. In what follows, using the new resummed theory7–13 of quantum gravity, which is based on Feynman’s original approach14,15 to the subject, we recover the properties as used in Refs. 2,3 for the UV fixed point of quantum gravity with the added results that we get ’first principles’ predictions for the fixed point values of the respective dimensionless gravitational and cosmological constants in their analysis. The discussion proceeds as follows. In the next section we review the formulation of Einstein’s theory by Feynman, as it is not generally familiar. In Section 3, we present the elements of the resummed version of Feynman’s formulation, resummed quantum gravity. Section 4 presents the applications to Planck scale cosmology as it is formulated by Bonanno and Reuter.2,3 Section 5 contains our concluding remarks.
November 22, 2010
16:26
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.04˙Ward
189
2. Feynman’s Formulation of Einstein’s Theory In Feynman’s approach14,15 to quantum gravity, the starting point is that the metric of space-time undergoes quantum field theory fluctuations just like all point-particle fields: we write the metric of space-time as gµν (x) = ηµν + 2κhµν (x) where ηµν = √ diag(1, −1, −1, −1) is the flat Minkowski space background metric and κ = 8πGN so that hµν (x) is the quantum field of the graviton when GN is Newton’s constant. For definiteness and reasons of pedagogy, we specialize the complete theory here, which is L(x) =
√ 1 √ −g (R − 2Λ) + −gLGSM (x) 2 2κ
(1)
where R is the curvature scalar, g is the determinant of the metric of space-time g µν , Λ is the cosmological constant and LGSM (x) is the diffeomorphism invariant form of the SM Lagrangian obtained from the well-known SM Lagrangian in Ref. 16 by standard differential-geometric methods,7 to the case of a single scalar field, the Higgs field ϕ(x), with a rest mass set at m = 120 GeV,17,18 in interaction with the graviton so that the relevant Lagrangian is now that already considered by Feynman14,15 when ignore the small cosmological constant19 (we will re-instate it shortly): √ √ −g −g µν R + g ∂µ ϕ∂ν ϕ − m2o ϕ2 2 2κ 2 0 0 0 1 µν,λ ¯ h hµν,λ − 2η µµ η λλ ¯ hµλ ,λ0 η σσ = 2 ¯ µ0 σ,σ0 + 1 ϕ,µ ϕ,µ − m2o ϕ2 h 2 1 − κhµν ϕ,µ ϕ,ν + m2o ϕ2 ηµν 2 1 ¯ ρλ ϕ,µ ϕ,µ − m2o ϕ2 − κ2 [ hλρ h 2 0 ¯ ρ ν ϕ,µ ϕ,ν ] + · · · − 2ηρρ0 hµρ h
L(x) = −
(2)
where ϕ,µ ≡ ∂µ ϕ. We define y¯µν ≡ 21 (yµν + yνµ − ηµν yρ ρ ) for any tensor yµν . The Feynman rules for this theory were already worked-out by Feynman.14,15 where we ¯ νµ = 0. use his gauge, ∂ µ h Concerning the non-zero value of Λ, Λ/κ2 ∼ (0.0024 eV)4 ,19 we see that it is so small on the EW scale represented by the Higgs mass that its main effect in our loop corrections will be to provide an IR regulator for the graviton infrared (IR) divergences. This subtle point should be understood as follows. Our non-zero value of Λ means that the true background metric is that of de Sitter, not that of Minkowski. We study the theory using the Minkowski background as an approximate representation of the actual de Sitter one, adding in the required corrections when we probe that regime of space-time where the correction is significant: this is in the far IR where the effective graviton IR regulator mass, already noted by Feynman,15 represents the effect of the de Sitter curvature in our loop calculus. Thus, we are not in violation of the no-go theorems in Refs. 20,21.
November 22, 2010
16:26
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.04˙Ward
190
'$
k
'$
k
&%
q
&%
+
k+q (a)
(b)
Fig. 1. The scalar one-loop contribution to the graviton propagator. q is the 4-momentum of the graviton.
The main stumbling block of the Feynman formulation is already evident in Fig. 1, wherein we see that, by naive power counting, the graphs have superficial degree of divergence D = 4, so that, even if we take gauge invariance into account, we still have Def f ≥ 0, and higher loops give higher values of Def f . The theory is thus, from this perspective, non-renormalizable as it is well-known. As we explain in Refs. 7–13, this bad UV behavior can be greatly improved by applying the methods of amplitude-based, exact resummation theory to arrive at what we have called resummed quantum gravity. We review this approach to the UV behavior of quantum gravity in the next section.
3. Resummed Quantum Gravity The basic strategy we use is to make an exact re-arrangement of the Feynman formulated perturbative series for Einstein’s theory with the idea that the interactions in the theory actually tame the attendant bad UV behavior dynamically. Intuitively, Newton’s force is attractive between two positive masses, so that it becomes repulsive for negative mass-squared as we have in the deep Euclidean regime of the UV and this repulsion, in Feynman’s overall space-time path-space approach, would lead to severe damping of UV propagation, thereby taming the otherwise bad UV behavior. This all would be consistent with Weinberg’s asymptotic safety approach as recently developed in Refs. 2–4,22–24. As we have shown in Refs. 7, exact resummation of the IR dominated part of the proper self-energy function for a scalar particle of mass m gives the exact re-arrangement 00
i∆0F (k)|Resummed =
ieBg (k) 2 (k − m2 − Σ0s + i)
(3)
November 22, 2010
16:26
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.04˙Ward
191
where we have7 Bg00 (k)
R
d4 ` 1 16π 4 `2 − λ2 + i 1 2 (` + 2`k + ∆ + i)2 2 4
= −2iκ k
when the use the IR regulator mass λ for the graviton to represent the leading effect of the small recently discovered19 cosmological constant, an effect Feynman already pointed-out in Ref. 15, for example. The residual self-energy function Σ0s starts in O(κ2 ), so we may drop it in calculating one-loop effects. We note the following: 1. In the deep UV, explicit evaluation gives m2 κ2 |k 2 | ln , (4) Bg00 (k) = 8π 2 m2 + |k 2 | so that the resummed propagator falls faster than any power of |k 2 |! Observe: in the Euclidean regime, −|k 2 | = k 2 so there is trivially no analyticity issue here. 2. If m vanishes, using the usual −µ2 normalization point we get Bg00 (k) = 2 κ2 |k2 | µ which again vanishes faster than any power of |k 2 |! This means 8π 2 ln |k2 | that one-loop corrections are UV finite! Indeed, as we show in Ref. 7, all quantum gravity loops are UV finite! 3. In non-Abelian gauge theories, the K¨ all´en-Lehmann representation cannot be used to show that the attendant gauge field renormalization constant Z3 is formally less than 1 so that Weinberg’s argument1 that the attendant spectral density condition, in an obvious notation, ρK-L (µ) ≥ 0 prevents the graviton propagator from falling faster than 1/k 2 does not hold in such theories, as he has intimated himself. 4. One might think that Ward-Takahashi identities would require that the vertex correction resummation compensate any propagator resummation so that the net effect in a loop calculation if both vertices and propagators are resummed is to leave the power counting in the UV for the loop unchanged.25 In fact, if we put the square root of the propagator as a factor for each leg entering or leaving a vertex and resum as well the corresponding large IR effects in the vertex, we still have exponential damping because the large resummed IR effects in the vertex behave sub-dominantly26 in the deep UV and this does not cancel the propagator fall-off. 5. The fact that we find that the dynamics of quantum gravity leads to UV finiteness is consistent with both the asymptotic safety approach of Weinberg, as recently developed by Refs. 2–4,22–24 and with the recent leg renormalizable result of Kreimer,27 wherein he finds at least for the pure gravity part of Einstein’s theory, using the Hopf-algebraic Dyson-Schwinger equation realization of renormalization theory,28 that, while quantum gravity is non-renormalizable order by order in perturbation theory, there is an infinite set of relations among residues of the respective amplitudes so that when all are imposed only a finite number of unknown constants obtain, i.e., he finds in this way more evidence that quantum gravity is non-perturbatively renormalizable.
February 24, 2011
14:17
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03
192
k q
k+q (a) +
k
q - -
+
'$
k
(b)
&%
q
k+q (c)
Fig. 2. The graviton((a),(b)) and its ghost((c)) one-loop contributions to the graviton propagator. q is the 4-momentum of the graviton.
We have called our representation of the quantum theory of general relativity resummed quantum gravity (RQG). A number of applications have been workedout in Refs. 7–13. We turn to its implications29 for Planck scale cosmology in the next section. 4. Planck Scale Cosmology Consider the graviton propagator in the theory of gravity coupled to a massive scalar(Higgs) field.14,15 We have the graphs in Fig. 2 in addition to that in Fig. 1. Using the resummed theory, we get that the Newton potential becomes ΦN (r) = −
GN M (1 − e−ar ), r
(5)
for a∼ = 0.210MP l, so that we have G(k) = GN /(1 +
k2 ), a2
(6)
February 24, 2011
14:17
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03
193
which implies fixed point behavior for k 2 → ∞, in agreement with the asymptotic safety approach of Weinberg as recently developed in Refs. 2–4,22–24. Indeed, in Refs. 7–13, we have shown that we are in agreement with the results in Refs. 2–4, 22–24 on several aspects of the UV limit of quantum gravity, such as the final state of Hawking radiation30,31 for an originally very massive black hole. Let us note for completeness that Ref. 32 gets a similar result in loop quantum gravity.33 Here we show that we also agree with the Planck scale cosmology phenomenology developed in Refs. 2,3. We believe this strengthens the case for asymptotic safety. Specifically, Bonanno and Reuter2,3 present a phenomenological approach to Planck scale cosmology wherein the starting point is the Einstein-Hilbert theory L(x) =
1 √ −g (R − 2Λ) . 2κ2
(7)
Using the phenomenological exact renormalization group for the Wilsonian coarse grained effective average action in field space, the authors in Refs. 2,3,22 show that attendant running Newton constant GN (k) and running cosmological constant Λ(k) approach UV fixed points as k goes to infinity in the deep Euclidean regime – k 2 GN (k) → g∗ , Λ(k) → λ∗ k 2 for k → ∞ in the Euclidean regime. Due to the thinning of the degrees of freedom in Wilsonian field space renormalization theory, the arguments of Ref. 34 are obviated.35 The contact with cosmology then proceeds as follows: invoking a phenomenological connection between the momentum scale k characterizing the coarseness of the Wilsonian graininess of the average effective action and the cosmological time t, the authors in Ref. 2,3 show the standard cosmological equations admit the following extension: a˙ K 1 8π ( )2 + 2 = Λ + GN ρ a a 3 3 a˙ ρ˙ + 3(1 + ω) ρ = 0 a Λ˙ + 8πρG˙N = 0
(8) (9) (10)
GN (t) = GN (k(t))
(11)
Λ(t) = Λ(k(t))
(12)
in a standard notation for the density ρ and scale factor a(t) with the RobertsonWalker metric representation as dr2 2 2 2 2 2 2 2 ds = dt − a(t) + r (dθ + sin θdφ ) (13) 1 − Kr2 where K = 0, 1, −1 corresponds respectively flat, spherical and pseudo-spherical 3-spaces for constant time t for a linear relation between the pressure p and ρ p(t) = ωρ(t).
(14)
November 22, 2010
16:26
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.04˙Ward
194
The functional relationship between the respective momentum scale k and the cosmological time t is determined phenomenologically via ξ (15) t with the positive constant ξ determined phenomenologically . Using the phenomenological, exact renormalization group (asymptotic safety) UV fixed points as discussed above for k 2 GN (k) = g∗ and Λ(k)/k 2 = λ∗ the authors in Refs. 2,3 show that the system in (12) admits, for K = 0, a solution in the Planck regime (0 ≤ t ≤ tclass , with tclass a few times the Planck time tpl ), which joins smoothly onto a solution in the classical regime (t > tclass ) which agrees with standard Friedmann-Robertson-Walker phenomenology but with the horizon, flatness, scale free Harrison-Zeldovich spectrum, and entropy problems solved by Planck scale quantum physics. The fixed-point results g∗ , λ∗ depend on the cut-offs used in the Wilsonian coarse-graining procedure. The key properties of g∗ , λ∗ used for the analysis in Refs. 2,3(hereafter referred to as the B-R analysis) are that they are both positive and that the product g∗ λ∗ is cut-off/threshold function independent. Here, we present the predictions for these UV limits as implied by resummed quantum gravity theory, providing a more rigorous basis for the B-R analysis. Specifically, in addition to our UV fixed-point result for GN (k) → a2 GN /k 2 ≡ g∗ /k 2 , we also get UV fixed point behavior for Λ(k): using Einstein’s equation k(t) =
Gµν + Λgµν = −κ2 Tµν
(16)
and the point-splitting definition ϕ(0)ϕ(0) = lim ϕ()ϕ(0) →0
= lim T (ϕ()ϕ(0)) →0
(17)
= lim {: (ϕ()ϕ(0)) : + < 0|T (ϕ()ϕ(0))|0 >} →0
we get for a scalar the contribution to Λ, in Euclidean representation, R 4 2 2 2 2 d k (2~k 2 + 2m2 )e−λc (k /(2m )) ln(k /m +1) Λs = −8πGN 2(2π)4 k 2 + m2 3 1 ∼ ], ρ = ln = −8πGN [ 2 2 GN 64ρ λc
(18)
2
2m with λc = M 2 . For a Dirac fermion, we get −4 times this contribution. Pl From these results, we get the Planck scale limit
Λ(k) → k 2 λ∗ , X X 1 ( nj )( (−1)Fj nj ) λ∗ = 960ρavg j j
(19)
where Fj is the fermion number of j, nj is the effective number of degrees of freedom of j, and ρavg is the average value of ρ – see Ref. 29.
November 22, 2010
16:26
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.04˙Ward
195
All of the Planck scale cosmology results of Bonanno and Reuter2,3 hold, but with definite results for the limits k 2 G(k) = g∗ and λ∗ for k 2 → ∞: solution of the horizon and flatness problem, scale free spectrum of primordial density fluctuations, initial entropy, etc., all provided by Planck scale quantum physics. For reference, our UV fixed-point calculated here, (g∗ , λ∗ ) ∼ = (0.0442, 0.232), can be compared with the estimates of B-R, (g∗ , λ∗ ) ≈ (0.27, 0.36), with the understanding that B-R analysis did not include SM matter action and that the attendant results have definitely cut-off function sensitivity. The qualitative results that g ∗ and λ∗ are both positive and are significantly less than 1 in size with λ∗ > g∗ are true of our results as well. We argue that this puts the results in Refs. 2,3 on a more firm theoretical basis. 5. Summary In this discussion, we have shown that the application of exact amplitude-based resummation methods, where we stress that for the 1PI 2-point function for example we have resummed the IR part of its loops in Feynman’s formulation of Einstein’s theory for arbitrary values of the respective external line momenta, we achieve the first first principles calculations of the UV limits of the dimensionless gravitational and cosmological constants. We have shown that these results agree with those found by the phenomenological asymptotic safety based exact, Wilsonian field space renormalization group analysis of Refs. 2–4,22–24 and that our results support the properties of these limits as they are used in Refs. 2,3 to formulate Planck scale cosmology as an alternative to the standard inflationary cosmological paradigm of Guth and Linde.5,6 We believe our analysis puts the arguments in Refs. 2,3 for such an alternative on a more firm theoretical basis. Ultimately, we do expect experiment to make a choice between the two. Acknowledgments We thank Profs. L. Alvarez-Gaume and W. Hollik for the support and kind hospitality of the CERN TH Division and the Werner-Heisenberg-Institut, MPI, Munich, respectively, where a part of this work was done. Work partly supported by the US Department of Energy grant DE-FG02-05ER41399 and by NATO Grant PST.CLG.980342.
References 1. S. Weinberg, in General Relativity, eds. S.W. Hawking and W. Israel,(Cambridge Univ. Press, Cambridge, 1979) p.790. 2. A. Bonanno and M. Reuter, Phys. Rev. D65 (2002) 043508. 3. A. Bonanno and M. Reuter, J. Phys. Conf. Ser. 140 (2008) 012008, and references therein.
November 22, 2010
16:26
WSPC - Proceedings Trim Size: 9.75in x 6.5in
03.04˙Ward
196
4. M. Reuter, Phys. Rev. D57 (1998) 971; O. Lauscher and M. Reuter, ibid. 66 (2002) 025026, and references therein. 5. See for example A. H. Guth and D.I. Kaiser, Science 307 (2005) 884, and references therein. 6. See for example A. Linde, Lect. Notes. Phys. 738 (2008) 1, and references therein. 7. B.F.L. Ward, Open Nucl. Part. Phys. J 2 (2009) 1; arXiv:0810.0721, and references therein. 8. B.F.L. Ward, Mod. Phys. Lett. A17 (2002) 237. 9. B.F.L. Ward, Mod. Phys. Lett. A19 (2004) 14. 10. B.F.L. Ward, J. Cos. Astropart. Phys.0402 (2004) 011. 11. B.F.L. Ward, hep-ph/0605054, Acta Phys. Polon. B37 (2006) 1967. 12. B.F.L. Ward, hep-ph/0503189, Acta Phys. Polon. B37 (2006) 347. 13. B.F.L. Ward, Int. J. Mod. Phys. D17 (2008) 627, and references therein. 14. R. P. Feynman, Acta Phys. Pol. 24 (1963) 697. 15. R. P. Feynman, Feynman Lectures on Gravitation, eds. F.B. Moringo and W.G. Wagner (Caltech, Pasadena, 1971). 16. See for example D. Bardin and G. Passarino, The Standard Model in the Making: Precision Study of the Electroweak Interactions, ( Oxford Univ Press, London, 1999). 17. D. Abbaneo et al., hep-ex/0212036. 18. M. Gruenewald, hep-ex/0210003, in Proc. ICHEP02, eds. S. Bentvelsen et al., (NorthHolland,Amsterdam, 2003), Nucl. Phys. B Proc. Suppl. 117(2003) 280. 19. S. Perlmutter et al., Astrophys. J. 517 (1999) 565; and, references therein. 20. H. van Dam and M. Veltman, Nucl. Phys. B22 (1970) 397. 21. V.I. Zakharov, Pisma Zh. Eksp. Teor. Fiz. 12 (1970) 447. 22. A. Bonanno and M. Reuter, Phys. Rev. D62 (2000) 043008, and references therein. 23. D. F. Litim, Phys. Rev. Lett.92(2004) 201301; Phys. Rev. D64 (2001) 105007, and references therein. 24. R. Percacci and D. Perini, Phys. Rev. D68 (2003) 044018. 25. J. Polchinski, arXiv:0810.3707. 26. B. F. L. Ward, in preparation. 27. D. Kreimer, Ann. Phys. 323 (2008) 49. 28. D. Kreimer, Ann Phys 321 (2006) 2757. 29. B. F. L. Ward, Mod. Phys. Lett. A23 (2008) 3299. 30. S. W. Hawking, Nature (London) 248 (1974) 30; Commun. Math. Phys. 43 (1975) 199; Erratum, ibid. 46 (1976) 206. 31. S.W. Hawking, in Proc. GR17, eds. P. Florides, B. Nolan and A. Ottewill,(World Sci. Publ. Co., Hackensack, 2005) pp. 56-62. 32. M. Bojowald et al., Phys. Rev. Lett. 95 (2005) 091302. 33. T. Thiemann, in Proc. 14th International Congress on Mathematical Physics, ed. J.C. Zambrini,(World Scientific Publ. Co., Hackensack, 2005) pp. 569-83; L. Smolin, hep-th/0303185; A. Ashtekar and J. Lewandowski, Class. Quantum Grav. 21 (2004) R53-153, and references therein. 34. R. Foot et al., Phys. Lett. B664 (2008) 199. 35. B.F.L. Ward, arXiv:0908.1764; see also I.L. Shapiro and J. Sola, preprint arXiv:0808.0315; J. Grande, A. Pelinson and J. Sola, Phys. Rev. D79 (2009) 043006; I.L. Shapiro and J. Sola, Phys. Lett. B475(2000) 236; J. High Energy Phys. 0202 (2002) 006; J. Sola, J. Phys. A41 (2008) 164066; J. Sola and H. Stefancic, J. Cos. Astropart. Phys. 0501 (2005) 012; S. Basilakos, M. PLionis and J. Sola, arXiv:0907.4555; F. Bauer and L. Schrempp, J. Cos. Astropart. Phys. 0804 (2008) 006, and references therein.
November 11, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
PART IV
SUSY/SUGRA Phenomenology, Fundamental Symmetries
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
199
SUPERSYMMETRIC SO(N ) FROM A PLANCK-SCALE STATISTICAL PICTURE ROLAND E. ALLEN Department of Physics and Astronomy, Texas A&M University College Station, Texas 77843, U.S.A. ∗ E-mail:
[email protected] http://www.physics.tamu.edu/ Several refinements are made in a theory which starts with a Planck-scale statistical picture and ends with supersymmetry and a coupling of fundamental fermions and bosons to SO(N ) gauge fields. In particular, more satisfactory treatments are given for (1) the transformation from the initial Euclidean form of the path integral for fermionic fields to the usual Lorentzian form, (2) the corresponding transformation for bosonic fields (which is much less straightforward), (3) the transformation from an initial primitive supersymmetry to the final standard form (containing, e.g., scalar sfermions and their auxiliary fields), (4) the initial statistical picture, and (5) the transformation to an action which is invariant under general coordinate transformations. Keywords: supersymmetry, SO(N ) gauge theory
1. Introduction This paper contains several refinements of ideas proposed earlier, in the context of a theory which starts with a statistical picture at the Planck scale and ultimately results in a supersymmetric SO(N ) gauge theory.1–3 The present treatment supersedes previous versions. 2. Transformation to Lorentzian Path Integral: Fermions We begin with the following low-energy action for the (initially massless) fundamental fermions and bosons, which follows from essentially the same arguments as in Refs. 1 and 2, within a Euclidean picture (as in Eq. (6.17) of Ref. 1) but with all the components of the vierbein real (as in Eq. (3.45) of Ref. 2): S = Sf + Sb Z Sf = d4 x ψf† (x) eµα iσ α Dµ ψf (x) Z Sb = d4 x ψb† (x) eµα iσ α Dµ ψb (x)
Dµ = ∂µ − iAiµ ti
(1) (2) (3) (4)
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
200
in an obvious notation (which is defined in Refs. 1 and 2). The transformation of Sb to the standard form for scalar bosons will be treated in the next section, and here we consider Sf only. A key point is that the low-energy operator eµα iσ α Dµ in Sf is automatically in the correct Lorentzian form, even though the initial path integral is in Euclidean form. It is this fact which permits the following transformation to a Lorentzian path integral: Within the present theory, neither the fields nor the operators (nor the meaning of the time coordinate) need to be modified in performing this transformation. In a locally inertial coordinate system, the Hermitian operator within Sf can be diagonalized to give Z Sf = d4 x ψf† (x) iσ µ Dµ ψf (x) (5) X (6) = ψef∗ (s) a (s) ψef (s) s
where
ψf (x) =
X s
with
Z
U (x, s) ψef (s)
ψef (s) =
,
Z
d4 x U † (s, x) ψf (x)
iσ µ Dµ U (x, s) = a (s) U (x, s) X d4 x U † (s, x) U (x, s0 ) = δss0 , U (x, s) U † (s, x0 ) = δ (x − x0 )
(7)
(8) (9)
s
so that Z
d4 x U † (s, x) iσ µ Dµ U (x, s0 ) = a (s) δss0 .
(10)
U (x, s) is a multicomponent eigenfunction (which could also be written Us (x) or hs | xi, with U † (s, x) written as Us† (x) or hx | si). Alternatively, U (x, s) is a unitary matrix which transforms ψef (s) into ψf (x). There is an implicit inner product in U † (s, x) ψf (x) =
X
Ur† (s, x) ψf,r (x)
(11)
† Ur,a (s, x) Ur,a (x, s)
(12)
r
U † (s, x) U (x, s) =
X r,a
with the 2N components of ψf (x) labeled by r = 1, ..., N (spanning all components of all gauge representations) and a = 1, 2 (labeling the components of Weyl spinors), and with s and x, r each formally regarded as having 2N values. Evaluation of the Euclidean path integral (a Gaussian integral with Grassmann
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
201
variables) is then trivial for fermions: as usual, Z Zf = D ψf† (x) D ψf (x) e−Sf Z YZ ∗ = d ψf,ra (x) d ψf,ra (x) e−Sf
(13) (14)
x,ra
=
Y
zf (s)
(15)
s
with zf (s) =
Z
d ψef∗ (s)
= a (s)
Z
e∗
e
d ψef (s) e−ψf (s) a(s) ψf (s)
since the Jacobian J of the transformation in the path integral is unity: X X dψf (x) = U (x, s) d ψef (s) , dψf† (x) = d ψef∗ (s) U † (s, x) s
(16) (17)
(18)
s
which gives
Now let
J = det (U ) det U † = det U U † = 1 . ef = Z
Z
D ψef∗ (s) D ψef (s) eiSf
with the notation in this context now meaning that Y ef = Z zef (s)
(19)
(20)
(21)
s
where
zef (s) = i
Z
d ψef∗ (s)
= a (s)
Z
e∗ e d ψef (s) ei ψf (s) a(s) ψf (s)
(22) (23)
so that ef . Zf = Z
(24)
This is the path integral for an arbitrary time interval (with the fields, operator, and meaning of time left unchanged), so the Lorentzian path integral Zef will give the same results as the Euclidean path integral Zf for any physical process. The same is true of more general path integrals derived from more general operators, as long as they can be put into Gaussian form. When the inverse transformation from ψef to ψf is performed, we obtain Z Zf = D ψf† (x) D ψf (x) eiSf (25) with Sf having its form (5) in the coordinate representation.
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
202
3. Transformation to Lorentzian Path Integral: Bosons For bosons we can again perform the transformation (7) to obtain X Sb = ψeb∗ (s) a (s) ψeb (s) . s
The formal expression for the Euclidean path integral is Z Zb = D ψb† (x) D ψb (x) e−Sb Z ∞ YZ ∞ = d (Re ψb,ra (x)) d (Im ψb,ra (x)) e−Sb x,ra
=
Y
−∞
(26)
(27) (28)
−∞
zb (s)
(29)
s
with
zb (s) =
Z
∞ −∞
d(Re ψeb (s))
Z
∞ −∞
d(Im ψeb (s)) e−Sb .
(30)
We will now show that this action can be put into a form which corresponds to scalar bosonic fields plus their auxiliary fields. First, if the gauge potentials Aiµ were zero, we would have iσ µ ∂µ U0 (x, s) = a0 (s) U0 (x, s) .
(31)
Then U0 (x, s) = V −1/2 u (s) eips ·x , ps · x = ηµν pµs xν , ηµν = diag (−1, 1, 1, 1)
(32)
(with V a four-dimensional normalization volume) gives −ηµν σ µ pνs U0 (x, s) = a0 (s) U0 (x, s)
(33)
where σ µ implicitly multiplies the identity matrix for the multicomponent function U0 (x, s). A given 2-component spinor ur (s) has two eigenstates of pks σ k : → − → − k + − pks σ k u+ , pks σ k u− (34) r (s) = | p s | σ ur (s) r (s) = − | p s | ur (s) 1/2 − − where → p s is the 3-momentum and |→ p s | = pks pks . The multicomponent eigen− µ states of iσ ∂µ and their eigenvalues a0 (s) = p0s ∓ |→ p s | thus come in pairs, corresponding to opposite helicities. For nonzero Aiµ , the eigenvalues a (s) will also come in pairs, with one growing out of a0 (s) and the other out of its partner a0 (s0 ) as the Aiµ are turned on. To see this, first write (8) as i∂0 + Ai0 ti U (x, s) + σ k i∂k + Aik ti U (x, s) = a (s) U (x, s) (35) or
i∂0 + Ai0 ti
rr 0
Ur0 (x, s) + Prr0 Ur0 (x, s) − a (s) δrr0 Ur0 (x, s) = 0 Prr0 ≡ σ k i∂k + Aik ti
rr 0
(36) (37)
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
203
with the usual implied summations over repeated indices. At fixed r,r 0 (and x, s), apply a unitary matrix u which will diagonalize the 2 × 2 matrix Prr0 , bringing it into the form prr0 σ 3 + prr0 σ 0 , where prr0 and prr0 are 1-component operators, while at the same time rotating the 2-component spinor Ur0 : 0 0 3 , Ur0 0 = uUr0 uPrr0 u† = Prr 0 = prr 0 σ + prr 0 σ 10 1 0 0 3 σ = , σ = . 01 0 −1
,
uu† = 1
(38) (39)
But Prr0 is traceless, and the trace is invariant under a unitary transformation, so prr0 = 0. Then the second term in (36) becomes u† prr0 σ 3 Ur0 0 (x, s). The two independent choices 1 0 Ur0 (x, s) ∝ , σ 3 Ur0 0 (x, s) = Ur0 0 (x, s) (40) 0 0 , σ 3 Ur0 0 (x, s) = −Ur0 0 (x, s) (41) Ur0 0 (x, s) ∝ 1 give ±u† prr0 Ur0 0 (x, s). Now use u† Ur0 0 = Ur0 to obtain for (36) i∂0 + Ai0 ti rr0 Ur0 (x, s) ± prr0 Ur0 (x, s) − a (s) δrr0 Ur0 (x, s) = 0 so (35) reduces to two N × N eigenvalue equations with solutions i∂0 + Ai0 ti U (x, s) + σ k i∂k + Aik ti U (x, s) = a (s) U (x, s)
(42)
(43)
a (s) = a1 (s) + a2 (s)
(44)
i∂0 + Ai0 ti U (x, s0 ) + σ k i∂k + Aik ti U (x, s0 ) = a (s0 ) U (x, s0 )
(45)
a (s ) = a1 (s) − a2 (s) 0
(46)
where these equations define a1 (s) and a2 (s). Notice that letting σ k → −σ k in (35) reverses the signs in (42), and changes the eigenvalue of U (x, s) to a (s 0 ) = a1 (s) − a2 (s). The action for a single eigenvalue a (s) and its partner a (s0 ) is seb (s) = ψeb∗ (s) a (s) ψeb (s) + ψeb∗ (s0 ) a (s0 ) ψeb (s0 ) = ψeb∗ (s) (a1 (s) + a2 (s)) ψeb (s) + ψeb∗ (s0 ) (a1 (s) − a2 (s)) ψeb (s0 ) .
(47)
1/2 1/2 ψeb (s0 ) = a (s) φeb (s) = (a1 (s) + a2 (s)) φeb (s) −1/2 e −1/2 e ψeb (s) = a (s) Fb (s) = (a1 (s) + a2 (s)) Fb (s)
(49)
1/2 1/2 ψeb (s) = a (s0 ) φeb (s) = (a1 (s) − a2 (s)) φeb (s) −1/2 e −1/2 e ψeb (s0 ) = a (s0 ) Fb (s) = (a1 (s) − a2 (s)) Fb (s)
(51)
(48)
There are 4 cases: For a1 (s) > 0 and a2 (s) > 0, let
(50)
and for a1 (s) > 0 and a2 (s) < 0
(52)
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
204
so that for both of these first two cases
where
seb (s) = φe∗b (s) e a (s) φeb (s) + Feb∗ (s) Feb (s)
a1 (s) > 0
,
2
(53)
2
e a (s) = a (s) a (s0 ) = a1 (s) − a2 (s) .
(54)
For a1 (s) < 0 and a2 (s) < 0, let
1/2 1/2 ψeb (s0 ) = (−a (s)) φeb (s) = (−a1 (s) − a2 (s)) φeb (s) −1/2 e −1/2 e ψeb (s) = (−a (s)) Fb (s) = (−a1 (s) − a2 (s)) Fb (s)
(55) (56)
and for a1 (s) < 0 and a2 (s) > 0
1/2 1/2 ψeb (s) = (−a (s0 )) φeb (s) = (−a1 (s) + a2 (s)) φeb (s) −1/2 e −1/2 e ψeb (s0 ) = (−a (s0 )) Fb (s) = (−a1 (s) + a2 (s)) Fb (s)
so for each of these last two cases i h seb (s) = − φe∗b (s) e a (s) φeb (s) + Feb∗ (s) Feb (s)
,
a1 (s) < 0 .
(57) (58)
(59)
Then we have
Sb = =
X0
seb (s)
s 0 X
a1 (s)>0
h
(60)
φe∗b (s) e a (s) φeb (s) + Feb∗ (s) Feb (s) 0 h X
−
a1 (s)<0
i
φe∗b (s) e a (s) φeb (s) + Feb∗ (s) Feb (s)
i
where a prime on a summation or product over s means that only one member of an s, s0 pair (as defined in (43)-(46)) is included, so that there are only N terms rather than 2N . All of the transformations above from ψeb to φeb and Feb have the form so that
1/2 ψeb (s1 ) = A (s) φeb (s)
dψeb (s1 ) = A (s)
1/2
and the Jacobian is
J0 =
dφeb (s)
Y
s
,
−1/2 e Fb (s) ψeb (s2 ) = A (s)
(61)
,
dψeb (s2 ) = A (s)
(62)
A (s)1/2 A (s)−1/2 = 1 .
−1/2
dFeb (s)
(63)
These transformations lead to the formal result Zb =
0 Y
a1 (s)>0
zb (s) ·
0 Y
a1 (s)<0
zb (s)
(64)
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
205
zb (s) =
Z
∞ −∞
× e Z zb (s) =
d(Re φeb (s))
−e a(s)
(Re φeb (s))
2
e a(s)
d(Re φeb (s))
h
(Re φeb (s))
2
∞ −∞
d(Im φeb (s))
eb (s)) +(Im φ
∞
−∞
× e
h
Z
Z
2
i
e
−
h
d(Im φeb (s))
eb (s)) +(Im φ
2
i h
∞ −∞
d(Re Feb (s))
(Re Feb (s))
∞ −∞
Z
Z
2
eb (s)) +(Im F
∞ −∞
Re Feb (s)) e(
2
d(Re Feb (s))
eb (s)) +(Im F
2
i
Z
∞
−∞ i 2
d(Im Feb (s))
, a1 (s) > 0 (65)
Z
∞ −∞
d(Im Feb (s))
, a1 (s) < 0 .
(66)
At this point we encounter a difficulty which is not present for fermions, since the integral over Grassmann variables is well-defined for both positive and negative a (s), whereas the corresponding integrals above, over ordinary commuting variables, are divergent for the states with either e a (s) < 0 or a1 (s) < 0. This divergence results from the approximate linearization that led to (3), and will ultimately be controlled by various nonlinear effects, beginning with a self-interaction term 2 involving ψb† (x) ψb (x) which is present in the original theory, but also including gauge interactions and various other complications which certainly lie beyond the simple treatment given here. We will therefore omit these states, which have a different status and require special treatment, in the expansions of φb (x) and Fb (x): Z X e (x, s) φeb (s) , φeb (s) = d4 x U e ∗ (s, x) φb (x) U (67) φb (x) = s>0
Fb (x) =
X s>0
e (x, s) Feb (s) U
,
Feb (s) =
Z
e ∗ (s, x) Fb (x) d4 x U
(68)
where s > 0 means that a1 (s) > 0 and e a (s) > 0. e is an N × N matrix (since there is no longer a spinor index a, and the Here U number of values of s has also been reduced by a factor of 2) which satisfies h i e (x, s) = a1 (s)2 − a2 (s)2 U (x, s) = e η µν Dµ Dν U a (s) U (x, s) (69) Z
e ∗ (s, x) U e (x, s0 ) = δss0 . d4 x U
(70)
e (x, s) which are the appropriate I.e., we assume the existence of basis functions U solutions of (69). In cases where an appropriate set of such basis functions does not exist, these bosons will exhibit further nonstandard behavior. In the case of free fields (i.e. with Aiµ = 0), we have e (x, s) = V −1/2 eips ·x U
(71)
1/2 − − a1 (s) = ω ≡ , a2 (s) = ± |→ p s | , |→ p s | ≡ pks pks 2 2 − − − e a (s) = p0s ± |→ p s | p0s ∓ |→ p s | = p0s − |→ p s| p0s
and
e (x, s) = η µν ∂µ ∂ν U e (x, s) = η µν Dµ Dν U
h
p0s
2
i 2 e − − |→ p s| U (x, s) .
(72) (73)
(74)
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
206
Also, s > 0 then means that ω > 0 and − ω > |→ p s| .
(75)
With a return to the general case, (64) becomes Zb =
0 Y
zb (s)
(76)
s>0
where zb (s) = Now let eb = Z
≡
Z
zeb (s) ≡ −
Z
0 Y
(77)
(78)
zeb (s)
∞ −∞
for s > 0 .
D φe†b (s) D φeb (s) D Feb† (s) D Feb (s) eiSb
s>0
with
π π · e a (s) 1
d(Re φeb (s))
Z
∞ −∞
d(Im φeb (s))
Z
∞
d(Re Feb (s))
Z
(79)
∞
d(Im Feb (s))
−∞ −∞ i h i h eb (s))2 +(Im φ eb (s))2 i (Re F eb (s))2 +(Im F eb (s))2 ie a(s) (Re φ
×e e (80) π π = · for s > 0 (81) e a (s) 1 R∞ R∞ since −∞ dx −∞ dy exp ia x2 + y 2 = iπ/a. (Nuances of Lorentzian path integrals are discussed in, e.g., Peskin and Schroeder.4 ) eb , or after a transformation to the coordinate We have then obtained Zb = Z representation via (67) and (68), Z Zb = D φ†b (x) D φb (x) D Fb† (x) D Fb (x) eiSb (82) Z h i Sb = d4 x φ†b (x) η µν Dµ Dν φb (x) + Fb† (x) Fb (x) . (83) Again, this is the path integral for an arbitrary time interval, so the Lorentzian eb will give the same results as the Euclidean path integral Zb for any path integral Z physical process, and the same is true for more general path integrals derived from more general operators. Recall, however, that the states with s < 0 have been omitted from the expansion of φb (x) and Fb (x), so these bosonic fields should exhibit nonstandard behavior, and this feature may provide the most testable new prediction of the present theory.
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
207
4. Supersymmetry The total action for fermions and bosons is S = Sf + Sb (84) Z h i = d4 x ψf† (x) iσ µ Dµ ψf (x) + φ†b (x) η µν Dµ Dv φb (x) + Fb† (x) Fb (x) (85)
which in a general coordinate system becomes Z † e µ ψ (x) − g µν D e µ φ (x) D e v φ (x) + F † (x) F (x) S = d4 x e ψ † (x) ieµα σ α D where gµν is the metric tensor, e = det eα µ = (− det gµν ) and ψ (x) = e−1/2 ψf (x)
,
φ (x) = e−1/2 φb (x)
,
1/2
(86) e µ = Dµ +e−1/2 ∂µ e1/2 , ,D
F (x) = e−1/2 Fb (x) . (87)
We thus obtain the standard basic form for a supersymmetric action, where the fields φ, F , and ψ respectively consist of 1-component complex scalar bosonic fields, 1-component complex scalar auxiliary fields, and 2-component spin 1/2 fermionic fields ψ. These fields span the various physical representations of the fundamental gauge group, which must be SO(N ) (e.g., SO(10)) in the present theory. I.e., ψ includes all the Standard Model fermions and the Higgsinos, and φ includes the sfermions and Higgses. 5. Higher-Derivative Terms in the Initial Bosonic Action It was mentioned below Eq. (3.21) in Ref. 2 that higher-derivative terms are required in the initial bosonic action in order for the action in the internal space to be finite. It is easy to revise the treatment in Ref. 3 between Eqs. (73) and (87) to obtain the lowest-order such term. First (for better-defined statistical counting) we choose the length scale a in external space to be the same as the original fundamental length scale a0 and rewrite Eq. (73) of Ref. 3 as D Ei X X h 2 2 2 S = S0 + a h∆ρk i aD − b h∆ρ i + (δρ ) aD . (88) k k 0 0 x,k
x,k
We then retain the second-order term in δρk : δρk =
∂∆ρk 1 ∂ 2 ∆ρk 2 δx + 2 (δx) . M M 2 ∂x ∂ x
With ∂∆ρk /∂xM = ∂ (ρk − ρ) /∂xM = ∂ρk /∂xM , it follows that !2 D E X ∂ρ 2 a 2 a 4 2 1 ∂ ρ k 0 0 k 2 (δρk ) = + M 2 2 ∂ xM 2 2 ∂x M !2 2 2 2 X a ∂ φ ∂φ k k + 0 = ρk a20 16 ∂ xM 2 ∂xM M
(89)
(90)
(91)
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
208
with the higher-order term involving the first derivative neglected. In the continuum R∞ D P limit, x aD → d x, this leads to 0 a0 ! 2 2 Z ∞ 2 2 X X µ 1 a0 ∂ φk ∂φk (92) φ2k − + S = S00 + dD x m 2m2 ∂xM 16 ∂ (xM )2 a0 k
M
with the lower limit a0 automatically providing an ultimate ultraviolet cutoff. Eq. (87) of Ref. 3 is then replaced by " # Z ∞ ∂Ψ†b ∂Ψb a20 ∂ 2 Ψ†b ∂ 2 Ψb 1 D + Sb = d x 2m2 ∂xM ∂xM 16 ∂ (xM )2 ∂ (xM )2 a0 −µ Ψ†b Ψb + iVe Ψ†b Ψb . (93)
Ordinarily we can let a0 → 0, but both the nonzero lower limit and the higherderivative term in the action can be relevant in the internal space, where the length scales can be comparable to a0 , which may itself be regarded as comparable to the Planck length. Finally, we emphasize that the randomly fluctuating imaginary potential iVe is a separate postulate of the theory. As mentioned below, the present theory is based on both statistical counting and these stochastic fluctuations, as well as the specific symmetry-breaking or “geography” of our universe. 6. Gravity and Cosmological Constant
According to (86), the coupling of matter to gravity is very nearly the same as in standard general relativity. However, if S is written in terms of the original fields ψf and φb , there is no factor of e. In other words, in the present theory the original action has the form Z S = d4 x L (94) whereas in standard physics it has the form Z S = d4 x e L .
(95)
For an L corresponding to a fixed vacuum energy density, there is then no coupling to gravity in the present theory, and the usual cosmological constant vanishes. This point was already made in Ref. 1, where the “cosmological constant” was defined to be the usual contribution to the stress-energy tensor from a constant vacuum Lagrangian density L0 , which results from the factor of e. However, as was also pointed out in this 1996 paper, “There may be a much weaker term involving δL0 /δg µν , but this appears to be consistent with observation.” This much weaker term we now interpret to be a “diamagnetic response” of vacuum fields to changes in both the vierbein and gauge fields, which results from a shifting of the energies of the vacuum states when fields are applied, just as the energies of the electrons in a metal are shifted by the application of a magnetic
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
209
field. We postulate that this effect produces contributions to the action which are consistent with the general coordinate invariance and gauge symmetry of the present theory. The lowest-order such contributions are, of course, the Maxwell-Yang-Mills and Einstein-Hilbert actions, plus a relatively weak cosmological constant arising from this same mechanism: 1 i i (4) Lg = − g0−2 eFµν Fρσ g µρ g νσ , LG = e Λ + `−2 R. (96) P e 4 Here g0 is the coupling constant for the fundamental gauge group (e.g.SO(10)), Λ is a constant, and `2P = 16πG. These terms are analogous to the usual contributions to the free energy from Landau diamagnetism in a metal. The actions for gauginos and gravitinos are postulated to have a similar origin, as the vacuum responds to these fields. Particle masses and Yukawa couplings are postulated to arise from supersymmetry breaking and radiative corrections. As pointed out in Ref. 1, the above gauge and gravitational curvatures require that the order parameter contain a superposition of configurations with topological defects (without which there could be no curvature). Here we do not attempt to discuss these defects in detail, but we now interpret them as 1-dimensional defect lines in 4-dimensional spacetime, analogous to vortex lines in a superfluid. There is clearly a lot of work remaining to be done – including actual predictions for experiment – but the theory is relatively close to real-world physics, and the following arise as emergent properties from a fundamental statistical picture: Lorentz invariance, the general form of Standard-Model physics, an SO(N ) fundamental gauge theory (with e.g. SO(10) permitting coupling constant unification and neutrino masses), supersymmetry, a gravitational metric with the form (−, +, +, +) , the correct coupling of matter fields to gravity, vanishing of the usual cosmological constant, and a mechanism for the origin of spacetime and fields. The new predictions of the present theory appear to be subtle, but include Lorentz violation at very high energies and nonstandard behavior of scalar bosons. 7. Conclusion For a theory to be viable, it must be mathematically consistent, its premises must lead to testable predictions, and these predictions must be consistent with experiment and observation. The theory presented here appears to satisfy these requirements, although it is still very far from complete. Experiment should soon confront theory with more stringent constraints. For example, supersymmetry,5 fundamental scalar bosons,6 and SO(N ) grand unification7 seem to be unavoidable consequences of the theory presented here, but there is as yet no direct evidence for any of these extensions of established physics. The present theory starts with a picture which is far from that envisioned in more orthodox approaches: There are initially no laws, and instead all possibilities are realized with equal probability. The observed laws of Nature are emergent phenomena, which result from statistical counting and stochastic fluctuations, together
November 22, 2010
16:33
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.01˙Allen
210
with the specific symmetry-breaking (or “geographical features”) of our universe. It is reassuring that such an unconventional picture ultimately leads back to both established physics and standard extensions like the three mentioned above. Perhaps this fact helps to demonstrate the robustness and naturalness of these extensions, and the importance of experimental searches for supersymmetric partners, dark matter, Higgs bosons, the various consequences of grand unification, and related phenomena in cosmology and astrophysics. The present theory shares several central concepts with string theory – namely supersymmetry, higher dimensions, and topological defects – perhaps indicating that these elements may be inescapable in a truly fundamental theory. Acknowledgments I have benefitted greatly from many discussions with Seiichirou Yokoo and Zorawar Wadiasingh. In particular, Seiichirou Yokoo obtained a determinantal transformation for free fields which was a precursor of the explicit transformation of fields in Eqs. (49)-(52) and (55)-(58). References 1. R. E. Allen, Intern. J. Mod. Phys. A 12, 2385 (1997); hep-th/9612041. 2. R. E. Allen, in Beyond the Desert 2002, edited by H. V. Klapdor-Kleingrothaus (Institute of Physics, Bristol, 2003); hep-th/0008032. 3. R. E. Allen, in Beyond the Desert 2003, edited by H. V. Klapdor-Kleingrothaus (Springer, Heidelberg, 2004); hep-th/0310039. 4. M. E. Peskin and D. V. Schroeder, Introduction to Quantum Field Theory (Perseus, Reading, Massachusetts, 1995), p. 286. 5. Perspectives on Supersymmetry II, edited by G. L. Kane (World Scientific, Singapore, 2010). 6. J. F. Gunion, H. E. Haber, G. Kane, and S. Dawson, The Higgs Hunters Guide (Addison-Wesley, Redwood City, California, 1990). 7. V. D. Barger and R. J. N. Phillips, Collider Physics (Addison-Wesley, Reading, Massachusetts, 1997).
November 22, 2010
16:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.02˙Rueckl
211
SUSY LEPTON FLAVOR VIOLATION: RADIATIVE DECAYS AND COLLIDER SEARCHES ∗ ¨ R. RUCKL
Institute for Theoretical Physics and Astrophysics, University of W¨ urzburg, 97246 W¨ urzburg, Germany ∗ E-mail:
[email protected] Lepton flavor violation (LFV) is studied in the framework of supersymmetric type I seesaw models. Imposing the constraints from neutrino data and present bounds on charged lepton flavor violation I discuss in particular the prospects of searching for LFV at the LHC and ILC. Keywords: Neutrino masses and mixing, seesaw mechanism, supersymmetry, lepton flavor violation, rare radiative decays, collider searches
1. Introduction The observed neutrino oscillations imply the existence of neutrino masses and lepton flavor mixing, and give hints towards physics beyond the Standard Model. In particular, the smallness of the neutrino masses suggests the existence of a seesaw mechanism involving heavy right-handed Majorana neutrinos close to the GUTscale. The latter violate lepton number and in general also CP symmetry, and thus may lead to leptogenesis. In supersymmetric models, the lepton flavor violation (LFV) present in the neutrino sector is transmitted to the slepton sector and gives rise to rare radiative decays as well as to LFV processes of charged leptons measurable at colliders. 2. Supersymmetric Seesaw Mechanism A minimal model is obtained if three right-handed neutrino singlet fields νR are added to the MSSM particle content. In this model, one can have the following Majorana mass and Yukawa interaction terms: 1 cT c cT − νR M νR + νR Yν L · H 2 , 2
(1)
where M is the Majorana mass matrix, Yν is the matrix of Yukawa couplings, and L and H2 denote the left-handed lepton and hypercharge +1/2 Higgs doublets, respectively. Electroweak symmetry breaking then generates the neutrino Dirac mass matrix mD = Yν hH20 i, where hH20 i = v sin β is the appropriate Higgs v.e.v. with
November 22, 2010
16:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.02˙Rueckl
212 hH 0 i
v = 174 GeV and tan β = hH20 i . If the mass scale MR of M is much greater than 1 the electroweak scale of mD , one naturally obtains three light neutrinos with the mass matrix Mν = mTD M −1 mD = YνT M −1 Yν (v sin β)2
(2)
and three heavy neutrinos with the mass matrix MN = M . In the basis assumed, M is diagonal, while Mν is to be diagonalized by the unitary MNS matrix U : U T Mν U U
= =
diag(m1 , m2 , m3 ), VCKM (θ12 , θ13 , θ23 , δ) · diag(e
(3) iφ1
,e
iφ2
, 1),
θij being mixing angles, δ and φi being Dirac and Majorana phases, respectively, and mi being the light neutrino mass eigenvalues. The heavy neutrino mass eigenstates Ni , i = 1, 2, 3 are too heavy to be observed directly, but they influence the evolution of the MSSM slepton mass matrix: ! 2 † mL m2† δm2L δm2LR 2 LR m˜l = + . (4) m2LR m2R MSSM δm2LR δm2R N
The first matrix on the r.h.s. is the usual MSSM contribution, while the second matrix contains the renormalization effects of the heavy neutrino fields. Adopting the minimal supergravity (mSUGRA) scheme one finds, in leading logarithmic approximation 1, 3 1 (3m20 + A20 )Yν† LYν , δm2R = 0, δm2LR = − A0 v cos βYl Yν† LYν , 2 8π 16π 2 (5) where Lij = ln(MGUT /Mi )δij , Mi being the heavy neutrino masses, and m0 and A0 are the universal scalar mass and trilinear coupling, respectively, at the grand unification scale MGUT . It is these flavor off-diagonal virtual terms which lead to charged LFV. By inverting Eq. (2), the neutrino Yukawa matrix can be written as follows 2 : p p p √ √ √ 1 Yν = diag( M1 , M2 , M3 )·R·diag( m1 , m2 , m3 )·U † . (6) v sin β δm2L = −
Here, R is an arbitrary complex orthogonal matrix which may be parametrized in terms of 3 complex angles θi = xi + Iyi : c2 c3 −c1 s3 − s1 s2 c3 s1 s3 − c1 s2 c3 R = c2 s3 c1 c3 − s1 s2 s3 −s1 c3 − c1 s2 s3 , (7) s2 s1 c2 c1 c2
with (ci , si ) = (cos θi , sin θi ) = (cos xi cosh yi − I sin xi sinh yi , sin xi cosh yi + I cos xi sinh yi ). While the light neutrino masses mi and the mixing angles θij have already been measured or at least constrained, the phases φi and δ, the heavy neutrino masses Mi and the matrix R are presently unknown. Using the available neutrino data as input in the appropriate renormalization group equations, Yν is
November 22, 2010
16:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.02˙Rueckl
213
evolved from the electroweak scale to the GUT scale and then put into the renormalization of the slepton mass matrix from MGU T down to the electroweak scale. In our study of the type I SUSY seesaw model outlined above, we have used the central values of the global fit in Ref. 3, and the errors expected from future experiments as discussed in Ref. 4,5. The Dirac and Majorana phases are left unconstrained. Furthermore, we consider hierarchical and degenerate neutrino mass spectra taking m1 ≤ 0.03 eV and m1 = 0.3 eV, as claimed by 0νββ decay6,7 , respectively. 3. Rare Radiative Decays The renormalization effects, Eq. (5), induce LFV radiative decays li → lj γ of charged leptons via contributions of virtual sleptons in loops. To lowest order in LFV couplings one has1,2 |(δm2L )ij |2 tan2 β, (8) m ˜8 m ˜ characterizing the typical sparticle masses in the loop. For simplicity, we first consider the case of mass degenerate heavy Majorana neutrinos with Mi = MR . If R is real, i.e. yi = 0 in Eq. (7), then it will drop out from the product Yν† Yν in this case as do the Majorana phases φ1 and φ2 , leaving MR and δ as the only unconstrained parameters. Fig. 1 shows the typical rise of Br(li → lj γ) with MR2 suggested by Eq. (8) for fixed light neutrino masses. Also indicated is the impact of the uncertainties in the neutrino data. From the present bound8 Br(µ → eγ) < 1.2 · 10−11 one obtains an upper limit on MR of order 1014 GeV. Γ(li → lj γ) ∝ α3 m5li
10
10
-6
-8
-10
Br(li →lj γ)
10
-12
10
-14
10
-16
10
-18
10
10
11
10
12
13
10 MR / GeV
10
14
15
10
Fig. 1. Br(τ → µγ) (upper red points) and Br(µ → eγ) (lower black points) at the mSUGRA point SPS1a versus MR for real R and degenerate light neutrino masses. The solid (dashed) horizontal lines mark the present 8,9 (expected future) bounds. (From Ref. 10)
The above results are rather conservative, since for real R the branching ratios can be substantially enhanced if the light neutrino masses are hierarchical instead of
November 22, 2010
16:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.02˙Rueckl
214 10- 6 10- 8
10- 10
BrHΤ®ΜΓL
BrHΜ®eΓL
10- 8
- 12
10
10- 14
10- 12 10- 14
10- 16 10- 5
10- 10
10- 4
10- 3
10- 2 y
0.1
1
10- 5
10- 4
10- 3
10- 2
0.1
1
y
Fig. 2. Br(li → lj γ) versus yi = y for fixed MR = 1012 GeV, hierarchical (dark red) and degenerate (light green) light neutrino masses, and the mSUGRA point SPS1a. The parameters are scattered as in Fig. 1 and 0 < xi < 2π. (From Ref. 11)
degenerate, or if R is complex, as Fig. 2 shows. In the latter case, the LFV branching ratios for degenerate neutrinos can be larger than that for hierarchical neutrinos.
BrHΤ® ΜΓL
10- 7 10- 9 10- 11 10- 13 10- 15 10- 17 10- 15 10- 13 10- 11 10- 9 10- 7 BrH Μ®eΓL Fig. 3. Br(τ → µγ) versus Br(µ → eγ) in the mSUGRA scenario SPS1a with neutrino parameters scattered within their experimentally allowed ranges3. For degenerate heavy neutrino masses, both hierarchical (red triangles) and degenerate (green diamonds) light neutrino masses are considered with real R and 1011 GeV < MR < 1014.5 GeV. In the case of hierarchical heavy and light neutrino masses (blue stars), xi is scattered over 0 < xi < 2π, while yi and Mi are scattered in the ranges allowed by leptogenesis and perturbativity 12. Also indicated are the present experimental bounds Br(µ → eγ) < 1.2 × 10−11 and Br(τ → µγ) < 6.8 × 10−8 . (From Ref. 11)
Obviously, in the model assumed one expects correlations among |(δm2L )ij |2 , i 6= j ∈ e, µ, τ , and hence among observables in different flavor channels. This is shown in Fig. 3 for Br(µ → eγ) and Br(τ → µγ). The correlations depend relatively little on the nature of the neutrino spectra. It is interesting to note that in the model considered the present bound on Br(µ → eγ) implies a bound on Br(τ → µγ) which is considerably stronger than the existing direct experimental bound.
November 22, 2010
16:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.02˙Rueckl
215
4. Collider Searches At high energies, feasible tests of LFV are provided by the processes e+ e− → ˜la− ˜ lb+ → − + 0 li lj + 2χ ˜1 . Analogously to Eq. (8), one can derive the approximate expression5 σ(e+ e− → li− lj+ + 2χ ˜01 ) ≈
|(δmL )2ij |2 σ(e+ e− → li− li+ + 2χ ˜01 ), m˜2l Γ˜2l
(9)
where the cross section on the r.h.s. refers to the corresponding flavor conserving processes. By comparing Eqs. (8) and (9), it is immediately clear that the LC processes are flavor-correlated with the rare decays considered before. These correlations are shown in Fig. 4 for the two most important channels. As expected 1
future sensitivity
10
0
-1
10
-2
10
present bound
+ -
+ -
0
σ(e e →µ e + 2χ ) / fb
10
-3
10
-4
10 -16 10
-15
10
-14
10
-13
-12
10 Br(µ→eγ)
10
-11
10
-10
10
Fig. 4. Correlation of LFV LC processes and rare decays in the µe-channel (left) and the τ µchannel (right). The seesaw parameters are scattered as in Fig. 3. The mSUGRA scenarios are (from left to right): SPS1a, G’ (µe) and C’, B’, SPS1a, G’, I’ (τ µ). (From Ref. 10,11)
the uncertainties in the neutrino parameters nicely drop out except at large cross sections and branching ratios. This observation implies that once the SUSY parameters are known, a measurement of, e.g., Br(µ → eγ) will lead to a prediction for σ(e+ e− → µe + 2χ ˜01 ) and vice versa. At the LHC, a feasible test of LFV is provided by squark and gluino production, followed by cascade decays via neutralinos and sleptons13–18 : pp → q˜α q˜β , g˜q˜α , g˜g˜; q˜α (˜ g) → χ ˜02 qα (g); χ ˜02 → ˜la li ; ˜ la → χ ˜01 lj ,
(10)
where α, β, a run over all sparticle mass eigenstates including antiparticles. LFV can occur in the decay of the second lightest neutralino and/or the slepton, resulting in different lepton flavors, i 6= j. The total cross section for the signature li+ lj− + X can then be written as X σ(pp → li+ lj− + X) = σ(pp → q˜α q˜β ) × Br(˜ qα → χ ˜02 qα ) α,β
+
X
σ(pp → q˜α g˜) × (Br(˜ qα → χ ˜02 qα ) + Br(˜ g→χ ˜02 g))
a
+σ(pp → g˜g˜) × Br(˜ g→χ ˜02 g) Br(χ ˜02 → li+ lj− χ ˜01 ).
(11)
November 22, 2010
16:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.02˙Rueckl
216
104
0 ®ΤΜ Χ 0 LLHCyr NH Χ 2 1
0 ® Μe Χ 0 LLHCyr NH Χ 2 1
In general, the system X consists of jets, leptons and LSPs produced by lepton flavor conserving decays of squarks and gluinos, as well as low energy proton remnants.
102 1 10- 2 10- 4 10- 6
BrHΜ®eΓL
104 102 1 10- 2 10- 4 10- 6
10- 20 10- 18 10- 16 10- 14 10- 12 10- 10
BrHΜ®eΓL
10- 20 10- 18 10- 16 10- 14 10- 12 10- 10
Fig. 5. Correlation of the number of χ ˜ 02 → µ+ e− χ ˜01 (left) and χ ˜02 → τ + µ− χ ˜01 (right) events per year at the LHC with Br(µ → eγ) in mSUGRA scenario C’ (m0 = 85 GeV, m1/2 = 400 GeV, A0 = 0 GeV, tan β = 10 GeV, signµ = +) for the case of hierarchical (blue stars) and degenerate (green triangles) νR/L and degenerate νR /hierarchical νL (red boxes). The neutrino parameters are scattered as in Fig. 3. An integrated LHC luminosity of 100fb−1 per year is assumed. The current limit on Br(µ → eγ) is displayed by the vertical line. (From Ref. 11)
!!! ΣHe+e- ® Μ+e-+2 Χ01 L fb, s =1500 GeV
0 ®Μ+ e- Χ 0 L 100fb-1 , deg. Ν NH Χ R 2 1 10-13
10-13
1400 0.1
50
1 2
1000
10-12
B
m12 GeV
m12 GeV
10-12
5×10-13
1200
3
800
BrHΜ®eΓL D
4
600
5×10-12
C
400
10
1000
A
800 100 200
600
BrHΜ®eΓL 10-11
400
5×10-11 10-11
200
E
200
400
600 800 1000 m0 GeV
1200 1400
5 10-11
200 100
200
300
400 500 m0 GeV
600
700
Fig. 6. Contours in the (m0 , m1/2 )–plane of the polarized cross section σ(e+ e− → µ+ e− + √ 2χ ˜0 ) for see = 1.5 TeV, Pe− = +0.9, Pe+ = +0.7 (left solid), and N (χ ˜02 → µ+ e− χ ˜01 ) for √1 spp = 14 TeV (right solid) in comparison with Br(µ → eγ) (dashed). The remaining mSUGRA parameters are A0 = 0 GeV, tan β = 5, sign(µ) = +. The neutrino oscillation parameters are fixed at their central values as given in Ref. 3, the lightest neutrino mass m1 and all complex phases are set to zero, and the degenerate right-handed neutrino mass scale is M R = 1014 GeV. The shaded (red) areas are already excluded by mass bounds from various experimental sparticle searches. (From Ref. 11)
November 22, 2010
16:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.02˙Rueckl
217
Just as for the linear collider discussed above, one can correlate LFV processes at the LHC with LFV rare decays. This is shown in Fig. 5 for the event rates N (χ ˜02 → + − 0 0 + − 0 µ e χ ˜1 ) and N (χ ˜2 → τ µ χ ˜1 ), respectively, originating from the cascade reactions Eq. (10). We find that the present bound on Br(µ → eγ) still allows observable rates of around 102 to 103 events per year for an integrated luminosity of 100fb−1 in the mSUGRA scenario C’. As in the linear collider case, these correlations are relatively weakly affected by the nature of the neutrino mass spectra and the uncertainties in the neutrino data, but highly dependent on the mSUGRA parameters. 5. Summary For Majorana mass scales in the range MR = 1010 to 1014 GeV, the supersymmetric seesaw mechanism suggests the existence of sizeable LFV in the charged lepton sector. Although the details are very model-dependent, strong correlations in the µe, τ e, and τ µ channels are generic and allow for very powerful cross checks. The results of a systematic study of the model dependence are summarized in Fig. 6 by contour plots for σ(e+ e− → µ+ e− + 2χ ˜01 ), N (χ ˜02 → µ+ e− χ ˜01 ), and Br(µ → eγ) in the (m0 , m1/2 )–plane with the remaining mSUGRA parameters fixed. As can be seen, searches for rare LFV decays and LFV processes at colliders are to a large extent complementary. For sufficiently small scalar masses m0 of a few hundred GeV, LHC and LC searches can be competitive to rare decay experiments, while at larger values of m0 the latter have a much farther reach. References 1. J. Hisano and D. Nomura, Phys. Rev. D 59, 116005 (1999) [arXiv:hep-ph/9810479]. 2. J. A. Casas and A. Ibarra, Nucl. Phys. B 618, 171 (2001) [arXiv:hep-ph/0103065]. 3. M. Maltoni,T. Schwetz, M.A. Tortola and J.W.F. Valle, Phys. Rev. D 68, 113010 (2003) [arXiv:hep-ph/0309130]. 4. F. Deppisch, H. P¨ as, A. Redelbach, R. R¨ uckl and Y. Shimizu, Eur. Phys. J. C 28, 365 (2003) [arXiv:hep-ph/0206122]. 5. F. Deppisch, H. P¨ as, A. Redelbach, R. R¨ uckl and Y. Shimizu, Phys. Rev. D 69, 054014 (2004) [arXiv:hep-ph/0310053]. 6. H.V. Klapdor-Kleingrothaus, I.V. Krivosheina et al., Phys. Lett. B586 (2004) 198-212. 7. H.V. Klapdor-Kleingrothaus, I.V. Krivosheina, Mod. Phys. Lett. A21 (2006) 15471566. 8. S. Eidelman et al. [PDG Collab.], Phys. Lett. B 592, 1 (2004) [arXiv:hep-ph/0310053]. 9. B. Aubert et al. [BABAR Collab.], Phys. Rev. Lett. 95, 041802 (2005) [arXiv:hepex/0502032]. 10. S. Albino, F. Deppisch and R. R¨ uckl, 41st Recontres de Moriond, La Thuile, Italy, 2006 [arXiv:hep-ph/0606226]. 11. S. Albino, F. Deppisch, D. Ghosh and R. R¨ uckl, in Eur. Phys. J. C 57, 13 (2008) [arXiv:0801.1826]. 12. F. Deppisch, H. P¨ as, A. Redelbach and R. R¨ uckl, Phys. Rev. D 73, 033004 (2006) [arXiv:hep-ph/0511062]. 13. K. Agashe and M. Graesser, Phys. Rev. D 61, 075008 (2000) [arXiv:hep-ph/9904422]. 14. I. Hinchliffe and F. E. Paige, Phys. Rev. D 63, 115006 (2001) [arXiv:hep-ph/0010086].
November 22, 2010
16:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.02˙Rueckl
218
15. J. Hisano, R. Kitano and M. M. Nojiri, Phys. Rev. D 65, 116002 (2002) [arXiv:hepph/0202129]. 16. D. F. Carvalho, J. R. Ellis, M. E. Gomez, S. Lola and J. C. Romao, Phys. Lett. B 618, 162 (2005) [arXiv:hep-ph/0206148]. 17. N. G. Unel, arXiv:hep-ex/0505030. 18. Yu. M. Andreev, S. I. Bityukov, N. V. Krasnikov and A. N. Toropin, arXiv:hepph/0608176.
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
219
NEW PHYSICS WITHOUT NEW ENERGY SCALE∗ MIKHAIL SHAPOSHNIKOV Institut de Th´ eorie des Ph´ enom` enes Physiques, EPFL, CH-1015 Lausanne, Switzerland † E-mail:
[email protected] We argue that there may be no intermediate particle physics energy scale between the Planck mass MP l ∼ 1019 GeV and the electroweak scale MW ∼ 100 GeV. At the same time, the number of problems of the Standard Model (neutrino masses and oscillations, dark matter, baryon asymmetry of the Universe, strong CP-problem, gauge coupling unification, inflation) could find their solution at MP l or MW .
1. Introduction In this talk I describe a possible scenario for physics beyond the Standard Model (SM) that does not require introduction of any new energy scale besides already known, namely the electroweak and the Planck scales, but can handle different problems of the SM mentioned in the abstract (for a more detailed review see2 ). This point of view, supplemented by a requirement of simplicity, has a number of experimental predictions which can be tested, at least partially, with the use of existing accelerators and the LHC and with current and future X-ray telescopes. The paper is organised as follows. In Section 2 we will review different arguments telling that the SM model cannot be a viable effective field theory all the way up to the Planck scale. In Section 3 we will discuss different arguments in favour of existence of the intermediate energy scale and their weaknesses. In Section 4 we discuss a proposal for the physics beyond the SM based on an extension of the SM which we called the νMSM. Section 5 is devoted to the discussion of crucial tests and experiments that can confirm or rule out this scenario. 2. Necessity of Extension of the Standard Model There are no doubts that the Standard Model defined as a renormalisable field theory based on SU(3)×SU(2)×U(1) gauge group and containing three fermionic families with left-handed particles being the SU(2) doublets, right-handed ones being the SU(2) singlets (no right-handed neutrinos) and one Higgs doublet is not ∗ An
updated and shortened version of the talks given also previously at the Workshop on Astroparticle Physics: Current Issues (APCI07), Budapest, Hungary, June 21-25, 2007 and at the 11th Paris Cosmology Colloquium 2007, Paris, France, August 16-18, 2007, see. 1
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
220
a final theory. On field-theoretical grounds, it is not consistent as it contains the U(1) gauge interaction and a self-coupling for the Higgs field, both suffering from the triviality, or Landau-pole problem. Though the position of this pole may correspond to experimentally inaccessible energy scale, this calls for an ultraviolet (UV) completion of the theory. The existence of gravity with the coupling related to the Planck scale MP l = −1/2 GN = 1.2×1019 GeV (GN is the Newtonian gravitational constant) allows to put forward the hypothesis that the Landau pole problem is solved somehow (see, e.g.3 which uses the ideas of asymptotically safe gravity of 4 ) by a complete theory that includes the quantum gravity. In that way the triviality is “swept under the carpet”, provided the position of the Landau pole is above the Planck scale. It is generally accepted that if the pole occurs below the Planck mass then the UV completion of the SM must not be related to gravity. It is well known that the requirement that the Landau pole in the scalar selfcoupling must not appear below some cutoff scale Λ puts an upper bound on the mass MH of the Higgs boson. Moreover, for sufficiently small Higgs masses the SM vacuum is unstable, which leads to a lower bound on the Higgs mass, depending on the value of the cutoff Λ, which we take to be Λ = MP l . In other words, theoretically it is possible to think that the SM is valid all the way up to the Planck scale, and some complete theory takes over above it, though this is only feasible if the Higgs mass lies in the interval mmin < mH < mmax , where mmin = [126.3 + mt −171.2 × 2.1 mt−171.2 αs −0.118 4.1− αs−0.1176 ×1.5] GeV , and m = [173.5+ ×0.6− ×0.1] GeV . max 0.002 2.1 0.002 Here αs is the strong coupling at the Z-mass, with theoretical uncertainty in mmin equal to ±2.2 GeV. These numbers are taken from the recent two-loop analysis of 5 (see also6,7 and references therein). Let us see now if this point of view can survive when confronted with different experiments and observations. Since the SM is not a fundamental theory, the low energy Lagrangian can contain all sorts of higher-dimensional SU(3)×SU(2)×U(1) invariant operators, suppressed by the Planck scale: L = LSM +
∞ X On . MPn−4 l n=5
(1)
These operators lead to a number of physical effects that cannot be described by the SM, such as neutrino masses and mixings, proton decay, etc. For example, the lowest order five-dimensional operator ¯ α φ˜ φ† Lc O5 = Aαβ L (2) β
leads to Majorana neutrino masses of the order mν ∼ v 2 /MP l ' 10−6 eV (here Lα and φ are the left-handed leptonic doublets and the Higgs field, c is the sign of charge conjugation, φ˜i = ij φ∗j and v = 175 GeV is the vacuum expectation value of the Higgs field). The fact that mν following from this Lagrangian is so small in comparison with the lower bound on neutrino mass coming from the observations of neutrino
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
221
p oscillations mν > ∆m2atm = matm ' 0.05 eV (∆m2atm is the atmospheric neutrino mass difference, for a review see8 ) rules out the conjecture that the SM is a viable effective field theory up to the Planck scale. Though it is enough to kill a theory just by one observation, let us discuss another two, though not so solid ones as they related to cosmological observations rather than particle physics experiments. (i) Since the SM has no candidate for the Dark Matter (DM) particle and the theory (1) does not contain any new degrees of freedom, it fails to describe the dark matter in the Universe. (ii) Though the SM has all the ingredients9 to produce the baryon asymmetry of the Universe,10 it fails to do so since there is no first order EW phase transition with experimentally allowed Higgs boson masses.11 In addition to experimental and observational drawbacks of the SM one usually adds to the list of its problems different naturalness issues, such as: “Why the EW scale is so much smaller than the Planck scale?”, “Why the cosmological constant is so small but non-zero?”, “Why CP is conserved in strong interactions?”, “Why electron is much lighter than t-quark?” etc., making the necessity of physics beyond the SM even more appealing. 3. Arguments in Favour of Intermediate Energy Scale and Why They Could Be Irrelevant There is the dominating point of view that we must have some new particle physics between the electroweak scale and the Planck mass. Let us go through these arguments and try to see whether they are really convincing. GUT and SUSY scales. We start with gauge coupling unification.12 If one uses the particle content of the SM and considers the running of the three gauge couplings one finds that they intersect with each other at three points scattered between 1013 and 1017 GeV (for a recent review see13 ). This is considered as an indication that strong, weak and electromagnetic interactions are the parts of the gauge forces of some Grand Unified Theory (GUT) based on a simple group like SU(5) or SO(10) which is spontaneously broken at energies MGU T ∼ 1016 GeV which is close, but still much smaller than the Planck scale. The fact that the constants do not meet at the same point is argued to be an indication that there must exist one more intermediate threshold for new physics between the GUT scale and the electroweak scale, chosen in such a way that all the three constants do intersect at the same point. The most popular proposal for the new physics below the GUT scale is the low energy supersymmetry (SUSY). Indeed, it is amazing that the gauge coupling unification is almost perfect in the Minimal Supersymmetric Standard Model (MSSM).14 So, these considerations lead to the prediction of two intermediate energy scales between MW and MP l : one in the potential reach of the LHC whereas the other can only be revealed experimentally by the search of proton decay or other processes with baryon number non-conservation. Is there an alternative to this logic which removes the necessity of introduction
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
222
of these intermediate scales? Perhaps, a simplest possibility is to say that there is no Grand Unification and the fact that the gauge couplings nearly meet at high energy scale is a pure coincidence. Then the “stand alone” EW theory contains just one energy scale, the Higgs mass. True, the theory is not mathematically consistent because of the Landau pole, but hiding this pole and the vacuum instability above the Planck scale leaves the solution for a complete theory of gravity. Of course, it is a pity to give up the Grand Unification. In addition to gauge coupling unification GUTs provide an explanation of charge quantization15 and give some non-trivial relations between quark and lepton masses.16 An alternative is to have gauge coupling unification at the Planck scale. It is known17,18 that this possibility can be easily realised in GUTs, if higher order non-renormalisable P On operators are included in the analysis, L = LGUT + ∞ n=5 M n−4 . Indeed, if Fµν Pl
is the GUT gauge field strength and Φ is the scalar field in adjoint representation which is used to break spontaneously the GUT group down to the SM, the operators like O4+n = Tr[Fµν Φk F µν Φn−k ] , 0 ≤ k < n, n > 0 will rescale the SM gauge couplings with large effect if hΦi ∼ MP l . It was shown in19–21 that it is sufficient to add dimension 5 and 6 operators to the minimal SU(5) theory to bring the unification scale up to the Planck one. In this case the corrections due to higher order operators are reasonably small and within 10%. Note that the fact of charge quantization in GUTs does not depend on the unification scale, while the breaking of the minimal SU(5) GUT predictions for lightest fermion masses is in fact welcome. To summarize: it is appealing to think that there is no new field-theoretical scale between MW and MP l and that the gauge couplings meet at MP l ensuring that all four interactions get unified at one and the same scale. This is only self-consistent if the Higgs mass lies in the interval given above. Moreover, it is quite possible that the Planck mass cannot be considered as a field-theoretical cutoff (or as a mass of some particle in the dimensional regularization) as we still do not know what happens at the Planck scale. The experimental fact that MH MP l remains unexplained, but the absence of any field-theoretical cutoff below the Planck mass makes this hierarchy stable, at least in the minimal subtraction renormalization scheme.22,23 Inflation. The energy density of the Universe at the exit from inflation Vinf (for a review and historical account of inflationary cosmology see24 ) is not known and may vary from (2 × 1016 GeV)4 at the high end (limit is coming from the CMB observations) down to (few MeV)4 (otherwise the predictions of Big Bang Nucleosynthesis will be spoiled). At the same time, there are the “naturalness” arguments telling that Vinf should better be large (for a review see25 ), as otherwise it is difficult to reconcile the necessary number of e-foldings with the amplitude of scalar perturbations. The simplest quadratic potential V (χ) = 21 m2χ χ2 fits reasonably the CMB data with mχ ∼ 1013 GeV. This fact, and also the closeness of this number to the GUT scale is often considered as an extra argument in favour of existence of the high energy scale between MW and MP l . However, as has been demonstrated recently in,26 the inflaton can be identified with the Higgs boson of the SM. Therefore, no intermediate energy scale is required
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
223
for inflation. The key observation which allows such a relation is associated with a possible non-minimal coupling of the Higgs field H to the gravitational Ricci scalar 2 2 R, Lnon−minimal = ξH † HR . For large Higgs backgrounds ξh2 > ∼ MP l (here h = † eff 2 2H H) the masses of all the SM particles and the induced Planck mass [MP ] ∝ ξh2 are proportional to one and the same parameter, leading to independence of physical effects on the magnitude of h. In other words, the Higgs potential in the large-field region is effectively flat and can result in successful inflation. The constant ξ is fixed by the Higgs mass and by the amplitude of scalar fluctuations known from COBE observations of the CMB. After inflation the Universe is heated up to the temperature T = Treh > 1.5×1013 GeV creating all particles of the SM27 (see also28 ). Higgs-inflation predicts the specific values for spectral indexes describing scalar (ns ) and tensor (r) perturbations, which are in accordance with the WMAP-5 observations.26 It reveals the non-trivial relation between the Higgs mass and properties of cosmological perturbations5,29 (see also30,31 ). Strong CP-problem. One of the fine-tuning problems of the SM is related to complicated vacuum structure of QCD leading to the existence of the vacuum θ angle32,33 leading to CP-non-conservation in strong interactions. A most popular solution to the problem is related to Peccei-Quinn symmetry 34 which brings θ to zero in a dynamical way; a degree of freedom which is responsible for this is a new hypothetical (pseudo) scalar particle - axion35,36 or invisible axion.37–39 Axion has never been seen yet, and the strong limits on its mass and couplings are coming from direct experiments40 and from cosmology and astrophysics.41 They lead to an 12 admitted “window” for the Peccei-Quinn scale 108 GeV < ∼MP Q < ∼10 GeV where the lower and upper bound depend on the type of axion and different cosmological assumptions. So, it looks like an intermediate scale appears again! In fact, the axion solution to the strong CP-problem is not a unique one. In particular, the mere existence of the strong CP problem is based on the assumption that the space can be considered as a smooth manifold and that the number of dimensions of the space-time is four. Indeed, the existence of θ vacua is related to topology: the mapping of the three-dimensional sphere, representing our space, to the gauge group SU(3) of QCD is non-trivial, π3 (S3 ) = Z. This leads to the existence of classical vacua with different topological numbers, and the quantum tunneling between these states forms a continuum of stable vacua characterized by θ ∈ [0, 2π). These considerations are only valid if the space is continuous and 3dimensional. However, it is very well possible that one cannot talk about space at all at Planck scales. Also, in higher dimensional theories, where the 3-dimensional character of the space is just a low-energy approximation, the strong CP-problem has to be re-analysed. For example, if the topology of the higher-dimensional space is such that the mapping of it to the gauge group is trivial, strong CP-problem disappears. Concrete examples were given42,43 for 4 + 1 dimensional space-time, where the space is a 4sphere S4 . As the compactification may happen at the Planck scale the strong CP-problem, if fact, does not necessarily point to the existence of an intermediate
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
224
scale. Neutrino masses. A popular argument in favour of existence of the very large mass scale is related to neutrino masses.44–47 Indeed, let us add to the Lagrangian of the Standard Model a dimension five operator (2) suppressed by an (unknown a-priory) mass parameter Λ and find it then from the requirement that this term 2 gives the correct active neutrino masses. One gets immediately that Λ ' mv ' atm 14 6 × 10 GeV , which is amazingly close to the GUT scale. However, this estimate provides an upper bound on the scale of new physics beyond the SM rather than the value of this scale, see below. Baryogenesis. One of the key points of any baryogenesis scenario is departure from thermal equilibrium.9 One of the popular mechanisms is called thermal leptogenesis.48 In this scenario heavy Majorana neutrinos N with the mass MN decay with non-conservation of lepton number and CP and produce lepton asymmetry of the Universe which is then converted to baryon asymmetry in rapid EW anomalous processes with fermion number non-conservation.10 An estimate of the required Majorana mass gives MN ∼ 1011 GeV,49 signaling the necessity of the intermediate scale. Electroweak baryogenesis, in which the only source for baryon number nonconservation is the electroweak anomaly, requires strongly first order phase transition.50,51 As this phase transition is absent in the SM,11 the use of EW anomaly for baryogenesis calls for modification of the scalar sector of the EW theory by introducing new scalar singlets or doublets and thus to a new physics in the vicinity of the EW scale. Though both of these arguments are certainly true for specific mechanisms of baryogenesis, they are not universal, see below. Dark matter. A particle physics candidate for dark matter must be a longlived or stable particle. The most popular candidates are related to supersymmetry (neutralino etc.) or to the axion, which we have already discussed. The scenario for WIMPs assumes that initially these particles were in thermal equilibrium and then annihilated into the particles of the SM. Quite amazingly, if the cross-section of the annihilation is of the order of the typical weak cross-section (for a review see 52 ) one gets roughly correct abundance of dark matter, suggesting that the mass of DM particles is likely to be of the order of the EW scale, as it happens, for example, in the MSSM, and thus to a new physics nearby. This argument is based on the specific processes by which the dark matter can be created and destroyed and thus is not valid in general. In the next section we will discuss the νMSM dark matter candidate with completely different properties. 4. The νMSM as an Alternative In Section 2 we reviewed the arguments that the SM must necessarily be extended while in Section 3 we argued that the solutions to the problem of gauge coupling unification and strong CP problem can be shifted up to the Planck scale. This cannot
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
225
be done with neutrino masses, and unlikely to be possible with dark matter and baryon asymmetry of the Universe. In this section we shortly review how a minimal extension of the SM, the νMSM, can solve all these problems. As for inflation, it can be associated with the Higgs boson and thus can be a consequence of the SM itself. Let us add to the SM three right-handed fermions NI , I = 1, 2, 3 (they can be called singlet leptons, right-handed leptons or Majorana neutrinos) and write the most general renormalisable interaction between these particles and fields of the SM: ¯I i∂µ γ µ NI − FαI L ¯ α NI φ˜ − MI N¯c NI + h.c. . LνM SM = LSM + N (3) I 2 Here LSM is the Lagrangian of the SM, α = e, µ, τ , and both Dirac (M D = FαI hφi) and Majorana (MI ) masses for singlet fermions are introduced. This Lagrangian contains 18 new parameters in comparison with the SM. Why this Lagrangian? Since we even do not know where the SM itself is coming from, the answer to this question can only be very vague. Here is an argument in its favour. The particle content of the SM has an asymmetry between quarks and leptons: every left quark and charged lepton has its counterpart - right quark or right-handed lepton, while the right-handed counterpart for neutrino is missing. The Lagrangian (3) simply restores the symmetry between quarks and leptons. Interestingly, the requirement of gauge and gravity anomaly cancellation, applied to this theory, leads to quantization of electric charges for three fermionic generations, 53 which was not the case for the SM, because of new relations coming from Yukawa couplings and Majorana masses. Besides fixing the Lagrangian, one should specify the masses and couplings of singlet fermions. The see-saw logic picks up the Yukawa term in (3) and tells that it is “natural” to have Yukawa coupling constants of new leptons of the same order of magnitude as Yukawa couplings of quarks or charged leptons. Then the mass parameters for singlet fermions must be large, M ∼ 108 − 1014 GeV, to give the correct order of magnitude for active neutrino masses. This leads to an intermediate energy scale already discussed above. The νMSM logic picks up the mass term in (3) and assumes that it is “natural” to have it roughly of the order of another mass term in the EW Lagrangian, namely that of the Higgs boson. This does not lead to any intermediate scale but requires smaller Yukawa couplings. To get a more precise idea about the values of Majorana masses, a phenomenological input, discussed below, is needed. Neutrino masses and oscillations. The Lagrangian (3) can explain any pattern of active neutrino masses and their mixing angles for arbitrary (and, in particular, below the EW scale) choice of the Majorana neutrino masses. This is a simple consequence of the parameter counting: the active neutrino mass matrix can be completely described by 9 parameters whereas (3) contains 18 arbitrary masses and couplings. Dark matter. The dark matter candidate of the νMSM is the long-lived lightest
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
226
singlet fermion.54 The mass of this particle is not fixed by theoretical considerations. However, there are some cosmological and astrophysical arguments giving a preference to the keV region. In particular, the keV scale is favoured by the cosmological considerations of the production of dark matter due to transitions between active and sterile neutrinos. This particle has never been in thermal equilibrium in the early Universe and thus the arguments about the mass scales of the dark matter particle of the previous section do not apply to it. For a review of different astrophysical constraints on the properties of the sterile neutrino dark matter, and the mechanisms of its cosmological production see.2 Baryogenesis. The phase structure of the νMSM is the same as that of the SM: there is no EW phase transition which could lead to large deviations from thermal equilibrium. The masses of singlet fermions are smaller than the electroweak scale, they decay below the sphaleron freeze out temperature and thus the thermal leptogenesis of 48 does not work. However, the presence of singlet fermions provides another source of thermal non-equilibrium, simply because these particle, due to their small Yukawa couplings, interact very weakly. The mechanism of baryogenesis in this case is related to coherent resonant oscillations of singlet fermions.55,56 To explain simultaneously neutrino masses, dark matter and baryon asymmetry of the Universe at least three singlet fermions are needed, with two of them with the mass preferably in the GeV region.57 They are required to be almost degenerated.55,56 The specific pattern of the singlet lepton masses and couplings leading to phenomenological success of the νMSM can be a consequence of the leptonic U(1) symmetry discussed in.57 5. Crucial Tests and Experiments As we argued, none of the arguments in favour of existence of the intermediate energy scale really requires it: gauge coupling unification and solution of the strong CP-problem can both occur at the Planck scale, whereas inflation, neutrino masses, dark matter and baryogenesis can all be explained by the particles with the masses below the electroweak scale. Moreover, a well known argument, related to stability of matter, works against introducing new physics below 1016 GeV. Indeed, using the effective theory (1) with a generic cutoff scale below 1016 GeV one gets a (fast) proton decay due to six-dimensional operators like O6 ∝ QQQL (Q is the quark doublet). To forbid it in theories with low energy sypersymmetry or with large extra dimensions one has to introduce new symmetries especially designed for this purpose. The related danger in theories with fundamental gravity scale in TeV region is the baryon number non-conservation due to virtual black holes with small masses. The point of view that there is no intermediate energy scale between the weak and Planck scales and that the low energy effective theory is the νMSM which explains neutrino oscillations, dark matter and baryon asymmetry of the Universe is rather fragile. It predicts an outcome of a number of experiments, and if any of the
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
227
predictions is not satisfied, this conjecture will be ruled out. LHC physics. Nothing but the Higgs with the mass in the window MH ∈ [126, 194] GeV in which the νMSM provides inflation.5 Neutrino physics. Hierarchical structure of active neutrino masses with one of them smaller than O(10−5 ) eV.58,59 Two other masses are fixed to be m3 = +0.6 +0.2 +0.6 [4.8−0.5 ] · 10−2 eV and m2 = [9.05−0.1 ] · 10−3 eV ([4.7−0.5 ] · 10−2 eV) in the normal (inverted) hierarchy. Majorana mass of electron neutrino is smaller than the atmospheric mass difference, mee ≤ 0.05 eV.60 Dark matter searches. Negative result for the WIMP and axion searches. The existence of a narrow X-ray line due to two-body decays of the sterile dark matter neutrino. The position and the intensity of this line are quite uncertain, with a possible cosmological preference for a few keV energy range, though higher values are certainly allowed as well. The laboratory searches of the dark matter sterile neutrino would require a precision study of kinematics of β−decays of tritium.61 B-non-conservation. No sign of proton decay or neutron-antineutron oscillations. Flavour physics. Existence of two almost degenerate weakly coupled singlet leptons which can be searched for in rare decays of mesons or τ -lepton and their own decays can be looked for in dedicated experiments discussed in.62 Though the masses of these particles cannot be precisely fixed, they should be below MW with a preference for small masses O(1) GeV.63 Visible lepton number non-conservation in N decays, with CP-breaking that can allow to fix theoretically the sign and magnitude of the baryon asymmetry of the Universe. References 1. M. Shaposhnikov, hep-th/0708.3550 (2007). 2. A. Boyarsky, O. Ruchayskiy and M. Shaposhnikov, Ann. Rev. Nucl. Part. Sci. 59, 191 (2009). 3. M. Shaposhnikov and C. Wetterich, Phys. Lett. B683, 196 (2010). 4. S. Weinberg, in: General Relativity: An Einstein Centenary Survey, Cambridge University Press , p. 790 (1979). 5. F. Bezrukov and M. Shaposhnikov, JHEP 07, p. 089 (2009). 6. J. R. Espinosa, G. F. Giudice and A. Riotto, JCAP 0805, p. 002 (2008). 7. J. Ellis et al., Phys. Lett. B679, 369 (2009). 8. A. Strumia and F. Vissani, hep-ph/0606054 (2006). 9. A. D. Sakharov, Pisma Zh. Eksp. Teor. Fiz. 5, 32 (1967). 10. V. A. Kuzmin, V. A. Rubakov and M. E. Shaposhnikov, Phys. Lett. B155, p. 36 (1985). 11. K. Kajantie, M. Laine, K. Rummukainen and M. E. Shaposhnikov, Phys. Rev. Lett. 77, 2887 (1996). 12. H. Georgi, H. R. Quinn and S. Weinberg, Phys. Rev. Lett. 33, 451 (1974). 13. S. Raby, hep-ph/0608183 (2006). 14. S. Dimopoulos, S. Raby and F. Wilczek, Phys. Rev. D24, 1681 (1981). 15. H. Georgi and S. L. Glashow, Phys. Rev. Lett. 32, 438 (1974). 16. A. J. Buras, J. R. Ellis, M. K. Gaillard and D. V. Nanopoulos, Nucl. Phys. B135, 66 (1978). 17. C. T. Hill, Phys. Lett. B135, p. 47 (1984).
November 22, 2010
16:49
WSPC - Proceedings Trim Size: 9.75in x 6.5in
04.03˙Shaposhnikov
228
18. Q. Shafi and C. Wetterich, Phys. Rev. Lett. 52, p. 875 (1984). 19. M. K. Parida, P. K. Patra and A. K. Mohanty, Phys. Rev. D39, 316 (1989). 20. B. Brahmachari, U. Sarkar, K. Sridhar and P. K. Patra, Mod. Phys. Lett. A8, 1487 (1993). 21. X. Calmet, S. D. H. Hsu and D. Reeb, Phys. Rev. D81, p. 035007 (2010). 22. M. Shaposhnikov and D. Zenhausern, Phys. Lett. B671, 187 (2009). 23. M. Shaposhnikov and D. Zenhausern, Phys. Lett. B671, 162 (2009). 24. A. Linde, Lect. Notes Phys. 738, 1 (2008). 25. D. H. Lyth and A. Riotto, Phys. Rept. 314, 1 (1999). 26. F. L. Bezrukov and M. Shaposhnikov, Phys. Lett. B659, 703 (2008). 27. F. Bezrukov, D. Gorbunov and M. Shaposhnikov, JCAP 0906, p. 029 (2009). 28. J. Garcia-Bellido, D. G. Figueroa and J. Rubio, Phys. Rev. D79, p. 063531 (2009). 29. F. L. Bezrukov, A. Magnin and M. Shaposhnikov, Phys. Lett. B675, 88 (2009). 30. A. De Simone, M. P. Hertzberg and F. Wilczek, Phys. Lett. B678, 1 (2009). 31. A. O. Barvinsky et al., JCAP 0912, p. 003 (2009). 32. R. Jackiw and C. Rebbi, Phys. Rev. Lett. 37, 172 (1976). 33. C. G. Callan, Jr., R. F. Dashen and D. J. Gross, Phys. Lett. B63, 334 (1976). 34. R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977). 35. S. Weinberg, Phys. Rev. Lett. 40, 223 (1978). 36. F. Wilczek, Phys. Rev. Lett. 40, 279 (1978). 37. M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B166, p. 493 (1980). 38. A. R. Zhitnitsky, Sov. J. Nucl. Phys. 31, p. 260 (1980). 39. M. Dine, W. Fischler and M. Srednicki, Phys. Lett. B104, p. 199 (1981). 40. S. J. Asztalos et al., Ann. Rev. Nucl. Part. Sci. 56, 293 (2006). 41. G. G. Raffelt, Phys. Rept. 198, 1 (1990). 42. S. Y. Khlebnikov and M. E. Shaposhnikov, Phys. Lett. B203, p. 121 (1988). 43. S. Khlebnikov and M. Shaposhnikov, Phys. Rev. D71, p. 104024 (2005). 44. P. Minkowski, Phys. Lett. B67, p. 421 (1977). 45. T. Yanagida, Prog. Theor. Phys. 64, p. 1103 (1980). 46. P. Ramond, hep-ph/9809459 (1979). 47. R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, p. 912 (1980). 48. M. Fukugita and T. Yanagida, Phys. Lett. B174, p. 45 (1986). 49. S. Davidson, E. Nardi and Y. Nir, Phys. Rept. 466, 105 (2008). 50. M. E. Shaposhnikov, JETP Lett. 44, 465 (1986). 51. M. E. Shaposhnikov, Nucl. Phys. B287, 757 (1987). 52. G. Bertone, D. Hooper and J. Silk, Phys. Rept. 405, 279 (2005). 53. R. Foot, G. C. Joshi, H. Lew and R. R. Volkas, Mod. Phys. Lett. A5, 2721 (1990). 54. S. Dodelson and L. M. Widrow, Phys. Rev. Lett. 72, 17 (1994). 55. E. K. Akhmedov, V. A. Rubakov and A. Y. Smirnov, Phys. Rev. Lett. 81, 1359 (1998). 56. T. Asaka and M. Shaposhnikov, Phys. Lett. B620, 17 (2005). 57. M. Shaposhnikov, Nucl. Phys. B763, 49 (2007). 58. T. Asaka, S. Blanchet and M. Shaposhnikov, Phys. Lett. B631, 151 (2005). 59. A. Boyarsky, A. Neronov, O. Ruchayskiy and M. Shaposhnikov, JETP Lett. 83, 133 (2006). 60. F. Bezrukov, Phys. Rev. D72, p. 071303 (2005). 61. F. Bezrukov and M. Shaposhnikov, Phys. Rev. D75, p. 053005 (2007). 62. D. Gorbunov and M. Shaposhnikov, JHEP 10, p. 015 (2007). 63. M. Shaposhnikov, JHEP 08, p. 008 (2008).
November 11, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
PART V
Neutrinos (Double Beta Decay, ν-Oscillations, Solar and Astrophysical Neutrinos, Tritium Decay)
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 22, 2010
17:43
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.01˙Klapdor
231
DOUBLE BETA DECAY AND BEYOND STANDARD MODEL PARTICLE PHYSICS HANS V. KLAPDOR-KLEINGROTHAUS Heidelberg, Germany ∗
[email protected], http : //www.klapdor − k.de IRINA V. KRIVOSHEINA Heidelberg, Germany and Nishnij-Novgorod, Russia,
[email protected], http : //www.klapdor − k.de Neutrinoless nuclear double 0νββ decay, recently experimentally observed for 76 Ge on a confidence level of > 6 σ, has fundamental consequences for particle physics - violation of (total) lepton number, Majorana nature of the neutrino. It further leads to sharp restrictions for SUSY theories, sneutrino mass, right-handed W-boson mass, superheavy neutrino masses, compositeness, leptogenesis, violation of Lorentz violation and equivalence principle in the neutrino sector. The masses of light neutrinos follow to be degenerate, and to be at least 0.22 ± 0.02 eV. This fixed the contribution of neutrinos as hot dark matter to ≥ 4.7% of the total observed dark matter. The neutrino mass determined might also solve the dark energy puzzle. The observation of 0νββ decay is naturally a great challenge for future experiments, in particular also with other 0νββ - emitter isotopes. There are several important experiments under construction. Present experiments have hardly a chance to reach the sensitivity required for confirmation - although sometimes this impression is given (e.g. by unproperly comparing 1.5 σ limits (of CUORICINO etc.) with the 6σ result obtained for 76 Ge. Keywords: Heidelberg-Moscow Experiment, lepton number violation, Majorana neutrino, double beta decay, neutrino mass, R-parity violating SUSY, sneutrino mass, superheavy neutrinos, composite neutrinos, right-handed W boson mass, hot dark matter, dark energy.
1. Double Beta Decay In the Nuclidic Chart a special and intriguing role for the research into fundamental particle physics, astrophysics and cosmology is offered by neutron-rich nuclei more or less far from the stability line. In the 5. Edition in 1981 the editors of the Karlsruhe Nuclidic Chart were pioneering in including the first microscopic calculations of beta decay half lives of nuclei far from stability which later were published in Nuclear Data Tables,1 and which lead to new insights into element synthesis by the ∗ Postal
address: Stahlbergweg 12, 74931 Lobbach, Germany
November 22, 2010
17:43
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.01˙Klapdor
232
astrophysical r-process, the age of the universe and cosmology (the cosmological constant).2,3 There are about 35 neutron-rich nuclei, which can undergo the socalled nuclear double beta decay - a second-order weak decay mode.3,4 The weak interaction is the most universal interaction after gravitation and operates on at least all fermions. It is the o n l y interaction, which can alter the charge of the fermions (the most famous example is beta decay) and their flavours. Double beta decay can become observable for nuclei for which no other decay process (in particular β decay) is possible. This is the case for several even-even nuclei (even number of protons and of neutrons) which because of the pairing energy have lower energy ground states than their odd-odd neighbours (odd number of protons and of neutrons). They may be converted into a more stable isotope then only under double beta decay (see Fig. 1). This process may be understood as simultaneous β decay of two neutrons (for β − β − decay) or of two protons (for β + β + decay). There are essentially two modes of double beta decay, the neutrino-accompanied (2νββ ) mode, which is allowed in the Standard Model of Particle Physics and has been observed by geochemical experiments already since 1950, and by direct detection first in 1987 22 , and the neutrinoless mode (0νββ ), which is not allowed in the Standard Model. In 2νβ − β − decay two electrons are emitted together with two electron-antineutrinos ν¯e (see Fig.2), so that lepton number L is conserved:
Fig. 1. a) Energetic situation of potential double beta emitters. Because of the pairing energy, nuclei with an even number of protons and an even number of neutrons are energetically depressed in comparison with neighbouring nuclei. Thus many nuclei are stable against single β decay, but may be converted into a more stable isotope under double beta decay. b) Schematic diagram of double beta decay of 76 Ge.
A
Z
X −→ A Z+2 X + 2e− + 2¯ νe
L:0
0
+2
−2
=⇒
∆L = 0.
(1)
November 22, 2010
17:43
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.01˙Klapdor
233 -ν e-
e-
-ν
e-
en1
n1
n2
2ν Fig. 2.
mM n 2
0ν
0ν and 2ν double β decay pioneered by E. Majorana and M. Goeppert-Mayer. (a) d
u
d W
0νββ
d
e e
(b) u
ε
d
u e
e
ν
ν e
e
u
d d
W
ε
u
d
u
d
W
u u e e
e e d
ε
u
d
(c)
u (d)
Fig. 3. Feynman graphs of the general double beta decay rate, with long range (a-c) and short range interactions (d) (from7 ]).
Much more interesting than 2νββ decay, which has been observed meanwhile for about ten nuclides, with half lives of the order of 1019 -1024 years, is the socalled neutrinoless double beta decay (0νββ ), which may be viewed as an exchange of a neutrino between the two decaying nucleons (see Figs. 2, 3): Z Z+2 X + 2e− A X −→ A L:0 0 +2 =⇒
∆L = 2
(2)
In this case the (total) lepton number L a) is n o t conserved. Such a process is only possible if neutrino and antineutrino are identical (i.e. the neutrino is a Majorana particle) b , and if either the neutrino has a non-vanishing mass, or there exists a right-handed weak interaction. In Grand Unified Theories (GUT’s) the latter two conditions are not independent.3,33 A right-handed component here is only effective in simultaneous association with a Majorana mass. If we go beyond the Standard Model, there are further mechanisms for 0νββ decay, such as Higgs boson exchange, exchange of a SUSY particle (gluino, photino, ..), of leptoquarks, composite neutrinos, etc. (see, e.g.,6,8 ). The process (2) a Total
lepton number (L) is defined as the sum of the family lepton numbers Le ,Lµ , Lτ (L =Le +Lµ +Lτ ). Non-conservation of family lepton number (but not of L) has been observed by socalled neutrino oscillations. L was found in all experiments up to now, as also the baryon number, to be a conserved quantity in particle physics. b All other fermions known today are Dirac particles, where each particle has its defined antiparticle.
November 22, 2010
17:43
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.01˙Klapdor
234
therefore yields broad access to many topics of particle physics beyond the standard model at the TeV range, on which new physics is expected to manifest itself. It can provide an absolute scale of the neutrino mass, and yields sharp restrictions for SUSY models, leptoquarks, compositeness, left-right symmetric models, test of special relativity and equivalence principle in the neutrino sector and others (see section 4). For details, we refer to.6,8,11,12 The history of ββ decay using the nucleus as a complicated laboratory for a wide range of particle physics started about 70 years ago.6 This history is connected with fundamental discoveries of particle physics, such as parity non-conservation and of gauge theories, and double beta research has become one of the most important fields of non-accelerator particle physics. Concerning neutrino physics, without 0νββ decay there is no way to decide the nature of the neutrino (Dirac or Majorana particle), and of the structure of the ν mass matrix, since neutrino oscillation experiments measure only differences of neutrino mass eigenstates. Only investigation of double beta decay, tritium decay, and also cosmological experiments can lead to an absolute mass scale.
2. History of Experimental Development of Double Beta Decay The long and close association between the phenomenon of nuclear double beta decay, the violation of lepton number conservation and the nature and mass of the neutrino began shortly after the ”discovery” of the neutrino by W. Pauli in 1930. The motivation of M. Goeppert-Mayer, however, when performing the first calculations of the half life of ββ decay (in 1935) was not the nature of the neutrino, not conservation of leptons, but the stability of even-even nuclei over geological times. In 1939 W.H. Furry showed that the ”symmetrical” theory of neutrino and antineutrino by E. Majorana (1937) could give rise to the process of neutrinoless double beta decay. The first experiments on double beta decay were undertaken, before the existence of neutrinos was proved d i r e c t l y by Cowan and Reines (in 1955). While most of the very first experiments in the period 1948–1952 were looking for the decay electrons, a remarkable exception was the experiment performed by M.G. Inghram and J.H. Reynolds (1949, 1950). They looked for the daughter nucleus and exploited the fact, that measurable amounts of the daughter might accumulate over geological times in ores, which are rich in the corresponding parent nucleus. They analyzed a tellurium ore from Boliden, Sweden, which was about 1.5 billion years old, and reported evidence for the transition 130 T e −→130 Xe with a halflife of 1.4×1021 years, which they attributed to 2νββ decay of 130 T e.40 Another early approach was to look for radioactive daughter nuclei which in principle are detectable in much smaller quantities than stable rare gases. The experiment of Inghram and Reynolds was the forerunner of a series of geochemical experiments
which definitively proved the occurrence of 2νββ decay and confirmed their value within a factor of about 2.41,42 The first observation of 2νββ decay in "direct" experiments (neither geochemical nor radiochemical) was claimed in 1987, for 82Se.22 The first "active-source experiment" (in which the detector material is at the same time the ββ emitter) was the one by E. der Mateosian and M. Goldhaber using CaF2, in 1966.43

A particularly favorable case is presented by the ββ candidate 76Ge. This germanium isotope occurs with an abundance of 7.8% in natural germanium, from which large high-resolution detectors can be manufactured. Ge can thus be used simultaneously as source and detector, allowing for a large source strength without spoiling the high energy resolution of such detectors. For many years the most sensitive experiment using detectors made from natural Ge was the one by D. Caldwell in California,36-38 until in the early nineties the first Ge experiments using Ge enriched in the isotope 76Ge (to 86%) were started.5,39 The use of enriched Ge drastically increased the sensitivity and started a new era of ββ experiments. The largest experiment of this type (with 11 kg of enriched detectors), the first one using high-purity Ge detectors, and the most sensitive experiment from 1993 until now, is the Heidelberg-Moscow experiment,5,6 which was operated in the Gran Sasso underground laboratory from 1990 to 2003.

3. Nuclear Matrix Elements - Some Necessary Comments

The half-life for neutrinoless double beta decay is connected with particle physics parameters. If we consider only two mechanisms for triggering the decay, exchange of a massive Majorana neutrino and right-handed weak currents, we have77-79

[T_{1/2}^{0\nu}(0_i^+ \to 0_f^+)]^{-1} = C_{mm}\,\frac{\langle m\rangle^2}{m_e^2} + C_{\eta\eta}\langle\eta\rangle^2 + C_{\lambda\lambda}\langle\lambda\rangle^2 + C_{m\eta}\,\langle\eta\rangle\frac{\langle m\rangle}{m_e} + C_{m\lambda}\,\langle\lambda\rangle\frac{\langle m\rangle}{m_e} + C_{\eta\lambda}\langle\eta\rangle\langle\lambda\rangle,

\langle m\rangle = |m_{ee}^{(1)}| + e^{i\phi_2}|m_{ee}^{(2)}| + e^{i\phi_3}|m_{ee}^{(3)}|,    (3)

where m_{ee}^{(i)} ≡ |m_{ee}^{(i)}| exp(iφ_i) (i = 1, 2, 3) are the contributions to the effective mass ⟨m⟩ from the individual mass eigenstates, with φ_i denoting relative Majorana phases connected with CP violation, and C_mm, C_ηη, ... denote nuclear matrix elements squared, multiplied by a phase-space factor. ⟨m⟩ (also written ⟨m_ν⟩) is the effective neutrino mass, and η and λ are right-handed weak current parameters. Ignoring the contributions from right-handed weak currents on the right-hand side of the above equation, only the first term remains.

The mere occurrence of the process eq. (2) proves violation of total lepton number and, according to the fundamental paper by Schechter and Valle,33 proves that the neutrino is a Majorana particle. For these most fundamental conclusions from
an observed 0νββ signal obviously no knowledge of nuclear matrix elements is needed at all.

We have to calculate the nuclear matrix elements (NME) when we want to determine the contributions of the neutrino mass, of right-handed weak currents and of other mechanisms, such as SUSY or leptoquarks (see 6), to the process, or if we want to deduce information on the neutrino mass etc. from an experiment. Such calculations have been performed for more than three decades now, and considerable progress has been made in improving their precision - with fundamental steps forward in the understanding of the field coming from the inclusion of the effects of spin-isospin and quadrupole-quadrupole ground-state correlations in the wave functions by Grotz and Klapdor around 1984,3,32 which triggered the partly equivalent inclusion of the pp force in QRPA (quasiparticle random phase approximation) calculations in 1986-1989 and 2000-2001.77-79,97-100 In the last 15 years various kinds of refinements have been tried, such as 'second QRPA', 'RQRPA', 'full-RQRPA', etc. (see, e.g., 6). Except perhaps for the first of them, however, these have led to rather limited progress and to more or less oscillating (in time and in sign) corrections. The reason for the latter is simply that our knowledge of nuclear theory does not allow the matrix elements to be pinned down to better than the order of a factor of two or so in favorable cases (see, e.g., 101). Thus the results of, e.g., Ref. 96 finally agree, after much "back and forth" (as the results of these authors already did for a short time in 2004, Ref. 102), with the results obtained twenty years ago in Refs. 77-79, which were used in the analysis given in sections 4 and 5. It seems that the importance of, and the potential for, calculations of NME has sometimes been systematically exaggerated - for reasons which may not always be justified by their real scientific importance and by the real potential for improvement.15

4. The Result of the HEIDELBERG-MOSCOW Experiment - and the Neutrino Mass

The result from the HEIDELBERG-MOSCOW experiment (see 13,14) is shown in Figs. 4, 5. The background around Qββ is (with pulse shape analysis) around 5×10^-3 counts/(kg y keV), i.e. close to the level which had been planned for the GENIUS project.23 The signal at Qββ, where a 0νββ signal should occur, has a confidence level of more than 6σ (11.0 ± 1.8 events) in the selection by neural network and pulse shape library (see 13), and of 6.4σ (7.05 ± 1.11 events) in the selection by the neural network (NN) alone (see Fig. 4). This is the first and, up to now, only indication for the occurrence of this process. The intensity of the observed signal13,14 corresponds to a half-life for 0νββ decay of T^{0νββ}_{1/2} = 2.23^{+0.44}_{-0.31} × 10^{25} y.c) From the half-life one can derive information on the effective neutrino mass ⟨m⟩ and the right-handed weak current parameters ⟨η⟩, ⟨λ⟩.

c) An independent analysis19,109 confirms this result (for details see also 14).
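As a rough cross-check of the order of magnitude (not the analysis of Refs. 13,14, which includes full efficiency and spectral fits), the relation T_1/2 = ln 2 · N · ε · t / S can be evaluated with the numbers quoted above; the signal efficiency ε used below is my assumption.

```python
# Hedged order-of-magnitude check of the quoted half-life, NOT the published analysis.
# Assumed inputs: 86% enrichment in 76Ge, exposure 51.39 kg y (Fig. 4, NN selection),
# S ~ 7 signal events, and an assumed overall signal efficiency eps ~ 0.6.
import math

N_A        = 6.022e23          # 1/mol
enrichment = 0.86              # 76Ge fraction (from the text)
molar_mass = 75.7              # g/mol, approximate for 86% enriched Ge (assumption)
exposure   = 51.39             # kg * y (from Fig. 4 caption)
S          = 7.05              # observed signal events (NN selection)
eps        = 0.6               # assumed overall signal efficiency (illustrative)

atoms_76Ge_per_kg = enrichment * 1000.0 / molar_mass * N_A
T_half = math.log(2) * atoms_76Ge_per_kg * exposure * eps / S
print(f"T_1/2 ~ {T_half:.1e} y")   # ~2e25 y, the same order as the quoted 2.23e25 y
```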
Fig. 4. The pulse-shape selected spectrum (based on the analysis of the measured time structure of the individual pulses13) in the range 2000-2100 keV, taken with detectors 2,3,4,5 (51.39 kg y) and selected by the neural network (NN) method (left), and the corresponding full spectrum of all five detectors in the range 2000-2060 keV (56.66 kg y) (right), for the period 1995-2003 (see 13). The Qββ value of ββ decay of 76Ge is known to be 2039.006 ± 0.050 keV. Fig. 4 (left) is a zoom of Fig. 5 (upper part).
Fig. 5. Top: the pulse-shape selected spectrum of single-site events obtained with the neural network (NN) method, measured with detectors 2,3,4,5 from 1995-2003. Bottom: the full spectrum measured with detectors 2,3,4,5 from 1995-2003 (from 13).
Under the assumption that only one of the three terms (the effective mass term and the two right-handed weak current terms) contributes to the decay process, and ignoring potential other processes connected with SUSY theories, leptoquarks, compositeness, etc. (see 6), we find

⟨m⟩ = (0.32 ± 0.03) eV, or
⟨η⟩ = (3.05^{+0.26}_{-0.25}) × 10^{-9}, or
⟨λ⟩ = (6.92^{+0.58}_{-0.56}) × 10^{-7}.

These are the upper limits for these quantities. When 'calibrating' the corresponding matrix element with the measured rate of 2νββ decay of 76Ge (see 13), the effective neutrino mass becomes lower, down to (0.22 ± 0.02) eV.

The effective electron neutrino mass is ⟨m_ν⟩ = Σ_i (U*_{ei})² m_i, where U*_{ei} = ⟨ν_e|ν_i⟩ = ⟨ν_i|ν_e⟩*, and |ν_e⟩ = Σ_i U_{ei}|ν_i⟩.
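The following minimal Python sketch (an illustration with assumed mixing parameters, not a fit) evaluates this sum for degenerate masses and shows how the unknown Majorana phases can partially cancel the contributions; the numerical |U_ei|² values are my assumptions, of the size suggested by solar and reactor data of that time.

```python
# Hedged illustration of <m_nu> = | sum_i U_ei^2 * m_i * exp(i*phi_i) |.
# Mixing weights are assumed round numbers (roughly |U_e1|^2 ~ 0.7, |U_e2|^2 ~ 0.3,
# |U_e3|^2 ~ 0); masses are taken degenerate at 0.3 eV purely for illustration.
import cmath

def effective_mass(masses, Ue2, phases):
    """|sum_i Ue2[i] * masses[i] * exp(i*phases[i])| in eV."""
    return abs(sum(u * m * cmath.exp(1j * p)
                   for u, m, p in zip(Ue2, masses, phases)))

Ue2    = [0.7, 0.3, 0.0]          # assumed |U_ei|^2 values
masses = [0.3, 0.3, 0.3]          # degenerate spectrum, eV (illustrative)

print(f"{effective_mass(masses, Ue2, [0.0, 0.0, 0.0]):.2f} eV")       # 0.30 eV, phases aligned
print(f"{effective_mass(masses, Ue2, [0.0, cmath.pi, 0.0]):.2f} eV")  # 0.12 eV, partial cancellation
# The spread between these numbers is why <m> fixes the common mass scale
# only up to the unknown relative Majorana phases.
```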
Fig. 6. Summary of expected values for the effective neutrino mass ⟨m_ee⟩ in different neutrino mass schemes, and the result of the HEIDELBERG-MOSCOW experiment. The bars denote the ranges of ⟨m⟩ still allowed by neutrino oscillation experiments in the different neutrino mass scenarios (see 13,21). All models except the degenerate one and the sterile-neutrino scenario with inverted hierarchy are excluded by the 0νββ decay result. Also shown (lower part) is the exclusion line from WMAP, plotted for Σm_ν < 1.0 eV (which is, according to 27, too strict). WMAP does not rule out any of the neutrino mass schemes. Further shown, for historical context, are the sensitivities expected earlier for the planned double beta experiments CUORE, MOON, EXO and the 1 ton and 10 ton projects of GENIUS.6,23
From neutrino oscillation experiments d) one can determine the mixing parameters and thus the effective mass ⟨m⟩ to be expected in 0νββ experiments for different ν mass scenarios (different ν mass models). Fig. 6 shows these expectations together with the value of ⟨m⟩ determined from the HEIDELBERG-MOSCOW experiment (under the assumption of a dominating mass mechanism). It can be seen that in a scenario of three neutrino flavours only the solution of degenerate masses (i.e. essentially identical masses, except for the small differences determined by neutrino oscillation experiments, see footnote d) remains.

d) From the observation of solar, atmospheric and accelerator neutrinos, neutrino oscillations have been observed, i.e. transitions from one neutrino flavour into another, e.g. νe → νµ. These experiments allow one to determine the differences of the ν mass eigenvalues (which are found to be of the order of 0.008 and 0.05 eV)30,31 and their flavour composition. The absolute neutrino masses cannot be determined by neutrino oscillation experiments.
If sterile neutrinos are allowed for (4ν scenarios), the sterile-neutrino scenario with inverted hierarchy (see 107) also remains. All other mass models (hierarchical, inverse hierarchy, partially degenerate) are excluded by the HEIDELBERG-MOSCOW ββ experiment. The common mass eigenvalue corresponding to this effective mass can be determined, with recent mixing angles from solar neutrinos, to be 0.22-0.48 eV.6,34,35

The combined analysis of the cosmological experiments SDSS and WMAP27 yields an upper limit on the sum of the neutrino masses of Σm_ν < 1.7 eV (95% c.l.). This would correspond to ≤ 12% of hot dark matter in the total dark matter observed in the Universe. The above double beta result yields a lower limit of 4.7%. The SDSS result means that the individual neutrino mass should be smaller than ∼ 0.6 eV, which is consistent with the above value from double beta decay.e) Fig. 6 shows, as an example, the limit set by the cosmic microwave background experiment WMAP (assuming Σm_i = 1.0 eV). The Planck mission - the new-generation cosmic microwave background experiment that has just started - is expected to reach a sensitivity down to Σm_ν = 0.2 eV.29 This means it will be able to test the HEIDELBERG-MOSCOW result of Σm_ν ≥ 0.6 eV (derived under the assumption of a dominating ν mass mechanism) earlier than any other ββ experiment (see below, section 6). PLANCK can thus decide whether the double beta decay process is triggered by the neutrino mass mechanism or by another one.

e) Other evaluations of WMAP28 yield systematically lower values, down to Σm_i < 0.6 eV (also still consistent with the 0νββ result), but according to the convincing argumentation of 27 this is the result of less generally justified priors used in Ref. 28.

5. The Results for Other Beyond Standard Model Physics

Assuming other mechanisms, which have been studied extensively in our group and in other groups in recent years, to dominate the 0νββ decay amplitude, the result allows stringent limits to be set on parameters of SUSY models, leptoquarks, compositeness, masses of heavy neutrinos and of the right-handed W boson, possible violation of Lorentz invariance and of the equivalence principle in the neutrino sector, and others. Figs. 7, 8, 9 and 10 show, as examples, some of the relevant graphs which can in principle contribute to the 0νββ amplitude and from which bounds on the corresponding parameters can be deduced, conservatively taking the measured half-life as an upper limit for the individual processes. Figs. 11, 12 and 13 show some results deduced from the experimental 0νββ result for the underlying theories. For more details, see 6.

5.1. SUSY with R-parity breaking

The constraints on the parameters of the minimal supersymmetric standard model with explicit R-parity violation deduced49-51 from the 0νββ half-life limit are more stringent than those from other low-energy processes and from the largest high-energy accelerators. The limit for the R-parity breaking Yukawa coupling λ'_{111} (see Fig. 11) is

\lambda'_{111} \leq 3.9 \cdot 10^{-4} \left(\frac{m_{\tilde q}}{100\,\mathrm{GeV}}\right)^{2} \left(\frac{m_{\tilde g}}{100\,\mathrm{GeV}}\right)^{1/2},    (4)

with m_q̃ and m_g̃ denoting squark and gluino masses, respectively, and with the assumption m_d̃R ≃ m_ũL.
Fig. 7. Examples of Feynman graphs for 0νββ decay within R-parity violating supersymmetric models (see, e.g., 11).
Fig. 8. Examples of R_p-conserving SUSY contributions to 0νββ decay (see, e.g., 11).
Fig. 9. Examples of Feynman graphs for 0νββ decay within LQ models. S and V µ stand for scalar and vector LQs, respectively (see, e.g.11 ).
This result was important for the discussion of new physics in connection with the high-Q² events seen at HERA. It excluded the possibility that first-generation squarks (of R-parity violating SUSY) were being produced in the high-Q² events.59,60 We find further51,52

\lambda'_{113}\lambda'_{131} \leq 3 \cdot 10^{-8},    (5)

\lambda'_{112}\lambda'_{121} \leq 1 \cdot 10^{-6}.    (6)
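For orientation, the bound of eq. (4) can be evaluated numerically; the sketch below (my illustration, with arbitrarily chosen sample masses) only restates the formula quoted above.

```python
# Hedged evaluation of eq. (4): lambda'_111 <= 3.9e-4 * (m_sq/100 GeV)^2 * (m_gl/100 GeV)^(1/2)
def lambda111_bound(m_squark_gev, m_gluino_gev):
    return 3.9e-4 * (m_squark_gev / 100.0) ** 2 * (m_gluino_gev / 100.0) ** 0.5

for m in (100.0, 300.0, 1000.0):          # sample (illustrative) degenerate masses in GeV
    print(f"m_sq = m_gl = {m:6.0f} GeV  ->  lambda'_111 <= {lambda111_bound(m, m):.2e}")
# The bound weakens rapidly with the SUSY mass scale, which is why the comparison
# with collider limits in Fig. 11 is shown as a function of the squark mass.
```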
Fig. 10. Neutrinoless double beta decay (∆L = +2 process) mediated by a composite heavy Majorana neutrino.
Fig. 11. Comparison of sensitivities of existing and future experiments on R_p-violating SUSY models in the λ'_{111}-m_q̃ plane. Note the doubly logarithmic scale! Shown are the areas currently excluded by the experiments at the TEVATRON and HERA-B, the limit from charged-current universality, denoted by CCU, and the limit from 0νββ decay from the HEIDELBERG-MOSCOW experiment. The area beyond (or to the left of) the lines is excluded. The estimated sensitivity of the LHC is also given (from 11).
5.2. SUSY with R-parity conservation

R-parity conserving, softly broken supersymmetry has also been found (by the Heidelberg group63,65) to give contributions to neutrinoless double beta decay, via the (B-L)-violating sneutrino mass term, which is a generic ingredient of any weak-scale SUSY model with a Majorana neutrino mass. These contributions are realized at the level of sneutrino box diagrams (Fig. 8). For the (B-L)-violating sneutrino mass m̃_M the following limits are obtained63

\tilde m_M \leq 2 \left(\frac{m_{SUSY}}{100\,\mathrm{GeV}}\right)^{3/2} \mathrm{GeV}, \qquad \chi \simeq \tilde B,    (7)

\tilde m_M \leq 11 \left(\frac{m_{SUSY}}{100\,\mathrm{GeV}}\right)^{7/2} \mathrm{GeV}, \qquad \chi \simeq \tilde H,    (8)

for the limiting cases that the lightest neutralino is a pure Bino B̃, as suggested by the SUSY solution of the dark matter problem,64 or a pure Higgsino.
Actual values of m̃_M for other choices of the neutralino composition should lie in between these two values. Another way to deduce a limit on the 'Majorana' sneutrino mass m̃_M is to start from the experimental neutrino mass limit, since the sneutrino contributes to the Majorana neutrino mass m_ν^M at the 1-loop level proportionally to m̃²_M. Under some assumptions this yields63

\tilde m_{M(i)} \leq (60 - 125) \left(\frac{m^{exp}_{\nu(i)}}{1\,\mathrm{eV}}\right)^{1/2} \mathrm{MeV}.    (9)

Starting from the mass limit determined for the electron neutrino by 0νββ decay, this leads to

\tilde m_{M(e)} \leq 40\ \mathrm{MeV}.    (10)

This result is somewhat dependent on neutralino masses and mixings. A non-vanishing 'Majorana' sneutrino mass would result in new processes at future colliders, such as sneutrino-antisneutrino oscillations. Reactions at the Next Linear Collider (NLC), like the SUSY analogue of inverse neutrinoless double beta decay, e⁻e⁻ → χ⁻χ⁻ (where χ⁻ denotes a chargino), or single sneutrino production, e.g. by e⁻γ → ν̃_e χ⁻, could also give information on the Majorana sneutrino mass. This is discussed in 63,65,66. A conclusion is that future accelerators can give information on second- and third-generation sneutrino Majorana masses, but for first-generation sneutrinos they cannot compete with 0νββ decay.

5.3. Leptoquarks

Assuming that either scalar or vector leptoquarks (LQs) contribute to 0νββ decay (see Fig. 9), the following constraints on the effective LQ parameters can be derived:68

\epsilon_I \leq 2.8 \times 10^{-9} \left(\frac{M_I}{100\,\mathrm{GeV}}\right)^{2},    (11)

\alpha_I^{(L)} \leq 3.5 \times 10^{-10} \left(\frac{M_I}{100\,\mathrm{GeV}}\right)^{2},    (12)

\alpha_I^{(R)} \leq 7.9 \times 10^{-8} \left(\frac{M_I}{100\,\mathrm{GeV}}\right)^{2}.    (13)

Since the LQ mass matrices appearing in 0νββ decay are (4 × 4) matrices,68 it is difficult to diagonalize them algebraically in full generality. However, if one assumes that only one LQ-Higgs coupling is present at a time, the (mathematical) problem is simplified greatly, and one can deduce that either the LQ-Higgs coupling must be smaller than ∼ 10^{-(4-5)}, or there cannot be any LQ with, e.g., couplings of electromagnetic strength and masses below ∼ 250 GeV. These bounds from ββ decay were of interest in connection with the evidence for new physics from HERA discussed at the time.59,69-71 Assuming that leptoquarks had actually been produced at HERA, double beta decay (the HEIDELBERG-MOSCOW experiment) would allow the leptoquark-Higgs coupling to be fixed to a few 10^{-6} (see 61).
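A small numerical illustration of eqs. (11)-(13) follows (my sketch; the variable names epsilon/alpha simply mirror the effective LQ parameters as reconstructed above, and the sample masses are arbitrary).

```python
# Hedged evaluation of the leptoquark bounds (11)-(13); all bounds scale as (M_I/100 GeV)^2.
BOUNDS_AT_100GEV = {          # effective LQ parameter : bound for M_I = 100 GeV
    "epsilon_I":   2.8e-9,
    "alpha_I_(L)": 3.5e-10,
    "alpha_I_(R)": 7.9e-8,
}

def lq_bound(name, M_I_gev):
    return BOUNDS_AT_100GEV[name] * (M_I_gev / 100.0) ** 2

for M in (100.0, 250.0, 1000.0):                     # illustrative LQ masses in GeV
    line = ", ".join(f"{k} <= {lq_bound(k, M):.1e}" for k in BOUNDS_AT_100GEV)
    print(f"M_I = {M:6.0f} GeV: {line}")
# Even for TeV-scale leptoquarks the allowed effective couplings remain tiny,
# which is the content of the statement above.
```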
5.4. Compositeness

Evaluation of the 0νββ half-life assuming the exchange of excited composite Majorana neutrinos ν* yields for the mass of the excited neutrino a lower bound of47,67

m_N \geq 3.4\, m_W    (14)

for a coupling of order O(1) and Λ_c ≃ m_N. Here, m_W is the W-boson mass. The constraints on composite excited neutrinos of mass m_N deduced from ββ decay are more stringent than the results of LEP II, as shown in Fig. 12.
Fig. 12. Comparison between the ββ0ν HEIDELBERG-MOSCOW experiment and the LEP II upper bound on the quantity |f|/(√2 M_N) as a function of the heavy composite neutrino mass M_N, with the choice Λ_c = M_N. Regions above the curves are excluded. The dashed and solid (circle) curves are the ββ0ν bounds from the HEIDELBERG-MOSCOW experiment (for details see 47).
5.5. Superheavy neutrinos, right-handed W boson

It has been discussed since 1980 by Mohapatra53,54 that not only the exchange of a left-handed neutrino, but also that of a heavy right-handed neutrino, which naturally occurs in left-right symmetric models, could induce neutrinoless double beta decay. This process, which was later discussed in more detail by the Heidelberg group,57,58 yields at present the most restrictive lower bound on the mass of a right-handed W boson, m_WR ≥ 1.4 TeV.11,52,57 In the case of the exchange of heavy or superheavy left-handed neutrinos one can exploit the mass dependence of the matrix element (see, e.g., 9) to obtain lower limits on the latter (see in 6 [HM95]).
The deduced lower limit10,11,52 is ⟨m_H⟩ ≥ 9 × 10^7 GeV. Assuming the bound on the mixing matrix element44-46 of U²_ei = 5 × 10^{-3}, and assuming no cancellation between the involved states, this limit implies a bound on the mass eigenstate of M_i > 5 × 10^5 GeV. To obtain the same information from inverse double beta decay, e⁻e⁻ → W⁻W⁻, at a Next Linear Collider, the latter should have a center-of-mass energy of two TeV44,45 (Fig. 13), which lies in the very far future.
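The step from ⟨m_H⟩ to M_i can be made explicit; the sketch below assumes (my reading, consistent with the numbers quoted above) that the heavy-neutrino effective mass is defined through ⟨m_H⟩⁻¹ = Σ_i U²_ei/M_i, so that a single dominant state with U²_ei = 5 × 10⁻³ gives the quoted bound.

```python
# Hedged arithmetic behind "M_i > 5e5 GeV": with <m_H>^-1 = sum_i U_ei^2 / M_i and a
# single dominant heavy state (no cancellations), M_i >= U_ei^2 * <m_H>.
m_H_bound = 9e7    # GeV, lower bound on <m_H> quoted in the text
U_ei_sq   = 5e-3   # bound on the mixing matrix element (refs. 44-46)

M_i_bound = U_ei_sq * m_H_bound
print(f"M_i > {M_i_bound:.1e} GeV")   # ~4.5e5 GeV, i.e. the ~5e5 GeV quoted above
```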
Fig. 13. Discovery limit for e⁻e⁻ → W⁻W⁻ at a linear collider as a function of the mass M_i of a heavy left-handed neutrino and of U²_ei, for √s between 500 GeV and 10 TeV. In all cases the parameter space above the line corresponds to observable events. The limits from the HEIDELBERG-MOSCOW 0νββ experiment are also shown; the areas above the 0νββ contour line are excluded. The horizontal line denotes the limit on neutrino mixing, U²_ei, from LEP (from 45).
5.6. Extra dimensions

Neutrinoless double beta decay in the presence of extra dimensions has been discussed by R. Mohapatra and A. Pérez-Lorenzana.55 They show that the higher Kaluza-Klein modes of the right-handed W boson provide new contributions to this process. In this way correlated limits on m_WR and on the inverse size of the extra dimensions can be obtained from double beta decay. The model-building conditions under which a 0νββ signal of the observed level can be obtained from Kaluza-Klein singlet neutrinos in theories with large extra dimensions have been investigated by us in 56 (see chapter 1.5.5 of 6).

5.7. Special relativity, equivalence principle

Violation of Lorentz invariance (VLI): the bound obtained from the HEIDELBERG-MOSCOW experiment is

\delta v < 4 \times 10^{-16} \quad \mathrm{for}\ \theta_v = \theta_m = 0,    (15)
where δv = v1 − v2 is the measure of VLI in the neutrino sector. θv and θm denote the velocity mixing angle and the weak mixing angle, respectively. In Fig. 14 (from73 ) the bound implied by double beta decay is presented for the entire range of sin2 (2θv ), and compared with bounds obtained from neutrino oscillation experiments (see72 ).
Fig. 14. Double beta decay bound (solid line) on violation of Lorentz invariance in the neutrino sector, excluding the region to the upper left. Shown is a double-logarithmic plot in the δv-sin²(2θ_v) parameter space. The bound becomes most stringent in the small-mixing region, which has not been constrained by any other experiment. For comparison, the bounds obtained from neutrino oscillation experiments (from 72) in the νe-ντ (dashed lines) and νe-νµ (dash-dotted lines) channels, excluding the region to the right, are shown (from 73).
Violation of the equivalence principle (VEP): assuming only violation of the weak equivalence principle, no bound on the amount of VEP exists so far. It is this region of the parameter space which is most restrictively bounded by neutrinoless double beta decay. In a linearized theory the gravitational part of the Lagrangian, to first order in a weak gravitational field g_{µν} = η_{µν} + h_{µν} (with h_{µν} = (2φ/c²) diag(1,1,1,1)), can be written as L = -½(1 + g_i) h_{µν} T^{µν}, where T^{µν} is the stress-energy tensor in the gravitational eigenbasis. In the presence of VEP the g_i may differ. We obtain73 the following bound from the HEIDELBERG-MOSCOW experiment, for θ_v = θ_m = 0:

\phi\,\delta g < 4 \times 10^{-16} \ (\mathrm{for}\ \bar m < 13\ \mathrm{eV}), \qquad \phi\,\delta g < 2 \times 10^{-18} \ (\mathrm{for}\ \bar m < 0.08\ \mathrm{eV}).    (16)

Here ḡ = (g_1 + g_2)/2 can be considered as the standard gravitational coupling, for which the equivalence principle applies, and δg = g_1 - g_2. The bound on the VEP thus, unlike the one for VLI, depends on the choice of the Newtonian potential φ.
5.8. Dark matter and dark energy

A degenerate neutrino mass of ≥ 0.22 eV corresponds to a contribution of neutrinos, as hot dark matter, of ≥ 4.7% to the total observed dark matter in the Universe (see Fig. 15). According to Goldman et al.,106 there is a possible correlation between the mass of the neutrino and dark energy. The conclusion is that a neutrino with a mass of around 0.3 eV could solve the problem of dark energy.
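The hot-dark-matter fraction quoted above follows from the standard relation Ω_ν h² = Σm_ν/(93.14 eV); the short sketch below (my illustration; the values of h and Ω_DM are assumptions, not taken from this paper) reproduces the few-per-cent level.

```python
# Hedged illustration of the neutrino hot-dark-matter fraction for degenerate masses.
# Assumed cosmological parameters (h, Omega_DM) are illustrative round numbers.
m_nu_each = 0.22          # eV, common (degenerate) mass from the 0nbb result
sum_m_nu  = 3 * m_nu_each # eV
h         = 0.7           # assumed reduced Hubble constant
Omega_DM  = 0.23          # assumed total dark-matter density parameter

Omega_nu = sum_m_nu / 93.14 / h**2   # standard relation Omega_nu * h^2 = sum(m_nu)/93.14 eV
print(f"Omega_nu ~ {Omega_nu:.3f}, i.e. ~{100 * Omega_nu / Omega_DM:.0f}% of the dark matter")
# -> a few per cent, of the same size as the >= 4.7% quoted in the text
# (the exact number depends on the assumed h and Omega_DM).
```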
Fig. 15. Contribution of hot dark matter to the mass distribution in the Universe.
"In a cloud of massive fermions interacting by exchange of a light scalar field, the effective mass and the total energy density eventually increases with decreasing density. In this regime, the pressure density relation can approximate that required for dark matter energy. Applying this to the expansion of the Universe with a very light scalar field leads to the conclusion that Majorana neutrinos of a mass of ∼ 0.3 eV" (as observed in 0νββ decay) "may be consistent with current observation of dark energy".106 The paper by Stephenson (these Proceedings)111 also arrives at the same result for the electron neutrino mass, in the range 0.2-0.48 eV, as determined by 0νββ decay.

6. Future ββ Experiments and Some Problems

The main problem is that present and future 'confirmation' experiments are usually not sensitive enough (see 16 and more in 6). A good example is the NEMO III experiment. The half-life limits reached (at a 1.5σ level), T^{0ν}_{1/2} = 1.0×10^{23} y and 4.6×10^{23} y for 100Mo and 82Se (see 80), after 389 days of effective measurement
are a factor of 20 away from the half-lives required to check the HEIDELBERG-MOSCOW result at a 1.5σ level. Since the half-life probed scales with the measuring time as T^{0ν}_{1/2} ∼ √(t·M/(δE·B)), this means that NEMO III would have to measure for more than 400 years to see the signal at a 1.5σ level, and correspondingly longer to see it at a higher c.l.16,74

CUORICINO - which has the general problem that it cannot distinguish between β and γ events, and which because of its high background cannot see the 2νββ spectrum of 130Te - could see the HEIDELBERG-MOSCOW signal, assuming an uncertainty in the knowledge of the nuclear matrix element75 of a factor of only 2, within 1 to 30 years, at a 1.5σ c.l.16,74 It can thus never disprove the HEIDELBERG-MOSCOW result. The improper comparison of 1.5σ limits with the result of a 6.4σ signal, as given in 110, is of course highly misleading! The larger version, CUORE, with a mass larger by a factor of 16, would also need many years for a statement at a 6σ level.

EXO - the main problem is that no tracks are visible in a liquid 136Xe experiment.83 This kills the main idea of the experiment, to separate ββ from γ events, and reduces it to a complicated calorimeter. Since the other main idea - laser identification of the daughter nucleus - is not (yet) working, the present, rather modest, aim is84 to reach a background level like that reached in the HEIDELBERG-MOSCOW experiment, instead of the factor of 1000 less projected earlier. This goal was claimed to be reachable around 2010 (see Ref. 84).

GERDA (the 'copied' GENIUS project proposed in 1997,23 planning to operate naked 76Ge crystals in liquid nitrogen) was started about 10 years ago, but has not yet begun operation. From our earlier Monte Carlo calculations for GENIUS,23 we expected a large potential for ββ research. The only long-term experience with naked detectors in liquid nitrogen has since been collected with our GENIUS Test Facility.24 Why any GENIUS-like project will not be able to confirm our evidence on a reasonably short time scale is described in 25.

SNO+ - the only experiment having a good chance to perform a high-statistics 0νββ experiment is perhaps the SNO+ project, in its application to double beta decay sometimes called SNO++.82 The idea is to fill SNO, the Sudbury (Solar) Neutrino Observatory, with liquid scintillator. This liquid scintillator then serves as the medium in which double beta decay candidate materials would be deployed. The use of 150Nd (in the form of Nd2O3 nanoparticles) would take advantage of the relatively large phase space and nuclear matrix element of 150Nd (see 81). For 50% enriched 150Nd and 0.1% loading in SNO+, a 5σ sensitivity to an effective neutrino mass of 0.04 eV is expected to be reached after 3 years of running. Since the start of the 150Nd experiment is expected for the year 2010, SNO+ could be the near- or medium-term future of double beta decay.

COBRA - using, or planning to use, CdTe pixelized semiconductor detectors, which may in principle have the potential of looking for β⁺β⁺ and β⁺EC decay, is still almost 10 orders of magnitude away (see 113) from the sensitivity required to become useful for double beta decay research (see 16,76), e.g. for the points discussed below in this section.
The increase in sensitivity of about one order of magnitude over the last years shows that rather long time scales may be expected before this technique becomes fruitfully applicable to ββ decay.

Concerning the expected information on the ν mass, there is another problem in present experimental approaches. Even if one of these β⁻β⁻ experiments were able to confirm the HEIDELBERG-MOSCOW result, no new information would be obtained. It has been known for 20 years - but is surprisingly often overlooked - that a β⁻β⁻ experiment can give information on the effective neutrino mass only under some assumption on the contribution of right-handed weak currents (parameters η, λ) or of other mechanisms, such as SUSY, to the ββ amplitude (see, e.g., 6). In general one obtains only an upper limit on ⟨m⟩. So when neutrino masses are deduced from 0νββ experiments, this is always done under the assumption of vanishing η, λ, etc. In that sense it is premature to compare such numbers, as is often done, with numbers deduced e.g. from WMAP or other experiments, or to use them as a landmark for future tritium experiments. It is unfortunate that even an additional highly sensitive β⁻β⁻ experiment (e.g. on 136Xe), together with the 76Ge HEIDELBERG-MOSCOW result, can give no information to disentangle the individual contributions of ⟨m⟩, ⟨η⟩, ⟨λ⟩ to the 0νββ decay rate. This was shown already in 199476 and has been investigated in more detail in 108.

6.1. Proposed way out

In the same paper76 it was shown that the only realistic way to obtain this information on the individual contributions of m, η, λ is to combine the β⁻β⁻ result from 76Ge (HEIDELBERG-MOSCOW) with a very high-sensitivity (level of 10^{27} y) mixed-mode β⁺/EC decay experiment (e.g. on 124Xe) (see also 16,108 and Fig. 16). So it might be wise to combine future efforts to confirm the HEIDELBERG-MOSCOW result with a possibility to pin down some of the various contributions to the 0νββ decay amplitude.

7. Summary and Outlook

With the HEIDELBERG-MOSCOW experiment5,13,14,16,18,74 we achieved what we wanted to learn from our large GENIUS project, proposed in 199723 at a time when a signal had not yet been seen - namely the observation of 0νββ decay. There is now a >6σ signal for 0νββ decay. The neutrino is a Majorana particle. Total lepton number is violated. Presently running and planned experiments are usually - with one possible exception - not sensitive enough to check the HEIDELBERG-MOSCOW result on a reasonable time scale (see Fig. 17).f)

f) The authors of Ref. 96 also confirm (see their Figs. 3 and 5) that, according to their own calculations of nuclear matrix elements, no present experiment probes the signal range of KK-K13, and none of them can exclude a fraction of the range given by HEIDELBERG-MOSCOW at a comparable confidence level. They state that more optimistic claims by 104 were based on a larger favoured range for the 130Te half-life, and thus were wrong.
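As a rough illustration of the time-scale statements in section 6, the sketch below only restates the quadratic dependence implied by the T^{0ν}_{1/2} ∼ √(t·M/(δE·B)) scaling quoted there; detector mass, resolution and background are held fixed (my assumption for this illustration).

```python
# Hedged illustration of the sensitivity scaling T_1/2 ~ sqrt(t*M/(dE*B)):
# with mass, resolution and background fixed, the probed half-life grows only like
# sqrt(t), so closing a factor k in half-life costs a factor k^2 in measuring time.
def required_time(t_reached_years, shortfall_factor):
    """Time needed to probe a half-life larger by 'shortfall_factor' (all else fixed)."""
    return t_reached_years * shortfall_factor ** 2

t0 = 389 / 365.25            # ~1.07 y of effective NEMO III measurement (from the text)
print(f"{required_time(t0, 20):.0f} years")   # a factor-20 shortfall -> ~430 years,
                                              # i.e. the 'more than 400 years' quoted above
```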
Fig. 16. Upper left: the allowed regions in the ⟨λ⟩-⟨m⟩ parameter space for different ββ parent isotopes, using experimental limits. Upper right: allowed region in the ⟨m⟩-⟨λ⟩ parameter space for 76Ge (T^{0νββ}_{1/2} = 2.23^{+0.44}_{-0.31} × 10^{25} y) and assuming for 136Xe T^{0νββ}_{1/2} = 1.93^{+0.44}_{-0.31} × 10^{25} y. Bottom left: allowed region for 100Mo for the 0⁺ → 2⁺ and 0⁺ → 0⁺ transitions (assuming half-lives of 3.1 × 10^{31} y and 5.8 × 10^{24} y, respectively). Bottom right: the case of the neutrinoless mixed β⁺/EC mode of 124Xe for an assumed half-life of (1.34-1.87)×10^{27} y, compared to the measured 76Ge half-life (⟨m_ν⟩-⟨λ⟩ plane). The dashed areas are consistent with both experiments. (From Ref. 108.)
Nuclear double beta decay has developed over the last decades into one of the most exciting means of research into physics beyond the Standard Model. That evidence has been found, for the first time, that the process of 0νββ decay occurs in nature is, because of its fundamental consequences for particle physics, a huge challenge for future experiments. For a better understanding of the various mechanisms contributing to the 0νββ amplitude, we will have to go to completely different types of ββ experiments than those presently pursued (see Refs. 6,13,16).

Recent information from many independent sides is consistent with a neutrino mass of the order of the value found by the HEIDELBERG-MOSCOW experiment. This is the case for the results from CMB and LSS, neutrino oscillations, particle theory and cosmology (for a detailed discussion see 6). To mention a few examples: neutrino oscillations require, in the case of degenerate neutrinos, common mass eigenvalues of m > 0.04 eV. An analysis of CMB measurements by SDSS and large
Fig. 17. Present sensitivity, and expectation for the future, of the most promising ββ experiments. Given are limits for ⟨m⟩, except for the HEIDELBERG-MOSCOW experiment, where the measured value is given. Framed parts of the bars: present status of running experiments; solid and dashed lines: experiments under construction or proposed, respectively (dashed lines: far from realization). For references see 6,13.
scale structure yields Σm_ν < 1.7 eV27 (see footnote (d)). Theoretical papers require degenerate neutrinos with m > 0.1, 0.2 or 0.3 eV,85,87,88,93,94 and the alternative cosmological concordance model requires relic neutrinos with masses of order eV.95 As mentioned earlier,13,17,20 the results of double beta decay and CMB measurements together indicate that the neutrino mass eigenvalues have the same CP parity, as required by the model of Ref. 88. The approach of 105 also comes to the conclusion of a Majorana neutrino. The Z-burst scenario for ultra-high-energy cosmic rays requires m_ν ∼ 0.4 eV,89,90 and a non-standard-model explanation of (g-2) has also been connected with degenerate neutrino masses >0.2 eV.86 The neutrino mass determined from 0νββ decay is also consistent with present models of leptogenesis in the early Universe92 (see also the talk of M. Losada, these Proceedings112).

Finally, we have pointed out that probably the fastest test and confirmation of our 76Ge ββ result will be delivered by the new PLANCK satellite mission, launched in May 2009. It will investigate the cosmic microwave background with unprecedented precision and in this way also the neutrino mass. It will either confirm the neutrino mass deduced from the 76Ge experiment (under the assumption of a dominating mass mechanism), or it will decide that the process of ββ decay is triggered by another mechanism. It will thus yield information which can hardly ever be obtained by any future ββ experiment.
Acknowledgments The authors thank Dr. Irina Titkova for efficient and pleasant collaboration. The authors acknowledge the invaluable support from DFG and BMBF, and LNGS for this project.
References 1. H.V. Klapdor et al., Atomic Data and Nuclear Data Tables 31 (1984) 81-111; 2. H.V. Klapdor, Progr. in Part. and Nucl. Phys. 10 (1983) 131-225; 17 (1986) 419-455; Sterne und Weltraum 1985/3 132-139; Fortschritte der Physik 33, Heft 1 (1985) 1-55; The Astrophysical J. 301 (1986) L39-43. 3. K. Grotz and H.V. Klapdor, ”Die schwache Wechselwirkung in Kern-, Teilchen- und Astrophysik”, Teubner, Stuttgart 1989; Adam Hilger – IOP Bristol 1990; Mir, Moskva, 1992; Shandong Science and Technology Press, Jinan, China 1996. 4. H.V. Klapdor-Kleingrothaus and A. Staudt, ”Teilchenphysik ohne Beschleuniger”, Teubner, Stuttgart 1995; IOP, Bristol 1995, sec. ed. 1998; Nauka Fizmatlit, Moscow, 1997. 5. H.V. Klapdor, Proposal, Internal Report, MPI-1987-V17, September 1987. 6. H.V. Klapdor-Kleingrothaus, ”70 Years of Double Beta Decay - From Nuclear Physics to Beyond Standard Model Particle Physics”, World Scientific, Singapore (2010) 1520 pages; ”60 Years of Double Beta Decay - From Nuclear Physics to Beyond the Standard Model”, World Scientific, Singapore (2001) 1281 pages. 7. H. P¨ as, M. Hirsch, H.V. Klapdor-Kleingrothaus and S.G. Kovalenko Phys. Lett. B453 (1999) pp. 194-198. 8. H.V. Klapdor-Kleingrothaus, Int. Journ. of Mod. Phys. D13 (2004) 2107. 9. K. Muto and H.V. Klapdor ”Neutrinos”, ”Graduate texts in contemporary physics”, ed. H.V. Klapdor Berlin, Germany: Springer (1988) pp. 183-238. 10. H.V. Klapdor-Kleingrothaus in Proc. ”Lepton and Baryon Number Violation in Particle Physics, Astrophysics and Cosmology”, eds. H.V. Klapdor-Kleingrothaus and I.V. Krivosheina, International Workshop at ECT, Trento, Italy, 20-25 April, 1998, World Scientific (1998) pp. 251-301. 11. H.V. Klapdor-Kleingrothaus in the Proc. ”Symmetries in Intermediate High Energy Physics”, eds. A. Faessler et al., , Springer-Verlag, Berlin, Heidelberg (2000), Tracts in Modern Physics 63 pp. 69-104. 12. H.V. Klapdor-Kleingrothaus, Int. J. Mod. Phys. A 13 (1998) 3953. 13. H.V. Klapdor-Kleingrothaus, I.V. Krivosheina et al., Mod. Phys. Lett. A21 (2006) 1547; Phys. Lett. B586 (2004) 198-212 and Nucl. Instr. & Methods A522 (2004) 371406; Mod. Phys. Lett. A16 (2001) 2409-2420, Phys. Lett. B632 (2006) 623-631; Phys. Rev. D73 (2006) 013010; Phys. Scripta T127 (2006) 40-42; Int. J. Mod. Physics E17 (2008) 505-517. 14. H.V. Klapdor-Kleingrothaus, in Proc. of DARK2007, Sydney, Sept. 2007, eds. H.V. Klapdor-Kleingrothaus et al., World Scientific (2008) pp. 442-467. 15. H.V. Klapdor-Kleingrothaus and I.V. Krivosheina in Proc. of DARK2009, Seventh Heidelberg International Conference on Dark Matter in Astro-and Particle Physics Christchurch, New Zealand, 18-24 January 2009, eds. H.V. Klapdor-Kleingrothaus and I.V. Krivosheina, World Scientific, Singapore (2009) 137-169 and Preprint hepph/1006.2423 . 16. H.V. Klapdor-Kleingrothaus Int. J. Mod. Physics E17 (2008) 505-517. 17. H.V. Klapdor-Kleingrothaus et al., Phys. Lett. B578 (2004) 54-62. 18. H.V. Klapdor-Kleingrothaus, in Proc. of BEYOND03, Castle Ringberg, Germany, 9-14 June 2003, Springer (2004), ed. H.V. Klapdor-Kleingrothaus, 307-364. 19. K.Ya. Gromov et al., J. Part. Nucl. Lett. 3 (2006) 30-41. 20. H.V. Klapdor-Kleingrothaus, in Proc. of Intern. Conf. BEYOND’02, Oulu, Finland, 2-7 Jun. 2002, IOP, Bristol, 2003 ed. H.V. Klapdor-Kleingrothaus, 215-240 pp., and in Proc. of Neutrinos and Implications for Physics Beyond the Standard Model, Stony Brook, New York, 11-13 Oct. 2002, Int. J. Mod. Phys. A18 (2003) 4113-4128 and
hep-ph/ 0303217. 21. H.V. Klapdor-Kleingrothaus, H. P¨ as and A.Yu. Smirnov, Phys. Rev. D63 (2001) 073005. 22. M. Moe et al., Phys. Rev. Lett. 59 (1987) 989. 23. H.V. Klapdor-Kleingrothaus, J. Hellmig and M. Hirsch, GENIUS-Proposal, 20 Nov. 1997; J. Hellmig, HVKK, Z. Phys. A359 (1997) 351-359; H.V. KlapdorKleingrothaus, M. Hirsch, Z. Phys. A359 (1997) 361-372; H.V. KlapdorKleingrothaus, CERN Courier, Nov. 1997, 16- 18; H.V. Klapdor-Kleingrothaus, J. Hellmig, M. Hirsch, J. Phys. G24 (1998) 483-516; H.V. Klapdor-Kleingrothaus et al., Proposal Aug. 1999 sec. draft, hep-ph/9910205, Proc. of ”Beyond the Desert99” eds. H.V. Klapdor-Kleingrothaus, I.V. Krivosheina (IOP, 2000), pp. 915-1015. 24. H.V. Klapdor-Kleingrothaus, CERN Courier 43 Nr.6 (2003) 9; H.V. KlapdorKleingrothaus et al., Nucl. Instr. Meth. A 511 (2003) 341; H.V. KlapdorKleingrothaus et al., Nucl. Instr. Meth. A 530 (2004) 410-418; Nucl. Instrum. Meth. A 481 (2002) 149-159. 25. H.V. Klapdor-Kleingrothaus and I.V. Krivosheina, Nucl. Instr. Meth. A566 (2006) 472; Phys. Scripta T127 (2006) 52. 26. * B.S. Flanders, R. Madey, B.D. Anderson, A.R. Baldwin, J.W. Watson, C.C. Foster, H.V. Klapdor and K. Grotz Phys. Rev. C40 (1989) pp. 1985-1992; R. Madey, B.S. Flanders, B.D. Anderson, A.R. Baldwin, J.W. Watson, S.M. Austin, C.C. Foster, H.V. Klapdor and K. Grotz, Phys. Rev. C40 (1989) pp. 540-552. 27. SDSS Collaboration (M. Tegmark et al.), Phys. Rev.D 69 (2004) 103501, astroph/0310723. 28. WMAP Collaboration, Astrophys. J. Suppl. 148 (2003) 213-233; 170 (2007) 377. 29. ”Planck-The Scientific programme”, ESA-SCI (2005) 1, astro-ph/ 0604069 v1 and http://www.rssd.esa.int/SA/PLANCK/docs/Bluebook-ESA-SCI(2005)1 V2.pdf. 30. SNO Collaboration Phys. Rev. Lett. 92 (2004) 181-301; J.N. Bahcall et al., J. High Energy Phys. 0311 (2003) 1-48. 31. Super-Kamiokande Coll., Phys. Rev. Lett. 93 (2004) 101801; Phys. Rev. D71 (2005) 112005. 32. H.V. Klapdor and K. Grotz Phys. Lett. B142 (1984) pp. 323-328; and K. Grotz and H.V. Klapdor Nucl. Phys. A460 (1986) pp. 395-436. 33. J. Schechter and J.W.F. Valle, Phys. Rev. D25 (1982) 2951-2954. 34. H.V. Klapdor-Kleingrothaus in ”Karlsruher Nuklidkarte” (Commemoration of the 50th Anniversary) eds. G. Pfennig et al., Luxembourg: Office for Official Publications of the European Communities (EUR Scientific and Technical Research Series) (2008) 140-145. 35. H. Sugiyama, Proc. BEYOND’02, Oulu, Finland, June 2002, ed. H.V. Klapdor- Kleingrothaus (IOP 2003), pp. 409-415. 36. D.O. Caldwell, in Proc. 12th International Conference on ”Neutrino Physics and Astrophysics”, Sendai, Japan, June 3-8, 1986, eds. T. Kitagaki and H. Yuta, World Scientific, Singapore (1986) pp. 77-92. 37. D.O. Caldwell Int. J. Mod. Phys. A4 (1989) 1851-1869. 38. D.O. Caldwell in Proc. of 14th Europhysics Conference on Nuclear Physics: Rare Nuclear Decays and Fundamental Physics, Bratislava, Czechoslovakia, 22-26 October, 1990, ed P. Povinec, J. Phys. G17 (1991) Suppl. S137-S144. 39. I. Kirpichnikov, in Proc. of International Conference ”Underground Physics”, Baksan, Russia, August 1987, eds. E.N. Alekseev et.al.; A.A. Vasenko, I.V. Kirpichnikov et al., Mod. Phys. Lett. A5 (1990) pp. 1299-1306. 40. M.G. Inghram and J.H. Reynolds, Phys. Rev. 78 (1950) pp. 822-823.
41. N. Takaoka and G. Ogata, Z. Naturforsch A21 (1966) pp. 84-90.
42. T. Kirsten, W. Gentner and O. Schaeffer, Z. Phys. 202 (1967) pp. 273-292.
43. E. der Mateosian and M. Goldhaber, Phys. Rev. 146 (1966) pp. 810-815.
44. G. Belanger, F. Boudjema, D. London and H. Nadeau, Phys. Rev. D 53 (1996) 6292.
45. G. Belanger et al., Phys. Rev. D53 (1996) 6292 and in Proc. of Lepton-Baryon Int. Conf., April 1998, Trento, IOP, Bristol (1999), eds. H.V. Klapdor-Kleingrothaus and I.V. Krivosheina.
46. E. Nardi, E. Roulet and D. Tommasini, Phys. Lett. B344 (1995) pp. 225-232.
47. O. Panella et al., Phys. Rev. D 62 (2000) 015013.
48. M. Hirsch, H.V. Klapdor-Kleingrothaus and S.G. Kovalenko, Phys. Lett. B 398 (1997) 311 and 403 (1997) 291.
49. M. Hirsch, H.V. Klapdor-Kleingrothaus and S.G. Kovalenko, Phys. Rev. Lett. 75 (1995) 17.
50. M. Hirsch, H.V. Klapdor-Kleingrothaus and S. Kovalenko, Phys. Rev. D 53 (1996) 1329.
51. M. Hirsch, H.V. Klapdor-Kleingrothaus and S.G. Kovalenko, Phys. Lett. B 372 (1996) 181, Erratum: Phys. Lett. B381 (1996) 488.
52. H.V. Klapdor-Kleingrothaus and H. Päs, "Neutrinoless Double Beta Decay and New Physics in the Neutrino Sector", in Proc. COSMO 99: 3rd International Conference on Particle Physics and the Early Universe, Trieste, Italy, 27 Sept. - 3 Oct. 1999, eds. U. Cotti, R. Jeannerot, G. Senjanovic and A. Smirnov, Singapore, World Scientific (2000), and in H.V. Klapdor-Kleingrothaus (ed.): Sixty Years of Double Beta Decay, Singapore, World Scientific (2001) pp. 755-762, and hep-ph/0002109.
53. R.N. Mohapatra and G. Senjanović, Phys. Rev. Lett. 44 (1980) pp. 912-915.
54. R.N. Mohapatra, Phys. Rev. D34 (1986) pp. 909-910.
55. R.N. Mohapatra and A. Pérez-Lorenzana, hep-ph/9909389 (1999) pp. 1-10 and Phys. Lett. B468 (1999) pp. 195-200.
56. G. Bhattacharyya, H.V. Klapdor-Kleingrothaus, H. Päs and A. Pilaftsis, Phys. Rev. D67 (2003) 113001-1-17 and hep-ph/0212169.
57. M. Hirsch, H.V. Klapdor-Kleingrothaus and O. Panella, Phys. Lett. B374 (1996) pp. 7-12.
58. M. Hirsch and H.V. Klapdor-Kleingrothaus, in Proc. International Workshop "Double Beta Decay and Related Topics", Trento, Italy, April 24 - May 5, 1995, eds. H.V. Klapdor-Kleingrothaus and S. Stoica, Singapore: World Scientific (1996) pp. 175-191.
59. D. Choudhury and S. Raychaudhuri, Phys. Lett. B 401 (1997) 54-61 and preprint hep-ph/9702392.
60. G. Altarelli, J. Ellis, G.F. Guidice, S. Lola and M.L. Mangano, Nucl. Phys. B 506 (1997) 3-28 and preprint hep-ph/9703276.
61. M. Hirsch, H.V. Klapdor-Kleingrothaus and S. Kovalenko, in Proc. of BEYOND'97, Castle Ringberg, Germany, 8-14 June 1997, eds. H.V. Klapdor-Kleingrothaus and H. Päs, IOP Bristol (1998) 322-330; and Phys. Rev. D 54 (1996) R4207-R4210.
62. H.V. Klapdor-Kleingrothaus, in Proc. of BEYOND'97, Castle Ringberg, Germany, 8-14 June 1997, eds. H.V. Klapdor-Kleingrothaus and H. Päs, IOP Bristol (1998) 485-531, and Int. J. Mod. Phys. A13 (1998) 3953.
63. M. Hirsch, H.V. Klapdor-Kleingrothaus and S.G. Kovalenko, Phys. Rev. D 57 (1998) 1947.
64. G. Jungmann, M. Kamionkowski and K. Griest, Phys. Rep. 267 (1996) 195.
65. M. Hirsch, H.V. Klapdor-Kleingrothaus and S.G. Kovalenko, Phys. Lett. B 398 (1997) 311 and 403 (1997) 291.
66. M. Hirsch, H.V. Klapdor-Kleingrothaus, St. Kolb and S.G. Kovalenko, Phys. Rev.
D 57 (1998) 2020. 67. E. Takasugi ”Double Beta Decay Constraint on Composite Neutrinos”, in Proc. ”Beyond the Desert’97”: Accelerator and Nonaccelerator Approaches, eds. H.V. KlapdorKleingrothaus and H. P¨ as, ”Conference on Physics Beyond the Standard Model”, Tegernsee, Germany, 8-14 June, 1997, IOP (1998) p. 360. 68. M. Hirsch, H.V. Klapdor–Kleingrothaus and S.G. Kovalenko, Phys. Lett. B 378 (1996) 17 and Phys. Rev. D 54 (1996) R4207. 69. J.L. Hewett and T.G. Rizzo, preprint hep-ph/9703337v3 (May1997). 70. K.S. Babu et al., preprint hep-ph/9703299 (March 1997). 71. J. Kalinowski et al., preprint hep-ph/9703288v2 (March 1997). 72. A. Halprin, C.N. Leung and J. Pantalone, Phys. Rev. D 53 (1996) 5365. 73. H.V. Klapdor-Kleingrothaus, H. P¨ as and U. Sarkar Eur. Phys. J. A5 (1999) pp. 3-6 and hep-ph/9809396. 74. H.V. Klapdor-Kleingrothaus, in Proc. of Int. Conf. “Neutrino Telescopes”, Febr. 2005, Venice, Italy, ed. M. Baldo-Ceolin., p. 215, hep-ph/0512263. 75. A. Staudt, K. Muto, H.V. Klapdor-Kleingrothaus, Eur. Lett. 13 (1990) 31. 76. M. Hirsch, K. Muto, T. Oda and H.V. Klapdor-Kleingrothaus, Z. Phys. A 347 (1994) 151. 77. K. Muto and H.V. Klapdor ”Neutrinos”, In ”Graduate texts in contemporary physics”, ed. H.V. Klapdor Berlin, Germany: Springer (1988) pp. 183-238. 78. A. Staudt and H.V. Klapdor-Kleingrothaus Nucl. Phys. A549 (1992) pp. 254-264. 79. K. Muto, E. Bender and H.V. Klapdor Z. Phys. A334 (1989) pp. 177-186 and pp. 187-194. 80. [Arn05] R. Arnold et. al. (NEMO Collaboration) ”First Results of the Search of Neutrinoless Double Beta Decay With the NEMO 3 Detector”, Phys. Rev. Lett. 95 (2005) pp. 182302-1–4, hep-ex/0507083. 81. M. Moe in Proc. of the 16th Int. Conf. on ”Neutrino Physics and Astrophysics, NEUTRINO’94”, Eilat, Israel, 29 May-3 June, 1994, eds. A. Dar, G. Eilam and M. Gronau, Nucl. Phys. B38 Proc. Suppl. (1995) pp. 36-44. 82. M. Boulay, M. Chen, M. Di Marco et al., ”A Letter Experessing Interest in Starting an Experiment at SNOLAB Involving Filling SNO with Liquid Scintillator Plus Double Beta Decay Candidate Isotopes”, April 9, (20040), http : //snoplus.phy.queensu.ca/ 83. J. Vuilleumier for the EXO coll., Proc. idm2004, Edinburg, Scotland 2004, WS, Singapore (2005) 635. 84. A. Piepke, (Talk) at Heidelberg, ν Workshop, 2005. and in Proc. of XXImes Rencontres de Blois ”Windows on the Universe”, Chateau Royal de Blois, France, 21- 26 June, 2009, http://confs.obspm.fr/Blois2009/. 85. K.S. Babu, E. Ma and J.W.F. Valle, Phys. Lett. B552 (2003) 207-213 and hepph/0206292. 86. E. Ma and M. Raidal, Phys. Rev. Lett. 87 (2001) 011802; Erratum-ibid. 87 (2001) 159901 and hep-ph/0102255. 87. E. Ma in Proc. of Intern. Conf. BEYOND’02, Oulu, Finland, 2-7 Jun. 2002, IOP, Bristol, 2003, and BEYOND 2003, Ringberg Castle, Tegernsee, Germany, 9-14 Juni 2003, Springer, Heidelberg, Germany, 2004, ed. H.V. Klapdor-Kleingrothaus. 88. R.N. Mohapatra, M.K. Parida and G. Rajasekaran, (2003) hep-ph/0301234. 89. D. Fargion et al., in Proc. of DARK2000, Heidelberg, Germany, July 10-15, 2000, Ed. H.V. Klapdor-Kleingrothaus, Springer, (2001) 455-468 and in Proc. of Beyond the Desert 2002, BEYOND02, Oulu, Finland, June 2002, IOP 2003, and BEYOND03, Ringberg Castle, Tegernsee, Germany, 9-14 Juni 2003, Springer, Heidelberg, Germany, 2003, ed. H.V. Klapdor-Kleingrothaus.
90. Z. Fodor, S.D. Katz, A. Ringwald, Phys. Rev. Lett.88 (2002) 171101 and Z. Fodor et al., JHEP (2002) 0206:046 or hep-ph/0203198, and in Proc. of Intern. Conf. Beyond the Desert 02, BEYOND’02, Oulu, Finland, 2-7 Jun 2002, IOP, Bristol, 2003, ed. H V Klapdor-Kleingrothaus and hep-ph/0210123. 91. * H. V. Klapdor-Kleingrothaus and U. Sarkar, Mod. Phys. Letter. A18 (2003) 22432254. 92. M.N. Rebelo, Proc. of BEYOND’2003, Castle Ringberg, Germany, July 2003, ed. H.V. Klapdor-Kleingrothaus, Springer, Heidelberg (2004) 267. 93. K.S. Babu, E. Ma and J.W.F. Valle, Phys. Lett. B 552 (2003) 207-213. 94. M. Hirsch, J. C. Romao, S. Skadhauge, J. W. F. Valle and A. Villanova del Moral Phys. Rev. D69 (2004) 093006. 95. A. Blanchard, M. Douspis, M. Rowan-Robinson and S. Sarkar, astro-ph/0304237. 96. A. Faessler et al., Phys. Rev. D79 (2009) 053001 and hep-ph/ 0810.5733, March, 2009. 97. J. Engel and P. Vogel Phys. Rev. C69 (2004) 034304 and nucl-th/0311072. 98. T. Tomoda and A. Faessler Phys. Lett. B191 (1987) pp. 475-481. 99. S. Stoica and H.V. Klapdor-Kleingrothaus Eur. Phys. J. A9 (2000) pp. 345-352, and nucl-th/0010106. 100. S. Stoica and H.V. Klapdor-Kleingrothaus Phys. Rev. C63 (2001) pp. 064304-1–6. 101. F. Simkovic et al., Phys. Rev. C79 (2009) pp. 055501. 102. A. Faessler, in Proc. of Neutrino Oscillation Worksh., Otranto, Italy, 2004, eds. G.L. Fogli et al., Nucl. Phys. Proc. Suppl. 145 (2005) pp. 213-218. 103. * H.V. Klapdor-Kleingrothaus, A. Dietz and I.V. Krivosheina Phys. Rev. D70 (2004) pp. 078301-1–4, hep-ph/0403056. 104. C. Arnaboldi et al. (CUORICINO Collaboration, Phys. Rev. C78 (2008) 035502, hep-ex 0802.3439. 105. R. Hofmann, hep-ph/0401017 v.1. 106. T. Goldman, G.J. Stephenson Jr., P.M. Alsing and B.H.J. McKellar, in Proc. of DARK2009, Christchurch, New Zealand, January 15-23, 2009, World Scientific (2009), eds. H.V. Klapdor-Kleingrothaus and I.V. Krivosheina, pp. 180-193. 107. G. Karagiorgi et al., hep-ph/0906.1997v1; C. Athanassopoulos et al. (LSND Collaboration), Phys. Rev. Lett. 77 (1996) 3082 and nucl-ex/9605003; C. Athanassopoulos et al. (LSND Collaboration), Phys. Rev. C 58 (1998) 2489 and nuclex/9706006; A. Aguilar et al. (LSND Collaboration), Phys. Rev. D 64 (2001) 112007 hep-ex/0104049. 108. H.V. Klapdor-Kleingrothaus, I.V. Krivosheina and I.V. Titkova, to be published 2011. 109. I.V. Kirpichnikov, preprint hep-ph/1006.2025v1. 110. A. Giuliani, in these Proceedings of BEYOND 2010. 111. G.J. Stephenson Jr., P.M. Alsing, T. Goldman et al. in these Proceedings. 112. M. Losada, in these Proceedings. 113. H. Kiel et al., Nucl. Phys.A 723 (2003) 499-514.
LUCIFER: AN EXPERIMENTAL BREAKTHROUGH IN THE SEARCH FOR NEUTRINOLESS DOUBLE BETA DECAY

I. DAFINEI and F. FERRONI
Dipartimento di Fisica dell'Università di Roma La Sapienza e Sezione di Roma dell'INFN, Piazzale Aldo Moro 5, Roma, I-00185, Italy

A. GIULIANI*
Dipartimento di Fisica e Matematica dell'Università dell'Insubria e Sezione di Milano-Bicocca dell'INFN, Via Valleggio 11, Como, I-22100, Italy
*E-mail: [email protected]

S. PIRRO and E. PREVITALI
Dipartimento di Fisica dell'Università di Milano-Bicocca e Sezione di Milano-Bicocca dell'INFN, Piazza della Scienza 3, Milano, I-20126, Italy

LUCIFER (Low-background Underground Cryogenic Installation For Elusive Rates) is a new project for the study of neutrinoless double beta decay, based on the technology of scintillating bolometers. These devices promise a very efficient rejection of the α background, opening the way to a virtually background-free experiment if candidates with a transition energy higher than 2615 keV are investigated. The baseline candidate for LUCIFER is 82Se. This isotope will be embedded in ZnSe crystals grown from enriched selenium and operated as scintillating bolometers in a low-radioactivity underground dilution refrigerator. In this paper, the LUCIFER concept is introduced and the sensitivity and the prospects of this project are discussed.

Keywords: Neutrino Mass; Double Beta Decay; Scintillating Bolometers.
1. Introduction

Neutrinoless double beta decay (0νββ)1 is a very rare nuclear transition which violates total lepton number conservation by two units. The observation of this process would unquestionably prove that neutrinos are self-conjugate particles, i.e. Majorana fermions. In addition, if the transition proceeds through the so-called mass mechanism, it enables the determination of the neutrino mass scale and of the ordering of the mass values of the three neutrino mass eigenstates.
Low-temperature calorimeters, or bolometers, were proposed more than 20 years ago as sensitive devices for the study of 0νββ.2 Currently, experiments based on bolometers, like Cuoricino3 (now closed) and CUORE4 (in preparation), are among the most sensitive in the world for the study of this phenomenon and among the most promising for next-generation searches. In this approach, the detector is made of an array of single dielectric diamagnetic crystals cooled down below 20 mK. The crystals contain the isotope under study in their basic chemical formula. This makes it possible to achieve an efficiency close to 1 for the 0νββ signal, combined with the very high energy resolution (a fraction of a percent) characteristic of the bolometric technique. In a sentence, the bolometric approach represents the generalization to a multi-isotope search of the classical Ge diode technology, which enables the investigation of the isotope 76Ge only.

Bolometer-based 0νββ searches require extremely low levels of background, especially that arising from radioactive contaminants in the bolometers themselves and in the surrounding materials. Surface contamination is of particular concern. In fact, α's arising from radioactive impurities located on the surfaces of the detector, or of passive elements facing it, can lose part of their energy in a few microns and deposit in the detector an energy close to that of the signal, thus mimicking a 0νββ event. The experience of Cuoricino3 shows clearly that energy-degraded α's, emitted by surface radioactive contamination, populate the spectral region between 2.5 and 4 MeV with a dangerous continuum at the level of 0.1 counts/keV/kg/y, which can hardly be reduced, even by a factor of ∼5, with surface cleaning techniques. Therefore, the ability to tag α particles is a formidable asset in the search for 0νββ. This improvement would be particularly effective if the investigated isotope presented a transition energy higher than the end point of the bulk of the natural γ radioactivity, i.e. 2615 keV. In this case, the simultaneous suppression of the γ background (thanks to the location of the transition energy) and of the α background (thanks to the identification of these particles) would provide a virtually zero-background experiment. In practice, a specific background index better than 1 count/y/ton looks feasible if both conditions are met.
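To see what "virtually background-free" means in counts, the sketch below (my illustration; the region-of-interest width and detector mass are assumed values, not LUCIFER specifications) converts a background index into expected counts per year.

```python
# Hedged illustration: counts expected in the 0nbb region of interest (ROI)
# for a given background index b [counts/keV/kg/y]. ROI width and mass are assumptions.
def roi_counts_per_year(b_index, roi_kev, mass_kg):
    return b_index * roi_kev * mass_kg

roi_kev = 10.0     # assumed ROI width in keV (a few times the energy resolution)
mass_kg = 25.0     # assumed detector mass, of the size discussed for LUCIFER

for b in (0.1, 1e-3):            # Cuoricino-like alpha continuum vs. alpha-rejected level
    n = roi_counts_per_year(b, roi_kev, mass_kg)
    print(f"b = {b:g} counts/keV/kg/y  ->  {n:g} counts/y in the ROI")
# Dropping from ~0.1 to ~1e-3 counts/keV/kg/y turns ~25 counts/y into ~0.25 counts/y,
# i.e. close to the background-free regime discussed above (the exact number depends
# on the assumed ROI width and background index).
```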
2. Scintillating Bolometers and Double Beta Decay When the energy absorber in a bolometer is an efficient scintillator at low temperatures, a small but significant fraction of the deposited energy (up to a few %) is converted into scintillation photons, while the remaining dominant part is detected as usual in the form of heat. The simultaneous detection of the scintillation light and heat is a very powerful tool to identify the nature of the interacting particle, since the energy partition between phonons and photons is different for different types of quanta. In particular, a nuclear recoil can be distinguished from an electron recoil (much lower light yield), and an α particle from an electron or γ (different, not always lower, light yield). The ratio of the light yield for an α particle or nuclear recoil to that of an electron or γ is defined quenching factor (QF).
November 22, 2010
18:7
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.02˙Dafinei
258
The most obvious and effective method to detect scintillation photons in a very low temperature environment is to develop a dedicated bolometer, in the form of a thin slab, opaque to the emitted light and provided with its own phonon sensor. This auxiliary bolometer, normally a Si or Ge slab, is placed very close to a flat optically polished side of the main scintillating bolometer. The whole bolometric set-up can be surrounded by a reflecting foil in order to maximise the light collection. More scintillating crystals can be read out by the same light detector. The very low threshold achievable with bolometers (tens of eV) makes them suitable devices for few-photon counting. The scheme reported in Fig. 1 illustrates the concept of the detector (left) and of the discrimination method (right).
Fig. 1. Left: schematic structure of a double read-out scintillating bolometer. All the basic elements of the detector are shown. Right: schematic scatter plots of light signal amplitudes vs. heat signal amplitudes for events occurring in the scintillating bolometer. Cases with QF> 1 and QF< 1 are illustrated. In both circumstances, α events can be efficiently rejected and the 0νββ signal region, supposed above 2615 keV, is background free.
Nature has kindly provided us with a few 0νββ candidates presenting a transition energy higher than 2615 keV and forming chemical compounds suitable for the growth of large scintillating crystals, which proved to work as highly performing bolometers as well. A scintillating bolometer for 0νββ is no new concept in the field and was proposed more than one decade ago for 48 Ca with CaF2 crystals.5,6 We have today a long list of attractive possibilities:7 CdWO4 , CdMoO4 (for 116 Cd); PbMoO4 , CaMoO4 , SrMoO4 , ZnMoO4 (for 100 Mo); CaF2 , CaMoO4 (for 48 Ca); and last but not least ZnSe (for 82 Se). One of the most striking features of ZnSe is the abnormal low-temperature QF, higher than 1 unlike all the other studied compounds (see the lower plot in
November 22, 2010
18:7
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.02˙Dafinei
259
the right part of Fig. 1). Although not really welcome, this unexpected property does not degrade substantially the discrimination power of this material compared to the others and makes it compatible with the requirement of a high sensitivity experiment. An additional very useful feature is the possibility to perform α/β discrimination on the basis of the temporal structure of the signals, both in the heat and light channel. A preliminary analysis suggests an α-rejection efficiency better than 99% on a very conservative basis. The already demonstrated α background level above 2615 keV, of the order of 0.05-0.1 counts/keV/kg/y and achieved by mere cleaning techniques in Cuoricino and CUORE R&D, can then be reduced down to 10−3 counts/keV/kg/y and below. 3. LUCIFER: Structure and Sensitivity The sensitivity study of a future 0νββ experiment with scintillating bolometers can be performed assuming to exploit fully the experimental volume of the existing cryostat used for Cuoricino. This is a preliminary assumption. The space occupation of LUCIFER is not very large, and other existing or designed underground dilution fridges could house it. The total sensitive mass of LUCIFER has been estimated considering the largest crystals of ZnSe that can be grown with the present technology, which have a cylindrical shape with 5 cm in height and 5 cm in diameter. This is currently the baseline single-crystal size for LUCIFER. A preliminary version of the LUCIFER structure consists of an array of 48 crystals fitting exactly the experimental volume of the Cuoricino cryostat. The total active volume would be V = Ncryst Vcryst = 48 × 98 cm3 ∼ 4700 cm3 . The total detector mass would be 25 kg, with about 14 kg of enriched material assuming an enrichment level as high as 97%. The feasibility and the cost of the enrichment procedure are under control thanks to the investigation performed in the framework of ILIAS.a When applying different materials and nuclides to this scheme and considering all the relevant elements (scientific, technical, economical), the final balance is clearly in favour of 82 Se embedded in ZnSe crystals. The LUCIFER elementary module currently under study would contain five bolometers: • Four ZnSe cylindrical crystals (h=5 cm, Φ = 5 cm) will be arranged in each elementary module. One neutron transmutation doped (NTD) Ge thermistor, acting as temperature sensor, will be glued on each crystal. • The elementary-module holder is designed so as to house in its upper part a single photon absorber placed above the ZnSe crystals, consisting of an ultrapure Ge disk-shaped slab with a sub-millimiter thickness. Its diameter should be very large, at least 10 cm, but the light-detector exact geometry is not defined yet. Again, an NTD Ge thermistor will be glued on the Ge disk. a ILIAS
(Integrated Large Infrastructures for Astroparticle Science) was a European Project promoting Astroparticle Physics active between 2004 and 2009, http://www-ilias.cea.fr/
November 22, 2010
18:7
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.02˙Dafinei
260
This configuration allows the light detector to collect light from the four crystals underneath, and to offer its upper face to the adjacent elementary module with its four ZnSe crystals. These modules will be stacked so as to form a tower. The sequence will start from below with an isolated light detector. Then, 12 elementary modules will be assembled one above the other. The above structure assumes that a single light detector is sensitive enough to perform efficiently the α/β discrimination. So big light detectors have not been realized yet in the R&D activity which has preceeded LUCIFER. In case one should encounter problems in fabricating those large devices, each crystal will be coupled individually to a small light detector with 5 cm diameter, similar to existing devices. This alternative is fully viable and implies only a larger number of read-out channels, anyway available in the Cuoricino refrigerator. A preliminary evaluation of the LUCIFER sensitivity can be made on the basis of the structure discussed above and of the background expectations after α/β rejection. Assuming 5 year live time, a conservative energy window of 20 keV and a specific background coefficient of 10−4 counts/keV/kg/y, less than 1 background count is expected in the region of interest (the transition energy for 82 Se is 2995 keV). This corresponds to a sensitivity to the Majorana neutrino mass of the order of 100 meV, enough to scrutinize the 76 Ge claim8,9 with another nuclide. Apart from being a sensitive experiment per se, LUCIFER can be considered a demonstrator of the scintillating bolometer technology, with a significant mass and a full test of all the critical elements of this approach (large size crystals, large scale enrichment, final radiopurity of the detectors, background rejection investigated in many modules simultaneously operated). Acknowledgments The project LUCIFER has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement n. 247115. References 1. 2. 3. 4. 5. 6. 7. 8.
F. T. Avignone, S. R. Elliot and J. Engel, Rev. Mod. Phys. 80, 481 (2008). E. Fiorini, T. O. Niinikoski, Nucl. Instrum. Methods Phys. Res. A 224, 83 (1984). C. Arnaboldi et al (the Cuoricino collaboration), Phys. Rev. C 78, 035502 (2008). C. Arnaboldi et al. (the CUORE collaboration), Astropart. Phys. 20, 91 (2003). A. Giuliani and S. Sanguinetti, Mater. Sci. Eng. R-Rep. 11, 1 (1993). A. Alessandrello et al., Phys. Lett. B 420, 109 (1998). S. Pirro et al., Phys. Atom. Nucl. 69, 2109 (2006). H. V. Klapdor-Kleingrothaus, I. V. Krivosheina, A. Dietz and O. Chkvorets, Phys. Lett. B 586, 198 (2004). 9. H. V.Klapdor-Kleingrothaus and I. V. Krivosheina, Mod. Phys. Lett. A 21, 1547 (2006).
November 26, 2010
18:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.03˙Osipowicz
261
KATRIN THE KARLSRUHE TRITIUM NEUTRINO PROJECT A. OSIPOWICZ∗ (for the KATRIN collaboration) University of Appl. Sciences Fulda, Germany ∗ E-mail:
[email protected] http://www.hs-fulda.de KATRIN is a direct neutrino mass experiment with sub-eV sensitivity for neutrinomasses from tritium β decay. It is currently set up at Karlsruhe Institute of Technology (KIT), Germany, and combines a high luminosity windowless molecular tritium gas source with a high resolution electrostatic retarding spectrometer (MAC-E Filter) to investigate the β decay spectrum near the endpoint E0 with very high precision. It will improve the neutrino mass sensitivity by one order of magnitude down to 0.2 eV, sufficient to cover the degenerate neutrino mass scenarios and the cosmologically relevant neutrino mass range. Keywords: Neutrino mass, Tritium β decay, MAC-E-filter
1. Introduction Neutrinos as non baryonic hot dark matter could play an important role in the evolution of large scale structures (LSS). The dependence of structure formation on the neutrino mass is used to derive limits on the neutrino mass. Cosmic microwave background radiation observations (CMB) and galaxy surveys analysis allow for an conservatively derived value for the upper bound on the sum of neutrino masses1 of P mν,tot ≤ 0.6 − 0.7 eV/c2 . However one must take into account that these results are derived on basis of cosmological models and that neutrino masses are correlated to cosmological parameters (e.g. Hubble constant). Neutrino oscillation experiments have established compelling evidence for massive neutrinos explained in terms of neutrino flavour ocsillations. The Standard Model (SM) of particle physics does not provide nonzero neutrino mass and offers no explanation for fermion mass patterns and fermion generation mixing.2 Hence the evidence for neutrino masses and mixing is a strong indication for physics beyond the Standard Modell. However, as oscillation experiments are sensitive to differences of mass squares ∆m2ij = m2 (νi ) − m2 (νj ) , a dominant task in experimental neutrino physics is the determination of absolute neutrino mass scale relevant for cosmology and particle physics.
November 26, 2010
18:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.03˙Osipowicz
262
There are two complementary laboratory approaches to probe directly on the neutrino mass pattern. The search for neutrinoless double β decay (0νββ), which in case of a nonzero Majorana neutrino mass would measure an effective neutrino mass. 2 X ˜ iφj mee = m(νj ) U e ej j
(1)
˜ej and the corresponding phase factors eiφj connect the The mixing matrix U mass eigenstates υ1 , υ2 and υ3 and the flavour eigensates νe , νµ and ντ . Currently a value of mee is published by the Heidelberg-Moscow-experiment: mee = (0.2−0.6) eV (99.73 % C.L.)3–6 High precision spectroscopy of β decay spectra at the energy endpoint E0 of 3 H and 187 Re giving an ”average electron neutrino mass” m(νe )2 =
X U 2 m(νi )2 ei i
(2)
For the last 40 years the prominent canditate was tritium because of a) its relatively short lifetime (t1/2 = 12.3 y) resulting in a high activity and high count rate, b) its low energy end point E0 = 18.6 KeV (see Fig. 1), c) the superallowed transition with an energy independent nuclear matrix element and d) the relative simple atomic shell structure of the daughter nuclei that allowes to calculate the final state spectrum.
Fig. 1. Left.: Display of the tritium β-spectrum. Right: the area 3 eV below the energy endpoint E0 at 18.6 KeV showing the spectral shapes for a zero and a ν-mass off 1 eV. Only 2 × 10 −13 off all events fall in the sensitve area displayed in grey.
The latest generation of tritium experiments have been performed at Mainz10 and Troitzk.11 Both experiments have reached their theoretical sensitivity limits 2 and and give an upper limit of m(¯ νe ) ≤ 2.3 eV/c (95% C.L.). See Refs.7,8
November 26, 2010
18:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.03˙Osipowicz
263
2. The KATRIN Experiment The KArlsruhe TRItium Neutrino experiment (KATRIN) (see Fig. 2) is set up at the Forschungszentrum Karlsruhe (KIT), Germany, in close proximity to the Tritium Laboratory Karlsruhe (TLK).9 It is methodologically similar to the Mainz10 and Troitsk11 experiments, using a magnetic transport field that connects the source and detector in combination with an integrating electrostatic energy filter (MAC-Efilter). Inside the magnetic flux tube (Φ = 191 T cm2 ) produced by super-conducting solenoids β-particles from the Windowless Gaseous Tritium Source (WGTS) are adiabatically guided through a Differential Pumping Section (DPS), a Cryostatic Pumping Section(CPS) to 2-stage electrostatic MAG-E-filter. Along the source and transport sections the magnetic field varies between 3,5 T -5.6 T to maintain small flux cross diameters between RΦ = 75 mm - 90 mm. The magnetic field gradients in pre - and main-spectrometer adiabatically convert cyclotron energy Ecy into energy parallel to the magnetic field lines Ep . At the minimal magnetic field at the center of the main-spectrometer (the analysing field BA and RΦ = 4.5 m), a retarding electric field distribution allowes an integral energy analysis of Ep . The transmitted β-particles are subsequently focussed onto a PIN-diode detector. Large coil systems are arranged around the main-spectrometer for earth magentic field compensation (EMCS) and fine tuning (LFCS) of the magnetic transport flux.
Fig. 2. Schematic view of the KATRIN experiment (total lenght 70 m) consisting of calibration and monitor rear system, windowless gaseous T2 -source (WGTS), differential pumping (DPS) and cryo-trapping section (SPS), small pre-spectrometer and large main spectrometer with the large magentic coil system (EMCS & LFCS) and segmented PIN-diode detector.
3. The WGTS Source and transport section Ultra-cold molecular tritium gas (T = 27 K ± 30 mK, Bsource = 3.5 T) will be injected into the WGTS continuously through a set of capillaries at the center of the 10 m long beam-tube with an injection pressure of pin = 3.4×10−3 mbar leading to an T2 inventory of 40 g/day. Strong tubro-molecular pumps attached at lateral pump ports on both end of the WGTS produce an almost linear drop in T2 -density profile, that has to be kept stable at the 10−3 level. Tritium diffusion from the WGTS to the spectrometer has to be avoided to
November 26, 2010
18:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.03˙Osipowicz
264
achive a background rate generated by to decays within the main spectrometer to Rb ≤ 10−3 counts/s, which limits the amount of tritium allowed in the main spectrometer to a partial pressure of about 10−20 mbar. The large suppression factor will be achieved in a combination of a differential (DPS) and a cryogenic (CPS) pumping section, with each stage reducing the tritium flow by 7 orders of magnitude. The tritium recovered by the DPS12 system will after purification reeneter the WGTS through a closed tritium cycle. The T2 purity will be continously checked by a laser Raman spectroscopy. The beam-tube of the CPS will be kept at a temperature of 4.5 K. At this temperature tritium molecules are passively adsorbed on the wall. To enhance the trapping probability, the cold surfaces of the CPS beam-tubes will be covered by a thin layer of argon frost, that for reasons of reformation will be heated periodically. The adsorption properties of condensed argon for tritium, have been demonstrated in a seperate experiment TRAP a , set up by the KATRIN collaboration.15,16 4. The Spectrometers The magnetic guiding field inside the main spectrometers is provided by high field solenoids at the entrance and exit and large low field coils placed at the main spectrometer region. The minmal magnetic field at the anlysing plane is set to BA ≈ 3 − 6 G resulting in a very high magnetic resolution ∆E = 0.93 eV
Fig. 3. Schematic view of the tandem spectrometer arrangement with 190 Tcm 2 magnetic flux tube. The pre-spectrometer with fixed rejection potential and ∆E ≈ 70 eV. Main-spectrometer on variable analysing potential U0 = −18.5 − −18.6 kV.
The solid inner surfaces of the spectrometers are working as electrodes that have been shaped so that after connecting the spectrometers to a voltage an electrostatic potential distribution is produced inside that allowes an energy selection. In both spectrometers an inner electrode system, made of more than 28000 very thin stain-less steel wires close to the inner surface allow a fine tuning of the poa TRitium
Argon frost Pump
November 26, 2010
18:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.03˙Osipowicz
265
tential ditribution. The pre-spectrometer with a electrostatic blocking potential U B at some 10 V below the analysing potential U0 will reject the major part of the β -spectrum. Thus the small rate of β -particles entering the main-spectrometer (103 β/s) will result in a small background gas ionisation rate. The analysing potential voltage U0 at the main -spectrometer will be continously stabilised with high precision voltage dividers. The spectrometer used in the Mainz experiment will, after modfication, function as a monitor sepctrometer using 83m Kr conversion electron sources.
Fig. 4. On its way from the Rhine river to the Forschungszentrum Karlsruhe the KATRIN main spectrometer vessel (200 t)passes the village of Leopoldshafen on November 25. 2006 (photo:FZ Karlsruhe).
5. The Detector Electrons with an energy above U0 are focussed onto a monotithic 148 pixel PIN Diode array 100 mm diameter. Active and passive shielding will be used to keep the intrinsic detector background below 1 mHz. 6. Outlook KATRIN provides so far the most sensitive direct method for the investigation of the neutrino mass pattern and is complementary to the search for the neutrinoless double β decay and to the information from astrophysics and cosmological observations. Compared to the Mainz and Troitzk experiments KATRIN will reduce the statistical and systematical uncertainties by using a much stronger T2 source resulting in increase of a factor 100 in count rate, a larger spectrometer with a 100 times larger cross section and a 5 times better resolution. An actice and passive shielding, low activity materials, control and fine shaping of fields will help to reduse the background.
November 26, 2010
18:42
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.03˙Osipowicz
266
Fig. 5. Results of simulations of statistical neutrino mass squared uncertainty expected at KATRIN after 3 years of running, calculated in dependence on the fit interval under following conditions. Dark squares: for an initially proposed spectrometer (diameter = 7 m). All others: the final 10 m design . Dark circles: for a background = 10−2 Hz, equidistand measuring intervalls. Dark triangles: background = 10−2 Hz and optimized measuring point distribution. Open squares background = 10−3 Hz and optimized measuring point ditribution (reprinted from ref.9 ).
In combination with numerous control and monitoring efforts this will result in 2 an increase in sensitivity (see Fig. 5) of m(¯ νe ) = 0.2 eV/c on the neutrino mass. References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16.
Steen Hannestad, 2010 arXiv:1003.4119v1 G.L. Fogli et al., Prog. Part. Nucl. Phys. 57(2006)742 Hans Volker Klapdor-Kleingrothaus, hep-ph/0512263 H.V. Klapdor-Kleingrothaus et al., Phys. Lett. B 586 (2004)198-212 H.V. Klapdor-Kleingrothaus et al.. Nucl. Instr. Meth. A 522 (2004)371-406 H.V. Klapdor-Kleingrothaus and I.V. Krivosheina, Mod. Phys. Lett. A 21 (2006) 1547-1566 C. Kraus et al., hep-ex/0412056, Eur. Phys. J. C 40 (2005) 447 V.M. Lobashev et al.Phys. Lett. B 460 (1999) 227 A. Osipowicz et al. (KATRIN Collaboration), arXiv:hep-ex/0109033; T. Thmmler et al.(KATRIN collab.): FZKA report 6752. A. Picard et al., Nucl. Instr. Meth. B 63 (1992) 345 V.M. Lobashev, P.E. Spivac, Phys. Lett. B 460 (1985) 3305 X. Luo et al. Vacuum, 80 (2006) 864869. O. B. Malyshev: J. Vac. Sci. Technol. A, 26(1) (2008) 68 X. Luo and Ch. Day: J. Vac. Sci. Technol. A, 26(5) (2008) 1319. O. Kazachenko et al.: Nucl. Instrum. Methods A, 587 (2008) 136. F. Eichelhardt et al.: Fusion Sci. and Technol., 54 (2008) 615.
November 22, 2010
18:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.04˙Suhonen
267
NEUTRINOLESS DOUBLE EC AND RARE BETA DECAYS AS TOOLS TO SEARCH FOR THE NEUTRINO MASS J. SUHONEN∗ and M. T. MUSTONEN Department of Physics, University of Jyv¨ askyl¨ a, P.O. Box 35 (YFL), FI-40014 University of Jyv¨ askyl¨ a, Finland ∗ E-mail:
[email protected] www.jyu.fi/fysiikka Neutrinoless double electron capture (0νECEC) of atomic nuclei has recently attracted a lot of attention due to its potential in accessing the absolute mass scale of the neutrino. In particular, the resonant 0νECEC is interesting based on the possible huge enhancement of the corresponding decay rate by a resonance condition. Recently the mass differences of two atom pairs were measured in order to study the enhancement of the 0νECEC rates of 74 Se and 112 Sn. We have evaluated the associated nuclear matrix elements by using the proton-neutron quasiparticle random-phase approximation with realistic two-body interactions. The absolute mass scale of the neutrino can also be accessed through beta decays of small decay energy. Related to this we have also studied the recently discovered rare ultra-low-Q-value beta-decay branch of 115 In by using microscopic phonon-quasiparticle coupling schemes. Our calculations suggest that the effects of atomic origin may introduce non-negligible, even dramatic effects on this and other decays with a Q value in this extreme regime of only hundreds of eV. Keywords: Neutrinoless Double Electron Capture; Forbidden Beta Decays; Neutrino mass; Atomic Effects on Beta Decays.
1. Introduction 1.1. Resonant neutrinoless double electron capture Detailed information on the differences of the squared neutrino masses and the elements of the neutrino mixing matrix has increased very fast thanks to present-day high precision neutrino-oscillation experiments. To access the fundamental nature and the absolute mass scale of the neutrino one needs, however, to engage the atomic nuclei through their rare decays, e.g. the neutrinoless double beta (0νββ) decay and (single) beta decays. The existence of the 0νββ decay would mean that the neutrino is a so-called Majorana particle, i.e. it is its own antiparticle whereas for single beta decays this is not a prerequisite and they can proceed even if the neutrino is a Dirac particle. A massive (Majorana) neutrino is an indication of physics beyond the standard model. The ongoing and future large-scale 0νββ experiments have the potential to discover this decay mode at least in the cases of degenerate and inverted neutrino-mass hierarchies.1 However, to extract information on the neutrino mass
November 22, 2010
18:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.04˙Suhonen
268
from the data one needs to know the values of the involved 0νββ nuclear matrix elements.2 The search for the 0νββ decay is mostly concentrated on the 0νβ − β − decays due to their favorable Q values. Instead, the positron-emitting modes of 0νββ decay are hard to detect owing to their small decay Q values.2 A special case of the positronemitting modes, the neutrinoless double electron capture, 0νECEC, can only be realized as a resonant decay3 or a radiative process with or without a resonance condition.4 The 0νECEC decay with a resonance condition has attracted a lot of attention recently.5–9 The resonance condition – close degeneracy of the initial atomic state and the final (excited) atomic state – can enhance the decay rate by a factor as large as 106 . Schematically, the 0νECEC proceeds as e− + e− + (Z, N ) → (Z − 2, N + 2)∗ → (Z − 2, N + 2) + γ + 2X,
(1)
where the capture of two atomic electrons leaves the final nucleus in an excited state that decays by one or more gamma-rays and the atomic vacancies are filled by outer electrons with emission of X-rays. A graphical representation of this is given in Fig. 1. Final nucleus (Z − 2, N + 2)∗ n (Z − 2, N ) n H
H0
νe = ν¯e
e− bound
p
(Z − 2, N )
p
e− bound
Initial nucleus (Z, N ) Fig. 1. Schematic view of the resonant double-electron-capture decay. A Majorana neutrino propagates between the two electron-capture vertices.
The daughter state (Z − 2, N + 2)∗ is a virtual state with energy E = E ∗ + EH + EH 0 ,
(2)
including the nuclear excitation energy and the binding energies of the two captured electrons, leaving two holes H and H’ behind. For the half-life of the parent atom
November 22, 2010
18:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.04˙Suhonen
269
the resonant condition can be written as ln 2 g ECEC [M ECEC ]2 hmν i2 = Γ, T1/2 D2 + Γ2 /4
(3)
where g ECEC contains the leptonic phase space and atomic physics, and hmν i is the effective neutrino mass that is a linear combination of the neutrino-mass eigenstates weighted by the Majorana CP phases and elements of the neutrino mixing matrix.2 The quantity M ECEC is the nuclear matrix element that is computed in this work by the use of the proton-neutron quasiparticle random-phase approximation (pnQRPA) with realistic two-body forces. Such calculations have previously been extensively performed in the case of the other modes of neutrinoless double beta decay (see, e.g.1,2,10–13 ). The quantity Γ denotes the combined nuclear and atomic radiative widths (between few and few tens of electron volts14 ) and D = |Q − E| is the socalled degeneracy parameter containing the energy (2) of the virtual final state and the difference between the initial and final atomic masses, i.e. the Q value. 1.2. Beta decays with low Q values The beta decays with low Q values offer an experimental tool for probing the neutrino mass. This is currently being realized in two experiments, the KATRIN experiment using tritium with the Q value of 18.6 keV and the MARE experiment using 187 Re with the Q value of 2.47 keV.
Fig. 2.
Decay schemes of
115 In
and
115m In,
showing the decay to the first excited state of
115 Sn.
From the most recent mass evaluation,15 the energy of the ground-state-toground-state β − decay of 115 In is 499(4) keV. Given that the energy of the first excited state in 115 Sn is 497.334(22) keV,16 the available β − -decay energy to the excited state 115 Sn(3/2+ ) is only 1.7(40) keV. It was thus uncertain whether the β − decay 115 In(9/2+ ) → 115 Sn(3/2+ ) was energetically possible. However, Cattadori et
November 22, 2010
18:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.04˙Suhonen
270
al.17 were first to detect this rare decay in 2005 by measuring the 497.334(22) keV β − from a 929 g pure indium rod using an ultra low-background β − spectrometer in the Gran Sasso National Laboratory. This tiny second-forbidden unique beta decay channel of 115 In (see Fig. 2) was recently confirmed to break the record of the lowest known beta-decay Q value by an order of magnitude in two independent Penningtrap measurements. The first experiment,18 a combined effort of the JYFLTRAP group in Finland and the HADES underground laboratory in Belgium, confirmed the existence of the decay channel and established the fact that this decay indeed has the lowest beta-decay Q value ever observed, 0.35(17) keV. The second experiment,19 carried out in the Florida State University, further refined the Q value to 0.155(24) keV. 2. Results 2.1. Resonant 0νECEC decays To study how well the resonance condition (3) of the 0νECEC decays is fulfilled accurate knowledge of the involved decay Q value is needed. The Q value can be measured accurately by the Penning ion trap techniques, as was done for the decays 74 112 20,21 Se → 74 Ge(2+ Sn → 112 Cd(0+ by using the JYFLTRAP Penning 2 ) and 4 ) in trap. In Ref.20 the degeneracy parameter D for the decay of 112 Sn was determined (see Fig. 3 for the involved energetics). The result D = −4.5 keV for KK capture was derived as the most favorable one. The nuclear matrix element has a rather favourable value but since the degeneracy condition is far from being filled the decay half-life becomes quite high, e.g. T1/2 ≈
5.9 × 1029 years , (hmν i[eV])2
(4)
where the effective neutrino mass is to be given in units of eV. This result indicates that experimental sensitivities of T1/2 > 1030 years are required to detect the 112 Sn decay. This is impossible in the foreseeable future. The 74 Se decay is depicted in Fig. 4. In this case the nuclear matrix elements turns out to be tiny21 owing its existence to the recoil corrections of the weak hadronic current. This small value of the nuclear matrix element greatly reduces the transition probability (3). In addition, the JYFLTRAP measured Q value yields a rather unfavourable match of the energy (2) such that the smallest value of the degeneracy parameter |Q − E|min = 2.4 keV is obtained for the combined L2 (2p1/2 ) and L3 (2p3/2 ) captures. This in turn leads to a further suppression since now the electrons are captured from the p orbitals. Combining the phase space and the nuclear matrix element with the degeneracy mismatch yields the half-life estimate T1/2 ≈
5 × 1043 years , (hmν i[eV])2
(5)
November 22, 2010
18:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.04˙Suhonen
271
1+ 1 XK
112 49 In63
XK
Qβ − = 658 keV 0+ 4
1924.3 keV
0+ 1 112 50 Sn62
0νECEC
Q = 1919.82(16) keV
2+ 1
617.516 keV
0+ 1 112 48 Cd64
Fig. 3.
Resonant double-electron-capture decay of
112 Sn
with emission of two K-X rays.
where the effective neutrino mass has to be inserted in unit if eV. It is clearly seen that the non-radiative resonance decay of 74 Se is impossible to measure. In fact, our theoretical estimates suggest that in general the resonant 0νECEC decays to 2+ states become unobservable due to the suppression coming from the unavoidable p-wave capture and the tiny nuclear matrix element. 2.2. Ultra-low-Q-value beta decays As discussed in the introduction the unique forbidden beta decays with low Q values offer a chance to access the tiny neutrino mass. From the theoretical point of view, for unique decays the expression for the half-life T1/2 separates cleanly to distinct nuclear matrix-element and lepton phase-space parts:22 T1/2 =
1 , M 2 f2 (Zf , W0 , R)
(6)
where M is the nuclear matrix element containing the details of nuclear structure and f2 (Zf , W0 , R) is the phase-space integral depending only on the charge of the daughter nucleus Zf , the decay Q value W0 and the nuclear radius R. For the non-unique decays, such as the fourth-forbidden ground-state-to-ground-state transition channel, the corresponding formula is much more complicated and involves a
November 22, 2010
18:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.04˙Suhonen
272
2− 1 74 33 As41
Qβ − = 1353.1 keV XL
2+ 2
XL
1204.31 keV
0+ 1 74 34 Se40
0νECEC
2+ 1
595.88 keV
Q = 1209.171(49) keV
0+ 1 74 32 Ge42
Fig. 4.
Resonant double-electron-capture decay of
74 Se
with emission of two L-X rays.
number of different nuclear matrix elements (twelve in the case of fourth-forbidden decay).23,24 The fourth-forbidden non-unique ground-state-to-ground-state decay was calculated first24 by using the MQPM (microscopic quasiparticle-phonon model25 ) and later26 by applying the pnMQPM (proton-neutron MQPM) (Fig. 5a). While both theoretical calculations agreed with the experimental data reasonably well, the pnMQPM result was slightly better of the two. Encouraged by this success, the same pnMQPM description was applied to the simpler ground-state-to-excited-state decay channel.18 Surprisingly, the Florida State University measurement of the Q value combined with the HADES half-life measurement lies far off from the calculated curve of Fig. 5b: The discrepancy between the theoretical curve and the 1σ uncertainty limits of the experiments is roughly a factor of 15 in the half-life. It is clear from Eq. (6) that the discrepancy must be explained by either a very inaccurate nuclear matrix element, some neglected effect in the phase-space integral or a combination of both. To attribute the factor-of-15 error in half-life solely to the nuclear wave functions, the nuclear matrix element needs to be off by about a factor of four. Yet both pnMQPM and MQPM calculations agree with the na¨ive picture (Fig. 6), where both states are well approximated with a one-quasiparticle state. Unless this simple interpretation of the final state is completely wrong and
November 22, 2010
18:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.04˙Suhonen
273 9 7
Half-life (1020 y)
Half-life (1014 y)
10
pnMQPM Experiment
8 6 5 4 3
0.1
2 1
1
0.46
0.48
0.50
0.52
0.54
50
Q value (MeV)
100
150
200
250
Q-value (eV)
(a)
(b)
measurements18,19,27
Fig. 5. The most recent for the half-life and Q value compared with the pnMQPM calculations18,26 for the ground-state-to-ground-state decay channel (a) and the groundstate-to-excited-state channel (b). Notice the logarithmic scale in the right figure.
82 3/2+
11(2) ps
stable
1/2+
0h11/2 1d3/2 2s1/2 0g7/2 1d5/2
0.497334(22)
0.000
115 50 Sn65
Fig. 6. In the simplest view, the first excited state of of the unpaired 2s1/2 neutron to the 1d3/2 orbit.
50 115 Sn
can be interpreted as an excitation
the true nature of the state is some more exotic configuration, the solution to the puzzle must lie in the lepton phase space, in the effects that become important for the presently discussed ultra-low Q values but that can be neglected for beta decays with typical Q values. There are at least four different effects of atomic origin that remain unknown for the decays with Q values this low for the moment:28 the electron screening effect, the atomic overlap effect, the exchange effect and the final-state interactions. According to the existing literature, they are all known to get more significant as the Q value decreases. While they are completely negligible for typical beta-decay Q values, according to the existing theoretical estimates they can have already a contribution of several per cent for low-Q-value decays. Traditionally the Rose prescription29 has been accurate enough to estimate the electron screening correction to the beta-decay half-life. For the ultra-low Q values it breaks down completely. The same holds true for the more accurate, completely relativistic expression derived by Lopez and Durand.30 The atomic overlap effect, caused by the fact that the bound electron states of the initial and final atom are slightly different, is another possible source for corrections. This effect has been theoretically studied for the allowed decays by
November 22, 2010
18:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.04˙Suhonen
274
Bahcall.31 His estimates show that there is a trend of this effect to grow stronger as the Q value decreases. For the 241 Pu decay with the Q value of 21 keV, the estimated hindrance in the decay is 2%. However, those estimates break down for the Q values as low as a few hundred keV, and cannot be applied to the case under discussion. The first approximation for the exchange effects was published by Bahcall in the same study as the atomic overlap effect.31 That approximation suggests an additional reduction in the decay rate, 2% in the case of 241 Pu. Later theoretical work by Harston and Pyper32 contradicts this result concluding that the exchange effect should actually enhance the decay. In the case of 241 Pu their calculation yielded a 7.5% enhancement of the decay rate. However, estimation formulas derived in both works are inapplicable in the ultra-low Q-value regime. The final-state interactions pose yet another theoretical challenge. The molecular final-state interactions have only been studied for the beta decay of tritium,33 where the atomic structure is very simple compared to the heavier elements. In the case of 115 In, the role of the final-state interactions in the lattice is still deep in the terra incognita: Whether the chemical bonds of the indium atoms in the sample introduce a non-negligible correction to the decay channel with the ultra-low Q value or not remains yet another open question. The development of experimental techniques has now reached a beta decay with a Q value so low that the theoretical work on the atomic effects is outdated. To fill in this gap in our knowledge more studies, both theoretical and experimental, are needed. Finding other candidates for observing decays with ultra-low Q values is currently difficult due to the fact that the atomic masses – or the ground-state-toground-state Q values – are not yet systematically measured with sufficient accuracy. It would therefore be useful to make modern high-precision measurements for them along the valley of beta stability, allowing one to find promising candidates for further studies. Another challenge in the theoretical search for the true significance of the atomic contributions is the difficulty of experimental verification: The small corrections they induce to the usual low-Q-value beta decays are dwarfed by the uncertainties in the nuclear wave functions. Therefore properly closing all the open questions may have to wait for the time that the nuclear-structure theory has evolved considerably from today’s level. Still, this does not prevent from making theoretical estimates of the atomic effects for ultra-low-Q-value decays. If they proved to be as dramatic as our simple study of 115 In suggests and if other independent experimentally observable cases of ultra-low-Q-value beta decays were found, there would be a realistic possibility of actually verifying the existence of these effects experimentally. Acknowledgments This work was supported by the Academy of Finland under the Finnish Center of Excellence Programme 2006-2011 (Project No. 213503, Nuclear and Accelerator
November 22, 2010
18:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.04˙Suhonen
275
Based Physics Programme at JYFL). References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18.
19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33.
F. T. Avignone III, S. R. Elliott and J. Engel, Rev. Mod. Phys. 80, p. 481 (2008). J. Suhonen and O. Civitarese, Phys. Rep. 300, p. 123 (1998). J. Bernabeu, A. De Rujula and C. Jarlskog, Nucl. Phys. B 223, p. 15 (1983). Z. Sujkowski and S. Wycech, Phys. Rev. C 70, p. 052501 (2004). A. S. Barabash, Ph. Hubert, A. Nachab and V. Umatov, Nucl. Phys. A 785, p. 371 (2007). A. S. Barabash et al., Nucl. Phys. A 807, p. 269 (2008). J. Dawson et al., Phys. Rev. C 78, p. 035503 (2008). M. F. Kidd, J. H. Esterline and W. Tornow, Phys. Rev. C 78, p. 035504 (2008). D. Frekers, hep-ex/0506002v2. M. Kortelainen, O. Civitarese, J. Suhonen and J. Toivanen, Phys. Lett. B 647, p. 128 (2007). M. Kortelainen and J. Suhonen, Phys. Rev. C 75, p. 051303(R) (2007). M. Kortelainen and J. Suhonen, Phys. Rev. C 76, p. 024315 (2007). J. Suhonen and M. Kortelainen, Int. J. Mod. Phys. E 17, p. 1 (2008). B. Crasemann, Atomic Inner-Shell Processes (Academic Press, New York, 1975). G. Audi, A. H. Wapstra and C. Thibault, Nucl. Phys. A 729, p. 337 (2003). J. Blachot, Nuclear Data Sheets 104, p. 967 (2005). C. M. Cattadori, M. De Deo, M. Laubenstein, L. Pandola and V. I. Tretyak, Nucl. Phys. A 748, p. 333 (2005). J. S. E. Wieslander, J. Suhonen, T. Eronen, M. Hult, V.-V. Elomaa, A. Jokinen, ¨ o, G. Marissens, M. Misiaszek, M. T. Mustonen, S. Rahaman, C. Weber and J. Ayst¨ Phys. Rev. Lett. 103, p. 122501 (2009). B. J. Mount, M. Redshaw and E. G. Myers, Phys. Rev. Lett. 103, p. 122502 (2009). S. Rahaman, V.-V. Elomaa, T. Eronen, J. Hakala, A. Jokinen, A. Kankainen, J. Ris¨ o, Phys. Rev. Lett. 103, p. 042501 (2009). sanen, J. Suhonen, C. Weber and J. Ayst¨ V. S. Kolhinen, V.-V. Elomaa, T. Eronen, J. Hakala, A. Jokinen, M. Kortelainen, ¨ o, Phys. Lett. B 684, p. 17 (2010). J. Suhonen and J. Ayst¨ J. Suhonen, From Nucleons to Nucleus: Concepts of Microscopic Nuclear Theory (Springer, Berlin, 2007). H. Behrens and W. B¨ uhring, Electron radial wave functions and nuclear beta decay (Clarendon, Oxford, 1982). M. T. Mustonen, M. Aunola and J. Suhonen, Phys. Rev. C 73, p. 054301 (2006). J. Toivanen and J. Suhonen, J. Phys. G: Nucl. Part. Phys. 21, p. 1491 (1995). M. T. Mustonen and J. Suhonen, Phys. Lett. B 657, p. 38 (2007). L. Pfeiffer, J. Allen P. Mills, E. A. Chandross and T. Kovacs, Phys. Rev. C 19, p. 1035 (1978). M. T. Mustonen and J. Suhonen, J. Phys. G: Nucl. Part. Phys. 37, p. 064008 (2010). M. E. Rose, Phys. Rev. 49, p. 727 (1936). J. L. Lopez and L. Durand, Phys. Rev. C 37, p. 535 (1988). J. N. Bahcall, Phys. Rev. 129, p. 2683 (1963). M. R. Harston and N. C. Pyper, Phys. Rev. A 45, p. 6282 (1992). A. Saenz and P. Froelich, Phys. Rev. C 56, p. 2132 (1997).
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
276
DETECTING OF RELIC NEUTRINOS AND MEASURING OF FUNDAMENTAL PROPERTIES OF NEUTRINOS WITH ATOMIC NUCLEI ∗ ˇ FEDOR SIMKOVIC
Bogoliubov Laboratory of Theoretical Physics, JINR Dubna, 141 960 Dubna, Moscow region, Russia ∗ E-mail:
[email protected] Department of Nuclear Physics and Biophysics, Comenius University, 842 48 Bratislava, Slovakia ∗ E-mail:
[email protected] If we are to understand the basic nature of the Universe in which we live, we must to understand and observe relic neutrinos. A possibility of direct detection of relic neutrinos within the KATRIN and MARE experiments is discussed. Further, theoretical questions associated with fundamental properties of neutrinos are addressed in the context of single and double beta decays. The subject of interest are nature of neutrinos (Dirac or Majorana) and the absolute mass scale of neutrinos. Keywords: Relic neutrinos, neutrino mixing, single β-decay, neutrinoless double β-decay
1. Introduction Neutrinos are probably one of the most important structural constituents of the Universe. The Big Bang Theory predicts that the significant component of them is formed by the cosmic neutrino background, an analogues of the big bang relic photons comprising the observed cosmic microwave background radiation. Tremendously large numbers of neutrinos are produced in various places in the Earth and in the Universe. Most are made naturally, by cosmic rays, by natural radioactive decay and by the thermonuclear reactions occurring inside the Sun and other stars as well as during the formation of the neutron stars. Nuclear power plants produce large numbers of neutrinos as by-product of nuclear fissions in reactors. It is also possible to produce neutrinos using beams of high energy protons generated by large accelerators. Neutrinos are important in stellar processes. Neutrinos govern the dynamics of supernovae, and hence the production of heavy elements in the Universe. Furthermore, if there is CP violation in the neutrino sector, the physics of neutrinos in the early Universe might ultimately be responsible for baryogenesis. If we are to understand ’why we are here’ and the basic nature of the Universe in which we live, we must understand the basic properties of neutrinos, which are one of the least
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
277
understood particles. Neutrinos are very special particles. Studies of these particles have played a crucial role in the understanding of the laws of elementary particles and their interactions. The recent observation of neutrino oscillations has established beyond doubt non-zero masses of neutrinos, the flavor change and mixing of neutrinos. The existence of neutrino masses is in fact the first solid experimental fact requiring physics beyond the Standard Model. The observed small neutrino masses have profound implications for our understanding of particle physics and the Universe. It has opened a new excited era in neutrino physics and represents a big step forward in our knowledge of neutrino properties. The goal of this contribution is to demonstrate that nuclear single β- and double β-decays are tools to detect relic neutrinos and to study fundamental properties of neutrinos. 2. Relic Neutrino Capture With Beta Decaying Nuclei Big Bang physics tells us that the number density of one flavor of relic neutrinos (or antineutrinos) is about 56 neutrinos per cubic centimeter (< η >= 56 cm−3 ). So far, relic neutrinos have not been observed. For number density < η > a direct detection of relic neutrinos is not possible with the current experimental techniques. However, as it was pointed out recently,1 gravitational clustering of neutrinos in our galaxy or galaxy cluster may enhance the relic neutrino density making its detection more realistic. The subject of interest is the feasibility to detect the cosmic neutrino background by means of β-decaying (3 H and 187 Re) nuclei. i) Neutrino capture on tritium In the β-decay of tritium the energy of the process is shared between the electron and the neutrino. The endpoint of the electron energy spectrum is used to extract a value for the neutrino mass mν . The KATRIN experiment (in construction phase) aims to measure mν with sensitivity of 0.2 eV.3 The electron spectrum in the reaction of relic neutrino capture on tritium, ν +3 H((1/2)+ ) →3 He((1/2)+ ) + e− ,
(1)
has a peak centered at the relic neutrino energy beyond the endpoint of the β-decay. For the relic neutrino capture rate we find Γν (3 H) =
1 2 ην 2 Gβ F0 (2, p) p p0 |MF |2 + gA |MGT |2 < ην > . π < ην >
(2)
Here, Gβ = GF Vud and F0 (Z, p) is the relativistic coulombic factor.4 |MF |2 and |MGT |2 are the squared Fermi and Gamow-Teller nuclear matrix elements of the β-decay of tritium. gV (= 1) and gA (= 1.25) are vector and axial-vector nucleon coupling constants, respectively. ην is the local density of relic neutrinos. By assuming |MF |2 = 1, |MGT |2 ' 3 and ην = < ην > we get Γν (3 H) = 4.2 10−25 y −1 . This value is in a good agreement with the result of Ref.6 For the
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
278
ratio of the relic neutrino capture rate to the β-decay rate we find2 Γν (3 H) = 7.5 10−24 , Γβ (3 H)
(3)
where T1/2 (3 H) = 12.33 y was assumed. The KATRIN experiment will use about 50 µg of tritium corresponding to 5 1018 T2 molecules.7 For number of neutrino capture events we find2 ν Ncapt (KAT RIN ) ≈ 4.2 10−6
ην y −1 . < ην >
(4)
Even by assuming an effect of clustering of relic neutrinos (ην /< ην > ' 103 − 104 ) the impact of the relic neutrino capture on KATRIN experiment is negligible. ii) Neutrino capture on rhenium The future bolometric experiment MARE will measure neutrino mass in the sub-eV range with the unique first forbidden β-decay of 187 Re:8 ν +187 Re((5/2)+ ) → 187 Os((1/2)− ) + e− .
(5)
For the capture rate of this process we derive Γν (187 Re) =
1 ην 1 Gβ F1 (76, p) (p R)2 B p p0 < ην >, π 3 < ην >
(6)
where F1 (Z, p) is the Fermi Coulomb factor and R is the radius of the rhenium nucleus. The β-strength B is given by r 2 4π X + rn gA 187 − B= |< Os(1/2 ) k τ {σn ⊗ Y1 (Ωrn )}2 k 187 Re(5/2+ ) > |2(. 7) 6 3 n n R From β-decay half-life T1/2 (187 Re) = 4.35 1010 y one finds B = 3.57 × 10−4 . This value implies Γν (187 Re) to be 2.75 10−32 y −1 (for ην = < ην >). For the ratio of capture rate to decay rate we get2 Γν (187 Re) = 1.7 10−21 . Γβ (187 Re)
(8)
Let us notice that this value is by about a factor of 200 larger than the corresponding ratio for 3 H given in Eq. (3). The MARE project will investigate the β-decay of 187 Re with absorbers of metal Re and AgReO4 . It foresees a 760 grams bolometer. For this amount of rhenium the number of neutrino capture events is2 ν Ncapt (M ARE) ' 7.6 10−8
ην y −1 . < ην >
(9)
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
279
3. Measuring of Neutrino Mass in Single β Decays The measurement of the electron spectrum in β decays provides a robust direct determination of the values of neutrino masses. In practice, the most sensitive experiments use tritium β-decay, because it is a super-allowed transition with a low Q-value. The effect of neutrino masses mk (k=1,2,3) can be observed near the end point of the electron spectrum, where Q − T ∼ mk . T is the electron kinetic energy. The current best upper bound on the effective neutrino mass mβ given by, m2β
=
3 X
|Uek |2 m2k ,
(10)
k=1
have been obtained in the Mainz and Troitsk experiments:3 mβ < 2.2 eV . Uek is the element of neutrino mixing matrix. In near future, the KATRIN experiment will reach a sensitivity of about 0.2 eV .3 Calorimetric measurements of the β-decay of rhenium where all electron energy released in the decay is recorded, appear complementary to those carried out with spectrometers. The first forbidden unique decay, 187
Re → 187 Os + e− + ν e ,
(11)
is particularly promising due to its low transition energy of ∼ 2.47 keV and the large isotopic abundance of 187 Re (62.8%), which allows the use of absorbers made with natural Rhenium. Measurements of the spectra of 187 Re have been reported by the Genova and the Milano/Como groups (MIBETA and MANU experiments). The achieved sensitivity of mβ < 15 eV was limited by statistics.3 The success of rhenium experiments has encouraged the micro-calorimeter community to proceed with a competitive precision search for the neutrino mas. The ambitious project is planned in two steps, MARE I and MARE II. MARE I is to meet the existing upper limit of 2 eV around 2011, MARE II is to challenge the KATRIN goal of 0.2 eV. 8 The ground-state spin-parity is 5/2+ for 187 Re and 1/2− for the daughter nucleus 187 Os, and the decay is associated with ∆J π = 2− (∆L = 1, ∆S = 1) of the nucleus, i.e., classified as unique first forbidden β-decay. The emitted electron and neutrino are expected to be, respectively, in p3/2 and s1/2 states or vice versa. The differential decay rate is a sum of two contributions associated with emission of the s1/2 and the p3/2 state electrons. By considering the finite nuclear size effect the theoretical spectral shape of the β-decay of 187 Re is9 3 q X G2 V 2 dΓ N (Ee ) = = |Uek |2 F 3ud BR2 pe Ee (E0 − Ee ) (E0 − Ee )2 − m2k dEe 2π k=1
1 × F1 (Z, Ee )p2e + F0 (Z, Ee )((E0 − Ee )2 − m2k ) θ(E0 − Ee − mk ) 3
(12)
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
280
with g2 B = A | < 0− 1/2 || 6
r
4π X + rn 2 τ {σn ⊗ Yn }2 ||0+ 5/2 > | . 3 n n R
(13)
GF is the Fermi constant and Vud is the element of the Cabbibo-Kobayashi-Maskawa (CKM) matrix. pe , Ee and E0 are the momentum, energy, and maximal endpoint energy (in the case of zero neutrino mass) of an electron, respectively. The Fermi functions F0 (Z, E) and F1 (Z, E) in (12) are due to a distortion of the s1/2 - and the p3/2 - electron wave states in the Coulomb field of final nucleus, respectively. The nuclear matrix element of the process in (13) can be determined from the measured half-life of the β-decay of 187 Re. For a normal hierarchy (NH) of neutrino masses with m3 > m2 > m1 the Kurie function of the β-decay of 187 Re can be written as9 h p √ K(y) = BRe y + m1 |Ue1 |2 y(y + 2m1 ) p +|Ue2 |2 (y + m1 − m2 )(y + m1 + m2 )θ(y + m1 − m2 ) i1/2 p , (14) +|Ue3 |2 (y + m1 − m3 )(y + m1 + m3 )θ(y + m1 − m3 ) with
BRe
√ s GF Vud B R2 p2e F1 (Z, Ee ) √ = 3 F0 (Z, Ee ) 2π 3
(15)
and y = (E0 − Ee − m1 ) ≥ 0. We note that with a good accuracy the factor BRe can be considered to be a constant. So far the rhenium β-decay experiments will not see any effect due to neutrino masses, it is possible to approximate mk Q − T and obtain 1/2 q (16) K(y) ' BRe (y + mβ ) y(y + 2mβ ) with y = (E0 − Ee − mβ ). For mβ = 0 the Kurie function for β-decay of 187 Re is linear near the endpoint. However, the linearity of the Kurie function is lost if mβ is not equal to zero. 4. Neutrinoless Double β − Decay The neutrinoless double beta decay (0νββ-decay), (A, Z) → (A, Z + 2) + 2e− ,
(17)
allows to prove the nature of neutrino (that it is a Majorana particle, ν = ν, or a Dirac particle, ν 6= ν). Besides, the 0νββ-decay rate could determine an absolute scale of neutrino mass, the neutrino mass hierarchy and CP-violating phases of neutrinos. The evidence for a 0νββ-decay of 76 Ge has been claimed by some authors of the +0.44 0ν Heidelberg-Moscow collaboration at LNGS with T1/2 = 2.23−0.31 × 1025 years.10
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
281
Such a claim has raised some criticism but none of the existing experiments can rule out it. The only certain way to confirm or refute this claim is with additional sensitive experiments. The next generation experiments should achieve this goal. 15 There is a general consensus that the 0νββ-decay has to be observed at different isotopes. The 0νββ-decay can be generated with different lepton number violating mechanisms like exchange of light or heavy neutrinos, the exchange of SUSY superpartners with R-parity violating, leptoquarks, right-handed W-bosons or Kaluza-Klein excitations, among others, which have been discussed in the literature. A detection of the 0νββ-decay immediately generates questions: What is the mechanism that triggers the decay? What happens, if several mechanisms are active for the decay? Can the different amplitudes be determined using the 0νββ-data from different nuclear systems, if one determines theoretically the different matrix elements? Possibilities to disentangle at least some of the possible mechanisms include the analysis of angular correlations between the emitted electrons,11 study of the branching ratios of 0νββ-decays to ground and excited states,12 a comparative study of the 0νββ-decay and neutrinoless electron capture with emission of positron (0νECβ + )13 and analysis of possible links with other lepton-flavor violating processes (e.g., µ → eγ).14 Unfortunately, the search for the 0νECβ + -decay is complicated due to small rates and the experimental challenge to observe the produced X-rays or Auger electrons, and most double beta experiments of the next generation are not sensitive to electron tracks or transitions to excited states. In connection with the neutrino oscillations much attention is attracted to the light neutrino mass mechanism of the 0νββ-decay. Then, the inverse value of the 0νββ-decay half-life for a given isotope (A,Z) is15 0ν −1 (T1/2 ) = G0ν (Qββ , Z) |M 0ν |2 | < mββ > |2 .
(18)
Here, G0ν (Qββ , Z) and M 0ν are, respectively, the known phase-space factor and the nuclear matrix element M 0ν . The main aim of the experiments on the search for 0νββ-decay is the measurement of the effective Majorana neutrino mass mββ : hmββ i =
3 X
|Uei |2 eiαi mi , (all mi ≥ 0) .
(19)
i
Here, αi is unknown Majorana phase. The effective Majorana neutrino mass mββ can be used to constrain the neutrino mass pattern and the absolute neutrino mass scale, i.e., information not available by the study of neutrino oscillations. However, interpreting existing results as a measurement of the neutrino effective mass, and planning new experiments, depends crucially on the knowledge of the corresponding NMEs that govern the decay rate. Accurate determination of the NMEs, and a realistic estimate of their uncertainty, is therefore an integral part of the study.
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
282
6.0
LSSM IBM PHFB (R)QRPA
5.0
3.0
M
0ν
4.0
2.0 1.0 0.0
48
Ca
76
Ge
82
Se
96
Zr
100
Mo
116
Cd
128
Te
130
Te
136
Xe
150
Nd
Fig. 1. The 0νββ-decay NMEs calculated within different nuclear structure approaches: Large Scale Shell Model (LSSM),16 (Renormalized) Quasiparticle Random Phase Approximation (R)QRPA,17 Projected Hartree-Fock Bogoliubov approach (P-HFB)18 and Interacting Boson Model (IBM).19 The Miller-Spencer Jastrow two-nucleon short-range correlations are taken into account.
The nuclear matrix elements for the 0νββ-decay must be evaluated using tools of nuclear structure theory. Unfortunately, there are no observables that could be directly linked to the magnitude of 0νββ-decay nuclear matrix elements and that could be used to determine them in an essentially model independent way. The calculation of the 0νββ-decay matrix elements is a difficult problem because ground and many excited states of open-shell nuclei with complicated nuclear structure have to be considered. Matrix elements for the double beta decay are calculated by the Large Scale Shell Model (LSSM),16 the Quasi-particle Random Phase Approach (QRPA),17 angular momentum projected (with real quasi-particle transformation) Hartree-FockBogoliubov (P-HFB) wave functions18 and by the Interacting Boson Model (IBM).19 The calculated 0νββ-decay NMEs within these approaches are presented in Fig. 1. We note a good agreement between the QRPA and the IBM results. The LSSM offer a different behavior, being practically independent on atomic mass number (except of the case of double magic 48 Ca). 5. Neutrinoless Double Electron Capture Recently, a new theoretical background for the neutrinoless double electron capture (0νECEC) transitions to final atomic state with a minimal mass difference
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
283
with original atom has been proposed.20 A new phenomenon is an oscillation plus deexcitations of atoms (A, Z) ↔ (A, Z + 2)∗∗ ,
(A, Z) ↔ (A, Z − 2)∗∗ ,
(20)
which has origin in a mixing of a pair of neutral atoms, (A, Z) and (A, Z ± 2)∗∗ differing by two units in lepton numbers. The first atom (A, Z) is in a ground state and the second atom (A, Z ± 2)∗ might be in an excited state both in respect of atomic and nuclear structure. The underlying mechanism is a transitions of two − protons and two bound electrons to two neutrons p+p+e− b +eb ↔ n+n. A signature of the oscillations might be an electromagnetic deexcitation of the involved unstable nucleus and atomic shell with the electron holes. A phenomenological analysis of the oscillations plus deexcitations of atoms pointed out on a resonant enhancement of the 0νECEC decay rate, that has a Breit-Wigner form. It was manifested that it is reasonable to hope that a search for oscillation plus deexcitation of atoms, which are sufficiently long lived to conduct a practical experiment, may uncover processes with lepton number violation. For that purpose systems of two atoms with the smallest mass difference were looked for in the periodic table. The favorable atomic systems are as follows: 162 Er → 162 Dy∗∗ , 156 Dy → 156 Gd∗∗ , 152 Gd → 152 Sm∗∗ , 112 Sn → 112 Cd∗∗ , 74 Se → 74 Ge∗∗ etc. It is worth mentioning that the lepton number conserving double electron capture with emission of two neutrinos, (A, Z) ↔ (A, Z + 2)∗∗ + 2ν,
(21)
is strongly suppressed as the phase space is very small. Within the formalism of oscillations plus deexcitation of atoms the decay width of the 0νECEC process is given by LN V 2 V νECEC Γ = ΓX . (22) 2 (Mi − Mf ) + Γ2X
Here, V LN V is the lepton number violating potential mixing atoms with ∆L = ±2. Mi and Mf are, respectively, atomic masses in the initial and final states and ΓX is electromagnetic width of the final quasi-stationary state. By an accidental, almost complete degeneracy of the parent ground state 112 Sn with the second excited state in the daughter nucleus 112 Cd (|Mi − Mf | ≤ ΓX ) a resonant enhancement of the decay rate by several orders of magnitude could is expected, bringing the half-life down to 1025 years (for a 1 eV mass of the neutrino). We note that a precision of order 10 eV appears possible using today’s ion trap facilities. 6. Conclusions As the most intriguing and fascinating fundamental particle, the neutrino is so important that neutrino physics has become one of the most significant branches of modern physics.
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
284
A direct detection of relic neutrinos would be of fundamental importance. But, we conclude that by taking into account even gravitational clustering of the relic neutrinos a direct detection of them through neutrino induced single β-decay is not possible with present methods and techniques. The planned rhenium β-decay experiment MARE will allow to probe the absolute mass scale of neutrinos with the same sensitivity as the tritium β-decay experiment KATRIN. We found that the Kurie function of the rhenium β-decay close to the endpoint coincides up to a factor to that for the super-allowed β-decay of tritium.5 Experimental searches for the 0νββ-decay are being pursued worldwide. However, interpreting existing results as a measurement of the neutrino effective mass, and planning new experiments, depends crucially on the knowledge of the corresponding nuclear matrix elements that govern the decay rate. Accurate determination of the nuclear matrix elements, and a realistic estimate of their uncertainty, is of great importance. Finally, it is reasonable to hope that the search for the neutrinoless double electron capture process of atoms, which are sufficiently long lived to conduct a practical experiment, may established the Majorana nature of neutrinos. This possibility should be considered as alternative and complementary to searches for the 0νββ-decay. References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12.
13. 14. 15. 16.
R. Lazauskas, P. Vogel, C. Volpe, J. Phys. G 35, 1 (2008). ˇ R. Hod´ ak, S. Kovalenko, F. Simkovic, AIP Conf. Proc. 1180, 50 (2009). E. W. Otten and C. Weinheimer, Rep. Prog. Phys. 71, 086201 (2008). M. Doi, T. Kotani and E. Takasugi, Prog. Theor. Phys. (Supp.) 83, 1 (1985). ˇ F. Simkovic, R. Dvornick´ y, A. Faessler, Phys. Rev. C 77 055502 (2008). A. G. Cocco, G. Mangano, M. Messina, J. Phys. Conf. Ser. 110, 082014 (2008); ibid 120, 022005 (2008). A. Ringwald, Nucl. Phys. A 827, 501c (2009). MARE Collaboration, E. Andreotti et al., Nucl. Instrum. Meth. A 572, 208 (2007). ˇ R. Dvornick´ y, F. Simkovic, AIP Conf. Proc. 1180, 125 (2009); K. Muto, R. Dvornick´ y, ˇ F. Simkovic, Prog. Part. Nucl. Phys. 64, 228 (2010). H. V. Klapdor-Kleingrothaus, I. V. Krivosheina, Mod. Phys. Lett. A21, 1547 (2006); H. V. Klapdor-Kleingrothaus, A. Dietz, H. L. Harney, I. V. Krivosheina, Mod. Phys. Lett. A 16, 2409 (2001). M. Doi, T. Kotani, H. Nishiura and E. Takasugi, Prog. Theor. Phys. 69, 602 (1983). ˇ S. M. Bilenky, J.A. Grifols, Phys. Lett. B 550, 154 (2002); F. Simkovic, A. Faessler, Prog. Part. Nucl. Phys. 48, 201 (2002); F. Depisch, H. P¨ as, Phys. Rev. Lett. 98, 232501 (2007). M. Hirsch, K. Muto, T. Oda and H. V. Klapdor-Kleingrothaus, Z. Phys. A 347, 151 (1994). V. Cirigliano, A. Kurylov, M. J. Ramsey-Musolf and P. Vogel, Phys. Rev. Lett. 93, 231802 (2004). F. T. Avignone, S. R. Elliott, and J. Engel, Rev. Mod. Phys. 80, 481 (2008). E. Caurier, J. Menendez, F. Nowacki, A. Poves, Phys. Rev. Lett. 100, 052503 (2008).
November 22, 2010
18:35
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.05˙Simkovic
285
ˇ 17. F. Simkovic, A. Faessler, V. Rodin, P. Vogel and J. Engel, Phys. Rev. C 77, 045503 ˇ (2008); F. Simkovic, A. Faessler, H. M¨ uther, V. Rodin, M. Stauf, Phys. Rev. C 79, 055501 (2009). 18. K. Chaturvedi, R. Chandra, P. K. Rath, P. K. Raina and J. G. Hirsch, Phys. Rev. C 78, 054302 (2008). 19. J. Barea, F. Iachello, Phys. Rev. C 79, 044301 (2009). ˇ 20. F. Simkovic, M. I. Krivoruchenko, Phys. Part. Nucl. Lett. 6, 298 (2009).
November 22, 2010
18:55
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.06˙Capelli
286
NEUTRINOLESS DOUBLE BETA DECAY WITH TeO2 BOLOMETERS: PAST AND FUTURE S. CAPELLI∗ on behalf of CUORE and CUORICINO Collaboration Physics Departement and INFN, Universit` a di Milano Bicocca Milano, 20126, Italy ∗ E-mail:
[email protected] www.unimib.it TeO2 bolometric detectors have shown to be a promizing technique for Neutrinoless Double Beta Decay (ββ(0ν) ) research. With an analyzed statistics of ∼18.14 kg 130 Te × years the CUORICINO experiment has reached a limit on 130 Te half life for this decay of 2.94×1024 years (90% C.L.). After an intense R&D aimed to background reduction, the next generation CUORE experiment is presently under construcion and foreseen to take data in 2013. Its sensitivity on the electron neutrino Majorana mass hm ee i is expected to probe the Inverted Hierarchy Region (IHR) of the neutrino mass spectrum. Keywords: Neutrinoless Double Beta Decay, neutrino mass, bolometer.
1. Introduction After the positive results obtained by neutrino oscillation experiments, the question about neutrino nature and mass has become one of the frontier problems of fundamental physics. Experiments looking for the ββ(0ν) of even-even nuclei have the highest sensitivity to possible violations of the total lepton number L and to Majorana neutrino masses. A positive signal would therefore give a clear answer with respect to neutrino nature and absolute mass scale. ββ(0ν) is a lepton violating nuclear transition where a nucleus (A,Z) decays into an (A,Z+2) nucleus with the emission of two electrons and no neutrino. The experimental signature in direct counting experiments, which measure the energy of the ββ(0ν) emitted electrons, is a peak at the Q-value in the sum energy spectrum. The transition width is proportional to the square of | hmee i |, the proportionality constant being FN , the so called nuclear factor of merit. It is the product of the nuclear matrix element (NME) and the phase-space factor. Since larger FN ’s imply stronger constrains in hmee i , nucleus with higher FN are preferable. In spite of such a characteristic imprint, the rarity of the process under consideration makes its identification very difficult. In fact, ββ(0ν) half-lives are as long as 1025 y (and beyond), according to present limits. A claim for ββ(0ν) evidence +0.44 has been made for 76 Ge with a ββ(0ν) half-live of (2.23−0.31 )×1025 y.1 The experimental sensitivity for a direct counting experiment looking for
November 22, 2010
18:55
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.06˙Capelli
287
ββ(0ν) can be expressed as: r S0ν
i.a. ∝ε A
M ·T ∆E · b
(1)
where ε is the detection efficiency, i.a. the isotopic abundance, A the compound atomic mass, M the source mass, T the measure time, ∆E the energy resolution in the Region of Interest (ROI) and b the background in the ROI. A sensitive experiment needs therefore to measure for long times having very large source masses, good energy resolution, high efficiency and low background in the ROI. The use of DBD candidates with high natural isotopic abundance reduces the experimental cost due to isotopic enrichement. Two main approaches are used to search for ββ(0ν) : homogeneous (i.e. calorimetric) and non-homogeneous. In the first approach the detector material is chosen to be a compound containing the DBD candidate, providing a high efficiency (∼100%) and, in the case of solid-state and phonon detectors, high resolution technique. Detector masses up to 1 t are feasible. In the case of Xe TPC event topology can be used for background discrimination. In the non-homogeneous approach, an external source of the chosen DBD candidate is placed in the form of thin foils inside the detector. Event reconstruction by tracking can be used for background suppression, but the low efficiency and low energy resolution can be a limiting factor for these experiments. Standard Model DBD (ββ(2ν) ) can in fact become an important source of unavoidable background. 2. Low Temperature Calorimeters A bolometer consists of three main components: particle absorber, temperature sensor and thermal link. A particle energy deposition in any point of the absorber volume (no dead layer is present) originates a temperature rise. When working at very low temperatures (i.e. 10 mK) and using dielectric and diamagnetic absorbers, the temperature variation becomes measurable (i.e. 100 µK for 1 MeV depositions). This temperature variation is converted to an electric signal by means of devices whose resistance strongly depends on the temperature. The time response of the system is mainly dominated by the heat capacities of the system elements and by the thermal links. The absorber material can be chosen quite freely, the only requirements being reasonable thermal and mechanical properties. More DBD candidates are therefore in principle exploitable, by choosing a proper compound containing the wanted nucleus. 3. Past TeO2 Experiments After some years of research and development the Milano group started in 1989 a series of bolometric experiments in the National Laboratory of Gran Sasso (LNGS) to search for the ββ(0ν) of 130 Te . The choice of this isotope is motivated by its
November 22, 2010
18:55
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.06˙Capelli
288
high natural isotopic abundance (33.8%), a favorable nuclear factor of merit F N (a factor 5-10 more favorable than that of 76 Ge), and a Q-value (2527.52 ± 0.013 keV)2,3 located between the peak and the Compton edge of the 2615 keV line of 208 Tl . This leaves a clean window to look for the signal. The chosen compound for the absorber is TeO2 . It showed in fact to have better mechanical properties than natural tellurium and a higher Debye Temperature, this implying a lower heat capacity and thus a higher pulse amplitude for a given working temperature. TeO2 crystals showed also a good intrinsic radiopurity (less than 1pg/g in 232 Th and 238 U ), a fundamental requirement to study rare events as ββ(0ν) . The preliminary experiment was performed with 73 g and 334 g single TeO2 22 crystals, reaching a limit on 130 Te ββ(0ν) half-life (T0ν years (90% 1/2 ) of 2.1 × 10 CL). The second step was an array of 4 detectors, 340 g each, to study the feasibility of a large massive experiment made by identical bolometers. The cumulative limit 0ν obtained on T1/2 of 130 Te was 2.39 × 1022 years (90% CL).4 At the end of September 1997, a new array consisting of 20 TeO2 340 g crystals (MiDBD) was operated 130 in the LNGS.5 The new achieved limit on T0ν Te was 9.5 × 1022 years 1/2 of (90% CL). The knowledge acquired in terms of detector performance optimization and background reduction was delivered to the realization of a second large mass bolometric experiment, CUORICINO. 3.1. CUORICINO As for the previous experiments, CUORICINO was also located in the LNGS, which provides a natural shield for cosmic rays of 3600 m.w.e. The muon flux was measured to be of (3.2 ± 0.2)·10−8 µ/s/cm2 ,6 the neutron flux between 10−7 and 10−6 n/s/cm2 depending on the neutron energy7, ,8 and the gamma flux below 3 MeV of 0.73 γ/s/cm2 ,910 Operated between 2003 and 2008, CUORICINO consisted of an array of 62 TeO2 crystals, arranged in a tower of 13 planes, for a total TeO2 mass of 40.7 kg and a total 130 Te mass of 11.6 kg. As shown in Fig. 1- upper part, 11 out off 13 planes were made by 5×5×5 cm3 crystals (790 g/each) and the remaining two by 3×3×6 cm3 crystals (340 g/each). Two small crystals were enriched in 130 Te (75%) and two in 128 Te (82.3%). The detector tower was kept at a working temperature of ∼10 mK by means of a dilution refrigerator and provided with copper and lead shields. With an analyzed statistics of ∼18.14 kg 130 Te × y, the limit set for the 130 Te 0ν T1/2 is 2.94×1024 years (90% C.L.), corresponding to an upper limit on hmee i between 0.21 and 0.72 eV (using NME calculations from11,12 ). Due to the different isotope investigated and to the NME uncertainties, CUORICINO is not in a position to confirm or exclude the 76 Ge claim. As shown in Fig. 1- down part, no peak appears at the Q-value. A Maximum Likelihood procedure has been applied to the anticoincidence sum spectra, considering separately big, small and enriched crystals. For each spectrum a sum of N gaussians, one for each crystal, was used as a global response function. The FWHM resolution of each Gaussian was fixed to
November 22, 2010
18:55
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.06˙Capelli
289
the characteristic one of each corresponding detector (evaluated on the 2615 keV Tl peak in the calibration spectrum). Its value, averaged over all the detectors, is of 7 keV. The background underlying the ββ(0ν) peak was fit with a flat function, and the 60 Co sum line at 2505 keV was included in the fit region. The background value in the ROI is 0.18 ± 0.02 c/keV/kg/y. The analysis of complete CUORICINO set is being performed with new tools developed for the next generation CUORE experiment. 208
Fig. 1.
CUORICINO set-up and ββ(0ν) limit evaluation.
November 22, 2010
18:55
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.06˙Capelli
290
4. From CUORICINO to CUORE In order to increase significantly the sensitivity, detector masses up to 1 ton are necessary, together with a sensitive reduction of the background. The CUORE experiment, presently under construction, consists of 988 TeO2 absorbers, for a total TeO2 mass of ∼741 kg, and a total 130 Te mass of ∼204 kg. The crystals will be arranged in 19 towers, consisting of 13 planes with four 5×5×5 cm3 detectors each. The CUORE detector will be placed in a single dilution refrigerator, able to reach working temperature of ∼10 mK, and provided with specially designed shields for environmental and experimental radioactivity. Bolometric tests performed with CUORE-like detectors have shown that an improvement in the FWHM resolution by a factor 40% is feasible. Many efforts are being done in order to increase system reliability and duty cicle. At the state of the art the most challenging task is background reduction. Values at least down to 0.01 c/keV/kg/y are needed in order to go inside the IH region of the neutrino mass spectrum (a factor 18 with respect to CUORICINO). The coincidence/anticoincidence analysis of the MiDBD and CUORICINO measured background and its modelization by means of Monte Carlo simulations,9 have shown three most probable contribution to the background in the ROI: MultiCompton events from 208 Tl decays, due to 232 Th contamination in the cryostat (40 ± 10 %), degraded alphas and betas from surface contamination of inert materials facing the crystals (50 ± 10 %) and of crystals themselves (10 ± 5 %). Many efforts have been done in order to reduce the background in view of CUORE. New detector holders have been designed, thus reducing by a factor ∼2 the amount of copper facing the crystals. Careful shields design and materials selection have been performed in order to keep the overall bulk contribution in the ROI down to 0.001 c/keV/kg/y. Strict protocol for crystal production and quality check have been signed, including the implementation of a new surface mechanical treatement, previously optimized and tested with bolometric measurements in the Hall C of LNGS. An ultimate test for copper surface contamination reduction has just finished in the Hall A of LNGS. Three different surface cleaning procedures have been applied to the copper used for the supporting structure of three separate towers, consisting of three planes of four 5×5×5 cm3 crystals each. The three techniques (chemical etching, plasma cleaning and polyethilene wrapping) have shown compatible results. On the basis of the actual achievements the projection to the CUORE background has been evaluated as shown in Tab. 1. In the realistic scenario of a background in the ROI of 0.01 c/keV/kg/y and of 5 keV FWHM resolution in the ROI, CUORE should reach in five years live time a 26 1σ sensitivity on 130 Te T0ν years, corresponding to an upper bound 1/2 of 2.1 × 10 on hmee i between 23 and 82 meV (with NME calculation from11,12 ), thus probing a relevant part of the IH region of the neutrino mass spectrum.
November 22, 2010
18:55
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.06˙Capelli
291 Table 1.
CUORE background projection
Background Source
Rate in the ROI ×10−3 c/keV/kg/y
Crystal Bulk Crystal Surface Mounting Cu Bulk Mounting Cu Surface Experimental Setup Environmental Gammas10 Environmental neutrons10 Environmental Muons (no VETO)10
<1 <3 <0.6 20 ÷ 40 <10 <0.4 (8.6 ± 6.06)×10−3 0.104 ± 0.022
4.1. CUORE and beyond CUORE is foreseen to start taking data in 2013. Every item (hut construction, crystal growing and storage, cryostat purchasing and construction, thermistors and heaters preparation, assembly tools developement) is on going. The CUORE building and cryostat support structure are now completed. The cryostat has been purchased and the delivery of the dilution unit and flanges is scheduled within 2010. After bolometric tests of four sample crystals from each batch, 241 TeO2 crystals have already been produced and stored underground. Electronics have been designed and is being procured. The test of the first CUORE tower (CUORE-0) is under preparation. It is going to be installed and operated in the existing dilution refrigerator used for CUORICINO. It is intended to test CUORE assembly chain and procedure, and to be a high statistics test of the bolometric behaviour of CUORE detectors, data acquisition and analysis tools. It should also provide a final test of crucial components of the estimated CUORE background. It is foreseen to take data in 2011. Further improvements in ββ(0ν) sensitivity could be reached in the future by replacing natural TeO2 detectors with crystals enriched in 130 Te . The same experimental infrastructure developed for CUORE could be used. This would allow to reach a total 130 Te mass greater than 500 kg and to increase the value of the isotopic abundance by a factor three. Active background rejection (mainly alpha rejection) could also be implemented in order to reduce the background down to 0.001 c/keV/kg/y, thus allowing to cover all the IH region of the neutrino mass spectrum. Tests by means of surface sensitive detectors have been performed13 in this respect. Scintillating bolometers are also a valid alternative for active background rejection. Different compounds and ββ(0ν) candidates are being tested (i.e. ZnSe, CdWO4 , CdMoO4 ).14 5. Conclusions Low temperature calorimeters are a well established and competitive technique for ββ(0ν) search. The potential of this approach has been demonstrated by CUORICINO, which provided one of the most stringent limits on hmee i . Intense R&D
November 22, 2010
18:55
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.06˙Capelli
292
and careful material selection and shield design have been performed to lower the background sources in view of the next generation experiment CUORE. The projection of the actual achievements to CUORE show that the background goal of 0.01 c/keV/kg/y has almost been reached, thus allowing to explore the IH region of the neutrino mass spectrum. Presently under construction CUORE first tower will take data in 2011 and the full CUORE detector is foreseen to start in 2013. References 1. H.V. Klapdor-Kleingrothaus, H.V. Krivosheina, Modern Phys. Lett. A 21, 1547–1566 (2006). 2. M.Redshaw et al., Phys. Rev. Lett. 102, 212502 (2009). 3. N.D.Scielzo et al., Phys. Rev. C 80, 025591 (2009). 4. C. Brofferio et al., Nucl Phys. Proc. Suppl. 48, 238 (1996). 5. A.Alessandrello et al., Nucl. Instr. and Meth. in Phys. Res. A 440, 397–402 (2000). 6. M.Ambrosio et al., Phys. Rev. D 52, 3793 (1995). 7. F.Arneodo et al., Il Nuovo Cimento 112 A, 959 (1999). 8. P.Belli et al., Il Nuovo Cimento 101 A, 959 (1989). 9. C.Bucci et al., Eut. Phys. Journ. A 41, 155–168 (2009). 10. F.Bellini et al., Astrop. Phys. 33, 169–174 (2010). 11. V.Rodin et al., Nucl. Phys. A 766, 107 (2006). 12. V.Rodin et al., arXive:nucl-th/0706.4304v1 (2007). 13. L.Foggetta et al., Appl. Phys.Lett. 86, 134106 (2005). 14. S.Pirro et al., Phys. of Atom. Nucl. 69, 2109–2116 (2006).
November 22, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.07˙Roy
293
THE MAGIC OF FOUR ZERO NEUTRINO YUKAWA TEXTURES PROBIR ROY DAE Raja Ramanna Fellow, Saha Institute of Nuclear Physics, 1/AF Bidhan Nagar, Kolkata 700064, India ∗ E-mail:
[email protected] Four is the maximum number of texture zeros allowed in the Yukawa coupling matrix of three massive neutrinos. These completely fix the high scale CP violation needed for leptogenesis in terms of that accessible at laboratory energies. µτ symmetry drastically reduces such allowed textures. Only one form of the light neutrinos mass matrix survives comfortably while another is marginally allowed. Keywords: Neutrino Mass; Texture Zeros; Flavor symmetry
1. Introduction There is something magical about four zero Yukawa textures. After earlier success in the quark sector,1 it is proving useful with leptons, as explained in the abstract. Within the type-I seesaw framework and in the weak basis of mass diagonal charged leptons and heavy right chiral neutrinos, the results stated above were derived 2 leading to a highly constrained and predictive scheme.2,3 We shall discuss the effect of the imposition of µτ symmetry4 on this scheme. The light neutrino mass matrix in the usual notation is Mν ' −mD MR−1 mTD ,
(1)
with O(MR ) O(mD ). Mν diagonalizes as under U † Mν U ∗ = Mdν = diag(m1 , m2 , m3 ),
(2)
m1,2,3 being real and positive. Our PMNS parametrization is iαM 0 0 1 0 0 c13 0 −s13 e−iδD c12 s12 0 e −s12 c12 0 0 eiβM 0 (3) U = 0 c23 s23 0 1 0 iδD 0 0 1 0 −s23 c23 s13 e 0 c13 0 0 1
where cij ≡ cos θij , sij ≡ sin θij and δD , αM , βM are the one Dirac phase and two Majorana phases respectively. In our basis, M` = diag(me , mµ , mτ ) and MR = diag(M1 , M2 , M3 ), all mass eigenvalues being real and positive. The CasasIbarra5 form for mD which equals the neutrino Yukawa coupling matrix times the
November 22, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.07˙Roy
294
relevant Higgs VEV, is mD = iU
q p Mνd R MR ,
(4)
where R in general is an unknown complex orthogonal matrix: R T R = RRT = I. The best fit experimental numbers, needed by us, appear in Table 1. Loosely, R = ∆m221 ' 3.2 × 10−2 , θ23 ' π4 , θ12 ' sin−1 √13 and θ13 is small. We assume no |∆m232 | massless neutrino, i.e. detMν 6= 0. Table 1. Quantity ∆m221 = m22 − m21 ∆m232 = m23 − m22
Best-fit experimental numbers from [6] Experimental values −5 eV 2 7.59 ± 0.20 +0.61 −0.69 × 10 −2.40 ± 0.11 2.51 ± 0.12
+0.37 −0.36
+0.39 −0.36
× 10−3 eV 2 (inverted) × 10−3 eV 2 (normal)
θ12
34.4 ± 1.0
θ23
42.3+5.3 −2.8
θ13
+3.2 −2.9
◦
11.4 ◦ −7.1
< 13.2◦
2. Four Zero Yukawa Textures and µτ Symmetry It is more natural to attribute ab initio textures to mD , which appears in the Lagrangian rather than to the derived mν . 72 allowed four zero textures in mD have been classified into2 two categories: (A) 54 with one pair of vanishing conjugate off diagonal elements in Mν and (B) 18 with two zeros in one row and one each in the other two (k, l say) obeying det(cofactor(Mν )kl ) = 0. For all these, the R matrix of Eq. (4) has been reconstructed in terms of the element of U , Mνd and MR . Consequently, all phases of R are given in terms of δD , αM and βM which completely determine all phases in mD including those responsible for leptogenesis. Elements of mD and MR are required by µτ symmetry to remain invariant under the interchange νµ ↔ ντ , Nµ ↔ Nτ . The seesaw formula immediately implies a custodial µτ symmetry in Mν itself, leading to θ23 = π4 , θ13 = 0. Further, the number of four zero textures allowed in mD is drastically reduced.7 Only two each (A) (B) are allowed in categories (A) and (B) both leading to the same Mν or Mν : 2 2iα 2 k1 e + 2k22 k2 k2 l1 l1 l2 eiβ l1 l2 eiβ Mν(A) = m k2 1 0 Mν(B) = m0 l1 l2 eiβ l22 e2iβ + 1 l22 e2iβ . k2 0 1 l1 l2 eiβ l22 e2iβ l22 e2iβ + 1 (5)
November 22, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.07˙Roy
295
Here m and m0 are overall mass scales, k1 , k2 , l1 , l2 are real parameters and α, β are phases. Turning to θ12 and R, one can derive that R = 2(X12 + X22 ) X1 tan 2θ12 = . X2
1/2
[X3 − (X12 + X22 )
1/2 −1
]
(6)
,
(A)
The X’s of Eq. (6) are given for Mν as √ 1/2 2 (A) X1 = 2 2k2 [(1 + 2k22 ) + k14 + 2k12 (1 + 2k22 ) cos 2α] , (B) X2 (A) X3 (B)
For Mν
= 1−
k14
= 1−
4k24
−
4k24
−
k14
−
4k12 k22
cos 2α,
−
4k12 k22
cos 2α − 4k22 .
(7)
, they are given by √ 1/2 2 (B) X1 = 2 2l1 l2 [(l12 + 2l22 ) + 1 + 2(l12 + 2l22 ) cos 2β] , (B) X2 (B) X3
= 1+
4l22 cos 2β
= 1−
(l12
+
+
2 2l22 )
4l24
−
−
(8)
l14 ,
4l22 cos 2β.
One can further impose the requirement of tribimaximal mixing, i.e. θ12 = (A) sin−1 √13 ' 35.2◦ , which needs (Mν )11 + (Mν )12 = (Mν )22 + (Mν )23 . For Mν , (B)
α is then immediately fixed at π/2 and k1 = (2k22 + k2 − 1)1/2 . For Mν , β is then immediately fixed at cos−1 (l1 /4l2 ) and l2 = (1−l12 )1/2 /2. For category (A) R equals 3(k2 − 2)/(k2 + 1), while for category (B) it becomes 3l12 /2(1 − 2l12 ). 3. Phenomenology Experimentally fitted values of R and tan 2θ12 can be matched with the expression in Eq. (6) - Eq. (8). For category (A), only the spectrum with inverted ordering is found to be allowed. But no common allowed parameter ranges are found for (k1 , k2 , α) in the 1σ intervals of R = −(2.88 − 3.37) × 10−2 and θ12 = 33.15◦ − 35.91◦. The 3σ intervals R = −(2.46 − 3.99) × 10−2 and θ12 = 30.66◦ − 39.23◦ allow a thin strip (Fig. 1) in the k1 − k2 plane with 89◦ ≤ α ≤ 90◦ and 2.0 < k1 < 5.3, 1.2 < k2 < 3.7. The additional constraint of tribimaximal mixing, when α must be exactly π/2, confines k2 to the range 1.95 ≤ k2 ≤ 1.97. Improved experimental errors may thus rule out this category completely. Only the normally ordered spectrum is found to be allowed for category (B). But now there are significant allowed regions in the l1 − l2 plane both for 1σ and 3σ intervals R and θ12 . Specifically for the 3σ intervals R = (2.52 − 4.07) × 10−2 and θ12 = 30.66◦ − 39.23◦, two allowed branches appear (Fig.2) with β in the ranges 87◦ to 90◦ , 0.1 < l1 < 0.55 and 0.6 < l2 < 0.76. The further imposition of tribimaximal mixing forces the only one free variable left, namely l1 , to be within the range 0.11 ≤ l1 ≤ 0.15.
November 22, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.07˙Roy
296
4 Category A
3.5 3
k2
2.5 2 1.5 1 0.5 0 1
1.5
2
2.5
3
3.5
4
4.5
5
5.5
k1 Fig. 1. Variation of k1 and k2 in category A with µτ symmetry over the 3σ allowed ranges of R and θ12 .
0.76 0.74 0.72
Category B
l2
0.7 0.68 0.66 0.64 0.62 0.6 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.55 l1 Fig. 2.
Variation of l1 and l2 in Category B for the 3σ allowed ranges of R and θ12 .
November 22, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.07˙Roy
297
4. Radiative Lepton Flavor Violating Decay And Leptogenesis With α > β and l1 = le , l2 = lµ , l3 = lτ , the branching ratio for the decay lα → lβ γ can be written in mSUGRA scenarios (with universal scalar masses at a GUT scale Mχ ∼ 2 × 1016 GeV) as BR(lα → lβ γ) ∝ BR(lα → lβ ν ν¯)|(mD Lm†D )αβ |
(9)
with Lkl = ln
MX δkl , Mk
(10)
Mk being the mass of the kth heavy right chiral neutrino Nk . Now BR(τ → µγ) (A) vanishes for category (A) since (Mν )23 = 0. Otherwise, the allowed textures in both categories have (Mν )13 = (Mν )12 6= 0 and lead to nonzero BR(τ → eγ) and BR(µ → eγ) but with the relation BR(τ → eνe ν¯e ) BR(τ → eγ) ' ' 0.178. BR(µ → eγ) BR(µ → eνµ ν¯µ )
(11)
For leptogenesis, the flavor dependent lepton asymmetry in the standard notation is given for the Minimal Supersymmetric Standard Model by Γ(Ni → φ¯lα ) − Γ(Ni → φ† lα ) P εα = i † ¯ β [Γ(Ni → φlβ ) + Γ(Ni → φ lβ )] ! !−1 2 2 X M M 1 g2 j j α , Iij + Jijα 1 − 2 f ' 2 16πMW Mi2 Mi (m†D mD )
(12)
ii j6=i
α α Iij = Im(m†D )iα (mD )αj (m†D mD )ij = −Iji ,
Jijα
=
Im(m†D )iα (mD )αj (m†D mD )ji f (x) =
√
x
=
(13)
−Jjiα ,
(14)
(15)
2 1+x − ln . 1−x x
The flavor independent lepton asymmetry is X X 2 1 g2 [(m†D mD )ij ] f Mj2 /Mi2 . εi = εα = i 2 † 16πMW (mD mD ) α ii j6=i
(16)
2 For M1 << M2,3 , f (M2,3 /M12 ) ' −3M1 /M2,3 . Table 2. summarizes our statement on the leptogenesis parameters including the effective mass m f1 α = |(mD )α1 |2 /M1 for the washout of the α-flavor asymmetry
5. Deviation Due to RG Running Suppose µτ symmetry in the neutrino sector is imposed at a high scale Λ ∼ 1012 GeV. The neutrino mass matrix elements can then be evolved down to a laboratory scale λ ∼ 103 GeV by RG running. On account of the inequality mτ mµ me ,
November 22, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.07˙Roy
298 Table 2.
Leptogenesis Table
0
m f1 e
nonzero
m f1 µ 0
m f1 τ
–do–
0
nonzero
0
0
(3)
µ τ 6= 0, rest zero I12 = I13
0
nonzero
nonzero
(4)
µ τ 6= 0, rest zero I13 = I12
0
nonzero
nonzero
equals m f1 µ
α Iij
α Jij
(1)
e = I e 6= 0, rest zero I12 13
(2)
configuration mD
mD mD mD
0
equals m f1 µ
µτ symmetry is badly broken in the charged lepton sector. Deviations from µτ symmetry creep into Mν from loop diagrams with charged lepton internal lines. We m2 2 2 2 keep only mτ induced terms via ∆τ ' 8π2τv2 (tan2 β + 1) ln Λ λ with v = vu + vd and tan β = vu /vd , vu,d being up, down type Higgs VEVs in the MSSM. Then to O(∆τ ), (Mν )11 , (Mν )12 , (Mν )21 and (Mν )22 are unchanged but the remaining elements change to (1−∆τ )(Mν )13 , (1−∆τ )(Mν )31 , (1−∆τ )(Mν )23 , (1−∆τ )(Mν )32 and (1 − 2∆τ )(Mν )33 . Consequently, θ13 can be nonzero and θ23 different from π/4. One can redo the phenomenology with these changes. For category (A), the 3σ λ allowed strip in the k1 −k2 plane gets marginally extended now with 0 ≤ θ13 ≤ 2.7◦ , ◦ θ23 ≤ 45 while the inverted ordering is retained and the normal one excluded. For category (B), the 3σ allowed branches in the l1 − l2 plane are enhanced a bit more λ with 0 ≤ θ13 ≤ 0.85◦ , θ23 ≥ 45◦ while retaining the normal ordering and excluding the inverted one. 6. Conclusion (1) Just four neutrino Yukawa textures with four zeros are compatible with µτ (A) symmetry, leading to only two forms for the light neutrino mass matrix Mν (B) and Mν . (A,B) (2) For Mν , 3σ-allowed values of θ12 and R = ∆m221 /∆m232 admit restricted (A) regions in the parameter space with Mν being in some tension with data (3) The tribimaximal mixing assumption further restricts the the parameters. (4) Radiative deviations from µτ symmetry yield small values of θ13 and can resolve the θ23 octant ambiguity (< or > 45◦ ) Acknowledgments I thank K.S. Babu and my collaborators B. Adhikary and A. Ghosal for helpful discussions. This work has been supported by a DAE Raja Ramanna fellowship. References 1. A review and original references may be found in, H. Fritzsch and Z.Z. Xing, Prog. Part. Nucl. Phys. Rev. 45, 1 (2001), as well as in Phys. Lett. B 555, 63 (2003). See
November 22, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.07˙Roy
299
also K. S. Babu and J.Kubo, Phys. Rev. D 71, 056006 (2005). 2. G. C. Branco, D. Emmanuel-Costa, M. N. Rebelo and P. Roy, Phys. Rev. D 77, 053011 (2008). 3. S. Choubey, W. Rodejohann and P. Roy, Nucl. Phys. B 808, 272 (2009). 4. P. F. Harrison and W. G. Scott, Phys. Lett. B 547, 219 (2002). 5. J.A. Casas and A. Ibarra Nucl. Phys. B 618, 171 (2001). 6. M. C. Gonzalez-Garcia, M. Maltoni and J. Salvado, arXiv:1001.4524 [hep-ph] 7. B. Adhikary, A. Ghosal and P. Roy JHEP 0910, 040 (2009).
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
300
SENSITIVITY TO STERILE NEUTRINO MIXINGS AND THE DISCOVERY CHANNEL AT A NEUTRINO FACTORY OSAMU YASUDA∗ Department of Physics, Tokyo Metropolitan University, Minami-Osawa, Hachioji, Tokyo 192-0397, Japan ∗ E-mail: yasuda at phys.metro-u.ac.jp Sensitivity of a neutrino factory to various mixing angles in a scheme with one sterile neutrino is studied using νe → νµ , νµ → νµ , νe → ντ and νµ → ντ . While the “discoverychannel” νµ → ντ is neither useful in the standard three flavor scheme nor very powerful in the sensitivity study of sterile neutrino mixings, this channel is important to check unitarity and to probe the new CP phase in the scheme beyond the standard neutrino mixing framework. Keywords: Neutrino oscillation; sterile neutrino; neutrino factory
1. Introduction It is known that the deficit of the solar and atmospheric neutrinos are due to neutrino oscillations among three flavors of neutrinos, and these observations offer evidence of neutrino masses and mixings.1 The standard three flavor framework of neutrino oscillations are described by six oscillation parameters: three mixing angles θ12 , θ13 , θ23 , the two independent mass squared differences ∆m221 , ∆m231 , and one CP violating phase δ, where ∆m2jk ≡ m2j − m2k and mj stands for the mass of the neutrino mass eigenstate. From the solar neutrino data, we have (sin2 2θ12 , ∆m221 ) ' (0.86, 8.0 × 10−3 eV2 ), and from the atmospheric neutrino data we have (sin2 2θ23 , ∆m221 ) ' (1.0, 2.4×10−3eV2 ). On the other hand, only the upper bound on θ13 is known (sin2 2θ13 ≤ 0.19),a and no information on δ is known at present. To determine θ13 and δ, various neutrino long baseline experiments have been proposed,7 and the ongoing and proposed future neutrino long baseline experiments with an intense beam include conventional super (neutrino) beam experiments such as T2K,8 NOνA,11 LBNE,12 T2KK,9,10 the β beam proposal,13 which uses a νe (¯ νe ) beam from β-decays of radioactive isotopes, and the neutrino factory proposal, 14 in which ν¯e and νµ (νe and ν¯µ ) are produced from decays of µ− (µ+ ). As in the a In
Refs. 2, 3, 4, 5, 6, a global analysis of the neutrino oscillation data has been performed, in which a non-vanishing best-fit value for θ13 is found. This result, however, is compatible with θ13 = 0 at less than 2σ, and it is not yet statistically significant enough to be taken seriously.
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
301
case of the B factories,16,17 precise measurements in these experiments allow us not only to determine the oscillation parameters precisely but also to probe new physics by looking for deviation from the standard three flavor scheme. In particular, test of unitarity is one of the important subjects in neutrino oscillations, and tau detection is crucial for that purpose. Among the proposals for future long baseline experiments, the neutrino factory facility produces a neutrino beam of the highest neutrino energy, and it is advantageous to detect ντ , because of the large cross section at high energy. New physics which has been discussed in the context of neutrino oscillation includes sterile neutrinos,15 the non-standard interactions during neutrino propagation,18–20 the non-standard interactions at production and detection,21 violation of unitarity due to heavy particles,22 etc. These scenarios, except the non-standard interactions during neutrino propagation, offer interesting possibilities for violation of three flavor unitarity. Among these possibilities, phenomenological bound of unitarity violation is typically of O(1%) in the case of non-standard interactions at production and detection, and it is of O(0.1%) in the case of unitarity violation due to heavy particles.23 On the other hand, the bound in the case of sterile neutrinos is of O(10%) which comes mainly from the constraints of the atmospheric neutrino data,24 so scenarios with sterile neutrinos seem to be phenomenologically more promising to look for than other possibilities of unitarity violation. In this talk I will discuss phenomenology of schemes with sterile neutrinos at a neutrino factory. Schemes with sterile neutrinos have attracted a lot of attention since the LSND group announced the anomaly which suggest neutrino oscillations with mass squared difference of O(1eV2 ).25–27 The reason that we need one extra neutrino to account for LSND is because the standard three flavor scheme has only two independent mass squared differences, i.e., ∆m221 = ∆m2 ' 8×10−5eV2 for the solar neutrino oscillation, and |∆m231 | = ∆m2atm ' 2.4×10−3eV2 for the atmospheric neutrino oscillation, and it does not have room for the mass squared difference of O(1eV2 ). And the reason that the extra state has to be sterile neutrino, which is singlet with respect to the gauge group of the Standard Model, is because the number of weakly interacting light neutrinos has to be three from the LEP data.1 The LSND anomaly has been tested by the MiniBooNE experiment, and it gave a negative result for neutrino oscillations with mass squared difference of O(1eV 2 ).28 While the MiniBooNE data disfavor the region suggested by LSND, Ref. 37 gave the allowed region from the combined analysis of the LSND and MiniBooNE data, and it is not so clear whether the MiniBooNE data alone are significant enough to exclude the LSND region. On the other hand, even if the Miniboone data are taken as negative evidence against the LSND region, there still remains a possibility for sterile neutrino scenarios whose mixing angles are small enough to satisfy the constraints of Miniboone and the other negative results. The effect of these scenarios could reveal as violation of three flavor unitarity in the future neutrino experiments. So in this talk I will discuss sterile neutrino schemes as one of phenomenologically viable possibilities for unitarity violation, regardless of whether the LSND anomaly
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
302
is excluded by the MiniBooNE data or not. It has been known that sterile neutrino schemes may have cosmological problems (see, e.g., Ref. 29). However, these cosmological discussions depend on models and assumptions, and I will not discuss cosmological constraints in this talk. Also it has been pointed outb that some sterile neutrino models30 have absorption effects even for neutrino energy below 1TeV, but I will not take such effects into consideration for simplicity. 2. Schemes With Sterile Neutrinos For simplicity I will discuss schemes with four neutrinos, although phenomenology of the schemes with two31 or three32 sterile neutrinos have also been discussed. Denoting sterile neutrinos as νs , we have the following mixing between the flavor eigen states να (α = e, µ, τ ) and the mass eigenstates νj (j = 1, · · · , 4): νe Ue1 Ue2 Ue3 Ue4 ν1 νµ Uµ1 Uµ2 Uµ3 Uµ4 ν2 = ν τ Uτ 1 Uτ 2 Uτ 3 Uτ 4 ν 3 . νs Us1 Us2 Us3 Us4 ν4 There are two kind of schemes with four neutrinos, depending on how the mass eigenstates are separated by the largest mass squared difference. One is the (2+2)scheme in which two mass eigenstates are separated by other two, and the other one is the (3+1)-scheme in which one mass eigenstate is separated by other three (cf. Fig.1). m24
m24
m23
m23 m22 m21
(a)
Fig. 1.
m23 m21
(b)
The two classes of four–neutrino mass spectra, (a): (2+2) and (b): (3+1).
2.1. (2+2)-schemes In this scheme the fraction of sterile neutrino contributions to solar and atmospheric oscillations is given by |Us1 |2 + |Us2 |2 and |Us3 |2 + |Us4 |2 , respectively, where the b I would like to thank J. E. Kim for calling my attention to the possibility of the absorption effect due to the transition magnetic moments of neutrinos.
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
303
mass squared differences ∆m221 and |∆m243 | are assumed to be those of the solar and atmospheric oscillations. The experimental results show that mixing among active neutrinos give dominant contributions to both the solar and atmospheric oscillations (see, e.g., Ref. 38). In particular, in Fig. 19 of Ref. 38 we can see that at the 99% level |Us1 |2 + |Us2 |2 ≤ 0.25 and |Us3 |2 + |Us4 |2 ≤ 0.25, from the solar and atmospheric oscillations, respectively, and this contradicts the unitarity condition P4 2 38 This conj=1 |Usj | = 1. In fact the (2+2)-schemes are excluded at 5.1σ CL. clusion is independent of whether we take the LSND data into consideration or not and I will not consider (2+2)-schemes in this talk. 2.2. (3+1)-schemes Phenomenology of the (3+1)-scheme is almost the same as that of the standard three flavor scenario, as far as the solar and atmospheric oscillations are concerned. On the other hand, this scheme has tension between the LSND data and other negative results of the short baseline experiments. Among others, the CDHSW 33 and Bugey34 experiments give the bound on 1 − P (νµ → νµ ) and 1 − P (¯ νe → ν¯e ), respectively, and in order for the LSND data to be affirmative, the following relation has to be satisfied:35,36 1 sin2 2θLSND (∆m2 ) < sin2 2θBugey (∆m2 ) · sin2 2θCDHSW (∆m2 ), (1) 4 where θLSND (∆m2 ), θCDHSW (∆m2 ), θBugey (∆m2 ) are the value of the effective twoflavor mixing angle as a function of the mass squared difference ∆m2 in the allowed region for LSND (¯ νµ → ν¯e ), the CDHSW experiment (νµ → νµ ), and the Bugey experiment (¯ νe → ν¯e ), respectively. The (3+1)-scheme to account for LSND in terms of neutrino oscillations is disfavored because eq. (1) is not satisfied for any value of ∆m2 . This argument has been shown quantitatively by Ref. 38 including the atmospheric neutrino data and other negative results. In Fig.2 the right hand side of the lines denoted as “null SBL 90% (99%)” is the excluded region at 90% (99%) CL by the atmospheric neutrino data and all the negative results of short baseline experiments, whereas the allowed region by the combined analysis of the LSND and MiniBooNE data at 90% (99%) CL is also shown. In the following discussions I will assume the mass pattern depicted in Fig.1(b) because the inverted (3+1)-scheme is disfavored by cosmology, and I will also assume for simplicity that the largest mass squared difference ∆m241 is larger than O(0.1eV2 ), so that I can average over rapid oscillations due to ∆m241 in the long baseline experiments as well as in the atmospheric neutrino observations. 3. Sensitivity of a neutrino factory to the sterile neutrino mixings 3.1. Neutrino factories Unlike conventional long baseline neutrino experiments, neutrino factories use muon decays µ+ → e+ νe ν¯µ and µ− → e− ν¯e νµ to produce neutrinos. In the setup sug-
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
304 2
∆m241[eV2]
10
1
10
0
10
nufact FD 90%CL nufact ND 90%CL LSND+MB 99%CL
-1
10
LSND+MB 90%CL null SBL 99%CL null SBL 90%CL
-2
10
10-8
10-7
10-6
10-5
10-4
10-3
10-2
10-1
100
4|Ue4|2|Uµ4|2 Fig. 2. Sensitivity to 4|Ue4 |2 |Uµ4 |2 in the (3+1)-scheme of a 20 GeV neutrino factory with Far Detectors39 (FD) or with Near Detectors40 (ND). Also shown are the allowed region from the combined analysis of the LSND and MiniBooNE data37 as well as the excluded region by all the negative data of short baseline experiments and atmospheric neutrino observations. 37,38
gested in the Physics Report7 of International Scoping Study for a future Neutrino Factory and Super-Beam facility, muons of both polarities are accelerated up to Eµ = 20 GeV and injected into one storage ring with a geometry that allows to aim at two far detectors, the first located at 4000 km and the second at 7500 km from the source. The reason to put far detectors at two locations is to resolve socalled parameter degeneracy.44–47 The useful channels at neutrino factories are the following: • νe → νµ and ν¯e → ν¯µ : the golden channel • νµ → νµ and ν¯µ → ν¯µ : the disappearance channel • νe → ντ and ν¯e → ν¯τ : the silver channel • νµ → ντ and ν¯µ → ν¯τ : the discovery channelc At neutrino factories, electrons and positrons produced out of νe and ν¯e create electromagnetic showers, which make it difficult to identify their charges. On the other hand, charge identification is much easier for µ detection, so the golden channel νe → νµ is used unlike the conventional long baseline neutrino experiments which use νµ → νe . The golden channel turns out to be powerful because of very low backgrounds. The disappearance channel is also useful because of a lot of statistics. The golden and disappearance channels are observed by looking for muons with magnetized iron calorimeters.49 On the other hand, the silver and discovery channels c It
has been known48 that this channel is not useful in the standard three flavor framework. On the other hand, once one starts studying physics beyond the standard three flavor scenario, this channel becomes very important. This is the reason why it is called the discovery channel. 39
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
305
are observed by looking for τ ’s with emulsion cloud chambers (nonmagnetized50,51 or magnetized52 ), and the statistics of the silver channel is limited. The silver channel is useful to resolve parameter degeneracy. Combination of the golden, disappearance and discovery channels is expected to enable us to check unitarity. 3.2. Sensitivity of a neutrino factory with far detectors 39 Ref. 39 studied sensitivity of a neutrino factory with far detectors to sterile neutrino mixings. The setup is the following:d the muon energy is 20GeV, the number of useful muons is 5 × 1020 µ− ’s and µ+ ’s per year, the measurements are supposed to continue for 4 years, the baseline lengths are L=4000km and L=7500km, the volume of each magnetized iron calorimeter at the two distances is 50kton, that of each magnetized emulsion cloud chamber at the two distances is 4kton, and the statistical as well as systematic errors and the backgrounds are taken into account. At long baseline lengths such as L=7500km, matter effects become important. The oscillation probability in constant-density matter can be obtained by the formalism of Kimura, Takamura and Yokomakura.53,54e The oscillation probability in matter can be written as P (να → νβ ) = δαβ − 4
X
˜ βα X ˜ βα∗ ) sin2 (∆E˜jk L/2) Re(X j k
j
−2
X
˜ βα X ˜ βα∗ ) sin(∆E˜jk L), Im(X j k
j
˜ αβ X j
˜αj U ˜ ∗ , ∆E ˜jk ≡ E ˜j − E ˜k , E ˜j and U ˜αj are the energy where ≡ U βj eigenvalue and the neutrino mixing matrix element in matter defined by ˜ diag(E ˜1 , E ˜2 , E ˜3 , E ˜ 4 )U ˜ −1 U diag(0, ∆E21 , ∆E31 , ∆E41 )U −1 + diag(Ae , 0, 0, An ) = U 2 2 2 (∆Ejk ≡ Ej − Ek ' ∆mjk /2E ≡ (mj − mk )/2E). The matter potentials Ae , An √ √ are given by Ae = 2GF Ne , An = GF Nn / 2, where Ne and Nn are the density of electrons and neutrinos, respectively. The neutrino energy E and the baseline length L which are typical at a neutrino factory satisfy |∆m231 L/4E| ∼ O(1), |∆m221 L/4E| 1 and |∆m241 L/4E| 1, and the energy eigenvalues in this case to the lowest order in the small mixing angles and to first order in |∆m331 |/|∆m341 | ˜1 ∼ ∆E31 , E ˜2 ∼ 0, E˜3 ∼ Ae , E˜4 ∼ ∆E41 . It can be shown that the 4-th are E ˜ αβ in matter is the same as that in vacuum: X ˜ αβ ' X αβ , where the component X 4 4 4 αβ ∗ notation Xj ≡ Uαj Uβj has been also introduced for the quantity in vacuum. On d In
Ref. 39 an analysis was performed also for the case of muon energy 50GeV and the baseline lengths L=3000km and L=7500km, and it was shown that sensitivity with τ detectors increases for 50GeV because of higher statistics. In this talk, however, I will only mention the results for the neutrino factory with muon energy 20GeV for simplicity. e Another proof of the KTY formalism was given in Refs. 55,56 and it was extended to four neutrino schemes in Refs. 57,56. Analytic forms of the oscillation probability in the (3+1)-scheme were also given in Ref. 58.
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
306
the other hand, other three components are given by ˜ βα = −∆E ˜ −1 E˜ −1 {X βαE˜2 E ˜3 + ( E ˜2 + E ˜3 )P βα + Qβα } X 1 21 31 4 ˜ βα = +∆E ˜ −1 E˜ −1 {X βαE˜3 E ˜1 + ( E ˜3 + E ˜1 )P βα + Qβα } X 2 21 32 4 ˜ βα = −∆E ˜ −1 E˜ −1 {X βαE˜1 E ˜2 + ( E ˜1 + E ˜2 )P βα + Qβα }, X 3
31
32
4
(2)
where P βα ≡ {A(X4ee + X4ss /2) − Aαα − Aββ }X4βα + ∆E31 X3βα + ∆E21 X2βα Qβα ≡ X4βα {A2αα + Aαα Aββ + A2ββ − A(Aαα + Aββ )(X4ee + X4ss /2)} −∆E31 (∆E31 + Aαα + Aββ )X3βα −∆E21 (∆E21 + Aαα + Aββ )X2βα +A∆E31 (X4βe X3eα + X3βe X4eα + X4βs X3sα + X3βs X4sα ) +A∆E21 (X4βe X2eα + X2βe X4eα + X4βs X2sα + X2βs X4sα ).
(3)
In Eq. (3) Aαα = Ae δαe + An δαs is the matrix element of the matter potential, and no sum is understood over the indices α, β. If the sterile neutrino mixings ˜ βα (j = 1, 2, 3) reproduce those for the stanX4αβ (α = e, µ, τ ) are small, then X j dard three flavor case. These sterile neutrino mixings X4αβ appear in the coefficients ˜ βα (j = 1, 2, 3) in front of the sine factors sin2 (∆E ˜jk L/2), so we can get informaX j tion on the sterile neutrino mixings from precise measurements of the coefficients 2 of the oscillation mode sin2 (∆E˜jk L/4E) (j, k = 1, 2, 3), which are the dominant contribution to the probability. We have evaluated sensitivity numerically by taking matter effects into account, and the results are given in Figs.2-5. Since we have assumed ∆m241 > O(0.1eV2 ), the results for ∆m241 < 0.1 eV2 are not given in the figures. The advantage of measurements with the far detectors is that sensitivity is independent of ∆m241 and it is good even for lower values of ∆m241 in most cases. In particular, in the case of the golden channel νe → νµ , the far detectors improve the present bound on 4|Ue4 |2 |Uµ4 |2 by two orders of magnitude for all the values of ∆m241 > O(0.1eV2 ). The neutrino factory with far detectors, therefore, can provide a very powerful test of the LSND anomaly. Their disadvantage of measurements with the far detectors is that sensitivity is not as good as that of the near detectors, which will be described in the next subsection, at the peak. 3.3. Sensitivity of a neutrino factory with near detectors 40 In my talk I skipped the discussions on sensitivity of a neutrino factory with near detectors, but because of recent interest on the near detector issues,41–43 I will describe sensitivity of measurements with near detectors for the sake of completeness. In Ref. 40 sensitivity of a neutrino factory with near detectors to sterile neutrino mixings was studied. The setup used in this analysis is the following: the muon energy is 20GeV, the number of useful muons is 2 × 1020 µ− ’s per year, the measurements are supposed to continue for 5 years, the volume of a magnetized iron calorimeter at the distance L=40km is 40kton, that of an emulsion cloud chamber
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
307 102
∆m241[eV2]
90%CL 101
100
nufact FD nufact ND CCFR MiniBooNE νµ CDHSW
10-1 -4 10
10-3
10-2 2
10-1
100
2
4|Uµ4| (1-|Uµ4| ) Fig. 3. Sensitivity to 4|Uµ4 |2 (1 − |Uµ4 |2 ) in the (3+1)-scheme of a 20 GeV neutrino factory with Far Detectors39 or with Near Detectors.40 The excluded regions by CDHSW,33 by CCFR61 and by the MiniBooNE νµ data37 are also shown.
103
∆m241[eV2]
90%CL 102
101
nufact FD nufact ND CHORUS NOMAD
100
10-1 -6 10
10-5
10-4
10-3
10-2
2
2
4|Ue4| |Uτ4|
10-1
100
Fig. 4. Sensitivity to 4|Ue4 |2 |Uτ 4 |2 in the (3+1)-scheme of a 20 GeV neutrino factory with Far Detectors39 or with Near Detectors.40 The excluded regions by NOMAD59 and by CHORUS60 are also shown.
at L=1km is 1kton, and the statistical errors and the backgrounds are taken into account.f At such short baselines, |∆m241 L/2E| ∼ O(1) |∆m231 L/2E| |∆m221 L/2E| f In
this analysis the effects of the systematic errors were not taken into account. Their results can be refined in the future.
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
308
∆m241[eV2]
103
90%CL nufact FD nufact ND CHORUS NOMAD
102
101
100
10-1 -5 10
10-4
10-3
10-2 2
4|Uµ4| |Uτ4|
10-1
100
2
Fig. 5. Sensitivity to 4|Uµ4 |2 |Uτ 4 |2 in the (3+1)-scheme of a 20 GeV neutrino factory with Far Detectors39 or with Near Detectors.40 The excluded regions by NOMAD59 and by CHORUS60 are also shown.
is satisfied, so the only relevant mass squared difference is ∆m241 . So we have the following oscillation probabilities: P (νe → νµ ) ' 4 |Ue4 |2 |Uµ4 |2 sin2 (∆m241 L/4E) P (νµ → νµ ) ' 1 − 4|Uµ4 |2 (1 − |Uµ4 |2 ) sin2 (∆m241 L/4E) 2
2
2
2
P (νe → ντ ) ' 4 |Ue4 | |Uτ 4 | sin2 (∆m241 L/4E) P (νµ → ντ ) ' 4 |Uµ4 | |Uτ 4 | sin2 (∆m241 L/4E) Thus we can determine 4|Uα4 |2 |Uβ4 |2 or 4|Uµ4 |2 (1 − |Uµ4 |2 ) from the coefficient of the dominant oscillation mode sin2 (∆m241 L/4E). The results are shown in Figs.25. The mass squared difference for which this neutrino factory setup has the best performance depends on the baseline length L, and in the present case it is approximately 10eV2 . The advantage of measurements with the near detectors is that sensitivity to the sterile neutrino mixings is very good at the peak while their disadvantage is that sensitivity becomes poorer for lower values of ∆m241 . From these results, we conclude that the near and far detectors are complementary in their performance.
4. The CP phases due to new physics The results in the previous section suggest that the discovery channel νµ → ντ may not be so powerful in giving the upper bound on the mixing angles. To see the role of the discovery channel, let us now consider the effects of the CP phases in neutrino oscillations.
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
309
4.1. T violation in four neutrino schemes In matter T violation Pαβ − Pβα ≡ P (να → νβ ) − P (νβ → να ) is more useful than CP violation P (να → νβ ) − P (¯ να → ν¯β ), so I will discuss T violation in four neutrino schemes.g In the three flavor scheme it is known that T violation is given by ˜ ˜ ˜ ˜ βα X ˜ βα∗ ) sin ∆E21 L sin ∆E31 L sin ∆E32 L . Pαβ − Pβα = −16 Im(X 1 2 2 2 2 βα ˜ βα∗ 62 ˜ The Jarlskog factor Im(X1 X2 ) can be written as ˜ βα X ˜ βα∗ ) = Im(X βα X βα∗ )∆E21 ∆E31 ∆E32 /∆E ˜21 ∆E ˜31 ∆E˜32 . Im(X 1 2 1 2
(4)
(5)
If |∆E31 L| ∼ O(1), then the differences of the eigenvalues in this case are all of O(∆E31 ) in the zeroth order in sin2 θ13 . In that case the product of the sine facQ tors j
(6)
In the case of neutrino energy with |∆E31 L| ∼ O(1), the dominant contribution in Eq.(6) to the leading order in the small mixing angles is given by X ˜ βα X ˜ βα∗ ) sin ∆E˜jk L, Pαβ − Pβα ' 4 Im(X (7) j k (j,k)=(1,2),(1,3),(2,3)
where we have averaged over rapid oscillations due to ∆m241 , i.e., limx→∞ sin x sin(x+ θ) = cos θ/2. To compare T violation in the different schemes or in different channels, we have only to compare the Jarlskog factors Im(Xjβα Xkβα∗ ) in Eqs.(7) and Q ˜j,k L/2) in Eq.(4) and sin ∆E˜jk L in Eq.(7) (5), since the sine factors j
that we are not claiming that T violation can be measured experimentally for all the channels. The oscillation probability can be always decomposed into T conserving and T violating terms, Pαβ = (Pαβ + Pβα )/2 + (Pαβ − Pβα )/2, and the second term is proportional to sin δ in the standard three flavor framework62 in constant-density matter, as in the case in vacuum. so T violation is phenomenologically suitable to examine δ. In the case of CP violation, on the other hand, CP is violated in matter even if the CP phase vanishes. In practice, people perform a numerical analysis by fitting the hypothetical oscillation probability to the full data including neutrinos and anti-neutrinos, instead of measuring P (να → νβ ) − P (¯ να → ν¯β ) or P (να → νβ ) − P (νβ → να ). So discussions on CP violation or T violation should be regarded as tools to help us understand the results intuitively.
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
310
then it turns out the dominant contribution to the Jarlskog factor comes from ˜ βα X ˜ βα∗ ) with coefficients of O(1) if we plug Im(X3βα X4βα∗ ), which appears in Im(X j k Eq.(2) in Eq.(7). Furthermore, let us introduce a parametrization for the 4 × 4 mixing matrix with three CP phases δ` : U = R34 (θ34 , 0) R24 (θ24 , 0) R23 (θ23 , δ3 ) R14 (θ14 , 0) R13 (θ13 , δ2 ) R12 (θ12 , δ1 ) , where Rij (θij , δl ) are the 4 × 4 complex rotation matrices defined by cos θij p = q = i, j p = q 6= i, j 1 −iδ l [Rij (θij , δl )]pq = sin θij e p = i; q = j . iδ − sin θij e l p = j; q = i 0 otherwise.
θ14 stands for the mixing angle in short baseline reactor neutrino oscillations, and θ24 (θ34 ) represents the ratio of the oscillation modes due to ∆m231 and ∆m241 (the ratio of the active and sterile neutrino oscillations) in the atmospheric neutrinos, respectively. In the limit when these extra mixing angles θj4 (j = 1, 2, 3) become zero, δ2 becomes the standard CP phase in the three flavor scheme. The explicit forms of the mixing matrix elements Uαj can be found in the Appendix A in Ref. 39. From the constraints of the short baseline reactor experiments and the atmospheric neutrino data, these angles are constrained as24 θ14 . 10◦ , θ24 . 12◦ , θ34 . 30◦ . If we assume the upper bounds for θj4 (j = 1, 2, 3) and θ13 , for which we have θ13 . 13◦ , then together with the best fit values for the solar and atmospheric oscillation angles θ12 ' 30◦ , θ23 ' 45◦ , we obtain the following Jarlskog factor: 4 Im(X3eµ X4eµ∗ ) 4flavor ' 4|s23 s13 s14 s24 sin(δ3 − δ2 )| . 0.02 | sin(δ3 − δ2 )| 4 Im(X3µτ X4µτ ∗ ) 4flavor ' 4|s23 s24 s34 sin δ3 | . 0.2 | sin δ3 | for the (3+1)-scheme, where cjk ≡ cos θjk and sjk ≡ sin θjk . These results should be compared with the standard Jarlskog factor: 4 Im(X1eµ X2eµ∗ ) 3flavor = (1/2)|c13 sin 2θ12 sin 2θ23 sin 2θ13 sin δ| . 0.2 | sin δ|.
Notice that the Jarlskog factor is independent of the flavor (α, β) in the three flavor case. Assuming that all the CP phases are maximal, i.e., | sin δ3 | ∼ | sin(δ3 − δ2 )| ∼ | sin δ| ∼ O(1), the ratio of T violation in the (3+1)-scheme for the two channel and that in the standard three flavor scheme is given by |Peµ − Pµe |4flavor : |Pµτ − Pτ µ |4flavor : |Peµ − Pµe |3flavor ∼ 0.02 : 0.2 : 0.006. Note that the term which would reduce to the three flavor T violation in the limit θj4 → 0 (j = 1, 2, 3) is contained in Eq.(7) as a subdominant contribution which is suppressed by |∆m221 /∆m231 | ∼ 1/30. From this we see that dominant contribution to T violation in the (3+1)-scheme could be potentially much larger when measured with the discovery channel than that in the (3+1)-scheme with the golden channel or than that in the standard three flavor scheme. In fact it was shown in Ref. 39 by a detailed analysis that the CP phase may be measured using the discovery and disappearance channels.
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
311
4.2. CP violation in unitarity violation due to heavy fields In generic see-saw models the kinetic term gets modified after integrating out the right handed neutrino and unitarity is expected to be violated.22 In the case of the so-called minimal unitarity violation, in which only three light neutrinos are involved and sources of unitarity violation are assumed to appear only in the neutrino sector, deviation from unitarity is strongly constrained from the rare decays of charged leptons. Expressing the nonunitary mixing matrix N as N = (1 + η)U , where U is a unitary matrix while η is a hermitian matrix which stands for deviation from unitarity, the bounds are typically |ηαβ | < O(0.1%).23 The CP asymmetry in this scenario in the two flavor framework can be expressed as63 Pαβ − Pα¯ β¯ −4|ηαβ | sin(arg(ηαβ )) ∼ . Pαβ + Pα¯ β¯ sin(2θ) sin (∆EL/2) The constraint on ηµτ is weaker that than on ηeµ , and it was shown63,64 that the CP violating phase arg(ηαβ ) may be measured at a neutrino factory with the discovery channel. 5. Summary In this talk I described sensitivity of a neutrino factory to the sterile neutrino mixings. The golden channel νe → νµ improves the present upper bound on 4|Ue4 |2 |Uµ4 |2 by two orders of magnitude, and provides a powerful test for the LSND anomaly. It is emphasized that τ detection at a neutrino factory is important to check unitarity, and the discovery channel νµ → ντ is one of the promising channels to look for physics beyond the standard three flavor scenario. We may be able measure the new CP violating phase using this channel in the sterile neutrino schemes and in the scenario with unitarity violation due to heavy particles. Acknowledgments I would like to thank H.V. Klapdor-Kleingrothaus, R.D. Viollier and other organizers for invitation and hospitality during the conference. I would also like to thank A. Donini, K. Fuki, J. Lopez-Pavon and D. Meloni for collaboration on Ref. 39. This research was supported in part by a Grant-in-Aid for Scientific Research of the Ministry of Education, Science and Culture, #21540274. References 1. C. Amsler et al. [Particle Data Group], Phys. Lett. B 667 (2008) 1. 2. G. L. Fogli, E. Lisi, A. Marrone, A. Palazzo and A. M. Rotunno, Phys. Rev. Lett. 101 (2008) 141801 [arXiv:0806.2649 [hep-ph]]. 3. G. L. Fogli, E. Lisi, A. Marrone, A. Palazzo and A. M. Rotunno, arXiv:0809.2936 [hep-ph]. 4. H. L. Ge, C. Giunti and Q. Y. Liu, arXiv:0810.5443 [hep-ph].
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
312
5. G. L. Fogli, E. Lisi, A. Marrone, A. Palazzo and A. M. Rotunno, arXiv:0905.3549 [hep-ph]. 6. M. C. Gonzalez-Garcia, M. Maltoni and J. Salvado, arXiv:1001.4524 [hep-ph]. 7. A. Bandyopadhyay et al. [ISS Physics Working Group], Rept. Prog. Phys. 72, 106201 (2009) [arXiv:0710.4947 [hep-ph]]. 8. Y. Itow et al. [The T2K Collaboration], arXiv:hep-ex/0106019. 9. M. Ishitsuka, T. Kajita, H. Minakata and H. Nunokawa, Phys. Rev. D 72, 033003 (2005) [arXiv:hep-ph/0504026]. 10. K. Hagiwara, N. Okamura and K. i. Senda, Phys. Lett. B 637, 266 (2006) [Erratumibid. B 641, 486 (2006)] [arXiv:hep-ph/0504061]. 11. D. S. Ayres et al. [NOvA Collaboration], arXiv:hep-ex/0503053. 12. J. Maricic [LBNE DUSEL Collaboration], J. Phys. Conf. Ser. 203, 012109 (2010). 13. P. Zucchelli, Phys. Lett. B 532 (2002) 166. 14. S. Geer, Phys. Rev. D 57 (1998) 6989 [Erratum-ibid. D 59 (1999) 039903] [arXiv:hepph/9712290]. 15. Some of the references on sterile neutrinos are found at the Neutrino Unbound web page, by C. Giunti and M. Laveder, http://www.nu.to.infn.it/Sterile Neutrinos/. 16. Belle experiment, http://belle.kek.jp/. 17. Babar experiment, http://www-public.slac.stanford.edu/babar/. 18. L. Wolfenstein, Phys. Rev. D 17, 2369 (1978). 19. M. M. Guzzo, A. Masiero and S. T. Petcov, Phys. Lett. B 260 (1991) 154. 20. E. Roulet, Phys. Rev. D 44 (1991) 935. 21. Y. Grossman, Phys. Lett. B 359 (1995) 141 [arXiv:hep-ph/9507344]. 22. S. Antusch, C. Biggio, E. Fernandez-Martinez, M. B. Gavela and J. Lopez-Pavon, JHEP 0610 (2006) 084 [arXiv:hep-ph/0607020]. 23. S. Antusch, J. P. Baumann and E. Fernandez-Martinez, Nucl. Phys. B 810, 369 (2009) [arXiv:0807.1003 [hep-ph]]. 24. A. Donini, M. Maltoni, D. Meloni, P. Migliozzi and F. Terranova, JHEP 0712, 013 (2007) [arXiv:0704.0388 [hep-ph]]. 25. C. Athanassopoulos et al. [LSND Collaboration], Phys. Rev. Lett. 77 (1996) 3082 [arXiv:nucl-ex/9605003]. 26. C. Athanassopoulos et al. [LSND Collaboration], Phys. Rev. Lett. 81 (1998) 1774 [arXiv:nucl-ex/9709006]. 27. A. Aguilar et al. [LSND Collaboration], Phys. Rev. D 64 (2001) 112007 [arXiv:hepex/0104049]. 28. A. A. Aguilar-Arevalo et al. [The MiniBooNE Collaboration], Phys. Rev. Lett. 98, 231801 (2007) [arXiv:0704.1500 [hep-ex]]. 29. A. Y. Smirnov and R. Zukanovich Funchal, Phys. Rev. D 74, 013001 (2006) [arXiv:hepph/0603009]. 30. J. E. Kim, Phys. Rev. Lett. 41, 360 (1978). 31. M. Sorel, J. M. Conrad and M. Shaevitz, Phys. Rev. D 70, 073004 (2004) [arXiv:hepph/0305255]. 32. M. Maltoni and T. Schwetz, Phys. Rev. D 76, 093005 (2007) [arXiv:0705.0107 [hepph]]. 33. F. Dydak et al., Phys. Lett. B 134, 281 (1984). 34. Y. Declais et al., Nucl. Phys. B 434, 503 (1995). 35. N. Okada and O. Yasuda, Int. J. Mod. Phys. A 12, 3669 (1997) [arXiv:hepph/9606411]. 36. S. M. Bilenky, C. Giunti and W. Grimus, Eur. Phys. J. C 1, 247 (1998) [arXiv:hep-
November 22, 2010
19:15
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.08˙Yasuda
313
ph/9607372]. 37. G. Karagiorgi, Z. Djurcic, J. M. Conrad, M. H. Shaevitz and M. Sorel, Phys. Rev. D 80, 073001 (2009) [Erratum-ibid. D 81, 039902 (2010)] [arXiv:0906.1997 [hep-ph]]. 38. M. Maltoni, T. Schwetz, M. A. Tortola and J. W. F. Valle, New J. Phys. 6, 122 (2004) [arXiv:hep-ph/0405172v6]. 39. A. Donini, K. i. Fuki, J. Lopez-Pavon, D. Meloni and O. Yasuda, JHEP 0908, 041 (2009) [arXiv:0812.3703 [hep-ph]]. 40. A. Donini and D. Meloni, Eur. Phys. J. C 22, 179 (2001) [arXiv:hep-ph/0105089]. 41. J. Tang and W. Winter, Phys. Rev. D 80, 053001 (2009) [arXiv:0903.3039 [hep-ph]]. 42. Main Injector Non Standard Interactions Search, http://www-off-axis.fnal.gov/MINSIS/. 43. Madrid Neutrino NSI Workshop, UAM, Madrid, 10-11 December 2009, http://www.ft.uam.es/workshops/neutrino/default.html. 44. J. Burguet-Castell, M. B. Gavela, J. J. Gomez-Cadenas, P. Hernandez and O. Mena, Nucl. Phys. B 608, 301 (2001) [arXiv:hep-ph/0103258]. 45. H. Minakata and H. Nunokawa, JHEP 0110, 001 (2001) [arXiv:hep-ph/0108085]. 46. G. L. Fogli and E. Lisi, Phys. Rev. D 54 (1996) 3667 [arXiv:hep-ph/9604415]; 47. V. Barger, D. Marfatia and K. Whisnant, Phys. Rev. D 65, 073023 (2002) [arXiv:hepph/0112119]. 48. A. Donini, talk at the 0th IDS plenary Meeting, CERN 29-31 March 2007, http://www.hep.ph.ic.ac.uk/ids/communication/cern-2007-03-29/slides/ IDStalk-Donini.pdf. 49. A. Cervera-Villanueva, AIP Conf. Proc. 981, 178 (2008). 50. A. Donini, D. Meloni and P. Migliozzi, Nucl. Phys. B 646 (2002) 321 [arXiv:hepph/0206034]. 51. D. Autiero et al., Eur. Phys. J. C 33 (2004) 243 [arXiv:hep-ph/0305185]. 52. T. Abe et al. [ISS Detector Working Group], arXiv:0712.4129 [physics.ins-det]. 53. K. Kimura, A. Takamura and H. Yokomakura, Phys. Lett. B 537, 86 (2002) [arXiv:hepph/0203099]. 54. K. Kimura, A. Takamura and H. Yokomakura, Phys. Rev. D 66, 073005 (2002) [arXiv:hep-ph/0205295]. 55. Z. z. Xing and H. Zhang, Phys. Lett. B 618 (2005) 131 [arXiv:hep-ph/0503118]. 56. O. Yasuda, arXiv:0704.1531 [hep-ph]. 57. H. Zhang, Mod. Phys. Lett. A 22, 1341 (2007) [arXiv:hep-ph/0606040]. 58. A. Dighe and S. Ray, Phys. Rev. D 76, 113001 (2007) [arXiv:0709.0383 [hep-ph]]. 59. P. Astier et al. [NOMAD Collaboration], Nucl. Phys. B 611, 3 (2001) [arXiv:hepex/0106102]. 60. E. Eskut et al. [CHORUS Collaboration], Nucl. Phys. B 793, 326 (2008) [arXiv:0710.3361 [hep-ex]]. 61. I. E. Stockdale et al., Phys. Rev. Lett. 52, 1384 (1984). 62. V. A. Naumov, Int. J. Mod. Phys. D 1, 379 (1992). 63. E. Fernandez-Martinez, M. B. Gavela, J. Lopez-Pavon and O. Yasuda, Phys. Lett. B 649, 427 (2007) [arXiv:hep-ph/0703098]. 64. S. Antusch, M. Blennow, E. Fernandez-Martinez and J. Lopez-Pavon, Phys. Rev. D 80, 033002 (2009) [arXiv:0903.3986 [hep-ph]].
November 22, 2010
19:20
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.09˙Novella
314
SEARCHING FOR THE MIXING ANGLE θ13 WITH REACTOR NEUTRINOS P. NOVELLA CIEMAT, Av. Complutense 22, Madrid, 28040, Spain E-mail:
[email protected] The discovery of neutrino oscillations is a direct indication of physics beyond the Standard Model. The so-called atmospheric and solar sectors have been explored by several experiments, meanwhile the mixing angle θ13 connecting both sectors remains unknown. In contrast to accelerator experiments, reactor neutrinos arise as a clean probe to search for a non-vanishing value of this angle. A new generation of multi-detector reactor experiments, starting operation by 2010-2011, aims at achieving sensitivities to sin 2 (2θ13 ) down to 0.01. This will allow for the exploration of the first hints pointing to a non-zero value of θ13 , provided by global fits of available neutrino data. Keywords: Neutrino physics; neutrino oscillations; reactor neutrinos; Double Chooz; RENO; Daya Bay.
1. Nuclear Reactors in Neutrino History Reactor neutrinos have already played a major role in the history of neutrino physics. Although Pauli predicted the exitence of the neutrino in 1930, it was only 25 years later when Reines and Cowan managed to detect the electron antineutrinos (¯ νe ) generated at nuclear power plants. About fifty years later, reactor neutrinos were used in the KamLAND experiment1 to measure with great accuracy the oscillation in the solar sector. In addition to other oscillation experiments, this proved the massive nature of neutrinos and therefore the existence of physics beyond the Standard Model. Nowadays, the mixing angle θ13 is the only one left to be measured and reactor neutrinos arise again as the main tool to derive its value. 2. One Step Beyond in Neutrino Oscillation Physics Neutrino oscillation data can be described within a three neutrino mixing scheme, in which the flavor states να (e, ν, µ) are connected to the mass states νi (i=1,2,3) through the PMNS mixing matrix UP M N S .2 This matrix can be expressed as the product of three matrices where the mixing parameters remain decoupled: UP M N S = Uatm · Uinter · Usol . The terms Uatm and Usol describe the mixing in the so-called atmospheric and solar sectors, which are driven by the mixing angles θ23 and θ12 , respectively. The Uinter matrix stands for the interference sector
November 22, 2010
19:20
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.09˙Novella
315
which connects the previous two, according to the mixing angle θ13 and the phase δ responsible for the CP violation in th leptonic sector. Finally, the oscillation probability between two neutrino species becomes a function of the above oscillation parameters and the two independent mass squared differences ∆m2ij = m2i − m2j . The KamLAND experiment has explored the oscillation in the solar sector and provided allowed and best fit values for θ12 and ∆m221 ,1 showing consistency with solar experiments data. In the same way, the MINOS experiment3 has published results for atmospheric sector (θ23 and |∆m231 |), being consistent with atmospheric neutrino data. However, the subdominant oscillation corresponding to the interference sector has not been observed yet. Results from CHOOZ experiment4 show at 90% C.L. that sin2 (2θ13 ) < 0.15 for |∆m231 | = 2.5 × 10−3 . Provided that δ appears in Uinter only in combination with sin2 (2θ13 ), the CP-violating phase also remains unknown. As a direct consequence, the search for the third mixing angle stands as one of the major open issues in neutrino oscillation physics.
3. Reactor Neutrinos in the Quest for θ13 Nuclear reactors produce nearly pure ν¯e fluxes coming from β decay of fission fragments. A typical core delivers about 2 × 1020 ν¯e per second and GWth of thermal power. Such high isotropic fluxes compensate for the small neutrino cross-section and allow for an arbitrary location of neutrino detectors, scaling the flux with 1/L 2 where L is the distance between the core and the detector. Any oscillation effect in the ν¯e survival is governed by the following equation:
P (¯ νe → ν¯e ) ∼ = 1 − sin2 2θ13 sin2 (
∆m231 L ∆m221 L ) − cos4 θ13 sin2 2θ12 sin2 ( ) 4Eν 4Eν
(1)
where Eν is the neutrino energy. The second and third terms of Eq. 1 describe the oscillation driven by θ13 and θ12 (solar regime), respectively. The value of θ13 can be derived directly by measuring P (¯ νe → ν¯e ). Notice that in contrast to accelerator neutrino experiments, this measurement does not suffer from the δ −θ13 degeneracy. The most common way of detecting reactor neutrinos is via the inverse beta β-decay (IBD) p + ν¯e → n + e+ . When this reaction takes place in liquid scintillator doped with Gadolinium, it produces two signals separated by about ∼ 30 µs: the first one due to the e+ and its annihilation, and the second one due to the n capture in a Gd nucleus. This characteristic signature yields a very efficient background rejection. The e+ energy pectrum peaks at ∼ 3MeV and can be related to Eν . The mean energy of the ν¯e spectrum in a detector filled by such a scintillator is around 4 MeV, as shown in panel (a) of Fig. 1. According to Eq. 1, for this energy the oscillation effect due to θ13 starts to show up at L ∼ 0.5 km, where the effect of θ12 is still negligible. Therefore, neutrino reactor experiments with short baselines become a clean laboratory to search for θ13 .
November 22, 2010
19:20
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.09˙Novella
316
4. Getting the Most From Reactor Experiments The sensitivity to the θ13 -driven oscillation is optimized by detecting a deficit in the expected neutrino events around 1 km away from the nuclear power plant, as shown in panel (b) of Fig. 1. However, some of the largest systematics in reactor experiments arise from the uncertainties in the original ν¯e fluxes. In order to reduce them, a relative comparison between two or more identical detectors located at different distances from the reactors becomes critical. In particular, a detector placed a few hundred meters away can measure the fluxes before any oscillation takes place, as demonstrated in panel (b) of Fig. 1. The comparison between the so-called far and near detectors leads to a breakthrough in the sensitivity to θ13 , as all the fully correlated systematics cancel out. Further steps in the sensitivity optimization relay on reducing the relative normalization and the relative energy scale uncertainties of the detectors, as well as on minimizing the backgrounds. 5. Improving the CHOOZ Experience
P(ν e → νe )
arbitrary units
In order to get sensitivities to θ13 down to 0.01 with detectors of a reasonable size, a multi-detector experimental set up is required. Following this strategy, a new generation of reactor experiments has been planned for the incoming years. Double Chooz experiment5 aims at operating two identical detectors located 400 m and 1050 m away from the two reactor cores, yielding 4.25 GWth of thermal power each one, of the CHOOZ nuclear plant (France). This experiment consists of two phases. First phase, starting in 2010, uses only the far detector and will be able to improve CHOOZ results in a few months of operation. Second phase (2012) takes advantage of the near detector and will improve CHOOZ sensitivity by a factor 5. A similar approach to the Double Chooz one is being developed by the RENO collaboration.6 RENO consists of two identical detectors meant to measure neutrino fluxes generated at the 6 cores (17.3 GWth in total) of the Youngwang
Neutrino visible energy Reactor neutrino flux IBD cross−section
1
0.9
0.7
0.6
Far Detector
Near Detector
0.8
sin2 (2θ 13 )=0.15 ∆m 212 =8 10
−5
eV 2
−3
2 ∆m 23 =2.5 10 eV 2
sin2 (2θ 12 )=0.87
0.5
0
2
4
6
8
10
0.4
102
3
10
Energy (MeV)
(a)
104
L (m)
(b)
Fig. 1. (a) ν¯e visible spectrum as a result of the flux shape and IBD cross-section. (b) ν¯e survival probability for Eν = 3 MeV, as a function of the distance L.
November 22, 2010
19:20
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.09˙Novella
317
nuclear plant (South Korea), and starts its operation also in 2010. Finally, a more ambitious project is the Daya Bay (China) experiment,7 which will be ready by 2011. A far site and two near sites, consisting of two twin detectors each one, will be built to measure ν¯e fluxes from the 6 cores (17.4 GWth in total) existing in the bay. As the detectors can be moved along a network of tunnels, several measurements at different distances from the cores are foreseen. The final sensitivity of the above three experiments will depend basically on the statistics achieved and the control of the systematics. Estimations from the three collaborations are summarized in Tab. 1. Table 1. Comparison of the Double Chooz, RENO and Daya Bay experiments in terms of statistical uncertainties (σstats ), systematic errors (σsys ) and the corresponding sensitivity to θ13 (sin2 (2θ13 )lim ), after 3 years of data taking.
Double Chooz RENO Daya Bay
σstats (%)
σsys (%)
sin2 (2θ13 )lim 90% C.L.
0.5 0.3 0.2
0.6 0.5 0.4
0.03 0.02 0.01
6. θ13 Around the Corner? Although there are no direct experimental evidences pointing to a non-zero value of the mixing angle θ13 , a global analysis of the available neutrino data shows a preference (the so-called hint), for θ13 6= 0.8 Independent analyses of atmospheric neutrino data, solar and KamLAND data, and latest MINOS results in the appearance of νe , lead to this hint. A combination of these three analyses provides an indication of θ13 > 0 at the 2σ (95% C.L.) level. Accordingly, best fit values for sin2 (2θ13 ) are around 0.04-0.08. Therefore, reactor neutrino experiments will be able to explore this hint in the incoming years. The confirmation of a non-vanishing value of this angle by reactor experiments would open the door for the search of the CP violation in the leptonic sector, the determination of the neutrino mass hierarchy (sign of ∆m231 ) and the ultimate definition of the future neutrino facilities. References 1. 2. 3. 4. 5. 6. 7. 8.
S. Abe et al. [KamLAND Collaboration], Phys. Rev. Lett. 100 (2008). M. C. Gonzalez-Garcia and M. Maltoni, Phys. Rept. 460 (2008). P. Adamson et al. [MINOS Collaboration], Phys. Rev. Lett. 101 (2008). M. Apollonio et al. [CHOOZ Collaboration], Eur. Phys. J. C 27 (2003). F. Ardellier et al. [Double Chooz Collaboration], arXiv:hep-ex/0606025. J. K. Ahn et al. [RENO Collaboration], arXiv:hep-ex/1003.1391. X. Guo et al. [Daya-Bay Collaboration], arXiv:hep-ex/0701029. G. L. Fogli, E. Lisi X. Guo et al., J. Phys. Conf. Ser. 203 (2010).
November 26, 2010
17:57
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.10˙Kawasaki
318
STATUS OF THE DOUBLE CHOOZ EXPERIMENT TAKEO KAWASAKI∗ (on behalf of the Double Chooz Collaboration) Niigata University, Department of Physics, Niigata 950 2181, Japan ∗ E-mail:
[email protected] The goal of the Double Chooz reactor experiment is the measurement of the θ 13 neutrino mixing angle using two identical detectors at two different distances. Now the construction of the far detector is in the final stage and the data taking will be started soon. The near detector is expected to be started 2 years later. After 3 years of operation, the sensitivity on sin2 2θ13 will be 0.03. The paper presents the overview, the status and the prospects of the experiment. Keywords: Neutrino oscillation, Reactor neutrino
1. Introduction Neutrino oscillation is a way to prove particle physics beyond the Standard Model. Most of the oscillation parameters have already been measured except θ13 for which we have only upper limit, sin2 2θ13 < 0.2, given by the CHOOZ reactor experiment.2 The sensitivity is limited by both statistics and systematic errors. In order to explore the value of θ13 , accelerator experiments (T2K, NOvA and so on) are being prepared and just started. The reactor neutrino experiment, however, is a cost-effective way to extend our sensitivity reach for θ13 measurement and is complementary to accelerator experiments. The Double Chooz experiment will be able to measure θ13 within a few years if is it not too small (sin2 2θ13 > 0.03). 2. Experiment Overview Double Chooz detectors will be installed in the Chooz two-core (4.27+4.27GW th) nuclear power plant, in the north France. Neutrino detection will be done through inverse-beta decay ν¯e p → e+ n in liquid scintillator. The scintillation light from the positron has information of ν¯e energy. The delayed signal (∆t ∼ 30µs) at the energy of about 8MeV is given by the neutron capture on the gadolinium, which is loaded to the liquid scintillator. The gadolinium gives large capture cross-section for thermal neutrons. The detection of a delayed coincidence signal reduces drastically the accidental backgrounds. To reduce the uncertainties coming from the neutrino flux, cross section and detector induced ones, we put two identical detectors. Far detector is installed at
November 26, 2010
17:57
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.10˙Kawasaki
319
the place, where the first Chooz detector was operated at 1,050 m far from reactor core, to measure ν¯e disappearance. The distance is near the oscillation maximum. The second detector is situated at closer (∼400m from core) position. The Double Chooz experiment will be started and integrated in two stages. The operation with only far detector will start in Summer 2010. Thanks to improved detector design and long stable operation, we will have better sensitivity than CHOOZ result in the first stage. After the operation of the near detector starts, the uncertainty on the neutrino flux and reactor power will be drastically reduced. We expect to have 15,000 events for the far detector and 300,000 events for the near detector per year. The statistical error will be 0.5% with data of three year operation in two detector phase.
Fig. 1.
Layout of the Double Chooz detector at far site.
3. Detector The detector is designed to have high efficiency to detect the ν¯e signal and to reduce the background events due to detector radioactivity and cosmic rays. The details of the detector are summarized elsewhere.3 Detector consists of four concentric cylindrical structures (Figure 1). The innermost volume is the neutrino target. The transparent acrylic vessel is filled by 10.3 m3 of Gd-doped liquid scintillator. A 55 cm thick volume, called gamma-catcher,
November 26, 2010
17:57
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.10˙Kawasaki
320
surrounds the target. Its acrylic envelop contains standard liquid scintillator (without Gd). It is designed to catch the γ-rays from Gd and positron annihilation in the central target region. This scintillating buffer around the target is necessary for reconstructing the original energy of the gammas from neutron capture on Gd and from positron annihilation and to avoid the fiducial volume cut and to reduce the systematic uncertainties. Finally, a 1.05m thick buffer region of non-scintillating liquid (110m3 ) serves to decrease the level of accidental background. This region is crucial to keep the single rates by photomultiplier tube(PMT) radioactivity below 10Hz in the sensitive region. 390 PMTs are installed on the inner wall of the steel vessel containing the oil. The buffer tank is surrounded by a 50 cm thick inner veto region filled with liquid scintillator. Inner veto is monitor by 78 8 inch PMTs to identify cosmic muons which pass the active region of detector. The whole detector is surrounded by 15 cm thick demagnitized iron to shield the external gamma background coming from the rock surrounding the detector. As outer veto, a plastic-scintillator tracker system will identify and locate cosmic muons, which is the origin of main background events. This improves the muon rejection by a factor 20 compared to that of the inner veto system. The inner volumes of the near and the far detectors will be identical. The shape of the outer veto system and shield will change since the overburden and the design structure of the experiment halls are different between the sites. 3.1. Liquid scintillator In the neutrino target volume, Double Chooz will employ a mixture of 20% of PXE and 80% of dodecane with small quantities of PPO, bis-MSB and 1g/` Gd-loading. Metal doped liquid scintillator might be unstable during long operation and it has been necessary to develop stable one since Double Chooz will operate during more than 3 years and the two detectors will be installed with a different timing. Degradation of scintillator performance affects to the performance of the experiment. After a systematic study for several years, MPIK Heidelberg succeeded to develop a stable liquid scintillator with Gd-β-diketonates. The long term stability of the scintillator has been carefully tested. The transmission and light yield is confirmed to be stable for 400 days under 20 ∼ 40◦ C. MPIK is also responsible to produce large amount of liquid scintillator. The scintillator needs to be produced together to assure identical number of protons per volume concentration in both detectors. Radioactivity of liquid scintillator will also be controlled by the purification process during the liquid scintillator production and at detector filling time. 3.2. PMT system Light produced in the liquid scintillator in the target region is detected by PMT system. We employ 390 10 inch PMTs for the inner target. The PMTs are mounted
November 26, 2010
17:57
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.10˙Kawasaki
321
on the stainless steel buffer tank assembled with support structure and magnetic shield. The optical coverage of the PMTs is 13%. The performance of PMT is crucial to assure the quality of whole experiment. We have developed very low-background and large-area PMT with Hamamatsu Photonics K.K. (HPK) in Japan. The PMT glass materials was selected carefully and melted in platinum coated pots to avoid contaminations of radioactive elements from the pot wall, to fulfill the tight requirement on radioactivity in order to reduce the single rate around inner target. We purchased 800 PMTs in total. All delivered PMTs have been tested for various items to check the performance and the characteristics of each PMT. We have measured the dark noise rate, the peak to valley ratio, the transition time and its spread for each PMT. After transportation to the experimental site, simple tests were performed just before the installation to confirm that PMT have not been damaged. We also use 78 8-inch PMTs for the inner muon veto. They were mounted on the wall of inner veto vessel. 3.3. DAQ system The signal from PMTs of inner target and the inner veto are transported via high voltage cable and separated by ”splitter” to be sent to readout electronics. The waveform digitization is performed to record pulse shape for each PMT signal by the 8-bit Flash-ADC operated at 500 MHz clock. It will be useful in offline analysis to make background rejection and particle identification. Intelligent data handling from FADC memory realize the deadtimeless operation. The trigger and timing system will distribute a 62.5 MHz clock to the whole detector. For the outer veto system, the light from the plastic scintillator strips are detected with multi anode PMTs through wavelength shifting fiber. The signal from PMTs are digitized by ADC and stored by an independent DAQ system prepared for the outer veto system. 4. Errors and Background Larger target volume and long exposure time make the better condition in the ν¯e measurement. During three year operation, 50,000 events on far detector are expected, giving a statistical error of about 0.5% (compared to 2.8% for the first CHOOZ experiment). The main uncertainty at the first CHOOZ came from the uncertainties of neutrino flux from reactor and detector related ones. Thanks to the use of the two identical detector concept, errors originating from neutrino source cancel. The dominant systematic error for Double Chooz experiment is the relative normalization between the two detectors. A difference of the number of protons in target volume (0.2%) and the live time related to the background rate (0.25%) are major ones. Some systematic errors come from analysis cuts. Since we apply essentially only 3 analysis cuts (positron signal energy, neutron signal energy and ∆t), the analysis
November 26, 2010
17:57
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.10˙Kawasaki
322
related error is reduced into 0.3%. Another source of systematic error is the treatment of background. From the experience with CHOOZ experiment, quantitative estimation on background is possible by extrapolation of CHOOZ data and with Monte Carlo estimation. Backgrounds are classified in two categories, accidental and correlated events. Accidental background is induced by accidental combination of prompt-like, due to radioactivities of materials in detector, and delayed background. The delayed background (neutron-like) comes mainly from neutron capture on Gd. The detector has been carefully designed to reduce the radioactivities. Correlated backgrounds are induced by the spallation process of high energy cosmic muon. The spallation in the surrounding rocks can make a fast neutron, which makes fake prompt and delayed signal. Our Monte Carlo simulation was tuned by the correlated background in the first CHOOZ data and reproduce it well. The spallation in the target scintillator makes long-lived radioactive neutron emitters (such as 9 Li and 8 He). Those also make fake signal and should be subtracted since those have long life time(>100ms) and are difficult to remove using cosmic muon timing. Background subtraction makes a small systematic error. Finally the total systematic error is expected to be 0.6% (compared to 2.7% in the CHOOZ experiment). Figure 2 shows the expected sensitivity of Double Chooz experiment. The far detector will start the first data taking alone in year 2010. With a few months operation, it will exceed the CHOOZ limit. The installation of the near detector will follow after about one and half year. In three years of data taking with both detectors, the sensitivity on sin2 2θ13 will reach a value of 0.03.
5. Status of Detector Construction and Schedule Double Chooz is now in the final stage of the far detector construction. We made great progress in the last year. The overhaul of the CHOOZ experimental hall and the installation of steel shield of the central detector have been done in year 2008. Then, we installed PMTs for the inner veto system and buffer tank. Before summer 2009, we finished the installation of PMTs for the inner detector system except that on the lid roof. Until the end of year, we succeeded to install the acrylic vessels and the rest of PMTs. The installation of the electronics and the DAQ system is ongoing now. In parallel, we will start the filling of liquids to the vessels and tanks. We will start data taking in Summer 2010. The installation of the outer veto system and calibration devices will follow. It should improve data quality and reduce systematic error from background and the uncertainty on the energy scale. Concerning the status of the near site, the civil engineering study, design of the experimental hall and the tunnel are completed. We start the excavation in 2010 and the near site will be ready in 2011.
November 26, 2010
17:57
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.10˙Kawasaki
323
Fig. 2. Expected sensitivity (90% C.L.) of Double Chooz experiment to sin2 (2θ13 ) as a function of exposure time with assuming ∆m2 = 2.5 × 10−3 eV 2 . We suppose the near detector will start its operation 18 months after the far detector operation started.
6. Summary Goal of the Double Chooz experiment is to measure the yet unmeasured oscillation parameter θ13 . The construction of the far detector is in the final stage. The data taking will start in Summer 2010. With a few months operation, the sensitivity on sin2 2θ13 measurement will exceed the CHOOZ limit. The installation of the near detector will follow after about one and half year. In three years of data taking with the both detectors, the sensitivity on sin2 2θ13 will reach a value of 0.03 with 90% C.L.. Acknowledgments I thank Beyond 2010 organizers to give the opportunity to introduce Double Chooz experiment and make fruitful discussion on neutrino physics. I gratefully acknowledge the Double Chooz collaborators. The Double Chooz Collaboration gratefully acknowledges support from Brasil, France, Germany, Japan, Spain, the U.K. and the U.S.A.. References 1. The Double Chooz collaboration consists of the following institutes: Physikalisches Institut RWTH Aachen, University of Alabama, APC-IN2P3, Argonne National Laboratory, CBPF Brazil, CIEMAT Spain, University of Chicago, Columbia
November 26, 2010
17:57
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.10˙Kawasaki
324
University, University of California at Davis, Drexel University, Universit¨ at Hamburg, Max Planck Institut f¨ ur Kernphysik Heidelberg, Hiroshima Institute of Technology, Illinois Institute of Technology, Institute for Nuclear Research RAS, Institue of Physical Chemistry RAS, IPHC Strasbourg, Kansas State University, Kobe University, RRC Kurchatov Institute, Lawrence Livermore National Laboratory, Massachusetts Institute of Technology, Technischen Universit¨ at M¨ unchen, Niigata University, University of Notre Dame, IRFU CEA/Saclay, Sandia National Laboratories, Subatech-IN2P3 Nantes, Tohoku University, Tohoku Gakuin University, Tokyo Institute of Technology, Tokyo Metropolitan University, Eberhard-Karls Universitat T¨ ubingen, UNICAMP, University of Sussex and University of Tennessee. 2. M. Apollonio, et al.(CHOOZ Collaboration), Eur. Phys. J. C 27, pp. 331-374 (2003). 3. F. Ardellier, et al.(Double Chooz Collaboration), hep-ex/0405032 (2004). F. Ardellier, et al.(Double Chooz Collaboration), hep-ex/0606025 (2006).
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
325
NEUTRINOS OSCILLATIONS WITH LONG-BASE-LINE BEAMS (Past, Present and very near Future) L. STANCO∗ INFN - Padova, www.pd.infn.it Via Marzolo, 8, Padova I-35131 Italy ∗ E-mail:
[email protected] We overview the status of the studies on neutrino oscillations with accelerators at the present running experiments. Past and present results enlighten the path towards the observation of massive neutrinos and the settling of their oscillations. The very near future may still have addiction from the outcome of the on-going experiments. OPERA is chosen as a relevant example justified by the very recent results released. Keywords: Neutrino; Oscillations; Tau.
1. Introduction In the last two decades several experiments have provided strong evidence in favor of the neutrinos oscillation hypothesis. In the so called atmospheric sector the flavor conversion was first established by Super-Kamiokande1 and further by MACRO2 and Soudan-23 experiments. Further confirmation was more recently obtained by the K2K4 and MINOS5 long-baseline experiments. However a two fold question is still unanswered, does the oscillation scenario correspond to the simple 3-flavor expectation or not? which is related to the still unobserved direct appearance of one flavor to another, in particular to the highly expected νµ → ντ oscillation. Answer to this two-fold question is relevant mainly to proceed towards the next steps in the clarification of the leptonic sector of the particle model. After a brief reminder of the physics behind we will assay to focus on the main points which brings us to the present knowledge about neutrino mixing. The recent history provided the scenario in which the neutrino oscillation framework was settled. Still new questions opened up and these bring us directly into the future. Next we will shortly report on the present results from short-base-line (SBL) experiment, mainly the MiniBooNE6 experiment, and the long-base-line (LBL) experiments, namely MINOS and OPERA.7 Finally some physics expectations for the near future after a personal discussion of the very recent OPERA results8 will be drawn.
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
326
2. Physics Layout The issue of the lepton mixing is far from being understood and even generally described as it occurs in the quark sector. In particular the generic three questions on the reason the leptons mix themselves, the details of the way they actually mix and which are the mechanisms which underlay their mixing, arize. In 1998 a new history for neutrinos began as a sort of second life with the double discovery that (a) they oscillate1 then owing a mass after 41 years from the initial idea of B. Pontecorvo in 19579 and (b) they mix themselves in a peculiar way after the void result by CHOOZ.10 The CHOOZ experiment took data in 1997-98 at a distance of about 1 km from a nuclear power plant of two reactors in France. It aimed to observe νe → νµ (actually antineutrinos) oscillations. After a collection of 2991 ν¯e candidates CHOOZ put an upper limit on the direct observation of ν¯µ events. At that time the limit was set as sin θ . 0.1 with a systematic error of 2.7%. The low error was due to the possibility for CHOOZ to measure the backgrounds before the switching on of the reactors. In 2002 the KamLAND experiment11 repeated the measure n a site in Japan where many reactors were present, close and far away from the detector. The distribution of the ν¯e flux coming from the reactors is displayed in Fig. 1(a), with an average distance of 150 km from the reactor. Differently from CHOOZ, KamLAND obtained a positive result in term of disappearance of ν¯e flux. The beautiful oscillation pattern is shown in Fig. 1(b).
(a)
(b)
Fig. 1. The KamLAND experiment. (a) Distribution of the ν¯e flux. (b) Oscillation pattern of the ν¯e disappearance. The figures are taken from [11].
Mainly after KamLAND (and a rather contemporary result in the solar neutrino sector by the SNO experiment12 ) the increase in the oscillation neutrino studies was extremely rapid and huge bringing to a re-interpretation of the CHOOZ result in term of oscillations of flavour eigenstates. The old idea of mixing matrix by Maki et al. in 196213 was revitalized, similarly to what was made by Cabibbo14 in 1963 for the quark sector. The standard parametrization of a mixing matrix at 3 components
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
327
is therefore realized via the usual 3 Euler rotations, leaving us with 3 angles, θ 12 , θ23 , θ13 , and a phase δ. Moreover in case of a Majorana picture two more phases are present, α1 and α2 . To emphasize the key point it comes out quite naturally to simply establish a similar way of mixing for quarks and leptons. Of course other more complex scenarios, where more than 3 eigenstates appear, are possible. More neutrinos states are compatible with the present knowledge of the lepton physics, in particular one or more sterile neutrinos16 may be included. This is a fundament question since it may or it may not en strength parallelism between quarks and leptons. The complete description of the formalism may be found in [15], while several fits have been performed to take into account the whole set of measurements. Still fundamental questions remain unanswered. The first question relates to the mass ordering of the neutrino mass eigenstates. Does the mass scale ordering of ν 1 , ν2 , ν3 (as defined by the parametrization) follows the same ordering of νe , νµ , ντ ? As the measured oscillation pattern is described only in term of ∆m212 and ∆m213 the exact order is not identified yet, neither it is the absolute mass scale. Are the 3 masses just below the present neutrino mass absolute limit (less than 1 eV) or are they some order of magnitude smaller? More and more unanswered questions come up as we put a closer look to the measured quantities. For example in Table 1 the present values of the mixing matrix components for quarks, VCKM , and for leptons, VM N S are compared. The underlying pattern is clearly different and we finally conclude that the lepton mixing is weirda . Table 1. Present values for the Neutrino Mass Mixing Matrix (a) as taken from Ref. 17 and the unitarity values of the VCKM (b) as extracted from Ref. 18. Note that the very recent result by MINOS29 sets sin2 2θ13 < 0.12. (a)
sin2 θ12 sin2 θ23 sin2 2θ13 ∆m213 δm212
= = < = =
0.30 ± 0.02 0.50 ± 0.07 0.13 −3 eV 2 2.40+0.12 −0.11 × 10 7.6 ± 0.2 × 10−5 eV 2
(b)
P |V |2 Pi=d,s,b ui 2 |V | P i=u,c,t id 2 i=u,c;j=d,s,b |Vij |
= = =
0.9999 ± 0.0011 1.002 ± 0.005 2.002 ± 0.027
Also the present knowledge of the errors is largely different in the quark and lepton sector. See e.g. Ref. 17 for an up-to-date report on the error measurements, to be compared with the extremely well known values of the quark mixing matrix. 18 a Even
if the lepton mixing appears weird several tentatives to elaborate a quark-lepton complementarity by playing on the relative values of the θ’s angles have been done. See for example Ref. 19.
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
328
To illustrate the importance of the size of the errors we may look at Fig. 2 taken from Ref. 20 (Fig. 43), which shows the large region for the possible values of the top angle of the lepton unitarity triangle. The degenerate case, θ13 = 0, corresponding to the bottom horizontal line, is also still allowed by the present measurements. More and exhaustive discussions may be found in Ref. 21.
∗ | is normalised to one. The triFig. 2. The unitarity eµ-triangles. The horizontal side, |Ue1 Uµ1 angles correspond to θ13 = 0.15 and different values of the phase δ. Each scatter point represents a possible position of vertex as the mixing parameters pick up random values within the present uncertainty ranges: sin2 θ23 ∈ [0.36, 0.61], sin2 θ12 ∈ [0.27, 0.37] and sin2 θ13 ∈ [0, 0.031], and δ ∈ [0, 2π]. There are also illustrated 3 different triangles for 3 different choices of δ and θ 13 = 8.60 case. The figure is taken from [20].
In summary we may conclude that the lepton mass mixing matrix might be technically similar to the quark one even if it shows a quite different pattern and it is at present rather poor known. We like to conclude this section by using the same wording of W. Buchm¨ uller at EPS09 conference:22 ”Right-handed neutrinos have been found; no exotics have been found (yet)”. Therefore as a whole it follows that we have to be prepared to the unexpected! 3. Physics Perspectives Currently the lepton scenario illustrated in the previous section is the only one which is receiving attention by experimental investigation and mostly phenomenological investigation too. Other theoretical possibilities like e.g. the NSI, Non-StandardInteractions,23 are in our judgement not so appealing and remains at the level of generic phenomenological models. Therefore a not so long list of unknowns have to be identified and measured:
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
329
the 3 mixing angles (θ12 , θ13 and θ23 ), the 2 neutrino squared mass differences (∆m212 , ∆m213 ), the sign of one the two mass differences (∆m223 ), a CP phase (δ), the absolute neutrino mass scale and their nature (Dirac or Majorana), the total number of neutrino (are there more than 3 neutrinos ? b ), not at last forgetting the detection of the undergoing source of the oscillation. The latter question corresponds to the detection of a direct appearance signal, that is the observation of the ν τ appearance for the atmospheric oscillation (and the νe for the solar one) providing a direct measurement of the Lepton Flavor Violation (LFV) processc. Most of the above items may be investigated at Long-Base-Line experiments by excluding the investigation of the fundamental nature of the neutrinos and their absolute mass scale.
Fig. 3. The LSND observation limits of the ν¯µ − ν¯e oscillation. The allowed regions are obtained by a (sin2 2θ, ∆m2 ) oscillation parameter fit, at 90% and 95% C.L. The curves are 90% CL limits from the Bugey reactor experiment, the CCFR experiment at Fermilab, the NOMAD experiment at CERN, and the KARMEN experiment at ISIS. The figure is taken from [24].
The physics prospects are raveled by the “presence” of internal puzzles in the experimental side. In particular the recent results from MiniBooNE are not able b The possibility of more than 3 neutrinos refers to the presence of the so called sterile neutrinos, 16 id est neutrinos not active from the point of view of the weak interaction. c The SNO experiment12 measured the appearance of neutrinos with flavor different from the original electronic one in the solar sector. We name this kind of observation indirect.
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
330
to disentangle the somewhat old and controversial result by LSND.24 The original result from LSND (see Fig. 3) of the ν¯µ − ν¯e observation could not be phenomenologically arranged in the 3 neutrino standard scenario. MiniBooNE25 looked for the oscillation in either the neutrino or the antineutrino modes. In the neutrino mode it is able to rule out the result by LSND as oscillation while observing an unexplained excess in a energy region below that of LSND. In the antineutrino mode no similar excess is observed while the ruling out of LSND is not gained. Fig. 4 (a and b) as extracted by Ref. 6 shows the MiniBooNE results.
-3
102
10
10-2
10-1
1 2 10
sin2(2θ) upper limit MiniBooNE 90% C.L.y MiniBooNE 90% C.L. sensitivity BDT analysis 90% C.L.
|∆m2| (eV2/c 4)
10
10
1
10-1
1
LSND 90% C.L.
10-1
LSND 99% C.L.
MiniBooNE 90% C.L.y KARMEN2 90% C.L. Bugey 90% C.L.
|∆m2| (eV2/c 4)
10
10
1
1
10-1
10-1 LSND 90% C.L. LSND 99% C.L.
10-2
10-3
10-2 sin2(2θ)
(a)
10-1
1
10-2
(b)
Fig. 4. The MiniBooNE experiment. (a) The limits extracted from the neutrino data (5.58 ± 0.12) × 1020 proton-on-target (p.o.t.) (b) The limits extracted from the antineutrino data (3.39 × 1020 p.o.t.). The figures are taken from [6].
As a matter of fact to the author the experimental situation is rather confused. More experimental facts are needed and the question whether the ongoing two LBL experiments MINOS and OPERA may help turns out to be fully relevant.
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
331
4. MINOS Physics Results The MINOS experiment26 is constituted by two similar apparata, the Near and the Far detectors, made of scintillator strips and a toroidal spectrometer. This layout allows the minimization of several uncertainties like the neutrino flux from the NUMI beam and the extrapolation via Monte Carlo of the unoscillated νµ spectrum from Near to Far sites. A very detailed analysis allows to reconstruct the energy of the interacting neutrinos (Fig. 5) and estimate the percentage of disappeared neutrinos.27 From the later MINOS extracts the oscillation parameters in the assumption of 2 flavor oscillation mode (Fig. 6 from the analysis in [5]). 60
40
F/N Ratio NDFit
30
2DFit
20
50
Unoscillated MC Best-fit MC
40
NC contamination
30 18-30 GeV
Events/GeV
Beam Matrix
Events/GeV
MINOS Data
50
20 10
10
0 0 2 4 6 8 10 12 14 16 18 Reconstructed Neutrino Energy (GeV)
00
5 10 15 20 25 30 Reconstructed Neutrino Energy (GeV)
(a)
2
NC subtracted
F/N Ratio NDFit 2DFit
1
0.5 Statistical error, 1.27 x 10
20
POT
0 5 10 15 20 25 30 Reconstructed Neutrino Energy (GeV) (b)
(A)
1.5 1 0.5 0
MINOS Data Best-fit MC
18-30 GeV
1.5
Data/MC Ratio
Ratio to Beam Matrix
(a)
-0.5 0 2 4 6 8 10 12 14 16 18 Reconstructed Neutrino Energy (GeV) (b)
(B)
Fig. 5. The MINOS experiment. (A) Neutrino energy spectra at the Far Detector in the absence of neutrino oscillations as predicted by the four extrapolation methods used. The limits are extracted from the neutrino data (5.58 ± 0.12) × 1020 proton-on-target (p.o.t.) (B) The reconstructed energy spectra of selected Far Detector events with the Far Detector unoscillated prediction (solid histogram) and best-fit oscillated spectrum (dashed histogram) overlaid and (b) the ratio of the observed spectrum to the unoscillated Far Detector prediction. The figures are taken from [27].
Since we will discuss in the next section the OPERA experiment it is worthwhile to outline the twofold character of the MINOS analysis, the “rate” and the “shape”.
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
332
ï3
×10 4.0 4
3.0 3
ï3
|∆m2| (10 eV2)
3.5 3.5
2.5 2.5 MINOS best oscillation fit
2.0 2 1.5 1.5 1.01
0.6
MINOS 90%
SuperïK 90%
MINOS 68%
SuperïK L/E 90%
MINOS 2006 90%
K2K 90%
0.7
0.8 sin (2θ)
0.9
1
2
Fig. 6. The MINOS observation limits of the νµ oscillation. Contours for the oscillation fit to the data in Fig. 5-B. Also shown are contours from Super-K and K2K and earlier MINOS result in 2006. The figure is taken from [5].
As OPERA will be able to deal only with “rates”, the latter significance power has to be compared with the corresponding one by MINOS which turned out to be rather poor (Fig. 7). The disappearance mode can be complementary studied in MINOS with the appearance of electron ν. First results reported were indicative of a possible ν e appearance: 35 events from νe interactions were observed against an expected background of 27±5(stat)±2(sys), corresponding to a 1.5 excess.28 However very recent results (released after the Conference time) with an increased statistics washed out that indication.29 It seems that the new dedicated experiments for the θ13 measurement have to be waited for (see the related contributions to these proceedings). 5. The OPERA Way We will now discuss at length the OPERA experiment since the very recent on May 31rst 2010 release of new results (see next Section) corresponds to a relevant new contribution in the neutrino physics. The OPERA experiment7 has been designed to observe the ντ appearance in the CNGS νµ beam30 on an event by event basis. The ντ signature is given by the decay topology of the short-lived τ leptons produced in the ντ Charged Current (CC) interactions decaying to one prong (electron, muon or hadron) or three prongs
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
333
|Λm2| (eV 2 /c4 )
10-1
10-2 Rate: 90% C.L. Shape: 90% C.L. Rate+Shape: 90% C.L. Rate+Shape best-fit
10-3 0
0.2
0.4
0.6
0.8
1
2
sin 2≤ Fig. 7. MINOS result: comparison of the 90% C.L. regions from oscillation fits using shape and rate information, apart and together. The best-fit point and 90% C.L. contour from the fit to shape and rate information are also shown. The figure is taken from [27].
hadrons. The detector is located underground in the Laboratorio Nazionale del Gran Sasso (LNGS, L’Aquila, Italy) along the path of the CNGS neutrino beam, 730 km away from the source at CERN. The beam was optimized in order to maximize the number of ντ CC interactions at the LNGS site keeping the energy constraint to be above the τ production threshold. The result is a wide band neutrino beam with an average energy of ∼ 17 GeV; the ν¯µ contamination is 2.1%, νe + ν¯e is below 1% and prompt ντ at production is negligible. With a nominal beam intensity of 4.5 × 1019 proton-on-target (p.o.t.) per year, νµ CC and neutral current (NC) interactions at Gran Sasso are deemed to 2900/(kton×year) and 875/(kton×year), respectively. By assuming the oscillation parameters ∆m2 = 2.5 × 10−3 eV2 at full mixing 10.4 events are expected to be observed in OPERA in 5 years of data taking with a background of 0.75 events. In the two years 2008 and 2009 OPERA succeeded31 to collect 5.30 × 1019 p.o.t. corresponding to 31,550 detected events in time with the beam, 5391 of which matched to a neutrino interaction in the OPERA target within more than 99% percent accuracy. At the CNGS energies the average τ decay length is ∼ 450 µm. In order to observe it OPERA makes use of 2 × 44µm nuclear emulsions films interspaced with 1 mm thick lead plates which form the target mass of the OPERA detector. This technique, called Emulsion Cloud Chamber (ECC), has been used successfully by the DONUT experiment for the first direct observation32 of the ντ . Every time a trigger in the electronic detectors is compatible with an interaction inside the target (see Fig. 8), the brick with the highest probability to contain the neutrino interaction vertex is extracted from the apparatus and exposed to X-rays
November 26, 2010
17:58
WSPC - Proceedings Trim Size: 9.75in x 6.5in
05.11˙Stanco
334
for film-to-film alignment. Further the brick is unsandwiched, the emulsion films are developed and analyzed. The final sensitivities are ∼0.3 µm spatial resolution, ∼2 mrad angular resolution and ∼90% single track detection efficiency.
Fig. 8. Neutrino event from OPERA as registered by the electronic detectors. The figure is taken from [31].
ADDENDUM Very recently OPERA reported the observation of a first ντ candidate.8 The result is obtained by the observation of a rather clean event (Fig. 9), a possible 1-prong hadron decay of a τ lepton with (n)π0 derived by the presence of some electromagnetic showers. The decay topology is consistent to be that of τ − → ντ + ρ− → ντ + π − π0 . Even if the expected number of ντ interactions and identification in OPERA is estimated to be 0.54 ± 0.13, well in agreement with the possible observation of 1 ντ event, the significance of the result depends totally on the value of the background. OPERA estimates the background to be 0.018 ± 0.007 for the 1-prong decay channel where the candidate has been observed. That corresponds to a probability of 1.8% to fluctuate to 1 event, which may be interpreted as a significance of 2.36 sigma’s towards the observation of a ντ interaction (p-values of the null hypothesis, see Ref. [18]). At first sight it may be surprising to extract such level of significance from just one event. That is the power of a clean experiment. It is ilustrated in Fig. 10 where the significance of the result is drafted towards the number of events observed instead of the usual integrated luminosity of the data collected. The curves parametrized as function of the number of p.o.t collected by OPERAd show that very few events allow to set a quite robust physics result. On top of that it is also evident that whether OPERA will be able to decrease the level of background d The
d The inputs in terms of the expected numbers of ντ candidates and background events have been extracted from the OPERA proposal33 of 2001.
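The quoted 1.8% probability and the corresponding significance can be reproduced with a simple Poisson counting estimate; the sketch below ignores the 50% background uncertainty and uses the two-sided Gaussian convention, which is what matches the numbers given in the text (including the scaled values quoted in the following paragraph).

```python
# Minimal sketch reproducing the significance of a single observed nu_tau
# candidate from the 1-prong background expectation b = 0.018 quoted above.
# The 50% background uncertainty is ignored; plain Poisson counting only.
from math import exp
from statistics import NormalDist

def significance_one_event(b):
    """p-value for >= 1 event from background b, and its two-sided Gaussian sigma."""
    p = 1.0 - exp(-b)                          # Poisson P(>=1 | b)
    z = NormalDist().inv_cdf(1.0 - p / 2.0)    # two-sided convention
    return p, z

for b in (0.018, 0.036, 0.009):                # nominal, doubled, halved background
    p, z = significance_one_event(b)
    print(f"b = {b:.3f}:  p = {p:.3%},  {z:.2f} sigma")
# b = 0.018 gives p ~ 1.8% and ~2.36 sigma; doubling/halving b gives ~2.10 and
# ~2.61 sigma, matching the values quoted in the text.
```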
Fig. 9. Display of the ντ candidate event. Top left: view transverse to the neutrino direction. Top right: same view zoomed on the vertices. Bottom: longitudinal view. The figure is taken from [8].
the significance will increase accordingly. For example, if the estimated background is increased/decreased by a factor of 2, retaining the assumed 50% nuisance, the corresponding significance decreases/increases to 2.10 and 2.61, respectively. From another point of view, the detection of a second (third) ντ candidate, with the present level of total background proportionally updated, would increase the statistical significance from 2.01 to 2.82 (3.42). The latter consideration may demonstrate that the OPERA result is potentially much more interesting than the actual measurement by Super-Kamiokande, which reported a 2.4 sigma significance for ντ appearance.34 The OPERA result, at 98.2% probability, corresponds to an extremely important piece of evidence which can be expressed in several ways. For example, we may say that it is the first direct evidence of Lepton Flavour Violation, making the theoretical shortcomings of the Standard Models from now on even more evident. The
Fig. 10. The number of Gaussian sigmas corresponding to the number of observed ντ candidates. The three curves represent different amounts of collected data in terms of p.o.t. Also shown are the levels of confidence usually attributed to physics results in terms of sigmas, as suggested by the author. The evaluation has been performed considering the backgrounds from all the ντ decay channels.
observation of the transition from one flavor to the other should constrain the theoretical elaborations and open new horizons for them, not forgetting the parallelism (somewhat opposite in terms of flavor eigenstates) with the quark sector. The second important point which is left to OPERA for the near future is to answer the question about the number of oscillated ντ. That issue is well illustrated by a plot similar to the previous one (Fig. 11), where the distance in terms of sigmas from the MINOS expectation is drawn as a function of the number of observed events. The result is parametrized as a function of the number of p.o.t. From the figure we may deduce that it will take some time to disentangle any deviation from the standard oscillation scenario. However, it will be fully worthwhile to pursue it. 6. Conclusions The neutrino oscillation scenario began to be clarified in 1998 with the observation of a disappearance of atmospheric νµ, followed by the determination of a similar disappearance (and indirect appearance) in the solar sector. The scenario that emerged is based on a 3-flavor oscillation which, however, leaves out some intriguing concerns like the LSND result and the presence or not of sterile neutrinos. In that context, possible correlations with the similar mixing pattern of the quark sector are still at the level of theoretical exercises. The powerful results by MINOS provided a stringent measurement of the νµ oscillation. The very recent result by OPERA, even if still at the level of evidence, demonstrates the action of LFV and rules out for the time being the presence of sterile neutrinos. The large number of experiments
Fig. 11. The distance, in number of Gaussian sigmas, from the expectation of the MINOS result (Δm²₂₃ = 2.5×10⁻³ eV²) as a function of the number of observed ντ candidates. The three curves represent different amounts of collected data in terms of p.o.t. The two levels of confidence, at 90% and 95%, are also shown. The detection of ZERO candidates remains marginal even after 15 × 10¹⁹ p.o.t. analyzed (but affordable), while a departure from the expectation is possible even with very few candidate events.
underway all over the world to search for a θ13 value different from zero corresponds to a lively field of physics interest (see other contributions in these proceedings). However, more than usual it is necessary to recall the lesson from the past: nature is not obvious, and the lack of experimental confirmation of theoretical models should encourage us to be prepared for the unexpected. Acknowledgments It is a pleasure to thank the organizers for their very warm hospitality and the kind invitation. The presentation allowed the author to further elaborate on the very attractive field of neutrino physics. Some results and discussions reported in these proceedings were stimulated just for this occasion. Some of the statistical elaborations have been checked by my colleague S. Dusini. I also want to thank M. Mezzetto for a critical reading of the draft; many of his suggestions have been incorporated in the present version. Finally, all the considerations and conclusions throughout the paper are the full responsibility of the author. References
1. Y. Fukuda et al. (SK Collaboration), Phys. Rev. Lett. 81, 1562 (1998).
2. S. P. Ahlen et al. (MACRO Collaboration), Phys. Lett. B 434, 451 (1998).
3. W.W.M. Allison et al. (Soudan-2 Collaboration), Phys. Lett. B 449, 137 (1999).
4. M.H. Ahn et al. (K2K Collaboration), Phys. Rev. D 74, 072003 (2006).
5. D.G. Michael et al. (MINOS Collaboration), Phys. Rev. Lett. 101, 131802 (2008).
6. A.A. Aguilar-Arevalo et al. (MiniBooNE Collaboration), Phys. Rev. Lett. 98, 231801 (2007); Phys. Rev. Lett. 103, 111801 (2009).
7. R. Acquafredda et al. (OPERA Collaboration), JINST 4, 04018 (2009).
8. N. Agafonova et al. (OPERA Collaboration), submitted to Phys. Lett. B, arXiv:1006.1623v1.
9. B. Pontecorvo, Sov. Phys. JETP 6, 429 (1957) [Zh. Eksp. Teor. Fiz. 33, 549 (1957)].
10. M. Apollonio et al. (CHOOZ Collaboration), Phys. Lett. B 420, 397 (1998).
11. S. Abe et al. (KamLAND Collaboration), Phys. Rev. Lett. 100, 221803 (2008).
12. Q.R. Ahmad et al. (SNO Collaboration), Phys. Rev. Lett. 89, 011301 (2002).
13. Z. Maki, M. Nakagawa, S. Sakata, Prog. Theor. Phys. 28, 870 (1962).
14. N. Cabibbo, Phys. Rev. Lett. 10, 531 (1963).
15. A. Strumia, F. Vissani, Neutrino masses and mixings and..., arXiv:hep-ph/0606054.
16. G. L. Fogli et al., Phys. Rev. D 64, 093005 (2001).
17. M. Mezzetto, T. Schwetz, arXiv:1003.5800, 2010; T. Schwetz, M. A. Tortola, J. W. F. Valle, New J. Phys. 10, 113011 (2008), and arXiv:0808.2016v3.
18. C. Amsler et al. (Particle Data Group), Phys. Lett. B 667, 1 (2008), and 2009 partial update for the 2010 edition.
19. G. Altarelli, F. Feruglio, New Jour. Phys. 6, 106 (2004).
20. International Scoping Studies (ISS), RAL-TR-2007-019 and arXiv:0710.4947v2 [hep-ph], 23 Nov 2007.
21. Y. Farzan and A. Yu. Smirnov, arXiv:hep-ph/0201105v2, 28 Jan 2002.
22. W. Buchmüller, presentation Summary & Outlook: Particles and Cosmology, EPS-HEP, Krakow, Poland, 22 July 2009.
23. T. Kikuchi, H. Minakata, S. Uchinami, JHEP, 114, 903 (2009) and arXiv:0809.3312.
24. A. Aguilar et al. (LSND Collaboration), Phys. Rev. D 64, 112007 (2001).
25. A.A. Aguilar-Arevalo et al. (MiniBooNE Collaboration), Nucl. Instr. Meth. A599, 28 (2009).
26. D.G. Michael et al. (MINOS Collaboration), Nucl. Instrum. Meth. A 596, 190 (2008).
27. P. Adamson et al. (MINOS Collaboration), Phys. Rev. D 77, 072002 (2008).
28. P. Adamson et al. (MINOS Collaboration), arXiv:0909.4996v1.
29. P. Adamson et al. (MINOS Collaboration), arXiv:1006.0996.
30. CNGS project, http://proj-cngs.web.cern.ch/proj-cngs/
31. L. Stanco (OPERA Collaboration), Proceedings of the BUE-CTP International Conference on Neutrino Physics in the LHC Era, 19 November 2009, Luxor, Egypt.
32. K. Kodama et al. (DONUT Collaboration), Phys. Lett. B 504, 218 (2001).
33. M. Guler et al. (OPERA Collaboration), An appearance experiment to search for νµ → ντ oscillations in the CNGS beam: experimental proposal, CERN-SPSC-2000-028, LNGS P25/2000; M. Guler et al. (OPERA Collaboration), Status Report on the OPERA experiment, CERN/SPSC 2001-025, LNGS-EXP 30/2001 add. 1/01.
34. K. Abe et al. (SK Collaboration), Phys. Rev. Lett. 97, 171801 (2006).
STATUS OF THE T2K EXPERIMENT ALESSANDRO BRAVAR for the T2K Collaboration Department of Nuclear and Particle Physics, University of Geneva, Geneve, Switzerland E-mail:
[email protected] The current status and near term physics goals of the T2K (Tokai-to-Kamioka) long baseline neutrino oscillation experiment are presented. Recently, T2K completed the construction of the neutrino beam line and of the near detectors including the upgrade of the Super-Kamiokande far detector. The first physics run started in January 2010. Goals for this first physics run are also discussed. Keywords: neutrino oscillations, neutrino cross sections
1. Introduction The observation of neutrino oscillations has established that neutrinos have mass and that the three families mix. The oscillation of νµ neutrinos into other flavors has been well established by both atmospheric neutrino experiments1 and long baseline accelerator experiments using νµ beams.2 However, several questions remain unanswered, like the observation of the θ13 mixing angle, the possible existence of sterile neutrinos, the absolute mass scale, the choice of mass hierarchy, the Dirac vs. Majorana nature of neutrinos, etc. Observation of CP violation in neutrino oscillations requires appearance experiments, and that all three neutrino mixing angles are different from zero. A non-zero θ13 mixing angle would allow one to search for CP violation in the leptonic sector. The T2K experiment aims at answering some of these questions. The Tokai-to-Kamioka (T2K) experiment is a next generation long baseline neutrino oscillation experiment designed to measure neutrino oscillation parameters using a relatively pure sub-GeV high intensity νµ beam generated by the new JPARC accelerator facility in Tokai (Japan). After traveling underground for 295 km (see Fig. 1), the neutrino beam is measured at the Super-Kamiokande (SK) 50 kton water Cherenkov detector at Kamioka (Japan). T2K operates with a 2.5◦ off-axis beam and uses several near detectors and a far detector to measure the properties of the neutrino beam. The off-axis technique3 yields a very narrow band νµ beam that can be tuned to the oscillation maximum (see Fig. 2). In the three generation framework, the neutrino oscillation probability for νµ → νµ disappearance is given by
Fig. 1. Baseline of the T2K experiment. After traveling underground for 295 km the ν beam is measured at the SK detector.
P_{\mu\to\mu} \approx 1 - \sin^2(2\theta_{23})\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{23}\,L}{E_\nu}\right) \qquad (1)

and the probability for the νµ to νe appearance by

P_{\mu\to e} \approx \sin^2(\theta_{23})\,\sin^2(2\theta_{13})\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{23}\,L}{E_\nu}\right) \qquad (2)
L is the distance in km between the neutrino source and the far detector, Eν the ν energy in GeV, θ the mixing angle, and Δm² the difference of the squared masses of the mass eigenstates in units of eV². The main physics goals of T2K are given below. The expected sensitivity is given for 8×10²¹ protons on target accumulated over a 5 year period. The primary proton beam energy is 30 GeV. At 30 GeV, this intensity corresponds to an average beam power of 0.75 MW incident on the neutrino production target. (i) Discovery of νµ → νe appearance: Among the three neutrino mixing angles, at present only θ13 is unknown (the oscillation from νµ to νe has not yet been observed experimentally). The expected T2K sensitivity on sin²2θ13 is 0.006 at 90% C.L. for δCP = 0 (see Fig. 4), which represents more than an order of magnitude improvement over the current upper limit of 0.15 set by the CHOOZ reactor νe disappearance experiment.4 If νe appearance is not observed, an upper limit will be set. If νe appearance is observed and measured, CP violation in the neutrino sector may be searched for in future experiments. (ii) Precision measurements of oscillation parameters in νµ disappearance: The goal is a 1% precision measurement of the mixing angle sin²2θ23, with emphasis on whether θ23 is maximal, and a few % precision measurement of |Δm²₂₃| (δ(sin²2θ23) ≈ 0.01 and δ(Δm²₂₃) < 1 × 10⁻⁴ eV²). (iii) Search for sterile components in νµ disappearance by studying neutral-current events.
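As a short numerical illustration of Eqs. (1) and (2) for the T2K baseline, the sketch below evaluates both probabilities with representative parameter values (Δm² = 2.5 × 10⁻³ eV², maximal θ23, and an assumed sin²2θ13 = 0.1); these inputs are illustrative, not fit results.

```python
# Numerical illustration of Eqs. (1) and (2) for the T2K baseline L = 295 km.
# Parameter values are representative assumptions, not measurements.
import math

L = 295.0            # km
DM2 = 2.5e-3         # eV^2
SIN2_2TH23 = 1.0     # maximal theta23 mixing (assumption)
SIN2_TH23 = 0.5
SIN2_2TH13 = 0.1     # illustrative value near the CHOOZ bound

def p_mumu(e_nu):
    """nu_mu survival probability, Eq. (1); e_nu in GeV."""
    return 1.0 - SIN2_2TH23 * math.sin(1.27 * DM2 * L / e_nu) ** 2

def p_mue(e_nu):
    """nu_mu -> nu_e appearance probability, Eq. (2); e_nu in GeV."""
    return SIN2_TH23 * SIN2_2TH13 * math.sin(1.27 * DM2 * L / e_nu) ** 2

for e in (0.4, 0.6, 0.8, 1.0, 2.0):
    print(f"E = {e:3.1f} GeV:  P(mu->mu) = {p_mumu(e):.3f},  P(mu->e) = {p_mue(e):.4f}")

# First oscillation maximum, where 1.27*DM2*L/E = pi/2:
e_max = 1.27 * DM2 * L / (math.pi / 2.0)
print(f"oscillation maximum near E ~ {e_max:.2f} GeV")  # ~0.6 GeV, where the off-axis beam peaks
```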
Fig. 2. Neutrino energy spectra for different off-axis angles. The plot on top shows the oscillation probability P (νµ → νx ) as a function of Eν . T2K operates with a 2.5◦ off-axis angle.
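The narrow off-axis spectrum of Fig. 2 follows from the two-body kinematics of π → µν decay. The sketch below uses the standard off-axis relation Eν ≈ 0.43 Eπ/(1 + γ²θ²) (textbook kinematics, not a formula given in the text) to show how, at 2.5°, the neutrino energy stays near 0.6 GeV over a wide range of pion energies.

```python
# Standard pi -> mu nu off-axis kinematics: for a pion of energy E_pi observed
# at angle theta off the beam axis, E_nu ~ 0.43*E_pi / (1 + (gamma*theta)^2),
# with gamma = E_pi/m_pi. This relation is textbook kinematics, not a formula
# quoted in the text above.
import math

M_PI = 0.13957      # GeV
M_MU = 0.10566      # GeV
FRAC = 1.0 - (M_MU / M_PI) ** 2     # ~0.43

def e_nu(e_pi, theta_rad):
    gamma = e_pi / M_PI
    return FRAC * e_pi / (1.0 + (gamma * theta_rad) ** 2)

theta = math.radians(2.5)           # T2K off-axis angle
for e_pi in (2.0, 3.0, 5.0, 8.0):   # illustrative pion energies in GeV
    print(f"E_pi = {e_pi:3.1f} GeV -> E_nu = {e_nu(e_pi, theta):.2f} GeV")
# Over a wide range of pion energies E_nu stays around 0.5-0.7 GeV, which is
# why the 2.5 degree off-axis beam is narrow and peaked near the oscillation
# maximum (~0.6 GeV).
```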
2. T2K Experimental Setup In a long baseline neutrino oscillation experiment the neutrino flux is measured in a detector far from the neutrino source (far in this context means L/E ≫ 1, see Eqs. 1 and 2) and compared with a prediction based on the neutrino flux unmodified by oscillations. A deviation of the measured neutrino flux from the predicted one is interpreted as evidence for neutrino oscillations and used to determine, for example, the mixing parameters sin²2θ23 and |Δm²₂₃|. To predict the neutrino flux at the far detector one measures the neutrino flux in the near detector, which intercepts the beam at a distance where the effect of the oscillations is negligible. The T2K neutrino beam is produced from the decays of pions and kaons created in the interactions of the 30 GeV J-PARC proton beam on a graphite target. The resulting neutrino flux is measured by different near detectors 280 m downstream of the production target and in the Super-Kamiokande detector located 295 km away from the neutrino source (far detector).
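A minimal toy of the near/far comparison described above is sketched below: a made-up near-detector spectrum is extrapolated with an assumed flat far/near ratio and then depleted according to Eq. (1); all shapes and numbers are illustrative assumptions, not T2K inputs.

```python
# Toy illustration of the disappearance analysis: extrapolate the near-detector
# spectrum to the far detector and compare with an "observed" (oscillated)
# spectrum. Every number below is an illustrative assumption.
import math

L, DM2 = 295.0, 2.5e-3

def p_surv(e):                                   # Eq. (1), maximal theta23 mixing
    return 1.0 - math.sin(1.27 * DM2 * L / e) ** 2

energies = [0.3, 0.5, 0.7, 0.9, 1.2, 2.0]        # GeV bin centres (toy)
near_counts = [5e4, 9e4, 7e4, 4e4, 2e4, 1e4]     # toy near-detector spectrum
far_over_near = 1.0e-6                           # assumed flat flux ratio (toy)

for e, n in zip(energies, near_counts):
    predicted = n * far_over_near                # unoscillated prediction at SK
    observed = predicted * p_surv(e)             # with oscillations applied
    print(f"E = {e:3.1f} GeV: predicted {predicted:6.3f}, "
          f"observed {observed:6.3f}, ratio {observed / predicted:.2f}")
# The dip of the observed/predicted ratio around 0.6 GeV is what is fitted to
# extract sin^2(2*theta23) and |dm2_23|.
```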
3. The Beam T2K adopts an off-axis beam configuration in which the beam axis is displaced by a few degrees from the far detector direction. The off-axis method yields a low energy neutrino beam with a narrow energy spectrum and a small high energy tail. In particular, this suppresses the background from π 0 s produced in neutral current interactions by higher energy neutrinos, thus enhancing the signal for the appearance and disappearance measurements. By varying the off-axis angle, as illustrated in Fig. 2, the neutrino beam peak energy changes and can be tuned to the oscillation maximum for a given baseline in order to maximize the sensitivity to the neutrino oscillation parameters. For a baseline of 295 km, the optimal off-axis angle adopted by T2K is 2.5◦ , yielding a neutrino beam peaked around Eν ∼ 650 MeV. Details of the production mechanisms of the beam have to be known and understood in order to minimize the systematic uncertainties. The uncertainties in the production of π+ and K+ mesons can lead to important uncertainties in the oscillation analysis, and can preclude precise measurements of neutrino cross sections. Since there are no hadro-production data for proton-carbon (pC) interactions at 30 GeV and extrapolations from existing measurements at different energies are not always reliable, a new program to measure precisely π + and K+ hadro-production spectra is ongoing within the framework of the NA61/SHINE experiment at CERN.5 Pilot data were taken in 2007, followed by a physics run in 2009. Graphite targets of different lengths, including a replica of the T2K target, are being used. The goal of the NA61 measurement is to determine π + and K+ cross sections in pC interactions to a few % precision in order to (i) determine the so called far-to-near neutrino flux ratio to 3% and (ii) to predict the absolute neutrino flux to 5%. NA61 data are being currently analyzed and will be used in the T2K beam Monte Carlo to predict the neutrino flux precisely. 3.1. The beamline Primary protons are first accelerated in a 181 MeV linac, then by the Rapid Cycle Synchrotron up to 3 GeV, and finally by the Main Ring Synchrotron up to 30 GeV. The 30 GeV proton beam is extracted from the Main Ring in a single turn (fast extraction) to the neutrino facility, which transfers the beam to the neutrino production target. The target is a graphite rod of 26 mm diameter and 90 cm length. Outgoing positively charged pions and kaons are focused by three magnetic horns into a 94 m long decay volume filled with helium, where they decay in flight mainly into νµ ’s. The target is embedded in the first magnetic horn. The horns are driven by a pulsed current of 250 kA synchronized with the proton extraction timing. A beam dump is placed at the end of the decay volume, 110 m from the target. A muon monitor consisting of silicon PIN photodiodes and ionization chambers is placed just behind the beam dump and measures the muon flux produced from in flight pion decays. While almost all hadrons are absorbed by the beam dump, muons of energy > 5 GeV can penetrate the dump. The muon monitor provides
pulse-by-pulse information on the intensity and profile (direction) of the beam. The accelerator complex and neutrino facility have been constructed over a relatively short period of time. The construction of the neutrino beam line was completed by the end of 2009 with the installation of all three focusing horn magnets and beam instrumentation. Commissioning of the beamline started in April 2009 when first protons were extracted to the neutrino production target. At the time of writing, the facility operates continuously with a primary beam intensity of about 3.7 × 1013 protons per pulse with a repetition rate of 3.5 seconds, which corresponds to a beam power of about 50 kW. Work is ongoing to further increase the beam intensity delivered to the neutrino facility. 4. The Near Detector Complex The near detectors are located in a pit 280 m downstream from the neutrino production target. They are used to measure and study the neutrino flux before oscillation. 4.1. The On-axis near detector An on-axis detector, INGRID (Interactive Neutrino Grid), is used to monitor the primary proton beam direction by detecting interactions of on-axis neutrinos. This detector provides fast feedback for beam tuning and can determine the neutrino beam direction better than 1 mrad. The first neutrino candidate event in this detector was observed in November 2009. INGRID is a modular detector, consisting of 14 modules arranged in a form of cross centered on the beam axis. The horizontal and vertical arms extend for 10 m. Each module is a 1 × 1 × 1.3 m3 tracking iron-scintillator calorimeter, consisting of 10 layers of scintillating bars with 9 layers of iron plates sandwiched between them. The scintillating bars are readout with wavelength shifting fibers and Hamamatsu Geiger-mode avalanche photodiodes (Multi-Pixel Photon Counters MPPCs).6 This readout technique is used for all scintillators used in the near detectors. 4.2. The ND280 off-axis near detector The off-axis near detector ND2807 is at the same 2.5◦ off-axis angle as the SK far detector, though not on the same line of sight. The role of this detector is to measure the neutrino energy spectrum and its flavor composition before oscillation. ND280 will also measure neutrino interaction cross section with high precision. The detector consists of several sub-detectors located inside the magnet previously used by UA1 and NOMAD experiments. The magnet is operated at a field of 0.2 T. Fig. 3 shows the schematic layout of this off-axis detector. Based on the role of the sub-detectors, ND280 can be divided into three regions: a tracking detector, a π 0 detector, and an electromagnetic calorimeter. The tracking detector consists of three time projection chambers (TPCs) with micromegas readout and two fine grained detector (FGD) made of scintillator bars
of 1 × 1 cm2 cross section readout with wavelength shifting fibers and MPPCs. The downstream FGD contains pockets filled with water to measure ν interactions off water, the same active medium as the SK Cherenkov detector. The TPCs will measure charged particles with a momentum resolution of about 10% and will provide a 5 σ e/µ separation based on ionization measurements.
Fig. 3. Open view of the off-axis ND280 near detector.
The π0 detector (POD) is a brass-scintillator tracking calorimeter containing water pockets and lead-scintillator calorimeter sections. It is designed to measure π0 production in NC and CC neutrino interactions on a water target. π0 production represents one of the major backgrounds in SK for the neutrino appearance and disappearance measurements. The ECAL surrounds the tracking detector and the POD. Its role is to capture electromagnetic energy that escapes the tracking detectors and the POD, complementing the event reconstruction and particle identification. The magnet yokes are instrumented with scintillator counters forming the SMRD (Side Muon Range Detector). The purpose of the SMRD is to measure the ranges of high energy muons leaving the tracking detectors and the π0 detector and to reconstruct cosmic-ray muons. The SMRD provides a veto for external muons and forms a cosmic trigger to calibrate the internal detectors.
5. The Far Detector T2K's far detector is the Super-Kamiokande 50 kton water Cherenkov detector with a fiducial mass of 22.5 kton. It is located at a depth of 2,700 m water equivalent in the Kamioka mine in Japan. Cherenkov light photons produced by relativistic charged particles are detected by 11,000 20-inch photomultipliers. The pulse-height and timing information is used to identify neutrino interactions and to reconstruct the neutrino interaction vertex, direction, and energy. The Cherenkov ring shape allows one to discriminate between muons (clear rings), electrons (fuzzy rings) and π0's from NC interactions (multiple rings from γ conversions). New electronics and a dead-timeless DAQ were installed in 2008. This upgrade improves the tagging of electrons from muon decays. The SK detector is synchronized with the J-PARC facility via the GPS system. A first event candidate generated by a neutrino produced at J-PARC was observed in February 2010.
Fig. 4. T2K sensitivity for a 5 year data taking period: (left) expected νe appearance signal in SK for sin²2θ13 = 0.1. The background is due to the intrinsic νe component in the beam and merged Cherenkov light rings from NC π0 conversions in SK. (right) Sensitivity at 90% C.L. to νe appearance for different systematic error fractions (5 to 20%). Also shown is the region excluded by the CHOOZ experiment (upper right region).
6. Outlook The first T2K physics run started in January 2010 and data taking is ongoing. Several neutrino interactions have already been observed in the near detectors, and a couple of neutrino event candidates have been detected in SK. The accelerator complex has achieved stable operations at 50 kW beam power delivered to the neutrino production target. Work is currently ongoing to increase the beam intensity.
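As a rough illustration of how the first-run reach on sin²2θ13 quoted at the end of this section follows from simple counting, the sketch below scales the 90% C.L. Poisson limit for zero observed events by the 3 to 4 νe events expected at sin²2θ13 ≈ 0.1; backgrounds are neglected, so this is only an order-of-magnitude cross-check, not the T2K analysis.

```python
# Order-of-magnitude cross-check, assuming a background-free Poisson counting
# experiment, of the first-run sin^2(2*theta13) reach quoted below in the text
# (3-4 nu_e events expected at sin^2(2*theta13) ~ 0.1; limit ~0.06 if none seen).
N_90CL_ZERO_OBS = 2.30      # Poisson 90% CL upper limit when 0 events are observed
EXPECTED_AT_0P1 = 3.5       # ~3-4 expected nu_e events for sin^2(2*theta13) = 0.1

limit = 0.1 * N_90CL_ZERO_OBS / EXPECTED_AT_0P1
print(f"approximate 90% CL limit: sin^2(2*theta13) < {limit:.2f}")   # ~0.07
# Close to the ~0.06 quoted in the text; a real analysis also includes the
# intrinsic nu_e beam contamination and the NC pi0 background.
```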
The aim of the first T2K physics run is to accumulate an integrated beam power of 100 kW × 10⁷ s in 2010 delivered to the neutrino production target. This would allow T2K to achieve a better sensitivity on θ13 than the current limit set by CHOOZ.4 With this flux, about 20 νµ CCQE events are expected in SK. If sin²2θ13 ∼ 0.1, 3 to 4 νe events are expected in SK as well. If no νe event candidate is observed, an upper limit on sin²2θ13 of 0.06 will be set. References
1. Y. Ashie et al., Phys. Rev. Lett. 93, 101801 (2004); Y. Ashie et al., Phys. Rev. D 71, 112005 (2005).
2. M.H. Ahn et al., Phys. Rev. D 74, 072003 (2006); D.G. Michael et al., Phys. Rev. Lett. 97, 191801 (2006).
3. D. Beavis et al., BNL Proposal E-889 (1995), unpublished.
4. M. Apollonio et al., Eur. Phys. J. C27, 331 (2003).
5. NA61 Proposal, CERN-SPSC-2006-034 (2006).
6. M. Yokoyama et al., Nucl. Instr. Meth. A610, 128 (2009).
7. T2K ND280 Conceptual Design Report, T2K Internal document (2005), unpublished; Yu. Kudenko, Nucl. Instr. Meth. A598, 289 (2009).
LOOKING FOR HIGH ENERGY ASTROPHYSICAL NEUTRINOS: THE ANTARES EXPERIMENT VINCENZO FLAMINIO (for the ANTARES Collaboration) Physics Department, University of Pisa and INFN-Pisa, Pisa, 56127, Italy ∗ E-mail:
[email protected] Attempts to detect high energy neutrinos originating in violent Galactic or Extragalactic processes have been carried out for many years, both using the polar-cap ice and the sea as a target/detection medium. The first large detector built and operated for several years has been the AMANDA Čerenkov array, installed under about two km of ice at the South Pole. More recently a much larger detector, ICECUBE, is being installed at the same location. Attempts by several groups to install similar arrays at large sea depths have been carried out following the original pioneering attempts by the DUMAND collaboration, initiated in 1990 and terminated only six years later. ANTARES has been so far the only experiment installed at large sea depths and successfully operated for several years. This report will provide a short review of the expected ν sources, of the detector characteristics, the installation operations performed, the data collected and the first results obtained. Keywords: Neutrinos; Undersea Detectors; Galactic Neutrino Sources; Extragalactic Neutrino Sources.
1. Introduction The long march towards the observation of neutrinos from outside the Earth began around 1966, when Ray Davis started the construction of the now famous Chlorine detector, leading in 1968 to the first observation of νe's from the Sun.1 The results obtained by Davis over more than 20 years found a beautiful confirmation in an experiment carried out in Japan, where a group led by Masatoshi Koshiba made the first real-time observation of solar νe's in the Kamiokande experiment.2 Solar ν's have typical energies of a few MeV or less. In 1987 the lucky event of a supernova explosion in a nearby galaxy allowed the first real-time observation3 of ν's from outside our Galaxy. In this case the observed ν's had slightly larger energies. There are many theoretical and experimental reasons to believe that neutrinos of much larger energies are emitted in violent events taking place in many astrophysical objects. In our Galaxy potential sources are SN remnants, pulsars and microquasars. Potential extragalactic sources are gamma-ray bursts, AGNs and many others. While suggestions for a large cosmic flux of high energy ν's come from γ-ray
observations4 and from the measured cosmic ray flux, no direct detection of high energy cosmic ν's has been reported so far. An undoubtable advantage of ν's over γ's as probes in astrophysical observations is related to their tiny cross section. While a 1 TeV γ has an interaction length (in water) λ ≈ 42 m, a ν of the same energy has λ ≈ 2×10⁹ m. The increase of the ν cross section with energy is such that at 1 PeV its interaction length becomes a thousand times smaller, or 2×10⁶ m. It may be seen that the ν interaction length becomes equal to the diameter of the Earth at energies of the order of 200 TeV. Very high energy ν's may reach us from very large distances but may later be absorbed by the Earth. Given the very small cross section, huge detector volumes are required. The idea which is at the basis of all detectors now being designed or built was suggested back in 1960 by M. Markov.5 He suggested to use photomultipliers (PMTs) immersed in the sea or a lake and detect the Čerenkov radiation generated by muons (electrons) produced in the νµ (νe) charged-current interactions. The scheme of the apparatus is shown in figure 1. After crossing the Earth a νµ undergoes a charged-current interaction in the Earth's crust under the sea bottom, producing a high energy µ. This, upon entering the water, starts giving rise to Čerenkov photons at a typical angle of 43° from the µ direction. These are detected by a large photomultiplier array, providing position and time of arrival of each photon, thus allowing a reconstruction of the µ direction that, at very high energies, tends to coincide with the direction of the νµ. The range (≈ 1 km for a 200 GeV muon) and Čerenkov yield (about 2×10⁴ generated photons/meter in the frequency sensitivity range of PMTs) of high energy muons in sea-water are both very large. In addition, the water transparency in this frequency range is excellent (λabs ≈ 50 ÷ 60 m is the typical value in the deep sea). We must also recall that, on average, the µ carries between 50% and 66% of the νµ energy. Therefore a measurement of the µ energy provides an estimate of the νµ energy. The reason for looking for ν's coming from "underneath", the ones that have crossed the Earth, stems from the need to avoid being swamped by the enormous background of "atmospheric" µ's, whose flux, at a depth of 2 km, is about a factor 10⁶ times bigger than the rate of the conventional background flux of "atmospheric neutrinos", as can be seen in the left part of figure 1. The drawback of this method derives from the fact that, as already noticed, very high energy ν's are absorbed by the Earth. Following early efforts to install a detector off the Hawaii islands by the DUMAND collaboration, a small detector was installed in Lake Baikal, at a depth of 1100 m.6 This is still operating and results have been published. Upper limits on the neutrino flux have also been published by the AMANDA and ICECUBE experiments, that have been installed under about 2 km of ice at the South Pole.7 It is worth recalling that, due to the need to select neutrinos from the opposite side of the Earth, a detector installed in the northern hemisphere has a complemen-
Fig. 1. [Left] The solid line shows the expected angular distribution of atmospheric muons at the depth of ANTARES. The dashed line refers to νµ induced µ’s. [Right] Schematic view of a typical undersea ν detector. The inset in the figure shows the different contributions to the expected neutrino flux: protons and α cosmic ray particles giving muons; cosmic ray protons interacting in the atmosphere on the other side of the Earth and giving atmospheric neutrinos; neutrinos from cosmic sources. The figure refers to the depth of ANTARES, but is otherwise quite general.
tary view of the sky to that offered by one located, like AMANDA or ICECUBE, at the South Pole. For this reason three different projects were recently initiated in the Mediterranean. The NESTOR experiment managed to deploy a prototype detector near Pylos (Greece) at a depth of 4000 m. The prototype operated for a short period in 2003 and results on the flux of atmospheric muons were published.8 An additional project, NEMO, was initiated by an Italian Collaboration for the construction of a large detector off the coast of Sicily, at a depth of 3500 m. They deployed and operated in 2007 a prototype detector at a depth of 2000 m. Results on the atmospheric muon flux are now being published.9 At the same time this collaboration has deployed a ≈ 100 km long electro-optical cable connecting the shore station with the site proposed for a 1 km3 apparatus and built a new prototype of larger size, that they plan to deploy and connect in the coming months.10 The third project, ANTARES, has succeeded in building and has been operating since 2006 a large detector off the southern French coast (about 40 km off Toulon). This is a multidisciplinary experiment carried out by a large European Collaboration, whose main aim, of detecting ν’s of cosmic origin, is accompanied by parallel research interests in the fields of marine biology, geophysics and oceanography. Being this the experiment that has made the most impressive progress over the last few years, I will concentrate on it in the rest of the talk. 2. Potential Neutrino Sources The idea behind most calculations of expected neutrino fluxes from astrophysical sources is based on the analogy between the emission mechanism of high energy photons and neutrinos in such objects. It is commonly assumed that at accelera-
tion sites a fraction of high-energy cosmic rays interact with the ambient matter or photon fields, producing both neutral and charged pions. Neutral pions decay, producing high-energy photons, while charged pions decay into muons and neutrinos. A large number of high energy γ-ray sources have recently been discovered by γ-ray telescopes. Many of these are thought to be potential neutrino sources, with neutrino energy spectra similar to those of the γ-rays. Many of these sources are located within our Galaxy, others being extragalactic. Moreover we have a number of steady sources, yielding a time-invariant flux with a power law energy spectrum with cutoffs in the 10 TeV to 1 PeV region, and others characterised by short flares of high-energy radiation (Transient Point Sources). In this latter category we find the gamma ray bursts (GRB), characterised by extremely energetic emissions of electromagnetic radiation, all of extragalactic origin. In view of this it is customary, in searching for potential sources of neutrinos, to look at the characteristics of γ sources. γ's may however be produced in purely electromagnetic processes; not all γ sources can therefore be also sources of neutrinos. The most promising candidate sources are, of course, those located in our Galaxy. The EGRET and more recently the FERMI experiments have observed a very large number of γ-ray sources, of which a large fraction are located within our Galaxy. In the class of Galactic sources it is worth mentioning several shell-type supernova remnants (SNRs) like the Vela Jr SNR (RX J0852.0-4622). Recent high energy γ-ray observations of this SNR by HESS11 strongly suggest a hadronic origin for the observed γ's and therefore a possible source of high energy neutrinos, as proposed in Ref. 12. Pulsar Wind Nebulae (PWNe) are SNRs having at their center a pulsar which blows jets of fast-moving material into the expanding shell. Calculations performed13 for a number of such PWNe, like the Crab, the Vela X SNR and others, suggest that these could be intense sources of high-energy neutrinos. The Galactic Center Region (GCR) is another likely origin of high-energy neutrinos. HESS had observed a point-like source of very high energy γ rays at the center of the Galaxy (HESS J1745-290) compatible with the position of the supermassive black hole Sagittarius A*, the SNR Sagittarius A East and a Galactic center source reported by other groups. Later they found a second source of high energy γ-rays, the PWN G0.9+0.1, in the same region. These are both shown in the upper left part of figure 2. Subtracting from the γ-ray map for the entire region the contributions of these two powerful sources, they are left with an extended emission spatially coincident with the unidentified EGRET source 3EG J1744-3011 and, in addition, an emission extending along the Galactic plane in the region |l| < 0.8°, |b| < 0.3°, well described by a power-law spectrum with photon index Γ = 2.29. This region correlates very well with that of interstellar material in giant molecular clouds, as can be seen in the bottom-left part of the same figure. The interpretation by HESS4 of this observation is that the molecular clouds constitute an efficient target for high-energy cosmic rays, whose interactions give rise to very high-energy π0 and therefore γ's. This interpretation is further strengthened by the hardness of
Fig. 2. [Top left]: γ-ray count map; [Bottom left] the same map after subtraction of the two dominant point sources, showing an extended band of γ-ray emission. Axes are Galactic latitude (x) and Galactic longitude (y), units are degrees. White contour lines indicate the density of molecular gas, traced by its Carbon Monosulphide (CS) emission. [Right]: The continuous (red) curve shows the density of molecular gas, traced by CS emission. The black histogram shows the γ-ray counts versus l for −0.2° < b < 0.2°. The dashed (green) line shows the γ-ray flux expected if the cosmic-ray density distribution can be described by a Gaussian centered at l = 0° and with r.m.s. = 0.8°.
the γ-ray spectrum observed in this region. If this interpretation is supported by further observations, we shall have an additional powerful source of high energy hadrons and therefore of neutrinos. Microquasars are Galactic X-ray binary systems having relativistic jets observed in the radio band. Their characteristics present strong analogies with those of quasars, whence their name. Microquasars have been proposed as acceleration sites of hadrons for energies up to 10¹⁶ eV. Indeed the presence of relativistic hadrons in their jets has been reported in Ref. 14. It follows that microquasars are promising neutrino sources. A microquasar of interest is LS 5039, observed by HESS, for which the authors of Ref. 15 have shown that electrons can hardly account for the observed TeV γ-ray signal and therefore the parent particles should be protons or nuclei. Of great interest, because of the enormous energies involved and in spite of their, in most cases, cosmological distances, are some extragalactic transient sources, like Active Galactic Nuclei (AGN) or GRBs. This is the case of the recent observations by the Auger experiment,18 which has discovered that the 27 highest-energy cosmic ray events which they observe (having energies above 55 EeV) do not come equally from all directions. Comparing the clustering of these events with the known locations of 318 Active Galactic Nuclei, they find that most of them correlate well with the locations of AGNs in some nearby galaxies, such as Centaurus A.a These hadronic sources are therefore likely sources of high energy neutrinos. In the search for point-like neutrino sources a non-negligible background is that due to the diffuse flux of atmospheric neutrinos. One needs therefore detectors with a very good
a The evidence for this correlation has weakened after the analysis of the latest data,26 including a total of 55 events.
Fig. 3. The celestial sphere in galactic coordinates showing the arrival directions of the 27 highest energy (greater than 57 EeV) cosmic rays detected by Auger. These are shown as circles of radius 3.1◦ . The positions of 472 AGN within 75 Mpc are shown as red *’s. The blue region defines the field of view of Auger; deeper blue indicates larger exposure. The solid curve marks the boundary of the field of view, where the zenith angle equals 60◦ . The closest AGN, Centaurus A, is marked as a white *. The supergalactic plane is indicated by the dashed curve. This plane delineates a region where a large number of nearby galaxies, including AGNs, are concentrated.
(about 0.1◦ ) angular resolution. In the case of transient sources the search relies also on time coincidences with observations by detectors operating in the visible or γ-ray regions, typically onboard satellites. In addition to a search for point sources (steady or transient) neutrino telescopes aim at the detection of what is known as the diffuse flux, due to a large number of unresolved sources, plus neutrinos from cosmic-ray interactions with the cosmic microwave background or with interstellar dust. The diffuse flux, being isotropic, has to be disentangled from the large background of cosmic-ray neutrinos. Being the latter of relatively lower energy, the technique used to separate the two relies on a good energy resolution. It is worth recalling in this context that the detection of a diffuse multi-TeV gamma ray emission from a region of the Galactic disk close to the inner Galaxy has recently been reported by MILAGRO.16 This result has been used by the authors of ref17 to place constraints on the diffuse neutrino flux from the inner Galaxy. 3. The ANTARES Experiment The detector,19,22–25,30,31 schematically shown in figure 4, consists of 12 detection lines, each holding 75 10” Hamamatsu PMTs arranged in triplets (storeys) and looking downward, at an angle of 45◦ to the vertical. The PMTs are housed in pressure-resistant glass spheres.20,21 The separation between storeys in each line is 14.5 m, starting 100 m from the sea-floor; the distance between pairs of strings in the horizontal plane is 60-70 m. Each PMT triplet is held in place by a titanium frame, as shown in the inset of figure 4, attached to a vertical electro-optical cable used for data and clock signals as well as for power transmission. Digital data transmission takes place on optical fibers. At the center of the frame a titanium cylinder (LCM) encloses the readout/control electronics, together with com-
Fig. 4. [Left] Sketch of the ANTARES detector (only a few of the storeys are shown). The inset at the top-right shows an individual storey, with a titanium frame holding the three glass spheres, each housing a PMT. The photomultipliers look downwards at an angle of 43 ◦ in such a way as to optimize the acceptance for muons moving upwards. The detector layout in the horizontal plane is shown by the inset on the left. [Right] Footprint of the complete detector, using the first hit recorded on each PMT from down going muons.
passes/tilt meters used for geometrical positioning. Some of the storeys also house LED beacons, each containing 36 LEDs, which provide very fast pulses used for timing calibrations. Lasers located at the bottom of two of the lines provide additional means for timing calibrations. For readout purposes, each group (sector) of five storeys in any given line is treated separately. Hydrophones, attached one per sector, are used, in conjunction with sonic transmitters located at fixed locations on the sea-floor and with the compasses-tilt meters installed in the LCMs, for precise position determinations. An "Instrumentation line", equipped with instruments used to monitor other important sea parameters such as temperature, pressure, salinity, light attenuation length and speed of sound, is part of the detector. Each line is connected, via an electronics module located at its bottom, to a junction box (JB) in turn connected via a 42 km long electro-optical cable to the shore station. All data are collected here by a computer farm, where a fast processing of events satisfying predetermined trigger requirements is performed. Precise timing is provided by a 20 MHz high accuracy on-shore clock synchronised with the GPS, distributed via the electro-optical cable and the JB to each electronics module. The expected performance of the detector has been studied in detail using MC simulations. The effective area for ν's, shown in figure 5, reaches a maximum of ≈ 30 m² at 10⁷ GeV. For ν's at small nadir angles there is a drastic decrease at very high energies, due to absorption by the Earth. The ν angular resolution, shown in the plot on the right in the same figure, is dominated by electronics at high energies, where it reaches a value of ≈ 0.2 ÷ 0.3°. At lower energies it is dominated by the kinematics of µ production by ν's.
Fig. 5. Expected detector performance. [left] effective area as a function of energy for three different intervals of nadir; [right] squares: neutrino angular resolution as a function of energy; triangles: muon angular resolution as a function of energy.
4. Detector Installation And Operation Following many tests carried out over several years, the installation of the detector in its final configuration started in the spring of 2006, when the first line became operational. The installation proceeded then in phase with the line assembly so that five lines were operational by the middle of 2007. The last two lines were finally installed and became operational in May 2008. The installation of each line required two separate sea operations. In a first operation the line was loaded on a ship, transported to the ANTARES site and deployed at the predetermined position, with a typical precision of a few meters. In a subsequent operation a remotely operated submarine (ROV) was used to connect one end of an electro-optical cable to the bottom of the line and the other to the junction box. Data taking is controlled at the shore station, located at La-Seyne-Sur-Mer (Toulon). Here a large computer farm processes all the data from the detector (all data are sent to shore), applying pre-selected trigger algorithms. Various trigger schemes are in place. These include a minimum-bias trigger used to monitor the data quality, a general-purpose trigger used to select muons over the whole solid angle and a directional trigger used mainly to select with high efficiency events from the Galactic Centre Region. Additional trigger schemes are used to allow multimessanger searches, like those for GRBs, based on the gamma-ray bursts coordinates network (GCN). The on-shore data processing system is linked to the GCN. There are about 1 or 2 GCN alerts distributed per day and half of them correspond to a real gamma-ray burst. For each alert, all raw data are saved to disk during a preset period (presently 2 minutes). The buffering of the data in the data filter processors is used to save the data up to about one minute before the actual alert. ANTARES is also capable of distributing its own event alerts to external detectors. A collaboration with the TAROT optical telescope has been recently established to this purpose. The direction of interesting neutrino triggers (two neutrinos
within 3° within a time window of 15 minutes, or a single event of very high energy) are sent to the Chile telescope so that a series of optical follow-up images can be taken. 4.1. Measured rates The signal rate of the PMTs is monitored continuously. It is characterised by a continuous component, due to ⁴⁰K (about 30 kHz) and bioluminescent bacteria (about 40 kHz). Sometimes sudden, short bursts which may reach up to several MHz are also detected, presumably due to living macro-organisms. The time dependence of the typical median rate is shown in figure 6. The fraction of time in which bursts occur (Burst Fraction or BF) ranges between a few percent and 50 percent, with an average value of about 4 ÷ 5%. A correlation between the BF and the water current speed is found, as shown in figure 7.
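A burst fraction of the kind defined above could be computed from a sampled rate time series as sketched below; the burst threshold and the toy rates are illustrative assumptions, not the ANTARES definition or data.

```python
# Illustrative computation of a "burst fraction": the fraction of time samples
# in which the PMT counting rate exceeds a chosen burst threshold. Both the
# threshold and the toy rate samples are assumptions made for illustration.
def burst_fraction(rates_khz, threshold_khz=200.0):
    """Fraction of samples with rate above threshold_khz."""
    above = sum(1 for r in rates_khz if r > threshold_khz)
    return above / len(rates_khz)

# Toy time series: a ~70 kHz baseline (40K + steady bioluminescence) with a
# few bioluminescence bursts reaching the MHz range.
toy_rates = [70.0] * 95 + [800.0, 1500.0, 3000.0, 900.0, 400.0]
print(f"burst fraction: {burst_fraction(toy_rates):.1%}")   # 5.0% for this toy
```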
Fig. 6. Time dependence of the median rate for various PMTs.
Fig. 7. Correlation between burst-fraction and current speed observed in ANTARES.
Typical values of the trigger rates vary between 4 and 8 Hz for the general purpose trigger and between 5 and 20 Hz for the Galactic Center trigger. A fast online reconstruction has been implemented. This is used as a further monitoring tool during data-taking. A distinction has to be made between muons reconstructed using two or more lines (multiline muons), for which both space angles are available, and those reconstructed using a single line, for which only the zenith angle is available. Figure 8 shows a typical plot of the rate of reconstructed muons per day. The number of multiline upgoing muons (muon neutrinos) per day has an average value of 3.2. The plot on the right of figure 4 shows a "footprint" of the
Fig. 8. Rate of on-line reconstructed muons per day in 2009.
complete detector, obtained by using the first recorded hit generated by down going µ's on each PMT. 5. Preliminary Results A very large number of triggers has been collected over these years. They are mainly due to atmospheric µ's, together with a smaller number of ν's. It must be noticed that coincidences between pairs of PMTs belonging to an individual storey can be due to muons or also to the decay of ⁴⁰K if this occurs nearby. Since bioluminescence is a single-photon process, it only contributes accidental coincidences. The plot on the left in figure 9 shows a typical distribution of measured time differences between hits in neighbouring PMTs on the same storey. A peak centred at 0 ns is visible, mainly due to ⁴⁰K decays, with a smaller contribution due to muons. The data have been fitted to the sum of a Gaussian distribution plus a flat background. The full width at half maximum of the fitted Gaussian is about 9 ns. Since both the ⁴⁰K concentration and the muon flux are constant in time, the rate of genuine coincidences under the peak may be used as a stability monitor of the pair of PMTs. Any deviation of the center of the peak from zero can at the same time be used as a monitor of the time stability. The distribution on the right
in figure 9 shows the coincidence rates (after background subtraction) for all PMT pairs in the full detector. The average coincidence rate is 16 ± 2 Hz, in reasonable agreement with the results of a Monte Carlo simulation.
Fig. 9. [Left] Coincidence plot for pairs of PMTs belonging to the same storey. [Right] Coincidence rates (background subtracted) observed by all PMTs in the 12 detector lines.
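The Gaussian-plus-flat-background fit described above can be sketched as follows; the toy Δt histogram, binning and starting values are assumptions for illustration, not ANTARES data.

```python
# Sketch of the Gaussian-plus-flat-background fit to the time-difference
# distribution between PMTs of the same storey. The toy histogram is generated
# on the fly; it is not ANTARES data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
signal = rng.normal(0.0, 3.8, size=4000)            # ~9 ns FWHM genuine coincidences
background = rng.uniform(-20.0, 20.0, size=6000)    # flat accidental coincidences
counts, edges = np.histogram(np.concatenate([signal, background]),
                             bins=80, range=(-20.0, 20.0))
centres = 0.5 * (edges[:-1] + edges[1:])

def model(t, amp, mu, sigma, flat):
    """Gaussian peak on top of a flat background."""
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2) + flat

popt, _ = curve_fit(model, centres, counts, p0=(100.0, 0.0, 4.0, 50.0))
amp, mu, sigma, flat = popt
print(f"peak at {mu:+.2f} ns, FWHM = {2.355 * abs(sigma):.1f} ns, flat = {flat:.1f}/bin")
# The peak position monitors the relative timing of the PMT pair, while the
# area under the Gaussian monitors the genuine (40K + muon) coincidence rate.
```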
A vertically downgoing muon may generate coincidences as described above (referred to in the following as L1 coincidences if they occur within a 20 ns gate) that add to the ⁴⁰K contribution. It may in addition generate delayed L1 coincidences between adjacent storeys of a given line. Figure 10 shows a typical distribution of time differences between hits recorded in adjacent storeys of a line, where L1 coincidences have been required in both storeys. A clear peak is seen, centered at
Fig. 10. Distribution of the measured time differences between hits from adjacent storeys (lower-upper). A local coincidence is required in both storeys. The dashed line shows the result of a Monte Carlo simulation.
about 20 ns, on top of a continuous background due to random coincidences. The width of the distribution reflects the angular spread of atmospheric muons. The dashed curve shown in the plot is the result of a Monte Carlo simulation. Repeating this analysis for pairs of storeys located at different depths along a
line it has been possible to obtain a measurement of the muon flux as a function of depth.32 The result is shown in figure 11. Here the grey band shows the normalisation uncertainty in the data b , while the dashed curves refer to Monte Carlo simulations using two different models. Figure 12 shows the θ (zenith) distribution
Fig. 11. The measured muon flux as a function of depth. The scale on the right refers to the vertical muon intensity. The dashed lines show the results of Monte Carlo simulations using two different models. The grey band shows the normalisation uncertainty on the data.
for reconstructed muons, compared with the MC predictions. On the right in the same figure is shown the angular distribution of neutrino-induced muons, compared with the Monte Carlo predictions. The agreement is good. It is estimated that up to the end of 2009 more than 2000 ν-induced µ's, together with a much bigger number of down going (atmospheric) µ's, were present in the data collected. The analysis of this large amount of data is progressing steadily.30,31 The plot in the left part of figure 13 shows a very preliminary sky-map of a sample of ν-induced µ's obtained in 2007 using the data provided by the, at the time still incomplete, 5-line detector. The plot on the right in the same figure shows the one-year sensitivity expected for the full detector, compared with limits published by other experiments.27–29 Also shown is the sensitivity reachable with the data from the reduced 5-line ANTARES detector operated in 2007. 6. Conclusions The construction of the ANTARES detector has been successfully completed in June 2008. ANTARES is presently the only undersea neutrino detector in operation and the largest detector of astrophysical neutrinos present in the northern hemisphere. A large quantity of data has been collected since the installation of the first lines and data-taking is continuing steadily. The performance of the detector is being
b A large part of this uncertainty is due to the fact that our PMTs are looking downwards, thus with only a lateral exposure to Čerenkov light from downgoing muons.
Fig. 12. [left]: Experimental θ distribution of the reconstructed muons (black points with error bars) compared with that expected from the Monte Carlo predictions for the sum of the muon and neutrino contributions (histogram in green). [right]: Same distribution shown on a linear scale, for up going muons (i.e. neutrino-induced muons). The separate Monte Carlo contributions due to muons and neutrinos are also shown on both plots. The data refer to the (10-12)-line detector and only to events reconstructed on more than one line.
Fig. 13. [left] Sky-map of the neutrino events found in the 5-line 2007 data sample. [right]: The blue-shaded area shows the sensitivity estimate of the 5-lines data obtained in 2007 (140 days), as a function of declination. Also shown are the limits provided by the MACRO experiment (2299.5 days), by Superkamiokande (1645.9 days) and by AMANDA (1387 days). The continuous line shows the 1-year sensitivity of the full ANTARES detector.
continuously monitored and fully matches the design parameters. A fast on-line event reconstruction is available and is now used to provide fast triggers for multimessenger searches. A very large number of muons have been reconstructed up to the end of 2009, together with a few thousand neutrinos. The depth dependence of the muon flux has been measured and comparisons with Monte Carlo predictions show excellent agreement. A preliminary analysis of an initial sample of neutrinos obtained with the 5 lines operated in 2007 is now being completed, in order to search for neutrinos from a number of known astrophysical sources. Sensitivity estimates have been provided, both for the full detector and for the reduced 5-line apparatus, showing the improvement with respect to previous results.
LOW ENERGY SOLAR NEUTRINO SPECTROSCOPY: RESULTS FROM THE BOREXINO EXPERIMENT

D. D'ANGELO on behalf of the Borexino Collaboration
Istituto Nazionale di Fisica Nucleare - sez. di Milano, via Celoria 16, 20133 Milano, Italy
∗ E-mail: [email protected]

Until very recently, real-time solar neutrino experiments detected only the tiny fraction (about 0.01%) of the total neutrino flux lying above a few MeV; the sub-MeV region remained explored only by radiochemical experiments, which have no spectroscopic capability. The Borexino experiment, an unsegmented large-volume liquid scintillator detector located in the Gran Sasso National Laboratory in central Italy, is at present the only experiment in the world acquiring real-time solar neutrino data in the low-energy region, via elastic scattering on the electrons of the target mass. The data-taking campaign started in 2007 and rapidly led to the first independent measurement of the monochromatic 862 keV 7Be line of the solar neutrino spectrum, which is of special interest because of the very loose limits coming from existing experiments. The latest measurement, after 41.3 t·yr of exposure, is (49 ± 3stat ± 4syst) counts/(day·100 t) and leaves the hypothesis of no oscillation inconsistent with the data at the 4σ level. It also represents the first direct measurement of the survival probability for solar νe at the 7Be energy, Pee(7Be) = 0.56 ± 0.10, in the vacuum-dominated oscillation regime. Recently Borexino was also able to measure the 8B solar neutrino interaction rate down to a threshold energy of 3 MeV, the lowest achieved so far. The inferred electron neutrino flux is ΦES(8B) = (2.7 ± 0.4stat ± 0.1syst) × 10^6 cm^−2 s^−1. The corresponding mean electron neutrino survival probability is Pee(8B) = 0.29 ± 0.10 at the effective energy of 8.9 MeV. Both measurements are in good agreement with other existing measurements and with the predictions of the SSM under the MSW-LMA oscillation scenario. For the first time, thanks to the unprecedented radio-purity of the Borexino target and construction materials, a single detector confirms the presence of a transition between the low-energy vacuum-dominated and the high-energy matter-enhanced solar neutrino oscillations. A further confirmation of the LMA scenario is provided by the absence of a day-night asymmetry in the 7Be signal. These experimental results allow us to improve the knowledge of the pp neutrino flux, to place an upper limit on the CNO flux and also to explore non-standard neutrino properties, improving the upper limit on the neutrino effective magnetic moment. Calibration campaigns aimed at reducing the systematic errors on the fiducial-volume definition and on the detector energy response have been performed and the data analysis is in progress. Borexino has also recently observed antineutrinos from the Earth, for the first time at more than 3σ C.L., and has measured a rate of 3.9 +1.6/−1.3 (+5.8/−3.2) events/(100 ton·yr) at 68.3% (99.73%) C.L. Borexino is also a powerful supernova neutrino detector. Future prospects of the experiment include reducing the systematic error on the 7Be flux to below 5% and the direct measurement of additional solar neutrino components such as pep, CNO and possibly pp.

Keywords: Solar neutrino; sub-MeV neutrino; LMA; liquid scintillator; Borexino.
1. Introduction

The Sun is an intense source of electron neutrinos, produced in the nuclear reactions of the p-p chain and of the CNO cycle. While photons reach the surface of the Sun after some tens of thousands of years, neutrinos emitted in the fusion reactions easily escape the Sun's core and arrive at the Earth's surface. Owing to a very low interaction cross section (∼10^−46 cm²), most of the neutrinos cross the Sun and the Earth without interacting at all. They therefore provide a unique probe for studying both the nuclear fusion reactions that power the Sun and the fundamental properties of neutrinos.

Historically, the first successful observation of solar neutrinos was performed by Ray Davis's chlorine experiment using the inverse beta process 37Cl + νe → 37Ar + e− (Homestake experiment1). The experiment found about one third of the neutrino flux predicted by the Standard Solar Model (SSM) of the time, posing for the first time the so-called solar neutrino problem. Solar neutrinos have now been studied for over 40 years by means of radiochemical2–5 and water Cherenkov6,7 detectors. Combining these results with those from experiments detecting neutrinos from other sources, such as reactor anti-neutrinos,8 it has become clear that the key to the solar neutrino problem lies in neutrino physics, while advances in helioseismology9 left little room for deviations from the solar models. The solution implies physics beyond the Standard Model of elementary particles, namely a non-zero neutrino mass and ν flavour oscillations, with matter effects in the Sun playing a crucial role (MSW effect).10 The range of the parameters describing the oscillation phenomenon has recently been constrained, using data from solar and reactor neutrino experiments, to the so-called LMA (Large Mixing Angle) region of the (θ12, ∆m²12) plane (tan²θ12 = 0.47 +0.06/−0.05 and ∆m²12 = 7.59 +0.21/−0.21 · 10^−5 eV²).11

A central feature of the MSW-LMA solution is the prediction that neutrino oscillations are dominated by vacuum oscillations at low energies (<1 MeV) and by resonant matter-enhanced oscillations, taking place in the Sun's core, at higher energies (>5 MeV). A measurement of the survival probability as a function of the ν energy is very important to confirm the MSW-LMA solution or to reveal possible traces of non-standard neutrino-matter interactions or non-standard neutrino properties (mass-varying ν).12 The relevance of measuring the various solar neutrino components is then twofold: on one side they can increase the confidence in the oscillation scenario, and on the other side, assuming knowledge of the oscillation parameters, they can provide a measurement of the absolute solar neutrino fluxes, helping for example in the scientific debate between high- and low-metallicity solar models.14

The primary goal of Borexino is the real-time measurement of the mono-energetic (862 keV) neutrino flux, originating from electron capture on 7Be in the Sun's core, through neutrino-electron elastic scattering, with a precision of at least 5%. This is of special interest because of the very loose experimental limits on this flux coming from the combined data of existing experiments.
The allowed range for the 7Be neutrino flux, before the Borexino results, was φ(7Be) = 1.03 +0.24/−1.03 of the flux predicted by the SSM.15

Among existing solar neutrino experiments, SNO7 and Super-K6 measure the solar neutrino fluxes with a high threshold (5 MeV), because of the low Cherenkov light yield and the high intrinsic backgrounds, so they are only sensitive to 8B neutrinos. Radiochemical detectors,2–5 on the other hand, featured a lower threshold, below 1 MeV, but did not measure the ν energy. Borexino has opened a new chapter in the experimental history of solar neutrinos by making real-time solar ν spectroscopy feasible down to 200 keV. This was possible by employing a liquid scintillator technique, which has several advantages: the light yield is a factor of 50 higher than the Cherenkov one, and the very low solubility of ions and metal impurities makes it possible to reach unprecedented levels of radio-purity.

Solar neutrinos are detected in Borexino through their elastic scattering on electrons in the scintillator. Electron neutrinos (νe) interact through charged and neutral currents and, in the energy range of interest, have a cross section about 5 times larger than νµ and ντ, which interact only via the neutral current. The electrons scattered by neutrinos are detected by means of the scintillation light, retaining the information on the energy, while the information on the direction of the scattered electron is lost. Electron-like events induced by solar neutrino interactions cannot be distinguished, on an event-by-event basis, from electrons or gammas due to radioactive decays, so a strong effort has been devoted to the containment and understanding of the background. The design of Borexino is based on the principle of graded shielding, with the inner-core scintillator at the center of a set of concentric shells of increasing radio-purity. All components were screened and selected for low radioactivity, and the scintillator and buffer were purified on site at the time of filling. The purification strategy relies on filtration, multi-stage distillation and nitrogen purging.18

The present work reports, after a brief description of the detector, the main goals reached in the so-called Borexino phase I data-taking period (May 2007 - Oct 2008), namely the measurement of the 7Be solar neutrino flux and its day/night asymmetry, the 8B solar neutrino flux, the best current limits on the ν magnetic moment and the detection of geo-neutrinos. The calibration campaigns, the role of Borexino as a supernova neutrino detector and the next goals are also described.
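For orientation, the detection principle described above can be summarized by a schematic rate relation (a sketch added here for clarity; it is not written explicitly in the text): the rate of scattered electrons is an integral of the solar flux weighted by the oscillated mixture of νe and νµ,τ elastic-scattering cross sections,

R = N_e \int dE_\nu \, \Phi(E_\nu)\,\Big[ P_{ee}(E_\nu)\,\sigma_{\nu_e e}(E_\nu) + \big(1-P_{ee}(E_\nu)\big)\,\sigma_{\nu_{\mu,\tau} e}(E_\nu) \Big],

where N_e is the number of target electrons in the fiducial mass. Since σ(νµ,τ e) is roughly one fifth of σ(νe e) at these energies, a measured rate constrains Pee once the SSM flux Φ is assumed.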
2. The Detector

The Borexino detector is located at the Gran Sasso National Laboratories (LNGS) in central Italy, at a depth of 3800 m.w.e. The active mass consists of 278 t of pseudocumene (PC) doped with 1.5 g/l of PPO. The scintillator is contained in a thin (125 µm) nylon vessel and is surrounded by two concentric PC buffers doped with DMP, a scintillation-light quencher. The scintillator and buffers are contained in a Stainless Steel Sphere (SSS) with a diameter of 13.7 m. The SSS is enclosed in a Water Tank (WT) containing 2100 t of ultra-pure water as an additional shield.
Fig. 1. Schematics of the Borexino detector.
The scintillation light is detected by 2212 8-inch photomultiplier tubes uniformly distributed on the inner surface of the SSS. An additional 208 8-inch PMTs instrument the WT and detect the Cherenkov light radiated by muons in the water shield, serving as a muon veto. A detailed description of the detector can be found in Ref. 17. Key features of the scintillator are the high light yield (500 p.e./MeV) and the fast time response, which allows the event position to be reconstructed by means of a time-of-flight technique. An event is recorded when either the number of PMT pulses in the Inner Detector exceeds 25 within a time window of 99 ns (the corresponding energy threshold is about 40 keV) or the number of PMT pulses in the Outer Detector exceeds 6 within 150 ns. Both sub-detectors are always read out at the same time. When a trigger occurs, a 16 µs gate is opened and the time and charge of each PMT pulse are collected. The offline software identifies the shape and the length of each scintillation pulse and reconstructs the position and energy of each deposit. Pulse-shape analysis is performed to identify various classes of events, among them electronic noise, pile-up events, muons, α and β particles.
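As an illustration of the multiplicity trigger just described, the following minimal Python sketch counts PMT pulses in a sliding time window; the thresholds (25 pulses within 99 ns for the Inner Detector, 6 pulses within 150 ns for the Outer Detector) are taken from the text, while the function and variable names are illustrative and do not refer to the actual Borexino DAQ code.

import bisect

def multiplicity_trigger(pulse_times_ns, threshold, window_ns):
    """True if more than `threshold` PMT pulses fall inside any sliding
    time window of length `window_ns` (pulse times in nanoseconds)."""
    times = sorted(pulse_times_ns)
    for i, t0 in enumerate(times):
        # index of the first pulse later than t0 + window_ns
        j = bisect.bisect_right(times, t0 + window_ns)
        if j - i > threshold:
            return True
    return False

if __name__ == "__main__":
    # toy example: 30 pulses bunched within ~90 ns easily satisfy
    # the Inner Detector condition (>25 pulses within 99 ns)
    inner_pulses = [3.0 * k for k in range(30)]
    outer_pulses = [0.0, 400.0, 900.0]          # sparse: no muon tag
    inner_fired = multiplicity_trigger(inner_pulses, threshold=25, window_ns=99)
    outer_fired = multiplicity_trigger(outer_pulses, threshold=6, window_ns=150)
    print("event recorded:", inner_fired or outer_fired)   # -> True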
3. Radiopurity and Background Levels

Besides reducing the external background, the key requirement for measuring low-energy ν with Borexino is an extreme radio-purity of the scintillator itself. During 15 years of dedicated R&D studies, the Borexino collaboration developed a purification strategy which proved to be effective in removing the most dangerous contaminants.18 In particular, the 40K contamination was found to be below 3·10^−18 g/g (90% C.L.), while the contamination due to 238U and 232Th was reduced to the unprecedented levels of (1.6 ± 0.1)·10^−17 g/g and (5 ± 1)·10^−18 g/g, respectively.

By far the most important source of background is 14C, a β emitter with a 156 keV endpoint, which is naturally present in an organic liquid scintillator. Its isotopic ratio in the scintillator batch is evaluated to be 14C/12C = (2.7 ± 0.6)·10^−18, perfectly suited to the planned analysis threshold of 200 keV. Another important background is the 5.3 MeV α emitter 210Po. The ionization quenching of the scintillator reduces the visible energy by a factor of 13 and moves the α peak into the energy range of the 7Be signal. Its contamination at the beginning of data taking was about 80 counts/day/ton, decreasing afterwards with the expected mean life of 200 days. This background can be statistically subtracted by means of the pulse-shape discrimination made possible by the PC-based scintillator. 210Po is a radon daughter out of equilibrium with its predecessors in the decay chain, such as 210Bi, which is present in the scintillator at a much lower concentration. In spite of this, 210Bi is a more problematic spectral component, as it is a β emitter, which no pulse-shape discrimination can distinguish from a neutrino interaction, encompassing the whole 7Be energy region (Sec. 4) and particularly the region where pep and CNO neutrinos are to be searched for. The most troublesome background for the 7Be ν analysis is, however, 85Kr, an air-borne contaminant emitting electrons with a 687 keV endpoint at a rate of the same order as the 7Be signal. The 85Kr content in the scintillator was probed through the rare decay sequence 85Kr → 85mRb + e− + ν̄e, 85mRb → 85Rb + γ (τ = 1.5 µs, B.R. = 0.43%), which offers a delayed-coincidence tag, and it is evaluated to be (28 ± 7) counts/(day·100 t). The large error is due to the low statistics. At energies above 800 keV the dominant background is cosmogenically produced 11C (β+ decay, Q = 1.98 MeV). It is observed at an average rate of 25 counts/(day·100 t), which is in the range predicted by previous studies, though slightly higher.19,20

4. The 7Be Signal

The basic signature for the mono-energetic 0.862 MeV 7Be ν is the Compton-like edge of the recoil electrons at 665 keV, as shown in Fig. 2. Events have been selected by means of the following cuts:
• Only one-cluster events are accepted: the event must have a unique reconstructed cluster in the gate time window (16 µs) in order to reject pile-up and fast coincident events (efficiency ≈ 100%).
• Muons and muon daughters are rejected: events associated with Cherenkov light in the Water Tank are identified as cosmic muons and rejected. A 2 ms veto is applied after each muon crossing the detector to remove afterpulses and muon-induced neutrons (τ ≈ 250 µs); the measured muon rate in Borexino is ∼0.05 s−1 and the dead time introduced by this cut is negligible.
• Space- and time-correlated events are rejected: events occurring within 2 ms of each other at the same place (∆R < 1.5 m) are removed; the Rn daughters occurring before the 214Bi-214Po delayed coincidences are eliminated by vetoing events up to three hours before a coincidence. The total loss of fiducial exposure due to these cuts is 0.7%.
• Fiducial volume cut: to remove external backgrounds, only events reconstructed in the innermost 100 t are accepted. A further volumetric cut |z| < 1.8 m is applied in order to exclude the regions close to the poles, which show a different detector response. The result is a nominal fiducial mass of 78.5 t.

Fig. 2. Raw charge spectrum used for the determination of 7Be neutrinos. The black (upper) curve is the spectrum after the basic selection cuts (see text), the blue (middle) curve is the spectrum after the additional application of the fiducial volume cut and the red (lower) curve is the spectrum including the statistical subtraction of α's.

In Fig. 3 the measured spectrum in 192 days is shown, as obtained after the application of the previous cuts. The most noticeable peak, around 400 keV, is the one due to the 210Po α decay, while at energies above 800 keV the beta spectrum of 11C is clearly visible. The 7Be signal rate in Borexino is obtained by fitting the energy spectrum with a superposition of the spectra due to solar neutrinos and to the non-taggable backgrounds. Two procedures were adopted: one of them includes the 210Po α peak, while in the second one a further pulse-shape discrimination is applied to the data and the α-like events are statistically subtracted. The two results are perfectly compatible22 and give a 7Be neutrino interaction rate of (49 ± 3 ± 4) counts/(day·100 t) after 192 days of live time. According to the Standard Solar Model with high metallicity,13,14 the expected signal for non-oscillated solar 7Be ν is (74 ± 4) counts/(day·100 t), which is reduced to (48 ± 4) counts/(day·100 t) according to the MSW-LMA oscillation parameters. The νe survival probability at the 7Be ν energy corresponding to our result is Pee = (0.56 ± 0.10), and the non-oscillation hypothesis (Pee = 1) can be rejected at 4σ C.L. Therefore Borexino on one hand confirms the MSW-LMA ν oscillation scenario and on the other hand provides the first direct Pee measurement in the low-energy vacuum regime.
Fig. 3. The Borexino energy spectrum obtained after applying the described analysis cuts (see text). The black (upper) curve is the fit including the different components: the red curve (up to 800 keV) is the 7Be neutrino signal; the blue curve (up to 700 keV) is 85Kr; the green curve (up to 1.3 MeV) accounts for 210Bi and CNO neutrinos; the violet curve (from 800 keV) is cosmogenic 11C.
The new Borexino results can be used, along with the results of the radiochemical experiments2–5 and SNO,7 to calculate new limits on the pp and CNO neutrino fluxes. These are at present the best experimental limits and are shown in Fig. 4 for different C.L. Including in addition the solar luminosity constraint, we determine fpp = 1.005 +0.008/−0.020 (1σ) and fCNO < 3.80 (90% C.L.), using the 1-D χ² profile method.23 The result on fCNO can be translated into a contribution to the total solar neutrino luminosity of less than 3.3% (90% C.L.).
Fig. 4. Determination of the flux normalization constants for pp and CNO solar neutrinos without the luminosity constraint, fpp and fCNO (68%, 90%, and 99% C.L.).
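As a rough consistency check of the 7Be numbers quoted above (not part of the original analysis, which performs a full spectral fit), Pee can be extracted from the ratio of the measured rate to the no-oscillation expectation, using the fact that the νµ,τ elastic-scattering cross section is roughly one fifth of the νe one:

# Back-of-envelope estimate of the 7Be survival probability.
# Numbers from the text: measured rate R = 49 c/(day 100t),
# no-oscillation SSM expectation R0 = 74 c/(day 100t);
# r = sigma(nu_mu,tau)/sigma(nu_e) ~ 0.2 is the approximate
# "5 times larger" statement of Sec. 1.
R, R0, r = 49.0, 74.0, 0.2

# R/R0 = Pee + (1 - Pee) * r   =>   Pee = (R/R0 - r) / (1 - r)
Pee = (R / R0 - r) / (1.0 - r)
print(f"Pee ~ {Pee:.2f}")   # ~0.58, consistent with the quoted 0.56 +/- 0.10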
Huge experimental and analysis efforts are now in progress to reduce the errors associated with the measurement of the 7Be signal rate. We have mentioned in Sec. 3 the role of the 85Kr background and its decay branch featuring a β/γ delayed coincidence. At the time of our last published results,22 only 8 β/γ coincidences had been selected in 192 d of live time and the contamination value was taken as a free parameter in the fitting procedure because of the too-large statistical uncertainty. Now, after one year of live time, the uncertainty has been reduced by 50% and the amount of contamination is constrained to (28 ± 7) counts/(day·100 t), so it can be fixed in the fitting procedure. The possibility of reducing the 85Kr contamination through a scintillator purification is presently under study. In particular, 85Kr is a noble gas, and the experience gained with CTF, the 4 t prototype of Borexino,24 showed that nitrogen purging is particularly effective in removing this background. A 7Be rate measurement with a few-percent accuracy also requires a strong reduction of the systematic errors: the main contributions come from the imperfect knowledge of the fiducial volume and of the detector energy response (each of them contributing 6% to the systematic error). We are presently reducing these uncertainties through detector calibration: two calibration campaigns have already been completed, others are scheduled for the next months, and the results are under analysis.
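To illustrate how a delayed-coincidence tag such as the 85Kr → 85mRb branch (τ = 1.5 µs) can be selected, here is a minimal sketch that pairs each candidate prompt event with a subsequent delayed event inside a fixed coincidence window; the window length, the toy event list and the function name are illustrative choices, not the actual Borexino selection.

def delayed_coincidences(event_times_us, window_us=7.5):
    """Return index pairs (i, j) of events separated by less than
    `window_us` microseconds: candidate prompt/delayed coincidences
    (window ~ 5 lifetimes for tau = 1.5 us)."""
    pairs = []
    times = sorted(event_times_us)
    for i, t_prompt in enumerate(times):
        j = i + 1
        while j < len(times) and times[j] - t_prompt < window_us:
            pairs.append((i, j))
            j += 1
    return pairs

# toy event list (microseconds): two events 1.2 us apart form a candidate tag
events = [0.0, 53.1, 54.3, 210.7]
print(delayed_coincidences(events))   # -> [(1, 2)]

In a real selection the pair would additionally be required to match the expected β and γ energies of the branch; only the timing part is sketched here.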
5. The 8B Signal and the Survival Probability in the Vacuum-Matter Oscillation Transition

The extreme radio-purity of Borexino, combined with the efficient software rejection of the cosmogenic background, allows the recoil-electron spectrum induced by 8B solar ν to be investigated down to an energy threshold of 3 MeV. This value is mainly set by the presence at lower energies of a large background of penetrating γ rays emitted by 208Tl decays in the PMT material. Borexino is thus the first experiment providing a real-time measurement of 8B ν below 5 MeV. The major background sources above 3 MeV are muons, gammas from neutron capture, radon emanation from the nylon vessel, short-lived (t < 2 s) and long-lived (t > 2 s, 10C) cosmogenic isotopes, and the bulk 208Tl contamination. In addition to the cuts already discussed, a stronger cosmogenic cut is applied by vetoing the whole detector for 5 s after a crossing muon; 10C candidates are removed by the triple coincidence with the parent muon and the neutron capture on protons, and the 208Tl contamination due to internal radioactivity is evaluated by measuring the delayed coincidence of its branching competitor 212Bi-212Po in the 232Th chain and is statistically subtracted. The number of selected events is (75 ± 13) in 488 d of live time (reduced to 345 d after the application of the analysis cuts). They correspond to a rate of 8B solar ν interactions above 3 MeV of (0.217 ± 0.038stat ± 0.008syst) counts/(day·100 t), which translates into a flux ΦES(8B) = (2.7 ± 0.4 ± 0.1) × 10^6 cm^−2 s^−1.25 The equivalent νe survival probability, assuming the high-metallicity Standard Solar Model,13 is (0.29 ± 0.10) at the effective energy of 8.9 MeV.
The non-oscillation hypothesis is therefore excluded at 4.2σ C.L. Borexino is the first experiment able to measure solar ν fluxes simultaneously in the vacuum-dominated (7Be ν) and matter-enhanced (8B ν) regions. Eliminating the common sources of systematic errors, the ratio between the measured survival probabilities for 7Be and 8B neutrinos is 1.93 ± 0.49, i.e. 1.9σ away from unity. The results obtained for Pee are shown in Fig. 5 and compared with the expectation from the MSW-LMA theory.22 The agreement is fair. In the case of the 8B neutrino flux, an improvement in the precision of the measurement requires an increase of the measuring time and of the fiducial mass, besides a better definition of the fiducial volume mass and of the energy response of the detector.
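The central value of the 8B rate quoted above can be checked directly from the event count and the live time; the short sketch below does this, assuming (as the per-100 t units suggest) that the quoted rate is normalized to a fiducial mass of roughly 100 t.

# 8B rate cross-check from the numbers in the text.
n_events, n_err = 75.0, 13.0        # selected events
live_days = 345.0                    # live time after analysis cuts

rate = n_events / live_days          # counts/(day 100t), assuming a ~100 t fiducial mass
rate_err = n_err / live_days
print(f"rate = {rate:.3f} +/- {rate_err:.3f} c/(day 100t)")
# -> 0.217 +/- 0.038, matching the quoted (0.217 +/- 0.038_stat) c/(day 100t)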
Fig. 5. Electron neutrino survival probability as a function of the neutrino energy, evaluated for 8B neutrinos assuming the BS07(GS98) model and the oscillation parameters from the MSW-LMA solution.
6. Other Neutrino Fluxes

The possibility of using an ultra-pure liquid organic scintillator to detect solar pp neutrinos has been discussed in Ref. 26, where it was shown that a large-volume liquid organic scintillator detector with a 1σ energy resolution of 10 keV at 200 keV would be sensitive to solar pp neutrinos if operated at the target radio-purity levels of the Borexino detector. The energy resolution and the threshold of the current Borexino setup could be sufficient to perform this search without further modifications. The main problem is the disentanglement of the very tail of the 14C spectrum (with possible pile-up) from the pp neutrino spectrum. A series of calibrations with a specially designed 14C source is envisaged in order to characterize the detector performance at very low energies (from 200 keV downwards). The feasibility of the measurement is under study.

A direct test of the solar C, N and O abundances would come from measuring the 13N and 15O neutrinos that follow proton capture on 12C and 14N, respectively.
We have mentioned in Sec. 4 the indirect constraints that come from the Borexino data; however, direct spectroscopy of these neutrinos is also possible. In addition, Borexino has the potential to detect neutrinos from the pep fusion process. For these measurements to be possible, the radioactive contamination in the detector must be kept extremely low. Since sufficiently clean conditions are being met by Borexino, the main remaining background source is 11C, produced in reactions induced by the residual cosmic muon flux on 12C. In this process a free neutron is almost always produced, so 11C can be tagged on an event-by-event basis by looking at the threefold coincidence with the parent muon track and the subsequent neutron capture on protons. This coincidence method has been tested using the Borexino Counting Test Facility20 and is now being implemented in Borexino.

7. Temporal Variations of the Solar Neutrino Flux

The MSW effect with LMA parameters predicts the absence of a day-night asymmetry in the neutrino signal. A preliminary analysis of the Borexino data provides a confirmation of the LMA scenario through the lack of a significant day-night asymmetry in the 7Be flux.27 Data corresponding to a total live time of 422.12 d, with 212.87 days and 209.25 nights, have been analyzed. The day-night asymmetry is defined as Adn = (Cn − Cd)/(Cn + Cd), where Cd and Cn are the counts during day and night time, and it includes the contributions of both the signal and the background. Considering the statistical precision of the 7Be flux determination in the day and night periods, we obtain for the contribution of the signal alone in the 7Be energy window Adn(ν) = (0.02 ± 0.04)stat, compatible with zero. Further analysis and the evaluation of the contribution of possible systematic errors due to the selection of the data sample are in progress.

A very important source of periodic variations of the neutrino flux is the Earth's orbital eccentricity (±3.5%). The statistics of the registered neutrino flux would allow these variations to be measured on a time scale of 3 to 5 years, under the condition of a constant background. No statistically significant result can be obtained with the data accumulated so far.

8. Neutrino Effective Magnetic Moment

There is compelling evidence from solar and reactor neutrino experiments that neutrinos are massive. A minimal extension of the Electroweak Standard Model with a massive neutrino allows a non-zero magnetic moment, proportional to the neutrino mass. In this hypothesis the ν scattering cross section off electrons is modified by the addition of an electromagnetic term proportional to the effective neutrino magnetic moment, dσEM/dT = µν² (π α²em/me²) (1/T − 1/Eν), where Eν is the ν energy and T is the electron kinetic energy. The best limit on the effective neutrino magnetic moment obtained so far using solar neutrino data comes from the Super-Kamiokande detector above a 5 MeV threshold, and it is µν < 1.1 · 10^−10 Bohr magnetons (µB).
The best limit on the magnetic moment from the study of reactor anti-neutrinos (GEMMA experiment) is µν < 3.2 · 10^−11 µB (90% C.L.).28 The study of the maximum allowed deviations from the pure weak electron-recoil shape for 7Be neutrinos, performed with the Borexino 192 d live-time data, leads to a new limit on the effective neutrino magnetic moment of µν < 5.4 · 10^−11 µB at 90% C.L.22 The measurement is unique in neutrino physics because of the large statistics involved, which allows the self-calibration of the neutrino flux. In this way the result does not contain any errors on the fiducial volume, on the oscillation parameters or on the solar neutrino flux itself. With more statistics the limit on the effective neutrino magnetic moment can be improved down to ∼10^−11 µB, constraining the 85Kr content in the spectral fit to the value measured using the low-probability decay branch described in Sec. 3.

9. Detector Calibrations

Three calibration campaigns with radioactive sources were completed during 2008 and 2009, with the goal of reducing the systematic uncertainties on the 7Be and 8B signals to the level of a few percent. Radioactive sources were inserted at the inner-vessel center and along the vertical axis (first calibration campaign) and in more than 100 positions on and off the vertical axis (second and third calibration campaigns). An Am-Be source and several gamma sources were used for the energy calibration, while a source made of a quartz sphere filled with scintillator loaded with radon and 14C was used to study the position reconstruction as a function of the event energy and of the event type (α, β and γ) and to map the relative changes of the reconstructed energy at various positions. The true position can be determined with 1 cm accuracy through the use of a red-light laser (mounted on the source support) monitored by a system of CCD cameras. This position is then compared with the one reconstructed, by means of the time-of-flight technique, from the scintillation light induced by the radioactive decays and detected by the PMTs. Particular care has been devoted to the design of the source-insertion system, to the choice of its materials and to the definition of the insertion procedure, in order to minimize the risk of introducing any radioactive contaminant into the detector that would spoil the unprecedented performance of Borexino. The calibration data are under analysis.
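The time-of-flight position reconstruction mentioned above can be illustrated with a minimal sketch: the event vertex and emission time are found by minimizing the residuals between the measured hit times and the expected light travel times to each PMT. The fit setup below (PMT positions, effective refractive index, starting point) is purely illustrative and is not the Borexino reconstruction code.

import numpy as np
from scipy.optimize import minimize

C_NS = 0.2998            # speed of light in m/ns
N_EFF = 1.55             # assumed effective refractive index of the scintillator

def chi2(params, pmt_xyz, hit_times_ns):
    """Sum of squared time residuals for a vertex (x, y, z) and emission time t0."""
    x, y, z, t0 = params
    dist = np.linalg.norm(pmt_xyz - np.array([x, y, z]), axis=1)
    expected = t0 + dist * N_EFF / C_NS
    return np.sum((hit_times_ns - expected) ** 2)

# toy setup: a few PMTs on a 6.85 m sphere and hits generated from a known vertex
pmt_xyz = 6.85 * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                           [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
true_vertex, true_t0 = np.array([1.0, -0.5, 2.0]), 5.0
hits = true_t0 + np.linalg.norm(pmt_xyz - true_vertex, axis=1) * N_EFF / C_NS

fit = minimize(chi2, x0=[0, 0, 0, 0], args=(pmt_xyz, hits))
print(fit.x)   # -> approximately [1.0, -0.5, 2.0, 5.0]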
10. Geo-Neutrino Detection

Geo-neutrinos, electron anti-neutrinos produced in the beta decays of naturally occurring radioactive isotopes in the Earth, are a unique direct probe of our planet's interior. The first observation of geo-neutrinos at more than 3σ C.L. has been performed with the Borexino detector at the Laboratori Nazionali del Gran Sasso. Anti-neutrinos are detected through the inverse beta-decay reaction. With a 252.6 ton·yr fiducial exposure after all selection cuts, we detected 9.9 +4.1/−3.4 (+14.6/−8.2) geo-neutrino events, with errors corresponding to 68.3% (99.73%) C.L. From the ln L profile, the statistical significance of the Borexino geo-neutrino observation corresponds to a 99.997% C.L.

Fig. 6. Spectrum of the positron prompt events of the ν̄e candidates and the best fit including geo- and reactor-ν̄e (solid thick line). The darker area isolates the contribution of the geo-ν̄e to the total signal and the red (dotted) line shows this contribution alone.
Our measurement of the geo-neutrino rate is 3.9 +1.6/−1.3 (+5.8/−3.2) events/(100 ton·yr). This measurement rejects the hypothesis of an active geo-reactor in the Earth's core with a power above 3 TW at 95% C.L. The spectrum of the positron prompt events is shown in Fig. 6, together with the best fit including the geo- and reactor-ν̄e components. The observed spectrum above 2.6 MeV is compatible with that expected from European nuclear reactors (mean baseline of approximately 1000 km). Our measurement of reactor anti-neutrinos excludes the non-oscillation hypothesis at 99.60% C.L. More details about this measurement can be found in Ref. 29.
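As a quick arithmetic check (added here for illustration), the quoted rate and its uncertainties follow directly from scaling the detected event counts by the fiducial exposure:

# Geo-neutrino rate from the numbers quoted in the text.
exposure_100ton_yr = 252.6 / 100.0          # 252.6 ton-yr expressed in units of 100 ton-yr
events = 9.9                                 # detected geo-neutrino events
err_68 = (4.1, -3.4)                         # 68.3% C.L. uncertainties
err_99 = (14.6, -8.2)                        # 99.73% C.L. uncertainties

rate = events / exposure_100ton_yr
print(f"rate = {rate:.1f} events/(100 ton yr)")              # -> 3.9
print([round(e / exposure_100ton_yr, 1) for e in err_68])    # -> [1.6, -1.3]
print([round(e / exposure_100ton_yr, 1) for e in err_99])    # -> [5.8, -3.2]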
11. Supernova Neutrino Detection

Calculations suggest that, in the case of a "typical" galactic supernova (at 10 kpc, releasing 3 × 10^53 erg of binding energy), about 150 events above 200 keV would occur in the inner vessel of the Borexino detector within a few tens of seconds. The event rates for supernova neutrino interactions are expected to be 1 to 3 orders of magnitude larger than the uniform background and, therefore, the Borexino detector is well suited for the early detection of a galactic supernova. The Borexino detector meets the requirements of the SuperNova Early Warning System (SNEWS), has been incorporated into the network since June 2009, and is now ready to detect supernova events. The SNEWS network is a collaboration between multiple large-volume neutrino detectors (LVD, Super-Kamiokande, SNO(+) and AMANDA/IceCube) that exploits the time correlation between possible supernova neutrino signals in the different detectors to offer the astronomical community a reliable alert in the case of a galactic supernova.30
12. Future Perspectives

Borexino has completed its first scientific phase, providing real-time measurements of the 7Be and low-threshold 8B signal rates. A preliminary result on the day-night asymmetry has also been obtained. Calibration campaigns were performed in 2008 and 2009 and the analysis of these data is in progress. The results will strongly contribute to reducing the errors on the fiducial-volume determination and on the detector response function, thus opening the possibility of a 7Be signal-rate measurement with few-percent accuracy and of a reduction of the systematic errors of the 8B measurement. Great efforts toward the measurement of the CNO, pep and possibly pp signal rates and the detection of the annual modulation of the solar neutrino signals are in progress, together with further analysis work on the day-night asymmetry. The possibility of purifying the scintillator to reduce the background due to 85Kr and 210Bi is under consideration.
References
1. R. Davis, Nobel prize lecture, 2002; B.T. Cleveland et al., Astrophys. J. 496, 505 (1998).
2. K. Lande and P. Wildenhain, Nucl. Phys. B (Proc. Suppl.) 118, 49 (2003).
3. W. Hampel et al. (GALLEX coll.), Phys. Lett. B 447, 127 (1999).
4. J.N. Abdurashitov et al. (SAGE coll.), Phys. Rev. Lett. 83, 4686 (1999).
5. M. Altmann et al. (GNO coll.), Phys. Lett. B 616, 174 (2005).
6. J. Hosaka et al. (Super-Kamiokande coll.), Phys. Rev. D 73, 112001 (2006).
7. B. Aharmim et al. (SNO coll.), Phys. Rev. C 75, 045502 (2007).
8. S. Abe et al. (KamLAND coll.), Phys. Rev. Lett. 100, 221803 (2008).
9. J.N. Bahcall et al., Phys. Lett. B 433, 1 (1998).
10. S.P. Mikheev and A.Yu. Smirnov, Sov. J. Nucl. Phys. 42, 913 (1985); L. Wolfenstein, Phys. Rev. D 17, 2369 (1978); P.C. de Holanda and A.Yu. Smirnov, JCAP 302, 001 (2003).
11. A. Suzuki, talk at the Neutrino Telescopes 2009 conference.
12. V. Barger et al., Phys. Rev. Lett. 95, 211802 (2005); A. Friedland et al., Phys. Lett. B 594, 347 (2004).
13. J.N. Bahcall, A.M. Serenelli, S. Basu, Astrophys. J. Suppl. 165, 400 (2006).
14. N. Grevesse, A.G. Sauval, Space Sci. Rev. 85, 161 (1998); M. Asplund, N. Grevesse, A.G. Sauval, Nucl. Phys. A 777, 1 (2006); C. Peña-Garay, A.M. Serenelli, arXiv:0811.2424 [astro-ph].
15. M.C. Gonzalez-Garcia, M. Maltoni, Phys. Reports 460, 1 (2008).
16. G. Alimonti et al. (Borexino coll.), Astropart. Phys. 16, 205 (2002).
17. G. Alimonti et al. (Borexino coll.), Nucl. Instr. Meth. A 600, 568 (2009).
18. G. Alimonti et al. (Borexino coll.), Nucl. Instr. Meth. A 609, 58 (2009).
19. T. Hagner et al., Astropart. Phys. 14, 33 (2000).
20. H. Back et al. (Borexino coll.), Phys. Rev. C 74, 045805 (2006).
21. C. Arpesella et al. (Borexino coll.), Phys. Lett. B 568, 101 (2008).
22. C. Arpesella et al. (Borexino coll.), Phys. Rev. Lett. 101, 091302 (2008).
23. W.M. Yao et al., J. Phys. G 33, 306 (2006).
24. G. Alimonti et al. (Borexino coll.), Nucl. Instr. Meth. A 406, 411 (1998).
25. G. Bellini et al. (Borexino coll.), arXiv:0808.2868v2 (2010).
26. O. Smirnov, O. Zaimidoroga and A. Derbin, Phys. At. Nucl. 66, No. 4, 702 (2003); A. Derbin, O. Smirnov and O. Zaimidoroga, Phys. At. Nucl. 67, No. 11, 2066 (2004).
27. G. Testera et al. (Borexino coll.), talk at the Neutrino Telescopes 2009 conference.
28. A.G. Beda et al., arXiv:0906.1926v1 (2009).
29. G. Bellini et al. (Borexino coll.), arXiv:1003.0284v2 (2010).
30. P. Antonioli et al., New J. Phys. 6, 114 (2004).
NEUTRINO ASTROPHYSICS AND GALACTIC COSMIC RAY ANISOTROPY IN IceCube

P. DESIATI∗ for the IceCube Collaboration†
IceCube Research Center, University of Wisconsin, Madison, WI 53703, U.S.A.
∗ E-mail: [email protected]
† http://icecube.wisc.edu

The IceCube Observatory is a kilometer-cube neutrino telescope under construction at the South Pole, planned to be completed in early 2011. When completed it will consist of 4,800 Digital Optical Modules (DOMs), which detect the Cherenkov radiation from charged particles produced in neutrino interactions and in cosmic-ray-initiated atmospheric showers. IceCube construction is currently 90% complete. A selection of the most recent scientific results is shown here. The measurement of the anisotropy in the arrival direction of galactic cosmic rays is also presented and discussed.

Keywords: Neutrinos, cosmic rays, anisotropy
1. Introduction

The IceCube Observatory is a km³ neutrino telescope designed to detect astrophysical neutrinos with energies above 100 GeV. IceCube observes the Cherenkov radiation from charged particles produced in neutrino interactions.

The quest to understand the mechanisms that shape the high-energy Universe is taking many paths. Gamma-ray astronomy is providing a series of prolific experimental observations, such as the detection of TeV γ rays from point-like and extended sources, along with their correlation to observations at other wavelengths. These observations hold clues about the origin of cosmic rays and the possible connection to shock acceleration in Supernova Remnants (SNR), Active Galactic Nuclei (AGN) or Gamma Ray Bursts (GRB). Supernovae are believed to be the sources of galactic cosmic rays; nevertheless, the γ-ray observations of SNRs still do not provide definite and direct evidence of proton acceleration. The competing inverse Compton scattering of directly accelerated electrons may contribute significantly to the observed γ-ray fluxes, provided that the magnetic field in the acceleration region does not exceed 10 µG.1

Ultra High Energy Cosmic Ray (UHECR) astronomy has the potential to provide the key breakthrough in astroparticle physics. The identification of sources of cosmic rays would provide a unique opportunity to probe the hadronic acceleration models currently hypothesized.
On the other hand, cosmic-ray astronomy is only possible at energies in excess of 10^19 eV, where the cosmic rays are believed to be extragalactic and to point back to their sources. TeV γ rays from those sources are likely absorbed during their propagation between the source and the observer: at ∼10 TeV, γ rays have a propagation length of about 100 Mpc, while at ∼100 GeV they can propagate much deeper through the Universe. If the extragalactic sources of UHECR are the same as the sources of γ rays, then hadronic acceleration is the underlying mechanism and high-energy neutrinos are produced by charged-pion decays as well. Neutrinos would provide unambiguous evidence for hadronic acceleration in both galactic and extragalactic sources, and they are the ideal cosmic messengers, since they propagate through the Universe undeflected and with practically no absorption. But the same property that makes neutrinos ideal messengers also makes them difficult to detect.

The discovery of an anisotropy in the arrival direction of galactic cosmic rays has also attracted particular attention recently. The origin of the galactic cosmic-ray anisotropy is still unknown. The structure of the local interstellar magnetic field within 1 pc is likely to play an important role in shaping the large-angular-scale features of the observed anisotropy. Nevertheless, it can be argued that the anisotropy might originate from a combination of astrophysical phenomena, such as the distribution of nearby recent supernova explosions.3 The observation of the galactic cosmic-ray anisotropy at different energy and angular scales has, therefore, the potential to reveal the connection between cosmic rays and shock acceleration in supernovae.

At the same time, there seems to be clear observational evidence for the existence of dark matter in the Universe, even if its nature remains unknown. A variety of models predict the existence of a class of non-relativistic particles called Weakly Interacting Massive Particles (WIMPs). These particles could be gravitationally condensed within dense regions of matter (such as the Sun or the galactic halo) and could provide a visible source for indirect detection via the neutrinos generated in their annihilation. Neutrino telescopes are powerful tools to indirectly test the spin-dependent WIMP-nucleon scattering cross section, provided the models for the matter distribution and the WIMP annihilation rate are taken into account.

In §2 the IceCube Observatory apparatus, its functionality and its calibration are described. Selected physics analysis results are summarized in §3: the determination of the atmospheric muon-neutrino energy spectrum (§3.1), the search for astrophysical neutrinos from diffuse and point sources and from Gamma Ray Bursts (§3.2), the indirect search for dark matter (§3.3), and the anisotropy in the cosmic-ray arrival direction (§3.4).

2. The IceCube Observatory

The IceCube Observatory (see Fig. 1) currently consists of 4,740 DOMs deployed on 79 vertical strings (60 DOMs per string) between 1,450 m and 2,450 m depth below the geographic South Pole. At the beginning of 2011 IceCube will be completed with 86 strings and 5,160 DOMs.
The surface array IceTop, with 81 stations, each consisting of two tanks of frozen clean water instrumented with two DOMs each, will provide the measurement of the spectrum and mass composition of cosmic rays at the knee and up to about 10^18 eV. The Deep Core sub-array, consisting of 6 densely instrumented strings located at the bottom-center of IceCube, is capable of pushing the neutrino energy threshold down to about 10 GeV. The surrounding IceCube instrumented volume can be used to veto the background of cosmic-ray-induced through-going muon bundles and thus enhance the detection of down-going neutrinos within the Deep Core volume. The veto rejection power can reach 10^5.
Fig. 1. The IceCube Observatory. Currently it consists of 79 strings and 4,740 DOMs. In 2011 it will be completed with 86 strings and 5,160 DOMs. The shaded region near the center of the array is the Deep Core dense sub-array, and the one on the right is the AMANDA neutrino telescope, decommissioned in May 2009.
AMANDA was the first and largest neutrino telescope before the construction of IceCube. Using analog technology, it contributed significantly to the advance of neutrino astrophysics searches; it was decommissioned in May 2009.
The basic detection component of IceCube is the DOM: it hosts a 10-inch Hamamatsu photomultiplier tube (PMT) and its own data-acquisition circuitry enclosed in a pressure-resistant glass sphere, making it an autonomous data-collection unit. The DOMs detect, digitize and timestamp the signals from the optical Cherenkov photons. Their main-board data acquisition (DAQ) is connected to the central DAQ in the IceCube Laboratory at the surface, where the global trigger is determined.7

Detector calibration is one of the major efforts aimed at characterizing the detector response and reducing systematic uncertainties at the physics-analysis level. Each PMT is tested in order to characterize its response and to measure the voltage yielding a specific gain.8 In the operating neutrino telescope the gain is about 10^7 and the corresponding dark-noise rate is about 500 Hz. Time calibration is maintained throughout the array by the regular transmission to the DOMs of precisely timed analog signals, synchronized to a central GPS-disciplined clock. This procedure has a resolution of better than 2 ns. The LEDs on the flasher boards mounted in the DOMs are used to measure the photo-electron (p.e.) transit time in the PMT through the reception of large light pulses between neighboring DOMs. This delay time is given by the light travel time from the emitter to the receiver, by light scattering in the ice and by the electronics signal processing. The RMS of this delay is also less than 2 ns. The waveform sampling amplitude and time-binning calibration is periodically performed in each DOM and used to extract the number of detected p.e. with an uncertainty of less than 10%.

Higher-level calibrations are meant to correlate the number of detected p.e. with the energy of the physics events that trigger the detector. Instrumented devices, such as the flasher boards, are used to illuminate the detector with 400 nm wavelength photons (corresponding to the wavelength with the highest detection sensitivity), simulating a real electron-neutrino interaction, or cascade, inside the detector. A complete Monte Carlo simulation chain is used to relate the known number of injected photons to the energy scale of the artificial cascade. The energy resolution depends on the event topology (track-like versus cascade-like) and on its containment inside the instrumented volume. Monte Carlo simulations provide the fluctuations implied by the topology and containment of the physics events. The ice optical properties are the most fundamental calibration input, since they determine how photons propagate through the ice and, therefore, how the number of detected p.e. relates to the energy of the physics events. Owing to the antarctic glaciological history, the optical properties depend on depth; they have been measured in the past using AMANDA in-situ calibration lasers9 in the depth range between 1,400 m and 2,000 m. The optical properties down to 2,450 m, the depth of the IceCube instrumentation, are extrapolated from ice-core observations at other locations of the antarctic continent, and a new campaign of extended in-situ measurements is currently being carried out.
3. Physics Results

If the DOMs that detect Cherenkov photons satisfy specific trigger conditions, an event is formed and recorded by the surface DAQ. An on-line data filter at the South Pole reduces the event volume to about 10% of the trigger rate, based on a series of filter algorithms that select events according to directionality, topology and energy. The filter allows the data to be transferred via satellite to the northern hemisphere for prompt physics analyses.

3.1. Atmospheric neutrinos

About 99.99999% of the events that trigger IceCube are muon bundles produced by the impact of primary cosmic rays on the atmosphere. Only a small fraction of the detected events (∼10^−5) are muon events produced by atmospheric neutrinos. In order to reject the down-going muon-bundle background, only up-going events are generally selected, with event-selection criteria that ensure well-reconstructed events. In the 40-string IceCube configuration (IceCube-40), about 30-40% of the up-going events survive the selection, with a background contamination of less than 1%.10
Fig. 2. The unfolded spectrum of atmospheric muon neutrinos (νµ + ν̄µ) from the IceCube-40 detector,10 compared with previous measurements: the Fréjus result,11 the upper and lower bands from Super-Kamiokande,12 the forward-folding result from AMANDA13 and the unfolded spectrum from AMANDA.14 The AMANDA unfolding analysis was a measurement of the zenith-averaged flux from 100° to 180°, and the present analysis (IC40 unfolding) is a measurement of the zenith-averaged flux from 97° to 180°.
The energy resolution for these atmospheric muon-neutrino-induced events is of the order of 0.3 in the logarithm of the neutrino energy, and a regularized unfolding technique is used to determine the energy spectrum.
Fig. 2 shows the unfolded energy spectrum of the 17,682 atmospheric neutrinos detected by IceCube-40 between zenith angles of 97° and 180°. The figure also shows measurements performed by other experiments, including AMANDA.13,14 IceCube has detected the highest-energy atmospheric neutrinos observed so far (up to about 250 TeV, where a significant fraction of the neutrinos is expected to arise from the decay of heavy mesons containing charm quarks). IceCube has allowed the globally measured spectrum to be extended over about 6 orders of magnitude in energy. For the first time the precision of this measurement provides a powerful tool to test the high-energy hadronic interaction models that govern our present knowledge of cosmic-ray-induced extensive air showers.
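To make the phrase "regularized unfolding" concrete, the following self-contained sketch unfolds a toy spectrum that has been smeared by a Gaussian response in log-energy, using simple Tikhonov (ridge) regularization; the binning, response width and regularization strength are illustrative choices and are unrelated to the actual IceCube analysis.

import numpy as np

# Toy binning in log10(E/GeV); the "true" spectrum is a simple bump.
log_e = np.linspace(2.0, 5.0, 16)
true = np.exp(-0.5 * ((log_e - 3.5) / 0.5) ** 2)

# Response matrix: Gaussian smearing with sigma = 0.3 in log10(E),
# mimicking the quoted energy resolution of ~0.3 in log(E).
sigma = 0.3
R = np.exp(-0.5 * ((log_e[:, None] - log_e[None, :]) / sigma) ** 2)
R /= R.sum(axis=0, keepdims=True)       # each true bin is redistributed over measured bins

measured = R @ true                      # forward-folded (smeared) spectrum

# Tikhonov-regularized unfolding: minimize ||R x - y||^2 + tau ||x||^2.
tau = 1e-3                               # regularization strength (illustrative)
unfolded = np.linalg.solve(R.T @ R + tau * np.eye(len(log_e)), R.T @ measured)

print(np.max(np.abs(measured - true)))   # sizeable: the smearing distorts the spectrum
print(np.max(np.abs(unfolded - true)))   # much smaller: the unfolding undoes it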
3.2. Search for astrophysical neutrinos

Atmospheric neutrinos represent an irreducible background for the search for high-energy astrophysical neutrinos. If hadronic acceleration underlies the observed high-energy cosmic rays and γ rays, we expect that unresolved sources of cosmic rays over cosmological times have also produced enough neutrinos to be detected as a diffuse flux. Since shock acceleration is expected to produce an ∼E^−2 energy spectrum, harder than the ∼E^−3.7 spectrum of the atmospheric neutrinos, the diffuse flux is expected to dominate at high energy.
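The statement that a harder E^−2 component must eventually dominate over the E^−3.7 atmospheric spectrum can be made concrete with a one-line crossover-energy calculation; the two normalizations below are placeholders chosen only for illustration and are not measured values.

# Crossover energy between a soft atmospheric flux and a hard astrophysical flux:
#   phi_atm(E)   = A_atm   * E**(-3.7)
#   phi_astro(E) = A_astro * E**(-2.0)
# They are equal when E_cross = (A_atm / A_astro) ** (1 / 1.7).
A_atm, A_astro = 1.0, 1.0e-8          # hypothetical normalizations (arbitrary units, E in GeV)

E_cross = (A_atm / A_astro) ** (1.0 / (3.7 - 2.0))
print(f"crossover at E ~ {E_cross:.1e} GeV")   # ~5e4 GeV for these placeholder values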
Fig. 3. Current status of the search for a diffuse astrophysical νµ + ν̄µ flux. The atmospheric neutrino background as measured by IceCube-40 and AMANDA is shown, along with the predictions up to the highest possible energies.15–18 The limit on an E^−2 flux from AMANDA19 and the preliminary sensitivity of IceCube-40 are shown. The IceCube-40 sensitivity is predicted to lie below the Waxman & Bahcall diffuse neutrino bound.20 The graph also shows other models of astrophysical muon neutrinos21–25 (an E^−2 spectrum is a horizontal line in this graph).
Fig. 3 shows the AMANDA experimental upper limit and the preliminary IceCube-40 sensitivity for an E^−2 diffuse muon-neutrino spectrum. One year of IceCube-40 is about 5 times more sensitive than 3 years of AMANDA, and its sensitivity lies below the Waxman-Bahcall neutrino bound. This means that IceCube is potentially approaching the discovery of the origin of cosmic rays. In the ultra-high-energy range (i.e. above ∼10^6 GeV, or UHE) IceCube is placing upper limits that are still more than about one order of magnitude above the predicted flux of neutrinos from UHECR interactions with the microwave background photons (the GZK neutrinos26). The complete IceCube Observatory might be able to reach the discovery level within the next 5-8 years.

If the observed γ rays from galactic and extragalactic point and extended sources come from neutral-pion decays at hadronic acceleration sites, or from cosmic-ray interactions with molecular clouds, the accompanying charged pions could produce enough neutrinos to be detected by a km³ neutrino telescope.
Fig. 4. On the left: sensitivity (90% CL) for a full-sky search for steady point sources of muon neutrinos with an E^−2 energy spectrum, as a function of declination angle (δ < 0° for down-going events, i.e. the southern hemisphere, and δ > 0° for up-going events, i.e. the northern hemisphere). The IceCube-22 sensitivity for the full-sky search27,28 is about the same as the IceCube-40 discovery potential (5σ discovery in 50% of the trials), and about a factor of three larger than the IceCube-40 sensitivity (90% CL). The squares show the IceCube-40 90% upper limits for selected sources, and the black line represents the projected IceCube-86 sensitivity. On the right: preliminary sky map in equatorial coordinates of the statistical significance (p-value) from the search for steady point sources of high-energy muon neutrinos in IceCube-40. The observation is extended to the southern hemisphere by reducing the muon-bundle background by five orders of magnitude with a zenith-dependent energy selection (>100s of TeV).
Fig. 4 shows, on the left, the sensitivity (90% CL) of IceCube for the full-sky search for steady point sources of E^−2 muon neutrinos as a function of declination. The extension of the point-source search to the southern hemisphere is made possible by rejecting five orders of magnitude of background events with a high-energy event selection. The southern hemisphere nevertheless remains dominated by high-energy muon bundles, and the high-energy selection yields a poorer neutrino detection sensitivity there. This nonetheless gives IceCube full-sky coverage and provides a complement to the coverage of the neutrino telescopes in the Mediterranean.
On the right of Fig. 4 is the sky map of the statistical significance from the full-sky search of IceCube-40. No significant localized excess is observed. The sensitivity is to be interpreted as the average upper limit we expect to obtain for individual sources across the sky. If we test specific sources (see the left panel of Fig. 5) we see that the full IceCube (about twice as sensitive as IceCube-40) will be able to discover neutrinos from individual point sources in about 3-5 years, depending on the location in the sky.
Fig. 5. On the left: sensitivity (90% CL) and discovery potential (5σ discovery in 50% of the trials) in IceCube-40 for the Crab,29 MGRO J1852+0130 and 3C 279.31 On the right: upper limits (90% CL) for the searches for neutrinos in coincidence with Gamma Ray Bursts in AMANDA,32 IceCube-2233 and IceCube-40.
The search for neutrinos from transient galactic and extragalactic sources is also being pursued. In particular, the right panel of Fig. 5 shows the upper limits (90% CL) for the model-dependent search for prompt neutrinos from GRBs in the northern hemisphere with AMANDA,32 IceCube-2233 and IceCube-40. For each detector configuration, the list of GRBs detected during the corresponding physics runs was collected and the predicted neutrino flux was calculated from the γ-ray spectra.34 The corresponding average neutrino spectrum was used to search for neutrinos detected within the so-called T90 time window (within which 90% of the γ signal is recorded). The right panel of Fig. 5 also shows the Waxman-Bahcall (WB) predicted average spectrum from GRBs35 and the average GRB spectrum corresponding to the 2008-2009 time period of the IceCube-40 physics runs. The preliminary IceCube-40 upper limit is below the WB spectrum, which indicates that IceCube is becoming very sensitive and could potentially discover neutrinos in coincidence with GRBs within the next few years.

3.3. Search for dark matter

Non-baryonic cold dark matter in the form of weakly interacting massive particles (WIMPs) is one of the most promising solutions to the dark matter problem.36
The minimal supersymmetric extension of the Standard Model (MSSM) provides a natural WIMP candidate in the lightest neutralino χ̃₁⁰.37 This particle interacts only weakly and, assuming R-parity conservation, is stable and can therefore survive today as a relic from the Big Bang. A wide range of neutralino masses, from 46 GeV38 to a few tens of TeV,39 is compatible with observations and accelerator-based measurements. Within these bounds it is possible to construct models in which the neutralino provides the needed relic dark matter density.
Fig. 6. On the left: upper limits (90% CL) on the muon flux from neutralino annihilations in the Sun for the soft (bb̄) and hard (W+W−) annihilation channels, adjusted for systematic effects, as a function of the neutralino mass. A muon energy threshold of 1 GeV was used when calculating the flux. Also shown are the limits from MACRO,42 Super-K,43 AMANDA44 and IceCube-22,45 merged with AMANDA at the low-energy end. On the right: upper limits (90% CL) on the spin-dependent neutralino-proton cross-section σSD for the soft (bb̄) and hard (W+W−) annihilation channels, adjusted for systematic effects, as a function of the neutralino mass. Also shown are the limits from CDMS,40 COUPP,46 KIMS47 and Super-K.43 The shaded area represents MSSM models not disfavored by direct searches40,41 based on σSI.
Relic neutralinos in the galactic halo may be gravitationally attracted by the Sun and accumulate in its center, where they can annihilate each other and produce standard model particles, such as neutrinos. This provides an indirect detection channel for this type of dark matter, provided the WIMP density and velocity distribution and the neutralino annihilation rate models are taken into account. The left panel of Fig. 6 shows the upper limits (90% CL) on the muon flux for IceCube-2245 merged to the AMANDA upper limit at the low energy end,44 along with other indirect observations. The limits on the annihilation rate can be converted into limits on the spin-dependent σSD and spin-independent σSI neutralino-proton cross-sections (as shown on the right panel of Fig. 6). This conversion allows a comparison with the direct search experiments. Since capture in the Sun is dominated by σSD, indirect searches are expected to be competitive in setting limits on this quantity. Assuming equilibrium between the capture and annihilation rates in the Sun, the annihilation rate is directly proportional to the cross-section. Fig. 6 also shows the predicted sensitivity (90% CL) of the combined IceCube-86 and Deep Core dense instrumentation, which allows us to significantly lower the energy threshold and consequently increase the sensitivity to low neutralino masses. In indirect
searches WIMPs would accumulate in the Sun over a long period and therefore sample different dark matter densities in the galactic halo. This progressive gravitational accumulation is sensitive to low WIMP velocities, while direct detection recoil experiments are more sensitive to higher velocities, making indirect searches a good complement to the direct ones.
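As an aside, the equilibrium argument invoked above is the standard one used in solar-capture analyses (a textbook relation, not specific to the IceCube analysis): if C⊙ denotes the solar capture rate and CA the annihilation strength, the number N of WIMPs accumulated in the Sun obeys

dN/dt = C⊙ − CA N², so that ΓA = (1/2) CA N² → (1/2) C⊙ for t ≫ (C⊙ CA)^(−1/2).

In equilibrium the annihilation rate, and hence the neutrino-induced muon rate, simply tracks the capture rate, which is controlled by the scattering cross-section, dominantly σSD in the Sun.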
Fig. 7. On the left: relative expected neutrino flux from dark matter self-annihilations in the Milky Way halo on the northern celestial hemisphere (in equatorial coordinates). The largest flux is expected at the right ascension α closest to the Galactic Center (∆α = 0). Dashed lines indicate circles around the Galactic center, while the solid lines show the definition of the on- and off-source regions on the northern hemisphere. The on-source region is centered around ∆α = 0, while the off-source region is rotated by 180◦ in α. On the right: preliminary sensitivity (90% CL) to the self-annihilation cross-section ⟨σA v⟩ as a function of the WIMP mass mχ, for IceCube-2252 and IceCube-40 and for different annihilation channels. The bands on the IceCube-22 sensitivity curves account for different halo density profiles. The dashed line at the bottom labeled 'natural scale' is for dark matter candidates consistent with being a thermal relic. The black dashed line is the unitarity bound.
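The relative flux map described in this caption is governed by the line-of-sight integral of the squared halo density discussed in the text below. A minimal numerical sketch of that integral, assuming an NFW profile with placeholder parameters rather than those adopted in the IceCube analysis:

import numpy as np

# Rough sketch of the line-of-sight integral of rho^2 that controls the expected
# self-annihilation neutrino flux; all parameter values are illustrative placeholders.
R_SUN = 8.5             # kpc, assumed Sun-Galactic Center distance
RHO_S, R_S = 0.3, 20.0  # illustrative scale density (GeV/cm^3) and scale radius (kpc)

def rho_nfw(r):
    """NFW profile rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def los_rho2(psi_deg, l_max=100.0, n=4000):
    """Integral of rho^2 along the line of sight at angle psi from the Galactic Center."""
    psi = np.radians(psi_deg)
    l = np.linspace(1e-3, l_max, n)                               # kpc along the line of sight
    r = np.sqrt(R_SUN**2 + l**2 - 2.0 * R_SUN * l * np.cos(psi))  # galactocentric radius
    return np.sum(rho_nfw(r) ** 2) * (l[1] - l[0])                # simple Riemann sum

ref = los_rho2(180.0)   # anti-center direction as a reference
for psi in (1.0, 10.0, 30.0, 90.0, 180.0):
    print(f"psi = {psi:5.1f} deg from the GC : flux weight relative to anti-center = {los_rho2(psi)/ref:8.1f}")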
Neutrino telescopes can also test the dark matter self-annihilation cross section ⟨σA v⟩ (averaged over the dark matter velocity distribution), making them complementary to γ-ray measurements. If the lepton excess observed by Fermi,48 H.E.S.S.49 and PAMELA50 is interpreted as a dark matter self-annihilation signal in the galactic dark matter halo,51 leptophilic dark matter in the TeV mass range provides the most compatible model. Since the dark matter halo column density is larger toward the direction of the Galactic Center (GC), neutrinos from WIMP self-annihilation in the halo are expected to have a large angular scale anisotropy with an excess in the direction of the GC (see left panel of Fig. 7). The dark matter density distribution in the Milky Way has different shapes depending on the model.52 The expected neutrino flux from dark matter self-annihilation is proportional to the square of the dark matter density integrated along the line of sight for a given angular distance from the Galactic center. The differential neutrino flux for
a WIMP of mass mχ depends on the halo density profile, the neutrino production multiplicity (estimated with the DarkSUSY model53) and on the self-annihilation cross section ⟨σA v⟩.54 The search for an excess of neutrinos in the direction of the GC for different annihilation channels therefore allows us to probe the allowed range of ⟨σA v⟩ for the corresponding channels (see right panel of Fig. 7), and provides a direct comparison to similar results from γ-ray experiments. IceCube's reach can be significantly improved by looking at the GC (visible from the southern hemisphere), which will be possible beginning with the IceCube 40-string dataset, by using neutrinos interacting inside the detector volume. While such an analysis will be able to set significantly better constraints, a large scale anisotropy would provide a more distinct discovery signal.
3.4. Cosmic ray anisotropy
Galactic cosmic rays are found to have an energy dependent large angular scale anisotropy in their arrival direction distribution, with an amplitude of about 10⁻⁴-10⁻³. The first comprehensive observation of such an anisotropy was provided by a network of muon telescopes sensitive to sub-TeV cosmic ray energies and located at different latitudes.55,56 More recently, an anisotropy was also observed in the multi-TeV range by the Tibet ASγ array,57 ARGO YBJ,58 Super-K,59 and by MILAGRO.60 The first observation in the southern hemisphere has been reported by IceCube61 for a median cosmic ray energy of about 20 TeV.
Fig. 8. On the left: sky-map of the relative intensity in arrival direction of cosmic rays for IceCube-22 (on the top),61 and preliminary sky-map of the relative intensity for IceCube-40 (on the bottom), in equatorial coordinates. For a better visual effect, a 3◦ smoothing has been applied to the maps. On the right: preliminary modulation of the relative intensity in arrival direction of cosmic rays for IceCube-40, projected in right ascension (black symbols), in right ascension with respect to the Sun's location (green symbols), and in pseudo-right ascension, i.e. corresponding to anti-sidereal time (red symbols).
The left panel of Fig. 8 shows the relative intensity in arrival direction of the cosmic rays, obtained by normalizing each ∼3◦ declination band independently. On the top is the relative intensity map obtained from the 4.6 billion events collected by IceCube-2261 and on the bottom the preliminary map obtained from the 12
billion events collected by IceCube-40. The two maps show the same anisotropy features, and they both appear to be a continuation of the modulation observed in the northern hemisphere. The right panel of Fig. 8 shows the preliminary modulation of the relative intensity in arrival direction of the cosmic rays projected into right ascension for IceCube-40 (black symbols). In order to verify whether the observed sidereal anisotropy (i.e. in equatorial coordinates) has, over one full year, some spurious modulation arising from interference between possible yearly-modulated daily variations, the same analysis was performed using the anti-sidereal time frame (a non-physical time defined by switching the sign of the transformation from universal to sidereal time).62 A real feature in sidereal time is expected to be scrambled in anti-sidereal time. The anti-sidereal modulation (shown on the right of Fig. 8 in red symbols) appears to be relatively flat, with an amplitude of the same order of magnitude as the statistical errors, suggesting that no significant spurious effect is present. If the relative intensity is measured, over one full year, as a function of the angular distance from the Sun in right ascension, we expect an excess in the direction of motion of the Earth around the Sun (at ∼ 270◦) and a minimum in the opposite direction. This is what is observed (see the green symbols on the right of Fig. 8).
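A schematic version of this sidereal/anti-sidereal consistency check is sketched below; the event sample, the exaggerated 1% dipole and the simple first-harmonic estimator are illustrative assumptions, not the IceCube analysis chain.

import numpy as np

# Events carrying a genuine dipole in sidereal right ascension are projected at the
# sidereal, solar and anti-sidereal frequencies; only the sidereal projection should
# recover the modulation, while the anti-sidereal one acts as a null test.
F_SOLAR = 1.0
F_SID = 1.0 + 1.0 / 365.2422          # sidereal cycles per mean solar day
F_ANTI = 2.0 * F_SOLAR - F_SID        # anti-sidereal (non-physical) mirror frequency

rng = np.random.default_rng(1)
n_ev = 5_000_000
t = rng.uniform(0.0, 365.0, n_ev)     # event times in solar days over one year

# Sidereal right ascension as a 0..1 phase with a 1% dipole, via accept-reject sampling.
ra = rng.uniform(0.0, 1.0, n_ev)
keep = rng.uniform(0.0, 1.0, n_ev) < 0.5 * (1.0 + 0.01 * np.cos(2.0 * np.pi * ra))
t, ra = t[keep], ra[keep]

local = (ra - F_SID * t) % 1.0        # local detector phase of each event
for name, f in (("sidereal", F_SID), ("solar", F_SOLAR), ("anti-sidereal", F_ANTI)):
    phase = (local + f * t) % 1.0
    amp = 2.0 * np.abs(np.exp(2j * np.pi * phase).mean())   # first-harmonic amplitude
    print(f"{name:14s} first-harmonic amplitude ~ {amp:.1e}")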
Fig. 9. Preliminary statistical significance sky-map of the medium scale anisotropy in arrival direction of the cosmic rays for IceCube-40, combined with the MILAGRO northern hemisphere sky-map.63 In this map only anisotropy scales that are smaller than ∼30◦ are visible. Note the different color scales for the statistical significance in the two hemispheres.
With the same techniques used in γ ray detection, it is possible to estimate the event intensity sky-map (the background) with a pre-defined angular scale averaging, and determine the residual by subtracting the background from the actual map. With this method MILAGRO, in an attempt to estimate the background without γ-hadron separation, discovered two significant localized regions of cosmic rays,63
also observed by ARGO YBJ.64 The same medium-scale anisotropy measurement was performed with IceCube for the first time in the southern hemisphere, and the combined MILAGRO-IceCube-40 significance sky-map is shown in Fig. 9, where only the anisotropy features with angular scale smaller than ∼30◦ are visible. The different event statistics of MILAGRO (220 billion events with median energy of ∼ 1 TeV) and IceCube-40 (12 billion events with median energy of ∼ 20 TeV) do not allow a comparison of the two hemispheres on a statistical basis. Nevertheless, there seems to be some indication that the small scale features observed in the two hemispheres might be part of a larger scale structure. The origin of the galactic cosmic ray anisotropy is still unknown. There might be multiple superimposed causes, depending on the cosmic ray energy and the angular scale of the anisotropy. The large scale structure in the 10-100 TeV range might be a local fluctuation caused by nearby supernovae (within 1,000 pc) that exploded within the last 100,000 years or so.3 Alternatively, the structure of the local interstellar medium (ISM) magnetic field within 1 pc may well play the most important role. The strongest and most localized of the MILAGRO excess regions has triggered astrophysical interpretations, invoking the Geminga pulsar as a possible source.65,66 It could also be an effect of the strong anisotropy of the magnetohydrodynamic turbulence in the ISM, which can generate a superposition of the large scale anisotropy (perhaps generated by a nearby SNR) with a beam of cosmic rays focused along the local magnetic field direction, depending on the turbulence scale.67 However, the localized nature of the hottest MILAGRO excess region suggests a local origin, and its coincidence with the heliospheric tail might be caused by acceleration via magnetic reconnection. Reconnection in the heliospheric tail is driven by magnetic polarity reversals, due to the 11-year solar cycles, compressed by the solar wind in the magneto-tail. Acceleration in reconnection regions can be efficient up to about 10 TeV, which is the energy at which MILAGRO observes a cut-off for the localized regions.68 Up to this energy scale a directional localized excess might be observable.
Acknowledgments
We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Research Department of Plasmas with Complex Interactions (Bochum), Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); Marsden Fund, New Zealand; Japan Society for Promotion of Science (JSPS); the Swiss National Science
Foundation (SNSF), Switzerland; A. Kappes and A. Groß acknowledge support by the EU Marie Curie OIF Program; J. P. Rodrigues acknowledges support by the Capes Foundation, Ministry of Education of Brazil.
References
1. Abdo A.A., et al., Astrophys. J. 658, L33 (2007).
2. Abbasi R., et al., submitted to Astrophys. J. Letters, arXiv:1005.2960.
3. Erlykin A.D. & Wolfendale A.W., Astrop. Phys. 25, 183 (2006).
4. Bertone G., Hooper D. & Silk J., Phys. Rept. 405, 279 (2005), hep-ph/0404175.
5. Abbasi R., et al., Phys. Rev. Lett. 102, 201302 (2009).
6. Abbasi R., et al., Phys. Rev. D 81, 057101 (2010).
7. Abbasi R., et al., Nucl. Instrum. and Methods A 601, 294 (2009).
8. Abbasi R., et al., submitted to Nucl. Instrum. and Methods A, arXiv:1002.2442.
9. Andrés E., et al., J. Geophys. Res. 111, D13203 (2006).
10. Abbasi R., et al., to be submitted.
11. Daum K., et al., Zeitschrift für Physik C 66, 417 (1995).
12. Gonzalez-Garcia M.C., Maltoni M. & Rojo J., J. High Energy Phys. 10, 75 (2006).
13. Abbasi R., et al., Phys. Rev. D 79, 102005 (2009).
14. Abbasi R., et al., Astrop. Phys., in publication.
15. Gaisser T.K., et al., Phys. Rev. D 70, 023006 (2004).
16. Fiorentini G., Naumov V.A. & Villante F.L., Phys. Lett. B 510, 173 (2001).
17. Honda M., et al., Phys. Rev. D 75, 043006 (2007).
18. Enberg R., Reno M.H. & Sarcevic I., Phys. Rev. D 78, 043005 (2008).
19. Achterberg A., et al., Phys. Rev. D 76, 042008 (2007).
20. Waxman E. & Bahcall J.N., Phys. Rev. D 59, 023002 (1998).
21. Stecker F.W., Phys. Rev. D 72, 107301 (2005).
22. Becker J.K., Biermann P.L. & Rhode W., Astrop. Phys. 23, 355 (2005).
23. Becker J.K., et al., Astrop. Phys. 28, 98 (2007).
24. Razzaque S., Mészáros P. & Waxman E., Phys. Rev. D 68, 083001 (2003).
25. Mucke A., et al., Astropart. Phys. 18, 593 (2003).
26. Anchordoqui L.A., et al., Phys. Rev. D 76, 123008 (2007).
27. Abbasi R., et al., Astrophys. J. 701, L47 (2009).
28. Abbasi R., et al., Phys. Rev. Lett. 103, 221102 (2009).
29. Morlino, et al., arXiv:0903.4565.
30. Halzen F., et al., arXiv:0902.1176.
31. Reimer, et al., arXiv:0810.4864.
32. Achterberg A., et al., Astrophys. J. 674, 357 (2008).
33. Abbasi R., et al., Astrophys. J. 710, 346 (2010).
34. Guetta D., et al., Astropart. Phys. 20, 429 (2004).
35. Waxman E. & Bahcall J.N., Phys. Rev. Lett. 78, 2292 (1997), arXiv:astro-ph/9701231.
36. Rubin V. & Ford W.K., Astrophys. J. 159, 379 (1970).
37. Drees M. & Nojiri M.M., Phys. Rev. D 47, 376 (1993).
38. Amsler C., et al., Phys. Lett. B 667, 1 (2008).
39. Gilmore R.C., Phys. Rev. D 76, 043520 (2007).
40. Ahmed Z., et al., Phys. Rev. Lett. 102, 011301 (2009).
41. Angle J., et al., Phys. Rev. Lett. 100, 021303 (2008).
42. Ambrosio M., et al., Phys. Rev. D 60, 082002 (1999).
43. Desai S., et al., Phys. Rev. D 70, 083523 (2004).
44. Ackermann M., et al., Astropart. Phys. 24, 459 (2006).
45. Abbasi R., et al., Phys. Rev. Lett. 102, 201302 (2009).
46. Behnke E., et al., Science 319, 933 (2008).
47. Lee H.S., et al., Phys. Rev. Lett. 99, 091301 (2007).
48. Abdo A.A., et al., Phys. Rev. Lett. 102, 181101 (2009), arXiv:0905.0025.
49. Aharonian F., et al., arXiv:0905.0105.
50. Adriani O., et al., Nature 458, 607 (2009), arXiv:0810.4995.
51. Meade P., Papucci M., Strumia A. & Volansky T., arXiv:0905.0480.
52. Rott C., et al., proceedings of the CCAPP Symposium, OSU (Columbus, OH, 2009), arXiv:0912.5183.
53. Gondolo P., et al., JCAP 0407, 008 (2004), astro-ph/0406204.
54. Yuksel H., et al., Phys. Rev. D 76, 123506 (2007), arXiv:0707.0196.
55. Nagashima K., Fujimoto K. & Jacklyn R.M., J. Geophys. Res. 103, 17429 (1998).
56. Hall D.L., et al., J. Geophys. Res. 104, 6737 (1999).
57. Amenomori M., et al., Science 314, 439 (2006).
58. Zhang J.L., et al., proceedings of the 31st ICRC, Łódź (Poland, 2009).
59. Guillian G., et al., Phys. Rev. D 75, 062003 (2007).
60. Abdo A.A., et al., Astrophys. J. 698, 2121 (2009).
61. Abbasi R., et al., submitted to Astrophys. J. Letters, arXiv:1005.2960.
62. Farley F., et al., Proc. Phys. Soc. A 67, 996 (1954).
63. Abdo A.A., et al., Phys. Rev. Lett. 101, 221101 (2008).
64. Vernetto S., et al., proceedings of the 31st ICRC, Łódź (Poland, 2009).
65. Salvati M. & Sacco B., arXiv:0802.2181.
66. Drury L.O'C. & Aharonian F.A., Astropart. Phys. 29, 420 (2008), arXiv:0802.4403.
67. Malkov M.A., et al., arXiv:1005.1312.
68. Lazarian A. & Desiati P., submitted to Astrophys. J.
PART VI
Lepton-Flavour Violation, Superstrings, Magnetic Monopoles and Search For Exotics
A FRAMEWORK FOR DOMAIN-WALL BRANE MODEL BUILDING

R. R. VOLKAS∗

School of Physics, The University of Melbourne, Victoria 3010, Australia
∗E-mail: [email protected]

I discuss a framework for the construction of codimension-1 domain-wall brane models. Various dynamical mechanisms are invoked to localize 3+1-dimensional scalars, fermions, gauge bosons and gravitons. These pieces of the puzzle are then assembled into an explicit model that may reproduce the standard model as the effective 3+1-dimensional theory for the dynamically-localized fields. I discuss how the fermion mass hierarchy problem can be addressed in this model through the fact that the fermions and the scalars are automatically split along the extra dimension. Quark and lepton masses that are much smaller than the electroweak scale are obtained in a natural way due to small overlap integrals for their extra-dimensional profile functions.

Keywords: Brane; Domain wall; Extra dimensions; Dynamical localization.
1. Introduction
The purpose of this talk is to discuss whether or not a phenomenologically-realistic domain-wall brane model is possible. The main problem is how to make the low-energy effective theory present as 3 + 1 dimensional rather than 4 + 1 dimensional. This means we have to devise dynamical localization mechanisms for all the fields we want in our low-energy dimensionally-reduced effective description. If we are serious about describing the real world — and of course we should be! — then those localized fields must contain the standard model (SM) degrees of freedom (quarks, leptons, Higgs bosons, gluons, W-bosons, Z-bosons, photons) plus gravity. So, the task involves two steps. The first is to find generic ways to localize all these different kinds of fields. The second is to write down specific models that are phenomenologically realistic, meaning that the effective theory is the SM or some acceptable extension thereof (such as left-right symmetry). Step 1 has had good coverage in the literature. Step 2, for some strange reason, has not. The structure of this talk shall be as follows: In the next Section I shall discuss the place this class of brane-world model has in the overall picture of extra-dimensional theories. I shall then review various proposed dynamical localization mechanisms, field by field: fermions, scalars, gravitons and gauge bosons. This shall be followed by an attempt at step 2: writing down an explicit, realistic theory.
Armed with this theory, we shall see that it provides an explanation for one major mystery in particle physics: the fact that all quarks and leptons, apart from the top quark, have masses that are suppressed with respect to the weak scale.
2. Context
There are many different kinds of extra-dimensional theories, with many different motivations and features.1–5 One broad classification is into models employing branes and those not employing branes. I shall restrict my discussion to the former. Brane-world models may themselves be classified into three broad categories: string-theory constructions, field-theory models with fundamental branes, and field-theory models without fundamental branes. In string-theory constructions, one uses non-perturbative stringy objects such as D-branes, and the context is that of a top-down attempt to describe the observed universe in terms of a unified quantum theory of gravity and particle physics. Such theories are ultra-violet (UV) complete, and one of them may describe the universe we find ourselves in. It is not yet known if this is in fact the case. Field-theory models are less ambitious, but they are also more closely connected with the SM and experiment. Most field theory models employ fundamental branes. This means that the origin of the brane or branes is not specified within the field theory itself; branes are simply inserted by hand into the action. In most models, the action is the sum of a high-dimensional integral and one or more four-dimensional integrals. The former describes the "bulk physics", which always includes gravity but often also includes other sectors, and the latter are the brane terms. By using Dirac-δ factors, the hybrid action can be written as a single integral over the full high-dimensional spacetime, with the branes thus revealed as being infinitely-thin along the extra dimensions. Theories in this class include the renowned ADD1 and Randall-Sundrum (RS) models.2,3 While the origin of the branes is not specified per se, the general idea is that eventually such models ought to arise as the low-energy field-theory limits of some UV-complete string theories, with the branes then explained as non-perturbative stringy objects. For example, RS set-ups can emerge from "warped throats" in string theory. This ties in well with the fact that the high-dimensional field theories are non-renormalizable and thus UV incomplete. We shall be concerned here with field-theory models containing no fundamental branes. The branes instead are postulated to be topological defects or solitons, stable classical solutions for scalar fields (usually).4 Such a configuration may replace the usual homogeneous vacuum as the effective ground state of the world. The origin of the branes is thus understood within the field theories themselves, and they are of finite thickness. Because the action is now all "bulk" — entirely high-dimensional — every field in the theory is able to propagate throughout the whole spacetime. This means that one must explain why the world presents as 3 + 1-dimensional through dynamical localization: interactions that cause fields to "stick" to the defect. One of the main achievements of Randall and Sundrum was to show that gravitons
may be dynamically localized to a fundamental brane. Soliton-brane models seek an extension of this phenomenon to cover all relevant fields, not just that of the graviton. Like fundamental-brane field-theory models, soliton-brane models are nonrenormalizable so they are implicitly defined with a UV cut-off. They cannot be ultimate theories; they are instead bottom-up extensions of the SM, valid only up to the cut-off energy scale. They eventually require UV completion, but to begin with one is concerned with what they bring to an effective description of the low-energy world, not how they tie in with in-principle complete theories. String theory completions are not precluded, as far as I am aware, since there may be string theories whose low-energy effective field-theory limits admit field-theoretic soliton solutions. But within the bottom-up approach one need not commit oneself to the nature of the UV completion. To get down to specifics: I shall discuss 4 + 1-dimensional models where the brane shall be a domain wall or kink soliton, and the extra dimension topologically infinite. The action shall treat all dimensions on an equal footing: there shall be no "compactification".3 I am attracted to this kind of theory precisely because all dimensions are symmetrically treated in the action.a The fact that a brane may appear spontaneously, courtesy of a domain wall solution, and that other fields may then be dynamically localized to it, realising an effective 3 + 1-dimensional world without contrivance, is aesthetically appealing to me. So, can our world actually be of this nature? And if it can be, are any mysteries thus explained? I shall present evidence that realistic models of this type are possible, and that they may explain why most quarks and leptons have masses suppressed compared to the electroweak scale.
a These models combine the visions of Rubakov-Shaposhnikov4 and (type-2) Randall-Sundrum,3 so they may be called RS-RS theories.
3. Dynamical Localization to Domain Walls
The prototype domain-wall brane is the Z2 kink. Let η be a real scalar field with the η → −η symmetric potential

V = λ(η² − v²)².   (1)
The vacua, η = ±v, are disconnected due to the fact that a discrete symmetry is being spontaneously broken. The kink configuration is a static solution to the Euler-Lagrange equations that depends on one spatial coordinate, call it y, with the vacua −v and +v imposed as boundary conditions at y = −∞ and y = +∞, respectively (the antikink has the boundary conditions reversed):

ηkink(y) = v tanh(√(2λ) v y).   (2)

Such a solution exists in any number of spatial dimensions. For our application, y is the single extra dimension, and this configuration is a three-dimensional "domain wall" separating bulk regions that are close to the two vacua on opposite sides of the wall. The width of the wall is (√(2λ) v)⁻¹, a free parameter, with a phenomenological upper bound of about TeV⁻¹. We shall require a slightly more elaborate configuration as the basis for our proposed realistic model, but for the moment think of the brane as the above solution. Now consider a 4+1-dimensional fermion field Ψ(x, y), where x is the usual 3+1-dimensional coordinate. Chiral massless 3 + 1-dimensional fermions, dynamically localized to the domain wall, are an inevitable outcome of Yukawa coupling to η.4 To see this, one starts with the 4 + 1-dimensional Dirac equation,

iΓ^M ∂_M Ψ − b(y)Ψ = 0,   (3)

where Γ^M = (γ^µ, −iγ₅) and b(y) is equal to the kink solution multiplied by the Yukawa coupling constant. Note for later use that b(y) is generally the sum of background scalar-field configurations multiplied by Yukawa coupling constants. One seeks a separated-variable solution Ψ(x, y) = fL0(y)ψL0(x), where ψL0(x) is a 3 + 1-dimensional left-chiral fermion obeying iγ^µ ∂_µ ψL0 = 0. It is massless, a zero mode. Substituting in Eq. (3) reveals the solution

fL0(y) = N exp(−∫₀^y b(y′) dy′),   (4)

where N is a normalization constant. With the Yukawa coupling constant chosen as positive, we see that the exponent goes rapidly to negative infinity as y → ±∞, so fL0 is square-integrable and peaked around y = 0. The associated 3 + 1-dimensional left-chiral fermion ψL0 is dynamically localized around the center of the domain wall, y = 0. This localized chiral zero mode is our prototype quark or lepton. (To obtain a localized right-chiral fermion one simply chooses a negative Yukawa coupling constant.) The mode function fL0 is often called a "profile". In the more general case where b(y) is the sum of separate terms, the localization center is y₀ such that b(y₀) = 0. If b(y₀) has positive slope then a left-chiral mode is localized, otherwise it is a right-chiral mode. The lower limit of integration in Eq. (4) is then most conveniently chosen to be y₀. Notice that the zero-mode profile is exponentially sensitive to the Yukawa coupling constants. We shall use this feature to account for fermion mass hierarchies later on. The zero mode is the first term in a generalized Kaluza-Klein (KK) or mode decomposition of Ψ(x, y):

Ψ(x, y) = fL0(y)ψL0(x) + Σn [fLn(y)ψLn(x) + fRn(y)ψRn(x)],   (5)

where the ψn's obey 3 + 1-dimensional Dirac equations,

iγ^µ ∂_µ ψL,Rn = mn ψR,Ln.   (6)

Substitution in Eq. (3) reveals that the mode functions fL,Rn obey the Schrödinger-like equations6

−fL,Rn′′ + W∓ fL,Rn = mn² fL,Rn,   (7)
Fig. 1. The left-hand figure shows the effective Schrödinger potentials for left- and right-chiral 3 + 1-dimensional fermion fields. In this example the well for left-chiral modes is deeper and supports a zero mode as its ground state. The right-hand figure displays the modes for these wells. The massive modes constituting the infinite Kaluza-Klein tower exhibit mass-degeneracy pairing between left- and right-chiral states.
where prime denotes differentiation with respect to y, with effective potentials

W∓(y) = b(y)² ∓ b′(y).   (8)
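As a cross-check of Eqs. (4) and (7)-(8), the following minimal finite-difference sketch (with arbitrary parameter choices of my own, not values from the talk) diagonalizes the left-chiral Schrödinger problem for a single kink background and compares the lowest mode with the analytic zero-mode profile.

import numpy as np

# Single-kink background b(y) = h v tanh(k y); illustrative parameter choices.
h, v, k = 1.0, 1.0, 1.0
L, n = 15.0, 1500
y = np.linspace(-L, L, n)
dy = y[1] - y[0]

b = h * v * np.tanh(k * y)
w_minus = b**2 - np.gradient(b, dy)          # effective potential for left-chiral modes, Eq. (8)

# Hamiltonian -d^2/dy^2 + W_-(y) with Dirichlet boundaries at y = +/- L.
ham = (np.diag(2.0 / dy**2 + w_minus)
       - np.diag(np.full(n - 1, 1.0 / dy**2), 1)
       - np.diag(np.full(n - 1, 1.0 / dy**2), -1))
m2, modes = np.linalg.eigh(ham)

f_num = np.abs(modes[:, 0]) / np.abs(modes[:, 0]).max()      # lowest mode, normalized to 1 at its peak
f_ana = np.cosh(k * y) ** (-h * v / k)                       # analytic zero-mode profile from Eq. (4)
print("lowest mass-squared eigenvalue      :", m2[0])        # should be ~ 0 (chiral zero mode)
print("max |numeric - analytic| difference :", np.max(np.abs(f_num - f_ana)))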
As shown in Fig. 1, the effective potentials are finite wells. The "bound states" correspond to discrete, localized fermions, beginning with the chiral zero mode and followed by a finite number of massive KK excitations, with the mass gaps at the scale of the inverse width of the wall. Above the height of the wells, the spectrum becomes continuous, with the "scattering states" now describing delocalized fermions. Processes above this energy will liberate states from being stuck to the wall. The KK decomposition is an exact re-expression of the 4 + 1-dimensional fermion field as an infinite tower of 3+1-dimensional fields. The mode functions fL,Rn are simply a convenient complete set, used as a basis for bounded field configurations. We also want the ability to dynamically localize scalar fields. For instance, to produce an effective localized SM we need a localized Higgs doublet. The procedure is almost completely analogous to the case of fermions. One takes a 4+1-dimensional scalar field, couples it via a scalar potential to the fields constituting the domain wall, and decomposes it as an infinite KK tower using a convenient complete set of mode or profile functions. An important difference from the fermion case is that no zero mode is obtained (absent fine tuning). The effective Schrödinger eigenvalues are now mn², the effective 3 + 1-dimensional KK mass-squared values. By adjusting the scalar-potential parameters, one can produce a ground state with a negative m0², thus triggering the condensation of the localized scalar field.6 This is how we shall induce spontaneous electroweak symmetry breaking below. We next turn to the graviton. Fortunately, this story is a minor modification of the type-2 Randall-Sundrum (RS2)3 analysis.7 One minimally couples gravity to the domain-wall forming scalar field η, and solves the coupled Einstein-Klein-Gordon equations for the kink and a warped metric. A solution with qualitatively similar features to the RS2 set-up exists, provided that the equivalent fine-tuning condition between the bulk cosmological constant and the domain-wall energy density or
tension is made. The warp-factor exponent is now a smoothed-out version of the RS2 |y| function, and the bulk is only approximately anti-de-Sitter. A massless graviton is still dynamically localized to the wall, and the corrections to Newton's gravitational potential are negligible. The effective Schrödinger potential for the graviton profile is volcano-like, as in RS2, but the well is smooth, not Dirac-δ-like. There is no mass gap above m = 0, exactly because of the volcano-like potential, but its absence is phenomenologically fine, just as in RS2, because the continuum mode functions have very suppressed amplitudes inside the wall. The warped metric does affect the localization potentials for the other fields.8 The finite potential wells in the flat-space case (see e.g. Fig. 1) now get deformed into volcanos. The fermion chiral zero mode remains as a bound state, but the discrete KK excitations now become resonances embedded in a continuum of modes. They are quasi-localized, but with lifetimes that can be made long by adjusting the width and height of the classically-forbidden region they must tunnel through to escape the brane. The continuum states between the resonances have very little effect inside the domain wall, because they have very small amplitudes there (for the same reason that the RS2 graviton continuum modes have small amplitude: they have to tunnel through a wide classically-forbidden region to get inside the wall). For simplicity, we shall work in flat space when we come to write down the proposed realistic theory. The above discussion shows that we understand how the picture will change in the presence of warped gravity, and that the changes will be small. Let us now turn to gauge bosons, the most difficult class of particles to deal with. The technique used for fermions, scalars and the graviton does not work for gauge bosons. The y-dependent profile functions that would be thus obtained would spoil gauge universality, and the localized spin-1 fields would be massive not massless. Quite different physics is needed. The most plausible known mechanism to achieve gauge boson localization while preserving exact 3 + 1-dimensional gauge invariance and thus gauge universality is that proposed by Dvali and Shifman (DS).9 It uses non-perturbative quantum field theory, very different from the essentially classical localization mechanism that works so well for the other fields. The idea is this: Let the bulk respect gauge symmetry G, and require it to be confining. Let the domain wall background scalar fields spontaneously break G to subgroup H inside the wall only. The hypothesis is that the gauge bosons of H are localized. Figure 2 depicts this situation for the toy model G = SU (2) and H = U (1). Two heuristic arguments for H gauge boson localization have been proposed. The original DS argument uses the mass gap associated with confinement. The H gauge bosons must propagate as massive G glueballs in the bulk. Inside the wall, they propagate either as free particles or as H glueballs. Provided the H glueballs are less massive than the G glueballs, there is an energy cost to H glueballs entering the bulk. Thus they are repelled from the bulk; they are dynamically localized. The description is even easier for unconfined U(1) gauge bosons: they are simply free
Fig. 2. Schematic picture of the Dvali-Shifman set up. The bulk respects local SU (2) symmetry, which is spontaneously broken to U (1) inside the wall. The bulk SU (2)-symmetric regions are presumed to be in confinement phase. A source for the U (1) gauge field is placed inside the wall. The electric field lines initially spread out evenly in all spatial dimensions, but they are repelled from the confining bulk, and thus spread only along the wall at distances large compared to the wall width. Thus the asymptotic Coulomb field has three- not four-dimensional character. The U (1) gauge boson is dynamically localized and massless.
massless particles inside the wall, but incorporated into massive G glueballs in the bulk. The second argument was later articulated by Arkani-Hamed and Schmaltz.10 Place a source of the U (1) charge inside the wall. At distances much less than the wall width from the source, the electric field lines spread out as per Gauss's law in four spatial dimensions. But for distances greater than the width, they argue that the field lines get repelled from the bulk region. The requirement for the bulk to be in confinement phase implies that the bulk cannot support diverging electric field lines, and in fact is expected to repel them. If you adopt the monopole condensation picture of confinement, then this repulsion is ascribed to the dual Meissner effect. Thus the electric field lines get channeled along the wall, and at distances large compared to the width of the wall the field fall-off is inverse-square, and thus effectively three-dimensional. Now place the source charge in the bulk. The electric field lines, due to confinement, will form a flux tube that ends inside the wall, at which point they expand as if the source particle were actually inside the wall! Thus, it does not matter where along y the source charge is placed: the electric field far from it will be confined inside the wall and follow the inverse-square law. This is important for us: the profiles describe how a wall-localized particle is smeared along the extra dimension. If the above heuristic argument is valid, then the long-distance gauge field sourced by a particle will be the same irrespective of its profile. This is essential for achieving gauge universality in the 3 + 1-dimensional effective theory. We shall assume that the DS mechanism works as advertised. Obviously, to be sure of this, detailed computations using, for example, lattice techniques ought to
be performed (this has been attempted, but only for 2 + 1 dimensions11). The best that has been achieved so far is to prove that a confinement phase exists for 4 + 1-dimensional pure Yang-Mills theory at finite lattice spacing (equivalent to a UV cut-off theory) provided that the gauge coupling constant exceeds a critical value.12 This completes our review of dynamical localization mechanisms. It is straightforward to write down explicit models with localized fermions, scalars and gravitons: everything about the required physics is understood and readily calculable. Gauge bosons are all that stop me from proclaiming that domain-wall brane localized gauge theories can definitely be constructed. If the DS mechanism works, which we shall provisionally assume, then model building can proceed.
4. Putting it All Together: the SU (5) Model
The Dvali-Shifman mechanism requires us to embed SU (3) × SU (2) × U (1) in a larger group that breaks to it inside the wall. The minimal sensible choice is SU (5).13 (Extensions to both SO(10)14 and, in a different way, to E6 15 have also been constructed.) We use an SU (5) singlet scalar η to produce a kink, and an SU (5) adjoint χ to break SU (5) to the SM inside the wall. Write χ = Σa T^a χ^a, where the T's are SU (5) generators in the fundamental representation. If the component χ1 corresponding to the hypercharge generator Y condenses inside the wall, then SU (5) → SU (3) × SU (2) × U (1)Y. We next write a Higgs potential for these two multiplets, and arrange the global minima to be

⟨η⟩ = ±v,   ⟨χ⟩ = 0.   (9)
These are then used as boundary conditions for a stable domain wall solution of the coupled Euler-Lagrange equations. In general, only numerical solutions are possible. However, on a certain slice through Higgs-potential parameter space, the equations simplify enough to permit an instructive analytical solution to exist,

η(y) = v tanh(ky),   χ1(y) = A sech(ky),   (10)
where v, k and A are related to scalar potential parameters. This solution is graphed in Fig. 3. The adjoint field is nonzero only inside the wall, breaking SU (5) to the SM there. The bulk regions respect SU (5), and are assumed to be in confinement phase. Next, we introduce 4 + 1-dimensional fermions in the usual SU (5) representations,

Ψ5 ∼ 5∗,   Ψ10 ∼ 10,   N ∼ 1,

which are Yukawa coupled to η and χ:

YDW = h5η Ψ̄5 Ψ5 η + h5χ Ψ̄5 χ^T Ψ5 + h10η Tr(Ψ̄10 Ψ10) η − 2h10χ Tr(Ψ̄10 χ Ψ10) + h1η N̄ N η.   (11)
Fig. 3. The domain wall solution for the SU (5) model. The singlet scalar η forms a kink, while the appropriate component of the adjoint χ condenses inside the wall, breaking SU (5) to the SM gauge group. This realizes the required Dvali-Shifman set up.
The background fields, called b(y) earlier, to be used in the 4 + 1-dimensional Dirac equations for the fermions are:

bnY(y) ≡ hnη η(y) + √(3/5) (Y/2) hnχ χ1(y).   (12)

Observe that SM components of different hypercharge Y couple to different linear combinations of η(y) and χ1(y), even if they are from the same SU (5) multiplet. Fermions are split along the extra dimension,16 but not arbitrarily. The 3 + 1-dimensional chiral quarks and leptons are the zero modes of the Dirac equations. We now introduce a 4 + 1-dimensional scalar, Φ ∼ 5∗, containing the weak doublet Φw and a colored scalar Φc. It is Yukawa coupled to the fermions in the usual way, and it is added to the scalar potential of the theory via all possible gauge-invariant terms (we stop the in-principle infinite series of terms in our nonrenormalizable effective field theory at quartic order). A mode decomposition may be performed, with our interest mainly being in the lowest modes for the electroweak doublet and its colored partner:

Φw,c(x, y) = pw,c(y) φw,c(x).   (13)
The above ansatz is substituted into the Euler-Lagrange equations for Φw,c to obtain effective Schrödinger equations for the profiles pw,c(y), with the potential wells plotted in Fig. 4. The pw well is deeper (due to the scalar potential parameter region chosen) and has a lowest eigenvalue that is negative, triggering spontaneous electroweak symmetry breaking on the wall. The colored scalar must have all eigenvalues positive to preserve color symmetry. This completes the construction of the model (in the absence of gravity). I now report some preliminary results from work by Ben Callen on fitting the model parameters to reproduce the observed quark and lepton masses, including
Fig. 4. Example of the effective localization potentials for the electroweak Higgs doublet (solid) and its SU (5) colored partner (dashed). The scalar potential parameters are chosen so that the doublet has a negative mass-squared eigenvalue, which triggers spontaneous electroweak symmetry breaking inside the wall.
those for Dirac neutrinos (Majorana neutrinos are also possible and interesting, but I shall discuss only the extreme case of Dirac neutrinos in this talk).17 Any 3 + 1-dimensional Yukawa coupling term is of the form

h [∫ dy fL(y) fR(y) p(y)] ψ̄L(x) ψR(x) φ(x).   (14)

The 3 + 1-dimensional Yukawa coupling constant is equal to the 4 + 1-dimensional Yukawa coupling constant multiplied by an overlap integral of profile functions, which themselves depend on fermion-brane Yukawa coupling constants in a complicated way. The profiles are exponentially sensitive to these Yukawa coupling constants. Generically, one expects the overlap integrals of the split multiplets to suppress the effective 3 + 1-dimensional masses, leading in a natural way to hierarchies. Searching the parameter space is numerically intensive. We have been proceeding by trial-and-error, and have found that multiple viable regions exist. Figures 5-7 give examples of profiles that produce the correct quark and lepton masses, including acceptably tiny ones for the Dirac neutrinos. Both quark and lepton mixing were switched off for simplicity in these examples. It is amusing that all standard electroweak Yukawa coupling constants have been set as equal for these plots! The mass hierarchies arise solely from the fermion-brane Yukawa terms via splitting, and not at all from the usual SM source. The spread in fermion-brane Yukawa coupling constants was less than an order of magnitude in this example, but this produced a 14 order-of-magnitude spread in the fermion masses, ranging from the top quark mass to the tiny neutrino masses. It is quite complicated to parameter-fit for mixing angles as well as the masses. We think it should be possible to fit the small quark mixing angles in a natural way, but not the tribimaximal mixing pattern for the neutrinos. The latter seems to call for a flavor symmetry to exist.
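A schematic numerical illustration of this overlap-integral suppression is given below; the profile shapes, widths and localization centres are invented placeholders rather than the fitted profiles of Figs. 5-7, and the only point being made is the strong sensitivity of the overlap to how far apart the fields are localized.

import numpy as np

# Toy version of Eq. (14): overlap of a left-handed profile, a right-handed profile
# and the Higgs profile, as a function of their separation along the extra dimension.
y = np.linspace(-10.0, 10.0, 4001)
dy = y[1] - y[0]

def profile(center, c, k=1.0):
    """Localized profile ~ cosh(k(y - center))^(-c), normalized to square-integrate to one."""
    f = np.cosh(k * (y - center)) ** (-c)
    return f / np.sqrt(np.sum(f**2) * dy)

p_higgs = profile(0.0, 2.0)
print(" split   overlap integral")
for split in (0.0, 1.0, 2.0, 4.0, 6.0):
    f_left = profile(-0.5 * split, 3.0)
    f_right = profile(+0.5 * split, 3.0)
    overlap = np.sum(f_left * f_right * p_higgs) * dy
    print(f"  {split:4.1f}   {overlap:.3e}")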
Fig. 5. Profiles for the first generation of quarks and leptons and the electroweak Higgs doublet. They have been normalized to square-integrate to one, and the horizontal axis is ky. Black dashed = electroweak Higgs; black-solid = RH neutrino; black-dotted = LH lepton doublet; black-dotted-and-dashed = RH charge −1/3 quark; gray-solid = LH quark doublet; gray-dotted = RH charge +2/3 quark; gray-dotted-and-dashed = RH charged lepton.
Fig. 6. As for Fig. 5 but for the second generation.
The best eventual model of this type is likely to solve the qualitative mass hierarchy problem using splitting, but to explain the mixing matrices through a flavor symmetry. Finally, it is worth mentioning the role of the colored scalar φc. It is troublesome, as it leads to proton decay: the doublet-triplet splitting problem is the need to make the colored scalar very massive, even though its SU (5) partner needs to be at the electroweak scale. In usual grand-unified theories, this can only be achieved by fine-tuning. In this extra-dimensional setting, one has the chance to remove this problem through splitting. One has to see if there is a viable parameter regime where enough colored-scalar Yukawa coupling constants are very small due to tiny overlap integrals. For the parameter choice leading to the above profiles, it turns
Fig. 7. As for Fig. 5 but for the third generation.
out that one of the Yukawa terms involving φc has a tiny coupling constant, 10⁻³⁰ (it is the one for uR (eR)^c φc*). This shows that splitting can suppress such colored-scalar Yukawas. In fact, in work performed after this talk was given, Callen found a parameter region where all φc-induced proton decay processes are suppressed, eliminating the doublet-triplet splitting problem, while being consistent with a good quark and lepton mass spectrum.17
5. Conclusion
Viable domain-wall brane models appear to exist, provided that the Dvali-Shifman mechanism for gauge boson localization works in that 4 + 1-dimensional context. The dynamical localization of all other fields (scalars, fermions and the graviton) is well understood. I have presented an SU (5) model that may be a realistic theory of this type. It can explain the existence of quark and lepton mass hierarchies in a qualitative way, and it ameliorates the doublet-triplet splitting problem. Apart from the veracity of the Dvali-Shifman mechanism, there are several other as-yet unanalyzed issues: gauge coupling constant unification, gauge-boson-induced proton decay, and the generation of quark and lepton mixing angles including CP-violating phases. Deeper problems include the lack of a dark matter candidate, and the gauge hierarchy problem. All these issues are under study.
Acknowledgments
I thank the organizers for their kind invitation to this meeting, and all my extra-dimensional collaborators and students, especially Ben Callen, Rhys Davies and Damien George for the work presented here. This work was partially supported by the Australian Research Council. Part of this work was done at the Aspen Center for Physics in July 2009 under the aegis of the neutrino workshop, and I thank André de Gouvêa for informative discussions during that time. I also thank Ferruccio Feruglio
for useful discussions in Padova in October 2009.
References
1. N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Lett. B429, 263 (1998).
2. L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999).
3. L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 4690 (1999).
4. V. A. Rubakov and M. E. Shaposhnikov, Phys. Lett. B125, 136 (1983).
5. K. Akama, Lect. Notes Phys. 176, 267 (1982); M. Visser, Phys. Lett. B159, 22 (1985); G. W. Gibbons and D. L. Wiltshire, Nucl. Phys. B287, 717 (1986); I. Antoniadis, Phys. Lett. B246, 377 (1990); I. Antoniadis et al., Phys. Lett. B436, 257 (1998).
6. See, for example, D. P. George and R. R. Volkas, Phys. Rev. D75, 105007 (2007).
7. See, for example, C. Csaki et al., Nucl. Phys. B581, 309 (2000); O. de Wolfe et al., Phys. Rev. D62, 046008 (2000); A. Davidson and P. Mannheim, hep-th/0009064.
8. R. Davies and D. P. George, Phys. Rev. D76, 104010 (2007).
9. G. R. Dvali and M. A. Shifman, Phys. Lett. B396, 64 (1997).
10. N. Arkani-Hamed and M. Schmaltz, Phys. Lett. B450, 92 (1999).
11. M. Laine et al., JHEP 04, 027 (2004).
12. M. Creutz, Phys. Rev. Lett. 43, 553 (1979); D. P. George, PhD thesis, University of Melbourne (2009).
13. R. Davies, D. P. George and R. R. Volkas, Phys. Rev. D77, 124038 (2008).
14. J. E. Thompson and R. R. Volkas, Phys. Rev. D80, 125016 (2009).
15. A. Davidson et al., Phys. Rev. D77, 085031 (2008).
16. N. Arkani-Hamed and M. Schmaltz, Phys. Rev. D61, 033005 (2000).
17. B. Callen and R. R. Volkas, in preparation.
SEARCH FOR LEPTON FLAVOUR VIOLATION WITH THE µ+ → e+ γ DECAY: FIRST RESULTS FROM THE MEG EXPERIMENT

GIOVANNI SIGNORELLI∗

INFN Sezione di Pisa, Largo Bruno Pontecorvo 3, I-56127 Pisa, Italy
E-mail: [email protected]

The MEG experiment aims at testing the lepton-flavour symmetry, present in the Standard Model, by searching for the µ+ → e+ γ decay with a sensitivity of a few ×10−13, two orders of magnitude better than the present experimental limit. Novel detectors were developed for this measurement, as well as multiple and redundant calibrations which are mandatory to constantly monitor the performance and possible drifts in the apparatus. The experiment had a start-up physics run in the last three months of 2008 at reduced acceptance. From the analysis of the first data a limit on the branching ratio of BR(µ+ → e+ γ) < 2.8 × 10−11 was obtained, which is about a factor of two larger than the present experimental limit set by the previous experiment. The experiment finished a second short run in 2009 and is scheduled to take data in 2010 and 2011 to reach its full sensitivity.

Keywords: Muon decay; Lepton Flavour Violation; MEG.
1. Introduction
The MEG experiment, hosted by the Paul Scherrer Institut in Villigen (Switzerland), is designed and built by an international collaboration of physicists from Italy, Japan, Russia, Switzerland and the United States, and aims at measuring the branching ratio BR(µ+ → e+ γ / µ+ → e+ ν ν̄) with unprecedented sensitivity.1 The µ+ → e+ γ decay is forbidden in the Standard Model of elementary particles (SM) by the fact that the neutrinos of the three families are massless and degenerate.
∗ On behalf of the MEG collaboration: J. Adam, X. Bai, A. Baldini, E. Baracchini, A. Barchiesi, C. Bemporad, G. Boca, P. W. Cattaneo, G. Cavoto, G. Cecchet, F. Cei, C. Cerri, A. De Bari, M. De Gerone, T. Doke, S. Dussoni, J. Egger, L. Galli, G. Gallucci, F. Gatti, B. Golden, M. Grassi, D. N. Grigoriev, T. Haruyama, M. Hildebrandt, Y. Hisamatsu, F. Ignatov, T. Iwamoto, D. Kaneko, P.-R. Kettle, B. I. Khazin, O. Kiselev, A. Korenchenko, N. Kravchuk, A. Maki, S. Mihara, W. Molzon, T. Mori, D. Mzavia, H. Natori, R. Nardò, D. Nicolò, H. Nishiguchi, Y. Nishimura, W. Ootani, M. Panareo, A. Papa, R. Pazzi, G. Piredda, A. Popov, F. Renga, S. Ritt, M. Rossella, R. Sawada, M. Schneebeli, F. Sergiampietri, G. Signorelli, S. Suzuki, C. Topchyan, V. Tumakov, Y. Uchiyama, R. Valle, C. Voena, F. Xiao, S. Yamada, A. Yamamoto, S. Yamashita, Yu. V. Yudin, D. Zanello (Dubna, Genova, KEK, Lecce, Novosibirsk, Pavia, Pisa, PSI, Roma I, Tokyo, UCI, Waseda).
With the introduction of neutrino masses and mixings in the model, the µ+ → e+ γ
decay is radiatively induced, but at a negligible level, since the muon neutrino has to oscillate into an electron neutrino during a W-boson's lifetime, resulting in a probability for this process of ∼ 10−54. It is generally believed that the SM is just a low energy approximation of a more fundamental theory, and in all its extensions the rate for the µ+ → e+ γ process is enhanced by mixings that are naturally present in the high energy sector of these theories, since many more particles can circulate in the loop, turning the initial muon flavour into an electron final state. Predictions can be made for the decay rate that depend to some extent on the kind of theory (supersymmetric grand-unification theory, extra-dimensions, heavy right-handed neutrinos...) but are generally in the range of BR(µ+ → e+ γ) ≈ 10−12 ÷ 10−14.2–4 An experiment that is able to explore such a range of probabilities has therefore the potential to discover physics beyond the SM or to pose serious constraints on its possible extensions.
1.1. Connections with other branches of particle physics
The search for this rare decay has many connections with other branches of physics that may be more familiar to the reader. We just want to give here three examples of such links.
(a) The quest for the µ+ → e+ γ decay is a way to search for physics beyond the SM which is complementary to the search for new particles performed at the high energy frontier (e.g. at the LHC). The small terms in the model Lagrangian which are suppressed by powers of E/M, where E is the energy scale of the process and M is the new physics mass scale, are investigated not by increasing the energy (and enhancing the process) but by performing a precision experiment.
(b) The established phenomenology of neutrino oscillations5 implies that the neutral leptons do mix. It is natural to believe that such mixing is transferred to the charged leptons through their common high energy partners. Models exist in which this mixing is maximal (PMNS-like, or similar to the neutrino mixing) or minimal (CKM-like, similar to the quark mixing). Depending on the case, the observation or exclusion of the µ+ → e+ γ process can shed light on the nature of such mixing.6
(c) The experimental measurement of the anomaly of the muon magnetic moment (aµ) differs from the SM prediction by 3.4 σ.7 The Feynman diagrams that describe the contributions to aµ and µ+ → e+ γ are the same, once the outgoing muon leg is replaced by an electron: the muon anomaly is related to the diagonal part of the charged lepton mixing matrix whereas the µ+ → e+ γ process is related to the off-diagonal terms. Models exist in which a prediction of the µ+ → e+ γ rate can be made as a function of the discrepancy of aµ with respect to the SM value.8 The present discrepancy is compatible with a possible BR(µ+ → e+ γ) in the range 10−13 ÷ 10−12.
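For orientation, the ∼ 10−54 figure quoted at the beginning of this section follows from the standard estimate (a textbook result, not derived in this contribution)

BR(µ → eγ) ≈ (3α/32π) |Σi U*µi Uei ∆m²i1 / M²W|²,

where U is the neutrino mixing matrix and ∆m²i1 are the neutrino mass-squared differences; the factor (∆m²/M²W)² provides the enormous GIM-like suppression.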
1.2. Historical perspective of the µ+ → e+ γ search
The search for the µ+ → e+ γ decay started soon after the discovery of the µ-meson in the cosmic radiation (see Fig. 1). The first limit was set to less than 10% by Hinks and Pontecorvo in 1948,9 making use of cosmic ray muons. The limit improved constantly thanks to the development of pion beams first, and muon beams after the 1970s. The non-existence of this process at a level of 10−5 in the mid-1950s led directly to the two-neutrino hypothesis (νµ ≠ νe), which was verified a few years later.
Fig. 1. Limit on the µ+ → e+ γ branching fraction as a function of the year. The expected sensitivity of the MEG experiment is also indicated.
Each improvement in the limit was linked to an improvement in beam or detector technology, and there has always been a trade-off between the various detector elements (e.g. efficiency versus resolution, solid angle versus time performance) to reach the optimal sensitivity. The present experimental limit for the branching ratio BR(µ+ → e+ γ) is set by the MEGA experiment10 to 1.2 × 10−11 and is one of the strongest bounds on lepton-flavour number conservation.
2. Experimental Search for the Decay
The µ+ → e+ γ signal has a simple topology and appears as a 2-body final state of a positron and a γ-ray, emitted in opposite directions with an energy of 52.8 MeV each, corresponding to half of the muon mass. The background to such a measurement can be divided into "prompt" and "accidental" background. The former comes from radiative muon decays (RMD,
Fig. 2. Cross-sectional view of the MEG detector.
µ+ → e+ ν ν̄γ) in which the two neutrinos carry little energy, the latter comes from an accidental coincidence of a high energy positron from a normal (Michel) muon decay with an energetic photon coming from RMD, positron bremsstrahlung or annihilation in flight. It can be shown11 that in experiments such as MEG the accidental background dominates, and is proportional to the muon rate and the experimental resolutions on the particle kinematical variables:

BRacc ≈ Rµ ∆Ee (∆Eγ)² (∆θeγ)² ∆teγ.   (1)
The successful search for the µ+ → e+ γ decay therefore needs a detector with superior resolutions for both positrons and γ-rays at energies close to 52.8 MeV. The MEG experiment was designed to reach a sensitivity of a few ×10−13, two orders of magnitude better than the present limit.
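Eq. (1) makes the payoff of better resolutions very concrete. The snippet below is an illustration added here with made-up relative resolution factors (not the real MEG numbers); it only demonstrates the scaling of the accidental background.

```python
# Toy scaling of the accidental background of Eq. (1); all numbers are illustrative.
def br_accidental(r_mu, de_e, de_g, dtheta, dt):
    """Relative accidental-background level: R_mu * dE_e * dE_g^2 * dtheta^2 * dt."""
    return r_mu * de_e * de_g**2 * dtheta**2 * dt

baseline = br_accidental(1.0, 1.0, 1.0, 1.0, 1.0)
# Halving only the photon-energy and angular resolutions, everything else unchanged:
improved = br_accidental(1.0, 1.0, 0.5, 0.5, 1.0)
print(improved / baseline)   # 0.0625: a factor 16 less accidental background
```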
3. The MEG Detector

The MEG detector schematics is depicted in Fig. 2. A high-intensity beam of 28 MeV/c surface muons (∼ 3 × 107 µ+/sec) is brought to rest in a thin polyethylene target. The positron four-momentum is measured by a magnetic spectrometer composed of a set of drift chambers immersed in a non-homogeneous magnetic field, coupled to a plastic scintillator timing counter, which measures the positron emission time. The γ-ray energy, conversion position and time are measured by an innovative liquid xenon calorimeter. Liquid xenon was chosen because of its short radiation length, high scintillation light yield and fast scintillation signal.
3.1. Beam line and target

Secondary muons generated by the 590 MeV proton accelerator at PSI are captured by the πE5 beam line. A Wien filter deflects the positrons present in the beam in order to obtain a pure muon beam (positron contamination below the per-cent level); the muons are focused by quadrupole triplets and, through a superconducting transport solenoid (BTS) in which a collimator is placed, reach the 205 µm CH2 target placed at the centre of the MEG spectrometer, in a beam spot with σx ≈ σy ≈ 11 mm.

3.2. Positron spectrometer

The MEG positron spectrometer COBRA (COnstant Bending RAdius) consists of a superconducting solenoidal magnet with the tracker inside, coupled to a fast timing system. The COBRA magnet changes radius along the z-axis and, as a result, its field is 1.27 T at z = 0 and decreases as |z| increases, reaching 0.49 T at z = 1.25 m. In this way the positrons are quickly swept away after being measured, so as not to blind the spectrometer. The gradient of the magnetic field was chosen such that positrons with the same absolute momentum follow trajectories with a constant transverse radius, independent of the emission angle. This allows a discrimination of high-momentum signal positrons from the Michel positrons originating from the target. In order to keep the material budget as low as possible, an open-frame construction for the drift chambers was adopted: the frames holding the anode and field wires have an opening on the side close to the muon stopping target; this allows positrons to be detected without scattering in the chamber frames. The frames themselves are made of carbon fibre and are pre-tensioned before attaching the wires. The gas volume is closed by very thin foils; thus the amount of material in the fiducial volume, thanks to the very light construction together with the use of a He–C2H6 gas mixture, corresponds to only 1.5 × 10−3 radiation lengths along the positron path. In total 16 radial drift chambers are placed inside the magnet. The r-coordinate of the track is determined by the drift time with a precision of ∼ 230 µm. The cathodes are etched with a zig-zag shaped, 5 cm long periodic Vernier pattern; therefore six signals are recorded for each chamber cell: two wire ends and four cathode signals. The rough z-coordinate is determined by the charge division at the wire ends, and refined by looking at the cathode charge asymmetry within the correct Vernier period. With this method a good z-resolution is obtained (∼ 600÷700 µm) while keeping the chamber material as low as possible. An array of plastic scintillators is placed on each side of the spectrometer to measure the e+ emission time with a resolution of 100 ps FWHM.
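As a back-of-the-envelope check of the spectrometer scale (a sketch added here, not a number quoted in the text), one can compute the transverse bending radius of a signal positron in the central field with the usual r = p_T/(0.3 B) relation, assuming emission at 90 degrees to the axis:

```python
# Transverse bending radius r = p_T / (0.3 * B), with p in GeV/c, B in tesla, r in metres.
# Purely illustrative check of the COBRA scale for a 52.8 MeV/c positron.
p_t = 0.0528          # signal positron momentum in GeV/c
B   = 1.27            # central field in tesla
r   = p_t / (0.3 * B)
print(f"bending radius ~ {r*100:.0f} cm")   # ~14 cm, i.e. a compact tracking volume
```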
photomultipliers (PMTs) immersed in the liquid at a temperature of 165 K. The total measured light gives an estimate of the photon energy, while the light distribution on the front face is used to determine the position and time of its first interaction.

3.4. Trigger and DAQ

All the signals coming from the detector are processed by two waveform digitizers in parallel: a 2 GHz custom digitizer (DRS12) is used for offline analysis and its resolution is mandatory to search for possible pile-up effects. A 100 MHz FADC-based digitizer is used for trigger purposes: it receives the signals from the xenon detector, the timing counter and the drift chambers and performs an on-line computation of the photon energy and timing, the positron direction and timing, and their correlation. This reduces the rate from the initial 3 × 107 µ-decays per second to an acquisition speed of 7 sec−1.
The DRS chip is able to digitize 8+1 channels at a speed of up to 6 × 109 samples per second with a resolution of 12 bits. The depth of each channel can be 1024 bins or more, applying a cascading scheme. The analog bandwidth of the latest version of the chip is extended up to 850 MHz, which is enough even for the signals from the fast PMTs. Custom-built VME boards host 4 DRS chips (32 channels in total) and an FPGA performing calibration and zero suppression in real time. It has been demonstrated that this solution allows the effective separation of pile-up events separated in time by less than 10 ns.

4. Calibrations

It is understood that in such a complex detector a lot of parameters must be constantly checked. For this reason there are redundant calibration and monitoring tools regarding both single detectors (e.g. PMT equalization, inter-bar timing, energy scale) and multiple detectors simultaneously (relative timing). A list of some of these methods is presented in Tab. 1. As an example, for the xenon detector monitoring several possibilities exist, which allow a survey of the detector in an energy range as large as possible:

Table 1. Typical calibrations that are performed to determine the liquid xenon detector performance (energy scale, linearity, etc.) together with their energy range and feasibility frequency.

Process | Reaction | Energy | Frequency
Charge exchange | π− p → π0 n, π0 → γγ | 55, 83, 129 MeV | year / month
Radiative µ-decay | µ+ → e+ ν ν̄ γ | 52.8 MeV endpoint | week
Proton accelerator | 7Li(p, γ17.6,14.8)8Be | 14.8, 17.6 MeV | week
Proton accelerator | 11B(p, γ4.4,11.6,16.1)12C | 4.4, 11.6, 16.1 MeV | week
Nuclear reaction | 58Ni(n, γ9)59Ni | 9 MeV | daily
Radioactive source | AmBe | 4.4 MeV | daily
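The idea behind these calibration lines can be illustrated with a minimal sketch of a light-sum energy scale (added here for illustration; the peak position and photoelectron numbers are invented and this is not the actual MEG calibration procedure):

```python
# Schematic energy-scale calibration of a light-sum calorimeter (illustrative only).
# 'npe_sum' stands for the pedestal-subtracted sum of PMT photoelectrons in one event.
E_LINE = 17.6  # MeV, the 7Li(p, gamma)8Be calibration line

def calibrate_scale(npe_peak_position):
    """Return the MeV-per-photoelectron factor from the fitted 17.6 MeV peak position."""
    return E_LINE / npe_peak_position

def photon_energy(npe_sum, scale):
    """Convert a light sum into an energy estimate with the calibration factor."""
    return npe_sum * scale

scale = calibrate_scale(npe_peak_position=35200.0)  # made-up peak position
print(photon_energy(105600.0, scale))                # ~52.8 MeV for a signal-like photon
```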
Fig. 3. Spectrum of γ-rays recorded by the liquid xenon detector coming from nuclear reactions induced by the MEG Cockcroft–Walton proton accelerator: 11B(p, γ4.4,11.6)12C (green) and 7Li(p, γ17.6,14.8)8Be (blue).
(1) In the low energy region (5.5 MeV), α-source spots deposited on thin wires13 are used to measure the PMT quantum efficiencies and the liquid xenon optical properties on a daily basis;
(2) In the intermediate energy region a Cockcroft–Walton accelerator is used, three times per week, to shoot protons, in the energy range 400–700 keV, against a Li2B4O7 target. Photons of 17.6 MeV energy from Li(p, γ)Be are used to monitor the xenon detector energy scale and resolution, while time-coincident 4.4 MeV and 11.6 MeV photons from B(p, γ)C are used to intercalibrate the timing of the xenon calorimeter with the positron timing counters (see Fig. 3);
(3) In the high energy region, measurements of photons from π0 decays from π− charge exchange in a liquid hydrogen target are performed twice a year;
(4) LEDs and a custom-developed laser are used to monitor the stability of the subdetectors.

The possibility of having different ways of calibration and monitoring, complementary to each other, is of extreme importance for the experiment.

5. The 2008 Run

In 2008 we had the first physics run, after a short engineering run in 2007. During the summer of 2008 we proceeded with detector assembly, xenon purification and calibration of the detector (CEX runs). Data taking started on 12 September and continued until 23 December. We ran for ∼ 7 × 106 s with an average live time of 50% (including the weekly machine shut-down), corresponding to ∼ 1014 muon decays. Once per week we had one day of data taking at reduced beam intensity to
Fig. 4. (a) Radiative peak in MEG. (b) Radiative peak in dedicated low-intensity RMD runs. The Eγ and Ee thresholds were lower than in nominal physics runs and the back-to-back condition was released. (The fitted peak parameters read off the two panels are mean = 2 ± 8 ps, sigma = 184 ± 7 ps and mean = 6 ± 16 ps, sigma = 152 ± 16 ps.)
be able to see a clear RMD signal for calibration purposes. In Fig. 4 the time distribution of positron–γ coincidences is shown for normal (a) and reduced (b) beam intensities. The peak corresponding to radiative decay is clearly visible on top of the accidental background.
The 2008 run suffered from a severe problem of drift chamber (DCH) instability. An increasing number of chambers suffered frequent high-voltage failures, and in the end we had to run at one third of the nominal acceptance. The problem was solved during the 2009 shutdown, but it influenced the statistics presented in this contribution.

6. Data Analysis

The data collected in the 2008 run were used to perform a blind-box likelihood analysis. Events in which Eγ was close to 52.8 MeV and teγ ∼ 0 were removed from the main data stream and hidden. Data in the sidebands were used to study the distribution of the kinematical variables and the expected background in the signal region, since it is mainly due to accidental coincidences.
The probability density functions (pdfs) for the signal, RMD and accidental background were extracted from data, when possible, or from Monte Carlo computations using experimental inputs. For example, the pdf for Eγ was extracted from a fit to the 55 MeV line of the π0 decay in the CEX runs, scaled to 52.8 MeV. The Ee pdf was extracted by fitting a multi-component Gaussian resolution to the endpoint of the measured Michel spectrum (see Fig. 5). The blinding box was opened after completing the optimization of the analysis algorithms and the background study. The details of the analysis procedure and the
Fig. 5. (a) Measured Michel positron energy spectrum. A solid line shows the fitted function as described in the text. (b) Measured energy spectrum for 54.9 MeV photons from a CEX run.
discussion of the systematics are presented in Ref. 14. The number of µ+ → e+ γ events is determined by means of a maximum likelihood fit in the analysis window region defined as 46 MeV < Eγ < 60 MeV, 50 MeV < Ee < 56 MeV, |teγ| < 1 ns, |θeγ| < 100 mrad and |φeγ| < 100 mrad. An extended likelihood function L is constructed as

\[
\mathcal{L}(N_{\rm sig}, N_{\rm RMD}, N_{\rm BG}) \;=\; \frac{N^{N_{\rm obs}}\,e^{-N}}{N_{\rm obs}!}\,\prod_{i=1}^{N_{\rm obs}}\left[\frac{N_{\rm sig}}{N}\,S + \frac{N_{\rm RMD}}{N}\,R + \frac{N_{\rm BG}}{N}\,B\right],
\]

where Nsig, NRMD and NBG are the number of µ+ → e+ γ, RMD and accidental background (BG) events, respectively, N = Nsig + NRMD + NBG is their sum, Nobs is the number of events observed in the analysis window, and S, R and B are the corresponding probability density functions. The 90% confidence intervals on Nsig and NRMD are determined by the Feldman–Cousins approach.15 A contour of 90% C.L. on the (Nsig, NRMD)-plane is constructed by means of a toy Monte Carlo simulation. The obtained upper limit at 90% C.L. is Nsig < 14.7, where the systematic error is included. The largest contributions to the systematic error are from the uncertainty on the selection of photon pile-up events (∆Nsig = 1.2), the response function of the positron energy (∆Nsig = 1.1), the photon energy scale (∆Nsig = 0.4) and the positron angular resolution (∆Nsig = 0.4). The upper limit on BR(µ+ → e+ γ) is calculated by normalizing the upper limit on Nsig to the number of Michel positrons counted simultaneously with the signal and using the same analysis cuts, assuming BR(µ → eνν̄) ≈ 1. This technique has the advantage of being independent of the instantaneous beam rate and is nearly insensitive to positron acceptance and efficiency factors associated with the DCH and TC detectors. These differ only slightly between the signal and the normalization samples, due to small momentum-dependent effects.14
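A minimal toy version of such an extended likelihood fit, in a single observable, is sketched below. It is added here purely as an illustration: the data set, pdfs and event numbers are invented and it is not the MEG analysis code; a Feldman–Cousins construction on the fitted signal yield would follow as a separate step.

```python
# Toy extended maximum-likelihood fit in one observable (illustrative only).
import numpy as np
from scipy.stats import norm, uniform
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Invented "data": flat background in [46, 60] MeV, no injected signal.
data = rng.uniform(46.0, 60.0, size=300)

def sig_pdf(x):   # signal: Gaussian at 52.8 MeV with a made-up 0.5 MeV resolution
    return norm.pdf(x, loc=52.8, scale=0.5)

def bkg_pdf(x):   # accidental background: flat in the analysis window
    return uniform.pdf(x, loc=46.0, scale=14.0)

def neg_log_likelihood(params):
    n_sig, n_bkg = params
    if n_sig < 0 or n_bkg <= 0:
        return np.inf
    n_tot = n_sig + n_bkg
    # Extended likelihood: Poisson term for the total plus the per-event mixture pdf.
    return n_tot - np.sum(np.log(n_sig * sig_pdf(data) + n_bkg * bkg_pdf(data)))

fit = minimize(neg_log_likelihood, x0=[5.0, 290.0], method="Nelder-Mead")
print(fit.x)   # fitted (n_sig, n_bkg)
```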
Fig. 6. Projected distributions for Eγ (a) and Ee (b), containing all events in the analysis window. The solid lines show the likelihood functions fitted to the data.
The limit on the branching ratio of the µ+ → e+ γ decay is^a

\[
\mathrm{BR}(\mu^+ \to e^+\gamma) \;\le\; 2.8 \times 10^{-11} \qquad (90\%~\mathrm{C.L.}),
\]

where the systematic uncertainty on the normalization is taken into account. The upper limit can be compared with the branching-ratio sensitivity of the experiment with these data statistics. This is defined as the average upper limit on the branching ratio, extracted with toy Monte Carlo simulations, assuming a null signal and the same numbers of accidental background and RMD events as in the data.15 The branching-ratio sensitivity in this case is estimated to be 1.3 × 10−11, which is comparable with the current branching-ratio limit set by the MEGA experiment.10 Given this branching-ratio sensitivity, the probability to obtain an upper limit greater than 2.8 × 10−11 is ∼ 5% if systematic uncertainties in the analysis are taken into account.

7. Status and Perspectives

After a start-up engineering run in 2007 we had the first MEG physics run at the end of 2008, which suffered from detector instabilities. Data from the first three months of operation of the MEG experiment give a result which is competitive with the previous limit. During the 2009 shutdown the problem with the drift chamber instability was solved and the detector operated for the whole 2009 run with no degradation. We had physics data taking in November and December 2009 with improved efficiency, improved electronics and improved resolutions. We are confident of obtaining a sensitivity that should allow us to improve the present experimental limit. The experiment is scheduled to run until the end of 2011 to reach the target sensitivity of the experiment.

^a At the conference a slightly worse limit was shown, namely BR(µ+ → e+ γ) ≤ 3.0 × 10−11. In the meantime a better estimate of the systematic uncertainty was performed. The result reported here is consistent with the one published in Ref. 14.
Acknowledgments

I want to thank the organizer of the BEYOND2010 conference for the perfect organization and for giving me the opportunity of visiting the wonderful city of Cape Town. I met many colleagues from all around the world with whom I had pleasant discussions, not only regarding physics. The walk to the top of Table Mountain with Dr. Z. Ahmed, Dr. S. Capelli, Dr. D. D'Angelo and Dr. C. Kiessig was memorable!

References
1. A. Baldini, T. Mori et al., "The MEG experiment: search for the µ → eγ decay at PSI", available at http://meg.psi.ch/docs
2. R. Barbieri et al., Nucl. Phys. B 445 (1995) 225
3. J. Hisano et al., Phys. Lett. B 391 (1997) 341
4. A. Masiero et al., Nucl. Phys. B 649 (2003) 189
5. T. Schwetz, M. A. Tortola and J. W. F. Valle, New J. Phys. 10 (2008) 113011; A. Strumia and F. Vissani, arXiv:hep-ph/0606054v3
6. L. Calibbi et al., Phys. Rev. D 74 (2006) 116002
7. C. Amsler et al. [Particle Data Group], Phys. Lett. B667 (2008) 1 (pp. 481-482)
8. G. Isidori et al., Phys. Rev. D 75 (2007) 115019
9. E. P. Hincks and B. Pontecorvo, Can. J. Res. 28A, 29 (1950), reprinted in S. M. Bilenky et al. (editors), "Bruno Pontecorvo", Società Italiana di Fisica (1997)
10. M. L. Brooks et al. [MEGA Collaboration], Phys. Rev. Lett. 83, (1999) 1521.
11. Y. Kuno and Y. Okada, Rev. Mod. Phys. 73 (2001) 151.
12. S. Ritt, Nucl. Instrum. Meth. A 518, 470 (2004).
13. A. Baldini et al., Nucl. Instrum. Meth. A 565, 589 (2006).
14. J. Adam et al. [MEG Collaboration], Nucl. Phys. B 834, (2010) 1.
15. G. J. Feldman and R. D. Cousins, Phys. Rev. D 57, (1998) 3873.
SEARCHES FOR MAGNETIC MONOPOLES AND BEYOND

L. PATRIZII∗ and G. GIACOMELLI
INFN and Phys. Dept. of the University of Bologna, v.le Berti Pichat 6/2, I-40127 Bologna, Italy
∗E-mail: [email protected], www.bo.infn.it

Z. SAHNOUN
Astrophys. Dept. CRAAG, B.P. 63, Bouzareah, Algiers, Algeria and INFN Bologna, v.le Berti Pichat 6/2, I-40127 Bologna, Italy

The searches for classical Magnetic Monopoles (MMs) at accelerators, for GUT Superheavy MMs in the penetrating cosmic radiation and for Intermediate Mass MMs at high altitudes are discussed. The status of the search for other massive exotic particles such as nuclearites and Q-balls is briefly reviewed.

Keywords: Magnetic monopoles; strangelets; nuclearites; Q-balls
1. Introduction

Magnetic Monopoles (MMs) are hypothetical particles carrying a magnetic charge which is quantized according to the Dirac relation:1 eg = nℏc/2, i.e. g = n g_D, where e is the basic electric charge, n is an integer, n = 1, 2, ..., and g_D = ℏc/2e = 68.5 e is the unit Dirac charge. Pointlike magnetically charged particles are usually referred to as "classical" or "Dirac" monopoles, whose properties are derived from the Dirac relation. No predictions exist for their mass (a rough estimate, obtained assuming that the classical monopole radius is equal to the classical electron radius, yields m_M ≃ g²m_e/e² ≃ n² 4700 m_e ≃ n² 2.4 GeV/c²). Dirac MMs have been searched for at every new accelerator/collider.
So-called "primordial" GUT magnetic monopoles are topological point defects possibly produced in the Early Universe at the phase transition corresponding to the spontaneous breaking of the Unified Gauge group into subgroups, one of which is U(1).2,3 GUT MMs would have masses as large as 10^16 − 10^17 GeV/c². Later phase transitions could have led to Intermediate Mass Monopoles (IMMs)4 with masses in the range 10^5 ÷ 10^13 GeV/c².
Given their large expected mass, GUT and Intermediate Mass monopoles can only be searched for as relic particles from the Early Universe in the Cosmic
Radiation (CR). In this paper we review the experimental situation on MM searches; a short discussion on the searches for nuclearites5 and Q-balls6 is also presented.
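The two introductory numbers (the Dirac charge and the "classical-radius" mass estimate) can be cross-checked in a few lines. This sketch is added here for illustration and uses only the Gaussian-units relations quoted above.

```python
# Numerical check of the Dirac charge and the rough "classical" monopole mass estimate.
# Uses g_D = hbar*c/(2e) = e/(2*alpha) and m_M ~ g^2 m_e / e^2 from the Introduction.
alpha = 1.0 / 137.036          # fine-structure constant e^2/(hbar c)
g_D_over_e = 1.0 / (2.0 * alpha)
m_e = 0.511e-3                 # electron mass in GeV

for n in (1, 2):
    g_over_e = n * g_D_over_e
    m_M = g_over_e**2 * m_e
    print(f"n={n}: g = {g_over_e:.1f} e, m_M ~ {m_M:.1f} GeV")
# n=1 reproduces g_D = 68.5 e and m_M ~ 2.4 GeV, the numbers quoted in the text.
```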
2. Magnetic Monopole Energy Losses

A fast MM with magnetic charge g_D and velocity v = βc behaves like an equivalent electric charge (ze)_eq = g_D β, losing energy mainly by ionization; for β > 10−1, the energy loss of a g_D MM is ∼ (68.5)² ∼ 4700 times that of a minimum ionizing particle. Slow poles (10−4 < β < 10−2) lose energy by ionization or excitation of atoms and molecules of the medium ("electronic" energy loss) or by yielding kinetic energy to recoiling atoms or nuclei ("atomic" or "nuclear" energy loss). Electronic energy loss dominates for β > 10−3. In noble gases, and for monopoles with 10−4 < β < 10−3, there is an additional energy loss due to atomic energy-level mixing and crossing (Drell effect7). At very low velocities (v < 10−4 c) MMs may lose energy in elastic collisions with atoms or with nuclei. The energy is released to the medium in the form of elastic vibrations and/or infrared radiation.8 Fig. 1 shows schematically the different energy loss mechanisms at work in liquid hydrogen for a g = g_D MM versus its β.9
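The "equivalent charge" statement translates into a one-line scaling, sketched below for illustration (the key point being that the magnetic coupling does not fall off with β, unlike the electric case):

```python
# A fast monopole ionizes like an electric charge (ze)_eq = g_D * beta = 68.5 * beta.
# Relative to a unit-charge minimum ionizing particle, dE/dx scales roughly as (ze)_eq^2.
g_D = 68.5  # Dirac charge in units of e

for beta in (0.1, 0.5, 1.0):
    z_eq = g_D * beta
    print(f"beta = {beta}: equivalent charge ~ {z_eq:.0f} e, "
          f"dE/dx ~ {z_eq**2:.0f} x minimum ionizing")
# For beta ~ 1 this reproduces the ~4700x enhancement quoted in the text.
```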
Fig. 1. The energy losses, in MeV/cm, of g = gD MMs in liquid hydrogen vs β. Curve a) corresponds to elastic monopole–hydrogen atom scattering; curve b) to interactions with level crossings; curve c) describes the ionization energy loss.
2.1. Searches for classical (Dirac) monopoles

Dirac magnetic monopoles have been searched for at accelerators and colliders in e+e−, e+p, pp and p̄p collisions, mostly using scintillation counters, wire chambers

Table 1. Compilation of the accelerator-based MM searches.10 For each search the table lists the reaction, the centre-of-mass energy √s (GeV), the mass range (GeV), the magnetic charge (in gD units), the cross-section upper limit (pb), the accelerator, the experiment and the year. The compilation covers e+e−, e+p, p̄p, pp, pA, nA and heavy-ion (AuAu, PbA) searches at machines including LEP, HERA, the Tevatron, the CERN ISR and earlier accelerators, spanning the years 1959–2008.
and nuclear track detectors (NTDs). Searches were also made based on induction devices, looking at persistent currents induced by monopoles in superconducting coils. In Table 1 the accelerator searches for Dirac MMs are listed; entries in the table are from Ref. 10. The most recent searches are briefly discussed in the following sections.

2.1.1. Searches at LEP

Searches at the CERN LEP e+e− collider were performed by the MODAL12 and OPAL collaborations.13 Both searches were based on the detection of MM pairs produced through the e+e− → γ∗ → M M reaction.
The MODAL12 experiment was run at √s = 91.1 GeV. The detector consisted of a polyhedral array of CR39 NTD foils covering a solid angle of 0.86 × 4π sr around the I5 interaction point at LEP. The integrated luminosity was 60 ± 12 nb−1. After chemical etching, the NTD sheets were analysed in search of penetrating tracks consistent with the passage of a heavily ionizing particle. No candidate event was found; the 95% CL upper limit on the MM production cross section was 7 × 10−35 cm² for monopoles with masses < 45 GeV/c².
The OPAL Collaboration13 performed a search based on the detection of pair-produced MMs at √s = 206.3 GeV with a total integrated luminosity of 62.7 pb−1. The search was primarily based on the measurements of the momentum and energy loss in the tracking chambers of the OPAL detector. Back-to-back tracks with high energy release were searched for in opposite sectors of the Jet Chamber. The 95% CL cross-section upper limit for the production of monopoles with masses 45 GeV < mM < 104 GeV was 0.05 pb (Fig. 2a).
Fig. 2. The 95% CL upper limits on monopole pair-production cross section versus magnetic monopole mass as obtained by the OPAL experiment (a) and CDF experiment (b), respectively. In Fig.2a) the Drell-Yan theoretical curve is also given.
2.1.2. Searches at HERA The H1 collaboration performed an indirect search for monopoles produced in high √ energy e+ p collisions14 at s = 300 GeV. MMs would stop and be trapped in the beam pipe surrounding the H1 interaction point at HERA. The beam pipe was cut into long thin strips which were passed through a superconducting coil coupled to a SQUID; the signature of the presence of a MM would be the induction of a persistent current within a superconducting loop. The aluminium beam pipe had been exposed √ to a luminosity of 62 ± 1pb−1 at s = 300 GeV; during HERA operations it was in a 1.15 T solenoidal magnetic field parallel to the beam pipe. Two models for M M pair production were considered: one assumed spin 0 monopole pair production by the elastic process e+ p → e+ p M M; the second model assumed spin 1/2 monopole pair production by the inelastic process e+ p → e+ X M M (where X is any state). The upper limits on the production cross section derived for these models are shown in Figure 3.
Fig. 3. The cross section upper limits derived from the H1 run of the HERA experiment. Curve a) corresponds to spin 0 MM pair production; curve b) to spin 1/2 MM pair production.
2.1.3. Searches at FNAL

Several searches for MMs were performed at the Tevatron-FNAL p̄p collider. In 2003 the CDF collaboration15 performed a search for magnetic monopoles produced in a 35.7 pb−1 sample of p̄p collisions at √s = 1.96 TeV. MMs would have been detected by the Central Outer Tracker and ToF detectors placed in the 1.4 T magnetic field parallel to the beam direction. MM pair production was excluded (95% CL) for cross sections < 0.2 pb for monopole masses in the range 200 < mM < 700 GeV/c², as shown in Fig. 2b.
The effects of virtual MMs were also looked for, by searching for γγ production via a virtual monopole loop in p̄p collisions at the Tevatron collider. The p̄p → γγ cross section at energies below the monopole production threshold would be enhanced by the strong coupling of virtual monopoles to photons.16
A different indirect search was made looking for monopoles trapped in beam pipes and detector materials from the old D0 and CDF detectors. Several Be, Pb and Al samples were passed through the strong field generated by a superconducting magnet. Trapped monopoles would induce a persistent current in the superconducting coil after complete passage of the sample.17 This technique is independent of the magnetic monopole mass and velocity; it was also used in the search for cosmic MMs in bulk matter (moon rocks, meteorites, schists and terrestrial magnetic materials passed through a superconducting magnet).18

2.1.4. MoEDAL: Monopole Searches at LHC

MoEDAL (Monopole and Exotic particle Detection At the LHC) is a future experiment at the LHC.19 It will search for MMs and other highly ionizing exotic particles in pp collisions at an expected luminosity of 10^32 cm−2 s−1 and also in the heavy-ion running. The MoEDAL detector will be an array of NTD stacks deployed around the (Point-8) intersection region of the LHCb detector, in the VELO cavern, as sketched in Fig. 4a. The array will cover a surface area of ∼ 25 m². Each stack, 25 × 25 cm², will consist of 9 interleaved layers of CR39, Makrofol and Lexan NTDs (Fig. 4b).
Fig. 4. a/ Sketch of the MoEDAL detector as planned to be deployed in the LHCb VELO region. b/ A MoEDAL detector element consisting of 3 sheets of CR39 and 3 sheets of Makrofol interleaved with three sheets of Lexan to make 9 layers of NTDs.
The passage of a heavily ionizing particle in an NTD would cause the formation of a damaged region (latent track) along its trajectory; a subsequent chemical etching would lead to the formation of etch-pit cones on both front and back faces of each sheet. The size and shape of the cones are related to the particle restricted energy loss and angle of incidence. A detailed description of the NTD technique can be found in
Fig. 5. Cross section upper limits on monopole production from past searches at accelerators/colliders. The upper limit expected from the MoEDAL experiment at LHC is also indicated.
Ref. 20 and references therein. The MM signature in MoEDAL would be a sequence of collinear etch-pits consistent with the passage of a particle with constant energy loss through the detector foils of a whole stack. In Fig. 5 the upper limits on cross sections for magnetic monopoles production set by past searches are reported; the expected sensitivity for the LHC-MoEDAL experiment is also shown. 3. Searches for SuperMassive Magnetic Monopoles GUT MMs from the Early Universe may be present today in the cosmic radiation as “relic” particles, with a velocity spectrum in the 4 × 10−5 < β < 0.1 range. Larger velocities could be achieved by IMMs (105 < mM < 1013 GeV ) accelerated in one coherent domain of the galactic magnetic field. Bounds on the flux of cosmic MM were obtained on the basis of astrophysical and cosmological considerations; the most referred one is the so-called “Parker Bound” F< 10−15 cm−2 s−1 sr−1 ;21 it is obtained by requiring that the kinetic energy per unit time that MMs gain from the galactic magnetic field be not larger than the magnetic energy generated in the galaxy by the dynamo effect. The original limit was re–examined to take into account the almost chaotic nature of the galactic magnetic field, with domain lengths of about ` ∼ 1 kpc; the limit becomes mass dependent.21 By applying similar considerations to the survival of an early seed of
the galactic magnetic field, a more stringent bound was obtained, the "Extended Parker Bound" (EPB):22 F < m17 × 10−16 cm−2 s−1 sr−1, with m17 = mM/10^17 GeV/c².
Several searches for GUT MMs were performed above ground and underground using many types of detectors.23–25 The different searches and their results are listed in Table 2.

Table 2. Flux upper limits for g = gD GUT and Intermediate Mass Monopoles from different experiments.

Experiment | Mass Range (GeV/c²) | β range | Flux Upper Limit (cm−2 s−1 sr−1) | Detection Technique
AMANDA II Upgoing26 | > 10^11 | 0.76 − 1 | 8.6 − 0.37 × 10−16 | Water Cherenkov
AMANDA II Downgoing26 | > 10^8 | 0.8 − 1 | 16.3 − 2.8 × 10−16 | Water Cherenkov
AMANDA II (catalysis)27 | > 10^11 | ≃ 10−3 | 5 × 10−17 | Water Cherenkov
Baikal28 | 10^7 − 10^14 | 0.8 − 1 | 1.83 − 0.46 × 10−16 | Water Cherenkov
Baikal (catalysis)29 | 5 × 10^13 | ≃ 10−5 | 6 × 10−17 | Water Cherenkov
MACRO30 | 5 × 10^8 − 5 × 10^13 | > 5 × 10−2 | 3 × 10−16 | Scint. + Stream. + NTDs
MACRO30 | > 5 × 10^13 | > 4 × 10−5 | 1.4 × 10−16 | Scint. + Stream. + NTDs
MACRO (catalysis)31 | 5 × 10^13 | > 4 × 10−5 | 3 − 8 × 10−16 | Streamer tubes
OHYA32 | 5 × 10^7 − 5 × 10^13 | > 5 × 10−2 | 6.4 × 10−16 | Plastic NTDs
OHYA32 | > 5 × 10^13 | > 3 × 10−2 | 3.2 × 10−16 | Plastic NTDs
SLIM20 | 10^5 − 5 × 10^13 | > 3 × 10−2 | 1.3 × 10−15 | Plastic NTDs
SLIM20 | > 5 × 10^13 | > 4 × 10−5 | 0.65 × 10−15 | Plastic NTDs
MICA33 | – | 10−4 − 10−3 | ∼ 10−17 | NTD
INDU Combined9,18 | > 10^5 | – | 2 × 10−14 | Induction
The most stringent experimental flux limit on supermassive MMs was set by the MACRO experiment at the underground Gran Sasso Laboratory.30 Searches with underwater neutrino telescopes are sensitive to relativistic MMs, which would be detected by the large amount of Cherenkov radiation they emit, 8300 × that of muons. Fig. 6 shows the 90% CL flux upper limits versus β for GUT MMs with g = gD as set by the MACRO,30 Ohya,32 Baksan,29 Baikal,28 and AMANDA26 experiments. The Baikal and AMANDA limits are obtained assuming that relativistic GUT MMs would reach the detector from "below", i.e. after crossing the Earth (which is unlikely).
The interaction of the GUT monopole core with a nucleon can lead to nucleon decay (catalysis), e.g. M + p → M + e+ + π0, via the Rubakov–Callan mechanism.34 The most recent searches for MM-induced nucleon decay were made with neutrino telescopes27,29 and by the MACRO experiment.31
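To put the bounds quoted above side by side, the following comparison is added here as an illustration; it uses only the Parker bound, the EPB formula and the MACRO limit already given in the text.

```python
# Compare the Parker bound, the mass-dependent Extended Parker Bound (EPB)
# and the MACRO flux limit quoted in the text, all in cm^-2 s^-1 sr^-1.
PARKER = 1e-15
MACRO_LIMIT = 1.4e-16

def extended_parker_bound(mass_gev):
    """EPB of the text: F < (m_M / 10^17 GeV) x 10^-16 cm^-2 s^-1 sr^-1."""
    return (mass_gev / 1e17) * 1e-16

for mass in (1e16, 1e17, 1e18):
    print(f"m_M = {mass:.0e} GeV: EPB = {extended_parker_bound(mass):.1e}, "
          f"MACRO = {MACRO_LIMIT:.0e}, Parker = {PARKER:.0e}")
# The experimental limit sits well below the original Parker bound and is
# comparable to the EPB for masses around 10^17 GeV.
```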
Fig. 6. The 90% CL upper limits vs β for a flux of cosmic GUT monopoles with magnetic charge g = gD .
MMs with masses mM > 10^5 − 10^6 GeV8 could reach the Earth's surface from above and be detected; lower-mass MMs may be searched for with detectors located at high mountain altitudes, on balloons and on satellites. The SLIM experiment at the Chacaltaya high-altitude laboratory (5290 m a.s.l.)20 searched for downgoing IMMs with a 427 m² array of NTDs exposed to the CR for 4.2 years. SLIM was sensitive to g = 2gD MMs in the whole range 4 × 10−5 < β < 1 and to g = gD MMs for β > 10−3. No candidate event was observed. Flux upper limits versus mass, as set by SLIM and other experiments for two different MM velocities, are plotted in Fig. 7.

4. Nuclearites and Q-balls

Strange Quark Matter (SQM), composed of approximately the same number of up, down and strange quarks, was conjectured to be the ground state of nuclear matter.5 The SQM density would be larger than that of atomic nuclei, and SQM would be stable for all baryon numbers in the range 300 < A < 10^57. Due to the suppression of some s quarks it should have a relatively small positive electric charge,35,36 neutralized by an electron cloud, thus forming a sort of atom. Large lumps of SQM possibly present in the cosmic radiation are called "nuclearites".5 Smaller SQM bags, with baryon number A < 10^6 ÷ 10^7, are usually called "strangelets". Nuclearites could have been produced
Fig. 7. Experimental 90% CL flux upper limits versus mass for MMs with: (a) β ∼ 0.05, (b) β ∼ 0.8 at the detector level, from different experiments.
in the early Universe and be a component of the galactic cold dark matter (typical velocities of ∼10−3 c).5 Strangelets could be produced in very energetic astrophysical processes involving strange star collisions37,38 and supernovae explosions.39 The main energy loss mechanism for galactic nuclearites is that of elastic or quasi-elastic collisions with the ambient atoms and molecules. The energy loss is large; nuclearites should be easily detected by scintillators and NTDs.40 In trans-
parent media some of the energy dissipated could appear as visible light. Several experimental results were obtained as by-products of magnetic monopole searches. In Fig. 8 the most stringent flux upper limits for nuclearites with β = 10−3 are at the level of 1.4 ÷ 3 × 10−16 cm−2 s−1 sr−1 .41 In the figure the galactic dark matter limit is obtained assuming that all dark matter is composed of nuclearites.
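The "galactic dark matter limit" in Fig. 8 can be illustrated with a simple estimate, added here as a sketch: assuming all of the local dark matter density were nuclearites of mass M moving with β ≈ 10−3, the isotropic flux would be F ≈ ρ_DM βc/(4πM). The local density value used below is a standard illustrative number, not one quoted in this paper.

```python
# Rough expected nuclearite flux if nuclearites saturate the local dark matter density.
# rho_DM ~ 0.3 GeV/cm^3 and beta ~ 1e-3 are standard illustrative values (assumptions).
import math

rho_dm = 0.3          # GeV cm^-3
beta_c = 1e-3 * 3e10  # cm s^-1

def nuclearite_flux(mass_gev):
    """Isotropic flux in cm^-2 s^-1 sr^-1 for nuclearites of the given mass."""
    number_density = rho_dm / mass_gev              # cm^-3
    return number_density * beta_c / (4 * math.pi)  # cm^-2 s^-1 sr^-1

for m in (1e14, 1e17, 1e20):
    print(f"M = {m:.0e} GeV: F ~ {nuclearite_flux(m):.1e} cm^-2 s^-1 sr^-1")
# The expected flux falls as 1/M, which is why the dark-matter line in Fig. 8
# decreases with mass and eventually drops below the experimental limits.
```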
Fig. 8. 90% CL flux upper limits vs mass for intermediate and high mass nuclearites with β = 10−3 obtained from various searches with NTDs. A combined flux from the MACRO and SLIM experiments is also shown.
Strangelets are generally assumed to have no associated electrons; their interaction with matter should be similar to that of heavy ions with a different charge to mass ratio Z/A and they would undergo the same acceleration and interaction processes as ordinary cosmic rays. In Fig. 9 the flux upper limits for relativistic strangelets obtained with the SLIM41 detector and experiments onboard stratospheric balloons and in space42–45 are given. The three uppermost horizontal lines in the figure indicate the measured flux assuming that unusual events found in cosmic rays by searches in the past could be due to SQM.33,46–48 The fluxes expected according to different models for SQM propagation in the Galaxy and in atmosphere are indicated by the dotted line and the grey band.49,50 Q-balls are hypothesized coherent states of squarks q˜, sleptons ˜ l and Higgs fields predicted by minimal supersymmetric generalizations of the Standard Model of particle physics.6 They may carry some conserved global baryonic charge Q and
Fig. 9. 90% CL flux upper limits vs A for relativistic strangelets from onboard balloon and space experiments and by the SLIM detector at mountain altitude. The grey band and dotted line are the fluxes expected from different models.49,50
possibly also a lepton number. Q-balls could have been copiously produced in the early Universe and may have survived till now as a dark matter component. They are classified into two groups: (i) neutral Q-balls, generally called SENS (Supersymmetric Electrically Neutral Solitons), which should be massive and may catalyse proton decay, and (ii) charged Q-balls, called SECS (Supersymmetric Electrically Charged Solitons), which might be formed by SENS gaining an integer electric charge from proton or nuclei absorption. SECS with typical galactic velocities β ≃ 10−3 and MQ < 10^13 GeV could reach an underground detector from above, SENS also from below. SENS may be detected through their continuous emission of charged pions (energy loss ∼ 100 GeV g−1 cm²), generally in large neutrino telescopes. SECS may interact in a way not too different from nuclearites and may be detected by scintillators, NTDs and ionization detectors. Fig. 10 shows the flux upper limits versus mass for SENS and for charged Q-balls (with ZQ > 10e) set by various experiments. Not shown in the figure is the result obtained by the DAMA Collaboration;51 their upper limit on the flux of charged Q-balls with β ≃ 10−3 is at the level of ∼ 3 × 10−11 cm−2 s−1 sr−1.
Fig. 10. Flux upper limits versus mass for: a/ Neutral Q-balls and b/ charged Q-balls with ZQ > 10e, obtained by various experiments.
as shown in Fig.5. Future improvements may come from new experiments at the LHC.19 Many searches were performed for GUT poles in the penetrating cosmic radiation. The 90% CL flux limits are at the level of ∼ 1.4 × 10−16 cm−2 s−1 sr−1 for β ≥ 4 × 10−5 . It may be difficult to do much better unless new refined detectors with considerably larger areas are proposed.
Present limits on Intermediate Mass Monopoles with high β are at the level of ∼ 1.3 × 10−15 cm−2 s−1 sr−1 given by experiments at high altitudes. These limits could be improved with much larger detectors, in particular by large volume neutrino telescopes. IceCube expects limits of the order of ∼ 10−17 cm−2 s−1 sr−1 for GUT MMs inducing nucleon catalysis and ∼ 10−18 − 10−19 cm−2 s−1 sr−1 for relativistic poles.52 As a byproduct of GUT MM searches some experiments obtained stringent limits on nuclearites and on Q-balls. Future searches with neutrino telescopes and in space48 should reach sensitivities to nuclearites, strangelets and Q-balls of smaller masses. Acknowledgments Z. S. thanks INFN, Sez. Bologna for providing FAI Grants for foreigners. We acknowledge the collaboration from many colleagues. References 1. P.A.M. Dirac, Proc. R. Soc. London 133, 60 (1931); Phys. Rev. 74, 817 (1948). 2. G.’t Hooft, Nucl. Phys. B29, 276 (1974). 3. A.M. Polyakov, JETP Lett. 20, 194 (1974); N.S. Craigie et al., Theory and Detection of MMs in Gauge Theories, World Scientific, Singapore (1986). 4. G. Lazarides et al., Phys. Rev. Lett. 58, 1707 (1987); T. W. Kephart and Q. Shafi, Phys. Lett. B520, 313 (2001). 5. E. Witten, Phys. Rev. D30, 272 (1984); A. De Rujula and S. Glashow, Nature 31,272 (1984). 6. S. Coleman, Nucl. Phys. B262, 293 (1985); A. Kusenko and A. Shaposhnikov, Phys. Lett. B418, 46 (1998). 7. G.F. Drell et al., Nucl. Phys. B209, 45 (1982). 8. J. Derkaoui et al., Astrop. Phys. 9, 173 (1998); Astrop. Phys. 9, 339 (1999). 9. G. Giacomelli et al. hep-ex/011209; hep-ex/0302011; hep-ex/0211035. 10. S. Eidelman et al. (PDG Collab.), Phys. Lett. B592, 33, 67 (2004) and refs. therein. 11. For the ISR monopole searches see G. Giacomelli and M. Jacop, Phys. Rept. 55, 1 (1979) 12. K. Kinoshita et al., Phy. Rev. D46, R881 (1992). 13. G. Abbiendi et al., Phys. Lett. B663, 37 (2008). 14. A. Aktas et al. (The H1 collaboration), Eur. Phys. J. C41, 133 (2005). 15. A. Abulencia et al., Phys. Rev. Lett. 96, 201801 (2006). 16. I.F. Ginzburg and A. Schiller, Phys. Rev. D60, 075016 (1999). 17. K.A. Milton, Rept.Prog.Phys. 69, 1637 (2006). 18. G. Giacomelli, Riv. Nuovo Cimento 7N12, 1 (1984). 19. J. L. Pinfold et al., Nucl. Instrum. Meth. B208, 489 (2009). 20. S. Balestra et al., Eur. Phys. J. C55, 57 (2008). 21. E.N. Parker, Ap. J. 160, 383 (1970); M.S. Turner et al., Phys. Rev. D26, 1296 (1982). 22. F.C. Adams et al., Phys. Rev. Lett. 70, 2511 (1993). 23. J. Ruzicka and V.P. Zrelov, JINR-1-2-80-850 (1980). 24. G. Giacomelli et al., hep-ex/0005041. 25. D. Bakari et al., hep-ex/0004019.
26. H. Wissing et al., 30th ICRC 4, 799 (2007). arXiv:0711.0353 [astro-ph]. 27. A. Pohl et al., Proc. International Workshop on Exotic Physics with Neutrino Telescopes, 77 (2006). 28. V. Aynutdinov et al., astro-ph/0507713; 30th ICRC, arXiv:0710.3064 [astro-ph]. 29. E.N. Alexeyev et al., 21st ICRC 10, 83 (1990); Yu.F. Novoseltsev et al., Nucl. Phys. B151, 337 (2006). 30. M. Ambrosio et al., MACRO Coll., Eur. Phys. J. C25, 511 (2002); Phys. Lett. B406, 249 (1997); Phys. Rev. Lett. 72, 608 (1994). 31. M. Ambrosio et al., Eur. Phys. J. C26, 163 (2002). 32. S. Orito et al., Phys. Rev. Lett. 66, 1951 (1991). 33. P. B. Price, Phys. Rev. D38, 3813 (1988); D. Ghosh and S. Chatterjea, Europhys. Lett. 12, 25 (1990). 34. V.A. Rubakov, JETP Lett. B219, 644 (1981); G.G. Callan, Phys. Rev. D26, 2058 (1982). 35. H. Heiselberg, Phys. Rev. D48, 1418 (1993). 36. J. Madsen, Phys. Rev. Lett. 85, 4687 (2000). 37. J. Madsen, Phys. Rev. Lett. 61, 2909 (1988). 38. J. L. Friedman and R. R. Caldwell, Phys. Lett. B264, 143 (1991). 39. H. Vucetich and J. E. Horvath, Phys. Rev. D57, 5959 (1998). 40. M. Ambrosio et al., Eur. Phys. J. C13, 453 (2000). 41. S. Cecchini et al., Eur. Phys. J. C57, 525 (2008). 42. P. H. Fowler et al., Astrophys. J. 314, 739 (1987). 43. W. R. Binn et al., Astrophys. J. 347, 997 (1989). 44. E. K. Shirk and P. B. Price, Astrophys. J. 220, 719 (1978). 45. A. J. Westphal et al., Nature 396, 50 (1998); B. A. Weaver et al., Nucl. Instrum. Meth. B145, 409 (1998). 46. T. Saito et al., Phys. Rev. Lett. 65, 2094 (1990). 47. M. Ichimura et al., Il Nuovo Cimento A106, 843 (1993). 48. J. Sandweiss, J. Phys. G30, S51 (2004). 49. J. Madsen, Phys. Rev. D71, 014026 (2005). 50. G. Wilk and Z. Wlodarczyk, Proc. 23th Int. Symp. Multiparticle Dynamics, World Scientific 2003, hep-ph/0210203; M. Rybczy´ nski et al., Nucl. Phys. (Proc. Suppl.) B151, 341 (2006); Il Nuovo Cimento C24, 645 (2001). 51. F. Cappella, R. Cerulli and A. Incicchitti, Eur. Phys. J. direct C14, 1 (2002). 52. D. Hardtke et al., Proc. International Workshop on Exotic Physics with Neutrino Telescopes, 89 (2006).
DAEMON DECAY AND COSMIC INFLATION

EMIL M. PRODANOV
School of Mathematical Sciences, Dublin Institute of Technology, Ireland
E-mail: [email protected], www.maths.dit.ie

Quantum tunneling in Reissner–Nordström geometry is studied and the tunneling rate is determined. A possible scenario for cosmic inflation, followed by reheating phases and subsequent radiation-domination expansion, is proposed.

Keywords: Early Universe, Inflation, Reissner–Nordström
In 1971, Hawking suggested1 that there may be a very large number of gravitationally collapsed charged objects of very low masses, formed as a result of fluctuations in the early Universe. A mass of 10^14 kg of these objects could be accumulated at the centre of a star like the Sun. The masses of these collapsed objects are from 10−8 kg and above and their charges are up to ±30 electron units.1 Tracing the evolution of such objects, we propose a mechanism that accounts for cosmic inflation, takes us into a period of reheating phases and, finally, into the expansion of a radiation-dominated Universe. In a nutshell, the inflation mechanism is based on the accumulative effects of Coulomb repulsion at very short range, initially completely "cocooned" by Reissner–Nordström gravitational effects and subsequently unleashed by quantum tunneling.
Consider the Reissner–Nordström geometry2,3 in Boyer–Lindquist coordinates:4

\[
ds^2 = -\frac{\Delta}{r^2}\,dt^2 + \frac{r^2}{\Delta}\,dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\, d\phi^2 ,
\tag{1}
\]

where ∆ = r² − 2Mr + Q², M is the mass of the centre, and Q is the charge of the centre. We will be interested in the case of a naked singularity only, namely |Q| > M. The radial motion of an ultra-relativistic test particle of mass m and charge q in Reissner–Nordström geometry can be modeled by an effective one-dimensional motion of a particle in non-relativistic mechanics with the following equation of motion5–7 (see also8 for Schwarzschild geometry):

\[
\frac{\dot r^2}{2} + U(r) \;=\; \frac{\epsilon^2 - 1}{2} ,
\tag{2}
\]
where

\[
U(r) \;=\; \frac{1}{2}\frac{Q^2}{r^2}\left(1 - \frac{q^2}{m^2}\right) - \frac{M}{r}\left(1 - \epsilon\,\frac{qQ}{mM}\right) \;\equiv\; -\frac{a}{r^2} + \frac{b}{r}
\tag{3}
\]

is the effective non-relativistic one-dimensional potential per unit mass, E = (ε² − 1)/2 is the specific energy of the effective one-dimensional motion, and ε = kT/m + 1 is the specific energy of the three-dimensional relativistic motion. In equation (3), the constant a = −Q²(1 − q²/m²)/2 is positive in view of the very high charge-to-mass ratio q/m of all charged elementary particles, and the parameter b = −M[1 − ε(qQ)/(mM)] depends on the temperature via ε. Motion is allowed only when the kinetic energy is real. Equation (2) determines the region (r−, r+) within which classical motion is impossible. The turning radii are given by:5–7

\[
r_\pm \;=\; \frac{M\left(\epsilon\,\dfrac{qQ}{mM} - 1\right) \;\pm\; \sqrt{M^2\left(\epsilon\,\dfrac{qQ}{mM} - 1\right)^2 - Q^2\left(\dfrac{q^2}{m^2} - 1\right)\left(\epsilon^2 - 1\right)}}{\epsilon^2 - 1} .
\tag{4}
\]
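A small numerical illustration of Eq. (4) is added below. The parameter values are deliberately artificial (arbitrary geometrized units, chosen only to exhibit a naked singularity with |Q| > M and a particle with q/m ≫ 1); they are not taken from the paper.

```python
# Illustrative evaluation of the two turning radii of Eq. (4) for a like-charged particle.
# All quantities are in arbitrary geometrized units; the values are invented.
import math

def turning_radii(M, Q, m, q, eps):
    """Return (r_minus, r_plus) from E = U(r), with U = -a/r^2 + b/r as in Eq. (3)."""
    a = -Q**2 * (1.0 - q**2 / m**2) / 2.0      # > 0 for q/m >> 1
    b = -M * (1.0 - eps * q * Q / (m * M))     # > 0 for eps*q*Q/(m*M) > 1
    disc = b**2 - 2.0 * a * (eps**2 - 1.0)
    r_minus = (b - math.sqrt(disc)) / (eps**2 - 1.0)
    r_plus  = (b + math.sqrt(disc)) / (eps**2 - 1.0)
    return r_minus, r_plus

# Naked singularity (|Q| > M), particle with very large charge-to-mass ratio:
print(turning_radii(M=1.0, Q=2.0, m=1e-3, q=1.0, eps=5.0))
# Two positive radii are returned: classical motion is forbidden between them.
```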
There is no inner turning radius r− for particles of specific charge q/m such that sign(Q)q/m < 1. For particles such that sign(Q)q/m < −1, there is neither inner turning radius nor outer turning radius.5–7 Such particles will fly unopposed into the centre. A barrier with two turning radii is present only for particles for which sign(Q)q/m ≥ 1. We will consider only such particles. Thus the parameter b will be taken as positive. There is no classical analogue of this effect: a charged centre being able to capture particles of the same charge within the inner turning radius r−, despite the Coulomb repulsion. We make the following assumption: the pre-inflationary Universe is an ideal quantum gas in thermal equilibrium with constant volume densities of the positive and the negative charges. Under a minute fluctuation in the volume density of one type of charges at some point, the domain of all other like charges within radius r− is trapped gravitationally into a cluster. We will call this domain a daemon (for dark electric matter object). There will be no charges of this type between r− and r+, while charges of this type on the outside of r+ would be strongly repelled. As a result, the pre-inflationary Universe nucleates into such domains (daemons). Domains of different charge can, obviously, overlap: an oppositely charged particle, approaching a daemon, will not experience turning radii, will fly into the daemon and freely interact with the particles in it. As our aim is to give a qualitative description, we also assume that m is the typical mass of an elementary particle, while q is its typical charge. It should be noted that when both turning radii are present, they are always real, that is, they are real for any value of ε (or any temperature). The discriminant (4Q²/M²)(1 − q²/m²)(Q²/M² − 1) of the quadratic expression in ε under the root must then be negative and, indeed, it always is, since for all charged elementary particles q/m ≫ 1. Also, an arbitrary accumulation of elementary particles of like charge, trapped by the Reissner–Nordström field, necessarily leads to |Q| > M. A daemon is, therefore, a naked singularity.
We now turn to the study of quantum tunneling of trapped particles through the classically forbidden region between the two turning radii r±.
The Schrödinger equation of one-dimensional motion along the r-axis in potential (3) is ψ′′(r) + (Ar−2 − Br−1)ψ(r) = −(2mE/ℏ²)ψ(r), where A = (2m/ℏ²)a = −mQ²(1 − q²/m²)/ℏ² = const > 0 and B = (2m/ℏ²)b = −2mM[1 − ε(qQ)/(mM)]/ℏ² > 0. We are interested in the continuum of scattering states (E > 0). Using the Wentzel–Kramers–Brillouin (WKB) approximation method (see,9 for example), we will determine the transmission coefficient for tunneling through the classically forbidden region between the two turning radii r±. The picture is very similar to the Gamow theory of alpha-decay (see9 again). The Schrödinger equation can be re-written as ψ′′(r) = −(p²/ℏ²)ψ(r), where p(r) = √(2m[E − U(r)]) is the classical momentum of a particle with energy E moving in potential U(r) (with E > U(r), so that p(r) is real). For tunneling through a potential barrier (namely, across the classically forbidden region between the two turning radii r±), the WKB-approximated wave function is given by:

\[
\psi(r) \;\simeq\; \frac{D}{\sqrt{|p(r)|}}\; e^{\pm\frac{1}{\hbar}\int_{r_-}^{r_+}|p(r)|\,dr} ,
\tag{5}
\]

where D = const and |p(r)| = √(2m[U(r) − E]). The amplitude of the transmitted wave, relative to the amplitude of the incident wave, is diminished by the factor e^{2γ}, where

\[
\gamma \;=\; \frac{1}{\hbar}\int_{r_-}^{r_+}|p(r)|\,dr \;=\; -\frac{\pi}{\hbar}\,\sqrt{m}\,|Q|\,\sqrt{\frac{q^2}{m^2}-1} \;+\; \frac{\pi}{\hbar}\,\sqrt{m}\,M\left(\epsilon\,\frac{qQ}{mM}-1\right)\frac{1}{\sqrt{\epsilon^2-1}} .
\tag{6}
\]
The tunneling probability P is proportional to the Gamow factor e−2γ.9 With the drop of the temperature (that is, when ε starts falling from ∞ towards 1), the inner turning radius r− tends to a finite value, while the outer turning radius r+ tends to infinity. The width of the forbidden classical region, δ = r+ − r−, also tends to infinity in the limit ε → 1. In the very early Universe, at extremely high temperatures (regime ε ≫ 1), the two turning radii are approximated by r± = (qQ ± m|Q|)/(kT) and γ is not temperature-sensitive: γ|ε≫1 = −(π/ℏ)√m |Q| (q²/m² − 1)^{1/2} + (πqQ)/(ℏ√m). As the emitted particles have charge with the same sign as that of the daemon, the absolute value |Q| of the total charge of the daemon diminishes. The mass M of the daemon diminishes as well with each emission, but |Q|/M ≃ const > 1 at all times. Note that we disregard the small effect of re-capture of ejectiles on the rate of decrease of |Q| and M. We next expand the first term on the right-hand side of the last equation up to first order in the small parameter m/q. This gives the probability for tunneling P as proportional to exp[−(π/ℏ) m√m |Q|/|q|] in the very early Universe (regime ε ≫ 1), growing exponentially with the decrease of |Q|. Over (dimensionless) time dt, the charge of the daemon will decrease by an amount d|Q| proportional to −P dt and, therefore, in the very early Universe, |Q(t)| ≃ ln(C − t), where C = const and t is dimensionless time. This gives P(t) ≃ 1/(C − t). In alpha-decay, the daughter nucleus recoils after the emission. In view of the analogies between alpha-decay
and the current case, we make the following assumption. A particle of kinetic energy E inside the daemon tunnels through. Tunneling in itself does not change the energy of the particle (otherwise, it would "resurface" at a point different from the outer turning radius). The recoil energy (needed for conservation of momentum), however, does: the particle's kinetic energy after the emission will be E, diminished by the recoil kinetic energy ER. The relativistically correct relation between the linear momentum p of a particle of rest mass m and its kinetic energy E is given by:10 p² = 2mE + 4E²/c², and the recoil kinetic energy ER is:10 ER = (m/M)E + 2(1 − m²/M²)E²/M, where M is the mass of the daughter nucleus. Let us first disregard the relativistic (quadratic in E) corrections. Then, if E0 is the total kinetic energy of all particles inside the daemon before the first emission and if we denote M/m by n (figuratively, we have n "equivalent" ingredients inside the daemon), then after the first emission the ejectile will have energy E1 = E0/n − ER(1), where ER(1) = E1 m/(M − m) = E1/(n − 1). The energy of the first ejectile is therefore E1 = E0(n − 1)/n². At the same time, as a result of the loss of m/M of the daemon, the total kinetic energy inside the daemon is decreased from E0 to E0 − E0/n = E0(n − 1)/n. The second ejectile will have 1/(n − 1) of this energy, or energy E0/n, prior to leaving the daemon. After tunneling, its energy E2 will be E0/n − ER(2), where ER(2) = E2 m/(M − 2m) = E2/(n − 2). Thus E2 = E0(n − 2)/[n(n − 1)]. The energy inside the daemon is decreased from E0(n − 1)/n to E0(n − 2)/n. The energy of the third ejectile prior to leaving the daemon will be 1/(n − 2) of the inside energy, or E0/n. Thus the energy carried away by the third ejectile will be E3 = E0(n − 3)/[n(n − 2)]. The k-th ejectile will therefore have energy Ek = E0(n − k)/[n(n − k + 1)] = E0(m/M)[1 − 1/(M/m − k + 1)]. We now take a continuum limit and re-write this as E(t) = E0 m/M − E0/[(M/m)(M/m − k(t) + 1)], where k(t) is the number of particles emitted after time t. The charge inside a daemon decreases in time from its initial value Q0 as Q(t) = Q0 − k(t)q. Thus k(t) = Q0/q − (1/q) ln(C − t) ≃ M/m − (1/q) ln(C − t). This gives E(t) ≃ E0(m/M){1 − 1/[1 + (1/q) ln(C − t)]}. The temperature drops at least as the square root of E(t). The outer turning radius r+ (which is inversely proportional to the temperature) has an accelerated increase with time. As a result, the scale factor of the Universe, a(t), which is proportional to r+ and, therefore, inversely proportional to the temperature, grows with time. The second derivative of a(t) is positive. Therefore we have inflation. If the relativistic corrections10 mentioned earlier were included, then the rapid drop in T would be even more pronounced. Note that in regime ε ≫ 1 the width r+ − r− = 2m|Q|/(kT) of the classically forbidden region initially even decreases with time (as the drop of the temperature T is not, initially, as fast as the drop of the charge |Q| of the daemon; tunneling is practically temperature-independent). This is when huge amounts of particles gush out of the daemons. The extremely rapid drop in the temperature that follows leads to an extremely rapid growth of r+, together with that of a(t). The "graceful exit" from inflation occurs when the width r+ − r− = 2m|Q|/(kT) of the barrier grows large enough that quantum tunneling is switched off. This happens before
daemons become fully depleted (bound states inside the daemons should also not be forgotten). In other words, when the temperature T drops sufficiently and the second term in expression (6) for γ, namely (π/ℏ)√m M[ε qQ/(mM) − 1](ε² − 1)−1/2, takes control, a brake is put on the tunneling (the lower limit of this term is (π/ℏ)qQ/√m when ε ≫ 1). As the probability for tunneling is brought down very rapidly towards 0 and particles are no longer ejected by the daemons, the medium outside the daemons is no longer cooled by the tunneling process. Without quantum tunneling, the charges of the daemons remain practically constant. However, the temperature of the outside fraction of the Universe continues to drop after the rapid accelerated expansion, as a different expansion mechanism has naturally taken over. This is the recently proposed Reissner–Nordström expansion mechanism:5,6 with constant charges of the daemons, the Universe continues to cool, T ≃ t−1/2, and expand, a ≃ t1/2. This is the start of the radiation-dominated epoch. It is also characterized as the beginning of a supercooling phase. At the end of the inflation, the daemons are still much hotter than the outside fraction of the Universe. A daemon will now cool not through quantum tunneling, but through interaction with the particles of oppositely charged daemons, which, in turn, interact with the particles outside the original daemon. In view of the low densities, this does not happen as fast as the Universe expands. Eventually, the temperature of the daemons and the temperature of the "free" fraction of the Universe will equalize and, as a result, the Universe will have reheated, but not enough to reignite the inflation (as the daemon temperature now is lower than the one at the end of the inflation and quantum tunneling cannot start). During the reheating, the scale factor a(t) of the Universe does not decrease, as there is no mechanism to draw particles, blown away by the growth of the daemons' outer radii, back towards the daemons: the decrease in the outer turning radius r+ of a daemon simply means that particles of the outer fraction will penetrate deeper and deeper into the repulsive field of the daemons. The Universe then enters into another supercooling phase followed by another reheating. This process is repeated until the daemons cool down to the temperature of the surrounding fraction and cannot re-ignite further reheatings. Then the temperature drop will simply follow T ≃ t−1/2 and the expansion will be at the rate of √t.

References
1. S. Hawking, Mon. Not. R. Astr. Soc. 152, 75–78 (1971).
2. H. Reissner, Ann. Phys. (Germany) 50, 106–120 (1916); G. Nordström, Proc. Kon. Ned. Akad. Wet. 20, 1238–1245 (1918).
3. C.W. Misner, K.S. Thorne and J. Wheeler, Gravitation, W.H. Freeman (1973).
4. R.H. Boyer and R.W. Lindquist, J. Math. Phys. 8 (2), 265 (1967).
5. E.M. Prodanov, R.I. Ivanov, and V.G. Gueorguiev, Astroparticle Physics 27, 150–154 (2007), hep-th/0703005.
6. E.M. Prodanov, R.I. Ivanov, and V.G. Gueorguiev, Journal of High Energy Physics, JHEP 06(2008)060, arXiv: 0608.0076.
7. J.M. Cohen and R. Gautreau, Phys. Rev. D 19 (8), 2273–2279 (1979).
8. R.M. Wald, General Relativity, University of Chicago Press (1984).
November 24, 2010
10:59
WSPC - Proceedings Trim Size: 9.75in x 6.5in
06.04˙Prodanov
437
9. D.J. Griffiths, Introduction to Quantum Mechanics, Pearson Prentice Hall (2005). 10. M. Stanley Livingston and H.A. Bethe, Rev. Mod. Phys. 9(3), 245–390 (1937).
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 11, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
PART VII
Cosmological Parameters, Dark Matter and Dark Energy
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
441
CHROMODYNAMICS VACUUM STRUCTURE AND COSMOLOGY FEDERICO R. URBAN Department of Physics & Astronomy, University of British Columbia, Vancouver, B.C. V6T 1Z1, Canada
[email protected] The infrared sector of QCD contains all the necessary ingredients, once laid onto a time-dependent, curved background, to cater for the much needed cosmological vacuum energy. This is achieved through the fields that describe the impact of the long-range interactions of QCD, the Veneziano ghost and its dipolar partner. Although technically extremely challenging, the physics is well understood and the estimated dark energy density is of the correct order of magnitude. A further tantalising application of this proposal is the ability of generating cosmological magnetic fields via a Standard Model anomalous coupling between the ghost and photons. As a spin-off it is possible to show that the QCD vacuum possesses a Casimir-like energy density if enclosed in a non-trivial compact manifold. Keywords: QCD vacuum, Dark Energy, large scale magnetic fields
1. Generalities The problem of the origin, nature, and properties of the cosmological dark energy is a fascinating yet mysterious one. Our universe accelerates away,3 and leaves particle physicists and theoretical cosmologists alike in search for a good explanation.4 One of the most striking features of the fluid which we call dark energy, and which accounts for some 73% of the total energy density of the universe, is that it defies the simplest and most intuitive estimates in such a substantial way that one is left to wonder where this appalling discrepancy hails from. In this contribution to the proceedings of the Beyond 2010 workshop we will review two proposals1,2 for a Standard Model (SM) based solution of the wearisome cosmological vacuum energy problem, which find their common denominator in the infrared sector of QCD, and in particular make use of the highly peculiar properties of the so-called Veneziano ghost,5 with an interesting side-effect in the generation of cosmological magnetic fields. We will briefly give some background for the physics involved in our work, summarise what the Veneziano ghost (in fact, a dipole) is, and emphasise the characteristics which are of relevance for our proposals; the two upcoming sections report on the two effects of the Veneziano ghost when QCD is quantised in a non-trivial spacetime. Finally, before concluding, I will outline the connection to electromagnetism mentioned above.
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
442
1.1. The Veneziano ghost The Veneziano ghost was introduced by Gabriele Veneziano in 19795 in order to reconcile the two sides of the large-Nc limit anomalous axial U (1) Ward Identity (WI) Z χ ≡ i dxh0|T {Q(x), Q(0)}|0i = mq h¯ q qi + O(m2q ) , (1) where h¯ q qi is the chiral condensate, and
αs a ˜ µνa G G , (2) 8π µν where Kµ is the gauge-variant Chern-Simons current, the divergence of which is Q, the topological charge density. The standard Witten-Veneziano5,6 solution of the U (1)A problem is based on the assumption (confirmed by numerous lattice computations, see e.g. the recent review papers7 and references therein) that the topological susceptibility χ does not vanish despite of the fact that Q is a total derivative. It implies that there is an unphysical pole at zero momentum in the correlation function of Kµ , similar to the Kogut-Susskind (KS) ghost in the Schwinger model.8,9 In fact, the analogies between 2d QED (the Schwinger model) and 4d QCD are much deeper: the effective Lagrangians are exactly the same in the two cases. More precisely, one begins with the usual chiral Lagrangian as proposed first by Di Vecchia and Veneziano in10 (a variation on this Lagrangian was already available in11 ), whose general form reads Nc η′ 1 q (3) L = L0 + ∂µ η ′ ∂ µ η ′ + 2 q 2 − θ − 2 bfη′ fη ′ ′ η + g.f. , + Nf mq |h¯ q qi| cos fη ′ Q ≡ ∂µ K µ ≡
where q is the topological charge density, and all unimportant degrees of freedom (including π, K, η) are assumed to be in L0 , and shall not be mentioned here. In this Lagrangian g.f. means gauge fixing term for three-form Aµνρ , see below, and the coefficient b ∼ m2η′ is fixed by the Witten-Veneziano relation for the topological susceptibility in pure gluodynamics. The topological density is defined as usual, q=
g2 1 ǫµνρσ Gaµν Gaρσ ≡ ǫµνρσ Gµνρσ , 2 64π 4
(4)
with Gµνρσ ≡ ∂µ Aνρσ − ∂σ Aµνρ + ∂ρ Aσµν − ∂ν Aρσµ , i ↔ ↔ g2 h a ↔ a a a a a a b c Aνρσ ≡ A A − A A − A A + 2gC A A A . ∂ ∂ ∂ ρ ν σ abc ν σ ρ σ ν ρ ν ρ σ 96π 2
(5) (6)
The fields Aaµ are the usual Nc2 − 1 gauge potentials for chiral QCD and Cabc the SU (Nc ) structure constants. The constant fη′ ≃ fπ is the η ′ decay constant
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
443
(fπ being that of the pion), while mq is the quark mass, and h¯ q qi is the chiral condensate. The three-form Aνρσ is an abelian totally antisymmetric gauge field which, under colour gauge transformations with parameter Λa δAaµ = ∂µ Λa + igCabc Λb Acµ ,
(7)
Aνρσ → Aνρσ + ∂ν Λρσ − ∂ρ Λνσ − ∂σ Λρν ,
(8)
behaves as Λρσ ∝
Aaρ ∂σ Λa
−
Aaσ ∂ρ Λa .
(9)
In this way the four-form Gµνρσ is gauge invariant. y The term proportional to θ is the usual θ-term of QCD and appears in conjunction with the η ′ field in the correct combination as dictated by the Ward Identities (WI). The constant b is a positive constant which would give the wrong sign for the mass term of the scalar field q, the property which motivated the term “Veneziano ghost”. However, this positive sign for b is what is required to extract the physical mass for the η ′ meson, m2η′ ∼ b, see the original reference10 for a thorough discussion. One should emphasise that the gauge fixing term in (3) has to be not confused with the standard gauge fixing term for the conventional gluon field Aaµ , as it is related to the fixing of the gauge for the three-form Aµνρ describing the Veneziano ghost and carrying no colour index. One can interpret the field Aµνρ as a colourless collective mode which is represented by a specific combination of the original gluon fields, which in the infrared leads to a pole in the unphysical subspace. We know about the existence of this very special degree of freedom and its properties from the resolution of the famous U (1)A problem: integrating out the q field provides the mass for the η ′ meson. One more remark about the coefficient b which enters (3), and which is the principal ingredient in solving the U (1)A problem. Its magnitude is determined by the topological susceptibility in pure gluodynamics (without quarks) as Z if 2 (10) d4 xhT {q(x), q(0)}iθ=0 = π b . 2Nc Of course b = 0 to any order in perturbation theory because q(x) is a total divergence q = ∂µ K µ . However, as we learnt from,5,6 b 6= 0 due to the non-perturbative infrared physics; in fact, m2η′ ∼ b. Now, with little effort we can rewrite this effective Lagrangian in terms of scalars as in (3). Defining Kµ as Kµ ≡ ǫµνρσ Aνρσ ,
q = ∂µ K µ ,
(11)
we can write its longitudinal part (the only one that appears in q) as Kµ ≡ ∂µ Φ ,
(12)
such that the expression for the topological density takes the form q = ∂µ K µ = 2Φ ,
(13)
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
444
where Φ is a new scalar field of mass dimension 2. Now our Lagrangian (3) can be expressed in terms of the Φ field as follows ′ η 1 ′ µ ′ q qi| cos (14) L = L0 + ∂µ η ∂ η + Nf mq |h¯ 2 fη ′ ′ 1 η + Φ22Φ + 2Φ − θ2Φ , 2m2η′ fη2′ fη ′ where we plugged in the coefficient b → 2Nc m2η′ as the Witten-Veneziano relation requires. As usual, the presence of 4-th order operator Φ22Φ is a signal that the ghost is present in the system and may be quite dangerous. However, we know from the original form (3) that the system is unitary, well defined etc, in different words, it does not present any problem associated with the ghost. It is convenient to define a new field φ2 which is a combination of the the original η ′ field and Φ as Φ ′ , (15) η ≡ φ2 + fη ′ which serves to complete the squares in (14) in such a way that one can eliminate R R the term d4 xη ′ 2Φ = − d4 x∂µ Φ∂ µ η ′ . The Lagrangian now takes the form " # 1 φ2 Φ µ L = ∂µ φ2 ∂ φ2 + Nf mq |h¯ q qi| cos + 2 2 fη ′ fη ′ 1 Φ m2η′ 2 + 22 Φ . + 2m2η′ fη2′
(16)
It is now straightforward to repeat the known steps in coping with the higher derivatives term 22 in (16), namely, one recognises that this operator hides an extra degree of freedom, and can be explicitly reduced in terms of the associated propagator as ˜ F = −m2 ′ δ 4 (x) , 22 + m2η′ 2 △ η
˜ F = limρ→0 [△F (mη′ , x) − △F (ρ, x)] , △
(17)
which is the sum of a massive mη′ scalar, and a massless ghost-like scalar. This means that the Φ field corresponds to two degrees of freedom which is almost an obvious statement if one formally writes the inverse operator as follows ! m2η′ 1 1 = − . (18) 22 + m2η′ 2 −2 − m2η′ −2 In analogy with the 2d Kogut and Susskind (KS) model,8 we will call the massive scalar field as φˆ while the massless ghost is φ1 . The final Lagrangian, explicitly
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
445
including all the relevant terms, thus becomes 1 ˆ µˆ 1 1 ∂µ φ∂ φ + ∂µ φ2 ∂ µ φ2 − ∂µ φ1 ∂ µ φ1 2 2 "2 # φˆ + φ2 − φ1 1 2 ˆ2 q qi| cos , − mη′ φ + Nf mq |h¯ 2 fη ′
L=
(19)
where all fields have now canonical dimension one in four dimensions, and which coincides with the Kogut-Susskind effective Lagrangian for the 2d Schwinger model (2,8 ). We claim that the Lagrangian (3) is that part of QCD which describes long distance physics in our context. There are no new fields or coupling constants beyond the standard model. As anticipated, (3) is exactly identical to that of the KS 2d Schwinger model. The Veneziano ghost in QCD is represented by the φ1 field in (3) and it is always accompanied by its companion, the massless field φ2 . This fact turns out to be essential in the quantum model, we can readily obtain by imposing ˆ φ1 , and φ2 which follow canonical equal-time commutation relations for the fields φ, from (3) as h i ˆ x, t) , ∂t φ(~ ˆ y , t) = iδ 3 (~x − ~y) , φ(~ [φ1 (~x, t) , ∂t φ1 (~y , t)] = −iδ 3 (~x − ~y) ,
(20)
3
[φ2 (~x, t) , ∂t φ2 (~y , t)] = iδ (~x − ~y) ,
whence we evince that φ1 is a massless ghost field. The cosine interaction term includes vertices between the ghost and the other two scalar fields, but it can in fact be shown that, once appropriate auxiliary conditions on the physical Hilbert space are imposed (similarly to the Gupta-Bleuler Lorentz-invariant quantisation of QED12 ), the unphysical degrees of freedom φ1 and φ2 drop out of every gauge-invariant matrix element, leaving the theory well defined, i.e., unitary and without negative normed physical states. Specifically, this is achieved by demanding that the positive frequency part of the free massless combination (φ2 − φ1 ) annihilates the physical Hilbert space: (φ2 − φ1 )(+) |Hphys i = 0 .
(21)
Yet, there is one place where the ghost has physical consequences through the Witten-Veneziano formula which relates the mass of the η ′ to the topological susceptibility of the model χ. It is precisely the topological susceptibility which enjoys the uncancelled contribution of the ghost φ1 with its companion φ2 , see1,2,5,8,9 for details. 2. Non-Trivial Topology and QCD Casimir Energy What happens when we embed the system in a non-trivial (compact) topology? With this idea in mind, it must be clear that the any information related to possible deviations from infinite Minkowski spacetime could manifest itself in local
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
446
observables only when there are strictly massless degrees of freedom which can propagate at very large distances L. Now, in QCD, where all fields are massive, the information about the boundaries can only be carried by the very unique Veneziano ghost. The key point we are making here is that the corrections due to the very large but finite size L of the manifold are small, but not exponentially small, exp(−L), as one could anticipate for any QFT where all physical degrees of freedom are massive. To be more concrete, as the WI shows (1) the deviation in the topological susceptibility ∆χ is related to that of the chiral condensate ∆h¯ q qi. The corresponding exact 2d computation indeed demonstrates9 that the magnitude of the chiral condensate on a large torus of size L slightly changes from its infinite Minkowski value as ∆h¯ q qi ∼ h¯ q qi/Lmη′ : the modification to the Minkowski χ is linear in 1/L. This result comes from the ghost’s contribution, which is very sensitive to the specific boundary conditions at very large distances. We can not perform a similar explicit analytical computation in the 4d case. However, the presence of the Veneziano ghost suggests that the scenario would be very similar to what we observed in the Schwinger model.1,2,9 Now we want to link the small correction we have just obtained to extra energy density when quantising on the torus. In order to do so recall that in QCD ∂ 2 ǫvac (θ) |θ=0 . (22) ∂θ2 For estimation purposes, and also because we know it must be so, we choose a manifold of size L ∼ 1/H0 : this leads at once to 2 ∂ ǫvac (θ) H0 ∆ |θ=0 = −∆χ ≃ −c · · |mq h¯ q qi| , (23) ∂θ2 mη ′ where the unknown, order 1, constant c parametrises our ignorance concerning the details of the manifold. The θ-dependent portion of vacuum energy at θ ≪ 1 is well known,10 and for Nf quarks with equal masses is given by ǫvac (θ) = −Nf |mq h¯ q qi| cos(θ/Nf ). Therefore, our relation (23) for Nf = 1 can be written in the form H0 ρΛ ≡ ∆ǫvac = c · · |mq h¯ q qi| ∼ c(3.6 · 10−3 eV)4 , (24) mη ′ χ=−
to be compared with the observational value ρΛ = (2.3 · 10−3 eV)4 . The similarity in magnitude between these two values is very encouraging. It is also important to notice that the non-vanishing result for ρΛ is parametrically proportional to mq , and only occurs if the θ-dependence is non-trivial. In particular, in the chiral limit mq = 0 when all θ-dependence is gone from every physical observable, the effect under consideration (24) also identically vanishes. 2.1. Observing topology in the CMB The most immediate consequence of expression (24) is that if the cosmological constant ρΛ indeed arises from the finiteness of the manifold we live in, than the
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
447
corresponding topological structure on the scale 1/L ≃ H0 can be probed using the last scattering surface (SLS) imprinted in the CMB.13 Therefore, since the dark energy and the topological structure of the universe are intimately linked one another, it is possible to try to measure one of them in order to obtain information on the other one. In particular, we would be looking for a non-trivial topology in the microwave sky, whose typical size is set by its relation to the observed vacuum energy. The only residual information coming from the fact that the manifold we live in could be compact is stored in the free constant c of formula (24). This constant does not specify exactly which manifold we are dealing with, because, as previously mentioned, it is not possible to track analytically the parameters defining it. However, once more referring to the 2d model of,9 this coefficient is expected to be entirely specified by the structure of the manifold, such as its linear sizes, the angles at which the sides are with respect to each other, and the relative twisting of the glued faces. In particular, it arises whenever there is an asymmetry between different linear lengths, in which case the linear correction appears. Hence, the precise structure of the manifold is not fully assessable; nevertheless, we can estimate its linear size by comparing, or normalising, the expected mismatch in vacuum energy eq. (24) to the observed one. This linear size would then refer to the smallest dimension, and would describe an effective T1 -universe. Practically, one can define this size of the manifold as L = (cgrav H0 )−1 , and therefore explicitly obtain an estimate for the linear length of the torus L=
1 ≈ 17H0−1 ≈ 74Gpc . cgrav H0
(25)
Notice that this number, although subject to some variability when the reference numbers are chosen slightly differently, is beyond that which can be probed with the circles in the sky method.14 Moreover, the current CMB mission Planck is most likely going to be able to look for signatures of a small universe of such size, thanks to improved resolution over that of COBE and WMAP available for the analyses.15 A different way of putting the result quoted in (25) is by comparing it to the size of the SLS, which, in a FLRW universe filled with dust and cosmological constant in respective proportions of ΩM = 0.27 and ΩV = 0.73, is written as Z t0 1 1 7 dt 2 Ω0V dSLS = a0 ≃p 0 (26) 2 F1 ( , , , − 0 ) . 6 2 6 ΩM Ω M H0 tSLS a(t) With this definition we find
dSLS ≈ 3.4H0−1 ⇒ L ≈ 5dSLS ,
(27)
which is well beyond the entire SLS given by 2dSLS . It is important to stress once more that thanks to the proposed mechanism that explains the observed value for the vacuum energy we are able, via its link with the possible non-trivial topology of the 3d space, to make a prediction, eq. (25) or (27) on the size of the compact manifold. This number is entirely determined by known
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
448
QCD physics. If the value of the cosmological constant is set by this mechanism, then the size of the manifold is also fixed, rendering the mechanism directly testable and falsifiable through CMB measurements. 3. Expanding Universe and Dynamical Vacuum Energy The second case we intend to analyse is the case of an expanding FLRW universe. Going to curved space one just needs to covariantise the derivative operators in the effective Lagrangian (3) (which in fact, acting on scalars, are going to reduce to the usual partial derivatives). In addition to the Lagrangian density however, we need to specify what is of the auxiliary conditions (21). In Minkowski space, these conditions enforce the disappearance of the Hamiltonian H expectation value for any physical state (which can be directly checked with the canonical Fourier space mode expansion): hHphys |H|Hphys i = 0 .
(28)
Things change however, once the background is, for example, FLRW. It is well known that there are inherent subtleties and obstacles when we attempt to formulate a QFT on a curved space.16 In this case there is not a natural choice for the set of modes that on which the fields are expanded, these sets being closely related to a more or less “natural” coordinate system. Indeed, the Poincar´e group is no longer a symmetry of the spacetime and, in general, it would be not possible to separate positive frequency modes from negative frequency ones in the entire spacetime, in contrast with what happens in Minkowski space where the vector ∂/∂t is a constant a Killing vector. For our specific problem, i.e., the study of the ghost dynamics in a curved background, these considerations imply that there will be no simple formulation of the physical Hilbert subspace, as there is no natural mode decomposition similar to the Minkowski expansion i Xh φ1 (t, ~x) = ak uk (t, ~x) + a†k u∗k (t, ~x) , k
i Xh φ2 (t, ~x) = bk vk (t, ~x) + b†k vk∗ (t, ~x) .
(29)
k
This means that a transition from a complete orthonormal set of modes to different one (the so-called Bogolubov’s transformations) will always mix positive frequency modes (defined with the annihilation operators ak and bk ) with negative frequency ones (associated with the creation operators a†k and b†k ). As a result of this mixture, the vacuum state defined by a particular choice of the annihilation operators will not be “empty” once we switch back to the original basis defined by ak and bk . In other words, in curved space one should generally expect some relevant physical effects due to the ghost modes. In particular, as we shall see, they can give non-zero contribution to the energy (see (30) below).
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
449
Such consequences arising in going from Minkowski to curved space should not be a surprise to anyone who is familiar with the problem of cosmological particle creation in a gravitational background, or the problem of photon emission by a neutral body which is accelerating. Only very few systems of this kind can be studied and solved exactly, see e.g. the reviews.16 The generic picture emerging from these analyses signals a physical production of particles stemming from the interaction with the gravitating background. The spectrum of the produced particles as well as the rate of production have been discussed in literature in great details. The salient outcome for our work turns out to be the fact that the typical magnitude of the Bogolubov’s coefficients is proportional to the rate at which the background is changing (the Hubble parameter H in case of an expanding universe, or the acceleration rate if we are studying photon emission by a neutral body), and to the total extent of this process, e.g., the total amount of expansion. The characteristic frequencies of the modes gravity can excite in this set up are of order of the Hubble parameter ωk ≃ H, whereas higher frequency ones are exponentially suppressed. This last result is easy to understand physically, because one expects the strength of the expansion to be able to excite modes for which ωk . H, but not to possess enough energy to reach the higher end of the spectrum, that is, high k modes are only excited very inefficiently. Now, the number of particles may be a deceiving concept in a general background, and depends on whole prehistory of the spacetime for its interpretation to be sensible:16 it is safer to look at these conclusions by considering the energymomentum tensor because it is free from such kind of uncertainties. We expect to obtain an extra (in comparison with Minkowski space) time-dependent vacuum energy density from the fact that X (1) (2) hHphys |H|Hphys i = ωk (|βkl |2 + |βkl |2 ) , (30) l
where the β coefficients are the so-called Bogolubov coefficients relating different vacua in curved background. According to the previous paragraph then, this contribution would hence be of the form ρΛ ≃ Λ3QCD H0 · f (a(t), H(t)) ,
(31)
where ΛQCD is the scale of confinement (typically around 100 MeV). Unfortunately the precise form of such vacuum energy is not known analytically (we are dealing with strong interaction in a curved background), and several factors may change the form of f in eq. (31), thereby modifying the cosmological evolution of this form of dark energy. Nevertheless, cosmology with this time-dependent vacuum energy can be studied for viability, for example by exploring the consequences of its anomalous coupling to electromagnetism.17 So, the basic result of this section can be formulated as follows: when QCD is coupled to gravity the “would be” unphysical ghost, although it still is not an
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
450
asymptotic degree of freedom, nevertheless contributes to the vacuum energy density in an expanding universe. This time-dependent “ghost condensation” can, for the most part, be regarded as particle emission in an expanding universe, bearing in mind the subtleties reviewed in this section, i.e., no actual particles are being produced. All such effects are proportional to the rate of expansion H, hence, very small, as H/ΛQCD ∼ 10−41 , and are seen to be related to the tiny momentum ωk ≃ k ≃ H available for efficient gravitational interactions, higher frequencies being exponentially suppressed. 4. Dark Energy and Magnetic Fields Among the most interesting applications of the QCD vacuum energy proposal briefly summarised above, is the possibility of generating large-scale magnetic fields with intensity (today) of about 1 µG. A model which ties dark energy and the dynamical generation of cosmological magnetic fields together is very much appealing now, on the one hand because of the possibility of double-testing a single model via its doubled-up signatures, and on the other hand since most ideas proposed in this area are facing major viability and/or theoretical challenges, see.18 Such a connection is possible, and in a definite way, because the very same ghost field reviewed before couples via the triangle anomaly to electromagnetism with a constant which is unambiguously fixed in the SM. Indeed, the “ghost condensate” is not destined to manifest itself only in the vacuum energy of the theory, but, as is the case for pions and η ′ mesons, the ghost dipole in the SM is coupled to the electromagnetic field via the anomalous term φ2 − φ1 α 2 Nc Tr(I3 Qi ) Fµν F˜ µν . (32) L(φ2 −φ1 )γγ = 4π fη ′ Here α is the fine-structure constant, fη′ is the decay constant for the η ′ , Nc is the number of colours, and I3 and the Qi s are the light quarks isotopic spin and electric charges, respectively. Finally, Fµν is the usual electromagnetic field strength (in √ curved space), and F˜µν = ǫµνρσ F µν /2 its dual. We choose ǫµνρσ = ǫµνρσ / −g with M the Minkowski antisymmetric tensor following from ǫ0123 = +1, and g = det gµν M the determinant of the metric tensor. We are not really interested in η ′ physics as the heavy η ′ meson of course is not excited in our universe: for our future discussions we then safely neglect the massive physical η ′ field, and keep only the ghost field φ1 and its companion φ2 along with ~ · B) ~ is the EM field. Our Lagrangian (recall that Fµν F˜ µν ≃ E 1 1 1 L = − Fµν F µν + Dµ φ2 Dµ φ2 − Dµ φ1 Dµ φ1 4 2 2 α φ2 − φ1 ~ ~ φ2 − φ1 2 − Nc Tr(I3 Qi ) E · B + Nf mq |h¯ q qi| cos , π fη ′ fη ′
(33)
where the electric and magnetic fields are the usual Minkowski ones (not rescaled by the scale factor of the universe a(t)), and the covariant derivative Dµ is defined as
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
451
Dµ = ∂µ + Γµ so that, for instance Dµ V ν = ∂µ V ν + Γνµλ V λ . The expression (33) is the exact low energy Lagrangian describing the interaction of the ghost field φ1 and its companion φ2 with electromagnetism in the gravitational background defined by the Γµ . This interaction is negligible in Minkowski space due to the Gupta-Bleuler constraint (21); however, in the expanding universe it automatically leads to the generation of the physical electromagnetic field. What is important is that the typical momentum k EM of the generated EM field will be of the same order of magnitude of a typical momentum of the the ghost k DE , the latter being of order H in an expanding universe; consequently ωkEM ≃ ω DE ≃ H .
(34)
4.1. Feeding the magnetic field Let us explore the consequences of this interaction in our specific circumstances. The ghost field in an expanding universe should be treated as the large correlated classical field which emerges from a non-zero expectation value hHphys |(φ2 − φ1 )|Hphys i = 6 0 as explained in.2 The Fourier expansion of these classical fields φ2 and φ1 is saturated by very low frequencies ωk ≃ H, while higher frequency modes ωk ≫ H are strongly suppressed as a result of the relative suppression of the so-called Bogolubov coefficients.2 For future convenience we introduce the dimensionless coupling constant which appears in our basic expression (32) α (35) β ≡ Nc Tr(I3 Q2i ) . π In the expanding background, the time scale τ which is required for an efficient energy transfer from a source (the ghost dipole) and a recipient (the electromagnetic field) would be of order17 Hτ ≃ 1/β ,
(36)
which suggests that, since in nature the EM interaction is weak (β ≪ 1), then τ ≫ H −1 : this clearly makes little sense, and the appropriate interpretation is that the energy transfer is very inefficient and only a very small fraction β of the available energy can be at most injected into the magnetic field within a Hubble time. With this interpretation in mind we arrive at our final estimate for the magnetic energy that has flowed from the DE field ρEM ≃ β · ρΛ ,
(37)
where ρEM ≃ E 2 + B 2 . What is most important in this expression is that we see how the cosmological evolution of the electromagnetic energy density (which will eventually reduce to B 2 only, as the electric field is screened) tracks that of the vacuum energy density. Hence, whatever the precise form of the latter will turn out to be (notice that the problems we are faced with in this particular model are merely technical), the
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
452
magnetic fields generated this way will inherit it; a reliable and fully consistent calculation of the time dependence of ρΛ may still be missing, but the connection with electromagnetism is well specified, and the bases for phenomenological investigations are laid. Gathering all the pieces together we can finally work out some numerology. If we take (37) and substitute ρEM ≃ B 2 , it straightforwardly follows that during the last Hubble time of life of the universe, an O(H0−1 ) correlated magnetic field is born with intensity r α · ρΛ ∼ 1µG , 1G = 1.95 · 10−2 (eV)2 . (38) B0 ∼ π As we mentioned above, formula (38) should be treated as an order of magnitude estimate due to a number of numerical factors which have been neglected, along with theoretical uncertainties, but the close connection between time-evolving dark energy and magnetic field is left intact. To conclude the section: if DE and EM are coupled via the standard triangle anomaly (32) than one should expect that this would trigger energy transfer between the two, with the DE as a source and EM as target. In this case some simple, order of magnitude estimates suggest that a magnetic field with intensity (38) will be induced on scales of order O(H0−1 ) today. This result is not very sensitive to the specific properties of the nature of DE, as long as it meets basic requirements such as the pseudoscalar structure, its overall scale ρΛ ∼ (10−3 eV)4 , and time and space variations of the order of the Hubble parameter. These conditions are automatically satisfied in our model,2,17 that is, ω ≃ k ≃ H, in such a way that the DE component does not clump in contrast with matter (visible or dark). In17 the effective coupling constant between DE and EM is univocally determined by eq. (32) in which one substitutes hφ2 − φ1 i → H; without any particular adjustment this kind of construction straightforwardly leads to the appropriate order of magnitude for B. 5. Conclusion To conclude, let us simply very schematically recapitulate what we have presented in the main body of the paper. The infrared sector of QCD contains a very special degree of freedom, the Veneziano ghost, which is always paired up with another massless pole. The existence of this duo of fields is well established in QFT, is part of the familiar SM, and has been experimentally verified. Even though the theory seems to be ill-defined to to the existence of the ghost state, we have in fact seen that this is not so, thanks to the constraint placed upon the theory via the additional condition (21). The Veneziano ghost is crucial enough in ordinary Minkowskian 4d QCD, where it leads to the correct η ′ mass, alongside the saturation of the WI, and the correct θdependence. When we change the reference spacetime some more interesting findings arise.
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
453
First of all, when we enclose QCD in a compact manifold, the existence of the ghost dipole returns a Casimir-like piece of vacuum energy (24), which is numerically very close to the observed value, as long as the manifold’s typical size is of order of the Hubble parameter today. This effect is the more unexpected when one realises that in QCD all (physical) degrees of freedom are massive, automatically implying that any Casimir effect be exponentially suppressed: it is the Veneziano ghost that allows for this expectation to be evaded. Secondly, when we consider the theory in an expanding universe, then the ghost dipole is coupled to gravity, and gives rise to yet another contribution to the vacuum energy (31). This piece is naturally of the correct order of magnitude, due to the properties of the Bogolubov coefficients which parametrise the dynamical breaking of the auxiliary conditions (21). This breaking is in turn a consequence of the reduced set of symmetries of a FLRW universe, in which time is not a Killing vector, thereby flawing the usual decomposition in terms of positive and negative frequencies upon which the definition of Fock vacuum is based. Thirdly, the ghost dipole couples anomalously to electromagnetism, and is able to generate µG magnetic field at late times. This is achieved in an expanding universe only, as a spin-off of the resuscitation of the ghost condensate in a timedependent background. The energy stored in the condensate can flow towards electromagnetism and feed magnetic fields correlated on scales of order of the observable universe and beyond, thereby providing an interesting (and unique) mechanism to produce such fields in the late-time cosmology. In all cases, we are not claiming that the ghost field becomes a propagating degree of freedom, or becomes an asymptotic state. The description in terms of the ghost is a convenient way to account for the physics hidden in the non-trivial boundary conditions, related to the existence of the θ-vacua, and ultimately to the possibility of large gauge transformation in the QCD sector. Yet, this physics appear to be bearer of many interesting observational consequences, which may be essential in approaching today’s greatest problem of cosmology, all within the Standard Model of particle physics.
Acknowledgments I would like to thank the organisers of the Workshop for having given me the chance to present my work at this gathering and in this volume of proceedings. One special thought goes to the African Institute of Mathematical Sciences (AIMS) in Muizenberg (Cape Town) (http://www.aims.ac.za/) which I visited the week preceding the Beyond 2010 workshop. I wish to thank Bruce Bassett for making this visit possible, and everyone at AIMS for being great hosts despite all the difficulties of making science in a beautiful and troubled country like South Africa, and I would like to urge everyone in the field to pay visit to this little gemma in the panorama of African science, for these people work twice as hard as they can, and deserve all our support. I also wish to express my sincere appreciation for their
November 24, 2010
11:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.01˙Urban
454
achievements and ongoing efforts, as an encouragement for a much brighter future of success for young scientists at AIMS and all across Africa, and with the hope that the next Beyond meetings, as well as all the other high-profile particle physics and cosmology workshops and conferences around the globe, will be attended by more and more excellent researchers from the African continent. References 1. F. R. Urban and A. R. Zhitnitsky, arXiv:0906.2162 [gr-qc]. 2. F. R. Urban and A. R. Zhitnitsky, arXiv:0909.2684 [astro-ph.CO]. 3. A. G. Riess et al. [Supernova Search Team Collaboration], Astron. J. 116, 1009 (1998) [arXiv:astro-ph/9805201]; S. Perlmutter et al. [Supernova Cosmology Project Collaboration], Astrophys. J. 517, 565 (1999) [arXiv:astro-ph/9812133]; E. Komatsu et al., arXiv:1001.4538 [astro-ph.CO]. 4. E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006) [arXiv:hep-th/0603057]; J. Frieman, M. Turner and D. Huterer, Ann. Rev. Astron. Astrophys. 46, 385 (2008) [arXiv:0803.0982 [astro-ph]]. 5. G. Veneziano, Nucl. Phys. B 159, 213 (1979). 6. E. Witten, Nucl. Phys. B 156, 269 (1979). 7. E. Vicari and H. Panagopoulos, Phys. Rept. 470, 93 (2009) [arXiv:0803.1593 [hep-th]]; G. M. Shore, Lect. Notes Phys. 737, 235 (2008) [arXiv:hep-ph/0701171]. 8. J. B. Kogut and L. Susskind, Phys. Rev. D 11, 3594 (1975). 9. F. R. Urban and A. R. Zhitnitsky, Phys. Rev. D 80, 063001 (2009) [arXiv:0906.2165 [hep-th]]. 10. P. Di Vecchia and G. Veneziano, Nucl. Phys. B 171, 253 (1980). 11. C. Rosenzweig, J. Schechter and C. G. Trahern, Phys. Rev. D 21, 3388 (1980). 12. S. Gupta, Proc. Phys. Soc. A 63, 681, (1950); K. Bleuler, Helv. Phys. Acta 23, 567 (1950). 13. F. R. Urban and A. R. Zhitnitsky, JCAP 0909, 018 (2009) [arXiv:0906.3546 [astroph.CO]]. 14. N. J. Cornish, D. N. Spergel, G. D. Starkman and E. Komatsu, Phys. Rev. Lett. 92, 201302 (2004) [arXiv:astro-ph/0310233]; N. J. Cornish, D. N. Spergel and G. D. Starkman, Class. Quant. Grav. 15, 2657 (1998) [arXiv:astro-ph/9801212]; J. Shapiro Key, N. J. Cornish, D. N. Spergel and G. D. Starkman, Phys. Rev. D 75, 084034 (2007) [arXiv:astro-ph/0604616]. 15. A. de Oliveira-Costa, G. F. Smoot and A. A. Starobinsky, Astrophys. J. 468, 457 (1996) [arXiv:astro-ph/9510109]; A. de Oliveira-Costa, M. Tegmark, M. Zaldarriaga and A. Hamilton, Phys. Rev. D 69, 063516 (2004) [arXiv:astro-ph/0307282]. 16. N. D. Birrell and P. C. W. Davies, Quantum Fields In Curved Space, Cambridge Univ. Pr. , 1982; L. E. Parker and D. J. Toms, Quantum Field Theory In Curved Spacetime, Cambridge Univ. Pr. , 2009. 17. F. R. Urban and A. R. Zhitnitsky, arXiv:0912.3248 [astro-ph.CO]. 18. P. P. Kronberg, Rept. Prog. Phys. 57, 325 (1994). D. Grasso and H. R. Rubinstein, Phys. Rept. 348, 163 (2001) [arXiv:astro-ph/0009061]; M. Giovannini, Int. J. Mod. Phys. D 13, 391 (2004) [arXiv:astro-ph/0312614].
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
455
DETERMINING DARK ENERGY CHRIS CLARKSON Centre for Astrophysics, Cosmology and Gravitation, and, Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch 7701, South Africa
[email protected] I consider some of the issues we face in trying to understand dark energy. Huge fluctuations in the unknown dark energy equation of state can be hidden in distance data, so I argue that model-independent tests which signal if the cosmological constant is wrong are valuable. These can be constructed to remove degeneracies with the cosmological parameters. Gravitational effects can play an important role. Even small inhomogeneity clouds our ability to say something definite about dark energy. I discuss how the averaging problem confuses our potential understanding of dark energy by considering the backreaction from density perturbations to second-order in the concordance model: this effect leads to at least a 10% increase in the dynamical value of the deceleration parameter, and could be significantly higher. Large Hubble-scale inhomogeneity has not been investigated in detail, and could conceivably be the cause of apparent cosmic acceleration. I discuss void models which defy the Copernican principle in our Hubble patch, and describe how we can potentially rule out these models.
1. Introduction The standard LCDM model of cosmology has reached a maturity where we may think of it as a paradigm: more than just a model, it is rather a worldview underlying the theories and methodology of our particular scientific subject. In many respects it is fantastic, with a handful of constant parameters describing the main features of the model, which lies neatly within the 1- or 2-σ errors bars of most data sets. It is tempting to think of it as a science which is nearing completion, with remaining work simply reducing errors on the parameters. However, it requires (at least) 3 pieces of physics we don’t yet understand: inflation of some sort for the initial conditions; dark matter, and dark energy. Depending on who you speak to, the dark energy problem ranges from being not a problem at all, other than the old cosmological constant problem (landscape lovers), to being the greatest mystery/calamity in all of science – ever! (if you’re speaking to someone writing a grant proposal). Whether it turns out to be a storm in a teacup, or a revolution in our understanding of the cosmos, it’s going to be difficult getting a handle on it, and ruling out many of the possibilities may be very hard or even impossible for the foreseeable future. In this article I discuss three different aspects of the dark energy problem, and I describe some subtle observable properties of the FLRW models which can help us see if we’re on the wrong track. I
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
456
consider first dark energy in the exact background FLRW models, then in perturbed FLRW models, and finally discuss non-FLRW models with no dark energy in them at all. • Attempting to reconstruct the dark energy dynamics poses many difficulties which are well known, not least of which are degeneracies with the other parameters of the model. The physical motivation for many models is dubious at best, yet it is clear that we have to keep an open mind about what the dynamics could be. However, there are consistency relations we may use to test specifically for deviations from Λ, which may be used without specifying a model at all. • How does structure in the universe get in the way of our interpretation of a ‘background’ model at all? That is, how do we smooth the matter in the universe to give an FLRW model? This interferes which what we think our background model should be, and hence contaminates our understanding of the dark energy. This effect is surprisingly large – at least 10% in the deceleration parameter. • It’s conceivable that dark energy has nothing to do with new physics at all, and that we’re using the wrong background solutions in the first place because the universe has a large Hubble-scale inhomogeneity. These models aren’t fully developed, but deserve further investigation, even though they nominally break with the Copernican principle. Again, there are observational consistency relations which let us test for and potentially falsify these sorts of deviations from the standard paradigm. 2. Observing Λ – or not Within the FLRW paradigm, all possibilities for dark energy can be characterised, as far as the background dynamics are concerned, by the dark energy equation of state w(z).1 Unfortunately, from a theoretical perspective the array of possibilities is vast and not well understood at all, implying that w(z) could really be pretty much anything. Even Lemaˆıtre-Tolman-Bondi (LTB) void models can be interpreted, to an extent, as an effective dark energy model (see below).2 Our priority in cosmology today must therefore lie in searching for evidence for w(z) 6= −1. The observational challenge lies in trying to find a straightforward, yet meaningful and sufficiently general way to treat w(z). Observations which explore the dynamics of the background allow us, in principle, access to two functions: H(z) and dL (z), and this gives us two independent ways to establish w(z). On top of this, if we are to treat dark energy as a complete unknown, the sound speed also needs to be reconstructed independently. The dark energy equation of state is typically reconstructed using distance measurements as a function of redshift. The luminosity distance may be written as p Z z c(1 + z) 0 H0 √ dL (z) = sin −Ωk dz , (1) H(z 0 ) H0 −Ωk 0
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
457
where H(z) is given by the Friedmann equation, Z H(z)2 = H02 Ωm (1 + z)3 + Ωk (1 + z)2 + ΩDE exp 3 0
z
1 + w(z 0 ) 0 dz 1 + z0
,
(2)
and ΩDE = 1 − Ωm − Ωk . Writing D(z) = (H0 /c)(1 + z)−1 dL (z), we have, using h(z) = H(z)/H0 3 1 Ωk (1 + z)2 + 2(1 + z)hh0 − 3h2 (3) 3 (1 + z)2 [Ωm (1 + z) + Ωk ] − h2 h i 2(1 + z)(1 + Ωk D2 )D00 − (1 + z)2 Ωk D02 + 2(1 + z)Ωk DD0 − 3(1 + Ωk D2 ) D0 , = 3 {(1 + z)2 [Ωk + (1 + z)Ωm ] D02 − (1 + Ωk D2 )} D0
w(z) = −
(4) which, in principle, gives the dark energy EOS from Hubble rate or distance data provided we know Ωm and Ωk . Written in this way we see just how difficult characterising deviations from Λ is: we need second-derivatives of distance data. If we wish to reconstruct w(z) in a meaningful way, we must know distances extremely accurately – it is likely in fact that there are large classes of models we can never rule out using observations of the background model alone. In Fig. 1 we show a caricature of this problem wherein as we go from D(z) → H(z) → w(z) fluctuations grow by a factor of 10 and then 100. (Although the curves here look amusingly deranged, in, e.g., LTB void models, the radial profile can oscillate with no restrictions a priori. None of these can be ruled out just because they look funny! Because the distances at large z match up, the CMB should be reasonably unaffected; however, there might be a large ISW effect depending on how the sound speed is treated.) The reason for this of course is that at each step a derivative is taken; roughly
Fig. 1. Tiny fluctuations in D(z) are amplified drastically when reconstructing w(z). The fluctuations show up more strongly in H(z) which is more sensitive to w as an observable. This is assuming that Ωm and Ωk are known perfectly. In a sense then, these types of w(z) are hidden in the distance data. The thin grey curves for w(z) represent different reconstructions of the grey dotted curve using errors in Ωm = 0.3 ± 0.015 and Ωk = 0.0 ± 0.05 – note that Λ is practically consistent with this.
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
458
speaking this induces a fluctuation of amplitude O(∆z −1 ) where ∆z is the width of the change in D(z), relative to LCDM. In terms of errors, if we know D(z) on a scale ∆z, the error on each derivative picks up a factor of at least ∆z −1 . To further complicate matters, the reconstructed w(z) relies on knowing the parameters Ωm and Ωk accurately. For one of the dark energy models in Fig. 1 we show how errors in Ωm,k propagate into errors on w(z): even given perfect D(z), this uncertainty translates into almost catastrophic uncertainty in w(z) for z & 1. 2.1. Curvature? One of the key problems for reconstructing w(z) lies in measuring the parameters Ωk and Ωm accurately enough. For example, let’s say inflation is right, and use the prior Ωk = 0. What effects would this have on w(z) if the actual value we should be using is non-zero? In Fig. 2 we show how curvature looks like evolving dark
Fig. 2. We can mistake curvature for evolving dark energy. If the underlying model is curved LCDM and we reconstruct w(z) with Ωk = 0 from distances, we get the ‘dark energy’ models, right. Conversely, if w(z) really looks like one of these curves, we could mistake this for curvature by assuming Λ. The accuracy we must know the curvature to avoid problems like this is shown left; note the sweet spot at z ∼ 0.86 where we can reconstruct w(z) accurately from distances irrespective of curvature. From.3
energy if we assume the wrong prior. For closed models in particular, the resulting models look rather like the effective w(z) found in void models, discussed below. If we reconstruct the same curves using H(z) data we find the same sort of behaviour from the dynamical effect of curvature, but the curves don’t converge and cross w = −1 as they do for distance data, which comes from the focussing or defocussing of light by the curvature of space. At low redshift, then, both H(z) and D(z) suffer the same degeneracy with curvature; past the sweet spot, where dark energy may be determined from distances without contamination from curvature, they behave in opposite directions, implying that the two observables are complimentary at high redshift. Looking at it another way, if we parameterise w(z) by a family of constants wi at low z, the degeneracy axes in parameter space will be parallel for low z and will rotate as we increase z. See3,4 for more details.
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
459
3. Small inhomogeneity and backreaction A potentially critical issue arises when we try to specify and interpret a background solution at all. If we write down an FLRW solution, say, then what is the relation of that to our real universe? For example, it is natural to describe CDM (for example) as a dust fluid with zero pressure. If CDM is particulate, then the particles involved are tiny and a dust fluid should be a great description. However, most of the CDM has clustered into galaxies which have frozen out of the cosmic expansion, and the galaxies themselves act as tracers for the expansion, and so are effectively the particles of our cosmic fluid. The real metric we should use if we describe our fluid as CDM particles is fantastically complicated, and would not have a FLRW form unless smoothed in some way. Instead, it is common to think of our fundamental particles in cosmology as galaxies themselves, or clusters of them. Can we describe these accurately as as dust fluid? Perhaps. But the number of galaxies in our Hubble sphere is only of order 1010 , which is well below Avogadro’s number. Looking at a simulation on 100 Mpc scales, Fig. 3, the dark matter doesn’t look like a smooth fluid at all. Does such a small number really allow the smooth approximation the FLRW model requires? Furthermore, each galaxy carries with it Weyl curvature which conveniently ‘averages’ into Ricci curvature – or zero – when the background FLRW model is constructed. How? This is important because when we treat ‘a galaxy’ as a particle with zero pressure, we implicitly pull into that description its gravitational field. That is, the energy density of a box of galaxies is more than the sum of the individual mass densities, simply because gravity gravitates; their own gravitational field must be somehow added to their mass-energy when considered collectively in this way. How do we do this? This is analogous to the case when we average over a gravitational wave and give it an effective energy-momentum tensor, which is then fed back into the field equations. One aspect of this ‘averaging problem’ comes when we try to match the late time universe today, which is full of structure, to the early time universe, which isn’t. At the end of inflation we are left with a universe with curvature, kinf ' 0, and cosmological constant, Λinf , which are fixed for all time (and might be zero), and perturbations which are of tiny amplitude and well outside the Hubble radius; there is no averaging problem at this time, and the idea of background plus perturbations is very natural and simple to define. Fast-forward to today, where structures are nonlinear, are inside the Hubble radius, and many have broken away from the cosmic expansion altogether. We may still apparently describe the universe as FLRW plus perturbations to high accuracy; that is, it is natural and seemingly correct to define a FLRW background, but it is implicitly assumed that this background is the same one that we are left with at the end of inflation, in terms of kinf and Λinf . Mathematically we can follow a model from inflation to today, but when we try to fit our models to observations to describe our local universe we are implicitly smoothing over structure, and this can contaminate what we think our inflationary background
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
460
Fig. 3. Fiddling with the Millennium simulation.7 Describing the universe as ‘smooth’ doesn’t look right on scales of order 100h−1 Mpc, shown here in the black and white boxes (top panel). In the central row, we zoom out from 100h−1 Mpc by a factor of 2 each time (zooming out from the top left corner); only when we get to the last two boxes does it start to look homogeneous-ish, which is ∼ 800h−1 Mpc. (These boxes have the same depth, 15h−1 Mpc, and it’s the volume that really counts, however.) The averaging problem is shown in the bottom row: how do we go from left to right? Does this process give us corrections to the ‘background’, or is it the ‘background’ itself?
FLRW model should be. Indeed, it is not clear that the background smoothed model should actually obey the field equations at all; nor is it trivial to calculate the average null cone, and how this compares with the path taken by light in an averaged model. Within the standard paradigm the averaging problem also becomes a fitting problem; are the background parameters we are fitting with the CMB actually the same as those when fitting SNIa? If there were no dark energy problem, this would be mainly a technical question of interest for internal consistency of our model. With the dark energy problem it becomes paramount for two reasons. One is that it could provide the ‘solution’, in that apparent acceleration results from our inability to model non-linear structure properly. The second is because of the difficulties we have in trying to reconstruct w(z) – as we have seen, even tiny changes to D(z), for example, can result in big changes to w(z), so if our background model is off by just a little in an unknown way, chaos could ensue in our interpretation of things.
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
461
3.1. Smoothing the standard model Take the metric in the Poisson gauge, ds2 = − 1 + 2Φ + Φ(2) dt2 + a2 1 − 2Φ − Ψ(2) δij dxi dxj ,
(5)
which describes a flat FLRW background with cosmological constant plus density perturbations up to second-order. Φ obeys the usual Bardeen equation, and Φ(2) , Ψ(2) may be given as integrals over Φ2 terms. Ideally, we would like to construct a smooth FLRW ‘spacetime’ from an lumpy inhomogeneous spacetime by averaging over structure. This would, in principle, have a metric ds2eff = −dτ 2 + a2D γij dy i dy j ,
(6)
where τ is the cosmic time and aD (τ ) an averaged scale factor, the subscript D indicating that it has been obtained at a certain spatial scale D, which is large enough so that a homogeneity scale has been reached; in this case γij will be a metric of constant curvature. Unfortunately, we don’t know how to construct ds2eff . We don’t know what field equations it would obey, nor do we know how to calculate observational relations; none of these would be as in GR. What we can do, however, is calculate averages of scalars associated with Eq. 5, such as the Hubble rate, and deceleration parameter. There are more complications lurking, however. When we calculate the averaged Hubble rate, we have to decide what to calculate the Hubble rate of, and on which spatial surfaces to perform the average. We could calculate the average of the fluid expansion in the restframe of the fluid. This seems like a natural choice except that, in the case of perturbed FLRW, the gravitational field has a different ‘rest-frame’ from the fluid. This may be characterised by the frame in which the magnetic part of the Weyl tensor vanishes and the electric part becomes a pure potential field, with potential (2) Φ + Φ2 + 41 (Φ(2) na = −N ∂a t, where N 2 = −gtt = + Ψ ). This is the frame (2) a 1 + 2Φ + Φ . The fluid with 4-velocity u drifts through this frame with peculiar velocity v a = (0, v i )/a where vi = 21 ∂i (2v (1) +v (2) ). An observer at rest with respect to the gravitational field will measure the fluid to have expansion θ = (g ab + na nb )∇a ub .
(7)
In the lumpy spacetime, when we consider the length-scale ` associated with θ, we have 1 1 d` 1 ∂` θ = na ∇a ln ` = = . 3 ` dtprop N ` ∂t We have a freedom in our choice of coordinates in the lumpy spacetime to be those which are most appropriate in the smoothed one. Hence, if we demand that t represents the proper time in the smoothed spacetime, τ , we can define a presynchronised, smoothed, Hubble parameter using N θ as8,9 Z 1 1 Jd3 x N θ , (8) HD ≡ hN θiD = 3 3VD D
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
462
where RJ = a3 1 − 3Φ + 23 Φ2 − Ψ(2) is the 3-dimensional volume element and VD = D Jd3 x is the volume of the domain. We may think of this as the average Hubble parameter which preserves the length-scale ` after smoothing, according to the pre-chosen proper time in the smoothed spacetime. We can use this to then define the effective scale factor for the averaged model as the function aD (t) obeying: HD =
∂t aD . aD
(9)
We then define an averaged deceleration parameter as qD (z) = −
1 a ¨D , 2 HD aD
(10)
where a ¨D /aD is given by a generalised Raychaudhuri equation. To calculate HD perturbatively for the metric (5) we may expand the averages h·iD . For any scalar function Υ, the Riemannian average hΥiD can be expanded in R
terms of the Euclidean average over the domain D, defined as hΥi = the background space slices as: h i hΥiD = Υ(0) + hΥ(1) i + hΥ(2) i + 3 hΥ(1) ihΦi − hΥ(1) Φi ,
d3 x Υ , d3 x D
D R
on
(11)
where Υ(0) , Υ(1) and Υ(2) denote respectively the background, first order and second order parts of the scalar function Υ = Υ(0) + Υ(1) + Υ(2) . With these definitions in mind we can now calculate average quantities in the perturbed concordance model. The averaged Hubble rate as defined by equation (8) is given by:9 2 ˙ + hΦ Φi ˙ ˙ − 2(1 + z) Hh∂ 2 Φi + h∂ 2 Φi HD = H − hΦi 2 9H Ωm h i 2(1 + z)2 n 2 2˙ 2HΩ + HhΦ ∂ Φi + hΦ ∂ Φi m 9H 3 Ω2m o ˙ + h∂ k Φ˙ ∂k Φi ˙ +(1 + 3Ωm )H 2 h∂ k Φ ∂k Φi + (2 + 3Ωm )Hh∂ k Φ ∂k Φi i 2(1 + z)2 h 2 2˙ HhΦih∂ Φi + hΦih∂ Φi 3H 2 Ωm 1 ˙ (2) 1 − hΨ i + (1 + z)h∂ 2 υ (2) i. (12) 2 6 Here, H = H(z) is the usual background Hubble rate, in terms of the background redshift. As we can see the averaged Hubble rate is a bit ludicrous. A similar expression for the Raychaudhuri equation takes up most of a page! Given a particular realisation of a universe we can now calculate HD for a given domain size D at a particular location. This is a bit of a pain, and doesn’t really tell us that much. Instead, if our perturbations are Gaussian we can calculate the ensemble averages of the averaged quantities. This would tell us, assuming ergodicity, what a typical patch D would look like. We can also calculate the variance of a given quantity straightforwardly. ˙ − −3hΦihΦi
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
463
Fig. 4. The change to the Hubble rate from backreaction, from second-order scalar perturbations. On the left, we see the backreaction effect growing over time until Λ kicks in. The right shows the change today as we change the domain size; only when the domain is around the equality scale does the backreaction effect become managable.
Most of the terms we are dealing with are scalars schematically of the form ∂ m Φ(x)∂ n Φ(x) where m and n represent the number of derivatives (not indices), such that m + n is even. Then the ensemble average relates all terms involving products of Φ to the primordial power spectrum. For example: Z h∂ m Φ(x)∂ n Φ(x)i = (−1)(m+3n)/2 dk k m+n−1 PΦ (k). (13) For integrals which result from m + n > 4 we have a UV divergence which needs a cutoff; this necessitates a smoothing scale RS . Other terms of the form h· · ·ih· · ·i have an explicit dependence on the length scale RD , specifying the radius of our spherical domain. In Fig. 4 we show the averaged Hubble rate for the concordance model. As a function of the background redshift parameter the backreaction grows during the matter era, and starts to decay as Λ becomes important. The variance is significantly −1 larger than the pure backreaction, implying that in a domain of size kequality we can expect fluctuations in the Hubble rate of order a percent or so. On domains smaller than this the backreaction and variance can become very significant indeed. Turning now to the deceleration parameter we see that the backreaction itself is very significant: q can be increased by 10% or more on domains of order the equality scale. In particular, the smoothing scale determines the overall amplitude, owing to the UV divergence in the integrals over the power spectrum. A much more accurate treatment of the high-k part of PΦ is warranted to get a more accurate estimate; however, choosing the Silk scale seems to be a very conservative choice for the smoothing scale, and this still implies a very important effect. In the middle graph of Fig. 5, we see that as we increase the domain size the
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
464
Fig. 5. The change to the deceleration parameter from backreaction, from second-order scalar perturbations. On the left, we see the backreaction effect growing over time until Λ kicks in, as we saw for H. The variance become small for small z, with a sweet spot at z ≈ 0.2. The middle figure shows the change today as we change the domain size; for large domains the backreaction effect leaves a significant positive offset to the deceleration parameter. On the right we show the UV divergence which we get from the smoothing scale.
variance drops to zero, and the overall backreaction effect does not vanish, even on Hubble scales. This is important because our background model is renormalised by the process of perturbing then smoothing. It is worth comparing these results to Fig. 3, the middle row in particular. To calculate the deceleration parameter accurately we must consider domains of ∼ 600Mpc across (i.e., the third box, roughly), and then somehow subtract off the structure appropriately. This is critical for dark energy reconstruction because we don’t know how to subtract the backreaction parts of our model to get at the underlying background we are left with at the end of inflation. How will this affect our reconstructed w(z)? We don’t yet know. In this volume, Jim Peebles argues that the backreaction effect must be small, and ∼ h∂k Φ∂ k Φi.10 This seems to be correct as far as the Hubble rate goes. But when we calculate the deceleration parameter from the generalised Raychaudhuri equation, this has terms like ah∂ 2 Φ∂ 2 Φi ∝ hδ 2 i which dominate; the ensemble average of this has a UV divergence for a scale-invariant spectrum, and conservatively cutting this off at the Silk scale gives a significant contribution. The origin of this term it to be found in the divergence of the velocity between the rest frames of the gravitational potential and the CDM.
4. Large Inhomogeneity: Voids and the Copernican Principle An odd explanation for the dark energy problem in cosmology is one where the underlying geometry of the universe is significantly inhomogeneous on Hubble scales, and not homogeneous as the standard model assumes. These models are possible because we have direct access only to data on our nullcone and so can’t disentangle temporal evolution in the scale factor from radial variations. Such explanations are considered ungainly compared with standard cosmology because naively they revoke the Copernican principle, placing us at or very near the centre of the universe.
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
465
Perhaps this is just because the models used – Lemaˆitre-Tolman-Bondi (LTB) or Szekeres to date – are very simplistic descriptions of inhomogeneity, and more elaborate inhomogeneous ones will be able to satisfy some version of the Copernican principle (CP) yet satisfy observational constraints on isotropy (e.g., a Swiss-Cheese model or something like that). Alternatively, within the multiverse context, one can imagine a vast universe in which our little patch happens to be rather inhomogeneous. We can even imagine a multiverse in which there are many void-like regions; even if we happened to be near the centre of one with a Hubble-scale inhomogeneity, this may be natural within a larger context, in the same way we discovered that the Milky Way is not particularly special once understood in the context of a plethora of galaxies. With this idea, we needn’t violate the Copernican Principle if we live near the centre of a Hubble scale void; rather, we should just change our perspective.11 Indeed, as argued in,11 it is worth reflecting on the fact that the anthropic ‘explanation’ for the current value of Λ, which relies on a multiverse of some sort for its philosophical underpinning, necessitates the violation of the Copernican principle simply because the vast majority of universe patches are nothing like ours, and not at all suitable for complex life. Instead, we may think of these models as smoothing all observables over the sky, thereby compressing all inhomogeneities into one or two radial degrees of freedom centred about us – and so we needn’t think then as ourselves ‘at the centre of the universe’ in the standard way. In this sense they are a natural first step in developing a fully inhomogeneous description of the universe. Whatever the interpretation, such models are at the toy stage, and have not been developed to any sophistication beyond understanding the background dynamics, and observational relations; in particular, perturbation theory and structure formation is more-or-less unexplored, though this is changing. They should, however, be taken seriously because we don’t yet have an explanation for dark energy in which the late time physics is well understood in any other form. Indeed, one could argue that these models are in fact the most conservative explanation for dark energy, as no new physics needs to be introduced. We model an inhomogeneous void as a spherically symmetric LTB model with metric ds2 = −dt2 +
a2k (t, r) 1 − κ(r)r2
dr2 + a2⊥ (t, r)r2 dΩ2 ,
(14)
where the radial (ak ) and angular (a⊥ ) scale factors are related by ak ≡ (a⊥ r)0 and a prime denotes partial derivative with respect to coordinate distance r. The curvature κ = κ(r) is not constant but is instead a free function. From these two scale factors we define two Hubble rates: H⊥ = H⊥ (t, r) ≡
a˙ ⊥ , a⊥
Hk = Hk (t, r) ≡
a˙ k ak
(15)
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
466
using which, the Friedmann equation takes on its familiar form: 2 H⊥ −2 = Ωm a−3 ⊥ + Ωk a⊥ , 2 H⊥0
(16)
where Ωm (r)+Ωk (r) = 1 and Ωm (r) is a free function, specifying the matter density parameter today. In general, H⊥0 (r) is also free, but removing the decaying mode fixes this in terms of Ωm (r). The fact that these models have one free function implies that we can design models such that they can give any distance modulus we like. For example, if we choose Ωm (r) to reproduce a LCDM D(z) then the LTB model is a void with steep radial profile which is, strictly speaking, non-differentiable at the origin if we want q0 < 0. Much has been made of this non-differentiability, but it’s irrelevant for this sort of cosmological modelling (we don’t expect any model to hold smoothly down to infinitesimal scales!). However, such freedom implies that it’s impossible to tell the difference between an evolving dark energy FLRW model and a void model, using distance data alone. Distance Modulus
Distance Modulus 0.04 0.03 0.02
Flat RCDM #5 #4 #3 #2 #1
Hubble Rate
300
Data Flat RCDM EdS #5 #4 #3 #2 #1
2 250 1
ï0.01 ï0.02
0
ï1 Data Flat RCDM EdS #5 #4 #3 #2 #1
ï0.03
ï2 ï0.04 ï0.05 ï0.06
ï3 0.2
0.4
0.6
0.8
1
1.2
H(z) (km/s/Mpc)
0
µïµempty
µïµRCDM
0.01
0.2
1.4
0.4
0.6
0.8
1
1.2
1.4
1.6
200
150
100
50 0
1.8
0.2
0.4
0.6
0.8
z
z
M(r)
Effective w
(z) from Distance Measurements DE
Reconstructed 1M 1 0.6
10
wDE(z)
5 r (Gpc)
ï0.5
1.4
1.6
1.8
0.4
0 0.5
1
z
1.5
0
FLRW Consistency Relation
0.5
1
z
1.5
FLRW Consistency Relation
0.5
0.06
0 0.04
L(z)
ï1
ï0.5
0.02
ï1
0.15
0 ï1.5
ï1.5
0.1
0
0.6
0.2
ï0.2
Flat RCDM EdS #5 #4 #3 #2 #1
0.25
0.2
0.2 0
0
0 0
eff
0.5
1.2
From H(z) data Flat RCDM #5 #4 #3 #2 #1
0.8
0.4
1k (z)
eff
1M (z)
0.5
C(z)
1
1
z
Reconstructed 1k
−1
0 r (Gpc)
1
0
0.2
0.4
0.6
0.8
1
z
1.2
1.4
1.6
1.8
0
0.5
1
z
1.5
ï0.02 0
0.5
1
z
1.5
Fig. 6. A void with density parameter given in the plots on the bottom left, produces a distance modulus which is very close to LCDM, and fits the constitution SNIa data better; we may also fit it to age data to give Hk (z) (top right). The data does not favour a sharp void at the centre. If we try to represent the void as an effective w(z) in FLRW models we get the curves on the bottom middle. Compare to Fig. 1 – each of those effective w(z)’s would translate into a void profile of one sort or another. On the bottom right, we have plots of the various FLRW consistency relations for these models – deviations from the dotted lines show deviations from LCDM for the left two, and deviations from FLRW for the two on the right. From.12
In Fig. 6 we show some void profiles which fit the constitution supernovae very nicely – marginally better than LCDM in fact. While the data is not yet good
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
467
enough to say too much about the shape of the void profile, we can say that the size is 6 Gpc across, roughly. We also show the effective dark energy equation of state which produces the same distance modulus as these void models but assuming an FLRW model; note that it’s apparently phantom over a reasonable redshift range. Whether one likes these models or not, we are forced to look at them further. How can we tell the difference between dark energy in an FLRW model, and inhomogeneity in a void model if they produce similar D(z)? One can argue that it must be ‘unlikely’ that we are at the centre, and so look for anisotropies in, for example, the CMB. This will probably constrain us to be within 1% of the centre, say, compared to the Hubble scale. Does this rule them out because this is unlikely? Unfortunately not – the Copernican principle would be violated in our Hubble patch, but this would just leave us with a spatial version of the coincidence problem to explain. What we need are ways to rule out these models for a central observer. One way to do this is (presumably) through the matter power spectrum and the CMB; that is, by observing perturbations. Essentially, this will give generalised Bardeen potentials, but which are now all mixed up with gravitational waves and vector modes. For example, in LTB models the generalised Bardeen equation is, for even parity modes with spherical harmonic index ` ≥ 2:13 κ ϕ¨ + 4H⊥ ϕ˙ − 2 2 ϕ = S(χ, ς). (17) a⊥ Here S(χ, ς) is a source term which couples this potential to gravitational waves, χ, and vector modes, ς – these in turn are sourced by ϕ. This represents the fact that the gravitational field is inherently dynamic even at the linear level, and that structure may grow more slowly due to the dissipation of potential energy into gravitational radiation. Mathematically, we have a very complicated set of coupled pdes to solve for each harmonic `. Furthermore, since H⊥ = H⊥ (t, r), a⊥ = a⊥ (t, r) and κ = κ(r), perturbations in each shell about the centre will grow at different rates, and it’s because of this that the perturbations generate gravitational waves and vector modes. These equations have not yet been solved in full generality. Thus, we can expect different structure growth in LTB models, but in what way we don’t yet know. Presumably, we will need to observe the full power spectrum evolving from high redshift until today to definitely decide between FLRW and LTB, because there also exists a degeneracy with the primordial power spectrum. Instead of resorting to perturbations, we can try to test the validity of the FLRW assumption directly using properties of the background solutions. For example, rearranging Eq. 1, we have the curvature parameter today given by3 2
Ωk =
[h(z)D0 (z)] − 1 ≡ Ok (z). [D(z)]2
(18)
On the face of it, this gives a way to measure the curvature parameter today by combining distance data with Hubble rate data, irrespective of the redshift of measurement. In FLRW this will be constant as a function of z, independently of the dark energy model, or theory of gravity. An example of this is given in Fig. 7 using
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
468
current data. Alternatively, we may re-write this as the condition that14 C(z) = 1 + h2 DD00 − D02 + hh0 DD0 ,
(19)
must be zero in any FLRW model at all redshifts, by virtue of Eq. (18). In more general spacetimes this will not be the case. In particular, in LTB models, even for a for a central observer, we have C(z) 6= 0, or Ok (z) 6=const.
Fig. 7. Estimated Ok (z) using two values of H0 , from.15 The points indicated with an arrow show the SN (constitution) data combined with the recent BAO (SDSS-DR7) and CMB (WMAP) data. The other points show the same using age data. The thick vertical lines represent the spread in the constructed point arising from uncertainty in the reconstructed distances; the thin vertical lines represent the errors arising from H(z) and H0 . The faded points are two age data points which are at such high redshift they have no corresponding SNIa, so should not be taken too seriously. See15 for details.
This tells us that in all FLRW models there exists a precise relationship between the Hubble rate and distance measurements as we look down our past null cone. This relationship can be tested experimentally without specifying a model at all, if we reconstruct the functions H(z) and D(z) in a model independent way and independently of each other. This would then provide a model-independent method by which to experimentally verify the Copernican assumption, and so verify the basis of the FLRW models themselves. Considering Ok (z) and C(z) in non-FLRW models reveals how useful these tests might be. Wiltshire discusses C(z) in the timescape cosmology, arguing that it may be used more broadly than just a test of the Copernican principle.16 Although C(z) 6= 0 in virtually all inhomogeneous models, once isotropy about us is established on a given scale, we can use these tests to eliminate the possibility of radial inhomogeneity, thereby testing the Copernican principle. It should be pointed out that there exists a class of LTB models for which we can design both D(z) and Hk (z) to match those of FLRW, and so these would pass the homogeneity test presented here, if it were used with H = Hk . Such models utilise the decaying mode of LTB models to do this.17 (A similar solution with H⊥ = HΛCDM surely exists too.) To really rule this special class out we would need to use H⊥ (z) as an observable in Ok (z) or C(z) which can be measured using the
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
469
time drift of cosmological redshifts, given by, in LTB:18 z(z) ˙ = (1 + z)H0 − H⊥ (z) .
(20)
To really rule out LTB models, then, we need to show that Hk (z) = H⊥ (z) for all z. Hk (z) can be measured using the relative ages of passively evolving galaxies, using dt 1 = . This is nice for this test because it can be used relatively dz (1 + z)Hk (z) model-independently. Other methods for measuring H(z) include the BAO and the perturbative dipole in the distance modulus,19 both of which rely on FLRW perturbations at present. 5. Discussion If the cosmological constant, Λ, is the underlying cause of dark energy then the viewpoint that cosmology is nearly complete is probably true. If Λ is wrong, however, then all bets are off, as ‘dark energy’ could then be all sorts of things, from x-essence to modified gravity to large-scale inhomogeneity. Attempts to justify the value of Λ using landscape arguments along with the multiverse, necessarily combined with the anthropic principle, open two tenable doors: one is that the universe as a whole is colossal or even infinite, and the other is that our Hubble volume is both tiny and incredibly special. If this is indeed the case, then the multiverse breaks with the Copernican principle in a spectacular way: we exist in a highly exceptional corner of the universe, and are not typical observers except in our little patch where the fundamental constants are just so (which might be rather large in terms of Hubble volumes of course, but small in terms of all that there is). If we’re happy to break with a ‘global Copernican principle’ to explain the value of Λ, it is not philosophically unreasonable to break with a ‘local Copernican principle’ instead, in order to – perhaps – preserve a global one. The void models offer the possibility of describing dark energy as radial inhomogeneity in a dynamically predictable model, rather than as an unknown dynamical degree of freedom in a postulated homogeneous model. One can even argue that inflation predicts such a scenario20 ! That they break with the Copernican principle on our Hubble scale is a cause for concern, and even a sophisticated inhomogeneous model which drops the spherical symmetry may well suffer a similar problem. On the scale of the multiverse anything goes, so it’s easy to imagine a universe with inhomogeneous fluctuations on Hubble and super-Hubble scales.11 Of course almost all speculation beyond a few times our Hubble sphere is really ‘fiction science’ at the moment, requiring wild assumptions and extrapolations in all sorts of ways. Nevertheless, such speculation is important, and tests, such as those presented here, which can decide the scale of homogeneity out to the Hubble scale without assuming a priori the local Copernican principle, may play an useful role in discussing such questions. All the different possibilities for dark energy typically involve functional degrees of freedom. From an observational point of view, we are really in the position where
November 26, 2010
18:5
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.02˙Clarkson
470
we have to try to rule out the very simplest models, as we can’t investigate function spaces observationally in a meaningful way. It is useful in my view to try to construct model-independent tests of different models where we can. In particular, Ok (z) is non-constant and C(z) is non-zero if the FLRW models themselves are incorrect, independently of the theory of gravity used, or fluid used to model the dark energy. These are useful precisely because they can be implemented without specifying a background model at all, if the observables D(z) and H(z) are constructed directly from the data. A important issue lies in the backreaction of perturbations which is tied up with the averaging problem. How do we smooth structure to connect to the background model at all? We have seen that backreaction in the concordance model renormalises our background, which gives a change in the deceleration parameter of at least 10%, apparently far higher than would be guessed by hand-waving arguments. Thus, even if dark energy is the cosmological constant, it might be difficult to see it as such until this problem is further quantified and understood. Acknowledgments I would like to thank Kishore Ananda, Bruce Bassett, Tim Clifton, Marina Cˆ ortes, George Ellis, Sean February, Ren´ee Hlozek, Julien Larena, Teresa Lu, Mat Smith, Jean-Philippe Uzan and Caroline Zunckel for collaboration and discussions which went into the work presented here. I would like to thank Sean February for the plots in Fig. 6, and Roy Maartens for comments. This work is supported by the NRF (South Africa). References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20.
See, e.g., E. J. Copeland, M. Sami and S. Tsujikawa, hep-th/0603057 for a review. See, e.g., M. N. C´el´erier arXiv:astro-ph/0702416 (2007) for a review. C. Clarkson, M. Cˆ ortes and B. Bassett, JCAP08(2007)011 R. Hlozek, et al., arXiv:0801.3847. V. Sahni, A. Shafieloo and A. A. Starobinsky, Phys. Rev. D, 78, 103502 (2008) C. Zunckel and C. Clarkson, Phys. Rev. Lett. 101, 181301 (2008) V. Springel etal, Nature 435 629-636 (2005) J. Larena, Phys. Rev. D 79 084006 (2009) C. Clarkson, K. Ananda and J. Larena, arXiv:0907.3377 (2009) P. J. E. Peebles, arXiv:0910.5142 (2009) J.-P. Uzan, In “Dark energy: observational and theoretical approache”, Ed. P. RuizLapuente, (Cambridge University Press, 2010) S. February et al. arXiv:0909.1479 (2009) C. Clarkson, T. Clifton and S. February, JCAP02(2009)023 C. Clarkson, B. Bassett and T. H.-C. Lu, Phys. Rev. Lett. 101 181301 (2008) A. Shafieloo and C. Clarkson, arXiv:0911.4858 (2009) D. Wiltshire arXiv:0909.0749 (2009) M.-N. C´el´erier, K. Bolejko, A. Krasinski and C. Hellaby, arXiv:0906.0905 (2009) J.-P. Uzan, C. Clarkson, G. Ellis, Phys. Rev. Lett. 100, 191303 (2008) C. Bonvin, R. Durrer and M. Kunz, Phys. Rev. Lett., 96 191302 (2006) A. Linde, D. Linde and A. Mezhlumian, Phys. Letts. B 345 203 (1995)
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
471
INTERACTING MAJORANA FERMIONS AND COSMIC ACCELERATION G. J. STEPHENSON JR.∗ and P. M. ALSING1 Department of Physics and AStronomy, University of New Mexico, Albuquerque, NM 87111, USA ∗ E-mail:
[email protected] 1 E-mail:
[email protected]
T. GOLDMAN2 Theoretical Division, Los Alamos National Laboratory Los Alamos, NM 87545, USA 2 E-mail:
[email protected]
B. H. J. MCKELLAR3 School of Physics, University of Melbourne Melbourne, Vic 3010, Australia 3 E-mail:
[email protected]
We consider the possibility that a simple system consisting of one species of Majorana fermion with a vacuum mass, m0 , interacting with a scalar field of mass, mζ , can be the source of repulsion in the Friedmann equation, leading to acceleration of the expansion of the Universe at particular epochs. We assume a very cold system, approximated by a degenerate Fermi gas. We examine numerical results appropriate to parameter ranges that would allow active neutrinos as candidate fermions, and show that this is an unlikely result. We then extend the parameter ranges to include the possibility that the fermion in question is the Lightest Supersymmetric Particle or a much lighter possible sterile neutrino and show that these possibilities survive. Keywords: Dark Energy, Neutral Fermions
1. Introduction The current view of the Universe is that the energy density at the current epoch is about four per cent constituents that we can see with some probe (visible matter), twenty three per cent constituents that behave themselves under gravity but that we cannot see (dark matter) and seventy three per cent constituents that we
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
472
cannot see and cause repulsion, not attraction (dark energy). If one tries to model dark energy as a fluid, according to the Einstein equations that fluid must have a negative pressure. The simplest source that is compatible with Einstein gravity is a cosmological constant which corresponds to an energy density that does not change with the expansion of the Universe. The equation of state relates the energy density (ρE ) and the pressure (P ) of a fluid through a parameter w as P = wρE . For the cosmological constant w = −1. However, we know of systems in Nature which, over some range of parameter space, exhibit negative pressure. These systems, necessarily interacting, are bound and have an equilibrium density. When perturbed to either a higher density or a lower density, the energy of the system increases. In particular, if the density is lowered by forcing an increase in the volume, this increase in internal energy corresponds to negative pressure. Several years ago it was suggested that neutrinos might interact weakly among themselves through the exchange of a very light scalar particle,1,2 with possible consequences for the evolution of the Universe and for the propagation of neutrinos from distant events. Three of the current authors examined this proposition for scalars with astrophysical ranges to explore the possibility of neutrino clustering. 3,4 These more recent observations have led us to revisit the problem, considering much lighter scalars with a range comparable to the Horizon at z ≈ 1. We provided some early comments,5 and then showed how it led to a system with w ≈ −1 in a range of the development of the Universe,6,7 with w → 1/3 at very early times, and w → 0 near the present. As shown below, we find that w is not constant in our model. The theoretical framework to discuss this general problem was worked out many years ago for the discussion of nuclear matter,8 which clearly is a self bound system of interacting fermions, that is, one which resists expansion to infinite dilution, hence exhibiting negative pressure over some range of parameter space. The discussion to follow is not at all confined to neutrinos as the fermion component, but can be applied to any neutral fermion field. Earlier discussions always treated the fermions as Dirac particles, as would be appropriate for a system of nucleons, and allowed for the change to Majorana fermions by halving the number of degrees of freedom. In this paper, we explicitly work with Majorana fermions from the beginning.
2. Majorana Fermions With A Scalar Interaction The effective Lagrangian for a Majorana field Φ (which is Grassmann-valued in this Weyl spinor representation), with vacuum (non-interacting) rest mass m0 , interacting with a real scalar field, ζ, with mass mζ , is:
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
473
1 † µ 1 [Φ σ ∂µ Φ − ∂µ Φ† σ µ Φ] + m0 [ΦT σ 2 Φ + Φ† σ 2 Φ∗ ] 2ı 2 1 2 + ζ(∂ − m2ζ )ζ 2 1 + g[ΦT σ 2 Φ + Φ† σ 2 Φ∗ ]ζ 2
L=
(1)
which gives as the equations of motion
1 ∂ 2 + m2ζ ζ = − g[ΦT σ 2 Φ + Φ† σ 2 Φ∗ ] 2 [iσ µ ∂µ ] Φ = m0 σ 2 Φ∗ + gζσ 2 Φ∗ .
(2) (3)
As usual, we set ~ = c = 1 and σ µ = (1, −~σ ). We have omitted nonlinear scalar selfcouplings here, even though they are required to exist by field theoretic selfconsistency,5 as they may consistently be assumed to be sufficiently weak as to be irrelevant to our concerns here. The parameter m0 is the renormalized vacuum mass that the fermion would have in the absence of other physical fermions. We look for solutions of these equations in infinite matter which are static and translationally invariant. Eq.(2) then gives ζ =−
g 1 T 2 [Φ σ Φ + Φ† σ 2 Φ∗ ], m2ζ 2
(4)
which, when substituted into Eq.(3) gives a value for the effective mass (m∗ ) of the fermion as m∗ = m 0 −
g2 1 T 2 [Φ σ Φ + Φ† σ 2 Φ∗ ]. m2s 2
(5)
These equations are simply the equations of Quantum Hadrodynamics8 appropriate to Majorana fermions with a scalar density operator given as S=
1 T 2 [Φ σ Φ + Φ† σ 2 Φ∗ ]. 2
(6)
These equations are operator equations. We next act with each of these equations on a state |Ωi defined as a filled Fermi sea of fermions, with a number density ρ per fermion spin state, and Fermi momentum kF , related as usual by ρ = kF 3 /(6π 2 ). We need the expectation value of S acting in this state, giving Z 2 m∗ 3 , hΩ|S|Ωi = d k √ (7) ∗2 2 (2π)3 |~k|≤kF m +k
where 2 is the number of spin states which contribute for Majorana fermions. Thus the value of the effective mass is determined from the integral equation Z kF g2 m∗ m∗ = m 0 − 2 2 k 2 dk √ . (8) π mζ 0 m∗2 + k 2
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
474
To discuss the solutions of this equation, we reduce it to dimensionless form, di2 2 0) viding by m0 , and introducing the parameter K0 = g π(m and the variables 2 m2 ζ
y =
m∗ m0 , x
with eF =
=
p
k m 0 , xF
kF m0 .
=
Then Eq.(8) becomes Z xF x2 dx p y = 1 − yK0 y 2 + x2 0 eF + x F yK0 2 eF xF − y ln , = 1− 2 y
(9) (10)
x2F + y 2 .
This choice of scaled variables gives all energies (and momenta) in units of the vacuum fermion mass. For consistency, we define the dimensionless scalar mass as m µ = mζ0 in these same units. One can regard Eq.(9) as a non-linear equation for y as a function of either eF or xF . As a function of eF , y is multiple valued (when a solution exists at all), whereas y is a single valued function of xF . The total energy of the system is a sum of the energy of the fermions, given by Ef = ef m0 2N , and the energy in the scalar field, Eζ = eζ m0 2N , where N is the total number of fermions in each contributing state. These expressions serve to define the per fermion quantities, ef and eζ , as well as the total energy per fermion, < e >= ef + eζ . Also, Eζ = Eζ V , where Eζ = 21 m2ζ ζ 2 is the energy density of the (here uniform) scalar field. We find that ef = = and
3 x3F 3 x3F
Z
xF
x2 dx 0
p
x2 + y 2
x3F eF xF y 2 e F y4 + − ln 4 8 8
K0 3 2 eζ = y 2 x3F =
Z
xF 0
1 3 (1 − y)2 . 2K0 x3F
eF + x F y
x2 dx p x2 + y 2
(11)
!2 (12)
For the fermion system to be bound, the minimum of < e >, as a function of density (or xF ), must be less than 1, the value that obtains in the zero density limit. Having y as a function of xF we can calculate < e > as a function of xF . The results are displayed in Figure 1 for several values of K0 . Increasing K0 moves the curves down and to the left. For all values shown, there is a region of xF where < e > < 1, indicating binding. There are also regions for each curve where < e > decreases as xF increases. This implies that the internal energy decreases as the density of the system increases or,
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
475
Scaled Total Energy and Effective Mass vs xF for K0 = 3.35, 10, 100, 1000, 10000 1.1 1
Total Energy Effective Mass
0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0
0
0.1
0.2
0.3
0.4
Fig. 1.
0.5
xF
0.6
0.7
0.8
0.9
1
y and < e > vs xF .
equivalently, that the internal energy increases as the system expands, signifying negative pressure.
3. Connection to Dark Energy In 1998, studies of type 1a supernovae9,10 demonstrated that the Universe is now in a state of re-acceleration. This observation has profound implications for our understanding of the composition of the Universe. The acceleration and the composition of the Universe are related by the Friedmann Equation 4πG a ¨ =− (ρE + 3P ). (13) a 3 It is conventional and convenient to introduce the parameter w, which expresses the pressure P in terms of the total energy density ρE P = wρE Note that • Positive acceleration requires w < −1/3.
(14)
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
476
• w = −1 is equivalent to Einstein’s cosmological constant Λ. Because of this we have omitted Λ from equation (13). • The most recent data and analysis11 show that w is close to −1. The analysis, which is based on the assumption that w is constant, gives 1 + w = 0.013 ± 0.067 ± 0.11 .
(15)
The data of Reiss et al.12 show that w does not vary rapidly with z, but the errors are large and whether it is flat or rising as z increases is an open question. The variation of w in our model can be made consistent with the present data, as we now show. For the system we are studying here, 1 xF ∂ < e > 3 < e > ∂xF 1 eF K0 x3F − 3(2 − y)(1 − y) = 3 eF K0 x3F + (2 − y)(1 − y)
w=
(16)
This leads to 1+w =
4eF K0 x3F 3[eF K0 x3F + (2 − y)(1 − y)]
(17)
from which it follows that w > −1. The results are shown for several values of the parameter K0 in Figure 2. There are general features to notice. In the zero density limit (xF goes to 0), w goes to 0, as appropriate for cold matter. Although it cannot be seen in the figure, w actually approaches 0 from above. For large density w → + 31 , as appropriate for a relativistic gas. Figure 2 suggests scaling for large enough K0 and this can be shown with the use of analytic expansions. For small xF , w becomes, to a very good approximation, w(ξ) where ξ = K0 x3F
(18)
Empirically, the location of the minimum of w, xF min is given by 1
xF min = (3.83/K0) 3
(19)
and the minimum value of w is w ≈ −1 + 2xF min
(20)
To compare the model acceleration for a given value of K0 with the prediction of ΛCDM, including the ΛCDM estimate of the total matter density and ignoring radiation, we need to set the relation between z and xF . We do this by choosing the value of xF corresponding to z = 0.755, where the acceleration goes through 0.
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
477
The most obvious choice would be to use xF min . However, consider the example of K0 = 109 . Since w remains very close to its minimum for a range of xF , we could choose a larger value of xF or, equivalently, set xF min to correspond to a smaller value of z = zmin . Figure 3 displays this feature, with negligible differences in w out to z ≈ 1.0. The figure also illustrates the fact that this model would predict an eventual return to a deceleration. To convert this into a calculation of the acceleration of the scale parameter, we model the Universe as consisting of cold matter, both dark and visible, with the usual energy density and set the energy density of the system of Majorana fermions and scalar field to equal the value deduced for the cosmological constant of (2.4meV ) 4 at z = 0.755, the value where the acceleration goes through 0. This is shown in Figure 4 for K0 = 109 and different choices of zmin . For comparison, we include the prediction of the model with a cosmological constant and the same assumptions about matter, labeled as ΛCDM. To demonstrate the fact that there are some differences between the curves, in particular differences from ΛCDM, we display the actual zero crossings in a blown up scale in Figure 5. This difference arises from our procedure of using an energy density appropriate for w = −1 at z = 0.755 whereas Figure 3 demonstrates the small deviation from −1 with different choices of zmin .
W vs. log10(xF) for 11 values of K0 0.4 0.3
w = +1/3
0.2 0.1 0 -0.1 -0.2
w
-0.3 -0.4 -0.5 -0.6 -0.7 -0.8 -0.9
12
10 11 10 10 10 9 10 8 10 7 10 6 10 5 10 4 10 3 10 2 10
-1 -5.5
-5
-4.5
-4
-3.5
-3
-2.5
-2
-1.5
-1
log10(xF) Fig. 2.
w vs. log10 (xF ) for 11 values of K0 .
-0.5
0
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
478
w vs. z for w = min at 6 values of z -0.10 w = min at z=
0.2 0.4 0.6 0.8 1.0 zminref 1.2
-0.20 -0.30 -0.40
w
-0.50 -0.60
zminref= 0.755 -0.70 -0.80 -0.90 -1.00 0.0
0.1
0.2
0.3
0.4
Fig. 3.
0.5
0.6
0.7
z
0.8
0.9
1.0
1.1
1.2
1.3
1.4
1.5
w vs z for various choices of zmin .
Note the caption 109 ≤ K0 in Figures 4 and 5. This indicates that we are well into the scaling region and the curves will be the same for any larger values of K0 when plotted against z.
4. Results To a very good approximation, the energy density of the system of fermions and 0.5m4 scalars evaluated at xF min is given by π2 K00 . This allows us to evaluate, for a given K0 , the vacuum mass of the fermion, m0 and the number density ρ at a particular z. We display two examples, m0 = 160 meV, 160 GeV . m0 = 160 meV K0 = 106 ρ = 33 × 103 (cm)−3
(21)
m0 = 160 GeV K0 = 1054 ρ = 33 × 10−9 (cm)−3
(22)
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
479
Scale Acceleration vs. z for w = min at 6 values of z and ΛCDM
2
2
(1/a)d a/dt (2.9461x10
-38 -2
s )
50 9
for 10 ≤ K0
25 0
-25
-50
w=min at z=
0.151 0.302 0.453 0.604 0.755 0.906 ΛCDM
-75 -100
-125 0.0
0.1
0.2
0.3
Fig. 4.
0.4
a ¨ a
0.5
0.6
0.7
z
0.8
0.9
1.0
1.1
1.2
1.3
1.4
1.5
vs z for various choices of zmin .
where the densities hold at z ≈ 0.75. The first example is chosen to consider the possibility that the known active neutrinos, if the masses are in the “degenerate” range, could be a candidate. The number density of light, active neutrinos today is about 110 (cm)−3 for each flavor and spin. That becomes about 185 at z ≈ 0.75. Even including two spins and three flavors, their number density is too small, ruling them out. The second example would be appropriate for the lightest supersymmetric particle (LSP) suggested in many extensions of the Standard Model. Extrapolating back to the epoch of Big Bang Nucleosynthesis (BBN), the system does behave as another species of relativistic fermion, but with a number density so far below the neutrinos that there would be no effect. Therefore this possibility remains viable. From Figure 1 one sees that the increase in internal energy with increasing volume (decreasing xF ) and the rise in the effective mass (y) occur in the same range of xF . We interpret that as an indication that the scale of the scalar interaction has become comparable to the volume scale which, in this case, is the Horizon. If we argue that this corresponds to the epoch when cosmic deceleration begins to shift to acceleration, then the range of the scalar might be comparable to the scale of the Universe when w is near −1. This suggests that, for this example, g2 mζ ≈ 3 × 10−30 meV and that 4π ≈ 3 × 10−34 . Such a value would be difficult to
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
480
Scale Acceleration vs. z for w = min at 6 values of z and ΛCDM
1
s )
9
0
2
2
(1/a)d a/dt (2.9461x10
-38 -2
for 10 ≤ K0
w=min at z=
-1
0.151 0.302 0.453 0.604 0.755 ΛCDM
0.74
0.75
Fig. 5.
Enlargement of zero crossings from figure 4.
z
0.76
rule out by terrestrial experiments. Obviously any neutral Majorana fermion with a mass much larger than 160 meV could be a candidate. An example is a possible sterile neutrino in the keV range.
5. Discussion We have shown that a system of neutral Majorana fermions interacting with an extremely light scalar can lead to aa¨ > 0 for a wide range of vacuum masses, m0 . The equation of state parameter w can, for large m0 and consequently large K0 , remain close to −1 over an extended range of z, allowing this system to closely approximate the prediction of ΛCDM. In this case the smallness of the number density implies no constraint from BBN. This calculation was performed in an adiabatic approximation, and values of w near −1 occur when the system is moving away from its equilibrium density. This raises the possibility of a phase transition in which the system breaks up into small clouds at the equilibrium density, in which case the system will behave like dark matter rather than dark energy. Since the cloud thickness 3 is of order m−1 ζ , this breakup −1 will not occur until an epoch when a mζ . Even without such a phase transition,
November 24, 2010
13:34
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.03˙Stephenson
481
the system eventually goes over to dark matter and ceases to be a source of cosmic acceleration. The test of these ideas as opposed to a cosmological constant is to measure the value of w as a function of z, since this model definitely requires a variation from constancy at both larger and smaller values of z than 0.755.
Acknowledgments We want to thank the organizers for the opportunity to present these results at this very stimulating conference. GJS thanks the staff of the University of Capetown for making attendance in that beautiful spot so pleasant. This work was carried out in part under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboritory under Contract No. DE-AC52-06NA25396, the Australian Research Council, and the Stephenson and McKellar pension funds.
References 1. M. Kawasaki, H. Murayama and T. Yanagida, Mod. Phys. Lett. A 7, 563 (1992). 2. R.A. Malaney, G.D. Starkman and S. Tremaine, Phys. Rev. D51, 324 (1995). 3. G. J. Stephenson Jr., T. Goldman and B. H. J. McKellar, Int. J. Mod. Phys. A13, 2765 (1998). 4. G. J. Stephenson Jr., T. Goldman and B. H. J. McKellar, Mod. Phys. Lett. A 12, 2391 (1997). 5. B. H. J. McKellar, M. Garbutt, T. Goldman and G. J. Stephenson Jr., Mod. Phys. Lett. A19, 1155 (2004). 6. T. Goldman, G. J. Stephenson, Jr., P. M. Alsing and B. H. J. McKellar, arXiv:0905.4308 [hep-ph]. 7. B. H. J. McKellar, T. Goldman, G. J. Stephenson Jr. and P. M. Alsing, AIP Conf. Proc. 1178, 118 (2009). 8. B. D. Serot and J. D. Walecka, Adv. in Nucl. Phys. 16, 1 (J. W. Negele and E. Vogt, eds. Plenum Press, NY 1986). 9. A. G. Riess et al. [Supernova Search Team Collaboration], Astron. J. 116, 1009 (1998). 10. S. Perlmutter et al. [Supernova Cosmology Project Collaboration], Ap. J. 517, 565 (1999). 11. M. Hicken et al., Ap. J. 700, 1097 (2009). 12. A. G. Reiss, et. al., Ap. J. 659. 98 (2007).
November 24, 2010
13:47
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.04˙Leubner
482
NONEXTENSIVITY IN A DARK MAXIMUM ENTROPY LANDSCAPE M. P. LEUBNER∗ Institute for Astro- and Particle Physics, University of Innsbruck, Innsbruck, A-6020, Austria ∗ E-mail:
[email protected] Nonextensive statistics along with network science, an emerging branch of graph theory, are increasingly recognized as potential interdisciplinary frameworks whenever systems are subject to long-range interactions and memory. Such settings are characterized by non-local interactions evolving in a non-Euclidean fractal/multi-fractal space-time making their behavior nonextensive. After summarizing the theoretical foundations from first principles, along with a discussion of entropy bifurcation and duality in nonextensive systems, we focus on selected significant astrophysical consequences. Those include the gravitational equilibria of dark matter (DM) and hot gas in clustered structures, the dark energy(DE) negative pressure landscape governed by the highest degree of mutual correlations and the hierarchy of discrete cosmic structure scales, available upon extremizing the generalized nonextensive link entropy in a homogeneous growing network. Keywords: Nonextensive statistics, network science; DM, DE, structure scale hierarchy.
1. Introduction Long-range interactions, as manifestation of nonextensivity in nature, appear ubiquitously as generic property of physical structures on all scales where strong, weak, gravitational or electromagnetic interactions provide the source of non-local couplings and correlations. Only two limiting situations are elementary accessible: crystal structures of maximum order and minimum entropy can be described by basic geometry whereas a fully thermalized gas of minimum order and maximum entropy is governed by the classical Boltzmann-Gibbs (BG) statistics. Reality in nature emerges somewhere between and requires to deal with nonextensive complexity. Statistical mechanics is fundamentally based on the adoption of a specific entropy functional S serving as shortcut for the vast, detailed information stored in a system and providing the connection to the macroscopic behavior. There is no systematic context available able to determine a generally applicable entropy P function. The entropic form for ergodic systems S = − pi ln(pi ), introduced by Boltzmann-Gibbs and written in a discrete form by Shannon, provides the foundation of standard statistics for extensive systems. Hence only systems visiting with equal probability all allowed states are accessible within the spirit of Boltzmann’s molecular chaos hypothesis where the members of an ensemble are independent and
November 24, 2010
13:47
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.04˙Leubner
483
not governed by any correlations. Since non-ergodicity is the generic case for complex systems a generalized entropy functional is required, able serving as information shortcut for nonextensive systems where correlations due to long-range interactions dominate. A suitable generalization of the classical BG statistics was proposed by Renyi1 and later revived by Tsallis,2 denoted frequently as q-statistics in order to account for the only free parameter q of the theory, the entropic index. Linking the q-entropy to the widely used empirical κ-distribution family the q-formalism was reformulated by the transformation κ = 1/(1 − q), generating conveniently a symmetric interval for the entropic index −∞ ≤ κ ≤ ∞ by Leubner.3–5 In this notation the generalized entropy functional for non-ergodic/nonextensive systems reads2,4
Sκ = κ(
X
1−1/κ
pi
− 1)
(1)
where κ = ∞ represents the extensive limit of statistical independence. pi denotes the probability of the i−th microstate and S is extremized for equiprobability. As illuminating example and consequence of Eq. (1) let us consider the nonextensive entropy of two separated subsystems Sκ (A) and Sκ (B) yielding after mixing Sκ (A + B) = Sκ (A) + Sκ (B) + 1/κSκ (A)Sκ (B) where the entropic index κ quantifies the degree of non-ergodicity in the system. For κ = ∞ the entropy additivity of standard BG statistics is reproduced. In general, the nonextensive, pseudo-additive and κ-weighted term may assume positive or negative values indicating a nonextensive entropy bifurcation. Consequently, nonextensive systems are subject to a dual nature since positive κ-values imply the tendency to less organized states of increased entropy (superextensive), whereas negative κ-values provide states of a higher level of organization and decreased entropy (subextensive), as compared to the BGS state, see e.g.4,6,7 Today, nonextensive statistics appears as highly interdisciplinary branch applied, besides physical research, in chemistry, biology, economics, medicine, social and cognitive science, computer science or information theory. Emerging about 4 decades ago from the analysis of astrophysical plasma energy distributions3–5,8–10 today a large veriety of physical environments are studied in the context of nonextensive entropy generalization. Those include the probability distributions of fluctuations in plasma turbulence,11,12 scale invariant power-law distributions relying on self-organized criticality,13 dark matter and plasma density distributions of selfgravitating systems,6,14 the distribution of peculiar velocities of spiral galaxies or the cosmic microwave background radiation.15 Moreover, nonextensive scalar field models were proposed to understand the dark energy problem16 and also the hierarchy of discrete cosmic structure scales is available from the concept of generalized entropy when combined with recent interdisciplinary developments in network science, see below.
November 24, 2010
13:47
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.04˙Leubner
484
2. Duality and Probability Distributions Both, the duality of stationary states and the duality of heat capacities are fundamentally manifest in nonextensive statistics, allowing to classify thermodynamic states and self-interacting states as distinct domains in the parameter space of the entropic index. Nonextensive thermodynamic equilibria are maximum entropy states, constraint by κ > 0, whereas kinetic equilibria are found from the zeros of the collission integral in the domain κ0 < 0. Both equilibria are related by κ0 = −κ where the limiting BG-state for κ = ∞ corresponds to the self-dual extensive case. 6,7 Moreover, the duality of heat capacities is found from the energy derivative of a heat bath d/dE(1/β) = 1/κ, where 1/β = T , the temperature. Hence, the domain κ > 0 with positive heat capacity must be identified as thermodynamic state and the domain κ < 0, corresponding to negative heat capacity, defines self-interacting systems. For κ = ∞ we are dealing with the ideal heat bath where any loss or gain of energy is possible without change of the temperature.17 Consequently, in non-ergodic environments, subject to finite values of κ, the BG self-dual state for κ = ∞ bifurcates requiring to identify systems with κ > 0 as thermodynamic (plasma) states and systems with κ < 0 as self-interacting states (dark matter). Contrary to thermodynamic systems where the tendency to dis-organization is accompanied by increasing entropy, self-interaction tends to result in structures of a higher level of organization and decreased entropy. After this crucial clarification of nonextensive duality let us compute probability distributions for non-ergodic systems. Extremizing the entropy (1) under conservation of mass and energy in a gravitational potential Ψ yields the energy distribution −κ 1 v 2 /2 − Ψ f ± = B± 1 + κ σ2
(2)
where σ corresponds to the mean energy or variance of the distribution. Hence, the exponential probability function of the Maxwellian gas of an uncorrelated ensemble of particles is replaced by the characteristics of a scale invariant power-law where the sign of κ, indicated by superscripts, governs the corresponding entropy bifurcation. We note that the distribution (2) is derived without introducing any specific form of interactions where B ± refers to the proper normalization.4 For a graphical representation of the two distribution families see Leubner.6,18 Here we note only that both sets of curves (κ > 0, κ < 0) merge for κ = ∞ in one solution, representing the BG extensive limit. 3. Dark Matter and Hot Gas Density Distributions DM and hot gas density profiles, as observed in galaxies and clusters or generated in simulations, are commonly modeled by empirical fitting functions.19,20 The dual character of nonextensive statistics provides for the first time a theory, able to mimic accurately both, DM halo density distributions of self-gravitating particles along
November 24, 2010
13:47
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.04˙Leubner
485
with the hot plasma density profiles, by simply changing the sign of the entropic index in Eq. (2). The density distribution in a gravitational potential is found after integration over all velocities as (3/2−κ) ρ± = ρ0 1 − Ψ/(κσ 2 )
(3)
Combining with Poisson’s equation ∆Ψ = −4πGρ yields a second order nonlinear differential equation, determining the radial density profiles of both components, plasma and DM in clustered structures without need of specifying the source of interactions in the system, for detaisl see Leubner.6 As natural consequence of nonextensive entropy generalization the standard isothermal sphere profile bifurcates into two distribution families controlled by the sign and value of κ. Physically, we regard the DM halo as an ensemble of self-gravitating, weakly interacting particles in dynamical equilibrium21 and the hot gas component as an electromagnetically interacting high temperature plasma in thermodynamic equilibrium. As discussed previously, the duality of equilibria in nonextensive statistics appears in the nonextensive stationary states of thermodynamics subject to finite positive heat capacity and in the kinetic stationary states with negative heat capacity, a typical property of self-gravitating systems, where both are related only via the sign of the coupling parameter κ. Consequently we assign solutions with negative κ-values to the DM component and those with positive κ-values to the hot plasma component, for mathemetical and graphical details see Leubner.6 ±
4. Dark Energy Domain We identify dark energy with the vacuum energy of a self-interacting scalar field whose potential energy generates a cosmological constant.16 In this situation the chaotic behavior of the strongly self-interacting scalar fields is associated with the vacuum fluctuations where the probability distribution is equivalent to the nonextensive distribution f (E) ∼ (1 +
1 βE)−κ κ
(4)
Here E = mΦ2 /2 and β −1 = m, i.e. the thermal energy of the nonextensive gas coincides with the scalar field mass. The nonextensive context provides naturally a dark energy landscape within the interval 1/2 < κ < 3/2 of positive heat capacity and negative pressure pDE < 0. This domain R 2 is found from the second moment of the nonextensive distribution pDE ∼ ρDE v f (v)dv (ρ is the energy density) as κ pDE = ρDE κ − 3/2
(5)
The constraint on the DE equation of state wΛ = pDE /ρDE . −1 restricts the nonextensive parameter to κ & 0.75, consistent with recent observations. The
November 24, 2010
13:47
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.04˙Leubner
486
nonextensive analysis demonstrates that dark energy behaves like an ordinary gas with positive heat capacity but subject to negative pressure and highest degree of correlations. A summary of the nonextensive domains, as determined by the value of κ, is schematically provided in Fig. 1, left panel.
4
2 OM, κ>0, nonextensive
3.5
1.8
2.5
Entropy S(N;κ)
Entropy S(κ)
3
2 1.5 BGS, κ=∞, extensive
1 0.5 0
0
DE
4 6 Entropic Index κ
1.4
1.2
DM, κ<0, nonextensive
2
1.6
1 8
10
0
10
20 Nodes N
30
40
Fig. 1. Left panel: A schematic plot of the entropic domains. 3/2 < κ < ∞ represents thermodynamic states and −∞ < κ < 0 covers self-interacting systems. The dark energy domain is restricted to 1/2 < κ < 3/2. Right panel: A schematic plot of the link entropy for some κ values. The extremum appears as minimum and is independent of κ.
5. Discrete Cosmic Inhomogeneity Scales In the last section we merge the methodology applied in modern network science with the concept of nonextensive entropy generalization providing a new theoretical context for the generation of discrete, hierarchically nested structure scales. Network science is a highly interdisciplinary emerging branch of graph theory with potential applications in discrete mathematics, physics, information and communication theory, sociology, biology or internet analysis.22 The question why we observe in the universe discrete structure scales as elementary particles, stellar systems, globular clusters or galaxies, but nothing between, was originally addressed by Chandrasekar.26 This issue was discussed more recently in a semi-empirical approach by Leubner.23–25 Here we argue that a suitable concept for the entropy of a discrete matter distribution is required to understand the existence of a structure scales hierarchy. In contrast to thermodynamic systems driven to a uniform distribution of less organization and increased entropy, the elements of gravitating systems tend to clump, thus implying a gravitational arrow of time pointing in the direction of growing inhomogeneity. The universe acts as a self-organizing system, evolving spontaneously into increasingly complex higher order structures due to the long-range nature of gravitational interaction. Consider an undirected network G0 (N0 , L0 ) of N0 nodes and L0 links and compute the node and link entropy.27 The entropy per node i in this basic configuration
November 24, 2010
13:47
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.04˙Leubner
487
corresponds to the Hartley information measure28 Si = lnNi /N0 , identical to the BGS logarithmic entropy conjecture. The probability p(i, j) of a given link reads pi,j =
L0 N0 (N0 − 1)/2
(6)
for every couple of nodes (i, j) in the system. Physically we associate the nodes with an ensemble of uniformly distributed, self-gravitating particles where the corresponding information flow of the mutual interactions is provided through the links of G0 . If all nodes are connected uni-directional L0 = N0 (N0 − 1)/2 ' N02 /2 for P N0 >> 1 and p(i, j) = 1. R O O T {G0}
rd
3 level {G3} nd
2
level {G2}
st
1 level {G1}
E L E M E N T S {g0}
Fig. 2. Left panel: A hierarchical tree with three generations indicating how elements at a certain level merge into higher order structures of the same richness. Right panel: The formation of hierarchically nested clusters. Elements Gi of the original system G0 are the building blocks of sub-clusters Gi+1 of statistically equal richness.
Next, let the members of the cluster G0 merge, generating N1 sub-clusters G1 of statistically equal number of constituents n1 , see Fig. 2. The new configuration provides a global network of L1 = N1 (N1 − 1)/2 ' N12 links where each sub-cluster is governed by it’s internal local network of l1 = n1 (n1 − 1)/2 ' n21 links. Hence, we apply the condition that all sub-clusters are governed by their own closed network where the connectivity between members of different sub-clusters is suppressed. This is just what nature demonstrates since the gravitational interaction of e.g. single stars belonging to different galaxies is negligible. Moreover, since nodes correspond physically to interacting particles we are dealing with node conservation such that N0 = N1 n1 , i.e. the number of nodes in the original network G0 equals the number of nodes n1 within each sub-cluster G1 times the total number of sub-clusters N1 . According to Eq. (6) in this configuration the probability for a given link of all N1 internal networks assumes the simple form pi,j = N1 n21 /N02 and of the global external network P (i, j) = N12 /N02 . Hence, after multiplying with N02 the link entropy of the entire nonextensive system is available as Sκ = κ((N1 n21 + N12 )1−1/κ − 1)
(7)
November 24, 2010
13:47
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.04˙Leubner
488
Upon substituting for n1 using N0 = N1 n1 and extremizing the link entropy the new configuration is determined through the internal and external node distribution 2/3 by N1 = N0 /21/3 or, equivalently n1 = (2N0 )1/3 for given nodes N0 of the original network. The network configuration appears independent of κ since it is either fully connected (internal) or disconnected (external) where the entropy assumes actually assumes a minimum.29 This particular network connectivity generates the state of highest degree of organization or inhomogeneity in physical notation. Any change of connectivity enhances the link entropy resulting in a reduced structure diversity, see Fig. 1, right panel. Based on the above considerations it is straightforward to generalize this network for a hierarchically nested network configuration allowing the members of each subcluster Gi to merge forming at some i + 1 level Ni+1 sub-clusters Gi+1 of richness ni+1 , see Fig. 2. In general, the node distribution within the system is defined by the simple recursion formula as 2/3
N1+1 = Ni
/21/3
(8)
Nonextensive statistics is independent of any particular force of interaction wherefore any specific network configuration mimics the consequences of the corresponding connectivity or correlations without reflection to the nature of interactions. Upon introducing physical systems we require additivity in the sense that a systems mass M is defined in terms of the mass m of N building blocks by N m = M . Applying proper subscripts with regard to equation (8) the fundamental scaling law for the mass hierarchy reads mi+1 = (2m2i M )1/3
(9)
Next we introduce a basic principle of statistics determining the mean spatial spread ri of an ensemble of ni particles, distributed within an area ri+1 , as ri = √ ri+1 / ni . Therefore, the hierarchy of structure scales is governed by an invariant, 2 the surface density Σ0 = mi /ri2 = mi+1 /ri+1 = .... = M/R2 = const. Combining with (9) with respect to the proper indices yields ri+1 = (21/2 ri2 R)1/3
(10)
the third fundamental recursion relation defining the radius (interaction scale length) of bound systems. Finally, the mean separation distance between the massive 1/3 objects is trivially obtained as di = 2ri+1 /ni . Applying H0 ' 70 km sec−1 M pc−1 we summarize the resulting structure quantization in Table 1, where at Planck’s scale N0 = GM 2 /(~c) = c5 /(~H02 G) = 7 × 10121 can be calculated. Apparently, the value of N0 corresponds to the current value of the Bekenstein-Hawking entropy of the universe inside a Hubble radius30 where an equivalent number turns out for the ratio of the cosmological constant set at Planck’s scale and it’s present value Λ0 .
November 24, 2010
13:47
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.04˙Leubner
489
Ground state hadrons G1 are predicted with mass m ' 10−25 g and interaction length of r1 ' 10−13 cm, consistent with the quark confinement scale Proceeding to generation G2 , condensed matter, the mass density approaches unity and the mean distance of the constituents G1 is d1 ' 10−8 cm, of the order of Bohr’s radius. Hence, condensed matter with ρ2 ⇒ 1 is bound on scale lengths of atomic dimensions, an impressive prediction of the nonextensive network approach. On intermediate scales of the hierarchy, planetesimals G3 and comets play a key role in modern theories of stellar system evolutions. A protoplanetary disk of solar mass and a radius of 1017 cm, evolving into a stellar system G4 , can contain n3 ' 1013 planetesimals G3 with individual masses of m3 ' 1020 g. This remarkable agreement can be forced in view of the orbits of solar system comets having aphelia of r4 ' 6 × 1017 cm defining the edge of the bound solar system G4 . Finally, it can be verified easily that generations G5 - G8 are reproduced in the universe by representative members of globular star clusters, today favored as building blocks of galaxies, followed by galaxies, galaxy clusters and superclusters. Superclusters G8 are the largest well defined structures, surrounded by low density regions of similar scale, reproduced by the present analysis as well. In further steps, the sequence converges to the universe G∞ as Ni ⇒ 1. Table 1: Scaling properties of fundamental cosmic structures Gi
mi [g]
ri [cm]
di [cm]
G0 G1 G2 G3 G4 G5 G6 G7 G8
3×10 1×10−25 1×102 2×1020 3×1032 3×1040 7×1045 2×1049 6×1051
2×10 4×10−13 1×101 1×1010 2×1016 2×1020 8×1022 5×1024 8×1025
2×10 2×10−8 3×104 3×1012 7×1017 3×1021 6×1023 2×1025 3×1026
7×10 1×1081 9×1053 7×1035 7×1023 6×1015 3×1010 7×106 3×104
5×10 1×1027 1×1018 1×1012 1×108 2×105 4×103 2×102 4×101
Planck scale hadronic matter condensed matter planetesimals stellar systems globular clusters galaxies galaxy clusters superclusters
G∞
2×1056
1×1028
1×1028
1
1
the universe
−66
−33
−26
Ni 121
ni
structure 40
6. Summary and Discussion Complex non-ergodicity is generically embedded in the vast variety of cosmic structures over all scales requiring a nonextensive measure of the underlying entropy functional. The generalized entropy concept provides four particular domains regarding the parameter space of the entropic index where in the limit κ = ∞ the classical BG domain is reproduced. For finite entropic index two additional domains emerge naturally as consequence of nonextensive duality determining thermodynamic equilibria for 3/2 < κ < ∞ and equilibria for self-gravitating systems in the range −∞ < κ < 0 subject to negative heat capacity. The remaining domain of
November 24, 2010
13:47
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.04˙Leubner
490
highest degree of correlations 1/2 < κ < 3/2 provides with positive heat capacity and negative pressure a repulsive dark energy landscape. Finally, the nonextensive generalization of the Hartley information entropy, commonly applied in network science, serves as new measure for the link entropy of a nested hierarchy of networks. Upon extremizing the entropy functional the resulting unique solution is subject to a minimum, thus naturally generating in physical terms a hierarchy of structure scales of highest degree of order. The numerical evaluation impressively provides all observationally known discrete cosmic structure scales in terms of masses and radii, ranging from Planck’s scale to superclusters. Acknowledgments This work was supported by the Austrian Wissenschaftsfonds under P20131-N16. References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30.
A. Renyi, Acta Math. Hungaria 6, 285 (1955). C. Tsallis, J. Stat. Phys. 52, 479 (1988). M. P. Leubner, Astrophys. Space Sci. 282, 573 (2002). M. P. Leubner, Phys. Plasmas 11, 1308 (2004). M. P. Leubner, Astrophys. J. 404, 469 (2004). M. P. Leubner, Astrophys. J. 632, L1 (2005). I. V. Karlin, M. Grmela and A. N. Gorban, Phys. Rev. E 65, 036128 (2002). M. P. Leubner, J. Geophys. Res. 87, 6331 (1982). D. A. Mendis and M. Rosenberg, Ann. Rev. Astron. Astrophys. 32, 419 (1994). M. P. Leubner, Planet. Space Sci. 48, 133 (2000). L. Sorriso-Valvo, V. Carbone and P. Veltri, Geophys. Res. Lett. 26, 1801 (1999). M. P. Leubner and Z. Voros, Astrophys. J. 618, 547 (2005). P. Bak, C. Tang and K. Wiesenfeld, Phys. Rev. A 38, 364 (1988). T. Kronberger, M. P. Leubner and E. van Kampen, Astron. Astrophys., 453, 21 (2006). V. H. Hamity, and D. E. Barraco, Phys. Rev. Lett. 76, 4664 (1996). C. Beck, Physica A 340, 459 (2004). M. P. Almeida, Physica A 300, 424 (2001). M. P. Leubner, Z. V¨ or¨ os, Astrophys. J. 618, 547 (2005). A. Cavaliere and R. Fusco-Femiano, Astron. Astrophys. 49, 137 (1976). J. F. Navarro, C. S. Frenk and S. D. M. White, Astrophys. J. 462, 563 (1996). D. N. Spergel and P. J. Steinhard, Phys. Rev. Lett. 84, 3760 (2000). K. Boerner, S. Sanyal and A. Vespignani, Network Science, Ann. Rev. Inf. Sci. Techn., Cronin, ed., 537 (2007). M. P. Leubner, P., Nucl. Phys. B 80, 9, (2000). M. P. Leubner, Grav. Cosmol. Suppl. 6, 144, (2000). M. P. Leubner, in Dark Matter in Astroparticle and Particle Physics, H. V. KlapdorKleingrothaus and R. D. Viollier, eds., Springer, Heidelberg, 312 (2002). S. Chandrasekar, Nature 139, 757 (1937). G. Bianconi, EPL, 81, 28005 (1008). Q. A. Wang, Chaos, Solitons & Fractals, 12, 1431 (2001). L. Haifeng, K. Zhang, and T. Jiang, IEEE, CSB’04, 142 (2004) J. D. Barrow, Mod. Phys. Lett. A 14, 1067 (1999).
November 24, 2010
14:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.05˙Beckwith
491
DECELERATION PARAMETER Q(Z) IN 4D AND 5D GEOMETRIES, AND IMPLICATIONS OF GRAVITON MASS IN MIMICKING DARK ENERGY IN BOTH GEOMETRIES A. W. BECKWITH∗ American Institute of Beam Energy Propulsion, life member, 71 Lakewood Court, Apt 7, Moriches, NY 11955 USA ∗ E-mail: ab
[email protected] The case for a four-dimensional graviton mass (non zero) influencing reacceleration of the universe in both four and five dimensions is stated, with particular emphasis on the question whether 4D and 5D geometries as given here yield new physical insight as to cosmological evolution. Both cases give equivalent reacceleration one billion years ago, which leads to the question whether other criteria can determine the relative benefits of adding additional dimensions to cosmology models. Keywords: Graviton mass; deceleration parameter; 4D and 5D geometries
1. Introduction A first-principle introduction to the detection of gravitational wave density is the definition given by Maggiore:1
Ωgw
ρgw ≡ ≡ ρc
fZ=∞
d(log f ) · Ωgw (f ) ⇒ h20 Ωgw (f ) ∼ = 3.6 ·
f =0
h n i f 4 f · (1) 1kHz 1037
Here, nf is the frequency-based numerical count of gravitons per unit phase space, which may also depend on the interaction of gravitons with neutrinos in plasma during early-universe nucleation, as modeled by Marklund et al.2 However, it is not clear what sort of mechanism is appropriate for considering macro effects of gravitons. Reacceleration of the universe, as a function of graviton mass, could be such a mechanism. We assume Snyder geometry and use the following inequality for a change in the Heisenberg Uncertainty Principle (HUP) (see Battisti3 ): ∆x > (1/∆p) + ls2 · ∆p ≡ (1/∆p) − α · ∆p
(2)
For brane worlds, α < 0, whereas α > 0 for loop quantum gravity (LQG). Next, we assume that the mass of the graviton is partly due to the stretching alluded to by Fuller and Kishimoto,4 and investigate this for a modification of a joint KaluzaKlein (KK) tower of gravitons, as given by Maartens5 for dark matter (DM). The
November 24, 2010
14:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.05˙Beckwith
492
assumption that the stretching of early relic neutrinos would eventually lead to the KK tower of gravitons - for when α < 0 - can be understood as follows (Beckwith 6 ): n + 10−65 grams (3) L Eq. (3) is the starting point for a KK tower version of the Friedman equation (Maartens5 ): mn (graviton) =
2
a˙ =
κ ˜2 ρ2 Λ · a2 m 2 ρ+ a + + 2 −K 3 2λ 3 a
(4)
Maartens5 also gives a second Friedman equation: 2 κ ˜ ρ2 Λ · a2 m K H˙ 2 = − · [p + ρ] · 1 + + −2 4 + 2 2 λ 3 a a
(5)
For ρ ∼ = −P , we will have red-shift values z between zero and 1.0-1.5 with exact inequality for z between zero and 0.5. We then obtain (Beckwith6 ):
q=−
H˙ 2 2 a ¨a ≡ −1 − 2 = −1 + 2 4 ≈ −1 + (6) 2 a˙ H 2 + δ (z) κ ˜ m a · ρ + ρ2 2λ + 1
Eq. (6) assumes Λ = 0 = K and we also use a ≡ [a0 = 1]/(1 + z). The net effect of presenting how gravitons with a small mass in four dimensions can account for reacceleration of the universe is a substitute for dark energy (DE). This is usually acquired with Λ 6= 0, even if curvature κ is set equal to zero. The deceleration parameter q in Eq. (6) has higher-dimension contributions in the brane theory case, but not in the LQG case. 2. Consequences of Small Graviton Mass for Reacceleration of the Universe
In a revision of the work of Alves et al.,7 Beckwith6 used a higher-dimensional model of the brane world and the KK graviton towers of Maartens.5 The density of the brane world Alves et al.7 used in the Friedman equation is then applied for a non-zero graviton (Beckwith6 ): a 3 0
ρ ≡ ρ0 ·
a
4 mg c 6 a 2a2 1 − · + − 8πG~2 14 5 2
(7)
Eq. (6) creates a joint DM and DE model, with all of Eq. (6) being for KK gravitons and DM, and 10−65 grams being a four-dimensional DE. Eq. (5) is part of a KK graviton presentation of DM/DE dynamics. Beckwith6 found that one billion years ago, at z ∼ 0.423, the acceleration of the universe did not slow down but increased instead (shown in Fig. 1).
November 24, 2010
14:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.05˙Beckwith
493
Fig. 1. Reacceleration of the universe based on Eq. (3); q < 0 if z < 0.4 and z ∼ 1.5 is 4.5 billion years after the Big Bang.
3. Connecting Neutrinos with Gravitons by Looking at Their Wavelengths Assuming m0 (graviton) ≈ 10−65 grams for gravitons in four dimensions, Bashinsky8 and Beckwith6 suppose that density fluctuations are influenced by a modification of overall cosmological density hρ in the Friedmann equations by the pro i 8 portionality factor given by Bashinsky: 1 − 5 · (ρneutrino /ρ) + ϑ [ρneutrino /ρ]2 . This proportionality factor for ρ should be taken as an extension of the results of Marklund et al.,2 where neutrinos interact with plasmons and plasmons interact with gravitons, thereby implying neutrino-graviton interactions. Also, graviton wavelengths have the same order of magnitude as those of neutrinos. Note (Valev 9 ):
mgraviton |RELAT IV IST IC < 4.4 × 10−22 h−1 eV /c2 ⇔ λgraviton ≡
~ mgraviton ·c
(8)
< 2.8 × 10−8 meters
An extension on the work of Marklund et al.2 and Valev9 is the suggestion that some gravitons may become larger (Will10 ), i.e. λgraviton ≡ m ~ ·c < 104 m. graviton
4. Are Inflaton and Quintessence Manifestations of a Complex Field? Link Between Graviton Wave/Gravitons and Initial/Final Inflation? Yurov11 brought up that the following field could take on both inflaton and quintessence phenomenology: √ Φ (t) = ϕ (t)exp(iθ (t))/ 2
(9)
November 24, 2010
14:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.05˙Beckwith
494
In the model of Yurov,11 who assumes cyclic behavior where the value of M = Φ · θ˙ is a constant, and supposes an overall chaotic inflationary-style potential, this dual-use, complex scalar field (Eq. 10) is part of a relatively simple chaotic potential: 2
↔2
V = m Φ∗ Φ
(10)
Re-emergence of inflation allegedly occurs at the end of first inflation, and also at a second inflationary period, commencing at or before red shift Z ∼ 0.423. The re-emergent second inflationary emergent field ϕ+ allegedly is of the following form, with time t taken a billion years ago to the present (Yurov 11 ): 1/3 p 3M 2 t 3 ϕ+ = ϕ0,+ − 3/2 · ↔ m
(11)
2 1 M2 κ ˜ ρ2 m ↔2 2 2 2 H = · ϕ˙ + m ϕ + 2 ↔ H = ρ+ + 4 6 ϕ 3 2λ a
(12)
Tying the complex scalar field to evolving Friedman equations can be accomplished by using the linkage suggested by Beckwith.12 It follows from initial inflationary conditions, assuming that m is a typical inflaton mass. Equivalence is given to the representation of inflaton physics by Yurov 11 (left-hand side) and the expression of a brane world in a Friedman equation by Beckwith12 (right-hand side): 2
Next, the representation of a Friedman equation given by Yurov 11 (left) is paired with the suggestion Beckwith12 made for the second Friedman equation (right): h mi H˙ = V − 3H 2 ↔ H˙ 2 ∼ = −2 4 a
(13) h 2i ↔ Equivalence is also assigned to the typical bound between m ≤ l4 as given
11 6,12 by Yurov, and the brane world initial line elements from the work of Beckwith l2 µ v 2 2 dS 5−dim = z2 · ηuv dx dx + dz . A linkage in early inflation and inflaton ϕ0,− to first inflationary dynamics for ϕ (t) is then given by:
ϕ (t) = ϕ0,− −
p ↔ 2/3 · m · t
(14)
5. Conclusions If a joint DM and DE model as given by Eq. 6 is consistent with known astrophysical observations, connections between Eq. (8) and Eq. (3) should be proven, and further work is needed on Eq. (6), to get better results than provided by Chaplygin-gas style joint DM-DE models (Debnath and Chakarborty 13). We want other measurements than dependence on baryon acoustic oscillation data and supernovas as proof
November 24, 2010
14:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.05˙Beckwith
495
for DE, when the DE leads to ω0de = −1.08 < −1. Answering these questions requires new developments to improve the sensitivity of graviton wave detectors (Maggiore1). Note that DE does not appear in the beginning of inflation, and Eq. (6) may link the DE with the emergence and nucleation of gravitons, i.e., if DE ∼ m0 (graviton) ∝ 10−65 grams in four dimensions. We need improvements over h ∼ 10−23 GW sensitivity to investigate DE-DM, as discussed with Weiss.14 Further connections could arise from determining whether we have tension λ = 3MP2 4πl2 ^
(Maartens;5 Beckwith12 ). A small value of l∼ λ would be consistent with the used approximation ρ/2λ ≈ 0.01, and would lead to the following (as derived by Ng 15 ):
3 ^ S ≈ N · log V λ + 5/2 ≈ N = number of gravitons
(15)
Confirming whether Eq. (15) holds true for initial graviton production N would be priceless. Even better would be to determine whether Fig. 1 applies for both geometries present in the Snyder uncertainty bound in Eq. (2). In addition, there would be opportunities to confirm whether Beckwith6,12 is correct about semiclassical treatments of graviton mass, in confirmation of the deterministic quantum mechanics of ’t Hooft.16 References 1. M. Maggiore, Gravitational Waves, Volume 1: Theory and Experiment (Oxford University Press, Oxford, 2008). 2. M. Marklund, G. Brodin and P. Shukla, Physica Scripta T82, 130 (1999). 3. V. Battisti, Phys. Rev. D 79 083506 (2009), arXiv:0805.1178v2. 4. G. Fuller and C. Kishimoto, Phys. Rev. Lett. 102, 201303 (2009). 5. R. Maartens, Living Rev. Relativity 7 (2004). 6. A. W. Beckwith, Progress in Particle and Nuclear Physics in press (2010), doi:10.1016/j.physletb.2003.10.071. 7. M. E. S. Alves, O. D. Miranda and J. C. N. d. Araujo, Physics Letters B submitted (2009). 8. S. Bashinsky, Coupled evolution of primordial gravity waves and relic neutrinos (2005), arXiv:astro-ph/0505502. 9. D. Valev, Aerospace Res. Bulg. 22, 68 (2008), arXiv:hep-ph/0507255. 10. C. M. Will, Living Rev. Relativity 9 (2006). 11. A. V. Yurov, Complex field as inflaton and quintessence(August 2002), arXiv:hepth/0208129v1. 12. A. W. Beckwith, De-celeration parameter q(z) and does the inflaton ?(t) play a role in an increase in cosmological acceleration at z .423? i.e. how to link early universe inflation with re acceleration? (2010), viXra:1003.0193. 13. U. Debnath and S. Chakraborty, Int. J. of Theor. Phys. 47, p. 2663 (2008). 14. R. Weiss, Personal communication, at ADM 50, College Station, Texas, US, 2009. 15. Y. J. Ng, Entropy 10, 441 (2008), DOI: 10.3390/e10040441. 16. G. t Hooft, The mathematical basis for deterministic quantum mechanics, in Beyond the Quantum, ed. T. M. Nieuwenhuizen (World Scientific, 2006), ch. 1, pp. 2–19.
November 24, 2010
14:48
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.06˙Ohlsson
496
NEUTRINOS FROM KALUZA–KLEIN DARK MATTER ANNIHILATIONS IN THE SUN TOMMY OHLSSON∗ Department of Theoretical Physics, School of Engineering Sciences, Royal Institute of Technology (KTH) Roslagstullsbacken 21, SE-106 91 Stockholm, Sweden ∗ E-mail:
[email protected] In this talk, we compute fluxes of neutrinos from Kaluza–Klein dark matter annihilations in the Sun based on cross-sections from both five- and six-dimensional models. For our numerical calculations, we use WimpSim and DarkSUSY. In addition, we compare our results with the ones derived earlier in the literature. Keywords: Dark matter theory; Extra dimensions; Neutrino astronomy
1. Introduction Let us shortly motivate the investigation of dark matter (DM) in the Universe. Cosmological and astronomical observations have measured the energy budget of the Universe to be 4 % ordinary baryonic matter, 23 % dark matter, and 73 % dark energy.1 Since the DM constitutes approximately a quarter of the total energy budget of the Universe, it is therefore of interest to study its importance. One of the most plausible DM candidates are Weakly Interacting Massive Particles (WIMPs). In particular, neutralinos (χ) are promising WIMP candidates. In this talk, we will however study Kaluza–Klein (KK) particles, which are another type of WIMPs. Now, extra-dimensional field theory is non-renormalizable, which means that we have to view such a theory as an effective theory. Therefore, there is a need for a UV completion of the theory. KK particles arise in models with extra dimensions. If so-called KK parity is conserved, then the lightest KK particle (LKP) is stable (cf., LSP in supersymmetry). If neutral, the LKP can be a good DM candidate. In this part of the proceedings, we abbreviate Kaluza–Klein dark matter by KKDM. In addition, see related talks by Neubert and Volkas at this conference. What about WIMP capture and annihilation in the Sun as well as neutrino production and detection? WIMPs in the Milky Way halo can scatter in the Sun and be gravitationally bound to it. Eventually, they will scatter again and sink to the core of the Sun. In the core, WIMPs (here: KKDM) will accumulate and can annihilate and produce neutrinos. Only ν’s can escape the Sun (from WIMP
November 24, 2010
14:48
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.06˙Ohlsson
497
annihilations). In the propagation from production in the Sun to detection at the Earth, neutrino oscillations are used. Then, muons are induced by ν’s in Earth matter. Therefore, fluxes of muons are detected at Earth. Note that we use the DarkSUSY and WimpSim packages to compute muon fluxes at an Earth-based detector.
2. Extra-Dimensional Models In extra-dimensional models, WIMPs are KKDM particles. Especially, in UED models, all SM particles are allowed to propagate in one or more extra dimensions.2 In the so-called minimal UED model, i.e., the MUED model, the LKP is the first mode of the U(1) gauge boson. However, we investigate five- and six-dimensional UED models that are based on the SM gauge group and have more general mass spectra than the MUED model. In our analyses, we have made the following approximations: i) all SM particles are assumed to be massless, ii) we ignore EWSB effects, iii) we neglect Yukawa couplings, since they give negligible contributions for the processes of interest (even for the top quark Yukawa coupling), and finally, iv) we ignore self-couplings of the Higgs boson, since none of the studied processes involve this interaction.
2.1. Five dimensions In five dimensions, spinors are four-component objects. However, there is no chirality operator, which implies that the Dirac representation is irreducible. Thus, this means that the simplest choice of geometry, the circle S 1 , for the fifth dimension does not work. Nevertheless, the orbifold S 1 /Z2 does the job. For gauge fields in the extra dimensions, there is an additional component A5 . In four dimensions, the zero mode of such a gauge field appears as a massless scalar. Therefore, we take A 5 to be odd in y, i.e., no zero mode exists. In general, KK expansions of fields are given by " # ∞ nπy √ X 1 (0) µ (n) µ A (x , y) = √ A (x ) + 2 A (x ) , cos R πR n=1 r ∞ nπy 2 X (odd) µ sin A(n) (xµ ), A (x , y) = πR n=1 R (even)
µ
(1) (2)
where the index n = 1 gives the first KK modes. In addition to the SM parameters, the compactification radius R and the cut-off scale Λ are the only free parameters. Here, we ignore effects of KK modes higher than the first-level (n = 1) mode. In
November 24, 2010
14:48
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.06˙Ohlsson
498
addition, the gauge part of the five-dimensional Lagrangian is g (0),a (1),bµ (1),cν A A Lgauge = − f abc Fµν 2 g − f abc (∂µ A(1),a − ∂ν A(1),a )(A(0),bµ A(1),cν + A(0),cν A(1),bµ ) ν µ 2 i2 g 2 h abc (0),b (1),c − f (Aµ Aν + A(0),c A(1),b ) . (3) ν µ 4 In five dimensions, the possible DM candidates are the first KK modes of neutrinos, the two neutral components of the Higgs doublet, and the B and W 3 bosons. However, KK neutrinos are ruled out as DM and scalar DM is not interesting in this context, since it has no spin-dependent interactions. Thus, the only interesting DM candidates are B (1) and W 3(1) . Some comments on so-called boundary localized terms (BLTs) are in order. In general, orbifold fixed points imply BLTs, which lead to momentum nonconservation in the extra dimensions. However, this means conservation of KKparity. Here, BLTs are included in the Lagrangian and they i) affect the spectrum (at tree level), which means that we can have different LKPs, ii) affect the coupling constants (at tree level), which we have not taken into account, iii) are not determined by the SM parameters, and iv) decrease predictivity of the models. 2.2. Six dimensions In six dimensions, spinors are eight-component objects, and as in four dimensions, there is a chirality operator. Here, the orbifold is the chiral square T 2 /Z4 . In general, KK expansion of the fields are given by XX 1 fn(j,k) (x4 , x5 )A(j,k) (xµ ) , (4) A(xµ , x4 , x5 ) = δn,0 A(0,0) (xµ ) + L j≥1 k≥0
where
fn(j,k) (x4 , x5 )
" 4 1 jx + kx5 nπ −inπ/2 = e cos + 1 + δj,0 R 2 # 4 nπ kx − jx5 + ± cos R 2
(5)
Here the indices (j, k) = (1, 0) give the first KK modes. In this case, the gauge part of the five-dimensional Lagrangian is j1 ,j2 ,j3 (j1 ),a (j2 ),b µ (j3 ),cν Lgauge = −gf abc δ0,0,0 Aµ Aν ∂ A g (1),a (1),b f abc AH (∂ µ AH )A(0),c + h.c. + µ 2 2 g j1 ,j2 ,j3 ,j4 (j1 ),b (j2 ),c (j3 ),dµ (j4 ),eν − f abc f ade δ0,0,0,0 Aµ Aν A A 4 2 g (1),c (1),e − f abc f ade AH AH A(0),b A(0),dµ µ 2
(6)
November 24, 2010
14:48
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.06˙Ohlsson
499
In six dimensions, the possible DM candidates are the same as in five dimensions, (1) 3(1) and in addition, the first-level adjoint scalars BH and WH belong to the candidates. However, adjoint scalar DM is not interesting. Thus, in conclusion, the interesting DM candidates are the same as in five dimensions. 3. Capture Rates, Branching Ratios, and IceCube In this section, we discuss capture rates and compare the result from an approximative formula given by ! 3 2 SD σWIMP,p ρ 270 km/s 1 TeV 18 −1 Capprox ' 3.35 · 10 s 0.3 GeV/cm3 v¯ 10−6 pb mWIMP (7) and the result from DarkSUSY. In Fig. 1, the ratio of the two different capture rates is shown as a function of the WIMP mass mWIMP . We observe that there is a difference between the two capture rates that is about 25 %. 1.3
Capprox/CDarkSUSY
1.28 1.26 1.24 1.22 1.2 200
400
600 mWIMP [GeV]
800
1000
Fig. 1. The ratio of the capture rates as a function of the WIMP mass m WIMP [as obtained using Eq. (7) and DarkSUSY]. This figure has been adopted from Ref.3
In Table 1, we present branching ratios for pair annihilations of KKDM into different final states. The values of the branching ratios have been computed for two different values (1.0 and 1.3) of the relative mass splitting between the LKP and the KK quarks, which is defined as mq(1) − mLKP . (8) rq ≡ mLKP We observe that the branching ratios into a pair of Higgs bosons are small and they contribute only with a tiny fraction to the muon-antimuon fluxes. In addition, we
November 24, 2010
14:48
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.06˙Ohlsson
500
have included in the table the corresponding branching ratios, which were found in Ref.4 Especially, we note that the values of the branching ratios into W + W − are different in our calculations compared with the values given in Ref.4 Table 1.
Branching ratios for pair annihilations of KKDM.
Final state rq
B (1) 1.0 1.3
W 3(1) 1.0 1.3
B (1)
u ¯u ¯ dd ν¯ν l+ l− hh ZZ W +W −
0.125 0.008 0.011 0.183 0.004 0.004 0.010
0.017 0.017 0.005 0.005 0.002 0.002 0.866
0.04 0.04 0.013 0.20 × × 0
0.084 0.006 0.013 0.223 0.005 0.005 0.012
0.010 0.010 0.005 0.005 0.002 0.002 0.908
W 3(1) Ref.4 0.043 0.043 0.013 0.01 × × 0.65
Muon flux from the Sun (km-2y-1)
In Fig. 2, the bounds on the muon-antimuon flux at IceCube from LKP annihilations the Sun are presented. We will include the bounds found by the IceCube 5
10
allowed mγ (1) , ∆ q(1) IceCube-22 LKP γ
∆ q(1) =0.01
(1)
(2007)
104
3
10
∆ q(1) =0.1
102 102
3
10
104 LKP mass (GeV)
Fig. 2. Limits on the muon flux from LKP annihilations in the Sun including systematic errors (squares). This figure has been adopted from Ref.5
collaboration in our results. 4. Results In this section, we present the results of our computations of the muon-antimuon flux in a detector at Earth from the Sun. In Fig. 3, the left plot shows the results
November 24, 2010
14:48
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.06˙Ohlsson
501
for B (1) being the LKP, whereas the right plot shows the results for W 3(1) being the LKP.3 For both plots, a muon energy threshold Eµth = 1 GeV has been used. Note that the thicker curve segments show the LKP mass range that reproduces correct relic abundance. 4
10
2
10
-1
1
10
0
10
-1
10
-2
10
rq = 0.01 rq = 0.05 rq = 0.1 rq = 0.5 IceCube
2
10
-2
3
Muon-antimuon flux [km yr ]
-2
-1
Muon-antimuon flux [km yr ]
10
rq = 0.01 rq = 0.05 rq = 0.1 rq = 0.5 IceCube
4
10
0
10
-2
10
-4
10
-6
10
-8
500
1000
2000 1500 mLKP [GeV]
2500
3000
10
500
1000
2000 1500 mLKP [GeV]
2500
3000
Fig. 3. Left: The muon-antimuon flux in a detector at Earth as a function of the WIMP mass mWIMP for B (1) being the LKP. Right: The muon-antimuon flux in a detector at Earth as a function of the WIMP mass mWIMP for W 3(1) being the LKP. These figures have been adopted from Ref.3
Finally, we comment shortly on earlier results obtained in the literature. Neutrinos from KKDM annihilations in the Sun have previously been studied by: D. Hooper and G.D. Kribs6 and T. Flacke, A. Menon, D. Hooper, and K. Freese.4 Our study3 is a more careful treatment, it includes a six-dimensional model, it gives different branching ratios for W 3(1) , and it results in a difference of 20 %– 30 %. In addition, the IceCube collaboration (see Ref.5 ) has computed fluxes for the five-dimensional MUED model, which are similar to our results. 5. Summary and Conclusions We have investigated KKDM in two extra-dimensional models – one fivedimensional model and one six-dimensional model. In both models, B (1) and W 3(1) as LKPs are the interesting DM candidates. We have calculated the flux of neutrinoinduced muons and antimuons in an Earth-based neutrino telescope (e.g. IceCube). The fluxes for the five- and six-dimensional models are equal. Therefore, it is not possible to distinguish them. The flux of neutrinos is somewhat larger for B (1) than for W 3(1) . If B (1) is the LKP, IceCube can put constraints on the parameter space. However, not if W 3(1) is the LKP. Acknowledgments I would like to thank my collaborators Mattias Blennow and Henrik Melb´eus for useful collaboration that led to the publications upon which this talk is based. In addition, I would like to thank the organizers of 5th International Conference
November 24, 2010
14:48
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.06˙Ohlsson
502
on Beyond the Standard Models of Particle Physics, Cosmology, and Astrophysics (Beyond 2010) for the invitation. This work was supported by the Royal Swedish Academy of Sciences (KVA) and the Swedish Research Council (Vetenskapsr˚ adet), contract no. 621-2008-4210. References 1. 2. 3. 4.
E. Komatsu et al., Astrophys. J. Suppl. 180, 330 (2009). T. Appelquist, H.-C. Cheng and B. A. Dobrescu, Phys. Rev. D64, 035002 (2001). M. Blennow, H. Melb´eus and T. Ohlsson, JCAP 1001, 018 (2010). T. Flacke, A. Menon, D. Hooper and K. Freese, Kaluza–Klein dark matter and neutrinos from annihilation in the Sun, arXiv:0908.0899 [hep-ph], (2009). 5. R. Abbasi et al., Limits on a muon flux from Kaluza–Klein dark matter annihilations in the Sun from the IceCube 22-string detector, arXiv:0910.4460 [astro-ph.CO], (2009). 6. D. Hooper and G. D. Kribs, Phys. Rev. D67, 055003 (2003).
November 24, 2010
15:3
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.07˙Bilic
503
COSMOLOGICAL k-ESSENCE CONDENSATION ´ NEVEN BILIC Rudjer Boˇskovi´ c Institute, 10002 Zagreb, Croatia E-mail:
[email protected] GARY B. TUPPER∗ and RAOUL D. VIOLLIER† Centre of Theoretical Physics and Astrophysics, University of Cape Town, Rondebosch 7701, South Africa ∗ E-mail:
[email protected]; ‡ E-mail:
[email protected] We consider a model of dark energy/matter unification based on a k-essence type of theory similar to tachyon condensate models. Using an extension of the general relativistic spherical model which incorporates the effects of both pressure and the acoustic horizon we show that an initially perturbative k-essence fluid evolves into a mixed system containing cold dark matter like gravitational condensate in significant quantities.
The most popular cosmological models such as ΛCDM model and a quintessenceCDM model assume that DM and DE are distinct entities. Another interpretation of the observational data is that DM/DE are different manifestations of a common structure. The first definite model of this type was proposed a few years ago,1–3 based upon the Chaplygin gas, a perfect fluid obeying the equation of state p=−
A , ρ
(1)
which has been extensively studied for its mathematical properties.4 The general class of models, in which a unification of DM and DE is achieved through a single entity, is often referred to as quartessence.5,6 Among other scenarios of unification that have recently been suggested, interesting attempts are based on the so-called kessence,7,8 a scalar field with noncanonical kinetic terms which was first introduced as a model for inflation.9 All models that unify DM and DE face the problem of nonvanishing sound speed and the well-known Jeans instability. Soon after the appearance of 1 and,2 it was pointed out that the perturbative Chaplygin gas (for early work see,10 and more recently11 ) is incompatible with the observed mass power spectrum12 and microwave background.13 Essentially, these results are a consequence of a nonvanishing comov-
November 24, 2010
15:3
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.07˙Bilic
504
ing acoustic horizon ds =
Z
dt
cs . a
(2)
The perturbations whose comoving size R is larger than ds grow as δ = (ρ − ρ¯)/¯ ρ∼ a. As soon as R < ds , the perturbations undergo damped oscillations. For the Chaplygin gas we have ds ∼ a7/2 /H0 , where H0 is the present day value of the Hubble parameter, reaching Mpc scales already at redshifts of order 10. However, as soon as δ ' 1 the linear perturbation theory cannot be trusted. A significant fraction of initial density perturbations collapses in gravitationally bound structure - the condensate and the system evolves into a two-phase structure - a mixture of CDM in the form of condensate and DE in the form of uncondensed gas. The simple Chaplygin gas does not exhaust all the possibilities for quartessence. A particular case of k-essence9 is the string-theory inspired tachyon Lagrangian14 p L = −V (ϕ) 1 − g µν ϕ,µ ϕ,ν , (3) where
X ≡ g µν ϕ,µ ϕ,ν .
(4)
It may be shown that every tachyon condensate model can be interpreted as a 3+1 brane moving in a 4+1 bulk.15,16 Eq. (1) is obtained using the stress-energy √ tensor Tµν derived from the Lagrangian (3) with V (ϕ) replaced by a constant A. In a recent paper16 we have developed a fully relativistic version of the spherical model for studying the evolution of density perturbations even into the fully nonlinear regime. The formalism is similar in spirit to17 and applicable to any k-essence model. The key element is an approximate method for treating the effects of pressure gradients. Here we give a brief description of our method and its application to a unifying model based on the Lagrangian (3) with a potential of the form V (ϕ) = Vn ϕ2n ,
(5)
where n is a positive integer. In the regime where structure formation takes place, this model effectively behaves as the variable Chaplygin gas18 with the equation of state (1) in which A ∼ a6n . As a result, the much smaller acoustic horizon ds ∼ a(7/2+3n) /H0 enhances condensate formation by two orders of magnitude over the simple Chaplygin gas. Hence this type of model may salvage the quartessence scenario. A minimally coupled k-essence model,9,19 is described by Z R 4 √ + L(ϕ, X) , (6) S= d x −g − 16πG where L is the most general Lagrangian, which depends on a single scalar field ϕ of dimension m−1 , and on the dimensionless quantity X defined in (4). For X > 0 the energy momentum tensor obtained from (6) takes the perfect fluid form, Tµν = 2LX ϕ,µ ϕ,ν − Lgµν = (ρ + p)uµ uν − p gµν ,
(7)
November 24, 2010
15:3
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.07˙Bilic
505
with LX denoting ∂L/∂X and 4-velocity ϕ,µ uµ = sgn (ϕ,0 ) √ . X
(8)
The sign of uµ is chosen so u0 is positive. The associated hydrodynamic quantities are p = L(ϕ, X);
ρ = 2XLX (ϕ, X) − L(ϕ, X),
and the speed of sound is defined as16 ∂p ∂p LX 2 cs ≡ = = . ∂ρ s/n ∂ρ ϕ LX + 2XLXX
(9)
(10)
Two general conditions LX ≥ 0 and LXX ≥ 0 are required for stability20 and causality.21 Now, using (8)-(9) the ϕ field equation can be expressed as √ (11) ρ˙ + 3H(ρ + p) + (ϕ˙ − sgn (ϕ,0 ) X)∂L/∂ϕ = 0. Since the 4-velocity (8) is derived from a potential, the associated rotation tensor vanishes identically. The Raychaudhuri equation for the velocity congruence combined with Einstein’s equations and the Euler equation assumes a simple form 2 µν cs h ρ,ν 2 µν ˙ 3H + 3H + σµν σ + 4πG(ρ + 3p) = , (12) p+ρ ;µ where σµν is the shear tensor and hµν = gµν − uµ uν is a projector onto the threespace orthogonal to uµ . The quantity H is the local Hubble parameter. defined as 3H = uν ;ν . We thus obtain an evolution equation for H sourced by shear, density, pressure and pressure gradient. If cs = 0, as for dust, Eq. (12) and the continuity equation comprise the spherical model.22 However, we are not interested in dust, since generally cs 6= 0 and the right hand side of (12) is not necessarily zero. In general, the 4-velocity uµ can be decomposed as23 p (13) uµ = (U µ + v µ ) / 1 − v 2 , √ where U µ = δ0µ / g00 is the 4-velocity of fiducial observers at rest, and v µ is spacelike, with v µ vµ = −v 2 and U µ vµ = 0. In comoving coordinates v µ = 0. In spherically symmetric spacetime it is convenient to write the metric in the form ds2 = N (t, r)2 dt2 − b(t, r)2 (dr2 + r2 f (t, r)dΩ2 ),
(14)
where N (t, r) is the lapse function, b(t, r) is the local expansion scale, and f (t, r) describes the departure from the flat space for which f = 1. We assume that N , a, and f are arbitrary functions of t and r which are regular and different from zero at r = 0. Then, the local Hubble parameter and the shear are given by 2 1 b,0 1 f,0 2 1 f,0 H= + ; σµν σ µν = . (15) N b 3 f 3 2N f
November 24, 2010
15:3
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.07˙Bilic
506
In addition to the spherical symmetry we also require an FRW spatially flat asymptotic geometry, i.e., for r → ∞ we demand N → 1;
f → 1;
(16)
b → a(t),
where a denotes the background expansion scale. The righthand side of (12) is difficult to treat in full generality. As in,17 we apply the “local approximation”. The density contrast δ = (ρ − ρ¯)/¯ ρ is assumed to be of fixed Gaussian shape of comoving size R with time-dependent amplitude, so that ρ(t, r) = ρ¯(t)[1 + δR (t) e−r
2
/(2R2 )
],
(17)
and the spatial derivatives are evaluated at the origin. This is in keeping with the spirit of the spherical model, where each region is treated as independent. Since ∂i ρ = 0 at r = 0, naturally ∂i N = 0 and ∂i b = 0 at r = 0. Hence, N (t, r) = N (t, 0)(1 + O(r 2 ));
b(t, r) = b(t, 0)(1 + O(r 2 )).
(18)
Besides, one finds f,0 → 0 as r → 0 which follows from Einstein’s equation G1 0 = 0. From now on we denote by H, b, and N the corresponding functions of t and r evaluated at r = 0, i.e., H ≡ H(t, 0), b ≡ b(t, 0) and N ≡ N (t, 0). According to (15), the shear scalar σµν σ µν vanishes at the origin. Evaluating (12) at r = 0 yields our working approximation to the Raychaudhuri equation. We will now apply our formalism to a particular subclass of k-essence unification models described by (3). The equation of state is then given by p=−
V (ϕ)2 , ρ
(19)
and the quantity X may be expressed as X(ρ, ϕ) = 1 −
V (ϕ)2 = 1 − c2s = 1 + w. ρ2
(20)
The continuity equation, Eq. (11), and Eq. (12) evaluated at r = 0 determine the evolution of the density contrast. However, this set of equations is not complete as it must be supplemented by a similar set of equations for the background quantities ρ¯ and H. The complete set of equations for ρ¯, H, ϕ, b, ρ, and H is 2 dϕ = X(ϕ, ρ¯), (21) dt d¯ ρ + 3H(¯ ρ + p¯) = 0, dt
(22)
4πG dH + H2 + (¯ ρ + 3¯ p) = 0, dt 3
(23)
db = N bH, dt
(24)
November 24, 2010
15:3
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.07˙Bilic
507
dρ + 3N H (ρ + p) = 0, dt
(25)
4πG c2s (ρ − ρ¯) 2 H + (ρ + 3p) − 2 2 = 0, (26) 3 b R (ρ + p) p where p¯ = p(¯ ρ, ϕ) and N = X(ϕ, ρ¯)/X(ϕ, ρ). Eqs. (21) and (24) follow from (11) and (15), respectively, Eqs. (11) and (25) are the continuity equations, and Eqs. (23) and (26) are the Raychaudhuri equations for the background and the spherical inhomogeneity, respectively. Now we restrict our attention to the potential (5). In the high density regime we √ have X ' 1, and (21) can be integrated yielding ϕ ' 2/(3H). Here H ' H0 Ωa−3/2 with Ω being the equivalent matter content at high redshift. Hence, V (ϕ)2 ∼ a6n , which leads to a suppression of 10−6 of the acoustic horizon at z = 9 for n = 1. To proceed we require a value for the constant Vn in the potential (5). As the main purpose of this paper is to investigate the evolution of inhomogeneities we will not pursue the exact fitting of the background evolution. Instead, we estimate Vn as follows. We integrate (21) approximately with dH +N dt
X = 1 + w(a) ' 1 −
ΩΛ , ΩΛ + Ωa−3
Ω + ΩΛ = 1,
(27)
as in a ΛCDM universe24 and we fix the pressure given by (3) to equal that of Λ at a = 1. In this way the naive background in our model reproduces the standard cosmology from decoupling up to the scales of about a = 0.8 and fits the cosmology today only approximately (figure 1(a)). We solve our differential equations with a starting from the initial adec = 1/(zdec + 1) at decoupling redshift zdec = 1089 for a particular comoving size R. The initial values for the background are given by s Ω Ω 2 ρ¯in = ρ0 3 ; Hin = H0 ; ϕin = , (28) 3 adec adec 3Hin and for the initial inhomogeneity we take ρin = ρ¯in (1 + δin ) ,
Hin = Hin
δin 1− 3
,
(29)
where Ω = 0.27 represents the effective dark matter fraction and δin = δR (adec ) is a variable initial density contrast, chosen arbitrarily for a particular R. In figure 1(b) the representative case of evolution of two initial perturbations starting from decoupling for R = 10 kpc is shown for n = 2. The plots represent two distinct regimes: the growing mode or condensation (dashed line) and the damped oscillations ( solid line). In contrast to the linear theory, where for any R the acoustic horizon will eventually stop δR from growing, irrespective of the initial value of the perturbation, here we have for an initial δR (adec ) above a certain threshold δc (R), δR (a) → ∞ at finite a, just as in the dust model. Thus perturbations with
November 24, 2010
15:3
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.07˙Bilic
508
(a)
(b)
Fig. 1. (a) Evolution of the background in the tachyon spherical model. (b) Evolution of δ R (a) from adec = 1/1090 for R = 10 kpc, δR (adec ) =0.004 (solid) and δR (adec ) =0.0055 (dashed).
δR (adec ) ≥ δc (R) evolve into a nonlinear gravitational condensate that at low z behaves as pressureless super-particles. Conversely, for a sufficiently small δ R (adec ), the acoustic horizon can stop δR (a) from growing. The crucial question now is what fraction of the tachyon gas goes into condensate. In25 it was shown that if this fraction was sufficiently large, the CMB and the mass power spectrum could be reproduced for the simple Chaplygin gas. To answer this question quantitatively, we follow the Press-Schechter procedure 26 as in.17 Assuming δR (adec ) is given by a Gaussian random field with dispersion σ(R), the condensate fraction at a scale R is given by Z ∞ dδ δc (R) δ2 √ F (R) = 2 = erfc √ exp − 2 , (30) 2σ (R) 2πσ(R) 2 σ(R) δc (R) where δc (R) is the threshold shown in figure 2(a) . In figure 2(a) we also exhibit the dispersion Z ∞ dk exp(−k 2 R2 )∆2 (k, adec ), (31) σ 2 (R) = k 0 calculated using the variance of the concordance model27 4 ns −1 k k 2 2 ∆ (k, a) = const T (k) . aH 7.5a0 H0
(32)
In figure 2(b) we present F (R) for const=7.11×10−9, the spectral index ns =1.02, and the parameterization of Bardeen et al.28 for the transfer function T (k) with ΩB =0.04. The parameters are fixed by fitting (32) to the 2dFGRS power spectrum
November 24, 2010
15:3
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.07˙Bilic
509
(a)
(b)
Fig. 2. (a) Initial value δR (adec ) versus R for Ω = 0.27 and h = 0.71. The threshold δc (R) is shown by the line separating the condensation regime from the damped oscillations regime. The solid line gives σ(R) calculated using the concordance model. (b) Fraction of the tachyon gas in collapsed objects using δc (R) and σ(R) depicted in (a).
data.29 Our result demonstrates that the collapse fraction is about 70% for n = 2 for a wide range of the comoving size R and peaks at about 45% for n = 1. Albeit encouraging, these preliminary results do not in themselves demonstrate that the tachyon with potential (8) constitutes a viable cosmology. Such a step requires the inclusion of baryons and comparison with the full cosmological data. What has been shown is that it is not correct in an adiabatic model to simply pursue linear perturbations to the original background: the system evolves nonlinearly into a mixed system of gravitational condensate and residual k-essence so that the “background” at low z is quite different from the initial one. Because of this one needs new computational tools for a meaningful confrontation with the data. The tachyon k-essence unification remains to be tested against large-scale structure and CMB observations. An encouraging feature of the positive power-law potential is that it provides for acceleration as a periodic transient phenomenon30 which obviates the de Sitter horizon problem.31 Acknowledgments We wish to thank Robert Lindebaum for useful discussions. This research is in part supported by the Foundation for Fundamental Research (FFR) grant number PHY99-1241, the National Research Foundation of South Africa grant number FA2005033 100013, and the Research Committee of the University of Cape Town. The work of NB is supported in part by the Ministry of Science and Technology of
November 24, 2010
15:3
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.07˙Bilic
510
the Republic of Croatia under Contract No. 098-0982930-2864. References 1. A. Kamenshchik, U. Moschella, and V. Pasquier, Phys. Lett. B 511, 265 (2001). 2. N. Bili´c, G.B. Tupper, and R.D. Viollier, Phys. Lett. B 535, 17 (2002). 3. N. Bili´c, G.B. Tupper, and R.D. Viollier, in DARK 2002, eds. H.V. KlapdorKleingrothaus and R.D. Viollier, Int. Conf. B 535, 17 (2002); [arXiv:astroph/0207423]. 4. R. Jackiw, Lectures on fluid dynamics (Springer-Verlag, New-York, 2002). 5. M. Makler, S.Q. de Oliveira, and I. Waga, Phys. Lett. B 555, 1 (2003). 6. R.R.R. Reis, M. Makler, and I. Waga, Phys. Rev. D 69, 101301 (2004). 7. L.P. Chimento, Phys. Rev. D 69, 123517 (2004). 8. R.J. Scherrer, Phys. Rev. Lett. 93, 011301 (2004). 9. C. Armendariz-Picon, T. Damour, and V. Mukhanov, Phys. Lett. B 458, 209 (1999). 10. J.C. Fabris, S.V.B. Gon¸calves, and P.E. de Souza, Gen. Relativ. Gravit. 34, 53 (2002); ibid. 34, 2111 (2002). 11. V. Gorini, A.Y. Kamenshchik, U. Moschella, O.F. Piattella, and A.A. Starobinsky, JCAP 0802, 016 (2008). 12. H.B. Sandvik, M. Tegmark, M. Zaldarriaga, and I. Waga, Phys. Rev. D 69, 123524 (2004). 13. P. Carturan and F. Finelli,Phys. Rev. D 68, 103501 (2003). 14. M.R. Garousi, Nucl. Phys. B 584, 284 (2000); A. Sen, JHEP 0207, 065 (2002). 15. N. Bili´c, G.B. Tupper, and R.D. Viollier, J. Phys. A 40, 6877 (2007); [arXiv: grqc/0610104]. 16. N. Bilic, G.B. Tupper and R.D. Viollier, Phys. Rev. D 80, 023515 (2009) [arXiv:0809.0375 [gr-qc]]. 17. N. Bili´c, R.J. Lindebaum, G.B. Tupper, and R.D. Viollier, JCAP 0411, 008 (2003); [arXiv: astro-ph/0307214]. 18. Z.-K. Guo and Y.-Z. Zhang, Phys. Lett. B 645, 326 (2007). 19. J. Garriga and V.F. Makhanov, Phys. Lett. B 458, 219 (1999). 20. S.D.H. Hsu, A. Jenkins, and M.B. Wise, Phys. Lett. B 597, 270 (2004); N. Bilic, G.B. Tupper, and R.D. Viollier, JCAP 0809, 002 (2008); [arXiv: 0801.3942]. 21. G.F.R. Ellis, R. Maartens, and M.A.H. MacCallum, Gen. Relativ. Gravit. 39, 1651 (2007). 22. E. Gazt˜ anaga and J.A. Lobo, Astrophys. J. 548, 47 (2001). 23. N. Bili´c, Class. Quant. Grav. 16, 3953 (1999); [arXiv: gr-qc/9908002]. 24. D. Bertacca, S. Matarrese, and M. Pietroni, Mod. Phys. Lett. A22, 2893 (2007). 25. N. Bili´c, R.J. Lindebaum, G.B. Tupper, and R.D. Viollier, in Proceedings of the XVth Rencontres de Blois, France, 2003, eds. J. Dumarchez et al. (The Gioi Publishers, Vietnam, 2005); [arXiv: astro-ph/0310181]. 26. W.H. Press and P. Schechter, Astrophys. J. 187, 425 (1974). 27. G. Hinshaw et al., Astrophys. J. Suppl. 170, 288 (2007); D.N. Spergel et al. Astrophys. J. Suppl. 170, 377 (2007); E. Komatsu et al., Astrophys. J. Suppl 180, 330 (2009). 28. J.M. Bardeen, J.R. Bond, N. Kaiser, and A.S. Szalay, Astrophys. J. 304, 15 (1986). 29. W.J. Percival et al., Mon. Not. R. Astron. Soc. 327, 1297 (2001). 30. A. Frolov, L. Kofman, and A. Starobinsky, Phys. Lett. B 545, 8 (2002). 31. N. Bili´c, G.B. Tupper, and R.D. Viollier, JCAP 0510, 003 (2005); [arXiv: astroph/0503428].
November 24, 2010
15:27
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.08˙Bernabei
511
SIGNALS FROM THE DARK UNIVERSE: NEW RESULTS FROM DAMA/LIBRA R. BERNABEI∗ , P. BELLI, F. MONTECCHIA1 , F. NOZZOLI Dip. di Fisica, Universit` a di Roma “Tor Vergata”, and INFN, sez. Roma “Tor Vergata”, I-00133 Rome, Italy ∗ E-mail:
[email protected] 1 also: Lab. Sperim. Policentrico di Ingegneria Medica, Universit` a di Roma “Tor Vergata” F. CAPPELLA, A. d’ANGELO, A. INCICCHITTI, D. PROSPERI Dip. di Fisica, Universit` a di Roma “La Sapienza” and INFN, sez. Roma, I-00185 Rome, Italy R. CERULLI Laboratori Nazionali del Gran Sasso, I.N.F.N., I-67010 Assergi, Italy C.J. DAI, H.L. HE,H.H. KUANG, X.H.MA, X.D.SHENG, R.G. WANG, Z.P. YE2 IHEP, Chinese Academy, P.O. Box 918/3, Beijing 100039, China 2 also: University of Jing Gangshan, Jiangxi, China The latest results from DAMA/LIBRA, running at the Gran Sasso National Laboratory of the I.N.F.N., are presented. The cumulative exposure with those previously released by the former DAMA/NaI and by DAMA/LIBRA is 1.17 ton × yr, corresponding to 13 annual cycles. The data further confirm the model independent evidence of the presence of Dark Matter (DM) particles in the galactic halo on the basis of the DM annual modulation signature (8.9 σ C.L. for the cumulative exposure). The obtained results are summarized and the update of some of the many possible corollary model dependent quest for the candidate particle are given. Keywords: Scintillation detectors, elementary particle processes, Dark Matter.
1. Introduction The DAMA project is based on the development and use of low background scintillators; several low background set-ups have been realized, such as: i) DAMA/NaI; 1–14 ii) DAMA/LXe;15 iii) DAMA/R&D;16 iv) DAMA/LIBRA;17–20 v) DAMA/Ge21 for sample measurements, which is located in the low background LNGS Ge facility. In particular, the former DAMA/NaI and the present DAMA/LIBRA experiments at the Gran Sasso National Laboratory have the main aim to investigate the presence of Dark Matter particles in the galactic halo by exploiting the model independent Dark Matter annual modulation signature originally suggested in the mid 80’s in ref.22 In fact, as a consequence of its annual revolution around the Sun, which is
November 24, 2010
15:27
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.08˙Bernabei
512
moving in the Galaxy travelling with respect to the Local Standard of Rest towards the star Vega near the constellation of Hercules, the Earth should be crossed by a larger flux of Dark Matter particles around ∼ 2 June (when the Earth orbital velocity is summed to the one of the solar system with respect to the Galaxy) and by a smaller one around ∼ 2 December (when the two velocities are subtracted). Thus, this signature has a different origin and peculiarities than the seasons on the Earth and than effects correlated with seasons (consider the expected value of the phase as well as the other requirements listed below). This annual modulation signature is very distinctive since the effect induced by DM particles must simultaneously satisfy all the following requirements: the rate must contain a component modulated according to a cosine function (1) with one year period (2) and a phase that peaks roughly around ' 2nd June (3); this modulation must only be found in a well-defined low energy range, where DM particle induced events can be present (4); it must apply only to those events in which just one detector of many actually “fires” (single-hit events), since the DM particle multi-interaction probability is negligible (5); the modulation amplitude in the region of maximal sensitivity must <7% for usually adopted halo distributions (6), but it can be larger in case of be ∼ some possible scenarios such as e.g. those in refs.23,24 This offers an efficient DM model independent signature, able to test a large interval of cross sections and of halo densities; moreover, the use of highly radiopure NaI(Tl) scintillators as targetdetectors assures sensitivity to wide ranges of DM candidates, of interaction types and of astrophysical scenarios. It is worth noting that only systematic effects or side reactions able to simultaneously fulfil all the 6 requirements given above (and no one has ever been suggested) and to account for the whole observed modulation amplitude might mimic this DM signature. The DAMA/LIBRA set-up, whose description, radiopurity and main features are discussed in details in ref.18 has firstly been upgraded in September/October 2008.20 For the radiopurity, the procedures and further details see ref.17,18,20 Here we just remind that the sensitive part of this set-up is made of 25 highly radiopure NaI(Tl) crystal scintillators (5-rows by 5-columns matrix) having 9.70 kg mass each one. The detectors are housed in a sealed low-radioactive copper box installed in the center of a low-radioactive Cu/Pb/Cd-foils/polyethylene/paraffin shield; moreover, about 1 m concrete (made from the Gran Sasso rock material) almost fully surrounds (mostly outside the barrack) this passive shield, acting as a further neutron moderator. A threefold-levels sealing system excludes the detectors from the environmental air of the underground laboratory.18 A hardware/software system to monitor the running conditions is operative and self-controlled computer processes automatically control several parameters and manage alarms. Moreover: i) the light response ranges from 5.5 to 7.5 photoelectrons/keV, depending on the detector; ii) the hardware threshold of each PMT is at single photoelectron (each detector is equipped with two low background photomultipliers working in coincidence); iii) en-
energy calibrations with X-rays/γ sources are regularly carried out down to a few keV; iv) the software energy threshold of the experiment is 2 keV electron equivalent (hereafter keV); v) both single-hit events (where just one of the detectors fires) and multiple-hit events (where more than one detector fires) are acquired; vi) the data are collected up to the MeV region, although the optimization is performed for the lower one.

The data of the former DAMA/NaI (0.29 ton × yr) and those of the first 4 annual cycles of DAMA/LIBRA (total exposure 0.53 ton × yr) have already given positive model independent evidence for the presence of DM particles in the galactic halo with high confidence level on the basis of the DM annual modulation signature (see ref. 17 and references therein). In this conference the model independent results obtained with two further annual cycles, DAMA/LIBRA-5,6, are presented.20 The data of the first of these cycles (DAMA/LIBRA-5) have been collected in the same conditions as DAMA/LIBRA-1,2,3,4,17,18 while the data of DAMA/LIBRA-6 have been taken after the 2008 upgrade.

2. The Model Independent Results

Details on the collected exposures in each one of the DAMA/LIBRA annual cycles are given in ref. 20. In particular, the two new annual cycles presented here for the first time correspond to a further exposure of 0.34 ton × yr; thus, the cumulative DAMA/LIBRA exposure released so far is 0.87 ton × yr and, cumulatively with DAMA/NaI, the exposure is 1.17 ton × yr.

The only treatment performed on the raw data is the removal of noise pulses (mainly PMT noise, Cherenkov light in the light guides and in the PMT windows, and afterglows) near the energy threshold in the single-hit events; for a description of the used procedure and details see ref. 18. In the DAMA/LIBRA-1,2,3,4,5,6 annual cycles about 7.2 × 10^7 events have also been collected for energy calibrations and about 3 × 10^6 events/keV for the evaluation of the acceptance-window efficiency for noise rejection near the energy threshold. The periodical calibrations and, in particular, those related with the acceptance-window efficiency mainly affect the duty cycle of the experiment.

Several analyses on the model-independent investigation of the DM annual modulation signature have been performed in ref. 20, as previously done in ref. 17 and refs. therein. In particular, Fig. 1 shows the time behaviour of the experimental residual rates for single-hit events in the (2-4), (2-5) and (2-6) keV energy intervals. These residual rates are calculated from the measured rate of the single-hit events (already corrected for the overall efficiency and for the acquisition dead time) after subtracting the constant part: ⟨r_ijk − flat_jk⟩_jk. Here r_ijk is the rate in the considered i-th time interval for the j-th detector in the k-th energy bin, while flat_jk is the rate of the j-th detector in the k-th energy bin averaged over the cycles. The average is made over all the detectors (j index) and over all the energy bins (k index) which constitute the considered energy interval. The weighted mean of the residuals must obviously be zero over one cycle.
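The residual-rate bookkeeping just described (subtract, for each detector and energy bin, the cycle-averaged rate, then average over detectors and bins) can be illustrated with a minimal numpy sketch; the array shapes and values below are invented toy inputs, not DAMA data.

    import numpy as np

    # rate[i, j, k]: single-hit rate (cpd/kg/keV) in time interval i, detector j,
    # energy bin k, already corrected for efficiency and acquisition dead time.
    # Toy dimensions: 36 time bins, 25 detectors, 8 energy bins (illustrative).
    rng = np.random.default_rng(0)
    rate = 1.0 + 0.01 * rng.standard_normal((36, 25, 8))

    # flat[j, k]: rate of detector j in energy bin k averaged over the cycles
    flat = rate.mean(axis=0)

    # residuals_i = <r_ijk - flat_jk>_{jk}: average over detectors and energy bins
    residuals = (rate - flat).mean(axis=(1, 2))

    # By construction the mean of the residuals vanishes over the cycles
    print(residuals.mean())   # ~0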
Fig. 1. Experimental model-independent residual rate (cpd/kg/keV) of the single-hit scintillation events measured by DAMA/LIBRA-1,2,3,4,5,6 (≈ 250 kg, 0.87 ton × yr) in the (2-4), (2-5) and (2-6) keV energy intervals as a function of time (day). The zero of the time scale is January 1st of the first year of data taking of the former DAMA/NaI experiment.17,20 The experimental points present the errors as vertical bars and the associated time bin width as horizontal bars. The superimposed curves are the cosinusoidal functional forms A cos ω(t − t0) with a period T = 2π/ω = 1 yr, a phase t0 = 152.5 day (June 2nd) and modulation amplitudes, A, equal to the central values obtained by best fit over the whole data, including also the exposure previously collected by the former DAMA/NaI experiment (cumulative exposure 1.17 ton × yr; see also refs. 17,20 and refs. therein). The dashed vertical lines correspond to the maximum expected for the DM signal (June 2nd), while the dotted vertical lines correspond to the minimum. See refs. 17,20 and text.
For clarity, in Fig. 1 only the DAMA/LIBRA data collected over six annual cycles (0.87 ton × yr) are shown; the DAMA/NaI data (0.29 ton × yr) and the comparison with DAMA/LIBRA are available in ref. 17.
The hypothesis of absence of modulation in the data can be discarded.17,20 The single-hit residual rate of DAMA/LIBRA-1,2,3,4,5,6 in Fig. 1 can be fitted with the formula A cos ω(t − t0), considering a period T = 2π/ω = 1 yr and a phase t0 = 152.5 day (June 2nd), as expected by the DM annual modulation signature; this can be repeated for the total available exposure of 1.17 ton × yr including the former DAMA/NaI data (see ref. 17 and refs. therein); Table 1 shows the results.

Table 1. Modulation amplitude, A, obtained by fitting the single-hit residual rate of the six DAMA/LIBRA annual cycles (Fig. 1), and including also the former DAMA/NaI data given elsewhere (see ref. 17 and refs. therein), for a total cumulative exposure of 1.17 ton × yr. It has been obtained by fitting the data with the formula A cos ω(t − t0) with T = 2π/ω = 1 yr and t0 = 152.5 day (June 2nd), as expected for a signal by the DM annual modulation signature. The corresponding χ2 value for each fit and the confidence level are also reported.

Energy interval (keV) | DAMA/LIBRA (cpd/kg/keV) | DAMA/NaI & DAMA/LIBRA (cpd/kg/keV)
2-4 | A = (0.0170±0.0024), χ2/d.o.f. = 41.0/42 | A = (0.0183±0.0022), χ2/d.o.f. = 75.7/79 → 8.3 σ C.L.
2-5 | A = (0.0129±0.0018), χ2/d.o.f. = 30.7/42 | A = (0.0144±0.0016), χ2/d.o.f. = 56.6/79 → 9.0 σ C.L.
2-6 | A = (0.0097±0.0015), χ2/d.o.f. = 24.1/42 | A = (0.0114±0.0013), χ2/d.o.f. = 64.7/79 → 8.8 σ C.L.
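With the period and phase held fixed, the fits of Table 1 reduce to a one-parameter weighted least-squares estimate of the amplitude A of the model A cos ω(t − t0). A sketch of that estimator on simulated residuals is given below; the times, errors and injected amplitude are illustrative assumptions, not the published data points.

    import numpy as np

    T, t0 = 365.25, 152.5                    # period (day) and phase (June 2nd)
    omega = 2.0 * np.pi / T

    # Toy residual rates r (cpd/kg/keV) with uncertainties s at times t (day)
    rng = np.random.default_rng(1)
    t = np.arange(0.0, 6 * 365.25, 30.0)
    s = np.full_like(t, 0.004)
    r = 0.0116 * np.cos(omega * (t - t0)) + s * rng.standard_normal(t.size)

    # Weighted least-squares amplitude for r = A cos(omega (t - t0))
    c = np.cos(omega * (t - t0))
    w = 1.0 / s**2
    A = np.sum(w * r * c) / np.sum(w * c**2)
    sigma_A = 1.0 / np.sqrt(np.sum(w * c**2))
    chi2 = np.sum(w * (r - A * c)**2)        # to be compared with t.size - 1 d.o.f.
    print(A, sigma_A, chi2)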
The compatibility among the 13 annual cycles has been investigated. In particular, the modulation amplitudes measured in each annual cycle of the whole 1.17 ton × yr exposure have been analysed as previously done in ref. 17. Indeed these modulation amplitudes are normally distributed around their best fit value, as pointed out by the χ2 test (χ2 = 9.3, 12.2 and 10.1 over 12 d.o.f. for the three energy intervals, respectively) and the run test (lower tail probabilities of 57%, 47% and 35% for the three energy intervals, respectively). Moreover, the DAMA/LIBRA-5 and DAMA/LIBRA-6 (2-6) keV modulation amplitudes are (0.0086 ± 0.0032) cpd/kg/keV and (0.0101 ± 0.0031) cpd/kg/keV, respectively, in agreement with that of DAMA/LIBRA-1,2,3,4: (0.0110 ± 0.0019) cpd/kg/keV; we also recall that the statistical compatibility between the DAMA/NaI and DAMA/LIBRA-1,2,3,4 modulation amplitudes has been verified.17 Thus, also when adding DAMA/LIBRA-5,6, the cumulative result from DAMA/NaI and DAMA/LIBRA can be adopted.

Table 2 shows the results obtained for the cumulative 1.17 ton × yr exposure when the period and phase parameters are kept free in the fitting procedure described above. The period and the phase are well compatible with expectations for a signal in the DM annual modulation signature. In particular, the phase – whose better determination will be achieved below by using a maximum likelihood analysis – is consistent with about June 2nd within 2σ; moreover, for
completeness, we also note that a slight energy dependence of the phase could be expected in case of possible contributions of non-thermalized DM components to the galactic halo, such as e.g. the SagDEG stream8 and the caustics.25

Table 2. Modulation amplitude (A), period (T = 2π/ω) and phase (t0), obtained by fitting, with the formula A cos ω(t − t0), the single-hit residual rate of the cumulative 1.17 ton × yr exposure. The results are well compatible with expectations for a signal in the DM annual modulation signature.

Energy interval | A (cpd/kg/keV) | T = 2π/ω (yr) | t0 (days) | C.L.
2-4 | (0.0194±0.0022) | (0.996±0.002) | 136±7 | 8.8σ
2-5 | (0.0149±0.0016) | (0.997±0.002) | 142±7 | 9.3σ
2-6 | (0.0116±0.0013) | (0.999±0.002) | 146±7 | 8.9σ
The DAMA/LIBRA single-hit residuals of Fig. 1 and those of DAMA/NaI (see e.g. ref. 17) have also been investigated by a Fourier analysis, obtaining a clear peak corresponding to a period of 1 year (see Fig. 2); the same analysis in other energy regions shows instead only aliasing peaks.
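The power spectra of Fig. 2 are periodograms of the unevenly binned residuals computed along the lines of ref. 26 (which additionally treats the experimental errors and the time binning). A bare-bones Scargle periodogram, without those refinements and run on toy data, looks as follows; it is only a sketch of the technique, not the analysis of refs. 17,20.

    import numpy as np

    def scargle_power(t, y, freqs):
        """Classical Scargle (1982) normalized periodogram for uneven sampling."""
        y = y - y.mean()
        var = y.var()
        power = []
        for f in freqs:
            w = 2.0 * np.pi * f
            tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                             np.sum(np.cos(2 * w * t))) / (2 * w)
            c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
            power.append(0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s)) / var)
        return np.array(power)

    # Toy residuals with a one-year modulation sampled at irregular times
    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0.0, 6 * 365.25, 300))
    y = 0.01 * np.cos(2 * np.pi / 365.25 * (t - 152.5)) + 0.01 * rng.standard_normal(t.size)
    freqs = np.linspace(1e-4, 8e-3, 400)                  # d^-1, as in Fig. 2
    print(freqs[np.argmax(scargle_power(t, y, freqs))])   # ~2.7e-3 d^-1, i.e. ~1 yr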
Fig. 2. Power spectrum (normalized power versus frequency, d^-1) of the measured single-hit residuals in the (2-6) keV (solid lines) and (6-14) keV (dotted lines) energy intervals, calculated according to ref. 26, including also the treatment of the experimental errors and of the time binning. The data refer to: a) DAMA/LIBRA-1,2,3,4,5,6 (exposure of 0.87 ton × yr); b) the cumulative 1.17 ton × yr exposure (DAMA/NaI and DAMA/LIBRA-1,2,3,4,5,6). The principal mode present in the (2-6) keV energy interval corresponds to a frequency of 2.697 × 10^-3 d^-1 and 2.735 × 10^-3 d^-1 (vertical lines) in the a) and b) cases, respectively; both correspond to a period of ≃ 1 year. A similar peak is not present in the (6-14) keV energy interval just above.
The measured energy distribution has been investigated in other energy regions not of interest for Dark Matter, also verifying the absence of any significant
background modulation.a Following the procedures described in ref. 17 and refs. therein, the measured rate integrated above 90 keV, R90, as a function of time has been analysed. In particular, also for these two latter annual cycles the distribution of the percentage variations of R90 with respect to the mean values for all the detectors has been considered; it shows a cumulative gaussian behaviour with σ ≃ 1%, well accounted for by the statistical spread expected from the used sampling time (see Fig. 3).
Fig. 3. Distribution of the percentage variations of R90 with respect to the mean values for all the detectors in the DAMA/LIBRA-5,6 annual cycles (histogram); the superimposed curve is a gaussian fit.
Moreover, fitting the time behaviour of R90 with phase and period as for DM particles, a modulation amplitude compatible with zero is also found in DAMA/LIBRA-5 and DAMA/LIBRA-6: (0.20 ± 0.18) cpd/kg and (−0.20 ± 0.16) cpd/kg, respectively. This also excludes the presence of any background modulation in the whole energy spectrum at a level much lower than the effect found in the lowest energy region for the single-hit events. In fact, otherwise – considering the R90 mean values – a modulation amplitude of order of tens of cpd/kg, that is ≃ 100 σ away from the measured value, would be present. A similar result is obtained when comparing the single-hit residuals in the (2-6) keV interval with those in other energy intervals; see as an example Fig. 4.

a In fact, the background in the lowest energy region is essentially due to "Compton" electrons, X-rays and/or Auger electrons, muon induced events, etc., which are strictly correlated with the events in the higher energy part of the spectrum. Thus, if a modulation detected in the lowest energy region were due to a modulation of the background (rather than to a signal), an equal or larger modulation in the higher energy regions should be present.
It is worth noting that the obtained results already account for whatever kind of background; in addition, no background process able to mimic the DM annual modulation signature (that is, able to simultaneously satisfy all the peculiarities of the signature and to account for the measured modulation amplitude) is available (see also the discussions e.g. in refs. 17,27).
Fig. 4. Experimental residuals in the (2 – 6) keV region and those in the (6 – 14) keV energy region just above for the cumulative 1.17 ton × yr, considered as collected in a single annual cycle. The experimental points present the errors as vertical bars and the associated time bin width as horizontal bars. The initial time of the figure is taken at August 7th . The clear modulation satisfying all the peculiarities of the DM annual modulation signature is present in the lowest energy interval, while it is absent just above; in fact, in the latter case the best fitted modulation amplitude is: (0.00007 ± 0.00077) cpd/kg/keV.
A further relevant investigation has been performed by applying the same hardware and software procedures, used to acquire and to analyse the single-hit residual rate, to the multiple-hit one. In fact, since the probability that a DM particle interacts in more than one detector is negligible, a DM signal can be present just in the single-hit residual rate. Thus, comparing the results of the single-hit events with those of the multiple-hit ones corresponds in practice to comparing the cases of DM-particle beam-on and beam-off. This procedure also allows an additional test of the background behaviour in the same energy interval where the positive effect is observed. In particular, in Fig. 5 the residual rates of the single-hit events measured over the six DAMA/LIBRA annual cycles are reported, as collected in a single cycle, together with the residual rates of the multiple-hit events, in the considered energy intervals. While, as already observed, a clear modulation satisfying all the peculiarities of the DM annual modulation signature is present in the single-hit events, the fitted modulation amplitudes for the multiple-hit residual rate are well compatible with zero: (−0.0011 ± 0.0007) cpd/kg/keV, (−0.0008 ± 0.0005) cpd/kg/keV, and (−0.0006 ± 0.0004) cpd/kg/keV in the (2-4), (2-5) and (2-6) keV energy regions, respectively.
Fig. 5. Experimental residual rates (cpd/kg/keV) versus time (day) over the six DAMA/LIBRA annual cycles, in the (2-4), (2-5) and (2-6) keV energy intervals, for single-hit events (open circles; the class of events to which DM events belong) and for multiple-hit events (filled triangles; the class of events to which DM events do not belong). They have been obtained by considering for each class of events the data as collected in a single annual cycle and by using in both cases the same identical hardware and the same identical software procedures. The initial time of the figure is taken on August 7th. The experimental points present the errors as vertical bars and the associated time bin width as horizontal bars. See text and refs. 17,20. Analogous results were obtained for the DAMA/NaI data.6
Thus, again, evidence of annual modulation with the proper features required by the DM annual modulation signature is present in the single-hit residuals (the event class to which DM particle induced events belong), while it is absent in the multiple-hit residual rate (the event class to which only background events belong). Similar results were also obtained for the last two annual cycles of the DAMA/NaI experiment.6 Since the same identical hardware and the same identical software procedures have been used to analyse the two classes of
events, the obtained result offers an additional strong support for the presence of a DM particle component in the galactic halo.

As in ref. 17, the annual modulation present at low energy can also be shown by depicting – as a function of the energy – the modulation amplitude, S_m,k, obtained by a maximum likelihood method over the data, considering T = 1 yr and t0 = 152.5 day. For such purpose the likelihood function of the single-hit experimental data in the k-th energy bin is defined as: L_k = Π_ij e^(−μ_ijk) μ_ijk^(N_ijk) / N_ijk!, where N_ijk is the number of events collected in the i-th time interval (hereafter 1 day), by the j-th detector and in the k-th energy bin. N_ijk follows a Poisson distribution with expectation value μ_ijk = [b_jk + S_ik] M_j Δt_i ΔE ε_jk. Here the b_jk are the background contributions, M_j is the mass of the j-th detector, Δt_i is the detector running time during the i-th time interval, ΔE is the chosen energy bin, and ε_jk is the overall efficiency. Moreover, the signal can be written as S_ik = S_0,k + S_m,k cos ω(t_i − t0), where S_0,k is the constant part of the signal and S_m,k is the modulation amplitude. The usual procedure is to minimize the function y_k = −2 ln(L_k) − const for each energy bin; the free parameters of the fit are the (b_jk + S_0,k) contributions and the S_m,k parameter. Hereafter, the index k is omitted when unnecessary.
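The minimisation of y_k = −2 ln(L_k) can be sketched for a single energy bin with toy Poisson counts: one flat level per detector plus a common modulation amplitude Sm are fitted by minimising the Poisson −2 ln L (up to a constant). The scipy-based code below illustrates the method with invented detector parameters and counts; it is not the DAMA likelihood code.

    import numpy as np
    from scipy.optimize import minimize

    omega, t0 = 2.0 * np.pi / 365.25, 152.5
    M, dt, dE, eps = 9.70, 1.0, 0.5, 0.9        # kg, day, keV, efficiency (toy values)

    # Toy counts N[i, j] for one energy bin: i = day index, j = detector index
    rng = np.random.default_rng(3)
    t = np.arange(0.0, 2 * 365.25, 1.0)
    level_true = 1.2                            # b + S0 in cpd/kg/keV (toy)
    Sm_true = 0.02
    mu_true = (level_true + Sm_true * np.cos(omega * (t[:, None] - t0))) * M * dt * dE * eps
    N = rng.poisson(np.broadcast_to(mu_true, (t.size, 25)))

    def neg2lnL(p):
        """p = [(b_j + S0) for each detector ..., Sm]; Poisson -2 ln L up to a constant."""
        level, Sm = p[:-1], p[-1]
        mu = (level[None, :] + Sm * np.cos(omega * (t[:, None] - t0))) * M * dt * dE * eps
        mu = np.clip(mu, 1e-12, None)           # guard against unphysical mu <= 0
        return 2.0 * np.sum(mu - N * np.log(mu))

    p0 = np.append(np.full(25, 1.0), 0.0)
    res = minimize(neg2lnL, p0, method="L-BFGS-B")
    print(res.x[-1])                            # fitted Sm, roughly the injected 0.02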
Fig. 6. Energy distribution of the Sm variable (cpd/kg/keV) for the total cumulative exposure of 1.17 ton × yr. The energy bin is 0.5 keV. A clear modulation is present in the lowest energy region, while Sm values compatible with zero are present just above. In fact, the Sm values in the (6-20) keV energy interval have random fluctuations around zero with χ2 equal to 27.5 for 28 degrees of freedom.
In Fig. 6 the obtained Sm values are shown for each considered energy bin (here ΔE = 0.5 keV). It can be inferred that a positive signal is present in the (2-6) keV energy interval, while Sm values compatible with zero are present just above. In fact, the Sm values in the (6-20) keV energy interval have random fluctuations around zero with χ2 equal to 27.5 for 28 degrees of freedom. All this confirms the previous analyses. The method also allows the extraction of the Sm values for each detector, for
each annual cycle and for each energy bin. Thus, following the procedure described in ref. 17, we have also verified that the Sm are statistically well distributed in all the six DAMA/LIBRA annual cycles and in all the sixteen energy bins (ΔE = 0.25 keV in the 2-6 keV energy interval) for each detector. Moreover, that procedure also allows the definition of a χ2 for each detector; the associated degrees of freedom are 16 for the detector restored after the upgrade in 2008 and 96 for the others. The values of χ2/d.o.f. range between 0.7 and 1.22 for twenty-four detectors, and the observed annual modulation effect is well distributed in all these detectors at 95% C.L. A particular mention is deserved by the remaining detector, whose value, 1.28, exceeds the value corresponding to that C.L.; this is also statistically consistent, considering that the expected number of detectors exceeding this value out of twenty-five is 1.25. Moreover, the mean value of the 25 χ2/d.o.f. is 1.066, slightly larger than expected. Although this can still be ascribed to statistical fluctuations (see above), let us ascribe it to a possible systematics. In this case, one would have an additional error of ≤ 4 × 10^-4 cpd/kg/keV, if quadratically combined, or ≤ 5 × 10^-5 cpd/kg/keV, if linearly combined, on the modulation amplitude measured in the (2-6) keV energy interval. This possible additional error – ≤ 4% or ≤ 0.5%, respectively, of the DAMA/LIBRA modulation amplitude – is an upper limit on possible systematic effects.

Among further additional tests, the analysis of the modulation amplitudes as a function of the energy, separately for the nine inner detectors and the remaining external ones, has been carried out including the DAMA/LIBRA-5,6 data in addition to those already analysed in ref. 17. The obtained values are fully in agreement; in fact, the hypothesis that the two sets of modulation amplitudes as a function of the energy belong to the same distribution has been verified by a χ2 test, obtaining χ2/d.o.f. = 3.1/4 and 7.1/8 for the energy intervals (2-4) and (2-6) keV, respectively (ΔE = 0.5 keV). This shows that the effect is also well shared between inner and external detectors.

Let us, finally, release the assumption of a phase t0 = 152.5 day in the procedure to evaluate the modulation amplitudes from the data of the 1.17 ton × yr exposure. In this case the signal has alternatively been written as:

S_ik = S_0,k + S_m,k cos ω(t_i − t0) + Z_m,k sin ω(t_i − t0) = S_0,k + Y_m,k cos ω(t_i − t*).   (1)

For signals induced by DM particles one would expect: i) Z_m,k ∼ 0 (because of the orthogonality between the cosine and the sine functions); ii) S_m,k ≃ Y_m,k; iii) t* ≃ t0 = 152.5 day. In fact, these conditions hold for most of the dark halo models; however, as mentioned above, slight differences can be expected in case of possible contributions from non-thermalized DM components, such as e.g. the SagDEG stream8 and the caustics.25

Fig. 7-left shows the 2σ contours in the plane (Sm, Zm) for the (2-6) keV and (6-14) keV energy intervals, and Fig. 7-right shows, instead, those in the plane (Ym, t*). Table 3 shows the best fit values for the (2-6) and (6-14) keV energy intervals (1σ errors) for Sm versus Zm and Ym versus t*.
Fig. 7. 2σ contours in the plane (Sm, Zm) (left) and in the plane (Ym, t*) (right) for the (2-6) keV and (6-14) keV energy intervals. The contours have been obtained by the maximum likelihood method, considering the cumulative exposure of 1.17 ton × yr. A modulation amplitude is present in the lower energy intervals and the phase agrees with that expected for DM induced signals.
Table 3. Best fit values for the (2-6) and (6-14) keV energy intervals (1σ errors) for Sm versus Zm and Ym versus t*, considering the cumulative exposure of 1.17 ton × yr. See also Fig. 7.

E (keV) | Sm (cpd/kg/keV) | Zm (cpd/kg/keV) | Ym (cpd/kg/keV) | t* (day)
2-6 | (0.0111 ± 0.0013) | −(0.0004 ± 0.0014) | (0.0111 ± 0.0013) | (150.5 ± 7.0)
6-14 | −(0.0001 ± 0.0008) | (0.0002 ± 0.0005) | −(0.0001 ± 0.0008) | undefined
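The two forms in eq. (1) are related by Ym = (Sm^2 + Zm^2)^(1/2) and ω(t* − t0) = arctan(Zm/Sm), so the Table 3 entries can be cross-checked directly; the short script below does this arithmetic for the (2-6) keV row.

    import numpy as np

    T, t0 = 365.25, 152.5
    omega = 2.0 * np.pi / T

    Sm, Zm = 0.0111, -0.0004          # (2-6) keV best-fit values from Table 3
    Ym = np.hypot(Sm, Zm)             # amplitude of the single-cosine form
    t_star = t0 + np.arctan2(Zm, Sm) / omega
    print(Ym, t_star)                 # ~0.0111 cpd/kg/keV and ~150.4 day (cf. 150.5 +- 7.0)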
Finally, forcing to zero the contribution of the cosine function in eq. (1), the Zm values as a function of the energy have also been determined by using the same procedure. The values of Zm as a function of the energy are expected to be zero in case of presence of a DM signal with t* ≃ t0 = 152.5 day. Indeed, the χ2 test applied to the data supports the hypothesis that the Zm values simply fluctuate around zero (see ref. 20); for example, in the (2-14) keV and (2-20) keV energy regions the χ2/d.o.f. are equal to 21.6/24 and 47.1/36 (probabilities of 60% and 10%), respectively. The behaviour of the phase t* as a function of energy is shown in Fig. 8 for the total exposure (1.17 ton × yr, DAMA/NaI & DAMA/LIBRA). As in the previous analyses, an annual modulation effect is present in the lower energy intervals and the phase agrees with that expected for DM induced signals. These results confirm those achieved by the other kinds of analyses.
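The quoted probabilities follow from the χ2 survival function; a one-line check with scipy reproduces them to within rounding of the quoted χ2 values.

    from scipy.stats import chi2

    print(chi2.sf(21.6, 24))   # ~0.60 -> ~60% for the (2-14) keV region
    print(chi2.sf(47.1, 36))   # ~0.10 -> ~10% for the (2-20) keV region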
Fig. 8. Energy distribution of the phase t∗ for the total exposure; here the errors are at 2σ. An annual modulation effect is present in the lower energy intervals up to 6 keV and the phase agrees with that expected for DM induced signals. No modulation is present above 6 keV and the phase is undetermined.
Sometimes naive statements have been put forward, such as the fact that in nature several phenomena may show some kind of periodicity. It is worth noting that the point is whether they might mimic the annual modulation signature in DAMA/LIBRA (and the former DAMA/NaI), i.e. whether they might be not only quantitatively able to account for the observed modulation amplitude but also able to contemporaneously satisfy all the requirements of the DM annual modulation signature. The same holds for side reactions. This has already been deeply investigated in refs. 17,18 and references therein; the arguments and the quantitative conclusions presented there also apply to the DAMA/LIBRA-5,6 data. Some additional arguments have also been recently addressed in refs. 27,28. No modulation has been found in any possible source of systematics or side reactions for DAMA/LIBRA either; moreover, none is able to mimic the signature. Thus, cautious upper limits (90% C.L.) on the possible contributions to the DAMA/LIBRA measured modulation amplitude have been estimated and are summarized in Table 4.

Just as an example we recall here the case of muons, whose flux has been reported by the MACRO experiment to have a 2% modulation with phase around mid-July.29 In particular, it has been shown that not only would this effect give rise in the DAMA set-ups to a quantitatively negligible contribution,5,6,17,20 but several of the six requirements necessary to mimic the annual modulation signature – namely e.g. the conditions of presence of modulation just in the single-hit event rate at low energy and of the phase value – would also fail. Moreover, even the pessimistic assumption of whatever hypothetical (even exotic) possible cosmogenic product – whose decay or de-excitation or whatever else might produce: i) only events at low energy; ii) only single-hit events; iii) no sizeable effect in the multiple-hit counting rate – cannot give rise to any side process able to mimic the investigated DM signature. In fact, not only would this latter hypothetical process be quantitatively negligible,17,20 but in addition its phase – as can easily be derived – would be (much) larger than July 15th, and therefore well different from the one measured by the DAMA experiments and expected by the DM annual modulation signature (≃ June 2nd).
Table 4. Summary of the results obtained by investigating possible sources of systematics and side reactions in the data of the DAMA/LIBRA six annual cycles. None able to give a modulation amplitude different from zero has been found; thus cautious upper limits (90% C.L.) on the possible contributions to the measured modulation amplitude have been calculated and are shown here. It is worth noting that none of them is able to mimic the DM annual modulation signature, that is, none is able to account for the whole observed modulation amplitude and to contemporaneously satisfy all the requirements of the signature. For details see refs. 17,20. Analogous results were obtained for DAMA/NaI.5,6

Source | Main comment (also see ref. 18) | Cautious upper limit (90% C.L.)
Radon | Sealed Cu box in HP nitrogen atmosphere, 3-level sealing | < 2.5 × 10^-6 cpd/kg/keV
Temperature | Air conditioning + huge heat capacity | < 10^-4 cpd/kg/keV
Noise | Efficient rejection | < 10^-4 cpd/kg/keV
Energy scale | Routine + intrinsic calibrations | < 1-2 × 10^-4 cpd/kg/keV
Efficiencies | Regularly measured | < 10^-4 cpd/kg/keV
Background | No modulation above 6 keV; no modulation in the (2-6) keV multiple-hit events; this limit includes all possible sources of background | < 10^-4 cpd/kg/keV
Side reactions | From muon flux variation measured by MACRO | < 3 × 10^-5 cpd/kg/keV
 | In addition: no effect can mimic the signature |
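As a rough illustration of the last point, the cautious upper limits of Table 4 can be combined and compared with the (2-6) keV modulation amplitude of Table 1; even the straight linear sum remains at the few-percent level of the measured effect. The numbers below are simply copied from the two tables (for the energy scale the upper value of the quoted 1-2 × 10^-4 range is taken); the combination itself is only an illustration, not a procedure from refs. 17,20.

    import numpy as np

    # 90% C.L. upper limits from Table 4, in cpd/kg/keV
    limits = np.array([2.5e-6,   # radon
                       1e-4,     # temperature
                       1e-4,     # noise
                       2e-4,     # energy scale (upper value of the quoted range)
                       1e-4,     # efficiencies
                       1e-4,     # background
                       3e-5])    # side reactions
    A_obs = 0.0114               # (2-6) keV amplitude, DAMA/NaI & DAMA/LIBRA (Table 1)

    print(limits.sum() / A_obs)                  # ~0.06: linear sum vs. the amplitude
    print(np.sqrt((limits**2).sum()) / A_obs)    # ~0.025 if combined in quadrature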
Recently, an LVD analysis30 of the muon flux has been reported for the period 2001-2008, which partially overlaps the DAMA/NaI running periods and completely overlaps those of DAMA/LIBRA. Similar results have been reported by BOREXINO.31 A value of ≃ 185 days has been measured by LVD in this period for the muon phase, to be compared with (146 ± 7) days,17,20 which is the phase measured by DAMA/NaI and DAMA/LIBRA for the peculiar low energy single-hit rate modulation. Thus, the latter is ≳ 5σ away from the muon modulation phase measured at LNGS by the large surface apparata MACRO and LVD. In conclusion, any possible effect from muons can be safely excluded on the basis of all the given quantitative facts (and just one of them is enough).

3. Comments

As regards the corollary investigation on the nature of the DM candidate particle(s) and related astrophysical, nuclear and particle physics scenarios, it has been shown – already on the basis of the DAMA/NaI result – that the obtained model independent evidence can be compatible with a wide set of possibilities. The model
dependent analyses performed with the previous DAMA/NaI results can be further addressed with the present cumulative data of DAMA/NaI and DAMA/LIBRA. Many candidates and scenarios can be investigated; we recall, for example:

a) the case of low and high mass WIMP candidates with spin-independent (SI), spin-dependent (SD) and mixed SI&SD couplings.5,6 This analysis has also been extended in ref. 8 considering a possible contribution arising from a non-thermalized DM particle component in the dark halo: in particular, the Sagittarius Dwarf Elliptical Galaxy (SagDEG) has been considered;

b) the role of the electromagnetic contribution produced in the interaction of the WIMP with target nuclei,9 showing that this effect can have an appreciable impact in DM direct searches when interpreted in terms of WIMP candidates, with particular regard for WIMPs of low mass;9

c) WIMP candidates with preferred SI inelastic scattering5,32 in many model frameworks (see e.g. ref. 5);

d) light (≃ keV mass) bosonic candidates, either with pseudoscalar or with scalar coupling.7 For these candidates, the direct detection process is based on the total conversion – in the target – of the mass of the absorbed bosonic particle into electromagnetic radiation. Thus, in this case the recoil of the target nucleus is negligible and is not involved in the detection process (therefore, signals from these light bosonic DM candidates are lost in experiments applying procedures for the rejection of the electromagnetic contribution to the counting rate);

e) electron-interacting DM candidates,11 i.e. DM candidates which can have a dominant coupling with the lepton sector of ordinary matter. These DM particles can be directly detected only through their interaction with electrons, while they are lost by experiments based on the rejection of the electromagnetic component of the counting rate;11

f) Light DM candidate (LDM) particles,12 considering inelastic scattering channels either on the electron or on the nucleus target. As a result of the interaction a lighter particle is produced and the target (either nucleus or electrons) recoils, releasing a detectable signal in suitable detectors;

g) inclusion of the known channeling effect in NaI(Tl) crystals in some of the model dependent analyses described above;10

h) etc.

Some related arguments and some template examples of DM expectations in different scenarios, superimposed on the experimental modulation amplitudes of Fig. 6, are given in the Appendix and in Fig. 21 of ref. 17. They show that at the present level of sensitivity the experimental modulation amplitudes are well compatible with different-in-shape behaviours. Many other interpretations of the annual modulation results are available in the literature (as e.g. refs. 23,33-39, etc.); many others are open.
4. Comparisons

It is worth recalling that no other experiment exists whose result can be directly compared in a model-independent way with those of DAMA/NaI and DAMA/LIBRA, and that – more in general – results obtained with different target materials and/or different approaches cannot be directly compared among them in a model-independent way. This is in particular due to the existing experimental and theoretical uncertainties, not least e.g. how many kinds of dark matter particles can exist in the Universe,b their nature, the interaction types, the different nuclear and/or atomic correlated aspects, the unknown right halo model, the right DM density, etc., as well as the uncertainties on the values of each one of the many experimental and theoretical parameters/assumptions/approximations used in the calculations. Moreover, some experimental aspects of some techniques used in the field also have to be addressed.5,28,40 It is worth noting that the implications of the DAMA model independent results are generally presented in an incorrect/partial/not-updated way; see also some talks in this Conference. Another relevant argument is the methodological robustness.41 In particular, the general considerations on comparisons reported in Appendix A of ref. 17 still hold. Hence, claims of contradiction have no scientific basis. On the other hand, any possible "positive" result has to be interpreted, and a large room of compatibility with the DAMA annual modulation evidence is present.

Similar considerations also hold for the indirect detection searches, since no one-to-one correspondence exists between the observables in the direct and indirect experiments. However, if possible excesses in the positron-to-electron flux ratio and in the γ-ray flux, with respect to a modelling of the background contribution expected from the considered sources, were interpreted – under some assumptions – in terms of Dark Matter, this would also not be in conflict with the effect observed by the DAMA experiments. It is worth noting that different possibilities, either considering different background modelling or accounting for other kinds of sources, can also explain the indirect observations.42

Finally, as regards the accelerator searches for new particles beyond the Standard Model of particle physics, it is worth noting that they can demonstrate the existence of some of the possible DM candidates, but cannot establish that a certain particle is the DM solution or the "single" DM solution. Moreover, DM candidates and scenarios exist (even e.g. for the neutralino candidate) on which accelerators cannot give any information. It is also worth noting that for every candidate (including the neutralino) there exist various different possibilities for the theoretical aspects. Nevertheless, the results from accelerators will give outstanding and crucial complementary information in the field.
b In fact, it is worth noting that, considering the richness in particles of the visible matter, which makes up less than 1% of the Universe density, one could also expect that the particle part of the Dark Matter in the Universe may be multicomponent.
5. Already Performed and Planned DAMA/LIBRA Upgrades

During September 2008 the first upgrade of the DAMA/LIBRA set-up was carried out and the shield was opened in a HP nitrogen atmosphere. This allowed an increase of the exposed mass, since one detector was recovered by replacing a broken PMT. A new optimization of some PMTs and HVs has also been done. Finally, a total replacement of the used transient digitizers with new ones having better performances has been realized, and a new DAQ with optical fibers has been installed and put into operation. The data taking restarted in October 2008.

The model independent results achieved by the DAMA/LIBRA set-up have pointed out the relevance of lowering the energy threshold of the experiment below 2 keV. Thus, the replacement of all the PMTs with new ones having higher quantum efficiency has been planned; this will also improve – as is evident – other significant experimental aspects. A larger exposure collected by DAMA/LIBRA (or by a possible future DAMA/1ton) and the lowering of the 2 keV energy threshold will further improve: i) the experimental sensitivity; ii) the corollary information on the nature of the DM candidate particle(s) and on the various related astrophysical, nuclear and particle physics scenarios. Moreover, it will also allow the investigation – with high sensitivity – of other DM features, of second order effects and of several rare processes other than DM. In particular, some of the many topics – not yet well known at present and which can affect whatever model dependent result and comparison – are: i) the velocity and spatial distribution of the Dark Matter particles in the galactic halo; ii) the effects induced on the Dark Matter particle distribution in the galactic halo by contributions from satellite galaxy tidal streams; iii) the effects induced on the Dark Matter particle distribution in the galactic halo by the possible existence of caustics; iv) the detection of possible "solar wakes" (the gravitational focusing effect of the Sun on the Dark Matter particles of a stream); v) the investigation of possible diurnal effects; vi) the study of possible structures such as clumpiness with small scale size; vii) the coupling(s) of the Dark Matter particle with 23Na and 127I and its nature; viii) the scaling laws and cross sections; etc.

In addition, it is worth noting that ultra low background NaI(Tl) scintillators can also offer the possibility to achieve significant results on several other rare processes, as already done e.g. by the former DAMA/NaI apparatus13,14 and just started with DAMA/LIBRA.19 Finally, we mention that a third-generation R&D effort towards a possible NaI(Tl) ton set-up, proposed by DAMA in 1996, has been funded by I.N.F.N. and is in progress.

6. Conclusions

The new annual cycles DAMA/LIBRA-5,6 have further confirmed a peculiar annual modulation of the single-hit events in the (2-6) keV energy region satisfying all
the many requirements of the DM annual modulation signature. No systematic effect or side process able to simultaneously satisfy all the many peculiarities of the signature and to account for the whole measured modulation amplitude is available. The total exposure of the former DAMA/NaI and the present DAMA/LIBRA is 1.17 ton × yr. The DAMA/LIBRA experiment is continuously running and a new upgrade is foreseen in fall 2010.

References
1. P. Belli, R. Bernabei, C. Bacci, A. Incicchitti, R. Marcovaldi, D. Prosperi, DAMA proposal to INFN Scientific Committee II, April 24th 1990.
2. R. Bernabei et al., Phys. Lett. B 389 (1996) 757; R. Bernabei et al., Phys. Lett. B 424 (1998) 195; R. Bernabei et al., Phys. Lett. B 450 (1999) 448; P. Belli et al., Phys. Rev. D 61 (2000) 023512; R. Bernabei et al., Phys. Lett. B 480 (2000) 23; R. Bernabei et al., Phys. Lett. B 509 (2001) 197; R. Bernabei et al., Eur. Phys. J. C 23 (2002) 61; P. Belli et al., Phys. Rev. D 66 (2002) 043503.
3. R. Bernabei et al., Il Nuovo Cim. A 112 (1999) 545.
4. R. Bernabei et al., Eur. Phys. J. C 18 (2000) 283.
5. R. Bernabei et al., La Rivista del Nuovo Cimento 26 n.1 (2003) 1-73.
6. R. Bernabei et al., Int. J. Mod. Phys. D 13 (2004) 2127.
7. R. Bernabei et al., Int. J. Mod. Phys. A 21 (2006) 1445.
8. R. Bernabei et al., Eur. Phys. J. C 47 (2006) 263.
9. R. Bernabei et al., Int. J. Mod. Phys. A 22 (2007) 3155.
10. R. Bernabei et al., Eur. Phys. J. C 53 (2008) 205.
11. R. Bernabei et al., Phys. Rev. D 77 (2008) 023506.
12. R. Bernabei et al., Mod. Phys. Lett. A 23 (2008) 2125.
13. R. Bernabei et al., Phys. Lett. B 408 (1997) 439; P. Belli et al., Phys. Lett. B 460 (1999) 236; R. Bernabei et al., Phys. Rev. Lett. 83 (1999) 4918; P. Belli et al., Phys. Rev. C 60 (1999) 065501; R. Bernabei et al., Il Nuovo Cimento A 112 (1999) 1541; R. Bernabei et al., Phys. Lett. B 515 (2001) 6; F. Cappella et al., Eur. Phys. J.-direct C 14 (2002) 1; R. Bernabei et al., Eur. Phys. J. A 23 (2005) 7; R. Bernabei et al., Eur. Phys. J. A 24 (2005) 51; R. Bernabei et al., Astrop. Phys. 4 (1995) 45.
14. R. Bernabei, in the volume The Identification of Dark Matter, World Sc. Pub. (1997) 574.
15. R. Bernabei et al., Eur. Phys. J. A 27 s01 (2006) 35; R. Bernabei et al., Phys. Lett. B 546 (2002) 23; R. Bernabei et al., Phys. Lett. B 527 (2002) 182; R. Bernabei et al., Nucl. Instrum. & Meth. A 482 (2002) 728; R. Bernabei et al., Eur. Phys. J. direct C 11 (2001) 1; R. Bernabei et al., Phys. Lett. B 493 (2000) 12; R. Bernabei et al., New Journal of Physics 2 (2000) 15.1; P. Belli et al., Phys. Rev. D 61 (2000) 117301; P. Belli et al., Phys. Lett. B 465 (1999) 315; R. Bernabei et al., Phys. Lett. B 436 (1998) 379; P. Belli et al., Phys. Lett. B 387 (1996) 222; Phys. Lett. B 389 (1996) 783 (err.); P. Belli et al., Il Nuovo Cim. 19 (1996) 537; P. Belli et al., Astropart. Phys. 5 (1996) 217.
16. R. Bernabei et al., Eur. Phys. J. A 36 (2008) 167; R. Bernabei et al., Phys. Lett. B 658 (2008) 193; R. Bernabei et al., ROM2F/2008/17, to appear in the Proc. of the Int. Conf. NPAE 2008, Kiev, Ukraine; R. Bernabei et al., Phys. Rev. C 76 (2007) 064603; R. Bernabei et al., Nucl. Phys. A 789 (2007) 15; R. Bernabei et al., Ukr. J. Phys. 51 (2006) 1037; R. Bernabei et al., Nucl. Instrum. & Meth. A 555 (2005) 270; R. Cerulli et al., Nucl. Instrum. & Meth. A 525 (2004) 535; P. Belli et al., Nucl. Instrum. & Meth. A 498 (2003) 352; R. Bernabei et al., Nucl. Phys. 705 (2002) 29; P. Belli et al., Astropart. Phys. 10 (1999) 115; P. Belli et al., Nucl. Phys. B 563 (1999) 97; R. Bernabei et al., Astropart. Phys. 7 (1997) 73; Il Nuovo Cim. A 110 (1997) 189.
17. R. Bernabei et al., Eur. Phys. J. C 56 (2008) 333.
18. R. Bernabei et al., Nucl. Instr. & Meth. A 592 (2008) 297.
19. R. Bernabei et al., Eur. Phys. J. C 62 (2009) 327.
20. R. Bernabei et al., Eur. Phys. J. C, in publication, arXiv:1002.1028.
21. P. Belli et al., Nucl. Phys. A 806 (2008) 388; P. Belli et al., Nucl. Instrum. & Meth. A 572 (2007) 734; P. Belli et al., in the volume "Current problems in Nuclear Physics and Atomic energy", ed. INR-Kiev, (2006) 479.
22. K.A. Drukier et al., Phys. Rev. D 33 (1986) 3495; K. Freese et al., Phys. Rev. D 37 (1988) 3388.
23. D. Smith and N. Weiner, Phys. Rev. D 64 (2001) 043502; D. Tucker-Smith and N. Weiner, Phys. Rev. D 72 (2005) 063509; D.P. Finkbeiner et al., Phys. Rev. D 80 (2009) 115008.
24. K. Freese et al., astro-ph/0309279; Phys. Rev. Lett. 92 (2004) 11301.
25. F.S. Ling, P. Sikivie and S. Wick, Phys. Rev. D 70 (2004) 123503.
26. W.H. Press and G.B. Rybicki, Astrophys. J. 338 (1989) 277; J.D. Scargle, Astrophys. J. 263 (1982) 835.
27. R. Bernabei et al., arXiv:0912.0660 [astro-ph.GA], to appear in the Proceed. of Scineghe09, October 2009, Assisi (It).
28. R. Bernabei et al., J. Phys.: Conf. Ser. 203 (2010) 012040 (arXiv:0912.4200); http://taup2009.lngs.infn.it/slides/jul3/nozzoli.pdf, talk given by F. Nozzoli.
29. M. Ambrosio et al., Astropart. Phys. 7 (1997) 109.
30. M. Selvi on behalf of the LVD coll., Proceedings of the 31st International Cosmic Ray Conference (ICRC2009), Lodz, Poland, 2009, in press.
31. Borexino coll., D. D'Angelo, talk in this Conference.
32. R. Bernabei et al., Eur. Phys. J. C 23 (2002) 61.
33. A. Bottino, N. Fornengo and S. Scopel, Phys. Rev. D 67 (2003) 063519; A. Bottino, F. Donato, N. Fornengo and S. Scopel, Phys. Rev. D 69 (2003) 037302; Phys. Rev. D 78 (2008) 083520; A. Bottino, F. Donato, N. Fornengo, S. Scopel, arXiv:0912.4025.
34. R. Foot, Phys. Rev. D 78 (2008) 043529.
35. Y. Bai and P.J. Fox, arXiv:0909.2900.
36. K. Belotsky, D. Fargion, M. Khlopov and R.V. Konoplich, Phys. Atom. Nucl. 71 (2008) 147.
37. E.M. Drobyshevski et al., Astrophys. & Astronom. Trans. 26:4 (2007) 289; Mod. Phys. Lett. A 23 (2008) 3077.
38. N. Arkani-Hamed et al., Phys. Rev. D 79 (2009) 015014.
39. D.S.M. Alves et al., arXiv:0903.3945.
40. R. Bernabei et al., ISBN 978-88-95688-12-1, pages 1-53 (2009), Exorma Ed. (arXiv:0806.0011v2).
41. R. Hudson, Found. Phys. 39 (2009) 174.
42. F. Donato et al., Phys. Rev. Lett. 102 (2009) 071301; T. Delahaye et al., Astron. Astrophys. 501 (2009) 821; S. Profumo, arXiv:0812.4457; P. Blasi, Phys. Rev. Lett. 103 (2009) 051104; M. Ahlers et al., arXiv:0909.4060.
RECENT RESULTS FROM WIMP-SEARCH ANALYSIS OF CDMS-II DATA

Z. AHMED* for the CDMS-II collaboration
Division of Mathematics, Physics and Astronomy, California Institute of Technology, Pasadena, CA 91125, USA
*E-mail: [email protected]

The Cryogenic Dark Matter Search (CDMS-II) experiment, at Soudan Underground Laboratory, used germanium low-temperature particle detectors to search for Weakly Interacting Massive Particles (WIMPs), characterized by elastic nuclear scattering. We report results from the analysis of final data taken with the CDMS-II apparatus. Two events were observed in the signal region. Based on our background estimate, the probability of observing two or more background events is 23%. Combined with previous CDMS-II data, this results in an upper limit on the WIMP-nucleon spin-independent interaction cross-section of 3.8 × 10^-44 cm2 at 90% CL for a 70 GeV/c2 WIMP. CDMS-II ended operations in March 2009, to be upgraded to SuperCDMS with detectors that are 2.5 times more massive and have improved background rejection. The first set of such detectors has been deployed at Soudan and is taking data.

Keywords: Dark Matter; WIMPs; CDMS.
1. Introduction

Cosmological evidence [1] indicates that only ∼4% of the energy density of the universe is comprised of baryons, although more than 25% is contained in matter. The missing matter, or dark matter, is likely not only non-baryonic but also non-relativistic at the time of structure formation. One candidate for cold dark matter is Weakly Interacting Massive Particles (WIMPs) [2], well motivated from a cosmological thermal relic framework and, independently, from proposed extensions of the Standard Model such as Supersymmetry [3-5]. WIMPs are expected to have scattering cross-sections on the order of the weak scale, and masses around ∼100 GeV/c2 [6]. As dark matter particles they would constitute diffuse halos around galaxies with isothermal velocities [7]. Terrestrially, they would appear to originate from the direction opposite to that of the Solar System's motion through the Milky Way, with an average velocity of 270 km/s. WIMPs would elastically scatter off nuclei in particle detectors, producing a roughly exponential energy deposition spectrum averaging around a few tens of keV, with an interaction rate of < 0.1 events/kg/day [8, 9].
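The "few tens of keV" scale of the expected recoil spectrum follows from two-body kinematics: for a WIMP of mass m_chi scattering elastically off a nucleus of mass m_N at velocity v, the mean recoil energy over isotropic scattering is <E_R> = mu^2 v^2 / m_N, with mu the WIMP-nucleus reduced mass. A quick numerical check for illustrative values (a 100 GeV/c2 WIMP on germanium at 270 km/s) is sketched below.

    # Mean elastic recoil energy <E_R> = mu^2 v^2 / m_N (isotropic scattering)
    c_kms = 3.0e5                      # speed of light, km/s
    m_chi = 100.0                      # GeV/c^2, illustrative WIMP mass
    m_N = 72.6 * 0.9315                # GeV/c^2, germanium nucleus (A ~ 72.6)
    v = 270.0 / c_kms                  # mean WIMP velocity in units of c

    mu = m_chi * m_N / (m_chi + m_N)   # reduced mass, GeV/c^2
    E_R_keV = mu**2 * v**2 / m_N * 1.0e6
    print(E_R_keV)                     # ~20 keV, i.e. a few tens of keV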
Fig. 1. Background discrimination for a typical Ge detector demonstrated using in situ calibration sources. Bulk electron recoils (red dots) and low-yield surface events (black +) from a 133Ba source, and neutron-induced nuclear recoil events (blue ◦) from a 252Cf source are marked. [Left] Ionization yield versus recoil energy. The solid black lines indicate the nuclear-recoil acceptance region. The sloping dashed magenta line indicates the ionization threshold while the vertical dashed line indicates the recoil energy analysis threshold. The region enclosed in the dot-dashed lines indicates calibration events used to set the surface-event rejection cut. [Right] Normalized ionization yield (number of standard deviations from the mean of the nuclear recoil band) versus normalized timing parameter (timing relative to acceptance region) for the same data. Events to the right of the vertical red dashed line pass the surface-event rejection cut for this detector. The solid red box is the WIMP signal region. Reproduced from Ref. [10].
2. CDMS-II Experiment

The Cryogenic Dark Matter Search (CDMS-II) experiment consisted of an array of 19 germanium (250 g) and 11 silicon (100 g) particle detectors operated at cryogenic temperature (∼50 mK) [11, 12]. Each detector was a cylindrical disk, 7.6 cm in diameter and 1 cm thick. The detectors are grouped into five towers, each tower containing six detectors. Detectors are identified by their tower number (T1-T5) and their position within that tower (Z1-Z6). Particle interactions in a detector generated ionization as well as athermal phonons. An electric field across the detector separated the resulting electrons and holes, which were collected on electrodes patterned on the flat faces, producing an ionization energy measurement. Phonons were collected in four superconducting thin-film absorber circuits and the energy was read out using tungsten transition-edge sensors (TESs) coupled to superconducting quantum interference devices (SQUIDs). A direct line of sight between adjacent detectors in a tower allows identification of events scattering between detectors. The ionization yield, i.e. the ratio of charge to phonon energy depositions, provided the primary discrimination between electron recoils and nuclear recoils, to better than a 10^-4 misidentification rate. For events within 10 µm of a detector surface, charge collection was suppressed and the ionization yield was reduced. Additional discrimination was obtained from the promptness of phonon pulses; surface events had faster pulses than bulk events. Combining ionization yield and phonon timing, bulk electron recoils were rejected to better than a 10^-6 misidentification rate and surface electron recoils to better than 10^-2. This is illustrated in Fig. 1 using calibration data.
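The yield-based discrimination can be pictured with a toy band cut: the ionization yield (ionization energy over recoil energy) sits near 1 for bulk electron recoils and near roughly 0.3 for nuclear recoils in Ge at these energies, so keeping events within a few sigma of the nuclear-recoil mean removes essentially all bulk electron recoils. The snippet below is a schematic illustration with invented numbers, not the CDMS calibration or cut definition.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy events as (recoil energy in keV, ionization yield)
    er = np.column_stack([rng.uniform(10, 100, 100000),
                          rng.normal(1.0, 0.05, 100000)])   # electron recoils
    nr = np.column_stack([rng.uniform(10, 100, 2000),
                          rng.normal(0.3, 0.03, 2000)])     # nuclear recoils

    def in_nr_band(events, mean=0.3, sigma=0.03, nsig=2.0):
        """Accept events whose ionization yield lies within nsig*sigma of the NR mean."""
        return np.abs(events[:, 1] - mean) < nsig * sigma

    print(in_nr_band(er).mean())   # electron-recoil leakage into the band (essentially 0)
    print(in_nr_band(nr).mean())   # nuclear-recoil acceptance (~0.95)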
Fig. 2. Signal criteria efficiency versus recoil energy. Each line represents the cumulative effect of the criterion combined with the ones above it.
In order to suppress ambient photon and radiogenic neutron rates, the entire experimental apparatus was surrounded by layers of lead and polyethylene. Finally, the experiment was situated at the Soudan Underground Mine at a depth of 2090 m.w.e. to suppress the muon flux. An active plastic scintillator veto further tagged remaining incident muons which interacted in the apparatus to generate cosmogenic neutrons [11].

3. Results

We report results from the final WIMP-search data acquired between July 2007 and September 2008. At regular intervals during data acquisition, as well as collectively afterwards, detector performance was characterized by automated checks of detector neutralization (required for full ionization collection) and Kolmogorov-Smirnov tests on various parameter distributions, including charge and phonon pulse characteristics. All 30 detectors (Ge and Si) were used to identify particle interactions, but only optimally performing Ge detectors were used to search for WIMP scatters, leading to 612 kg-days of net WIMP-search exposure.

To prevent bias, the definition of physics cuts, the calculation of their efficiencies, and the characterization of detector response were done only using calibration data from 133Ba and 252Cf sources, or events in WIMP-search data outside the signal region. The 356 keV γ-rays from 133Ba were used to calibrate the ionization and phonon energy scales of the detectors. The validity and linearity of this calibration were verified at WIMP-scatter energies of interest using 10.36 keV x-rays from neutron activation of 70Ge. Calibration data also provided sample surface events and nuclear recoils (Fig. 2) to tune the phonon-timing-based surface-event rejection cut for maximum sensitivity to a 60 GeV/c2 WIMP.

WIMP candidates were defined as events with recoil energies between 10 keV and 100 keV, within the detector fiducial volume, anti-coincident with muon veto activity, having interacted only in a single detector in the apparatus, within 2σ of the
Fig. 3. [Left] Ionization yield versus recoil energy for events passing all cuts, excluding yield and timing. The top (bottom) plot shows events for detector T1Z5 (T3Z4). The solid red lines indicate the 2σ electron and nuclear recoil bands. The vertical dashed line represents the recoil energy threshold and the sloping magenta dashed line is the ionization threshold. Events that pass the timing cut are shown with round markers. The candidate events are the round markers inside the nuclear-recoil bands. [Right] Normalized ionization yield (number of standard deviations from the mean of the nuclear recoil band) versus normalized timing parameter (timing relative to acceptance region) for events passing all cuts, excluding yield and timing. The top (bottom) plot shows events for detector T1Z5 (T3Z4). Events that pass the phonon timing cut are shown with round markers. The solid red box indicates the signal region for that detector. The candidate events are the round markers inside the signal regions. Also shown for reference are normalized timing parameter and normalized ionization yield histograms for calibration neutrons. Reproduced from Ref. [10].
mean ionization yield of calibration-neutron events, and passing a surface-event rejection cut based on phonon pulse timing. The efficiencies of these criteria were measured as a function of energy and are plotted in Fig. 2. The WIMP-spectrum-averaged equivalent exposure for a WIMP of mass 60 GeV/c2 was 194 kg-days.

Prior to unblinding, we also estimated the expected contribution of various background sources. The cosmogenic neutron background was estimated to be 0.04 +0.04/-0.03 (stat.) events by Monte Carlo simulations of muon-induced particle showers and subsequent neutron production. The radiogenic background was estimated to be between 0.03 and 0.06 events based on counting of shielding and detector material samples. The expected background contribution from surface events was 0.6 ± 0.1 (stat.) events, estimated using pass-fail ratios of the surface-event cut measured on calibration surface events and WIMP-search events outside the signal region.

With the analysis finalized, the blind signal region was unmasked on November 5, 2009. We observed two events in the WIMP-acceptance region, at recoil energies of 12.3 keV and 15.5 keV. These are marked in Fig. 3.
Fig. 4. [Left] 90% C.L. upper limits on the WIMP-nucleon spin-independent cross section as a function of WIMP mass. The red (upper) solid line shows the limit obtained from the exposure analyzed in this work. The solid black line shows the combined limit for the full data set recorded at Soudan. The dotted line indicates the expected sensitivity for this exposure based on our estimated background combined with the observed sensitivity of past Soudan data. Prior results from CDMS [12], EDELWEISS II [13], XENON10 [14], and ZEPLIN III [15] are shown for comparison. The shaded regions indicate allowed parameter space calculated from certain Minimal Supersymmetric Models [16, 17]. [Right] The shaded blue region represents WIMP masses and mass splittings for which there exists a cross section compatible with the DAMA/LIBRA [18] modulation spectrum at 90% C.L. under the inelastic dark matter interpretation [19]. Excluded regions for CDMS II (solid-black hatched) and XENON10 [20] (red-dashed hatched) were calculated in this work using the Optimum Interval Method. Reproduced from Ref. [10].
The candidate events occurred in periods of ideal experimental performance, separated in time by several months, and in different detectors in the apparatus. However, a detailed study revealed degraded surface event rejection for a small fraction of events with ionization energy below ∼6 keV, due to misconstructed event timing. Accounting for this effect, the surface background estimate stood revised to 0.8 ± 0.1(stat.)±0.2(syst.). Combining this with the estimated neutron background, the probability to observe two or more background events is 23%. We studied the proximity of the candidates to the surface-event rejection threshold by varying the timing cut threshold of the analysis. By reducing the expected surface event background to 0.4 events, both events are removed at a loss of 28% in WIMP-exposure. No additional events would be added by increasing the expected surface event background to 1.7 events. Based on these and previous CDMS data, we set a combined limit on spinindependent WIMP-nucleon interactions of 3.8×10−44 cm2 at 90% CL for a 70 GeV/c2 WIMP, [10], based on standard galactic halo assumptions [21]. We use the Optimum Interval Method [22], with no background subtraction. The left pane of Fig. 4 shows this limit plotted along with other recent results and favored parameter space under various theoretical models.
December 20, 2010
17:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.09˙Ahmed
535
These data were also analyzed under the hypothesis of WIMP inelastic scattering [19], which was proposed to explain the DAMA/LIBRA data [18] . We computed DAMA/LIBRA regions allowed at the 90% C. L. following the χ2 goodness-of-fit technique described in [23], without including channeling effects [24]. Limits from our data and that of XENON10 [20] were computed using the Optimum Interval Method [22]. Regions excluded by CDMS and XENON10 were defined by demanding the 90% C. L. upper limit to completely rule out the DAMA/LIBRA allowed cross section intervals for allowed WIMP masses and mass splittings. The results are shown in the right pane of Fig. 4. 4. Current Status CDMS-II ended operations in March 2009, and is being upgraded to SuperCDMS Soudan. CDMS-II detectors are being replaced with new ones, 2.5 times more massive than the old ones and with phonon sensors redesigned for better surface event rejection. The first tower of new detectors has already been deployed and is taking data at Soudan. By Summer 2010, 15kg of Ge detectors will be deployed in SuperCDMS with the goal of probing WIMP-nucleon cross-sections of 5×10−45 cm2 [25]. Acknowledgments The CDMS collaboration gratefully acknowledges the contributions of numerous engineers and technicians; we would like to especially thank Jim Beaty, Bruce Hines, Larry Novak, Richard Schmitt and Astrid Tomada. This work is supported in part by the National Science Foundation (Grant Nos. AST-9978911, PHY-0542066, PHY-0503729, PHY-0503629, PHY-0503641, PHY-0504224, PHY0705052, PHY-0801708, PHY-0801712, PHY-0802575 and PHY-0855525), by the Department of Energy (Contracts DE-AC03-76SF00098, DE-FG02-91ER40688, DE-FG02-92ER40701, DE-FG03-90ER40569, and DE-FG03-91ER40618), by the Swiss National Foundation (SNF Grant No. 20-118119), and by NSERC Canada (Grant SAPIN 341314-07). References [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12]
E. Komatsu et al., Astrophys. J. Suppl. 180, 330 (2009). G. Steigman and M. S. Turner, Nucl. Phys. B253, p. 375 (1985). B. W. Lee and S. Weinberg, Phys. Rev. Lett. 39, 165 (1977). S. Weinberg, Phys. Rev. Lett. 48, 1776 (1982). G. Jungman, M. Kamionkowski and K. Griest, Phys. Rept. 267, 195 (1996). G. Bertone, D. Hooper and J. Silk, Phys. Rept. 405, 279 (2005). P. Salucci and A. Borriello, Lect. Notes Phys. 616, 66 (2003). M. W. Goodman and E. Witten, Phys. Rev. D31, p. 3059 (1985). R. J. Gaitskell, Ann. Rev. Nucl. Part. Sci. 54, 315 (2004). T. C. I. Collaboration, Science 327, 1619(March 2010). D. S. Akerib et al., Phys. Rev. D72, p. 052009 (2005). Z. Ahmed et al., Phys. Rev. Lett. 102, p. 011301 (2009).
December 20, 2010
17:39
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.09˙Ahmed
536
[13] [14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] [25]
E. Armengaud et al., arXiv:0912.0805 (2009). E. Aprile et al., Phys. Rev. C79, p. 045807 (2009). V. N. Lebedenko et al., Phys. Rev. D80, p. 052010 (2009). J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, Phys. Rev. D71, p. 095007 (2005). L. Roszkowski, R. Ruiz de Austri and R. Trotta, JHEP 07, p. 075 (2007). R. Bernabei et al., Eur. Phys. J. C56, 333 (2008). D. Tucker-Smith and N. Weiner, Phys. Rev. D64, p. 043502 (2001). J. Angle et al., Phys. Rev. D80, p. 115005 (2009). J. D. Lewin and P. F. Smith, Astropart. Phys. 6, 87 (1996). S. Yellin, Phys. Rev. D66, p. 032005 (2002). C. Savage, G. Gelmini, P. Gondolo and K. Freese, JCAP 0904, p. 010 (2009). R. Bernabei et al., Eur. Phys. J. C53, 205 (2008). Z. Ahmed, Characterization of supercdms 1-inch ge detectors, in Proceedings of 13th Interational Workshop on Low Temperature Detectors, eds. B. Young, B. Cabrera and A. Miller, THE THIRTEENTH INTERNATIONAL WORKSHOP ON LOW TEMPERATURE DETECTORSLTD13 1185 (AIP, 2009).
November 24, 2010
16:12
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.10˙Lin
537
LOW ENERGY NEUTRINO AND DARK MATTER PHYSICS WITH SUB-KEV GERMANIUM DETECTOR SHIN-TED LINa and HENRY T. WONGb (for the TEXONO Collaboration) Institute of Physics, Academia Sinica, Taipei 11529, Taiwan E-mail: [email protected] [email protected] The current goals of the TEXONO research program are on the development of germanium detectors with sub-keV sensitivities to realize experiments on neutrino magnetic moments, neutrino-nucleus coherent scattering, as well as WIMP dark matter searches. An energy threshold of 220 eV was achieved with a four-channel ultra-low-energy germanium prototype detector each with an active mass of 5 g at the Kuo-Sheng Neutrino Laboratory. New limits were placed for the couplings of low-mass WIMPs with matter with a ultra-low-energy germanium prototype detector. Data are being taken with a 500 g Point Contact Germanium detector, where a threshold of ∼350 eV was demonstrated. The dark matter program will evolve into a dedicated experiment at an underground laboratory under construction in Sichuan, China. Keywords: Neutrino-Neucleus Coherent Scattering; Neutrino Magnetic Moments; Dark Matter; Point-Contact Germanium Detector.
1. Introduction A research program on low energy neutrino and dark matter physics is pursued at the Kuo-Sheng Neutrino Laboratory (KSNL) by the TEXONO Collaboration.1 The laboratory is located at a distance of 28 m from a 2.9 GW reactor core and has an overburden of about 30 meter-water-equivalent. Results on neutrino magnetic moments2,3 and neutrino-electron scattering cross-section have been obtained.4 The present goals are to develop advanced detectors with kg-size target mass, 100 eVrange threshold and low-background specifications1 for the searches of Weakly Interacting Massive Particles (WIMPs)5 at the low-mass region as well as the studies of neutrino-nucleus coherent scattering6 and neutrino magnetic moments. 2. Experimental Set-Up The laboratory is equipped with an outer 50-ton shielding structure depicted schematically in Fig. 1(a), consisting of, from outside in, 2.5 cm thick plastic scintillator panels with photo-multiplier tubes (PMTs) readout for cosmic-ray veto (CRV), 15 cm of lead, 5 cm of stainless steel support structures, 25 cm of boron-loaded
November 24, 2010
16:12
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.10˙Lin
538
polyethylene and 5 cm of OFHC copper. The innermost volume with a dimension of 100×80×75 cm3 provides the flexibilities of placing different detectors for different physics topics. During the previous data taking periods, both the HPGe and a CsI(Tl) scintillating crystal array together with their associated inner shieldings were placed in the inner volume. The CsI(Tl) array is for the measurement of neutrino-electron scattering cross-sections. The WIMP as well as neutrino-nucleus coherent scattering and µν search was performed with the ULE-HPGe detector shown schematically in Fig. 1(b). 3. Neutrino and Dark Matter Physics 3.1. Neutrino-nucleus coherent scattering Neutrino coherent scattering with the nucleus7 is a fundamental neutrino interaction which has never been observed. The Standard Model cross section for this process is given by: G2 mN TN dσ coh ] )SM = F mN [Z(1 − 4sin2 θW ) − N]2 [1 − dT 4π 2E2ν G2 E 2 (1) σtot = F ν [Z(1 − 4sin2 θW ) − N]2 4π where mN , N and Z are the mass, neutron number and atomic number of the nuclei, respectively, Eν is the incident neutrino energy and TN is the measure-able recoil energy of the nucleus. The maximum neutrino energy for the typical reactor = 1.9 keV for Ge target (A=72.6). The ν¯e spectra is about 8 MeV, such that Tmax N differential cross section for coherent scattering versus nuclear recoil energy with typical reactor ν¯e spectra is displayed in Fig. 2. Measurement of the coherent scattering cross-section would provide a sensitive test to the Standard Model8 probing the weak nuclear charge and radiative corrections due to possible non-standard neutrino interactions or additional neutral gauge (
(a)
(b)
Fig. 1. (a) The shielding design of the KS Neutrino Laboratory. Detectors and inner shieldings were placed in the inner target volume. (b) Schematic layout of the HPGe with its anti-Compton detectors as well as inner shieldings and radon purge system.
November 24, 2010
16:12
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.10˙Lin
Count (kg-1 keV -1 day -1)
539
106 105 104 103
Ato
νN
mic
Co
her
ent
102
Free
1 10
iza
tio
(SM
)
10 10
Ion
Elect
n( µν )
ron (µ
-1
ν)
-2
10-3
Free Electron (SM)
10-4 10-5 10-6 -2 10
10-1
1
10
3
102 10 Recoil Energy (keV)
Fig. 2. The observable spectra due to neutrino interactions on Ge target with reactor ν¯e at φ(ν¯e )=1013 cm−2 s−1 . Contributions from the AI and FE channels at µν =10−10 µB , as well as from SM ν¯e -e and ν¯e -N coherent scattering are shown.
bosons. The coherent interaction plays important role in astrophysical processes where the neutrino-electron scatterings are suppressed due to Fermi gas degeneracy. It is significant to the neutrino dynamics and energy transport in supernovae and neutrons stars.9 Nuclear power reactors are intense source of electron antineutrinos (ν¯e ) at the MeV range, from which many important neutrino experiments were based. The ν¯e spectra are well-modeled, while good experimental control is possible via the reactor ON/OFF comparisons. 3.2. Neutrino magnetic moments The parameter µν is an effective parameter depending on the eigenstate compositions at the detectors.10 The study of µν is, in principle, a way to distinguish between Dirac and Majorana neutrinos11 − a crucial unresolved issue in neutrino physics. A new detection channel on atomic ionization for possible neutrino electromagnetic interactions was identified and studied.3 Significant enhancement can be expected when the energy transfer to the target is of the atomic-transition scale. Interaction cross-section induced by neutrino magnetic moments (µν ) was evaluated with the equivalent photon method. New limit of µν (ν¯e ) < 1.3 × 10−11 µB at 90% confidence level was derived using current data with reactor neutrinos. 3.3. Dark matter searches A four-channel Ultra-Low-Energy Germanium (ULEGe) prototype detector with a total active mass of 20 g has collected low-background data at KSNL.5 The trigger and analysis efficiencies are shown in Fig. 3(a). An energy threshold of (220±10) eV was achieved at an efficiency of 50%. The background spectrum with 0.338 kg-day of exposure is displayed in Fig. 3(b). Constraints on WIMP-nucleon spin-independent SI SD [σχN ] and spin-dependent [σχN (n)] couplings as functions of WIMP-mass (mχ ) were
November 24, 2010
16:12
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.10˙Lin
540
Trigger
0.8
0.6
104
Event kgŦ1 keV Ŧ1 dayŦ1
1
Event kgŦ1keV Ŧ1day Ŧ1
Efficiency
derived, as depicted in Fig. 4. Overlaid on the plots are results from experiments which define the current exclusion boundaries, the DAMA-allowed regions and that favored by SUSY models.5,12 The KSNL limits improve over previous results at mχ ∼ 3 − 6 GeV. Sensitivities for full-scale experiments at 1 cpkkd background level are projected as dotted lines. The observable nuclear recoils at mχ =5 GeV SI and σχN =0.5 × 10−39 cm2 (allowed) and 1.5 × 10−39 cm2 (excluded) are superimposed with the measured spectrum in the inset of Fig. 3(b) for illustrations.
1000
Ŧ39
(5 GeV, 1.5 u10
cm2) Ŧ39
(5 GeV, 0.5 u10
103
800
2
cm )
600
400
200
102
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
10
Measured Energy (keV)
Calibration 0.4
ACV Tag 10
BestŦfit 1V region 0.2
1 0
0
0.1
0.2
0.3
0.4
0.5 0.6 Energy (keV)
This Work (4u5 g) HPGe (1 kg) CRESSTŦ1
10Ŧ1
(a)
1
10 Measured Energy (keV)
(b)
Fig. 3. (a) Trigger efficiency for physics events recorded by the DAQ system and analysis efficiency of the PSD cut with the best-fit 1σ region, using the 20 g ULEGe prototype detector, as derived by the the 55 Fe-calibration and in situ background events with ACV tags, respectively. (b) The measured spectrum of ULEGe with 0.338 kg-day of data, after various background suppression procedures. Background spectra of the CRESST-I experiment12 and the HPGe2 are overlaid for SI ) are superimposed onto the inset. comparison. The expected spectra for two cases of (mχ , σχN
10 10
10
10
10
10
7 6 5 4 3 2 1 0 2
-36
-37
-38
10
10 3
4
5
6
-39
10
10
10
-34
-35
-40
-36
-41
10 10
-33
7
10 10
-32
-37
-42
10
-43
-44
1
10
(a)
10
2
10
-38
-39
1
10
10
2
(b)
Fig. 4. Exclusion plot of the (a) spin-independent χN (b) spin-dependent χ-neutron cross-section versus WIMP-mass.
November 24, 2010
16:12
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.10˙Lin
541
4. Performance of Point-Contact Germanium Detectors The design of Point-Contact Germanium (PCGe) detectors was first proposed in the 1980’s,13 offering the potential merits of sub-keV sensitivities with kg-scale target mass. There are intense recent interest triggered by successful realization and demonstration of the detector technique.14 A PCGe of target mass 500 g was constructed and has been collecting data in KSNL since early 2009. Similar procedures to those developed for the ULEGe were adopted to study the efficiency factors below the electronic noise edge. The results, analogous to those of Fig. 3(a), are displayed in Fig. 5(a). The trigger efficiencies were measured with two methods. The fractions of calibrated pulser events above the discriminator threshold provided the first measurement, while the studies on the amplitude distributions of in situ data contributed to the other. The relative timing between the PCGe and anti-Compton (ACV) NaI(Tl) detectors is shown in Fig. 5(b) , for “sub-noise edge” events at 200-400 eV before and after the pulse shape discrimination (PSD) selection processes. Events in coincidence with ACV at the “50−200 ns” window are due to multiple Compton scatterings, which are actual physical processes having similar pulse shapes as the neutrino and WIMP signals. It can be seen that only these events have substantial probabilities of surviving the cuts, and the fractions constitute to the PSD efficiencies. The threshold at ∼50% combined efficiencies is ∼350 eV. Intensive background and optimization studies with the PCGe at KSNL are underway. 5. Status and Plans
1
Counts per Bin
Efficiency
A detector with 1 kg mass, 100 eV threshold and 1 cpkkd background level has important applications in neutrino and dark matter physics, as well as in the monitoring of reactor operation. Crucial advances have been made in adapting the Ge detector technology to satisfy these requirements. Competitive limits have been achieved in prototype studies on the WIMP couplings with matter. Intensive re-
Pulser Background PSD
0.8
Before PSD After PSD
3
10
102
0.6
0.4
10
0.2
1 0
0
0.1
0.2
0.3
(a)
0.4
0.5
0.6 0.7 Energy(keV)
0
50
100
150
200
250
300
350
400 450 500 NaI Timing(50ns)
(b)
Fig. 5. (a) The trigger and analysis efficiencies of the 500 g PCGe detector, as derived from the test pulser and in situ events, respectively. (b) Events as a function of relative timing between ACV-NaI(Tl) and PCGe systems, before and after PSD selection.
November 24, 2010
16:12
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.10˙Lin
542
search programs are being pursued along various fronts towards realization of experiments which can meet all the technical challenges. The low energy neutrino physics program will continue at KSNL, where a 900 g PCGe detector will be installed in 2010. Dedicated dark matter search with both 20 g ULEGe and 500 g PCGe detectors will be the first experimental program conducted at CJPL commencing 2010. 5.1. China Jin-Ping underground laboratory The dark matter limits in Sec. 3.3 are by-product results of an experimental configuration optimized for neutrino physics. It is essential that the program will evolve into a dedicated dark matter search experiment in an underground location. An excellent candidate site for a deep underground laboratory was recently identified in Sichuan, China where the China Jin-Ping Laboratory (CJPL) is being constructed.15 The laboratory has more than 2500 m of rock overburden, is accessible by a road tunnel built for public traffic, and is supported by excellent infrastructures already available near the entrance. The first cavern of size 6 m(height)X6 m(width)X40 m(depth) is scheduled for completion in early this year. References 1. H. T. Wong, Mod. Phys. Lett. A 23 1431 (2008). 2. H. B. Li et al., Phys. Rev. Lett. 90 131802 (2003); H. T. Wong et al., Phys. Rev. D 75 012001 (2007). 3. H. T. Wong, H. B. Li, S. T. Lin arXiv: hep-ex/1001.2074 (2010). 4. H. B. Li et al., Nucl. Instrum. Methods A 459 93 (2001); M. Deniz, et al.Phys. Rev. D 81 072001 (2010) 5. S. T. Lin et al., Phys. Rev.D 76 061101 R (2009) and references therein. 6. H. T. Wong et al., J. Phys. Conf. Ser. 39 266 (2006). 7. D.S. Freedman, Phys. Rev.D 9 1389 (1974); Y.V. Gaponov and V.N. Tikhonov, Sov. J. Nucl. Phys. 26 31 (1977); L.H. Sehgal and M. Wanninger,Phys. Lett. B 171 107 (1986). 8. L.M. Krauss, Phys. Lett. B 269 407 (1991); J. Barranco, O.G. Miranda and T.I. Rashba, J. High Energy P. 12 021 (2005); J. Papavassiliou, J. Bernab´eu and M. Passera, Proc. of Science (HEP2005) 192 (2006) J. Barranco, O. G. Miranda, T. I. Rashba, Phys. Rev. D 76 073008 (2007). K. Scholberg, Phys. Rev. D 73 033005 (2006). 9. J.R. Wilson, Phys. Rev. Lett. 32 849 (1974); D.Z. Freedman, D.N. Schramm and D.L. Tubbs, Annu. Rev. Nucl. Sci. 27 167 (1977). 10. P. Vogel and J. Engel, Phys. Rev. D 39 3378 (1989); J.F. Beacom and P. Vogel, Phys. Rev. Lett. 83 5222 (1999). 11. B. Kayser, Phys. Rev. D 26 1662 (1982); J.F. Nieves, Phys. Rev. D 26 3152 (1982); R. Shrock, Nucl. Phys. B 206 359 (1982). 12. M. Drees and G. Gerbier Phys. Lett. B667 241 (2008) and references therein. 13. P.N. Luke et al., IEEE Trans. Nucl. Sci. 36 926 (1989). 14. P. A. Barbeau, J. I. Collar and O. Tench JCAP 09 009 (2007). 15. D. Normile, Science 324 1246 (2009).
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
543
THE “APPROACH UNIFYING SPIN AND CHARGES” PREDICTS THE FOURTH FAMILY AND A STABLE FAMILY FORMING THE DARK MATTER CLUSTERS ∗ ˇ BORSTNIK ˇ N. S. MANKOC
Department of Physics, Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, 1000 Ljubljana, Slovenia ∗ E-mail: [email protected] The Approach unifying spin and charges,1–3,5 assuming that all the internal degrees of freedom—the spin, all the charges and the families—originate in d > (1 + 3) in only two kinds of spins (the Dirac one and the only one existing beside the Dirac one and anticommuting with the Dirac one), is offering a new way in understanding the appearance of the families and the charges (in the case of charges the similarity with the Kaluza-Klein-like theories must be emphasized). A simple starting action in d > (1 + 3) for gauge fields (the vielbeins and the two kinds of the spin connections) and a spinor (which carries only two kinds of spins and interacts with the corresponding gauge fields) manifests after particular breaks of the starting symmetry the massless four (rather than three) families with the properties as assumed by the Standard model for the three known families, and the additional four massive families. The lowest of these additional four families is stable. A part of the starting action contributes, together with the vielbeins, in the break of the electroweak symmetry manifesting in d = (1+3) the Yukawa couplings (determining the mixing matrices and the masses of the lower four families of fermions and influencing the properties of the higher four families) and the scalar field, which determines the masses of the gauge fields. The fourth family might be seen at the LHC, while the stable fifth family might be what is observed as the dark matter. Keywords: Origin of families and charges; origin of Yukawa couplings; origin of gauge fields; prediction of the fourth family; prediction of stable fifth family forming the dark matter; two kinds of the Clifford objects.
1. Introduction The Standard model of the electroweak and colour interactions (extended by the right handed neutrinos), fiting with around 25 assumptions and parameters all the existing experimental data, leaves unanswered many open questions, among which are the questions about the origin of the charges (U (1), SU (2), SU (3)), of the families, and correspondingly of the Yukawa couplings of quarks and leptons and of the Higgs mechanism. Answering the question about the origin of families and their masses seems to me the most promising way leading beyond the today’s knowledge about the elementary fermionic and bosonic fields. The Approach unifying spins and charges1–3,5 does have a chance to answer the above mentioned open questions. The question is, of course, whether and when the
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
544
author a (and collaborators) will succeed to prove, that this theory really manifests in the low energy regime as the effective theory, postulated by the Standard model. Let me briefly present first i.) the starting assumptions of the Approach (in this section), ii.) a brief explanation of what does it offer (section 2), iii.) the so far made proofs and the obtained positive results (section 3) and at the end the open problems which are studying or are waiting to be studied (section 4.1). The Approach assumes1–3,5 a simple action in d = (1 + 13)-dimensional space Z Z d S= d x Lf + d d x Lg (1) with the Lagrange density for a spinor, which carries in d = (1 + 13) two kinds of the spin and no charges. One kind of spin is the Dirac spinor in (1 + (d − 1))dimensional space which, like in the Kaluza-Klein-like theories, takes care of the spin in d = (1 + 3) and all the charges, and the second kind, the new one, which takes care of the families b . There exist only two kinds of spins, corresponding to the left and the right multiplication of the Clifford algebra objects. The two kinds of spins are represented by the two kinds of the Clifford algebra objects1–3,5 i a b i a b (γ γ − γ b γ a ), S˜ab = (˜ γ γ˜ − γ˜ b γ˜ a ), 4 4 {γ a , γ b }+ = 2η ab = {˜ γ a , γ˜ b }+ , {γ a , γ˜ b }+ = 0, {S ab , S˜cd }− = 0. S ab =
(2)
The spinor interacts correspondingly only with the vielbeins f α a and the two kinds of the spin connection fields, ωabα and ω ˜ abα , 1 (E ψ¯ γ a p0a ψ) + h.c. 2 1 1 1 = f α a (pα − S ab ωabα − S˜ab ω ˜ abα ) + {pα , f α a E}. 2 2 2E
Lf = p0a
(3)
Correspondingly there is the Lagrange density for the gauge fields, assumed to be linear in the curvature ˜ Lg = E (α R + α ˜ R), ˜ = f α[a f βb] (˜ R = f α[a f βb] (ωabα,β − ωcaα ω c bβ ), R ωabα,β − ω ˜ caα ω ˜ c bβ )
(4)
and it is also the torsion field T β ab = f α [a (f β b] ),α + ω[a c b] fcβ . Indices a and α define the index in the tangent space and the Einstein index, respectively, m and µ determine the ”observed” (1 + 3)-dimensional space, s and σ the indices of ”non observed” dimensions. aI
started the project named the Approach unifying spins and charges fifteen years ago, proving alone or together with collaborators step by step that such a theory has a real chance to answer the open questions of the Standard model. The names of the collaborators and students can be found in the cited papers. b This is the only theory in the literature to my knowledge, which does not explain the appearance of families by just postulating their numbers on one or another way, through the choice of a group, for example, but by offering the mechanism for generating families.
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
545
The action Eq.(1), which manifests the families (through S˜ab ) as the equivalent representations to the spinor representation of S ab (as can be seen in the last term of Eq. (2)), has a real chance to manifest after the spontaneous breaks of symmetries in the low energy regime, as I shall comment bellow (sections 2, 3), all the properties required by the Standard model in order to fit the so far observed data. The Approach does not only explain, for example, why the left handed spinors carry the weak charge while the right ones do not, and predicts that for the spontaneous breaks of symmetries the vielbeins together with both kinds of the spin connection fields are responsible, as they are also for the Yukawa couplings and the appearance of the scalar fields, but explains the mechanism for the appearance of families, predicting the fourth family to be possibly seen at the LHC (or at somewhat higher energies)2,5,7 and the stable fifth family, whose neutrinos and baryons with masses several hundred TeV/c2 might explain the appearance of the dark matter.6 The Approach confronted and still confronts several problems (among them are the problems common to all the Kaluza-Klein-like theories which have vielbeins and spin connections as the only gauge fields, without additional gauge fields in the bulk), which we are studying step by step when searching for possible ways of spontaneous breaking of the starting symmetries, leading to the properties of the observed families of fermions and gauge and scalar fields, and looking for predictions the Approach is offering.3,6,7 I kindly ask the reader to look for more detailed presentation of the Approach in the refs.2,5,7 and the references therein. 2. The Low Energy Limit of the Approach Unifying Spin and Charges The action of Eq. (1) starts with the massless spinor, let say, of the left handedness. The spinor interacts with the vielbeins and through two kinds of spins (S ab and S˜ab ) with the two kinds of the spin connection fields. It was shown1–3,5 that the Dirac kind of the Clifford algebra objects (γ a ) determines, when the group SO(1, 13) is analysed with respect to the Standard model groups in d = (1 + 3) the spin and all the so far observed charges, manifesting the left handed quarks and leptons carrying the weak charge and the right handed weak chargeless quarks and leptons.2,5,7 Accordingly the Lagrange density Lf (Eq. 1,3) manifests after the appropriate breaks of the symmetries all the properties of one family of fermions as assumed by the Standard model: The three kinds of charges coupling fermions to the corresponding gauge fields, as presented bellow in the first term on the right hand side of Eq.(5). The second kind (˜ γ a ) of the Clifford algebra objects (defining the equivalent representations with respect to the Dirac one) determines families. Accordingly manifests the spinor Lagrange density, after the breaks of the starting symmetry (SO(1, 13) into SO(1, 7)×U (1)×SU (3) and further into SO(1, 3)×SU (2)×SU (2)× U (1)×SU (3)) the Standard model-like Lagrange density for massless spinors of eight (four + four) families (defined by 28/2−1 = 8 spinor states for each member of one
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
546
family). In the first successive break, appearing at around 1013 GeV (or at a little lower scale), the upper four families (in the Yukawa couplings decoupled from the lower four families) obtain masses, while the lower four families stay massless (due to the fact that they are singlets with respect to the generators c˜Ai ab S˜ab , forming one of the two SU (2) groups) and mass protected (since only the left handed spinors carry the weak charge). The Yukawa couplings are presented in the second term on the right hand side of Eq.(5) bellow. The third term (”the rest”) in Eq.(5) can still contribute to the second break. X X ¯ m (pm − ¯ s p0s ψ + the rest. ψγ (5) Lf = ψγ g A τ Ai AAi m )ψ + A,i
Ai
P
Ai
s=7,8
ab
Here τ (= a,b c ab S ) determine the hyper (A = 1), the weak (A = 2) and the colour (A = 3) charge: {τ Ai , τ Bj }− = iδ AB f Aijk τ Ak , f 1ijk = 0, f 2ijk = εijk , f 3ijk is the SU (3) structure tensor. In the final break (leading to SO(1, 3)×U (1)×SU (3), which is the Standard model like break) the last four families obtain masses. The two breaks appear at two very different scales and are caused by vielbeins and both kinds of the spin connection fields in any of these cases, manifesting the two kinds of the Yukawa couplings and the two kinds of scalar fields. The last break influences also the masses and the mixing matrices of the upper four families (in the Yukawa couplings decoupled from the lower ones) leading to the same gauge fields for all of the eight families.1,2 Correspondingly manifests the Lg at observable energies all the three known gauge fields in the Kaluza-Klein-like way δm µ em σ = 0 a e α= s . (6) e µ = es σ E σ Ai AAi es σ µ Here E σAi = τ Ai xσ . The scalar fields and the gauge fields manifest through the vielbeins. The Approach predicts two stable families; the first and the fifth. The quarks with the lowest mass among the upper four families, clustered into fifth family baryons, are, together with the fifth family neutrinos, the candidates to form the dark matter. For detailed calculations the reader can look at the ref.4 3. Rough Estimations and Predictions Although we can not (yet, since it needs a lot of additional understanding and calculations) tell how do these spontaneous breaks occur and at what energies do they occur, we still can make some estimations. At the break, which occurs at around 1013 GeV and makes the upper four families massive, the lightest (the stable one) of this four families obtains the mass, which is higher than several hundreds GeV (to be in agreement with the experimental data and with the prediction of the Approach that there are four rather than three families with nonzero mixing matrices) and pretty lower than the scale of the break. It is also meaningful to assume that the second break, which appears at much lower (that is weak scale)
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
547
will not influence the masses of the upper four families considerably. Expressing the P ¯ s p0s ψ of Eq. (5) in terms of the superposition of the ωstσ and ω term s=7,8 ψγ ˜ stσ as dictated by the breaks, one ends up with the mass matrices first for the upper and, after the second break, for the lower four families2,5 on the tree level. Since the ωstσ and ω ˜ stσ are not known, we only see the symmetries of the mass matrices. In the subsection (3.1) I present results of the estimations on the tree level of properties of the lower four families,2,5 when the Yukawa couplings from Eq. (5) were taken as parameter and fited to the existing experimental data. Although we were not able to calculate the masses of the fourth family members, we calculated, for the assumed masses the mixing matrices, predicting that the fourth family might be seen at the LHC or at somewhat higher energies. In the subsection (3.2) I present how do the experimental data–the cosmological ones and the data from direct measurements–limit the properties of the fifth family quarks: The present dark matter density limits (under the assumption that the fifth family baryons are mostly responsible for it) their masses in the interval 10TeV < mq5 c2 < a few hundreds TeV, while the direct measurements, if they measure our fifth family baryons, predict the masses mq5 ≥ 200 TeV c2 . These estimations will serve as a guide to next more demanding studies of the predictions of the Approach. 3.1. Predicting the fourth family properties After the last break (caused again by the vielbeins and the two kinds of the spin connection fields, connected now with the second SU (2) symmetry break in the S˜ab sector) the action (Eq. 5) manifests the properties postulated by the Standard model: the massive gauge fields Zµ and Wµ± , the massless U (1) and SU (3) fields, the scalar filed (the Standard model Higgs) and the mass matrices of quarks and leptons of the lower four families. It is a very difficult study to estimate the breaks and correspondingly the numerical values for parameters which the Standard model fits from the experimental data. We are still able to tell something about the mass matrices, if taking into account the symmetries of them, which we evaluated on the tree level. The parameterised mass matrix is presented bellow.2,5 a± b± −c± 0 b± a± + d1± 0 −c± (7) c± 0 a± + d2± b± 0
c±
b±
a± + d3±
The parameters a± , b± , c± distinguish among ui and di , or νi and ei (+ for ui and νi , and − for di and ei ), while the parameters di± distinguish among all the members of one family. Assuming that the contributions when going bellow the tree level would change the values and not the symmetries, and not paying attention on the CP non conservation (studies of the discrete symmetries of the Approach are under consideration), that is assuming that the parameters are real, we fit with the Monte Carlo program these parameters to the existing data for the three families within
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
548
the known accuracy. We did this for quarks and leptons. We were not able to tell the masses of the fourth family members unless requiring an additional symmetry.5 For the chosen values for the fourth family masses (we took (215, 285, 85, 170 GeV/c 2 for the u4 , d4 , ν4 , e4 respectively), we were able to evaluate the mixing matrices, which fit the experimental data.5 We present bellow, as an example, the mass matrix for the u-quarks (9, 22) (−150, −83) 0 (−306, 304) (−150, −83) (1211, 1245) (−306, 304) 0 0 (−306, 304) (171600, 176400) (−150, −83) (−306, 304) 0 (−150, −83) 200000 and the mixing matrix for the quarks −0.974 −0.226 −0.00412 0.00218 0.226 −0.973 −0.0421 −0.000207 . 0.0055 −0.0419 0.999 0.00294 0.00215 0.000414 −0.00293 0.999
We found the following masses of the four family members: mui /GeV = (0.0034, 1.15, 176.5, 285.2), mdi /GeV = (0.0046, 0.11, 4.4, 224.0) and mνi /GeV = (1 · 10−12 , 1 · 10−11 , 5 · 10−11 , 84.0), mei /GeV = (0.0005, 0.106, 1.8, 169.2). We are now studying the properties of the Yukawa couplings for the upper four and the lower four families bellow the tree level. Although we can not tell the masses of the fourth family yet, we can predict that the fourth family will sooner or later be experimentally confirmed, and due to my understanding, at much lower energies than the supersymmetric partners. 3.2. The fifth family baryons and neutrinos forming the dark matter The Approach predicts two times four families bellow the energy scale of 10 13 GeV. The upper four families obtain masses when one of the two SU (2) symmetries in both sectors–S ab and S˜ab –breaks. The break of the second SU (2) symmetry influences the properties of the upper four families as well, so that they manifest the same charges and interact with the same gauge fields as the lower four families. It (slightly) influences also their masses. The lowest of the four upper families is stable (with respect to the age of the universe) and is accordingly the candidate to answer the open question of both standard models–the electroweak and the cosmological–that is about the origin of the dark matter in the universe. We have much less data about the upper four families than about the lower four families. The only data about the upper four families follow from the known properties of the dark matter. In what follows I briefly present the estimations of the main properties of the fifth family members, making the assumption that the fifth family neutron is the
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
549
lightest fifth family baryon and that the neutrino is the lightest fifth family lepton c . Other possibilities are under consideration. For known masses of quarks and leptons of the fifth family members and for known phase transitions in the universe when expanding and cooling down, the behaviour of the fifth family members in the evolution of the universe as well as when the clusters of galaxies are formed and when they interact with the ordinary matter follow (although the calculations are extremely demanded). We should know also the fifth family baryon/antibaryon asymmetry. We have not yet studied the masses and the mixing matrices for the upper four families and not yet the baryon/antibaryon asymmetry within the Approach. From what it is presented in subsection 3.1 it follows that the masses of the fifth family members are above 1 TeV/c2 , since they are expected to be above the fourth family masses. We assume in what follows no baryon/antibaryon asymmetry, which, as we shall se, does not influence much the results for large enough masses (above several tens TeV/c2 ) of the fifth family quarks. For high enough masses the one gluon exchange determines the properties of quarks in baryons as well as the quarks’ interaction with the cosmic plasma when the temperature is above 1 GeV/kb and correspondingly their freezing out of the plasma and their forming the fifth family neutrons during the expansion.6 The fifth family baryons and antibaryons, decoupling from the plasma mostly before or during the phase transition, interact among themselves through the ”fifth family nuclear force”, which is for the masses of the fifth family quarks of the order of several hundred TeV/c2 for a factor 10−10 smaller than the ordinary (first family) nuclear force.6 Correspondingly the fifth family baryons and antibaryons interact in the dark matter clouds in the galaxies and among the galaxies, when they scatter, dominantly with the weak force. With the ordinary matter they interact through the ”fifth family nuclear force” and are obviously not WIMPS (weakly interacting massive particles, if WIMPS are meant to interact only with the weak force). We estimate that the fifth family neutrinos with masses above TeV/c2 and bellow 200 TeV/c2 contribute to the dark matter and to the direct measurements less than the fifth family neutrons d .
c Let
me mention as one of the arguments for the assumption that the neutron is the lightest baryon. While the electrostatic repulsion energy contributes around 1 MeV more to the mass of the first family proton that it does to the first family neutron mass, this repulsion energy is for the fifth family proton, which is made out of quarks of the masses of several hundreds TeV/c 2 , 100 GeV(which is still only 1 per mil of the whole mass). d Both, quarks and leptons of the fifth family (as well as antiquarks and antileptons), experience the electroweak break, having accordingly different electroweak properties before and after the break. We are studying these properties. The fifth family quarks, which do not decouple from the plasma before the colour phase transition, which starts bellow 1 GeV/c 2 , are influenced by the colour phase transition as well. We estimate6 that in the colour phase transition all coloured quarks and antiquarks either annihilate or form colourless clusters and decouple out of plasma
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
550
3.2.1. The fifth family quarks in the expanding universe To solve the coupled Boltzmann equations for the number density of the fifth family quarks and the colourless clusters of the quarks in the plasma of all the other fermions (quarks, leptons) and bosons (gauge fields) in the thermal equilibrium in the expanding universe we estimate6 the cross sections for the annihilation of quarks with antiquarks and for forming clusters. We do this within some uncertainty intervals, which take into account the roughness of our estimations. Knowing only the interval for possible values of masses of the fifth family members we solve the Boltzmann equations for several values of quark masses, following the decoupling of the fifth family quarks and the fifth family neutrons out of the plasma down to the temperature 1 GeV/kb when the colour phase transition starts. The fifth family neutrons and antineutrons, packed into very tiny clusters so that they are totally decoupled from the plasma, do not feel the colour phase transition, while the fifth family quarks and coloured clusters (and antiquarks) do. Their scattering cross sections grow due to the nonperturbative behaviour of gluons (as do the scattering cross sections of all the other quarks and antiquarks). While the three of the lowest four families decay into the first family quarks, due to the corresponding Yukawa couplings, the fifth family quarks can not. Having the binding energy a few orders of magnitude larger than 1 GeV and moving in the rest of plasma of the first family quarks and antiquarks and of the other three families and gluons as very heavy objects with large scattering cross section, the fifth family coloured objects annihilate with their partners or form the colourless clusters (which result in the decoupling from the plasma) long before the temperature falls bellow a few MeV/k b when the first family quarks start to form the bound states. Following further the fifth family quarks in the expanding universe up to today and equating the today’s dark matter density with the calculated one, we estimated the mass interval of the fifth family quarks to be (10 TeV < mq5 c2 < a few hundreds TeV). The detailed calculations with all the needed explanations can be found in ref.6 3.2.2. Dynamics of a heavy family baryons in our galaxy and the direct measurements Although the average properties of the dark matter in the Milky way are pretty well known (the average dark matter density at the position of our Sun is expected to be ρ0 ≈ 0.3 GeV/(c2 cm3 ), and the average velocity of the dark matter constituents around the centre of our galaxy is expected to be approximately velocity of our Sun), their real local properties are known much less accurate, within the factor of 10 e . When evaluating6 the number of events which our fifth family members (or e In
a simple model that all the clusters at any radius r from the centre of our galaxy travel in all possible circles around the centre so that the paths are spherically symmetrically distributed, the velocity of a cluster at the position of the Earth is equal to vS , the velocity of our Sun in
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
551
any stable heavy family baryons or neutrinos) trigger in the direct measurements of DAMA9 and CDMS10 experiments, we take into account all these uncertainties as well as the uncertainties in the theoretical estimation and the experimental treatments. Let the dark matter member hits the Earth with the velocity ~vdm i . The velocity of the Earth around the centre of the galaxy is equal to: ~vE = ~vS + ~vES , vES 0 with vES = 30 km/s and ~vvSS·~ vES ≈ cos θ, θ = 60 , vS = (100 − 270) km/s. The dark matter cluster of the i- th velocity class hits the Earth with the velocity: ~vdmE i = ~vdm i − ~vE . Then the flux of our dark matter clusters hitting the Earth is: P ρ ε Φdm = i ρmdmi |~vdm i −~vE |, which can be approximated by Φdm = m0 c ρ {εvdmS vS + c5 5 εvdmES vES cos θ sin ωt}. The last term determines the annual modulations observed by DAMA.9 We estimate (due to experimental data and our theoretical evaluations) ε that 31 < εvdmS < 3 and 13 < εvvdmES < 3. dmS The cross section for our fifth family baryon to elastically scatter on an ordinary nucleus with A nucleons is σA ≈ π~1 2 < |Mc5 A | >2 m2A , where mA is the mass of the ordinary nucleus f . Since scattering is expected to be coherent the cross section is almost independent of the recoil velocity of the nucleus. Accordingly is the cross 1 section σ(A) ≈ σ0 A4 εσ , with σ0 εσ , which is 9 πrc25 εσnucl , with 30 < εσnucl < 30 (taking into account the roughness with which we treat our heavy baryon’s properties and the scattering procedure) when the ”nuclear force” dominates. In all the expressions the index c5 denotes the fifth family cluster, while the index nucl denotes the ordinary nucleus. Then the number of events per second (RA ) taking place in NA nuclei of some experiment is due to the flux Φdm equal to RA εcut = εcut NA mρc0 σ0 A4 vS ε (1 + εvdmES vES εvdmS vS
5
1 cos θ sin ωt), where we estimate that 300 < ε < 300 demonstrates the uncertainties in the knowledge about the dark matter dynamics in our galaxy and our approximate treating of the dark matter properties, while εcut determines the uncertainties in the detections. Taking these evaluations into account we predict that if DAMA9 is measuring our fifth family (any heavy stable) baryons then CDMS10 (or some other experiment) will measure in a few years these events as well6 provided that our fifth family quarks masses are higher than mq5 ≥ 200 TeV c2 .
4. Concluding Remarks I demonstrated in my talk that the Approach unifying spin and charges, which assumes in d = (1 + 13)−dimensional space a simple action for a gravitational field the absolute value, but has all possible orientations perpendicular to the radius r with equal probability. In the model that the clusters only oscillate through the centre of the galaxy, the velocities of the dark matter clusters at the Earth position have values from zero to the escape velocity, each one weighted so that all the contributions give ρdm . f Although our fifth family neutron or antineutron is very tiny, of the size of 10 −5 fm, it is very massive (a few hundreds TeV/c2 ). When a quark of the ordinary nucleus scatter on it, the whole nucleus scatters with it, since at the recoil energies the quarks are strongly bound.
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
552
(manifesting with the spin connections and the vielbeins) and massless fermions carrying only two kinds of the spin (the one presented by the Dirac matrices and the additional one anticommuting with the Dirac one–there is no the third kind of the spin), no charges, shows a new way beyond the Standard model of the electroweak and colour interactions. It namely, for example, explains the origin of families, of the Yukawa couplings, of the charges, the appearing of the corresponding gauge fields and of the (two kinds of) scalar fields, it explains why only the left handed spinors carry the weak charge while the right handed ones do not, where does the dark matter originate, and others. The action manifests at low (observable) energies after particular breaks of symmetries two times four families, with no Yukawa couplings (in comparison with the age of the universe) among the lower and the upper four families. The Approach predicts the fourth family, to be possibly observed at the LHC or at somewhat higher energies and the stable fifth family, whose baryons and neutrinos form the dark matter. I discussed briefly the properties of the lower four families, estimated at the tree level. Following the history of the fifth family members in the expanding universe up to today and estimating also the scattering properties of this fifth family on the ordinary matter, the evaluated masses of the fifth family quarks, under the assumption that the lowest mass fifth family baryon is the fifth family neutron, are in the interval 200 TeV < mq5 c2 < 105 TeV.
(8)
The fifth family neutrino mass is estimated to be in the interval: a few TeV < mν5 c2 < a few hundreds TeV. 4.1. Problems to be solved In the sections (4,1) the promising sides of the Approach unifying spin and charges are briefly over viewed and in the rest of talk the achievements presented. One of the conclusions we can make is that–since there are two kinds of the Clifford algebra objects and not only one (successfully used by Dirac 80 years ago) describing the spin of fermions and in the Approach the spin and the charges, and since the other generates the equivalent representations with respect to the first one, while the families are the equivalent representations with respect to the spin and the charges– the second kind of the spin can be used to describe families, or even must, since if not, we should explain, why only one kind of the spin manifests at the low energy regime. There are, however, several questions which should be solved before accepting the Approach as the right way beyond the Standard model, offering the right explanation for the origin of families, charges and gauge fields. Since the Approach assumes that the dimension of the space time is larger than (1+3), like do the Kaluza-Klein-like theories as well as the theories with strings and membranes, it must be asked what is at all the dimension of the space-time. What does cause or trigger the spontaneous
November 24, 2010
16:22
WSPC - Proceedings Trim Size: 9.75in x 6.5in
07.11˙Borstnik
553
breaks of symmetries in the evolution of the universe? What does determine the phase transitions? We put a lot of efforts in understanding the properties of higher dimensional spaces and in understanding the properties of fermions after breaking symmetries. The reader can find more about our understanding and the proposed solutions in the references.3 It is hard to evaluate how do the spontaneous (non adiabatic) breaks occur, what causes them and how do they influence the properties of all kinds of fields. To understand better the differences in the properties of a family members, we are estimating their properties bellow the tree level. Is it the expectation that there are effects bellow the tree level which are responsible for the differences in properties of family members (although some differences manifest already on the tree level) correct? Will such calculations show that the fifth family u-quark is heavier than the d-quark mainly due to the repulsive electrostatic contribution (≈ 100 GeV)? The estimation of the behaviour of the coloured fifth family clusters during the colour phase transition leads to the conclusion that the fifth family quarks either annihilate or form the fifth family neutrons or antineutrons. Will more accurate evaluations confirm this estimation? How many of the fifth family neutrinos and neutrons annihilate in the interval bellow the weak phase transition due to possibly large weak cross section for the annihilation in this region? Many a question presented here are under consideration, many of them not even written here wait to be studied. I invite the audience to contribute to next steps of (to my understanding) a really promising way beyond the Standard model. References 1. N.S. Mankoˇc Borˇstnik, Phys. Lett. B 292, 25 (1992), J. Math. Phys. 34, 3731 (1993), Modern Phys. Lett. A 10, 587 (1995), Int. J. Theor. Phys. 40, 315 (2001), hepph/0711.4681, p. 94-113, 53-113, hep-ph/0401043, hep-ph/0401055, hep-ph/0301029. 2. A. Borˇstnik Braˇciˇc, N.S. Mankoˇc Borˇstnik, Phys Rev. D 74, 073013 (2006). 3. N.S. Mankoˇc Borˇstnik, H. B. Nielsen, Phys. Rev. D 62, 04010 (2000), J. of Math. Phys. 43, 5782 (2002), J. of Math. Phys. 44, 4817 (2003), Phys. Lett. B 633, 771 (2006), Phys. Lett. B 644, 198 (2007), Phys. Lett. B 110, 1016 (2008). 4. N.S. Mankoˇc Borˇstnik, arXiv:0912.4532, p.119-135. 5. G. Bregar, M. Breskvar, D. Lukman, N.S. Mankoˇc Borˇstnik, New J. of Phys. 10, 093002 (2008). 6. G. Bregar, N.S. Mankoˇc Borˇstnik, Phys. Rev. D 80, 083534 (2009). 7. N.S. Mankoˇc Borˇstnik, arXiv:0912.4532, p.119-135. 8. D. Lukman, N.S. Mankoˇc Borˇstnik, H.B. Nielsen, arXiv:1001.4679. 9. R. Bernabei at al,Int. J. Mod. Phys. D 13, 2127 (2004). 10. Z. Ahmed et al., astro-ph/0802.3530.
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 11, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
PART VIII
High-Energy Gamma Rays, Cosmic Rays, Status and Explanations of the PAMELA/ATIC Anomaly
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
December 20, 2010
18:45
WSPC - Proceedings Trim Size: 9.75in x 6.5in
08.01˙Moulin
557
SEARCH FOR DARK MATTER THROUGH VERY HIGH ENERGY GAMMA-RAYS E. MOULIN∗ CEA - Saclay, DSM/IRFU/SPP, Gif-sur-Yvette, 91191, France ∗ E-mail: [email protected] www.irfu.cea.fr Annihilations of WIMPs can occur in high density regions of our Galaxy such as the Galactic Centre, dwarf galaxies and other types of substructures in Galactic haloes. High energy gamma-rays can be produced and may be detected by imaging atmospheric Cherenkov telescopes (IACTs). After a short overview of observations with current IACTs, basic principles of indirect detection through gamma-rays are given. Selected results on targeted searches such as satellites galaxies of the Milky Way are shown. In the absence of a clear signal, modelling the dark matter halo profile of these objects allows to put constraints on the particle physics parameters such as the annihilation cross section and the mass of the dark matter particle in the framework of models beyon d the Standard Model of Particle Physics. Besides theses searches are wide-field survey searches for DM substructures in the Galactic halo. The case for dark matter spikes around intermediate mass black holes will be discussed. Finally, the next generation of IACTs is presented. Keywords: Dark matter; Gamma-rays; Cherenkov telescopes.
1. Introduction During the last decades, compelling evidences have been accumulated suggesting a sizeable non-baryonic dark matter (DM) component in the total cosmological energy density of the Universe. The present estimate of the cold DM density is ΩCDM h2 ≃ 0.111 ± 0.0061 where the scaled Hubble parameter is h = 0.70 ± 0.02. At the galactic scales, evidences come from the measurements of the rotation curves for spiral galaxies as well as the gravitational lensing, and agree well with the predictions of N-body simulations of gravitational clustering in the CDM cosmology. At higher scales, the velocity dispersion of galaxies in galaxy cluster suggests high mass-tolight ratio, exceeding by at least one order of magnitude the ratio in the solar neighborhood. However, almost nothing is known about the intrinsic nature of the DM particle. Among the most widely discussed DM candidates are the WIMPs (Weakly Interacting Massive Particles). This particle can be in thermal equilibrium and in abundance in the primordial Universe. The equilibrium abundance is maintained through the annihilation with its antiparticle into lighter particles via the reaction
December 20, 2010
18:45
WSPC - Proceedings Trim Size: 9.75in x 6.5in
08.01˙Moulin
558
¯ χχ ¯ ⇀ ↽ l l. The abundance drops exponentially for non-relativistic particles as the Universe cools down. When the interaction rate becomes lower than the expansion rate, the interactions are not frequent enough to maintain the equilibrium abundance. At a given temperature, the equilibrium freeze-out and a relic cosmological abundance freezes in. The relic density is given by2 Ωχ h 2 ≃
3 × 10−27 cm3 s−1 , hσann vi
(1)
where the Hubble constant h has converged towards ∼0.7. h...i indicates taking a thermal average. For gauge couplings and masses of the order of the electroweak scale, the thermal relic density of massive particles automatically fullfill the WMAP constraints on the cold DM density. Tremendous experimental and theoretical efforts are currently at play to clarify the nature of dark matter. The detection strategies that have been devised to search for dark matter can be divided into three categories. 1) The production at accelerators is looking for missing energy, jets and high-pT particles in long decay chains; 2) The direct detection of DM particles in the Galactic DM halo aims in observing the recoiled nuclei from WIMPs scattering off of target nuclei in large undergound detectors; 3) The indirect detection of the annihilation products of two DM particles. The quest for the identification of the dark matter is a highly multidisciplinary field from cosmology to astrophysics to particle physics. The detection of the dark matter in any one of the experimental strategies will not be sufficient to conclusively elucidate the nature of dark matter. The direct and indirect detection of the dark matter particles making up the halo of our Galaxy is unlikely to provide enough information to reveal the underlying physics models behind these particles. On the other hand, collider experiments may identify long-lived particles, weakly interacting particle but will not be able to test its cosmological stability or abundance. Only the combination of the different approaches will allow to unveil the nature of the dark matter using the complementarity between the different techniques. The indirect detection technique may proceed through the measurement of positrons and antiprotons yields in the cosmic rays, the detection of neutrinos from the center of the Sun or the Earth or high energy gamma-rays from the WIMP annihilations in the Galactic halo or in external galaxies. All these discovery methods are searching for weak signals in overwhelming backgrounds. Indirect searches through gamma-rays may be in principle easier: 1) The propagation from the production region is not affected by significant scattering or absorption; 2) The annihilation rate depends on the square of the density making “hot spots” near high concentration of dark matter as predicted by large N-body cosmological simulations; 3) The presence of possible spectral features: no gamma-ray above the DM particle mass since no more energy than the DM particle mass per particle is released in the collision of two non-relativistic particles. The indirect detection through gamma-ray provide three types of signals. 1) A continuum of gamma-rays with a cut-off at the DM particle mass is ob-
2) Mono-energetic gamma-ray lines at the DM particle mass, produced via the reactions χχ → γγ and χχ → γZ. A mono-energetic gamma-ray line provides a unique signature, although it is challenging to detect because these loop-induced processes are highly suppressed. 3) When charged annihilation products are present, an additional photon can appear in the final state via radiative corrections (internal bremsstrahlung). These photons dominate at high energies, and a bump-like structure appears slightly below the DM particle mass.

2. Observations With IACTs

2.1. The imaging atmospheric Cherenkov technique

High-energy gamma-rays (E_γ ≳ 100 GeV) penetrating the Earth's upper atmosphere initiate electromagnetic showers via the production of electron-positron pairs and subsequent bremsstrahlung. For a 1 TeV gamma-ray, the maximum development of the shower occurs at a depth of about 300 g cm⁻², which corresponds to an altitude of 10 km above sea level (a.s.l.) for a vertically incident gamma-ray. The energy threshold for electrons and positrons to emit Cherenkov light is ∼40 MeV at 10 km a.s.l. The yield of Cherenkov light is proportional to the total track length of all particles, and thus to the primary gamma-ray energy; an image of the cascade therefore provides a pseudo-calorimetric measurement of the shower energy. The Cherenkov light opening angle is ∼1° in air, and the photons produced around the shower maximum arrive at observation heights of ∼2000 m a.s.l. in a light pool of ∼120 m radius. The Cherenkov photon density is ∼100 per m² per TeV. Given a typical instrumental efficiency of 10% (mirror reflectivity and photomultiplier quantum efficiency), a ∼100 m² optical reflector is required to obtain ∼100 photoelectrons in the shower image for a 100 GeV gamma-ray. The Cherenkov light flash lasts only a few nanoseconds, so fast photomultipliers and electronics are needed to extract this faint signal from the night-sky background light.
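The numbers quoted above can be combined into a back-of-the-envelope estimate of the photoelectron yield. The short sketch below simply restates the representative values given in the text (Cherenkov photon density, instrumental efficiency, mirror area); it is an illustration, not an instrument simulation:

```python
# Back-of-the-envelope photoelectron yield for a 100 GeV gamma-ray shower,
# using only the representative numbers quoted in the text (illustrative).
photon_density = 100.0   # Cherenkov photons per m^2 per TeV of primary energy
mirror_area    = 100.0   # m^2 optical reflector
efficiency     = 0.10    # mirror reflectivity x photomultiplier quantum efficiency
energy_tev     = 0.1     # 100 GeV primary

n_pe = photon_density * energy_tev * mirror_area * efficiency
print("expected photoelectrons ~ %.0f" % n_pe)   # ~100 p.e. in the shower image
```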
2.2. Background rejection

The Cherenkov technique faces the challenge of an overwhelming background from showers initiated by cosmic-ray protons and nuclei. For instance, the gamma-ray rate for the brightest objects detected by H.E.S.S. is only ∼0.1% of the background shower rate. Showers initiated by TeV protons and nuclei differ in many respects from gamma-ray showers. Most of the energy is released in pions produced in the first few interactions: neutral pion decays produce electromagnetic sub-showers, while charged pions decay into muons. Single muons reaching the ground produce rings when impacting the telescope dish, or arcs at larger impact distances. The sub-showers generally produce substructures in the shower image, and hadronic showers are generally wider than gamma-ray showers because of the larger transverse angular momentum implied by hadronic interactions. Moreover, for a given primary energy, hadronic interactions produce less Cherenkov light (2 to 3 times less at ∼1 TeV) due to the energy released in neutrinos, high-energy muons and hadrons in the shower core. The discrimination between hadron- and gamma-ray-induced showers relies on the width of the shower image: measurement and geometric parametrization of the image allow for an efficient background rejection.3 Since then, more sophisticated methods for background rejection and shower reconstruction have been developed.4

2.3. Using stereoscopy

Multiple views of individual air showers are very useful, as first demonstrated by the HEGRA collaboration.5 A multi-telescope trigger system removes the majority of muon- and hadron-initiated showers. At the analysis level, stereoscopy improves the reconstruction of the direction and energy of the primary gamma-ray. Although shower-axis reconstruction is possible with a single Cherenkov telescope, the multiple views of the shower allow a more accurate reconstruction of the shower direction, using the intersection of the major axes of the images recorded in the cameras. The shower core location can also be better determined, which improves the energy resolution, and a better hadronic rejection is obtained thanks to the improved shower geometry. The rejection parameter width can be replaced by the mean scaled width, normalised to the expectation for gamma-ray showers (for a given image amplitude and impact distance) and averaged over all telescopes. In a Cherenkov telescope array, the optimal separation of telescopes seems to be close to the radius of the Cherenkov light pool. The low-energy performance can be improved with closer spacing, at the expense of effective collection area at higher energies.

2.4. Current instruments

Following the success of the imaging Cherenkov technique pioneered by the 10 m Whipple telescope with the discovery of TeV emission from the Crab nebula in 1989, ground-based Cherenkov telescopes have established a new astronomical domain, with the firm detection of a few tens of sources. The current generation of IACTs have relatively small fields of view (a few degrees), and a duty cycle of about 10% is imposed by the need for good weather and complete darkness. The best IACTs reach angular and energy resolutions of ∼0.1° and 15%, respectively. Experiments are located in both the Southern and Northern hemispheres, allowing simultaneous and complementary observations of TeV sources. Table 1 summarises the main characteristics of the currently operating major IACTs.

H.E.S.S. (High Energy Stereoscopic System)6 is a four-telescope array located in the Khomas highlands of Namibia at an altitude of 1800 m above sea level (a.s.l.), completed in early 2004. Each telescope, 13 m in diameter, consists of an optical reflector of 107 m², and each camera is equipped with 960 photomultiplier tubes (PMTs).
Table 1. Main characteristics of currently operating IACTs. The energy threshold is given at the trigger level for observations close to the zenith. The approximate sensitivity is expressed as a percentage of the flux of the Crab Nebula (∼2×10⁻¹¹ cm⁻² s⁻¹ above 1 TeV), i.e. the minimum flux of a point-like source detectable at the 5σ level in 50 hours of observation.

Instrument      Lat.    Long.   Alt.   # of         Telescope    Pixels/   FoV     Threshold   Sensitivity
                (deg.)  (deg.)  (m)    telescopes   area (m²)    camera    (deg.)  (GeV)       (% Crab)
H.E.S.S.        -23      16     1800   4            107          960       5       100         0.7
MAGIC            29      18     2225   1^a          234          574^b     3.5      60         2
VERITAS          32    -111     1275   4            106          499       3.5      60         2
CANGAROO-III    -31     137      160   3            57.3         427       4       400         15

Note: ^a A second telescope has recently been completed; the sensitivity is expected to be improved by a factor of ∼3 with stereo operation. ^b This instrument has pixels of different sizes.
The latitude of H.E.S.S. and the combination of a wide field of view, very good angular resolution and good off-axis performance give this instrument an unprecedented sensitivity for accurately mapping the Galactic plane. VERITAS (Very Energetic Radiation Imaging Telescope Array System)7 is an array completed recently (April 2007) and situated at the base camp of the Whipple Observatory in Arizona at 1275 m a.s.l. Each telescope, 12 m in diameter, is equipped with a camera of 499 PMTs. It is similar to H.E.S.S. in several respects and can be considered as a complementary northern-hemisphere instrument. The MAGIC telescope,8 17 m in diameter with a mirror surface of 236 m², is located on the Canary island of La Palma at 2225 m a.s.l. and is considered the state of the art among single-dish instruments. The camera contains 574 PMTs. The instrument is optimised for low-energy measurements, as required for indirect DM searches, and its design has also been driven by the requirement to slew rapidly (∼5°/s) in case of GRB alerts. The CANGAROO-III (Collaboration of Australia and Nippon for a Gamma Ray Observatory in the Outback)9 instrument was completed in 2004. It is a system of four 10 m diameter telescopes continuing the CANGAROO project on a site near Woomera, Australia, at 160 m a.s.l. Each telescope has a surface of 75 m² and is equipped with a camera of 427 PMTs. Some controversies over the detection of certain sources with CANGAROO-II have now been superseded by the more sensitive observations of CANGAROO-III.

3. Gamma-Ray Flux From Dark Matter Annihilations

The gamma-ray flux expected from the annihilations of DM particles of mass m_DM accumulating in a spherical DM halo can be factorised into an astrophysical term and a particle physics term as

\frac{d\Phi(\Delta\Omega, E_\gamma)}{dE_\gamma} = \frac{1}{8\pi} \underbrace{\frac{\langle\sigma v\rangle}{m_\chi^2}\,\frac{dN_\gamma}{dE_\gamma}}_{\rm particle\ physics} \times \underbrace{\bar J(\Delta\Omega)\,\Delta\Omega}_{\rm astrophysics}\, .   (2)
The particle physics part contains ⟨σv⟩, the velocity-weighted annihilation cross section, and dN_γ/dE_γ, the differential gamma-ray spectrum summed over all final states with their corresponding branching ratios. The astrophysical factor corresponds to the integral over the line of sight (l.o.s.) of the squared density, averaged
over the solid angle ∆Ω in the direction ψ, and is defined by

\bar J(\psi, \Delta\Omega) = \frac{1}{\Delta\Omega} \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} ds\, \rho^2\big(r(s,\psi)\big)\, ,   (3)

where r = \sqrt{s^2 + s_0^2 - 2 s s_0 \cos\theta}, s_0 is the distance of the source from the Sun, and d\Omega = \sin\theta\, d\theta\, d\phi. For observations pointed at the target position, the integral along the l.o.s. is calculated over the interval s_{\rm min,max} = s_0\cos\theta \pm \sqrt{r_t^2 - s_0^2\sin^2\theta}, r_t being the outer (tidal) radius of the halo. For a point-like search, ∆Ω is usually taken to be 10⁻⁵ sr.

3.1. Dark matter halo modeling

Numerical simulations are generically used to tackle the problem of large-scale structure formation. The latest numerical simulations suggest the existence of a universal DM profile, with the same shape for all masses and epochs.^a One of the earliest parametrisations is the so-called NFW (Navarro, Frenk and White) profile,10 given by

\rho_{\rm NFW}(r) = \rho_0 \left(\frac{r}{r_s}\right)^{-1} \left(1 + \frac{r}{r_s}\right)^{-2} .   (4)

The normalisation ρ_0 and the scale radius r_s can be related to the virial mass and the concentration parameter through

\rho_0 = \frac{M_{\rm vir}}{4\pi r_s^3 f(c_{\rm vir})}\, , \qquad r_s = \frac{R_{\rm vir}}{c_{\rm vir}}\, ,   (5)
where the function f(x) ≡ ln(1+x) − x/(1+x), with x = r/r_s, is the volume integral of the NFW profile. The virial mass is related to the virial radius by

M_{\rm vir} = \frac{4\pi}{3}\, \rho_u\, R_{\rm vir}^3\, .   (6)
M_vir is defined as the mass inside the radius R_vir, with ρ_u the mean density of the Universe.^b The NFW profile can thus be equally well described by either of the two non-degenerate parameter pairs (ρ_0, r_s) or (M_vir, c_vir). However, numerical simulations have finite resolution, and the extrapolation of such cuspy profiles towards the centre of galaxies should be treated with caution, given the flat cores observed in some astrophysical systems. On galactic scales a cored profile is therefore often used; its analytical expression is

\rho_{\rm Core}(r) = \frac{v_a^2}{4\pi G}\, \frac{3 r_c^2 + r^2}{(r_c^2 + r^2)^2}\, ,   (7)

with r_c the core radius and v_a the velocity scale.

^a It is important to note that the exact value of the power-law index of the profile shape in the inner part of galaxies is still subject to debate.
^b ρ_u = 200 × ρ_c, with ρ_c the critical density of the universe.11
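To illustrate how Eqs. (3)-(4) are evaluated in practice, the sketch below numerically integrates the squared NFW density along the line of sight and averages it over a small solid angle. This is not the analysis code of any experiment; the halo parameters (ρ₀, r_s), the distance s₀ and the outer radius r_t are arbitrary placeholders chosen only so that the script runs:

```python
# Numerical sketch of the astrophysical factor of Eq. (3) for the NFW profile
# of Eq. (4).  All halo parameters below are illustrative placeholders,
# not values used in the paper.
import numpy as np
from scipy.integrate import quad

rho0 = 1.0e8      # M_sun kpc^-3, illustrative normalisation
rs   = 1.0        # kpc, illustrative scale radius
s0   = 24.0       # kpc, distance of the target from the Sun (illustrative)
rt   = 4.0        # kpc, illustrative outer (tidal) radius of the halo
dOmega = 2.0e-5   # sr, solid angle of the integration region

def rho_nfw(r):
    """NFW density, Eq. (4)."""
    x = r / rs
    return rho0 / (x * (1.0 + x) ** 2)

def los_integral(theta):
    """Integral of rho^2 along the line of sight at angle theta from the centre."""
    disc = rt ** 2 - (s0 * np.sin(theta)) ** 2
    if disc <= 0.0:
        return 0.0                       # line of sight misses the halo
    smin = s0 * np.cos(theta) - np.sqrt(disc)
    smax = s0 * np.cos(theta) + np.sqrt(disc)
    r = lambda s: np.sqrt(s ** 2 + s0 ** 2 - 2.0 * s * s0 * np.cos(theta))
    val, _ = quad(lambda s: rho_nfw(r(s)) ** 2, smin, smax, limit=200)
    return val

# Average over the cone of solid angle dOmega = 2*pi*(1 - cos(theta_max)).
theta_max = np.arccos(1.0 - dOmega / (2.0 * np.pi))
num, _ = quad(lambda t: los_integral(t) * np.sin(t), 0.0, theta_max, limit=200)
jbar = 2.0 * np.pi * num / dOmega
print("Jbar ~ %.2e M_sun^2 kpc^-5" % jbar)
```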
The determination of the free parameters of the NFW and cored profiles is done by fitting observational data on the astrophysical object using the Jeans equation. For a spherical galaxy, the mass enclosed within a radius r, M(r), is related to observables by

M(r) = -\frac{r \langle v_r^2\rangle}{G} \left( \frac{d\log\nu}{d\log r} + \frac{d\log\langle v_r^2\rangle}{d\log r} + 2\beta \right)\, ,   (8)

where ν is the luminosity density, ⟨v_r²⟩ is the radial velocity dispersion of the stars and β is their velocity anisotropy parameter. This method is applied in Ref. 12.

3.2. Exclusion limit calculation

In the absence of a clear signal, one can compute the minimum detectable velocity-weighted annihilation cross section ⟨σv⟩_min with the relation

\langle\sigma v\rangle^{95\%\,{\rm C.L.}}_{\rm min} = \frac{8\pi}{\bar J(\Delta\Omega)\,\Delta\Omega}\; \frac{m_{\rm DM}^2\, N_\gamma^{95\%\,{\rm C.L.}}}{T_{\rm obs} \int_0^{m_{\rm DM}} A_{\rm eff}(E_\gamma)\, \frac{dN_\gamma}{dE_\gamma}\, dE_\gamma}\, ,   (9)

where m_DM is the DM particle mass, dN_γ/dE_γ is the differential continuum photon spectrum, T_obs is the observation time and A_eff is the effective area of the instrument during the observations.
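To make Eq. (9) concrete, the toy evaluation below uses purely illustrative inputs: a flat effective area above an assumed threshold, a 50 h exposure, an assumed 95% C.L. event limit, an assumed value of J̄(∆Ω)∆Ω, and a generic continuum spectrum shape dN_γ/dx ∝ x^(-1.5) e^(-7.8x) with x = E_γ/m_DM, similar in form to parametrizations used in the literature. None of these numbers are taken from the paper:

```python
# Toy evaluation of Eq. (9).  Every input below is an illustrative placeholder
# (effective area, exposure, event limit, astrophysical factor, spectral shape);
# none of them is a value from the paper or from a real analysis.
import numpy as np
from scipy.integrate import quad

m_dm        = 1000.0           # DM mass in GeV (1 TeV)
N95         = 100.0            # assumed 95% C.L. limit on the number of excess events
T_obs       = 50.0 * 3600.0    # 50 h exposure, in seconds
A_eff       = 1.0e9            # cm^2 (~1e5 m^2), taken flat above the threshold
E_threshold = 100.0            # GeV, assumed analysis threshold
Jbar_dOmega = 1.0e19           # GeV^2 cm^-5, assumed J-bar(dOmega) x dOmega

# Generic continuum spectrum per annihilation, dN/dx with x = E/m_DM
# (a commonly used functional form, here only to fix the spectral shape).
dNdx = lambda x: 0.73 * np.exp(-7.8 * x) / x ** 1.5

spec_int, _ = quad(dNdx, E_threshold / m_dm, 1.0)     # photons per annihilation above threshold
denominator = T_obs * A_eff * spec_int                # T_obs * int A_eff (dN/dE) dE

sigma_v_min = 8.0 * np.pi / Jbar_dOmega * m_dm ** 2 * N95 / denominator
print("<sigma v>_min ~ %.1e cm^3 s^-1" % sigma_v_min) # of order 1e-24 for these inputs
```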
In the general MSSM, the continuum spectrum from neutralino annihilation is a priori not known, since the branching ratios of the annihilation channels are not uniquely determined. For a Higgsino-like neutralino annihilating mainly into W and Z boson pairs, the spectrum can be computed with the parametrization of Ref. 13. In some specific scenarios, the branching ratios of the annihilation channels can be computed once the field content of the DM particle is known. Another popular DM candidate arises in theories with universal extra dimensions (UED). In Kaluza-Klein scenarios with KK-parity conservation, the lightest KK particle (LKP) is stable; most often, the LKP is the first KK mode of the hypercharge gauge boson. In this case, LKP pairs annihilate mainly into fermion pairs: 35% into quark pairs and 59% into charged lepton pairs.14

4. Targeted Searches

4.1. The Galactic Center

Since 2004, H.E.S.S. observations towards the Galactic Centre have revealed a bright point-like gamma-ray source, HESS J1745-290,15 coincident in position with the supermassive black hole Sgr A*, with a size smaller than 15 pc. Diffuse emission along the Galactic plane has also been detected16 and correlates well with the mass density of molecular clouds in the Central Molecular Zone, as traced by CS emission. From the 2004 data set, the energy spectrum of the source is well fitted in the energy range 160 GeV – 30 TeV by a power law with a spectral index of 2.25±0.04_stat±0.1_syst. No deviation from a power law is observed, leading to an upper limit on the energy cut-off of 9 TeV (95% C.L.).
Fig. 1. HESS J1745-290 spectra derived from the 2004-2006 H.E.S.S. Galactic Centre data set. The shaded areas are the 1σ confidence intervals for the power law with an exponential cut-off fit (left) and the smoothed broken power law fit (right). The last points represent 95% confidence level upper limits on the flux. Figure extracted from Ref. 20.
According to recent detailed studies,17 the source is located at an angular distance of 7.3″±8.7″_stat±8.5″_syst from Sgr A*. The pointing accuracy allows one to discard the association of the very high energy (VHE) emission with the centre of the radio emission of the supernova remnant Sgr A East, but the association with the pulsar wind nebula G359.95-0.04 cannot be ruled out. MAGIC observations towards the Galactic Centre have been carried out since 2004 and revealed strong emission.18 The observed excess in the direction of the GC has a significance of 7.3σ and is compatible with a point-like source. The large zenith observation angle (≳60°) implies an energy threshold of ∼400 GeV. The source position and the flux level are consistent with the H.E.S.S. measurement within errors. The differential flux is well described by a power law of index 2.2±0.2_stat±0.2_syst. The flux level is steady within errors on the time scales explored by these observations, as well as over the two-year time span between the MAGIC and H.E.S.S. observations.

Besides plausible astrophysical origins (see e.g. Ref. 19 and references therein), an alternative explanation is the annihilation of DM in the central cusp of our Galaxy. The spectrum of HESS J1745-290 shows no indication of gamma-ray lines. The observed gamma-ray flux may also result from secondaries of DM annihilation; however, the hypothesis that the spectrum measured by H.E.S.S. originates only from DM particle annihilations is highly disfavored.15 Plausible astrophysical emitters may account for the observed signal, even if a DM contribution cannot be excluded; if a DM signal exists, it is certainly overwhelmed by the astrophysical signals.

Various mechanisms have been suggested to explain the astrophysical GC emission over the broadband spectrum.
The stochastic acceleration of electrons interacting with the turbulent magnetic field in the vicinity of Sgr A*21 would explain the millimetre and sub-millimetre emission. In this model the IR and X-ray flaring22 is also reproduced. It assumes in addition that charged particles are accreted onto the black hole, and predicts the escape of protons from the accretion disk and their acceleration. Neutral pions are then produced by inelastic collisions with the interstellar medium in the central star cluster of the Galaxy. The energy cut-off in the gamma-ray spectrum could reflect an energy cut-off in the primary proton spectrum; in that case, a cut-off in the gamma-ray spectral shape at E_cut ∼ E_cut,p/30 is expected, which would correspond to E_cut,p of about 400 TeV. Alternatively, energy-dependent diffusion models of protons out of the central few parsecs of the Milky Way19 have been advocated. They would imply a spectral break due to the competition between the injection and the escape of protons from the vicinity of the GC. The pulsar wind nebula G359.95-0.04, located 8 arcsec from Sgr A*, can also explain the steepening of the measured spectrum of HESS J1745-290.23 At least a fraction of the TeV emission may be explained by inverse Compton emission from a population of electrons whose energies extend up to 100 TeV. This model would imply a flux constant in time, since the time scale for global PWN changes is typically much longer than a few years. The absence of TeV variability suggests that the emission mechanisms and emission regions differ from those invoked for the variable IR and X-ray emission. The models mentioned above can both accommodate a cut-off in the gamma-ray energy spectrum and predict the absence of variability in the TeV emission.
4.2. Observations of galaxy satellites of the Milky Way

Dwarf spheroidal galaxies in the Local Group are considered privileged targets for DM searches, since they are amongst the most extreme DM-dominated environments. Measurements of their roughly constant radial velocity dispersion of stars usually imply large mass-to-luminosity ratios. Nearby dwarfs are ideal astrophysical probes of the nature of DM as they usually consist of a stellar population with no hot or warm gas, no cosmic-ray population and little dust; these systems are therefore expected to have a low intrinsic gamma-ray emission. This is in contrast with the Galactic Centre, where disentangling the dominant astrophysical signal from a possible more exotic one is very challenging. Observation campaigns on dwarf galaxies have been carried out by IACTs for a few years.12,24–27

The Sagittarius (Sgr) dwarf galaxy is located ∼24 kpc from the Sun and is one of the nearest Galaxy satellites of the Local Group. Sgr has been observed by H.E.S.S. since 2006.12 No significant gamma-ray excess is detected at the nominal target position. A 95% C.L. upper limit on the gamma-ray flux from standard astrophysical emission is derived, Φ_γ^{95% C.L.}(E_γ > 250 GeV) = 3.6 × 10⁻¹² cm⁻² s⁻¹, assuming a power-law spectrum of spectral index 2.2. Since Sgr has made at least ten Milky Way crossings, it should contain a substantial amount of DM, otherwise it would have been entirely disrupted. However, the DM halo modelling is even more
Fig. 2. Left: Upper limits at 95% C.L. on ⟨σv⟩ versus the neutralino mass for a cusped NFW and a cored DM halo profile for Sgr. The predictions in pMSSM are also plotted, together with those satisfying the WMAP constraints on the cold DM density. Right: Predictions of mSUGRA models for the thermally averaged neutralino annihilation cross section as a function of the neutralino mass. Benchmark models are plotted (red dots); the red boxes indicate the flux upper limit. See text for more details.24
difficult. Two models of the mass distribution of the DM halo have been studied, a cusped NFW profile and a cored isothermal profile, to encompass a large class of plausible halo profiles. The left-hand side of Fig. 2 presents the constraints on the velocity-weighted annihilation cross section ⟨σv⟩ for the cusped NFW and cored profiles in the solid-angle integration region ∆Ω = 2 × 10⁻⁵ sr, for neutralino DM. Predictions for SUSY models are also displayed. For a cusped NFW profile, H.E.S.S. does not set severe constraints on ⟨σv⟩. For a cored profile, owing to the higher central density, stronger constraints are derived, and some pMSSM models can be excluded in the upper part of the scanned region.

The star velocity dispersions in Draco reveal that this object is dominated by DM on all spatial scales and provide robust bounds on its DM profile. Reduced tidal effects from the Milky Way are expected compared to Sgr, which decreases the uncertainties on the astrophysical factor. The MAGIC collaboration searched for steady gamma-ray emission from the direction of Draco.24 The analysis energy threshold after cuts is 140 GeV. No significant excess is found. For a power law with spectral index 1.5, typical of a DM annihilation spectrum, and assuming a point-like source, the 2σ upper limit is Φ_γ(E_γ > 140 GeV) = 1.1 × 10⁻¹¹ cm⁻² s⁻¹. The measured flux upper limit is several orders of magnitude larger than that predicted for the smooth DM distribution in mSUGRA models; the corresponding limit on the flux enhancement caused by clumpy substructures or a black hole lies in the range from 10³ to 10⁹.

Recent N-body simulations of hierarchical structure formation, such as Via Lactea or Aquarius, reveal the presence of DM substructures in Galactic halos. Since these results are scale-invariant, such substructures may also be present in dwarf-galaxy halos and may have consequences for indirect DM searches towards dwarf galaxies.
For point-like searches towards the centre of dwarf satellite galaxies, the boost factor in the predicted gamma-ray flux is of only a few percent. No significant enhancement with respect to the smooth contribution is thus expected.

5. Searches For Dark Matter Clumps

The H.E.S.S. observations of the Galactic plane between 2004 and 2007 allowed for the first time a large field of view to be accurately mapped in the TeV energy range. This survey results in a map of the Galactic plane between ±3° in galactic latitude and from -30° to 60° in galactic longitude with respect to the Galactic Centre position. Such a map paves the way for blind DM searches, i.e. searches for which the position of the DM target is not known a priori. The first study of the sensitivity to dark matter annihilations in a large field of view has been performed by H.E.S.S.28 Fig. 3 shows the experimentally observed sensitivity map in the Galactic plane, from galactic longitudes l = -30° to l = +60° and galactic latitudes b = -3° to b = +3°, for a DM particle of 500 GeV mass annihilating with 100% BR into bb̄.

Mini-spikes around Intermediate Mass Black Holes (IMBHs) have recently been proposed as promising targets for indirect dark matter detection.29 The growth of massive black holes inevitably affects the surrounding DM distribution. The profile of the final DM overdensity, called a mini-spike, depends on the initial distribution of DM, but also on astrophysical processes such as gravitational scattering of stars and mergers. Ignoring astrophysical effects, and assuming adiabatic growth of the black hole, if one starts from an NFW profile a spike with a power-law index of 7/3 is obtained, as relevant for the astrophysical formation scenario studied here,
Fig. 3. H.E.S.S. sensitivity map in Galactic coordinates, i.e. 90% C.L. limit on the integrated gamma-ray flux above 100 GeV, for DM annihilation assuming a DM particle of 500 GeV mass annihilating into the bb̄ channel. The flux sensitivity is correlated with the exposure and acceptance maps.
Fig. 4. Constraints at 90% C.L. on the IMBH gamma-ray production scenario for different neutralino parameters, shown as upper limits on ⟨σv⟩ as a function of the neutralino mass m_DM (grey shaded area). See Ref. 28 for more details. The DM particle is assumed to be a neutralino annihilating into bb̄ and τ⁺τ⁻ pairs, to encompass the softest and hardest annihilation spectra. The limit is derived from the H.E.S.S. flux sensitivity in the Galactic plane survey within the mini-spike scenario. SUSY models (black points) are plotted together with those satisfying the WMAP constraints on the DM particle relic density (magenta points).
characterized by black hole masses of 10⁵ M⊙. Mini-spikes might be detected as bright point-like sources by current IACTs. No IMBH candidate has been detected so far by H.E.S.S. within the survey range. Based on the absence of plausible IMBH candidates in the H.E.S.S. data, constraints are derived on scenario B of Ref. 29 for neutralino or LKP annihilations, shown as upper limits on ⟨σv⟩.28 Fig. 4 shows the exclusion limit at the 90% C.L. on ⟨σv⟩ as a function of the neutralino mass. The neutralino is assumed to annihilate into bb̄ or τ⁺τ⁻ with 100% BR, respectively. Predictions for SUSY models are also displayed. The limits on ⟨σv⟩ are at the level of 10⁻²⁸ cm³ s⁻¹ for the bb̄ channel for neutralino masses in the TeV range. The limits are obtained within one mini-spike scenario and constrain the entire gamma-ray production scenario.

6. Next Generation Of Ground-Based Cherenkov Telescopes

6.1. A large Cherenkov telescope array

Following the success of the currently operating Cherenkov telescope arrays, i.e. H.E.S.S., MAGIC and VERITAS, the next generation of IACTs is the result of a joint effort of the IACT community to design a km²-sized array of Cherenkov telescopes, such as CTA (Cherenkov Telescope Array)30 or AGIS (Advanced Gamma-ray Imaging System),31 to improve the overall capabilities of the present generation. CTA will consist of an array of a few tens to a thousand telescopes of 2 to 3 different sizes, to extend the accessible energy range towards both lower and higher energies.
The use of different telescope sizes will extend the accessible energy range to almost four orders of magnitude, from about 30 GeV up to 100 TeV, whereas arrays of identical telescopes are usually restricted to two orders of magnitude in energy. CTA should achieve an angular resolution better than that of H.E.S.S. by a factor of two to three, and gain at least a factor of ten in sensitivity.

6.2. A future view of the Galactic plane by a CTA-like observatory
The H.E.S.S. survey of the Galactic plane revealed a new population of gamma-ray sources.32,33 Although progress has been made in understanding the acceleration processes at work in these astrophysical objects, some of them remain unidentified. Extrapolating the Galactic plane survey population to lower fluxes allows one to investigate the potential of a CTA-like observatory. Given the increased capabilities of the future observatory, the sensitivity is expected to improve by an order of magnitude, and the larger population of detected sources will improve the understanding of Galactic gamma-ray sources. The source population is modeled on the population of H.E.S.S.-detected sources, using their latitude and longitude distributions, the gamma-ray flux distribution, the angular distribution and the number of detected sources. An SNR-type population model is assumed, with an SN rate of 10 per century and an efficiency of transferring explosion energy into kinetic energy of protons of ∼9%; the gamma-ray flux from π⁰ decay is calculated using Ref. 34, and the radial distribution of sources is extracted from Ref. 35. An example of a future view of the Galactic plane by a CTA-like observatory is shown in Fig. 5. Here, a flat exposure of 5 hours at each position of the map is assumed.
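The population extrapolation described above can be mimicked with a very crude Monte Carlo. The sketch below is not the simulation used for Fig. 5; the source density profile, luminosity and sensitivity values are placeholders chosen only to illustrate how the number of detectable sources scales with the instrument sensitivity:

```python
# Crude toy model of a Galactic source population seen from the Sun.
# All distributions and numbers are illustrative placeholders, not those
# used to produce Fig. 5.
import numpy as np

rng = np.random.default_rng(1)

n_src   = 3000          # number of simulated sources
R0      = 8.5           # kpc, Sun-Galactic Centre distance (standard value)
scale_R = 4.0           # kpc, scale of an assumed exponential radial distribution
L       = 1.0e34        # photons s^-1 above 1 TeV, single assumed luminosity

# galactocentric radius (toy exponential disk), azimuth, small vertical spread
R   = rng.exponential(scale_R, n_src)
phi = rng.uniform(0.0, 2.0 * np.pi, n_src)
z   = rng.normal(0.0, 0.05, n_src)                     # kpc

# distance from the Sun, placed at (R0, 0, 0)
x, y = R * np.cos(phi), R * np.sin(phi)
d_cm = np.sqrt((x - R0) ** 2 + y ** 2 + z ** 2) * 3.086e21   # cm

flux = L / (4.0 * np.pi * d_cm ** 2)                   # photons cm^-2 s^-1 above 1 TeV

crab = 2.0e-11                                         # cm^-2 s^-1 above 1 TeV (Table 1)
for sens in (0.01 * crab, 0.001 * crab):               # "current-like" vs "10x better"
    print("sensitivity %.0e: %d detectable sources" % (sens, np.sum(flux > sens)))
```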
Fig. 5. A simulated view of what may be seen in the Galactic plane by a future CTA-like observatory. The upper plot shows a simulated view by H.E.S.S. using the real exposure. The bottom plot is an extrapolated view for a CTA-like observatory, assuming a collection area larger by a factor of 10, a flat exposure of 5 hours and a background rejection improved by a factor of 2, resulting in an overall sensitivity improvement of ∼10.
References
1. E. Komatsu et al., Submitted to Astrophys. J. Suppl. Ser. (2010).
2. G. Jungman, M. Kamionkowski and K. Griest, Phys. Rept. 267, 195 (1996).
3. A. M. Hillas, Space Science Reviews 75, 17 (January 1996).
4. M. de Naurois and L. Rolland (2009).
5. http://www.mpi-hd.mpg.de/hfm/HEGRA/
6. http://www.mpi-hd.mpg.de/hfm/HESS/
7. http://veritas.sao.arizona.edu/
8. http://wwwmagic.mppmu.mpg.de/
9. http://icrhp9.icrr.u-tokyo.ac.jp/
10. J. F. Navarro, C. S. Frenk and S. D. M. White, Astrophys. J. 490, 493 (1997).
11. C. Amsler et al., Phys. Lett. B667, p. 1 (2008).
12. F. Aharonian et al., Astropart. Phys. 29, 55 (2008); Erratum: ibid. 33, 274 (2010).
13. L. Bergstrom, Rept. Prog. Phys. 63, p. 793 (2000).
14. G. Servant and T. M. P. Tait, Nucl. Phys. B650, 391 (2003).
15. F. Aharonian et al., Phys. Rev. Lett. 97, p. 221102 (2006).
16. F. Aharonian et al., Nature 439, 695 (2006).
17. H.E.S.S. ICRC contributions (2007).
18. J. Albert et al., Astrophys. J. 638, L101 (2006).
19. F. Aharonian and A. Neronov, Astrophys. J. 619, 306 (2005).
20. F. Aharonian et al., Astron. Astrophys. 503, 817 (2009).
21. S.-M. Liu, F. Melia, V. Petrosian and M. Fatuzzo, Astrophys. J. 647, 1099 (2006).
22. A. Atoyan and C. D. Dermer, Astrophys. J. 617, L123 (2004).
23. J. A. Hinton and F. A. Aharonian, Astrophys. J. 657, 302 (2007).
24. J. Albert et al., Astrophys. J. 679, 428 (2008).
25. F. Aharonian et al., Astrophys. J. 691, p. 175 (2009).
26. E. Aliu et al., Astrophys. J. 697, 1299 (2009).
27. R. G. Wagner, for the VERITAS Collaboration (2009).
28. F. Aharonian et al., Phys. Rev. D78, p. 072008 (2008).
29. G. Bertone, A. R. Zentner and J. Silk, Phys. Rev. D72, p. 103517 (2005).
30. http://www.cta-observatory.org/
31. http://www.agis-observatory.org/
32. F. Aharonian et al., Science 307, 1938 (2005).
33. F. Aharonian et al., Astrophys. J. 636, 777 (2006).
34. L. O. Drury, F. A. Aharonian and H. J. Volk, Astron. Astrophys. 287, 959 (1994).
35. G. Case and D. Bhattacharya, Astron. Astrophys. Suppl. Ser. 120, C437+ (1996).
THE PIERRE AUGER OBSERVATORY: RECENT RESULTS AND FUTURE PLANS

J. L. KELLEY∗, for the Pierre Auger Collaboration

Department of Astrophysics / IMAPP, Radboud University Nijmegen, Nijmegen, The Netherlands
∗E-mail: [email protected]

The Pierre Auger Observatory is a hybrid air shower experiment which uses multiple detection techniques to investigate the origin, spectrum, and composition of ultra-high energy cosmic rays. We present recent results on these topics as well as their implications for physics beyond the Standard Model, such as violation of Lorentz invariance and “top-down” models of cosmic ray production. Future plans, including enhancements underway at the southern site, are also discussed.
1. Introduction

The cosmic ray energy spectrum extends to extremely high energies, up to at least 10²⁰ eV. Due to the steepness of the spectrum, these ultra-high energy cosmic rays (UHECRs) cannot be detected directly, but can be studied by detection of the extensive air showers produced when they interact in Earth's atmosphere. This allows the construction of large arrays which provide the collection area necessary to detect such rare particles, arriving with a flux of 1 per km² per century.

The study of UHECRs provides a number of possibilities to probe physics beyond the Standard Model. While the conventional explanation for the source of UHECRs is astrophysical accelerators, the possibility exists that they instead come from the decay of super-heavy particles created in the early Universe. These so-called “top-down” models can be directly tested, as they also predict a sizable high-energy photon and/or neutrino component of the UHECR flux. New physics may also emerge in high-energy particle interactions, which take place at center-of-mass energies two orders of magnitude higher than what is currently achievable with terrestrial particle accelerators. Propagation of UHECRs through the cosmic microwave background (CMB) to Earth can probe Lorentz invariance at very high boost factors (γ ≈ 10¹¹; see Ref. 1 for a review). The high-energy collisions in the atmosphere also probe hadronic interactions far beyond current accelerator data.
2. The Pierre Auger Observatory

The Pierre Auger Observatory is a hybrid cosmic ray air shower experiment that uses multiple detection techniques to reconstruct cosmic ray energy, direction, and particle type from the characteristics of the associated extensive air shower. The southern site of the observatory, located in Mendoza, Argentina, was completed in 2008 and covers an area of 3000 km² (see Fig. 1). A northern site of approximately 21,000 km² is planned for Colorado in the United States.
Fig. 1. Layout of the southern site of the Pierre Auger Observatory. The points indicate the Surface Detector stations (those within the shaded area are operational), and the rays indicate the field of view of each of the 24 telescopes of the Fluorescence Detector.
The observatory records the particle shower front as it reaches the ground with the Surface Detector (SD), which consists of 1600 water-Cherenkov stations arranged on a triangular grid with 1.5 km spacing. The SD operates with nearly 100% livetime, and the trigger efficiency for events with zenith angles less than 60° is approximately 100% for proton and iron primaries with energies above 3 × 10¹⁸ eV.2 The timing of the signals in the stations is used to reconstruct the arrival direction, typically to better than 1°.3 The longitudinal shower development is observed by the Fluorescence Detector (FD), consisting of 24 telescopes that overlook the SD from four sites around the array.4 Each telescope uses a mirror and an array of 440 photomultiplier tubes to track the fluorescence light emitted by excited nitrogen molecules as the air shower
deposits energy into the atmosphere. Integrating this light deposition provides a nearly calorimetric measurement of the shower energy. The livetime of the FD is approximately 13%, as its use requires clear, dark nights. However, the subset of hybrid events recorded in both SD and FD provides not only an improvement in angular resolution but also allows cross-calibration of the energy scale. A sample hybrid event display is shown in Fig. 2.
Fig. 2. Sample Auger hybrid event, with SD event display and reconstruction shown at top, and FD longitudinal profile and reconstruction shown at bottom.
3. Results

3.1. Energy spectrum

The behavior of the UHECR energy spectrum approaching 10²⁰ eV is of particular interest. A suppression of events was predicted by Greisen,5 Zatsepin, and Kuz’min6 from the interaction of charged particles with the cosmic microwave background (CMB). This “GZK effect” occurs when the center-of-mass energy of the proton or nucleus and the CMB photon exceeds the threshold for photopion production or photo-dissociation, effectively making the Universe opaque to cosmic rays of energy above 5 × 10¹⁹ eV. Results by the AGASA experiment7 suggested no GZK
suppression, with one possible explanation being violation of Lorentz invariance;8 due to limited statistics, however, the result was inconclusive. With increased exposure, both the HiRes and Auger experiments have since observed a suppression in the spectrum above 3 × 10¹⁹ eV.9,10 Assuming this is indeed due to the GZK effect, it can be used to set a limit on Lorentz violation.11–13 However, other explanations of the spectral steepening, such as a cutoff in the source spectrum, cannot be ruled out. The most recent energy spectrum from Auger, combining data from both SD and FD, is shown in Fig. 3. The total systematic uncertainty on the energy scale is ∼22%,16 and an overall error in the energy scale can easily explain the discrepancy between the HiRes and Auger points.
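As a rough cross-check of the energy scale quoted above, the photopion-production threshold for a head-on collision of a proton with a typical CMB photon can be estimated from relativistic kinematics. This is an order-of-magnitude sketch; the CMB photon energy used is a representative mean value, and interactions with the more energetic tail of the CMB distribution bring the effective suppression threshold somewhat lower:

```python
# Head-on p + gamma_CMB -> p + pi0 threshold, order-of-magnitude estimate.
m_p   = 938.272e6        # eV, proton mass
m_pi  = 134.977e6        # eV, neutral pion mass
e_cmb = 6.3e-4           # eV, representative mean CMB photon energy (~2.7 kT, T = 2.73 K)

# s_threshold = (m_p + m_pi)^2 ; for a head-on collision s ~ m_p^2 + 4 E_p E_gamma
E_p_threshold = ((m_p + m_pi) ** 2 - m_p ** 2) / (4.0 * e_cmb)
print("E_p threshold ~ %.1e eV" % E_p_threshold)   # ~1e20 eV; the observed suppression
                                                   # sets in a few times lower
```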
Fig. 3. Combined SD and FD UHECR energy spectrum (multiplied by E³).14 Data points from the HiRes experiment (open circles) are shown for comparison.
3.2. Arrival directions

Cosmic rays, being primarily charged particles, are deflected by galactic and extragalactic magnetic fields, and so directional information about their sources is lost. However, for proton primaries above 50–60 EeV, the deflection may only be a few degrees, allowing correlations with source classes of objects. In 2007, an a priori sequential analysis provided evidence of angular correlation of events above 57 EeV with a subset of nearby active galactic nuclei (AGN) from the Véron-Cetty catalog.3 A follow-up analysis includes an additional 31 high-energy
events recorded since the initial publication; of these, 8 correlate with the AGN subset, weakening the overall significance. A posteriori investigations into clustering in the region around Centaurus A and correlations with other catalogs, such as SWIFT-BAT, are in progress.15
3.3. Composition

The longitudinal shower development provides information on the cosmic ray primary particle type, as showers from protons, heavy nuclei, photons, and neutrinos will interact at different depths in the atmosphere and will develop differently. At a given energy, showers from iron primaries will interact higher in the atmosphere and deposit their energy sooner than proton showers. This penetration depth is characterized by X_max, the integrated atmospheric column density, in g/cm², at which the shower reaches its maximum. Fluctuations in X_max from shower to shower will also be smaller for iron, as to first order an iron shower can be considered as a superposition of single-nucleon showers. The uncertainty in the reconstruction of X_max, as determined with events recorded at multiple FD sites, is 20 g/cm².17

The shower ⟨X_max⟩ and the fluctuations rms(X_max) as a function of energy are shown in Fig. 4, measured using nearly 4000 events above 10¹⁸ eV. Both measurements suggest that the composition becomes heavier above 3 × 10¹⁸ eV, if the hadronic interaction models are correct at these energies. However, an increase in the proton cross section may also explain some of these features.18 The composition of the highest energy cosmic rays has an immediate impact on the anisotropy analysis discussed in section 3.2, as it is unlikely that iron primaries will have significant correlations with their sources, due to their larger deflection. However, we note that these composition data do not extend to the energy range of the correlation / anisotropy analysis.
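The superposition argument can be made quantitative with a one-line estimate (a sketch; the elongation rate used is a representative value for hadronic showers, not a number from the paper): treating an iron nucleus of energy E as 56 independent nucleons of energy E/56 shifts the depth of shower maximum by roughly the elongation rate times log₁₀ 56.

```python
# Superposition-model estimate of the proton-iron separation in X_max.
import numpy as np

A = 56                      # mass number of iron
elongation_rate = 55.0      # g cm^-2 per decade of energy (representative value)

delta_xmax = elongation_rate * np.log10(A)
print("X_max(p) - X_max(Fe) ~ %.0f g/cm^2" % delta_xmax)   # ~100 g/cm^2, roughly the
                                                           # proton-iron separation in Fig. 4
```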
Fig. 4. Air shower X_max and rms(X_max) as a function of energy.17 Simulations of proton and iron primaries using various hadronic interaction models (QGSJET01, QGSJETII, Sibyll 2.1, EPOS v1.99) are shown as lines.
3.4. Neutrino flux

The Surface Detector of Auger can also be used as a neutrino detector, by searching for highly inclined showers near the horizon. Only a neutrino primary can penetrate deep into the atmosphere before interacting, so discriminating between “old” and “young” showers can reject the background of conventional cosmic rays. In practice, this is achieved by using the width of the time distribution of the signals recorded in the Cherenkov stations: “old” showers, consisting primarily of a muon bundle, will result in a sharp, narrow signal compared to “young” showers with a large electromagnetic component. In addition, ν_τ can skim through a chord of the Earth, regenerating via multiple τ production and decay cycles and resulting in an “upgoing” shower at the array.19 No candidate neutrino events have been observed, resulting in upper limits on the diffuse flux of neutrinos (see Fig. 5). The upper limit at the 90% confidence level on a diffuse E⁻² ν_τ flux is E² dN/dE < 1.3 × 10⁻⁷ GeV cm⁻² s⁻¹ sr⁻¹.
Fig. 5. Upper limits on the diffuse high-energy neutrino flux in differential and integral format (downgoing and Earth-skimming Auger limits, together with limits from HiRes, RICE, AMANDA and ANITA).19,20 A range of predictions for the cosmogenic / GZK neutrino flux is indicated by the gray band.
3.5. Photon fraction As discussed in section 1, UHECRs may not be accelerated in astrophysical objects, but could originate in the decays of heavy particles. Numerous models exist for such “top-down” scenarios, such as super-heavy dark matter (SHDM), topological
defects (TD), and interactions with the relic neutrino background (Z-bursts), but many share the common feature of predicting a significant fraction of photons in the UHECR flux, from O(10%) at 10¹⁹ eV to over 50% at 10²⁰ eV. Photon-initiated showers penetrate more deeply and have a higher X_max than proton-induced showers at a given energy, due to decreased secondary multiplicities and suppression of cross sections by the LPM effect. Events with unexpectedly large X_max can be searched for directly with the FD, or shower parameters such as the radius of curvature and signal risetime can be used with the SD. The techniques are effective in different energy ranges, with the FD search effective at EeV energies and the SD at energies above 10 EeV. In both analyses, the data are consistent with only proton or nuclear primaries.21,22 Upper limits on the photon fraction are shown in Fig. 6 and strongly constrain many top-down models of UHECR production.
Fig. 6. Upper limits at the 95% confidence level on the photon fraction of the integral cosmic ray flux for Auger (hybrid and SD), AGASA (A1,A2), AGASA-Yakutsk (AY), Yakutsk (Y), and Haverah Park (HP). The lines show predicted fluxes for various top-down models of UHECR production, and the shaded region shows the expected GZK photon fraction. See Ref. 21 and included references for more details.
4. Future Plans

Several enhancements to the observatory are under development at the southern site. These include HEAT, the High Elevation Auger Telescopes, which will extend the fluorescence technique to lower energies;23 AMIGA, the Auger Muon and Infill Ground Array, an area of more densely spaced Cherenkov stations enhanced
with muon detectors;24 and AERA, the Auger Engineering Radio Array, a 20-km² antenna array which will detect air showers via radio pulses produced by e⁺e⁻ interactions with the geomagnetic field.25 Finally, research and development is underway for the northern site of the observatory, which, with its larger collecting area, will allow extension of these measurements to even higher energy.

References
1. W. Bietenholz, arXiv:0806.3713 [hep-ph].
2. Pierre Auger Collaboration [J. Abraham et al.], Nucl. Instr. Meth. Phys. Res. A 613 (2010) 29.
3. Pierre Auger Collaboration [J. Abraham et al.], Science 318 (2007) 939.
4. Pierre Auger Collaboration [J. Abraham et al.], arXiv:0907.4282 [astro-ph].
5. K. Greisen, Phys. Rev. Lett. 16 (1966) 748.
6. G. T. Zatsepin and V. A. Kuz'min, JETP Lett. 4 (1966) 78.
7. M. Takeda et al., Astropart. Phys. 19 (2003) 447.
8. S. T. Scully and F. W. Stecker, Astropart. Phys. 23 (2005) 203.
9. R. U. Abbasi et al., Phys. Rev. Lett. 100 (2008) 101101.
10. Pierre Auger Collaboration [J. Abraham et al.], Phys. Rev. Lett. 101 (2008) 061101.
11. S. T. Scully and F. W. Stecker, Astropart. Phys. 31 (2009) 220.
12. X.-J. Bi et al., Phys. Rev. D 79 (2009) 083015.
13. L. Maccione, A. M. Taylor, D. Mattingly, and S. Liberati, JCAP 04 (2009) 022.
14. Pierre Auger Collaboration [J. Abraham et al.], Phys. Lett. B 685 (2010) 239.
15. J. D. Hague [Pierre Auger Collaboration], Proc. 31st Intl. Cosmic Ray Conf. (Łódź, Poland), arXiv:0906.2347 [astro-ph].
16. C. Di Giulio [Pierre Auger Collaboration], Proc. 31st Intl. Cosmic Ray Conf. (Łódź, Poland), arXiv:0906.2189 [astro-ph].
17. Pierre Auger Collaboration [J. Abraham et al.], Phys. Rev. Lett. 104 (2010) 091101.
18. R. Ulrich et al., Nucl. Phys. B Proc. Suppl. 196, 335 (2009).
19. Pierre Auger Collaboration [J. Abraham et al.], Phys. Rev. D 79 (2009) 102001.
20. J. Tiffenberg [Pierre Auger Collaboration], Proc. 31st Intl. Cosmic Ray Conf. (Łódź, Poland), arXiv:0906.2347 [astro-ph].
21. Pierre Auger Collaboration [J. Abraham et al.], Astropart. Phys. 31 (2009) 399.
22. Pierre Auger Collaboration [J. Abraham et al.], Astropart. Phys. 29 (2008) 243.
23. M. Kleifges [Pierre Auger Collaboration], Proc. 31st Intl. Cosmic Ray Conf. (Łódź, Poland), arXiv:0906.2354 [astro-ph].
24. M. Platino [Pierre Auger Collaboration], Proc. 31st Intl. Cosmic Ray Conf. (Łódź, Poland), arXiv:0906.2354 [astro-ph].
25. A. M. van den Berg [Pierre Auger Collaboration], Proc. 31st Intl. Cosmic Ray Conf. (Łódź, Poland), arXiv:0908.4422.
SUPERNOVA REMNANTS INTERACTING WITH MOLECULAR CLOUDS: A NEW WAY TO REVEAL COSMIC RAYS

F. FEINSTEIN∗, A. FIASSON⁺, for the H.E.S.S. Collaboration

∗Laboratoire de Physique Théorique et Astroparticules, Université Montpellier II, CNRS/IN2P3, CC 70, Place Eugène Bataillon, F-34095 Montpellier Cedex 5, France
E-mail: [email protected]

⁺Laboratoire d'Annecy-le-Vieux de Physique des Particules, Université de Savoie, CNRS/IN2P3, 9 Chemin de Bellevue - BP 110, F-74941 Annecy-le-Vieux Cedex, France
E-mail: [email protected]

Molecular clouds interact with the ambient cosmic rays. The decay of secondary particles may give rise to a detectable flux of very high-energy photons. Recently the H.E.S.S., MAGIC and VERITAS telescopes have observed such sources associated with large molecular clouds and shell-type supernova remnants. Emission lines of OH masers are also observed in coincidence, which ensures that the expanding wave front of the supernova interacts effectively with the cloud. Such natural configurations bring new material to confront the hypothesis that supernova remnants are the Galactic cosmic-ray accelerators. We describe the approach towards a systematic observation of such associations, present the current data and review the prospects of these studies for answering the question of the origin of the Galactic cosmic rays.

Keywords: Shell-type supernova remnants; molecular clouds; OH masers; very high-energy gamma rays; cosmic-ray origin; H.E.S.S. Galactic survey.
1. Introduction

Since the discovery made by Victor Hess in 1912 that the atmosphere is more ionised at high altitude than at sea level, we have accumulated a lot of data on cosmic rays, their energy spectrum and their composition. They are observed from the GeV range up to 10²¹ eV, with a flux spanning more than 30 orders of magnitude. However, the acceleration mechanisms and the astrophysical sources of the very high-energy cosmic rays are still hypothetical, lacking indisputable evidence. This is mainly because cosmic rays are scattered by the magnetic fields they encounter while they propagate through the interstellar and intergalactic media. Photons and neutrinos, being electrically neutral, are not affected by these magnetic fields. Neutrinos would unambiguously signal the presence of accelerated hadrons interacting with the interstellar
medium, as they are secondary particles produced by the decay of charged pions in hadronic showers. Unfortunately, the estimated fluxes of the potential neutrino sources are too low, given the expected sensitivity of km-scale neutrino telescopes, to yield a signal. Very high-energy photons are similarly produced by the decay of neutral pions in hadronic showers, and present and future telescopes are sensitive enough to detect such a signal. However, electromagnetic processes initiated by electrons can also produce very high-energy photons, so more information is required to remove this ambiguity.

One plausible Galactic source of accelerated hadrons is the shock wave front of shell-type supernova remnants. Several of these supernova remnants are detected as sources of very high-energy photons, up to several tens of TeV. This is evidence that either electrons or hadrons are accelerated to even higher energies, probably up to the PeV range. In order to discriminate between electrons and hadrons, other data are needed. At the GeV scale, data from the Fermi satellite could separate hadronic and leptonic scenarios, but the limited sensitivity and angular resolution of the telescope will leave some room for both interpretations. Another approach consists in studying associations between large molecular clouds and shell-type supernova remnants. If the shock wave front (the accelerator) interacts with the cloud (the target), hadron-hadron collisions will cause hadronic showers which will produce a photon yield proportional to the matter density times the cosmic-ray density.1 We will describe the technique used to detect molecular clouds and the detection of the OH maser line ensuring a true association between a cloud and a supernova blast wave, and review the current very high-energy photon sources associated with such configurations.
2. Supernova Remnants and Cosmic Rays

Shell-type supernova remnants are plausible particle accelerators. Three arguments can be developed in favour of this hypothesis. Firstly, they correspond to a blast wave passing through the interstellar medium, which can accelerate particles via the first-order Fermi acceleration mechanism: the particles gain energy by multiple passages through the supersonic shock, until they are no longer confined around the supernova remnant by the local magnetic fields. Secondly, 10 per cent of the supernova explosion energy is enough to compensate for the cosmic-ray escape from the Galaxy, taking into account the supernova frequency of about three per century (see the rough estimate at the end of this section). Thirdly, shell-type supernova remnants are observed as sources of very high-energy photons, for example by the H.E.S.S. telescope. As one typical case, Fig. 1 shows the H.E.S.S. image of the Vela Junior supernova remnant.2 The observed spectrum is well fitted by a power law E^{-Γ} with a spectral index Γ close to 2, and extends beyond 30 TeV, showing no cut-off. The observation of these photons proves that charged particles are accelerated by the blast wave beyond 100 TeV. The photons can be radiated by electrons via the inverse Compton process on ambient infrared photons. They can also be caused by the interaction of hadrons
Fig. 1. The Vela Junior field of view excess map: the grey scale shows the excess number of events.
(protons and nuclei) with the ambient matter, producing hadronic showers containing neutral pions which decay into two photons. These two explanations fit reasonably well with the existing data, although the leptonic case requires a much lower ambient magnetic field, of the order of 1 µG, than the hadronic case, where the ambient magnetic field may reach around 100 µG. Observations at the GeV scale with the Fermi Gamma-Ray Space Telescope should bring helpful data in the near future: at these energies the spectra should differ and help decide which interpretation is correct. However, the angular resolution of the Fermi satellite may not be high enough to separate the photon emission coming from the shell from that of a pulsar wind nebula often present in the field of view of the supernova remnant. In such a case, a potential hadronic contribution near the accelerating zone of the shell will be mixed with the leptonic contribution of the pulsar wind nebula. It is thus worthwhile to look for astrophysical objects for which this generic ambiguity could be removed.
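The energy-budget argument given at the beginning of this section can be made explicit with a two-line estimate. The sketch uses standard round numbers (∼10⁵¹ erg of kinetic energy per explosion, together with the supernova rate and the 10% efficiency quoted in the text); the Galactic cosmic-ray luminosity it is compared with is the commonly quoted order of magnitude, not a value from this paper:

```python
# Supernova power budget versus the Galactic cosmic-ray luminosity (rough estimate).
E_sn       = 1.0e51                   # erg, typical kinetic energy of a supernova explosion
rate       = 3.0 / (100.0 * 3.15e7)   # supernovae per second (three per century)
efficiency = 0.10                     # fraction transferred to cosmic rays (as quoted in the text)

P_cr = efficiency * E_sn * rate
print("available power ~ %.1e erg/s" % P_cr)   # ~1e41 erg/s, comparable to the
                                               # ~1e40-1e41 erg/s needed to sustain
                                               # the Galactic cosmic-ray population
```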
3. Association of Molecular Clouds and Supernova Remnants

Giant molecular clouds correspond to large masses of gas, typically from 10³ to 10⁶ solar masses, with densities ranging from 10 H atoms cm⁻³ to 10⁶ H atoms cm⁻³. They are mostly composed of H₂ molecules and He atoms, and are detected mostly via the radio emission lines of the CO molecule. These clouds are places of intense star formation. Their lifetime can reach several million years, comparable to the
lifetime of heavy stars, which end their lives as supernovae and give rise to expanding supernova remnants. Associations between giant molecular clouds and supernova remnants are therefore natural. Such a configuration provides a material target, the cloud, close to the potential acceleration site, the expanding blast wave of the supernova remnant, allowing hadron-hadron collisions. Hadronic showers induced by these collisions could be the source of very high-energy photons; the photon yield would then be proportional to the hadronic cosmic-ray flux times the cloud density.

A Galactic survey in the radio band corresponding to the emission of the rotational line of the CO molecule is the most common method to obtain a reliable map of the matter density in the interstellar medium. One assumes that the line intensity is proportional to the H₂ column density, as the H₂ molecule has no rotational line. In the denser parts, where the CO line is saturated, it is also possible to use the CS line. The Doppler shift of the line provides a measurement of the radial velocity of the emitting matter, related to its distance from the Galactic centre. A molecular cloud will then be characterised by an intense line emitted from a given part of the Galaxy, in a given radial velocity band. One can then look for molecular clouds which are close to known supernova remnants. However, the velocity leaves an ambiguity in the cloud's radial distance from the Sun (see Fig. 2 and the numerical illustration below).
Fig. 2. Ambiguity in the radial distance of molecular clouds. (a) Sketch of the matter velocity with respect to the Galactic centre. (b) radial velocity profile as seen from the Earth: the two white arrows in (a) correspond to the same radial velocity that could be 4 or kpc.
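As a numerical illustration of the near/far ambiguity sketched in Fig. 2, the toy calculation below assumes a flat Galactic rotation curve with standard values of R₀ and V₀ (illustrative assumptions, not part of the original analysis); two different distances along the same line of sight then yield exactly the same radial velocity:

```python
# Kinematic distance ambiguity for a flat rotation curve (illustrative values).
import numpy as np

R0, V0 = 8.5, 220.0            # kpc, km/s : Sun-GC distance and circular speed
l = np.radians(30.0)           # Galactic longitude of the line of sight

def v_radial(d):
    """LSR radial velocity of gas at distance d (kpc) along longitude l."""
    R = np.sqrt(R0**2 + d**2 - 2.0 * R0 * d * np.cos(l))   # galactocentric radius
    return V0 * (R0 / R - 1.0) * np.sin(l)                  # flat curve: V(R) = V0

d_near = 3.0                             # kpc, chosen near distance
d_far  = 2.0 * R0 * np.cos(l) - d_near   # far distance with the same galactocentric radius

print("v_r(near) = %.1f km/s, v_r(far) = %.1f km/s" % (v_radial(d_near), v_radial(d_far)))
```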
There are also often large uncertainties in the radial distance of supernova remnants from the Sun. It is thus difficult to know whether we are dealing with a true association between a supernova remnant and a molecular cloud, or with a mere coincidence, the objects actually being far from each other along the line of sight. Fortunately, the detection of the 1720 MHz line is an unambiguous signature of a true association. This corresponds to the emission line of an OH maser, which can only occur under very specific conditions. The maser emission is caused by the collisional pumping of the OH radical through collisions with H₂ molecules. This happens only when the temperature is between 25 K and 200 K and the molecular
density between 10³ and 10⁵ cm⁻³.3 These conditions are typical of a molecular cloud heated by the passage of the blast wave of a supernova remnant.

4. Observations

Some very high-energy gamma-ray sources, already interpreted as associations between a supernova remnant and a molecular cloud, exhibit maser emission lines. This confirms the association between the two astrophysical objects, already inferred from other data. We have identified 18 supernova remnants associated with a 1720 MHz OH maser emission line. This list is not exhaustive and calls for more systematic surveys. Nevertheless, we have used this signature to perform pointed observations with the H.E.S.S. telescope, some of which have led to the discovery of very high-energy gamma-ray sources. We present here the status of these observations.
Fig. 3. IC443 field of view, the grey scale shows the excess number of events. The wide grey lines show the matter distribution.
4.1. IC443

The IC443 complex was first discovered as a very high-energy gamma-ray source by the MAGIC collaboration,4 and confirmed by the VERITAS collaboration. Its location in the Northern sky prevents H.E.S.S. from observing it. Several OH maser emission lines ensure that this is a true association. The gamma-ray excess coincides
with the cloud and the maser positions. The source exhibits a flux of about 3% of the Crab nebula source and a soft spectrum, with a spectral index Γ = 3.1 ± 0.3. No X-ray source has been detected in this direction. All these data favour a hadronic origin of these photons.

4.2. The W28 (SNR G6.4-0.1) Field

A multi-wavelength study reveals the W28 field to be a complex one. It contains several supernova remnants, star-formation regions and excited H regions. An OH maser is coincident with the Northern gamma-ray excess and an EGRET source. Fig. 4 shows that CO emission observations from NANTEN reveal a molecular cloud
Fig. 4. The white lines show the H.E.S.S. signal superimposed on the NANTEN CO intensity map of the W28 field of view, in the 0-10 km/s radial velocity band.
in coincidence with this excess, in a radial velocity band compatible with the supernova remnant distance. The gamma-ray source flux is about 3% of the Crab nebula source.5 The energy budget is compatible with the hadronic scenario of cosmic rays accelerated by the supernova remnant and interacting with the molecular cloud.

4.3. HESS J1714-385 and CTB 37A

This H.E.S.S. source (see Fig. 5) is a slightly extended gamma-ray source with a spectral index Γ = 2.30 ± 0.13 and a flux of about 3% of the Crab nebula source.6 The gamma-ray energetics are compatible with cosmic rays accelerated by CTB 37A. This requires that 4% to 30% of the supernova explosion energy be injected into cosmic rays.
Fig. 5. The CTB 37A field of view excess map: the grey scale shows the excess number of events; the crosses show the OH masers.
Recent X-ray observations with the Chandra and XMM-Newton telescopes have led to the discovery of a pulsar wind nebula candidate which could possibly be associated with CTB 37A. A rather powerful spin-down luminosity can be inferred from the X-ray luminosity. The gamma-ray flux could be explained by the conversion of 0.1% of this energy. Both leptonic and hadronic scenarios are plausible.
4.4. HESS J1745-303

The radio shell of the supernova remnant G359.1-0.5 coincides with the Northern part of the extended gamma-ray source HESS J1745-303. The spectral index is Γ = 2.71 ± 0.10 and the flux corresponds to 1.5% of the Crab nebula source.7 As shown in Fig. 6, OH maser emission lines have been detected in coincidence with the shell and the gamma-ray source, together with CO emission in a velocity band compatible with the Galactic centre. The gamma-ray flux requires that 15% to 60% of the supernova explosion energy be injected into cosmic rays. As there is no X-ray counterpart to this source and it coincides with an unidentified EGRET source, the hadronic scenario is plausible.
Fig. 6. The white lines show the H.E.S.S. signal superimposed on the NANTEN CO intensity map of the HESS J1745-303 field of view; the circle shows the G359.1-0.5 radio contour and the crosses show the OH masers.
4.5. HESS J1923+141

This H.E.S.S. source (see Fig. 7) was discovered while following the lead of OH maser lines coincident with known supernova remnants, in this case W51C. Galactic plane survey data combined with pointed observations in 2007 allowed us to
Fig. 7. The W51C field of view excess map: the grey scale shows the excess number of events; the black circle represents the extension of W51C, the white lines show the matter distribution and the cross shows the OH masers.
discover this new gamma-ray source, whose flux is about 3% of the Crab nebula source.8 CO data show a molecular cloud intersecting this region. Taking into account the amount of interacting matter and the gamma-ray flux, one concludes that the cosmic-ray density is about 30 times the local one. These cosmic rays could be accelerated by W51C. Chandra maps of the region show a non-thermal X-ray emission which could come from a pulsar wind nebula. The gamma-ray flux could be explained by the conversion of 0.1% of this energy. Here again, both leptonic and hadronic scenarios are plausible.

5. Conclusions

Several associations between supernova remnants and molecular clouds have been detected in coincidence with very high-energy gamma-ray sources. The presence of OH maser emission lines ensures that these objects interact. This is the emergence of a new class of sources, which could bring decisive information to help solve the century-old enigma of the origin of Galactic cosmic rays. A multi-wavelength approach, with more accurate CO surveys of giant molecular clouds, the detection of new OH masers, the coming data from the Fermi telescope, data from HESS-II and MAGIC-II, together with refined magneto-hydrodynamical calculations of shock propagation in the interstellar medium, offers thrilling perspectives towards understanding the high-energy dynamics of the Galaxy.

References
1. F. A. Aharonian, L. O. Drury and H. J. Voelk, A&A 285, 645 (1994).
2. F. Aharonian et al. (H.E.S.S. Collaboration), ApJ 661, 236 (2007).
3. M. Elitzur, ApJ 203, 124 (1976).
4. J. Albert et al. (MAGIC Collaboration), ApJ 664, L87 (2007).
5. F. Aharonian et al. (H.E.S.S. Collaboration), A&A 481, 401 (2008).
6. F. Aharonian et al. (H.E.S.S. Collaboration), A&A 490, 685 (2008).
7. F. Aharonian et al. (H.E.S.S. Collaboration), A&A 483, 509 (2008).
8. F. Feinstein, A. Fiasson, Y. Gallant et al., AIP Conf. Proc. 1112, 54 (2009).
A TEST FOR THE DARK MATTER INTERPRETATION OF THE PAMELA POSITRON EXCESS WITH THE FERMI TELESCOPE MARCO REGIS Astrophysics, Cosmology and Gravity Centre (ACGC), Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch 7701, Cape Town, South Africa and Centre for High Performance Computing, 15 Lower Hope St, Rosebank, Cape Town, South Africa [email protected] In this proceeding, we outline a test for the dark matter (DM) interpretation of the positron excess observed by the PAMELA cosmic-ray (CR) detector. It involves the identification of a Galactic diffuse gamma-ray component induced by DM at intermediate latitudes. The diffuse emission at mid-latitudes is a probe of the CR population in the nearby region, where (most likely) the positron source responsible for the excess is located. A different spatial distribution for the DM-induced component (having an extended profile) with respect to the astrophysical contribution (with sources confined within the stellar disc) makes the disentanglement between these two interpretations viable. We show that, in general, the gamma-ray emission induced by PAMELA DM leads to a signature detectable in the forthcoming data of the Fermi Telescope at energies above 100 GeV and |b| ≥ 10◦. An observational result in agreement with the prediction from standard CR components only would imply very strong constraints on the DM interpretation of the PAMELA excess. Keywords: Dark Matter; Cosmic Rays.
1. Introduction

Recently, the PAMELA collaboration1 reported a measurement of the positron fraction in cosmic rays (CRs) up to 100 GeV. The observed spectrum shows a sharp rise above 10 GeV. This feature cannot be accommodated within a picture where primary electrons accelerated in supernova remnants (SNRs) and secondary positrons, produced mainly in the interaction of primary cosmic rays with the interstellar medium (ISM) during propagation, are the only actors. It is instead suggestive of an extra primary source of positrons. The source has to be located in the nearby region, i.e., within one or a few kpc, since this is the diffusion scale for an electron/positron of energy ≥ 10 GeV. Pulsars are well-motivated candidates for this role.2 Another possibility is that positrons (and electrons) are secondary products of hadronic interactions within the cosmic-ray sources.3
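The kpc scale quoted above can be checked with a rough estimate of the distance an electron or positron diffuses before losing most of its energy, λ ≈ √(4 D(E) τ_loss(E)). The short sketch below is added here for illustration only; the diffusion coefficient and energy-loss rate are assumed, order-of-magnitude values typical of conventional propagation models, not numbers taken from this contribution.

```python
import math

# Assumed, illustrative propagation parameters (order of magnitude only)
D0 = 5.8e28       # diffusion coefficient at 4 GeV [cm^2/s]
DELTA = 0.33      # diffusion index, D(E) = D0 * (E / 4 GeV)**DELTA
B_LOSS = 1.4e-16  # energy-loss coefficient b [GeV^-1 s^-1], dE/dt = -b E^2
KPC = 3.086e21    # cm per kpc

def diffusion_length_kpc(E_gev):
    """Approximate horizon sqrt(4 D tau_loss) for an e+/e- of energy E_gev."""
    D = D0 * (E_gev / 4.0) ** DELTA
    tau_loss = 1.0 / (B_LOSS * E_gev)   # time to lose most of the energy
    return math.sqrt(4.0 * D * tau_loss) / KPC

for E in (10.0, 100.0, 1000.0):
    print(E, "GeV ->", round(diffusion_length_kpc(E), 1), "kpc")
# Gives a few kpc at 10 GeV, shrinking towards ~1 kpc at 1 TeV, consistent
# with the 'nearby region' argument in the text.
```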
An exciting explanation of the excess is also provided by annihilations or decays of dark matter (DM) particles in the dark halo of the Milky Way, which constitute a possible source of positrons. In order to find a test to distinguish between the DM interpretation of the excess and other astrophysical explanations, we follow two guidelines:
• The predictions for the observable we focus on have to be significantly different in the two cases (i.e., DM and astrophysical sources), allowing a clear distinction even in the presence of a few theoretical uncertainties.
• The predictions have to rely on the same assumptions as for the positron fraction, in order to perform a self-consistent test.
The spectral properties of local electrons and positrons do not meet these requirements. Indeed, the predicted e+/e− fluxes from dark matter and from astrophysical sources are significantly model dependent in both cases,4 while the observed fluxes are rather featureless. Therefore a spectral analysis does not fulfill the first requirement. A study of the spatial distribution of the local e+/e− can be more promising. On the other hand, a clear signature of anisotropy in one of the present or forthcoming experiments is, most likely, possible only if the source of the PAMELA excess is a single "strong" source (see, e.g., Ref. 5 in the case of a pulsar and Ref. 6 for a DM substructure). We do not consider this possibility here. Most attempts to insert the dark matter interpretation of the PAMELA data into a more global picture have involved either other species, such as neutrino and antiproton yields, or other dark matter environments (e.g., the central region of the Galaxy, Galactic satellites, or extra-galactic DM), instead of the local dark matter population which, in case the dark matter interpretation holds, would be responsible for the measured positron flux. These kinds of comparisons are inevitably model dependent and do not fulfill the second requirement. Radiative components, on the other hand, are unavoidably associated with electron/positron yields: inverse Compton (IC) emission of a 100 GeV to 1 TeV electron on 1 µm starlight photons gives gamma-rays with energies peaked roughly in the range 50 GeV to 5 TeV; the associated synchrotron emission in a 1 µG magnetic field peaks between 50 and 5000 GHz (scaling linearly with the magnetic field).7 Having normalized the electron/positron yield to the locally observed flux, the extrapolation for the radiative emission in the local portion of the Galaxy and its neighborhood is fairly solid, since it does not introduce extra assumptions (models for the magnetic field and starlight are indeed required for the computation of the positron fraction itself, contributing to the energy losses of e+/e−, see Sect. 2). We present a comparison between dark matter and astrophysical contributions to the Galactic radiation seen at intermediate latitudes. This analysis can meet our requirements. Indeed, the emission in such a portion of the sky actually probes the local environment, and the spatial dependence of the signal is rather different in the two cases. The DM-induced component follows from an extended, possibly spherical, source function, while astrophysical contributions come from sources confined within
the stellar disc. Although there are uncertainties in the parameters involved in the computation, such as the cosmic-ray propagation model and the level of the stellar radiation and magnetic fields, these have in turn to be readjusted to the local measurements (the local electron flux, as well as the ratios of secondaries to primaries in cosmic rays). Moreover, although it is true that they strongly affect the extraction of DM properties from the signal, they have very little impact on the proposed test. Indeed, it is rather hard to imagine a picture where the diffuse emission from stellar sources becomes highly extended in the vertical direction, or where the emission from the extended DM halo becomes "disky". The reference experiment for the proposed test is the FERMI gamma-ray telescope.8 This proceeding is based on the work presented in Ref. 9.

2. Source Models

The CR propagation equation can be written in the form:10

\[
\frac{\partial n_i(\vec r,p,t)}{\partial t} = \vec\nabla\cdot\left(D_{xx}\vec\nabla n_i - \vec v_c\, n_i\right)
+ \frac{\partial}{\partial p}\left[p^2 D_{pp}\frac{\partial}{\partial p}\frac{n_i}{p^2}\right]
- \frac{\partial}{\partial p}\left[\dot p\, n_i - \frac{p}{3}\left(\vec\nabla\cdot\vec v_c\right) n_i\right]
- \frac{n_i}{\tau_f} - \frac{n_i}{\tau_r} + q(\vec r,p,t), \qquad (1)
\]

where n_i is the number density per unit particle momentum, q is the source term, D_xx is the spatial diffusion coefficient along the regular magnetic field lines, v_c is the velocity of the Galactic wind, D_pp is the coefficient of diffusion in momentum space, ṗ is the momentum loss rate, and τ_f and τ_r are the time scales for fragmentation loss and radioactive decay, respectively. The transport equation is solved numerically, assuming cylindrical symmetry, with halo boundaries at disc radius R and half-thickness z_h. We exploited a modified version of the GALPROP code.11 In this proceeding we concentrate on the so-called "conventional" model (see the review in Ref. 12). For a broader discussion of the impact of the propagation scenario see Ref. 9. In the following we focus on the high-energy electron/positron population. The associated radiative emission involves mainly inverse Compton scattering and synchrotron fluxes, which are related to interactions with the magnetic field and the interstellar radiation field, which also appear in the momentum loss rate ṗ (the relevant formulas can be found in, e.g., Ref. 7).

2.1. Astrophysical sources of cosmic-rays

We will assume the primary CR source to be of the form:

\[
Q^p_i(R,z,E) \propto R^{\alpha_s}\,\exp\!\left(-\frac{R}{R_s}\right)\exp\!\left(-\frac{|z|}{z_s}\right) E^{-\beta_{\mathrm{inj},e}}, \qquad (2)
\]
where α_s ≃ 2.35, the radial length scale R_s ≃ 1.528 kpc, and the vertical cutoff z_s = 0.2 kpc confines the source distribution to the Galactic plane. Neglecting discreteness and time-variation effects, which could eventually be considered in connection with young nearby SNRs, the spatial part of the source function follows the mean SNR distribution in the Galaxy as derived from radio pulsar population surveys.13 Secondary electrons and positrons derive from the decays of charged pions produced in the interaction of primary cosmic rays with the ISM along their propagation in the Galaxy. Their spectral index at the source is equal to the spectral index of primary nuclei after propagation, i.e., close to β_nuc = 2.7, which is larger than the injection spectral index for electrons, β_inj,e ≃ 2.35. The ratio of secondary to primary electrons is thus expected to decrease as the energy increases. To reverse this trend, and fit the sharp rise in the positron fraction detected by PAMELA1 above 10 GeV and up to 100 GeV, it seems unavoidable to introduce an extra electron/positron source with a harder spectrum. Although the physical insight behind the several proposed sources is different (as for, e.g., pulsars and the production of secondary e+/e− from hadronic interactions within CR sources), we can, in a first approximation, model this additional component independently of the underlying physics. The spectrum at the sources is described by a power law plus an exponential cutoff: E^(−β_inj,s) · exp(−E/E_c). The parameters of the spectrum follow from the requirement that the PAMELA data can be fitted when including this additional term. The spatial part is assumed to be the same as for the standard primary components, which is the most important ingredient for our test.

2.2. Dark matter

Dark matter in the Galactic halo can induce fluxes of electrons and positrons. In the case of WIMP dark matter particles, the source term associated with pair annihilations injecting a given species i is given by:

\[
Q^a_i(r,E) = (\sigma_a v)\,\frac{\rho(r)^2}{2\,M_\chi^2}\times\frac{dN^a_i}{dE}(E), \qquad (3)
\]
where ρ(r) is the Milky Way halo mass density profile (assumed to be spherical), M_χ the mass of the dark matter particle, σ_a v the pair annihilation rate, and dN^a_i/dE the number of particles i emitted per annihilation in the energy interval (E, E + dE). Another possibility is that, instead of being stable, dark matter particles have a long but finite lifetime, and the species i is injected in dark matter decays, with the source described by:

\[
Q^d_i(r,E) = \Gamma_d\,\frac{\rho(r)}{M_\chi}\times\frac{dN^d_i}{dE}(E), \qquad (4)
\]

where Γ_d is the decay rate and dN^d_i/dE is the number of particles i emitted per decay in (E, E + dE).
The details of the dark matter profile at the Galactic centre are not particularly relevant for our analysis, since we focus on the local region of the Galaxy. We assume a functional form with a large core radius at the centre, the Burkert profile:14 ρ(r) = ρ_0 (1 + r/r_c)^(−1) (1 + (r/r_c)²)^(−1), where r is the distance from the GC, the profile normalization is ρ_0 = 0.84 GeV cm⁻³ and the core radius is r_c = 11.7 kpc. Note that, whatever DM spatial distribution is considered, the gradient of the density profile in the region where most of the signal we will consider originates is very modest. Hence, in practice, the spatial signature of the annihilation source function, due to the scaling with ρ²(r), is hardly distinguishable from that of the decay source function, which scales simply with ρ(r). The relation between decay and annihilation rates leading to an analogous scenario can be estimated by τ ≃ (σ_a v)^(−1) · M_χ^a/ρ_0, where M_χ^a is the mass of the WIMP in the annihilating DM scenario and M_χ^d = 2 M_χ^a. We disregard a number of other potential contributions from DM, such as unresolved Galactic subhalos and extragalactic DM. In particular, if the spatial distribution of subhalos is antibiased with respect to the host halo mass distribution (as found, e.g., in the Via Lactea simulation15), the emission at mid-to-high latitudes would be enhanced and the conclusions would be strengthened. On the other hand, the estimates of the mass function, spatial distribution and concentration of subhalos carry great uncertainties and, taking a conservative approach, we choose to neglect them. Estimates of the extragalactic γ-ray background from unresolved DM structures typically lead to a flux below the astrophysical extragalactic gamma-ray background (EGB), unless a substantial enhancement stems from populations of dense substructures; we disregard them in our analysis. The presence of a Galactic dark disc can in principle affect our conclusions if the DM density associated with the disc is much higher than the density of the halo. However, this is quite unlikely, as also found in the simulation of Ref. 16, where the existence of a dark disc in Milky Way-sized galaxies was proposed. Fits to the PAMELA positron excess require sources with a rather hard spectrum, and the preferred DM-related interpretation is a scenario with prompt emission of leptonic final states; we present results for three benchmark cases, DMe, DMτ and DMµ, namely the emission of monochromatic e+/e− (with M_χ = 300 GeV), τ+/τ− yields (with M_χ = 400 GeV), and the µ+/µ− channel (with M_χ = 1.5 TeV) as final state of annihilation/decay, respectively.
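The statement about the mild local gradient can be illustrated with a short sketch (ours, not part of the original computation), which evaluates the Burkert profile with the quoted values ρ_0 = 0.84 GeV cm⁻³ and r_c = 11.7 kpc and compares how the annihilation (∝ ρ²) and decay (∝ ρ) source terms vary over the region around the Sun that dominates the mid-latitude signal.

```python
import numpy as np

RHO0, RC = 0.84, 11.7   # GeV cm^-3, kpc (values quoted in the text)

def rho_burkert(r_kpc):
    """Burkert density profile: rho0 / [(1 + r/rc) * (1 + (r/rc)^2)]."""
    x = r_kpc / RC
    return RHO0 / ((1.0 + x) * (1.0 + x**2))

# Relative variation of the annihilation (rho^2) and decay (rho) source terms
# between the solar circle (r = 8 kpc) and a point ~2 kpc above the disc.
r_local = 8.0
r_high = np.sqrt(8.0**2 + 2.0**2)
for label, power in (("annihilation (rho^2)", 2), ("decay (rho)", 1)):
    ratio = (rho_burkert(r_high) / rho_burkert(r_local)) ** power
    print(label, "source at z ~ 2 kpc relative to the disc:", round(ratio, 2))
# Both ratios stay close to unity, i.e. the DM source is nearly flat locally,
# in contrast with the exp(-|z|/0.2 kpc) confinement of the astrophysical sources.
```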
3. Results

In Fig. 1a, we plot the vertical profiles of the electron number density distributions at the local radial distance R = 8 kpc and E = 200 GeV. We focus on a typical energy at which the electron and positron sources relevant for the rise in the positron fraction also contribute significantly to the total population of e+ + e−. We show the vertical profile of CR primary electrons, of secondary e+ + e− produced in the ISM, of e+ + e− injected by an astrophysical source fitting the PAMELA excess,
and of the e+ + e− flux induced by WIMP annihilations in a DM benchmark scenario. All these cases but the latter follow a distribution which is mainly confined to the disc (although broadened by diffusion). The DM-induced component is instead much flatter (we plot for comparison the profile of the DM injection source, ∝ ρ²_DM). It is the dominant component at intermediate and large z. Therefore, in order to detect a DM-induced signal in the diffuse emission of the Galaxy, intermediate and high latitudes are the best targets. However, at high latitudes the diffuse extragalactic gamma-ray background is expected to become the dominant background component. To estimate the level of the extragalactic emission we consider the model of Ref. 19 (black curve in Fig. 1), which roughly agrees with the EGB recently estimated by Fermi LAT.20 In Fig. 1b, we plot the γ-ray diffuse spectrum at 10° < b < 20°, integrated over longitude (0° < l < 360°), and compared to the FERMI measurements.17 The first remark is that the sum of the three CR components, namely IC, bremsstrahlung, and π0-decays, plus the extragalactic background contribution, can approximately account for the measured flux at E ≤ 10 GeV. In the same plot one can see that the γ-ray flux induced by our benchmark DM models is more than one order of magnitude smaller than the detected flux at E ≤ 10 GeV, while it becomes comparable to or higher than the background at E ≳ 100 GeV. This happens for all the benchmark DM models considered. In the case of pair annihilation into monochromatic e+/e− or into µ+/µ−, γ-rays arise from IC scattering of the propagating e+/e− and from final state radiation (FSR) processes at emission. In the case of a DM candidate annihilating into τ+τ−, the detectable γ-ray component is due to the emission from π0-decays, which peaks at roughly one-third of the DM mass. These conclusions can be straightforwardly extended to a decaying DM scenario, as mentioned above. In the case of a DM candidate annihilating into a new light particle which in turn decays into leptons, the FSR is generally reduced (depending on the model). The total emission can thus be mildly fainter than in an analogous WIMP case, but remains sizable. Full sky-maps at 150 GeV of the IC emission associated with CR e+/e− and with e+/e− induced by annihilations in the DMµ model are shown in Fig. 2. The differences in morphology are indeed very clear. We can thus conclude that the test can definitely be performed, as soon as the dataset at E ≥ 100 GeV has reasonably small statistical errors and a good understanding of the systematics (which will hopefully be the case for the FERMI data of the forthcoming months/years). The statement regarding the detectability of the induced Galactic diffuse γ-ray flux has a marginal dependence on the model implemented to describe the propagation of charged particles in the Galaxy, as shown in Ref. 9, where different propagation setups were considered.
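The qualitative difference between the vertical profiles in Fig. 1a can be reproduced with a highly simplified, one-dimensional version of Eq. (1). The sketch below is a toy model added here for illustration, not the modified GALPROP computation used in the analysis; all parameter values are assumptions. It solves the steady-state diffusion-loss equation D d²n/dz² − n/τ + q(z) = 0 at fixed energy, with free-escape boundaries at z = ±4 kpc, for a disc-confined source and for an extended, DM-like source.

```python
import numpy as np

# Toy parameters (assumed): diffusion coefficient, loss time, halo half-height
D_CM2_S = 1.7e29            # cm^2/s at ~200 GeV
TAU_S = 7.0e13              # energy-loss time scale [s]
KPC = 3.086e21
ZH, NZ = 4.0, 401           # halo half-thickness [kpc], grid points

z = np.linspace(-ZH, ZH, NZ)
dz = (z[1] - z[0]) * KPC

def steady_state(q):
    """Solve D n'' - n/tau + q = 0 with n(+-ZH) = 0 on the z grid."""
    n = len(z)
    A = np.zeros((n, n))
    b = -q.copy()
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = D_CM2_S / dz**2
        A[i, i] = -2.0 * D_CM2_S / dz**2 - 1.0 / TAU_S
    A[0, 0] = A[-1, -1] = 1.0   # free-escape boundaries
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

q_disc = np.exp(-np.abs(z) / 0.2)   # disc-confined source (z_s = 0.2 kpc)
q_dm = np.ones_like(z)              # nearly flat, DM-like source
n_disc, n_dm = steady_state(q_disc), steady_state(q_dm)

# Normalised at z = 0, the DM-like solution stays within a factor of a few of
# its mid-plane value out to |z| ~ 2-3 kpc, while the disc-source solution
# falls much more steeply -- the behaviour shown in Fig. 1a.
mid, i2 = NZ // 2, np.abs(z - 2.0).argmin()
print(n_disc[i2] / n_disc[mid], n_dm[i2] / n_dm[mid])
```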
Fig. 1. Left Panel: Electron vertical profile at R = 8 kpc and E = 200 GeV for primary CR electrons (blue solid line), secondary CR e+ +e− produced in the ISM (blue short dashed line), e+ + e− injected by an astrophysical source fitting the PAMELA excess (black dotted line), and e + +e− induced by DM annihilation in the model DMe (blue thick dotted line). For comparison, we plot the distribution of a source scaling as ρ2DM (black solid line), with an arbitrary normalization. Right Panel: γ-ray diffuse spectrum at intermediate latitudes (10◦ < b < 20◦ ), integrated over longitudes 0◦ < l < 360◦ and compared to the FERMI data.18 The sum of CR (primary+secondary) spectra associated to π 0 -decay, IC, and bremsstrahlung is shown by the solid blue line. The solid black line shows the extragalactic background in the model described in the text. The IC + FSR emission associated to the WIMPs DMe and DMµ are shown by thick dotted and thick dashed-dotted lines, respectively. The IC + γ-ray from π 0 -decay signals induced by the WIMP DMτ are shown by thick dashed lines. Pictures from Ref. 9.
Fig. 2. Sky-map at 150 GeV of the inverse Compton emissions associated to Galactic primary+secondary CRs (left panel) and to WIMP annihilations in the DMµ scenario (right panel). The intensity is shown in logarithmic scale and units [MeV cm−2 s−1 sr−1 ]. Pictures from Ref. 9.
4. Conclusions

To conclude, we have discussed how the DM interpretation of the PAMELA positron excess can be tested by the FERMI LAT telescope in the diffuse emission at mid-latitudes and high energy. A crucial ingredient in the discussion is the different spatial profile of the CR primary sources (confined to the disc) and of the dark matter induced components (spherical distribution). The two terms can be disentangled by looking at the angular profile of the diffuse emission, with the optimal region to
single out the DM component being at intermediate latitudes. Such a cross-correlation test of the PAMELA excess would then be performed by focusing on a nearby portion of the Galaxy, where the source responsible for the excess is located, and where the extrapolation of the DM density profile as well as of the propagation model parameters (from the locally-measured CR spectra) can be regarded as rather robust. A discovery of an extra γ-ray term, with spectral and angular features as discussed here for the DM source, would be an important step towards the identification of the DM component of the Universe. On the other hand, should FERMI find that the γ-ray diffuse emission agrees with the prediction from standard CR components only, tight constraints on the DM interpretation of the PAMELA positron excess would follow (while such a picture would not be in contradiction with other scenarios addressing the PAMELA excess).
Acknowledgments I would like to acknowledge the African Institute for Mathematical Sciences, where part of this proceeding was completed.
References
1. O. Adriani et al. [PAMELA Collaboration], arXiv:0810.4995 [astro-ph].
2. S. Profumo, arXiv:0812.4457 [astro-ph].
3. P. Blasi, Phys. Rev. Lett. 103 (2009) 051104 [arXiv:0903.2794 [astro-ph.HE]].
4. D. Grasso et al. [FERMI-LAT Collaboration], Astropart. Phys. 32 (2009) 140 [arXiv:0905.0636 [astro-ph.HE]].
5. D. Hooper, P. Blasi and P. D. Serpico, JCAP 0901 (2009) 025 [arXiv:0810.1527 [astro-ph]].
6. M. Regis and P. Ullio, arXiv:0907.5093 [astro-ph.GA].
7. M. Regis and P. Ullio, Phys. Rev. D 78 (2008) 043505 [arXiv:0802.0234 [hep-ph]].
8. W. B. Atwood et al. [LAT Collaboration], arXiv:0902.1089 [astro-ph.IM].
9. M. Regis and P. Ullio, Phys. Rev. D 80 (2009) 043525 [arXiv:0904.4645 [astro-ph.GA]].
10. V. S. Berezinskii, S. V. Bulanov, V. A. Dogiel and V. S. Ptuskin, 1990, Amsterdam: North-Holland, edited by V. L. Ginzburg.
11. A. W. Strong and I. V. Moskalenko, Astrophys. J. 509 (1998) 212 [arXiv:astro-ph/9807150].
12. A. W. Strong, I. V. Moskalenko and V. S. Ptuskin, Ann. Rev. Nucl. Part. Sci. 57 (2007) 285 [arXiv:astro-ph/0701517].
13. D. R. Lorimer, arXiv:astro-ph/0308501.
14. A. Burkert, IAU Symp. 171 (1996) 175 [Astrophys. J. 447 (1995) L25] [arXiv:astro-ph/9504041].
15. M. Kuhlen, J. Diemand and P. Madau, arXiv:0805.4416 [astro-ph].
16. J. I. Read, G. Lake, O. Agertz and V. P. Debattista, arXiv:0803.2714 [astro-ph].
17. A. A. Abdo et al. [Fermi LAT Collaboration], Phys. Rev. Lett. 103 (2009) 251101 [arXiv:0912.0973 [astro-ph.HE]].
18. I. V. Moskalenko and A. W. Strong, talks at the ENTApP DARK MATTER workshop 2009.
19. P. Ullio, L. Bergstrom, J. Edsjo and C. G. Lacey, Phys. Rev. D 66 (2002) 123502 [arXiv:astro-ph/0207125].
20. A. A. Abdo et al. [The Fermi-LAT collaboration], Phys. Rev. Lett. 104 (2010) 101101 [arXiv:1002.3603 [astro-ph.HE]].
MINIMAL SUSY DARK MATTER FOR FERMI-LAT/PAMELA COSMIC-RAY DATA JI-HAENG HUH Department of Physics and Astronomy, Seoul National University Gwanakro Sillim-dong, Gwanak-gu, Seoul, 151-747 Korea ∗ E-mail: [email protected] We propose a minimal supersymmetric decaying dark matter model in which energetic decay products of the dark matter explain the excess of electron/positron cosmic rays recently observed by PAMELA and Fermi-LAT. The decay of the dark matter is mediated by a superheavy charged lepton singlet pair E^c + E, which can be incorporated naturally in the flipped-SU(5) grand unified theory. The model fits the observed electron/positron cosmic-ray excess well, while remaining consistent with the absence of an excess in the anti-proton cosmic-ray flux. Keywords: Decaying dark matter, Axino, Cosmic-ray e±
1. Introduction

After Zwicky's first claim1 of the existence of non-luminous matter, various pieces of evidence for the existence of dark matter (DM) have been found on different length scales. However, even though much more precise measurements of the DM abundance are available today, we know very little about its non-gravitational nature. Therefore various efforts to identify DM are ongoing. Broadly, there are three kinds of strategies to detect DM: 1) direct detection underground by observing nuclear recoils, 2) indirect detection from γ-ray, Galactic neutrino flux, and cosmic-ray (CR) observations, and 3) production of DM in hadron colliders such as the LHC. The first and third are promising if the DM is a weakly interacting particle with a mass at the electroweak scale, ∼ 100 GeV. Many DM models based on the freeze-out mechanism have a good chance of being detected with these strategies. Furthermore, if DM is detected in those ways, the experimental data will provide quite accurate information about it. On the other hand, the second option has been regarded as a less conclusive way to identify DM, partially because the theoretical and experimental uncertainties are still large. In particular, for high-energy CRs there are several theoretical uncertainties in the injection spectrum and the propagation parameters. In spite of these difficulties, the positron CR measurement by the HEAT experiment2 showed a deviation from the prediction of the standard CR propagation model, which seems to indicate the existence of an additional primary source such as DM annihilation or decay. The observations of a CR excess have been confirmed by more
recent observations by PAMELA3 and Fermi-LAT5 with better statistics. It is well known that, if there is no astrophysical source such as a pulsar, both can be well explained by dark matter in our Galactic halo which eventually decays into other particles, mostly leptonic SM particles, with a mass of a few TeV and a lifetime of ∼ 10²⁶ s. The interesting point is that the decay rate ∼ 10⁻²⁶ s⁻¹ can be obtained from a dimension-6 operator suppressed by the grand unified theory (GUT) scale. This fact has initiated many attempts to explain the CR excess with various DM models. The appearance of the GUT scale in the CR excess tempts one to build a DM model related to a GUT, in which DM interacts with SM particles only through a dimension-6 operator arising from integrating out a superheavy particle. However, once supersymmetry (SUSY) is introduced, it is a hard task to forbid the dimension-5 operator while keeping the dimension-6 operator, because of the fermionic superpartners. We introduce a minimal SUSY decaying dark matter model which overcomes this obstacle with an additional superheavy charged SU(2)-singlet lepton pair of superfields E^c + E, as well as a neutral superfield N containing the DM particle.7 Even though we present the model as an extension of the minimal supersymmetric standard model (MSSM), it can be naturally embedded in the flipped-SU(5) model.8,9 Thus the GUT scale appearing in the decay rate really has a GUT origin in this model. We also show explicitly that the predicted electron CR excess in our model fits the PAMELA/Fermi-LAT observations well within natural ranges of the free parameters of the model.

2. High Energy CR and Fermi-LAT/PAMELA Anomaly

CRs were first discovered by Victor Hess in the early days of the 20th century. Since then, the Galactic CR model has been gradually developed. In this model, high-energy CRs are a mixture of primary and secondary CRs. Primary CRs come from the acceleration of the interstellar medium in supernova remnants; the acceleration mechanism is believed to be associated with supernova remnants. Secondary CRs, on the other hand, are created by spallation processes between the primary CRs (mostly protons and α particles) and the interstellar medium (mostly hydrogen and helium). The energy spectrum of secondary CRs is relatively well understood, and they can have an anti-matter component while primary CRs cannot. By observing CRs (especially anti-matter CRs), we can investigate new sources of CRs such as DM annihilation or decay. Once high-energy particles are injected into our Galaxy, their density in phase space follows the so-called diffusion-loss equation,

\[
\frac{\partial f(x,E,t)}{\partial t} + \vec\nabla\cdot\left\{-D\vec\nabla f + V_C(z)\, f\right\} + \frac{\partial}{\partial E}\left\{b(E)\, f\right\} = q(x,E),
\]

where D, V_C and b are the diffusion coefficient, convection velocity and energy-loss coefficient, respectively.
A satellite experiment, PAMELA, was launched and has gathered high-energy CR data. The PAMELA experiment is able to distinguish charge and thus to measure the fluxes of matter and anti-matter separately. In their accompanying papers,3,4 they reported the positron fraction, e+/(e+ + e−), from a few GeV to 100 GeV and the ratio between the anti-proton and proton fluxes, p̄/p. The positron fraction measured by PAMELA increases from 20 GeV upward, while the theoretical curve expected from the conventional CR propagation model decreases without any feature. While a positron excess was observed by PAMELA, no significant excess in the anti-proton to proton ratio has been observed. This seems to indicate that, if there is a new primary CR source, it distinguishes leptons from hadrons and mainly produces leptons. Although Fermi-LAT was originally designed to observe high-energy γ-rays, it also has the ability to detect high-energy electrons and positrons. Since the detector in Fermi-LAT cannot discriminate the sign of the charge, it measured only the sum of the electron and positron fluxes in CRs. Their result also shows a significant deviation from the conventional CR model up to ∼ 1 TeV. Even though, by changing the parameters of the conventional CR model, which are still uncertain, we can marginally explain one of the two experiments, we cannot explain both at the same time.6 Combining both strongly indicates the existence of a leptophilic primary CR source.

3. Minimal Supersymmetric Decaying DM

The flux of electron CRs can be explained by decaying DM in the Galactic halo with a lifetime of ∼ 10²⁶ s and a mass of a few TeV. The interesting point is that the lifetime giving the right amount of flux can come from a dimension-6 operator with grand unified theory (GUT) scale suppression, i.e.,

\[
\Gamma \sim 10^{-26}\ \mathrm{s}^{-1} \sim (\text{phase factor}) \times \frac{m_{\mathrm{DM}}^5}{M_{\mathrm{GUT}}^4}, \qquad (1)
\]
where m_DM ∼ 1 TeV is the mass of the dark matter and M_GUT ∼ 10¹⁶ GeV is a typical GUT scale.
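A quick order-of-magnitude check of Eq. (1) can be made numerically. The sketch below is ours and not part of the original text; the 1/(8π) phase-space factor is an illustrative assumption, and the quoted masses are simply the representative values mentioned above.

```python
import math

HBAR_GEV_S = 6.582e-25   # hbar in GeV*s, converts a width in GeV to a rate in 1/s

def decay_rate_s(m_dm_gev, m_gut_gev, phase_factor=1.0 / (8.0 * math.pi)):
    """Gamma ~ (phase factor) * m_DM^5 / M_GUT^4, converted to s^-1."""
    width_gev = phase_factor * m_dm_gev**5 / m_gut_gev**4
    return width_gev / HBAR_GEV_S

gamma = decay_rate_s(1.0e3, 1.0e16)   # m_DM ~ 1 TeV, M_GUT ~ 10^16 GeV
print(gamma, "s^-1  ->  lifetime ~", 1.0 / gamma, "s")
# Gives roughly 6e-27 s^-1, i.e. a lifetime of order 10^26 s,
# as required by the cosmic-ray fits quoted in the text.
```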
Fig. 1. Diagrams giving a dimension-6 operator by integrating out a heavy scalar boson (case (a)) or a heavy vector boson (case (b)). The ψ_i represent fermion fields in the SM or DM sector.
Since a GUT-scale-suppressed non-renormalizable interaction can easily be obtained by integrating out a heavy field, it seems easy to achieve this kind of model; but, once SUSY is introduced, it is hard to forbid the dimension-5 operator while keeping the dimension-6 one. If there is a dimension-5 operator, the dark matter decays too fast to remain in our Universe. To see this, one can consider the dimension-6 operator obtained from integrating out a heavy scalar boson or vector boson field, as in Fig. 1. However, in the heavy scalar boson case there is always a supersymmetric counterpart mediated by a fermion, which generically results in a dimension-5 operator. In the heavy vector boson case, to make the interaction leptophilic we would need to enlarge the gauge group significantly. Even though it is thus not easy to construct a SUSY GUT DM model with a leptophilic interaction, there is a possible loophole in the argument above: not every intermediating fermion gives a dimension-5 operator. If the heavy fermion has only a Dirac mass, it gives a dimension-6 operator with a derivative interaction and no dimension-5 operator. This means that there should be some symmetry forbidding the Majorana mass of the superheavy fermion. An exact U(1) symmetry is good enough, and an unbroken U(1) gauge symmetry is even better. In the SM there is such a symmetry, the electromagnetic U(1)_em. We therefore introduce a charged singlet lepton pair, E^c + E, at the GUT scale. Since, in the flipped-SU(5)8,9 model, a charged lepton can appear as a singlet of SU(5), this can be naturally embedded in such a model. Therefore, the minimal content and interactions needed to obtain a decaying leptophilic DM model in SUSY GUT are the following:7
Fig. 2. Solid curves represent the expected CR flux for the injection spectra indicated above the curves; dotted curves show the predicted CR flux including DM decay in our model (cases (a) and (c) of Sect. 4), compared with e± flux and positron-fraction data from CAPRICE, HEAT, AMS, PAMELA, ATIC, PPB-BETS, H.E.S.S. and Fermi-LAT.
• N, containing the leptophilic DM with a mass of ∼ TeV.
• E + E^c, to be integrated out at the GUT scale.
• A coupling W ∼ N e^c E in the superpotential, which eventually gives the leptophilic interaction in the low-energy Lagrangian.
To generate the TeV mass of N via the Giudice-Masiero mechanism, and to forbid unnecessary, non-leptophilic interactions between N and other SM fields while keeping the leptophilic coupling N e^c E, we assign appropriate charges under the Peccei-Quinn symmetry, as described in Ref. 7. As a result, there appears a simple superpotential W ∼ N e^c E + m_TeV N² + M_GUT E^c E. The Peccei-Quinn symmetry also induces an accidental parity symmetry which assigns odd parity to E^c, E and N, and even parity to all MSSM particles. This makes N stable, so N can be the dark matter.

4. Result

In our model, there are three possible scenarios, depending on the vacuum expectation value of Ñ and the masses of the components of N. The cases and the possible decay channels are listed here:
• case (a): m_N > m_Ñ and ⟨Ñ⟩ = 0 ; N → Ñ + e + ẽ
• case (b): m_N < m_Ñ and ⟨Ñ⟩ = 0 ; Ñ → N + e + ẽ
• case (c): m_N < m_Ñ and ⟨Ñ⟩ ≠ 0 ; Ñ → e + ẽ
In any case, the dark matter particle decays into an electron and a scalar electron, and the scalar electron eventually decays into other SM particles and the lightest neutralino. Although the decay products of the scalar electron might include anti-protons, depending on the SUSY spectrum, in generic MSSM parameter space the flux is not large enough to conflict with the PAMELA p̄/p data. As shown in Fig. 2, this model can fit both the Fermi-LAT and PAMELA data.

References
1. F. Zwicky, Helv. Phys. Acta 6 (1933) 110-127.
2. S. W. Barwick et al., Astrophys. J. 482 (1997) L191.
3. O. Adriani et al. [PAMELA Collaboration], Nature 458 (2009) 607 [arXiv:0810.4994 [astro-ph]].
4. O. Adriani et al. [PAMELA Collaboration], Phys. Rev. Lett. 102 (2009) 051101 [arXiv:0810.4995 [astro-ph]].
5. A. A. Abdo et al. [Fermi-LAT Collaboration], Astrophys. J. 697 (2009) 1071 [arXiv:0902.1089 [astro-ph.IM]]; Phys. Rev. Lett. 102 (2009) 181101 [arXiv:0905.0025 [astro-ph.HE]].
6. D. Grasso et al. [Fermi-LAT Collaboration], Astropart. Phys. 32 (2009) 140-151 [arXiv:0905.0636 [astro-ph.HE]].
7. J.-H. Huh and J. E. Kim, Phys. Rev. D 80 (2009) 075012 [arXiv:0908.0152 [hep-ph]].
8. S. M. Barr, Phys. Lett. B 112 (1982) 219; J.-P. Derendinger, J. E. Kim and D. V. Nanopoulos, Phys. Lett. B 139 (1984) 170.
9. K. J. Bae, J.-H. Huh, J. E. Kim et al., Nucl. Phys. B 817 (2009) 58-75 [arXiv:0812.3511 [hep-ph]]; J.-H. Huh, J. E. Kim and B. Kyae, Phys. Rev. D 80 (2009) 115012 [arXiv:0904.1108 [hep-ph]].
PART IX
Hubble Space Telescope
EARLY SCIENTIFIC RESULTS AND FUTURE PROSPECTS FOR THE REJUVENATED HUBBLE SPACE TELESCOPE MALCOLM B. NIEDNER Laboratory for Exoplanets and Stellar Astrophysics, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA E-mail: [email protected] www.nasa.gov/centers/goddard Following the extraordinarily successful Servicing Mission 4 (SM4) of Hubble Space Telescope (HST) in May of 2009, the Observatory is now fully equipped with a broad array of powerful science instruments that put it at the pinnacle of its scientific power. Relevant to the subject matter of the Beyond 2010 Conference, HST will be well-placed over the next five-plus years to advance our knowledge of the formation of high-redshift galaxies and their growth with cosmic time; the emergence of structure in the early universe via Dark Matter-driven gravitational instability; and the universe’s expansion history and any resulting implications for the temporal character of Dark Energy. These are fitting projects for the iconic facility now celebrating its 20th anniversary in orbit. Keywords: Universal expansion, Dark Energy, Galaxy formation, Dark Matter.
1. The Goals of Hubble Servicing Mission 4 With an originally planned science lifetime of fifteen years (1990-2005), Hubble Space Telescope (HST) is the one space observatory that from the outset was designed to be serviced by astronauts. The goals of servicing have always been two-fold: to sustain and extend science operations by replacing/upgrading key components of the engineering infrastructure that supports and enables that science, and to install successively more powerful scientific instruments as technology rapidly advances on the ground. It is beyond dispute that servicing has been the key to Hubble’s unique success as an orbiting observatory capable of a continuing wide and diverse range of critical inquiry. It is highly probable—and it is NASA’s formal goal—that HST’s science life has been extended to at least 2014 (and perhaps several years beyond), the enabling event being the unprecedentedly complex and successful Servicing Mission 4 (SM4) in May, 2009. This daunting mission accomplished everything possible in a full fivespacewalk flight to address both the engineering and scientific needs of the telescope. Concerning the former, full sets of six gyroscopes and six batteries were installed, as well as a fine guidance sensor, a science instrument controller, and three panels of thermally insulating material for several of the key electronics bays. Before describing the early new science emerging from the rejuvenated Hubble
(with some looking back when appropriate, in commemoration of Hubble’s 20th anniversary), I will give brief overviews of the scientific objectives of SM4, specifically the installation of two new instruments—WFC3 and COS—and the successful repair of two existing ones—ACS and STIS. The full set of mission objectives, the history leading up to SM4, and the mission itself, are described by Niedner.1 1.1. Wide field camera 3 (new) Wide Field Camera 3 (WFC3) was built as a “facility instrument” (no Principal Investigator) to guarantee Hubble’s high-resolution, wide-field imaging—its signature product—to the end of the science mission. An additional, critical objective was that WFC3 have the capabilities to go beyond the performance of previous Hubble wide-field imagers (Wide Field and Planetary Camera 2, Advanced Camera for Surveys) by offering first-ever panchromatic coverage—from the ultraviolet (UV) through the near-infrared (200-1700 nm)—over a wide field and with high sensitivity. The panchromatic performance is shown in the right panel of Figure 1. The heart of the instrument is the detectors in the two science channels: two butted 2k x 4k pixel (160 x 160 arcsecond field of view) UV/blue-optimized CCDs in the “UVIS channel,” which is sensitive over the range 200-1000 nm; and a Hg-Cd-Te 1k x 1k pixel (123 x 135 arcsec) detector in the “IR channel,” with a spectral sensitivity range of 800-1700 nm. Both detectors are state-of-the-art devices and are the end result of many years of development and steady improvement. They provide substantially more than order-of-magnitude gains over past HST instruments in key performance areas. WFC3’s other key advantage over previous imagers is its large and diverse set of spectral elements: 47 filters and 1 grism in the UVIS channel, and 15 filters and 2 grisms in the IR. A more complete technical description of WFC3 is given by Kimble et al.2 and MacKenty et al.3 Instrument details are also given
Fig. 1. (a) The fully integrated Wide Field Camera 3 (WFC3) in the large clean room at NASA Goddard Space Flight Center. (b) Survey speeds of the two WFC3 channels vs. ACS, NICMOS and WFPC2 (removed from HST in SM4). WFC3 brings unique performance in the near-UV/blue and the near-IR.4
in the WFC3 Instrument Handbook maintained by the Space Telescope Science Institute.4 1.2. Cosmic origins spectrograph (new) Prior to SM4, HST spectroscopy had evolved spectacularly from the first-generation Goddard High Resolution Spectrograph (GHRS) and Faint Object Spectrograph (FOS), to the ultra-versatile, and in some ways still unsurpassed, second-generation Space Telescope Imaging Spectrograph (STIS). Even given all its available observing modes, however, one thing was missing from STIS: optimized sensitivity in the UV, particularly the far-UV (FUV). Cosmic Origins Spectrograph (COS) was proposed by Principal Investigator James Green to enhance that missing performance by a factor of at least 10x, and the solution was a simple, elegant design in which light entering COS experiences one optical bounce prior to entering the FUV detector; cf. Green et al.5 and Froning and Green.6 The bounce is off of one of four selectable diffraction gratings that, in addition to dispersing the light, correct for HST’s spherical aberration as well as local astigmatism. The FUV channel nominally covers the spectral range 115-175 nm, but has some useful sensitivity below 100 nm. Its detector is a photon-counting cross delay line device with a CsI photocathode. The second COS channel covers the near-UV (175-320 nm) with four gratings and a CsTe photocathode Multi-Anode Multichannel Array (MAMA). Although optically less simple than the FUV channel, COS’s NUV performance is still faster by 3-4x than that of STIS. Details on COS are given in Refs. 5 and 6 and the COS Instrument Handbook .7 1.3. Advanced camera for surveys (repaired) Advanced Camera for Surveys (ACS) was installed on HST during SM3B in March, 2002, and is its widest field-of-view imager. Its Wide Field Channel (WFC) projects a 202 x 202 arcsecond field onto two butted 2k x 4k CCDs that have sensitivity over the range 380-1000 nm. As ACS was designed primarily for cosmology and lacked a separate “IR channel,” the WFC CCD was red-optimized and has maximum response at ∼ 600 nm (Ford et al.8 ). ACS is probably most well known for the Hubble Ultra Deep Field of 2004 (Beckwith et al.9 ), as well as for wide-field cosmological surveys such as “GOODS” (Giavalisco et al.10 ) and “COSMOS” (Scoville et al.11,12 ). The successful SM4 repair effort for ACS was devoted to the WFC, which during 2002-2007 accounted for ∼ 70% of ACS science and hence was the instrument’s scientific backbone. ACS’s other CCD channel, the narrow-field High Resolution Channel (HRC), was not repaired during SM4. Historically HRC had accounted for 20% of ACS science. The remaining 10% was performed by the instrument’s Solar Blind Channel (SBC), which has worked continuously from 2002 to the present. More details about the nature of the 2007 failure that took down WFC and HRC, as well as the SM4 repair technique that restored WFC, are reported in
Refs. 1 and 13. It is important to note that with their different unique features, as well as important areas of overlap, WFC3 and ACS complement and back each other up beautifully.4,14
1.4. Space telescope imaging spectrograph (repaired) Space Telescope Imaging Spectrograph (STIS) was installed on HST during SM2 in February, 1997. With an array of long slits of different widths to choose from, the availability of low-medium-high spectral resolution, and extremely wide UV-Visible-NIR spectral coverage (121-1100 nm) offered by two UV MAMA detectors (FUV/CsI and NUV/CsTe photocathodes) and a CCD (200-1100 nm), STIS offered the observer powerful diffraction-limited 2-d spectroscopy and arguably became HST’s most versatile instrument (my judgment; cf. Woodgate et al.15 and Kimble et al.16 for instrument details). Its two signature achievements (pre-SM4) were almost certainly: 1.) the efficiency with which it mapped the velocity fields of galaxies and detected supermassive black holes (SMBHs) in their nuclei, leading to the view that most, if not all galaxies possess SMBHs that have a role in their formation and/or evolution;17,18 and 2.) the highly unanticipated commencement of the study of exoplanetary atmospheres in transiting star-planet systems.19,20 The failure of a 5V power converter took STIS down in August, 2004. Because of its uniqueness and very high degree of complementarity with the (then) future COS, planning for the SM4 repair of STIS began almost immediately, and by late 2005 was well underway. For more about the details of the 2004 STIS failure and the techniques that successfully restored it during SM4, refer to Refs. 1 and 21. In addition to Refs. 15 and 16, a full description of STIS is found in the Instrument Handbook.22
1.5. Near infrared camera and multi-object spectrometer (untouched in SM4) This fifth HST instrument—which in the most recent observing cycle was responsible for ∼ 2% of the total HST observing time—was not worked on or touched by SM4 astronauts. At the current time the mechanical cryocooler that cools the IR detectors to ∼ 77K is not running due (apparently) to trace amounts of water ice which have migrated to problematic areas in the Neon circulator loop that cools the instrument. HST Program has developed a technique to flush the loop of its current Ne-H2 O-other content and refill it with pure Ne from the on-board refill tank. Before WFC3 was installed and largely subsumed its science, NICMOS was a highly used, critical asset on HST, its most notable achievements being: the precise photometry of high-redshift Type Ia supernovae vital for HST Dark Energy work (cf. Riess et al.23,24 ); and its important contribution to probing the molecular content of exoplanetary atmospheres (Swain et al.25 ).
Fig. 2. The Early Release Observations (EROs) produced by WFC3, released on September 9, 2009.
2. Early Post-SM4 Science and Prospects, with a Cosmological Bent Although a year has elapsed since SM4 at the time of this writing, the rejuvenated Hubble is still quite young and observers have not had much time to analyze and publish their data since the multi-month Servicing Mission Observatory Verification (SMOV) activities completed in the fall of 2009. Even so, refereed published papers are beginning to appear, and it is quite clear that HST is significantly more powerful than it ever has been. For the remainder of the paper I will concentrate on three subject areas: (1) Refining H0 and constraining Dark Energy (2) Going the distance: searching for and characterizing the most remote galaxies (3) Probing Dark Matter distribution through Strong and Weak Lensing First, however, it is worth reliving the first public WFC3 images to come out of SM4, its “Early Release Observations” (EROs). They are shown in Figure 2. 2.1. Refining H0 and constraining dark energy At the risk of oversimplification, I take the view that between the mid-1990s and the present, the Hubble constant, H0 , and Dark Energy have had two distinct,
important, and rather different “encounters” in HST-based research. The second encounter is ongoing. 1994-2001. “Determination of the Extragalactic Distance Scale” was a Key Project (J. Mould, Principal Investigator) at HST’s 1990 launch; its goal was the measurement of H0 to an accuracy of 10%, a cosmologically critical task for establishing the age and scale of the universe. A similar program, called the HST Supernova Calibration Program (A. Saha, PI), had the same overall objective. Pre-HST, measuring H0 was a daunting issue due to the great difficulties in determining accurate galaxy distances. Recession velocities were of course well measured and not a noise term of any significance for H0 determinations. As a result of the distance measurement problem (the same issue that plagued Edwin Hubble), astronomers disagreed about H0’s value by a factor of ∼ 2x (∼ 50 km s−1 Megaparsec−1 [Mpc] vs. ∼100 km s−1 Mpc−1; cf. Sandage26 for a discussion). The H0 Key and Supernova Calibration Projects aimed to take direct advantage of HST’s combination of superb angular resolution and sensitivity—after installation of the Wide Field and Planetary Camera 2 (WFPC2) in 1993 vanquished HST’s spherical aberration problem—to perform accurate photometry of distance-yielding Cepheid variable stars over a local volume of galaxies significantly larger than had been previously possible. Although these “Hubble Cepheids” would not by themselves reach far enough to suppress to an acceptably low level the random velocities of galaxies as a fraction of the universal expansion out to the farthest Cepheid distances, they would permit the calibration of much more luminous, far-reaching secondary distance indicators present in the HST-observed “Cepheid galaxies.” Thus could be obtained accurate distances out into the true “Hubble flow,” producing a well-determined value for H0 (Freedman et al.;27 Sandage et al.28). Among the several secondary distance methods and indicators were Type Ia supernovae (SNe), the most accurate yardstick for distance determination across an appreciable fraction of the observable universe.23,24,27,28 Preliminary results from the Key Project produced H0 = 80 +/- 17 km s−1 Mpc−1 (Freedman et al.,29 Kennicutt et al.30), but this raised a profound question. Specifically, bringing the H0-inferred (expansion) age of the universe into line with the estimated ages of the oldest (globular cluster) stars seemed to require all three of the following to be true:29,30 1.) the universe is open, with low matter density, ΩTotal = ΩMatter ≤ 0.3; 2.) H0’s true value is at the low end of the error-bracketed range (i.e., in the low 60s); and 3.) the ages of the oldest stars are on the low end of their estimated range. In a closed, all-matter Einstein-de Sitter universe, however, the problem with the Key Project H0 result was profound: the expansion age and oldest stellar ages disagreed by ∼ 6 Gyr (universe younger29,30). As Freedman et al. and Kennicutt et al. stated, a possible way out of the dilemma was through the agency of a non-zero cosmological constant (Λ), but no data directly supporting this scenario were presented. The Saha et al. team, in the meantime, had derived a preliminary value of H0 = 58 +/- 7 km s−1 Mpc−1,31 and what was noteworthy was that using the same
methods and many of the same targets, the two teams using HST were deriving values of H0 that were still in disagreement at the 25% level. The Key Project’s final result27 was H0 = 72 ± 8 km s−1 Mpc−1, and that of the Supernova Calibration Project28 was H0 = 62.3 ± 5.2 km s−1 Mpc−1. The Key Project, in particular, would have simply confirmed and reinforced the just-cited age dilemma had not new results already emerged on H0’s time variability that radically changed our understanding of the universe’s expansion history, and in the process resolved the expansion age vs. stellar age conflict that otherwise existed for an H0 in the low 70s.
Making use of Type Ia SNe observations to pin down the value of the cosmological “deceleration parameter,” Riess et al.32 and Perlmutter et al.33 had found—mostly on the strength of ground-based data, but also with a few higher redshift, greater accuracy, and hence high-value HST observations—that the local universe is not decelerating at all, but accelerating (by “local” is meant out to several Gyr lookback time). Acceleration, if it survived the test of future observations, meant that Einstein’s famous Λ was “back in play,” and that locally at least, gravity was being defeated by an unknown dominant repulsive force in the vacuum.32,33 Not knowing whether the repulsive force was constant in cosmic time (i.e., whether it actually was Λ)—let alone any of the physics behind it—it was simply named “Dark Energy” (DE). Later HST observations of high-redshift Type Ia SNe provided strong confirming evidence of a currently accelerating universe at the level of ΩΛ ∼ 0.7 (e.g., Riess et al.34 and Knop et al.35).
To sum up the first “encounter” between H0 and DE in the HST experience, the vexing combination of Kennicutt et al.’s30 H0 (consistent with the later Freedman et al. result27) and the obvious insistence that the oldest stars be no older than the universe was, in a sense, “saved” by the discovery of Dark Energy: a flat universe (Ωtotal = 1) with H0 ∼ 72 km s−1 Mpc−1 and ΩΛ ∼ 0.7 placed the universe and oldest stars in the same age box of 14 ± 1 Gyr. The age conflict was resolved, but it came at the expense of a profound puzzle for physics and astrophysics (a joyful gift to some?): what exactly is Dark Energy?
2009-2010. The second HST H0–DE encounter is ongoing, and it represents the very different, and in some sense “reversed” interaction (at least to this author), in which further refinements of H0 have strong implications for DE. A team led by Adam Riess is undertaking this effort, called “Supernovae and H0 for the Equation of State” (or SH0ES), and the remainder of this section will briefly discuss recently obtained results, the factors that made them possible, and the prospects for further advances using the post-SM4 HST. Given WMAP’s 5-year measurement of the product ΩMatter H0² to 5% accuracy (Komatsu et al.36), a significant reduction of Freedman et al.’s27 10-11% uncertainty in H0 to ≤ 5% can by itself substantially constrain the uncertainty in the Dark Energy equation of state, w = P/(ρc²), and specifically test whether Λ is supported by a finding of w = -1 to within the errors (Riess et al.24). To jump to the conclusions, Riess et al.’s SH0ES Team improved our knowledge of H0 to 74.2 ±
Fig. 3. Probability contour diagram from Riess et al.,24 showing the intersection of the WMAP 5-year results and those of the SH0ES program in the w–H0 plane. The SH0ES result of H0 = 74.2 ± 3.6 km s−1 Mpc−1 produces a most likely equation-of-state parameter w = −1.12 ± 0.12, which is consistent with −1 and the cosmological constant Λ. The innermost contour denotes a confidence level of 68%.24
3.6 km s−1 Mpc−1 (4.8%), and with the WMAP 5-year results and the assumption of a constant equation of state w and a flat universe, they derived w = −1.12 ± 0.12 (refer to Figure 3), which is both consistent with a cosmological constant and has smaller error bars (by half) than other data combinations such as WMAP + Baryon Acoustic Oscillations (BAO) and WMAP + Freedman et al.’s H0 (cf. Ref. 24). Note that the SH0ES result agrees with Freedman et al.’s27 H0 measurement. Removing the constraints of a flat universe with constant w, and utilizing BAO data, WMAP 5-year results, and high-redshift (z) SNe, Riess et al.24 found that their 2.2x reduction in H0 uncertainty led to a run of w(z) whose values in three separate redshift bins out to z = 1.8 were all consistent with Λ, showing no evolution with z to within the errors. As a further demonstration of the importance of increasing accuracy in H0, Riess et al. determined that the Dark Energy “figure of merit” (FoM)—in this instance the inverse product of the uncertainties of w in the three redshift bins—increased three-fold compared to when the Freedman et al. value of H0 and its uncertainty were used. The message: highly accurate knowledge of H0 matters.
It is important to describe how HST and the SH0ES program produced such an improved result for H0, because it illustrates the enormous improvements that have been made in Hubble through successive servicing missions, as well as improvements in observational strategies and opportunities. Furthermore, SH0ES is broadly indicative of the future progress that HST will make in the study of Dark Energy.
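To make the age arithmetic behind the “dilemma” concrete, the expansion age follows directly from H0 once a cosmology is assumed. The sketch below is illustrative only (round numbers, not values taken from the papers cited above); it compares a matter-only Einstein-de Sitter universe with a flat ΛCDM universe for H0 = 72 km s−1 Mpc−1.

import numpy as np

# Hubble time in Gyr for H0 given in km/s/Mpc (1 Mpc = 3.086e19 km, 1 Gyr = 3.156e16 s)
def hubble_time_gyr(h0):
    return (3.086e19 / h0) / 3.156e16

def age_einstein_de_sitter(h0):
    # Flat, matter-only universe: t0 = (2/3) / H0
    return (2.0 / 3.0) * hubble_time_gyr(h0)

def age_flat_lcdm(h0, omega_m=0.3):
    # Flat LCDM: t0 = (2 / (3 H0 sqrt(OmegaL))) * arcsinh(sqrt(OmegaL / OmegaM))
    omega_l = 1.0 - omega_m
    return (2.0 / (3.0 * np.sqrt(omega_l))) * np.arcsinh(np.sqrt(omega_l / omega_m)) * hubble_time_gyr(h0)

print(age_einstein_de_sitter(72.0))  # ~9 Gyr: younger than the oldest globular clusters
print(age_flat_lcdm(72.0))           # ~13 Gyr: compatible with the oldest stellar ages

With ΩΛ ∼ 0.7 the same H0 that is fatal for a matter-only universe yields an expansion age compatible with the oldest stars, which is the resolution described above.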
Freedman et al.27 observed their Cepheids with the Wide Field and Planetary Camera 2 (WFPC2), the iconic instrument that, upon its installation on HST in December 1993 during SM1, instantly vanquished spherical aberration and brought diffraction-limited wide-field imaging to Hubble for the first time (Holtzman et al.37). Using WFPC2 and Cepheids, Freedman et al. derived galaxy distances and calibrated the secondary distance standards within them, such as Type Ia SNe, using Large Magellanic Cloud (LMC) Cepheids as their “anchor” to put the distances on an absolute scale. As Freedman and her colleagues noted, the 8 km s−1 Mpc−1 error bars on H0 were driven, in part, by WFPC2 instrumental issues such as error in the photometric zero point. Uncertainty in the distance to the “anchor,” the LMC, also was a substantial error term.24,27
The Advanced Camera for Surveys (ACS) was installed on HST in March 2002 during SM3B. It offered enormous advantages over WFPC2: improved photometric properties, higher sensitivity, and twice the angular resolution and areal field of view. By extending to ∼30-35 Mpc the distance at which Cepheids are resolved and measurable, vs. ∼20 Mpc for WFPC2,23,24 a much greater statistical sample of Cepheids was available and the probability that a new (or recently measured) Type Ia supernova would occur (had occurred) within Hubble’s “Cepheid radius” was much increased. Those were some of the differences in instrumental factors between Freedman et al.’s and Riess et al.’s H0 determinations, and they strongly favored Riess et al.’s use of ACS. Refer to Figure 4.
There is more to the story than the use of ACS, however. While SH0ES has taken
Fig. 4. (a) ACS frame of NGC 3021, a spiral galaxy supernova host. Cepheids (circled) were detected by ACS, with follow-up observations by NICMOS, whose field of view is indicated by the squares; from Refs. 23 and 24. (b) Ground-based images showing six galactic hosts of “high-quality” Type Ia supernovae of particular interest to the SH0ES team (SN 1981B, SN 1990N, SN 1994ae, SN 1995al, SN 1998aq, SN 2002fk).23,24 Note that SN 1995al (lower left) occurred in NGC 3021 (left frame).
full advantage of ACS to detect and accurately characterize Cepheids to greater distances in an efficient manner, it has also utilized the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) to suppress errors due to dust extinction of light and chemical variations among Cepheids, the seriousness of both of which is strongly reduced in the near-IR as compared to the visible (cf. Refs. 23 and 24 for particulars). One final detail in SH0 ES’ “tools and methods” should be mentioned: the choice of distance anchor. Riess et al. chose the maser host, accretion galaxy NGC 4258, as their anchor, the advantage of which was the unprecedented (for an extragalactic object) 3% accuracy of its measured distance. This derives from the fact that the Doppler-shifted water maser clouds revolving around the central black hole (BH) have very well measured velocities, proper motions, and angular distances from the BH; hence the distance calculation is one of those rare cases in which geometry is all that is required (Argon et al.38 ). Moreover, NGC 4258 is close enough at 7.2 Mpc that its Cepheids are easily observed, and in great numbers. In contrast, Freedman et al.27 used the LMC as their anchor, the distance of which was known only to within 5%.24,27 As Riess et al.24 argue, further progress can be made in the accuracy of H0 , and the SH0 ES Team is currently using the higher sensitivity, larger field, and superior photometric properties of HST’s WFC3/IR channel, to improve the earlier measurements made with NICMOS. The goal is to reduce the error on H0 from 4.8% to ≤ 3%. It is expected that in future years additional well-measured maser systems will become available, eliminating the current reliance on one anchor and reducing the errors via averaging. Establishing absolute distances of multiple maser systems is a goal of the Megamaser Cosmology Project.39,40 The SH0 ES Team has shown through simulation that an H0 known to 1% accuracy (a lofty but perhaps achievable future goal), when used with data from aggressive Type Ia SNe and BAO observing programs, can increase the Dark Energy FoM enormously. Theoretical understanding of DE will almost certainly depend on progress being made toward a much better observational characterization of it than exists today, and HST will surely be a player in these future studies. The reader is referred to Frieman et al. 41 for a current review of Dark Energy research.
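The geometric character of the NGC 4258 anchor can be illustrated with a one-line estimate (a sketch with rounded, assumed numbers rather than the actual fitted values from Ref. 38): for maser clouds on circular orbits, the transverse velocity of the near-side masers equals the orbital speed measured spectroscopically at the disk tangent points, so the distance is simply that speed divided by the observed proper motion.

# Rough geometric maser distance D = v / mu (illustrative numbers only)
v_orbit_km_s = 1000.0        # assumed orbital speed of the maser clouds (km/s)
mu_uas_per_yr = 30.0         # assumed proper motion of the near-side masers (micro-arcsec/yr)

path_km_per_yr = v_orbit_km_s * 3.156e7             # transverse distance covered per year
mu_rad_per_yr = mu_uas_per_yr * 1e-6 / 206265.0     # proper motion in radians per year

distance_mpc = (path_km_per_yr / mu_rad_per_yr) / 3.086e19
print(distance_mpc)          # ~7 Mpc for these inputs, i.e. the right order for NGC 4258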
2.2. Going the distance: searching for and characterizing the most remote galaxies
Though not among the original goals of the 1977 NASA/HST Announcement of Opportunity, the detection and characterization of ever more distant galaxies has for fifteen years been a high-profile and hugely productive HST effort whose motivation has, in part, been tied to the need to understand the population of local galaxies (those in the “here and now,” such as the Milky Way and Andromeda galaxies) as the end products of galaxy formation and evolution from the earliest of cosmic times.
Other reasons exist for the search of the earliest galactic systems, among them understanding the process(es) by which the so-called “Dark Ages”—during which the intergalactic medium (IGM) consisted of pure neutral species dominated by H and He—ended via the Dark Matter-driven emergence of the first stellar systems and perhaps accreting black holes (BH) (Fan et al.;42 Loeb;43 White and Rees44 ). FUV photons from either or both the fledgling galaxies and black holes would have ionized the IGM, if copious enough. Among the questions are: were galaxies alone sufficient, or were BHs also needed, and what were their fractional contributions to reionization? Understanding the physical details, the central epoch, and the cosmic time interval over which this last major change of state—the reionization of the universe— occurred is a major focus of astrophysics today.42,43 The post-SM4 HST is more capable than ever to address these questions and will help lead the way for the James Webb Space Telescope (JWST; Gardner et al.45 ) to probe even deeper toward the Big Bang as result of its substantially larger mirror and operation at longer IR wavelengths. The original effort to go as deep as possible with HST was made with WFPC2 (Williams et al.46 ). For ten consecutive days HST and WFPC2 were pointed at an apparently blank piece of sky, and the summed exposures through various filters produced an iconic image that was anything but blank: the “Hubble Deep Field” (HDF) recorded several thousand remote galaxies, some at redshifts > 5 (cf. review by Ferguson et al.47 ), which, with the later Λ-Cold Dark Matter (ΛCDM) concordance cosmology and H0 = 74.2 + 3.6 km s−1 Mpc−1 ,24 represents light emitted only ∼ 1 Gyr after the Big Bang. A later “HDF-South” was observed with WFPC2, NICMOS, and STIS, and NICMOS observations were also made of the original HDF-North (cf. Ref. 47 for an HDF review). A word about redshift determination for these remote, challenging targets is in order. As the most distant galactic objects detected in the Hubble Deep Fields are too faint for direct spectroscopic redshift measurement—even with behemoth ground-based telescopes—the now standard photometric “dropout” technique utilizing filtered imaging and the virtual disappearance of rest-frame flux below 91.2 and 121.5 nm due to absorption by intra- and inter-galactic hydrogen, is used to calculate “photometric redshifts” of usually good accuracy (cf. Steidel et al. 48 for discussion of the technique, and to the right panel of Figure 5 for an illustration of dropouts in the HUDF09, discussed below). A major advance beyond the HDFs was made in 2004 by ACS and NICMOS in the Hubble Ultra Deep Field (HUDF04; Beckwith et al.9 ). Contained in the single ACS field-of-view, ultra-deep “core sample,” also observed at depth with NICMOS, were some 10,000 galaxies, the faintest and reddest of which reached z ∼ 7 (Bouwens et al.49 ) and probed to within ∼ 800 Myr of the Big Bang. By a variety of techniques and lines of evidence, reionization of the universe is thought to have started at z > 11 and was essentially complete by z ∼ 6 (e.g., Ref. 42). This of course means that the first stars and galaxies (or their subunits)
were assembling at z > 10 and that the HUDF04, for all it achieved by reaching z ∼ 7, had not probed any but the tail-end of the reionization interval. Because of the undoubted (but unquantified) role galaxies played in the reionization process, and also because of the need to observe even earlier and younger galactic objects from the viewpoint of galaxy formation theory, there was (there always is) a profound need to go deeper, farther, and earlier. WFC3’s IR channel has already shown that it has the performance capability to do just that, as well as to bridge the HST and JWST eras of observational cosmology. G. Illingworth is the PI of a large HST program to repeat the HUDF04 using WFC3/IR and push to redshifts well beyond z = 7; the product is referred to as “HUDF09,” and is shown in Figure 5. With its large field, excellent sensitivity, and superior photometric properties as compared with NICMOS (installed on HST in 1997), WFC3/IR has observed the identical ACS/NICMOS HUDF04 field and pushed the limits of galaxy detection well beyond what was achieved by the earlier instruments. A flood of investigations has been triggered as a result of the non-proprietary nature of Illingworth et al.’s data, and space does not permit doing justice to the full set of published and soon-to-be published papers. Several things about WFC3 are clear, however: 1.) the cosmic distance limit for galaxies has been considerably extended by WFC3/IR; and 2.) the efficiency in detecting galaxies at z ∼ 7 is enhanced (i.e., is faster) by some 40-50x compared to NICMOS (Oesch et al., 50 Bouwens et al.,51 McLure et al.52 ). Clearly, WFC3 is everything that had been hoped for.
Fig. 5. (a) The WFC3/IR summed image comprising the Hubble Ultra Deep Field (“HUDF09”), in which galaxy light as far back as 600 Myr after the Big Bang, and perhaps only 500 Myr, is recorded from the most remote galaxies (from the Illingworth et al. program). (b) In postage-stamp cutouts, HUDF04+09 filter sequences show the photometric “dropout” technique used to derive redshift for two of the highest redshift galaxies. Rest-frame galaxy radiation shortward of the H ionization edge at 91.2 nm and the strong Lyman-α line at 121.5 nm is essentially completely absorbed by intervening H, leading to zero observed flux out to those redshifted wavelengths. Fluxes longward of the dropout are detected and can be modeled to derive a “photometric redshift.” From Bouwens et al.51
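A rough feel for the dropout arithmetic illustrated in Figure 5 can be obtained from the redshift stretch alone (a sketch; the redshifts are chosen only for illustration): the rest-frame Lyman-α break at 121.5 nm is shifted by a factor (1 + z), so a z ≈ 8 galaxy disappears from all filters blueward of roughly 1.1 µm and is recovered only in the WFC3/IR bands.

# Observed wavelength of the redshifted Lyman-alpha break (illustrative)
lyman_alpha_nm = 121.5

def observed_break_nm(z, rest_nm=lyman_alpha_nm):
    # Cosmological redshift stretches the rest-frame wavelength by (1 + z)
    return (1.0 + z) * rest_nm

for z in (4, 6, 8, 10):
    print(z, observed_break_nm(z))  # 607.5, 850.5, 1093.5 and 1336.5 nm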
Fig. 6. The HUDF09 extension of galaxy detection to z ≥ 8 has shown that the earlier trends in galaxy evolution over the range z = 4-7 continue to z ≥ 8, and perhaps to z ∼ 10. Plotted is the observed UV luminosity density in the co-moving frame (right ordinate) and the associated, inferred star formation rate in solar masses per year (left ordinate), as functions of z. The lack of vertical error bars out to z ∼ 6 reflects the fact that the luminosity functions are very well determined through high-number statistics, whereas at z > 7 the smaller numbers of detected objects limit the accuracy. The three highest redshift data points, at z ∼ 7, 8, and 10, are from Refs. 50 and 51, and Ref. 55, respectively, but the latter is currently pending referee review. Note that the lower blue band contains as-observed galaxies without correction for dust extinction, whereas the upper band contains the dust correction. At high z the bands overlap due to the absence of dust and “metals.” Figure from Ref. 55.
Not all HUDF-WFC3/IR investigation teams are coming to the same conclusions, but there is apparent agreement that the ease with which the z = 7-8.5 range has been reached by WFC3 is permitting some conclusions to be drawn about at least the bright end of the galaxy luminosity function (LF) at 600 Myr after the Big Bang, and in particular about the continuation of evolution seen over the lower z = 4-6 redshift range by WFPC2 and ACS/NICMOS in the HDFs and HUDF04, out to z = 7-8+. The reader is referred to Refs. 50, 51, and 52, as well as to Fig. 6. Specific conclusions in Refs. 50 and 51 are that the z = 7-8.5 galaxies are extremely small, compact objects with scale size < 1 kpc, and that in the rest frame they are extremely blue—indicative of a severe deficiency of the heavy-element “metals” that are created in successive waves of star formation from the epoch of the first stars to the present. Furthermore, the characteristic galaxy luminosity, “L∗”, decreases monotonically toward higher z (also cf. Ref. 52). These results are consistent with the accepted view that hierarchical merging of small galaxies is responsible for the growth of the large systems we see today.
WFC3/IR does not observe to faint enough limits at z = 8-9 to answer definitively the question of whether galaxy FUV photons are sufficient for reionization. All that can be said at the moment is that extrapolation of the faint-end LF slope seen at lower redshifts (assumed to hold at higher redshifts) to L/L∗ ∼ 10−4 produces an integrated FUV galaxy flux that may be sufficient for all or a majority of that needed for reionization (Trenti et al.53). The point is that on a relative scale, faint galaxies (L ≪ L∗) were extremely numerous and likely produced most of the FUV
photons responsible for reionization.52 Two last points about the early results coming out of HUDF09 need to be made. First, Labbé et al.54 have combined ACS, NICMOS and WFC3/IR data with IR Spitzer data of the same fields. A substantial number of z = 7-8 objects are seen by Spitzer, which has led to the construction of spectral energy distributions (SEDs) wide enough in wavelength coverage to permit stellar synthesis modeling. The results indicate that these very remote galaxies, seen at ∼600 Myr after the Big Bang, are not young; rather the SEDs point to the existence of stellar populations already 300 Myr old. The second and extremely important point is that z ∼ 10 galaxies are claimed to have been detected in submitted HUDF09 papers by Bouwens et al.55 and Yan et al.,56 though the results are very different (Bouwens et al. detecting only 3 objects vs. 20 by Yan et al.).
2.3. Probing dark matter distribution through strong and weak lensing
Dark Matter (DM) dominates the mass budget of the universe, and in the concordance ΛCDM cosmology its time-dependent distribution drove, and was the major manifestation of, the formation of large-scale structure (cf. Réfrégier57 and references contained therein). The use of gravitational lensing as a DM probe on a variety of spatial scales has become widespread and extremely powerful in recent years. A good example is the work by Newman et al.58 utilizing HST and the Subaru telescope, in which DM is mapped across three decades of distance from the core of Abell cluster 611. Lensing has potential uses besides characterizing DM, including the detection of extremely distant galaxies not possible without the high levels of magnification provided by strong lensing of galaxy clusters (Maizy et al.,59 although cf. Bouwens et al.60 for a somewhat different perspective on which portions of the galaxy LF most benefit from strong lensing).
HST has made fundamental contributions in DM research in the strong lensing (SL) environments of the inner cores of galaxy clusters (e.g., Smith et al.61 and Zitrin et al.;62 also cf. Ref. 58) and in galaxy-galaxy configurations (e.g., Faure et al.;63 cf. Treu64 for a review of SL by galaxies); and in the weak lensing (WL) regime found both in the inner and outer regions of clusters, as well as in the inter-cluster environment (discussed in the next paragraph). Unlike a number of telescopes both on the ground and in space, Hubble is not a survey telescope that maps substantial portions of the sky, but its “wide-field” instruments can, with sufficient tiling, produce deep surveys over fields large enough to beat down cosmic variance. Most notably, the ACS Cosmic Evolution Survey (“COSMOS”) produced a medium-deep survey on 1.64 sq deg of contiguous sky that has been extremely powerful for lensing studies, among other things. Massey et al.,65 for example, performed a 3D cosmic shear (WL) analysis and were able to trace the time-dependent growth of DM clumping and place stronger constraints on cosmological parameters than when a more limiting 2D projection technique was
used. It is in the detection of multiply imaged background galaxies—strong lensing—that the diffraction-limited, deep, and multi-waveband imaging of Hubble is perhaps uniquely valuable (e.g., Refs. 61 and 62). ACS has been the heart of such studies since its installation on HST in 2002, but now that WFC3 has become part of the instrument complement, SL work by HST will be more powerful than ever as a result of ACS and WFC3 being able to work efficiently in tandem via the ability to observe simultaneously.
As part of its “Early Release Observation” (ERO) program following SM4, the most distant Abell galaxy cluster was observed by the restored ACS. Abell 370, at a redshift of z = 0.375, produces through SL what is probably the most well-known of all lensed “giant arcs.” A lensed background galaxy at z = 0.725, the arc was one of the first strong lines of evidence that clusters had sufficiently concentrated DM in their cores to produce strong lensing (Soucail et al.;66,67 cf. also Ref. 68). Figure 7 shows the ACS observation of A370, as well as a close-up view of the giant arc and several other strongly lensed features. Richard et al.68 have analyzed the ACS imagery in detail, and find that the lensed galaxy in the giant arc is a star-forming spiral whose nucleus is imaged five times along the arc by the foreground cluster. That the arc is continuous derives from the other, extended structure in the spiral galaxy, including the individual blue star-forming regions. In brief, including the background spiral comprising the giant arc, ACS images of A370 capture 10 separate multiply imaged background galaxies that produce 32 total images.68 With such a rich dataset, Richard et al. construct a DM model that, supported by data from the Chandra X-ray Observatory (CXO), demonstrates that Abell 370 is actually two galaxy clusters in collision, their relative velocity vector oriented nearly along the line of sight. HST has detected several other clusters in collision as evidenced in part by their DM distributions, and the reader is referred to Bradač et al.69,70
Fig. 7. (a) Restored-ACS image of Abell Cluster 370. Full-frame view shows the giant cD elliptical galaxies which dominate the cluster (center and right of center), and the “giant arc” (right of the right-most cD). (b) Close-up of some of the strong-lensed features of the cluster, including the giant arc in upper left.
Clearly, HST has much more SL and WL work to do in pursuit of Dark Matter findings, and the future looks very promising.
3. Conclusion
With the addition of two new advanced instruments and the restoration of two others, HST is more scientifically powerful than ever. The three cosmological themes briefly discussed here represent a very small fraction of the total variety of science of which HST is capable, and the work cited is just the beginning of what will come in the three themes that were examined. The author draws particular attention to the recent addition of “Multi-Cycle Treasury Programs” (MCT) to the HST scientific programme. These are very large programs—each comprising at least 450 orbits of data spread over three years (cycles)—that enable large scientific gains not possible with smaller orbit allocations. The combination of MCT and the new/restored instruments should produce huge, multi-use datasets whose benefit to the community will be felt for decades to come.
It is to be noted that the emergence and evolution of large-scale structure is not the domain just of the HST imagers through DM lensing studies. The Cosmic Origins Spectrograph (COS)—not discussed in detail here because its science still lies ahead—will probe the cosmic web of baryonic (and dark) matter and make major discoveries fit for presentation at a future conference in this series. In summation, the next five or more years with HST look to be as exciting as, or more exciting than, the first twenty have been.
Acknowledgments
The author serves as NASA’s Observatory Project Scientist for HST, and wishes to express his thanks to Garth Illingworth, Rychard Bouwens and Adam Riess for their contribution of figures to the paper; to Zoltan Levay of the Space Telescope Science Institute (STScI) for conversion of several of the images; and to Randy Kimble for a critical reading of the manuscript.
References
1. M. Niedner, in The Critical Path, 17, No. 1, NASA/GSFC Flight Projects Directorate Newsletter (2009). Article available at: http://www.nasa.gov/mission pages/hubble/servicing/SM4/main/HubbleScientificPinnacle.html.
2. R. Kimble, J. MacKenty, R. O’Connell and J. Townsend, Proc. SPIE 7010, 70101E (2008).
3. J. MacKenty, R. Kimble, R. O’Connell and J. Townsend, Proc. SPIE 7010, 70101F (2008).
4. M. Wong, C. Pavlovsky and K. Long et al., Wide Field Camera 3 Instrument Handbook, v. 2.0 (2010), avail. at: http://www.stsci.edu/hst/wfc3/documents/handbooks/currentIHB/wfc3 cover.html
5. J. Green, E. Wilkinson and J. Morse, Proc. SPIE 5164, 17 (2003).
6. C. Froning and J. Green, Astrophys. Space Sci. 320, 181 (2009).
7. W. Dixon et al., Cosmic Origins Spectrograph Instrument Handbook, v. 2.0 (2010), avail. at: http://www.stsci.edu/hst/cos/documents/handbooks/ current/cos cover.html. 8. H. Ford, P. Feldman and D. Golimowski et al., Proc. SPIE 2807, 184 (1996). 9. S. Beckwith, M. Stiavelli and A. Koekemoer et al., Astron. J. 132, 1729 (2006). 10. M. Giavalisco, H. Ferguson and A. Koekemoer et al., Astrophys. J. 600, L93 (2004). 11. N. Scoville, H. Aussel and M. Brusa et al., Astrophys. J. Suppl. Ser. 172, 1 (2007). 12. N. Scoville, R. Abraham and H. Aussel et al., Astrophys. J. Suppl. Ser. 172, 38 (2007). 13. S. Rinehart, E. Cheng and M. Sirianni et al., Proc. SPIE 7010, 70104Q (2008). 14. A. Maybhate et al., Advanced Camera for Surveys Instrument Handbook, v. 9.0 (2010), avail. at: http://www.stsci.edu/hst/acs/documents/handbooks/ cycle18/cover.html. 15. B. Woodgate, R. Kimble and C. Bowers et al., Publ. Astron. Soc. Pacific 110, 1183 (1998). 16. R. Kimble, B. Woodgate and C. Bowers et al., Astrophys. J. 492, L83 (1998). 17. M. Sarzi, H-W Rix, J. Shields, G. Rudnick, L. Ho, D. McIntosh, A. Filippenko and W. Sargent, Astrophys. J. 550, 65 (2001). 18. A. Beifiori, M. Sarzi, E. Corsini, E. Bont` a, A. Pizzella, L. Coccato and F. Bertola, Astrophys. J. 692, 856 (2009). 19. D. Charbonneau, T. Brown, R. Noyes and R. Gilliland, Astrophys. J. 568, 377 (2002). 20. A. Vidal-Madjar, J. D´esert, A. des Etangs, G. H´ebrard, G. Ballester, D. Ehrenreich, R. Ferlet, J. McConnell, M. Mayor and C. Parkinson, Astrophys. J. 604, L69 (2004). 21. S. Rinehart, J. Domber, T. Faulkner, T. Gull, R. Kimble, M. Klappenberger, D. Leckrone, M. Niedner, C. Proffitt, H. Smith and B. Woodgate, Proc. SPIE 7010, 70104R (2008). 22. C. Proffitt et al., Space Telescope Imaging Spectrograph Instrument Handbook, v. 9.0 (2010), avail. at: http://www.stsci.edu/hst/stis/documents/ handbooks/currentIHB/cover.html. 23. A. Riess, L. Macri and W. Li et al., Astrophys. J. Suppl. Ser. 183, 109 (2009). 24. A. Riess, L. Macri and S. Casertano et al., Astrophys. J. 699, 539 (2009). 25. M. Swain, G. Tinetti and G. Vasisht et al., Astrophys. J. 704, 1616 (2009). 26. A. Sandage, Astrophys. J. 331, 605 (1988). 27. W. Freedman, B. Madore and B. Gibson et al., Astrophys. J. 553, 47 (2001). 28. A. Sandage, G. Tammann, A. Saha, B. Reindl, F. Macchetto and N. Panagia, Astrophys. J. 653, 843 (2006). 29. W. Freedman, B. Madore and J. Mould et al., Nature 371, 757 (1994). 30. R. Kennicutt, W. Freedman and J. Mould, Astron. J. 110, 1476 (1995). 31. A. Saha, A. Sandage, L. Labhardt, G. Tammann, F. Macchetto and N. Panagia, Astrophys. J., 486, 1 (1997). 32. A. Riess, A. Filippenko and P. Challis et al., Astron. J. 116, 1009 (1998). 33. S. Perlmutter, G. Aldering and G. Goldhaber et al., Astrophys. J. 517, 565 (1999). 34. A. Riess, P. Nugent and R. Gilliland et al., Astrophys. J. 560, 49 (2001). 35. R. Knop, G. Aldering and R. Amanullah et al., Astrophys. J. 598, 102 (2003). 36. E. Komatsu, J. Dunkley and M. Nolta et al., Astrophys. J. Suppl. Ser. 180, 330 (2009). 37. J. Holtzman, J. Hester and S. Casertano et al., Publ. Astron. Soc. Pacific 107, 156 (1995). 38. A. Argon, L. Greenhill, M. Reid, J. Moran and E. Humphreys, Astrophys. J. 659, 1040 (2007). 39. J. Braatz, M. Reid, L. Greenhill, J. Condon, K. Lo, C. Henkel, N. Gugliucci and L. Hao, Publ. Astron. Soc. Pacific Conference Series 395, 103 (2008).
40. L. Greenhill, P. Kondratko, J. Moran and A. Tilak, Astrophys. J. 707, 787 (2009).
41. J. Frieman, M. Turner and D. Huterer, Ann. Rev. Astron. Astrophys. 46, 385 (2008).
42. X. Fan, C. Carilli and B. Keating, Ann. Rev. Astron. Astrophys. 44, 415 (2006).
43. A. Loeb, Publ. Astron. Soc. Pacific Conference Series 395, 59 (2008).
44. S. White and M. Rees, Mon. Not. Roy. Astron. Soc. 183, 341 (1978).
45. J. Gardner, J. Mather and M. Clampin et al., Space Sci. Rev. 123, 485 (2006).
46. R. Williams, B. Blacker and M. Dickinson et al., Astron. J. 112, 1335 (1996).
47. H. Ferguson, M. Dickinson and R. Williams, Ann. Rev. Astron. Astrophys. 38, 667 (2000).
48. C. Steidel, M. Pettini and D. Hamilton, Astron. J. 110, 2519 (1995).
49. R. Bouwens, R. Thompson, G. Illingworth, M. Franx, P. van Dokkum, X. Fan, M. Dickinson, D. Eisenstein and M. Rieke, Astrophys. J. 616, L79 (2004).
50. P. Oesch, R. Bouwens, G. Illingworth, C. Carollo, M. Franx, I. Labbé, D. Magee, M. Stiavelli, M. Trenti and P. van Dokkum, Astrophys. J. 709, L16 (2010).
51. R. Bouwens, G. Illingworth and P. Oesch et al., Astrophys. J. 709, L133 (2010).
52. R. McLure, J. Dunlop, M. Cirasuolo, A. Koekemoer, E. Sabbi, D. Stark, T. Targett and R. Ellis, Mon. Not. Roy. Astron. Soc. 403, 960 (2010).
53. M. Trenti, M. Stiavelli, R. Bouwens, P. Oesch, J. Shull, G. Illingworth, L. Bradley and C. Carollo, Astrophys. J. 714, L202 (2010).
54. I. Labbé, V. González and R. Bouwens et al., Astrophys. J. 708, L26 (2010).
55. R. Bouwens, G. Illingworth and I. Labbé et al., submitted to Nature (2010).
56. H. Yan, R. Windhorst, N. Hathi, S. Cohen, R. Ryan, R. O’Connell and P. McCarthy, submitted to Astrophys. J. (2010).
57. A. Réfrégier, Ann. Rev. Astron. Astrophys. 41, 645 (2003).
58. A. Newman, T. Treu, R. Ellis, D. Sand, J. Richard, P. Marshall, P. Capak and S. Miyazaki, Astrophys. J. 706, 1078 (2009).
59. A. Maizy, J. Richard, M. De Leo, R. Pelló and J. Kneib, Astron. Astrophys. 109, A105 (2010).
60. R. Bouwens, G. Illingworth, L. Bradley, H. Ford, M. Franx, W. Zheng, R. Broadhurst, D. Coe and J. Jee, Astrophys. J. 690, 1764 (2009).
61. G. Smith, H. Ebeling and M. Limousin et al., Astrophys. J. 707, L163 (2009).
62. A. Zitrin, T. Broadhurst, Y. Rephaeli and S. Sadeh, Astrophys. J. 707, L102 (2009).
63. C. Faure, J-P Kneib and G. Covone et al., Astrophys. J. Suppl. Ser. 176, 19 (2008).
64. T. Treu, Ann. Rev. Astron. Astrophys. 48, 87 (2010).
65. R. Massey, J. Rhodes and A. Leauthaud et al., Astrophys. J. Suppl. Ser. 172, 239 (2007).
66. G. Soucail, B. Fort, Y. Mellier and J. Picat, Astron. Astrophys. 172, L14 (1987).
67. G. Soucail, Y. Mellier, B. Fort, F. Hammer and G. Mathez, Astron. Astrophys. 184, L7 (1987).
68. J. Richard, J. Kneib, M. Limousin, A. Edge and E. Jullo, Mon. Not. Roy. Astron. Soc. 402, L44 (2010).
69. M. Bradač, D. Clowe, A. Gonzalez, P. Marshall, W. Forman, C. Jones, M. Markevitch, S. Randall, T. Schrabback and D. Zaritsky, Astrophys. J. 652, 937 (2006).
70. M. Bradač, S. Allen, T. Treu, H. Ebeling, R. Massey, R. Morris, A. von der Linden and D. Applegate, Astrophys. J. 687, 959 (2008).
PART X
Archeology and Physics
A PHYSICIST’S VIEW - THE DISK OF NEBRA
WOLFHARD SCHLOSSER
Astronomisches Institut der Ruhr-Universität Bochum, D-44780 Bochum, Germany
[email protected]
A physicist dealing with an archaeological object remains a physicist. Any interpretation or working hypothesis concerning this object must comply with the laws of statistics or tolerances of manufacture. It is shown that the Nebra Disk (about 1600 BC) and the much older Goseck circular enclosure (about 4800 BC) represent approximately the same level of astronomical knowledge.
Keywords: Nebra Disk, Archaeoastronomy
1. Introduction
Looters looking for militaria from World War II unearthed this remarkable disk and some other items on the Mittelberg, a large hill near the town of Nebra in Saxony-Anhalt. It then disappeared on the black market. More than two years later (February 2002) it was finally confiscated by the Swiss police in Basel (Meller 2005). It was handed over to its legal owner, the federal state of Saxony-Anhalt, and is now one of the major showpieces at the State Museum of Prehistory in Halle, the largest city of the state (fig. 1). The Nebra Disk was manufactured around 1600 BC. It is made of bronze which turned green due to its long contact with the soil. The original colour is not known. It is not very likely that it had the golden colour of fresh bronze; in that case the gold applications representing sun, moon and stars would not stand out as they should. Probably it had a violet or brownish tint. The disk has a diameter of 32 cm and weighs roughly two kg. A short outlook regarding the astronomical interpretation will be given in the last chapter. This paper is mainly concerned with more fundamental questions regarding the construction of working hypotheses, whether they are of an astronomical nature or not.
2. How Many Points Are Necessary to Discriminate Between Simple Geometrical Figures?
The author received many letters in which people claim to see simple geometrical figures like circles and lines on the Nebra Disk. So the question arises: How many
Fig. 1. The Nebra Disk. (C) Landesamt für Denkmalpflege und Archäologie Sachsen-Anhalt.
points are necessary to make a substantiated guess as to the nature of the figure, i.e. to discriminate between an ellipse and a rectangle? Of course this is not a purely mathematical problem but incorporates things like human pattern recognition as well (Schlosser 2005). Geometrical figures in a plane can be classified by their Degrees of Freedom (DF). A circle has DF=3, because of its centre (x,y) and its radius r. Likewise an ellipse has DF=5 (centre x,y, semi-axes a,b, tilt of the major axis). A rectangle has the same DF. In fig. 2 an ellipse and a rectangle (DF=5+5=10) are shown, where the number of defining points varies between p=14 and p=64. While 14 points allow no substantiated guess, 32 or more will do so. This means a minimum of w=3.2 points per Degree of Freedom. This number is representative for simple non-overlapping figures, but increases to about w=4.5 in the case of more complex situations. With w=4.5 and the p=25 small golden objects on the Nebra Disk (the cluster of golden objects being excluded), one has DF=p/w=25/4.5=5.6. This is an astonishingly small number. It means that (at best) only one ellipse or alternatively two circles could be detected with some confidence - if there were any at all!
3. Do the Varying Diameters of the Small Golden Objects (SGOs) Represent Star Brightnesses?
The diameters of the SGOs vary strongly, and the question arises whether this reflects the varying brightnesses of the stars in the sky. The crosses in fig. 3 give the
Fig. 2. Ellipse and rectangle with varying numbers of defining points.
Fig. 3. Cumulative distributions of SGO diameters and star brightnesses.
diameters of the SGOs in ascending order, likewise the cumulative Gaussian distribution with the same mean value and variance (curve). Both agree reasonably. This is a strong indication that the variation of diameters simply reflects the tolerance of manufacturing the SGOs. Contrary to this, the distribution of star brightnesses is definitely non-Gaussian. There are many faint stars in the sky but only a few bright ones. If we select the hundred brightest stars, again a mean value and its variance can be determined. If the scaling factor is chosen as to correlate the smallest SGOs with the faintest magnitudes and vice versa, the polygon results. There is no correlation whatsoever and we conclude: the diameters of the SGOs do not reflect stellar brightnesses.
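The comparison behind this conclusion can be mimicked in a few lines (a sketch with synthetic stand-in numbers; the actual SGO diameters are not reproduced here): a sample scattered only by a Gaussian manufacturing tolerance agrees with a cumulative Gaussian of the same mean and spread, whereas a faint-dominated, strongly skewed brightness-like distribution does not.

import numpy as np

rng = np.random.default_rng(0)

def max_cdf_difference(sample_a, sample_b):
    # Largest difference between the two empirical cumulative distributions
    grid = np.sort(np.concatenate([sample_a, sample_b]))
    cdf_a = np.searchsorted(np.sort(sample_a), grid, side="right") / len(sample_a)
    cdf_b = np.searchsorted(np.sort(sample_b), grid, side="right") / len(sample_b)
    return np.max(np.abs(cdf_a - cdf_b))

# Stand-ins: 25 'diameters' scattered by a Gaussian manufacturing tolerance,
# versus 25 values from a skewed, faint-dominated distribution.
diameters = rng.normal(loc=1.0, scale=0.15, size=25)
skewed = 0.7 + rng.exponential(scale=0.15, size=25)
gaussian_reference = rng.normal(loc=diameters.mean(), scale=diameters.std(), size=10000)

print(max_cdf_difference(diameters, gaussian_reference))  # typically small: consistent shape
print(max_cdf_difference(diameters, skewed))              # typically clearly larger: shapes disagree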
4. Do the SGOs Depict Constellations?
It is obvious that the group of seven SGOs depicts an asterism. But what about the remaining 25? Do they represent constellations? The centre of fig. 4 shows their distribution. To its left and above, twelve randomly generated patterns are shown; to its right and below, twelve drawn by test persons. The differences are striking. While those generated by the computer show a marked clustering, humans tend to distribute the points more evenly. If we denote by δ the mean distance of the points to their nearest neighbours and by σ the standard deviation of the individual distances, then σ/δ is an appropriate measure for the amount of order of the distribution of points. This value is largest for the randomly generated patterns (σ/δ = 0.50), less for the humans (0.31), and smallest for the Nebra Disk (0.17). In the case of a perfectly ordered pattern σ/δ would be zero, because the individual distances to the next neighbours do not vary (σ = 0). As can be seen, the 25 stars of the Nebra Disk show some kind of ’ordered disorder’. The reason seems clear. Too much disorder, as in the case of the randomly generated patterns, inevitably produces clusters, which may be interpreted by the eye as a ’constellation’ and distracts it from the ’intended constellation’ of the seven stars, in the following interpreted as the Pleiades. Too high an order would not give an impression of the starry sky around the Pleiades. It was pointed out that a random distribution yields a σ/δ of 0.5, which is the inverse of 2 - the dimension of the plane in which the points lie. Similar tests in spaces of three dimensions and more indeed gave values for σ/δ close to 1/dimension. No strict mathematical proof seems to have been given so far for this easy-to-formulate question (Rooch 2008).
5. Tolerances of Manufacturing the Disk
Any working hypothesis concerning the Nebra Disk has to comply with its accuracy: poor manufacture excludes sophisticated ideas. As may be inferred from fig. 1, some golden applications are close to perfect while others are not. The big round object approaches a circle within σ = 0.3 mm while the outer hexagon of the Pleiades is a poor approximation (σ = 4.12 mm). The latter value is the worst observed on the disk and will therefore serve as the criterion for any working hypothesis.
6. Astronomical Interpretation
There can be no doubt that the disk tells an astronomical story of the Bronze Age. The cluster of seven SGOs depicts with some certainty the Pleiades. This asterism played an important role for farmers and sailors worldwide - from ancient Greece to contemporary Lithuania. The sickle probably stands for the young moon, the round object may be the full moon. This makes sense, because even in recent times Lithuanian farmers used the Pleiades in the evening sky (the part of the sky where the young moon appears) and in the morning sky (the domain of the full moon) for
Fig. 4. The distribution of stars on the Nebra Disk in comparison with those generated at random and by test persons (see text).
agricultural predictions. Taking into account all influences of the atmosphere like extinction and sky brightness, these conjunctions took place around March 10 and October 17 in the Bronze Age. Both these dates approximately mark the limits of the agricultural year for Central Europe. There is another interesting idea concerning the conjunction of the young moon and the Pleiades (Hansen 2006). If a lunar calendar is used, the young moon marks the beginning of a new month. Furthermore, the thickness of its sickle corresponds to its distance from the sun and therefore - if observed close to the Pleiades - to the sun’s position in the ecliptic, i.e. the progress of the solar year. Around 700 BC, Babylonians indeed used this phenomenon for intercalation purposes, as the MUL.APIN cuneiform tablets report. If there was in Central Europe any interest in bringing together the solar and lunar year (which we do not know), this method would have worked there as well. The two golden arcs on both sides of the disk (one having been removed in antiquity) look at first sight like ornaments: two quarters of the circumference covered with gold, two quarters not. However, their lengths differ from a quarter of the circumference by more than the tolerance specified in chapter 5, so this difference was intended. As seen from the centre of the disk they span an angle of 82.7 degrees. Furthermore, they are shifted somewhat to the bottom of fig. 1, making the remaining two angles asymmetric by about five degrees. The angle of roughly 83 degrees is well known in archaeoastronomy: it is the range of azimuths the sun can reach over
Fig. 5. Geographical latitude of optimum performance and observed part of the sun as derived from the golden arcs (cross).
the year for the latitudes of Saxony-Anhalt. Since the earth is surrounded by an atmosphere and the sun is a disk of an angular diameter of 0.5 degrees, the angle of asymmetry tells us which part of the sun was considered as rising or setting (fig. 5). It was the upper limb, seen from a latitude of 52.2 degrees. This latitude is only one degree larger than that of the site where the disk was found. The Mittelberg area is one of Europe’s largest Bronze Age cemeteries, with hundreds of burial mounds. From the highest point of the Mittelberg another interesting observation can be made: at the summer solstice the sun sets behind the Harz mountain range with its summit, the Brocken, and on the First of May behind the Kyffhäuser.
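The “solar” character of the roughly 83-degree angle can be checked with elementary spherical astronomy (a sketch; refraction and the finite solar diameter are ignored, which is why the bare geometry comes out a couple of degrees smaller than the upper-limb value quoted above): a body of declination δ rises at azimuth A = arccos(sin δ / cos φ) for latitude φ, and the swing between the solstices follows directly.

import math

def rise_azimuth_deg(declination_deg, latitude_deg):
    # Azimuth from north of a rising body at altitude zero (no refraction)
    d = math.radians(declination_deg)
    p = math.radians(latitude_deg)
    return math.degrees(math.acos(math.sin(d) / math.cos(p)))

latitude = 51.3    # roughly the latitude of the Nebra region
obliquity = 23.9   # approximate obliquity of the ecliptic around 1600 BC

swing = rise_azimuth_deg(-obliquity, latitude) - rise_azimuth_deg(+obliquity, latitude)
print(swing)       # ~81 degrees; upper-limb and refraction corrections push this
                   # toward the ~83 degrees spanned by the golden arcs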
Fig. 6. As seen from the site where the disk was found, the sun sets at two important dates behind two prominent mountains.
Fig. 7. The reconstructed circular enclosure of Goseck. (C) Landesamt für Denkmalpflege und Archäologie Sachsen-Anhalt.
Fig. 8. The golden arcs of the Nebra Disk and the palisade openings of the Goseck enclosure (lines) have the four solstice azimuths in common.
Both mountains are to this day regarded as mystical, with many legends connected to them. The summer solstice and the First of May are special days as well, the latter being known in ethnology as Beltaine (fig. 6). Not far from Nebra, the Goseck circular enclosure was built 3000 years earlier, in Neolithic times. It has a diameter of 60 m and is now reconstructed and open to the public (fig. 7). Openings in the palisades correspond to days in the year which partly coincide with those of the Nebra Disk (fig. 8). So the Nebra Disk tells us a story from the Bronze Age which can be traced back to the Stone Age.
References
1. R. Hansen, ”Die Himmelsscheibe von Nebra - neu interpretiert”, Archäologie in Sachsen-Anhalt 4 (2006) 289.
2. H. Meller (ed.), ”Der geschmiedete Himmel”, Theiss, Stuttgart (2005).
3. A. Rooch, ”Summen von Minimalabstandsfunktionen” (Diploma Thesis), Ruhr-Universität, Bochum (2008).
4. W. Schlosser, ”Zur Erkennbarkeit unvollständiger Muster”, Jahresschrift für Mitteldeutsche Vorgeschichte 85 (2005) 99.
EXOTIC ARCHAEOLOGY: SEARCHING FOR SUPERHEAVY ELEMENTS IN NATURE AND DATING HUMAN DNA WITH THE 14C BOMB PEAK
WALTER KUTSCHERA, FRANZ DELLINGER, JAKOB LIEBL and PETER STEIER
Vienna Environmental Research Accelerator (VERA), Faculty of Physics — Isotope Research, University of Vienna, Austria
[email protected]
This contribution conveys the power of accelerator mass spectrometry (AMS) to measure ultra-low traces of long-lived radionuclides in two highly diverse fields: astrophysics and molecular biology. Our search for nuclides of superheavy elements (SHE) in several natural materials did not confirm the claims of positive evidence for SHEs reported by the group of Amnon Marinov from Jerusalem, even though the sensitivity of our AMS measurements was several orders of magnitude higher. We also report on the investigation by the group of Kirsty Spalding from Stockholm to date human DNA with the 14C bomb peak. This allows one to determine retrospectively the birth date of cells in sections of the human body. Ongoing efforts to miniaturize carbon samples down to the level of 10 µg C for AMS measurements will allow one to venture into ever smaller subsections of the human brain.
1. Introduction
The advancement of accelerator mass spectrometry (AMS) some 30 years ago opened the possibility to detect both natural and man-made, long-lived radionuclides down to isotopic abundance levels of 10−16. It literally became possible to explore our world atom by atom in almost every section of the environment.1 A well-known application of AMS is radiocarbon (14C) dating in archaeology and other fields, which can be performed with samples of only 1 mg of carbon. This is at least a thousand times less than what is required for the classical beta-counting method. A basic feature of any AMS facility is the ability to achieve good overall efficiency (the fraction of atoms in the sample actually detected) while at the same time providing utmost selectivity for the ultra-rare nuclide, in order to separate it from a usually overwhelming background. If the analyzing magnets are strong enough, an AMS facility can be tuned to any nuclide of the nuclear chart, even allowing one to venture into unexplored areas. A universal AMS facility such as the Vienna Environmental Research Accelerator (VERA) in Vienna can therefore be used to explore ‘white’ areas of the nuclear landscape, such as the predicted ‘island of stability’ where long-lived Super Heavy Elements (SHEs) may exist (see e.g. the overview of Flerov and Ter-Akopian2). It
may therefore not be unreasonable to look with a very sensitive method such as AMS for minute traces of long-lived SHEs in natural materials. Such measurements were recently performed at the AMS facilities of Munich3 and of Vienna,4 after the startling reports about positive findings of SHEs by the group of Marinov.5–7 In Section 2 of this contribution, these experiments will be described. The above example attempts to find nuclides, which may have been synthesized in stars before our solar system formed some 4.6 billion years ago. In Section 3 14 C AMS measurements are discussed which trace events of the last 50 years. It is well known that atmospheric nuclear weapons testing during the late 1950s and early 1960s doubled the 14 C content in atmospheric CO2 .8 After the Nuclear Test Ban Treaty in 1963, the exchange of CO2 with the biosphere and the ocean led to a rapid decrease of this extra 14 C transferring it to the respective reservoirs. It is interesting to note that the instantaneous labeling of atmospheric CO2 with ‘bomb’ 14 C has wide-reaching implications for studying the dynamics of the CO2 exchange of the atmosphere with the biosphere and the hydrosphere, respectively. The latter is important in studying the uptake of CO2 in the ocean, which is an essential ingredient in understanding the global CO2 cycle and the fate of the anthropogenic CO2 increase in the atmosphere. And this extra 14 C also allows one to study retrospectively the time of formation of new cells in the human body.9
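The gain over beta counting mentioned above follows from simple counting arithmetic (a sketch with round numbers; the modern 14C/C ratio is taken as about 1.2 × 10−12): a 1 mg carbon sample holds tens of millions of 14C atoms, but with a 5730-year half-life only a handful of them decay per day, whereas AMS counts the atoms themselves.

import math

# Decays per day from 1 mg of modern carbon, as seen by a beta counter
sample_g = 1e-3
carbon_atoms = sample_g / 12.0 * 6.022e23     # ~5e19 carbon atoms
c14_atoms = carbon_atoms * 1.2e-12            # assumed modern 14C/C ratio

half_life_s = 5730.0 * 3.156e7                # 14C half-life in seconds
decays_per_s = c14_atoms * math.log(2) / half_life_s

print(c14_atoms)               # ~6e7 atoms of 14C in the sample
print(decays_per_s * 86400.0)  # ~20 decays per day: why counting atoms (AMS)
                               # needs so much less material than counting decays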
2. Search for Superheavy Elements in Nature
Calculating the stability of heavy nuclei has always been a challenge to nuclear theorists. In the 1960s an exciting possibility emerged, when shell-model corrections to the liquid-drop model indicated that there may be a neutron-rich ‘island of stability’ beyond any known nuclide.10–13 Nucleosynthesis calculations indicated that such nuclei may be produced under stellar r-process conditions.14 For the superheavy nucleus with Z = 110 and N = 184 a half-life of 2.5 billion years was calculated.15 A specific AMS search for this isotope was therefore performed in 1980 on a platinum nugget.16 It was assumed that element Z = 110 has similar chemical properties as platinum. No events of a 294 110 isotope were observed, with an abundance limit of 294 110/Pt = 1 × 10−11. Assuming a supernova-produced 294 110/Pt ratio of 0.02 to 0.06,14 one can conclude from the observed abundance limit that the half-life must be less than about 200 million years, provided that element 110 follows platinum throughout the geochemical and geophysical history. Many other searches for SHEs in natural materials were performed.2,17 To this day, no confirmed evidence for long-lived SHEs in nature exists, even though some evidence has been reported for the occurrence of long-lived neutron-deficient isotopes of thorium5 and of roentgenium (Z = 111) in natural gold,6 and of a SHE (Z ∼ 122) nuclide of mass 294 in thorium.7 These measurements were performed with high-resolution ICP-SFMS (Inductively Coupled Sector Field Mass Spectrometry). Since it seems difficult to measure the reported abundance levels in the 10−10 to 10−12 range with ICP-SFMS, these extraordinary
claims certainly need independent verification by other experimental techniques. So far, two searches with AMS were performed: one at the Maier-Leibnitz Laboratory of the LMU and TU Munich,3 and another one at the VERA Lab of the University of Vienna.4 Figure 1 shows a schematic presentation of the AMS facility in Vienna, indicating the setup for measuring ultra-low abundances of long-lived nuclides in thorium materials.4 Compared to ICP-SFMS, which essentially identifies the claimed SHEs only through high-resolution mass measurements, the AMS set-up at VERA utilizes a system which eliminates background by a highly redundant filtering process. This clearly is an advantage for proving the existence of such a rare species once a positive signal is observed. However, it must be pointed out that the complexity of the AMS system requires a very good calibration with known pilot beams (see Fig. 2), in order to be sure that the set-up is sensitive for a possible detection of SHE nuclides. Simply speaking, it is easier to miss rare species
Fig. 1. Schematic presentation of the setup to search for 292 Eka-Th with VERA. The high selectivity to find rare events is indicated by the nine-fold filtering process of the ion beams generated from samples in the Cs-beam sputter source: (1) negative ion production, (2) energy/charge selection, (3) momentum/charge selection (mass sensitive), (4) break-up of molecules in the gas-stripping process, (5) momentum/charge selection, (6) energy/charge selection, (7) momentum/charge selection, (8) time of flight (velocity) measurement, (9) residual energy measurement.
Fig. 2. Residual energy versus time-of-flight spectra measured in searching for the superheavy nuclide EkaTh-292. The left spectrum is recorded when the AMS system is set up for this nuclide only. The right spectrum shows an overlay of spectra, when the system was tuned to different pilot beams for calibration. No events for EkaTh-292 were recorded in the expected window.
with AMS than with ICP-SFMS. On the other hand, it is very difficult to be sure that events observed with ICP-MS are not caused by an unidentified background. Figure 2 shows the events observed in the search for 292 Eka-Th at VERA.4 No events were observed in the region where one should have seen at least 1000 events if SHEs at the abundance level claimed by the Marinov group7 were present.
Table 1. Summary of rare isotope measurements in thorium
Rare isotope    ICP-SFMS(a) (A Th/232 Th)    AMS Munich(b) (A Th/232 Th)    AMS Vienna(c) (A Th/232 Th)
292 Eka-Th      (1 − 10) × 10−12             not measured                   < 4 × 10−15
211 Th          (1 − 10) × 10−11             < 9.6 × 10−13                  < 5 × 10−15
213 Th          (1 − 10) × 10−11             < 1.2 × 10−12                  (7 × 10−16 − 8 × 10−15)(d)
217 Th          (1 − 10) × 10−11             < 6.6 × 10−13                  (1 × 10−16 − 6 × 10−15)(d)
218 Th          (1 − 10) × 10−11             < 2.4 × 10−12                  < 5 × 10−15
(a) 211–218 Th,5 292 Eka-Th7
(b) Ref. 3
(c) Ref. 4
(d) For 213 Th two events, and for 217 Th one event was observed. These are, however, uncertain, see Ref. 4.
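Abundance limits of this kind translate into half-life limits through simple decay arithmetic; the sketch below reproduces the bound quoted above for the 1980 platinum search (illustrative; the initial ratio is the assumed supernova production value).

import math

t_gyr = 4.6             # time since the solar system formed
initial_ratio = 0.02    # assumed supernova-produced 294-110/Pt ratio (0.02-0.06)
observed_limit = 1e-11  # AMS abundance limit on 294-110/Pt

# N/N0 = 2**(-t/T12)  =>  T12 < t * ln(2) / ln(N0/N)
t12_limit_myr = 1000.0 * t_gyr * math.log(2) / math.log(initial_ratio / observed_limit)
print(t12_limit_myr)    # ~150 Myr, consistent with "less than about 200 million years"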
In Table 1 a comparison of the results for ICP-SFMS and AMS measurements performed so far in thorium materials is presented. It is clear that the limits of both AMS measurements are orders of magnitude lower than the observations of the Marinov group. The AMS results therefore exclude the existence of long-lived neutron-deficient thorium isotopes and of a 292 Eka-Th nuclide at the level reported by the ICP-SFMS measurements.5,7 At VERA the search for SHEs is continued with natural materials including platinum and gold nuggets, galenite (PbS), and bismuth ochre (Bi2 O3 ). Here the
Fig. 3. Upper end of the chart of nuclides indicating the region of increased stability (dark shaded background) where superheavy nuclides are expected to be particularly stable.21 The basic figure is adopted from Stoyer,22 with the nuclides studied at VERA indicated by red squares.
assumption is that the eka-elements, Eka-Pt (Ds, Z = 110), Eka-Au (Rg, Z = 111), Eka-Pb (Z = 114), and Eka-Bi (Z = 115) follow the corresponding elements, although due to relativistic effects this may not be necessarily the case.18,19 Of particular interest is gold, since positive results for the observation of long-lived isotopes of roentgenium (261 , 265 Rg) in gold have been reported by ICP-SFMS measurements.6 So far, we have not found any evidence for these nuclides with three to four orders of magnitude higher sensitivities than the reported level of (1 − 10) × 10−10 .6 In addition, extensive searches for SHE nuclides in the vicinity of A ∼ 300 have not yet resulted in any positive evidence for the existence of such nuclides. Figure 4 summarizes the preliminary results of these searches, with abundance limits ranging from 10−14 to 10−16 .20 3. Dating Human DNA with the
14
C Bomb Peak
It is well known that after the Second World War, the superpowers (USA and USSR) started to test ever bigger nuclear bombs (fission and hydrogen fusion) in the atmosphere. This produced a number of radioisotopes, among them also 14 C. This happens because nuclear bomb explosions are accompanied by large neutron fluxes,
November 25, 2010
12:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
10.02˙Kutschera
638
Fig. 4. Historical photo from the White House Treaty Room, where President John F. Kennedy (centre) signed the Nuclear Test Ban Treaty on October 3rd, 1963. On the far right is Vice President Lyndon B. Johnson (Photograph by Robert Knudsen, White House, in the John. F. Kennedy Presidential Library and Museum, Boston).
which convert nitrogen into 14 C through the 14 N(n,p)14 C reaction. Also natural 14 C is produced by the same reaction, with the neutrons coming from spallation reactions of cosmic ray protons on nuclei of the atmosphere.23,24 Atmopsheric nuclear testing continued until 1963, when the Nuclear Test Ban Treaty (NTBT) put an end to atmospheric nuclear weapons testing (Fig. 4). By that time the 14 C content in atmospheric CO2 had increased by 100%, but rapidly decreased thereafter due to the exchange of atmospheric CO2 with the biosphere and the ocean (see Fig. 5). Due to the exchange of atmospheric 14 CO2 with the ocean, the dynamics of this important process can be studied. On the other hand, it is remarkable that every species living in the second half of the 20th century got labeled with some bomb 14 C, since 14 CO2 enters the biosphere through the photosynthesis of plants. In 2005, an interesting paper was published in the journal Cell9 reporting on the use of the 14 C bomb peak to retrospectively determine the birth date of cells in humans. The basic idea is that 14 C in genomic DNA reflects the birth date of cells. The authors state in their paper9 : “Most molecules in a cell are in constant flux, with the unique exception of genomic DNA, which is not exchanged after a cell has gone through its last division. The level of 14 C integrated into genomic DNA should thus reflect the level in the atmosphere at any given time point, and we hypothesized that determination of 14 C
November 25, 2010
12:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
10.02˙Kutschera
639
14C
Bomb Peak
a
Natural deviations from the 14C ref. level
NTBT 1963 14C
decay (t1/2 = 5730 a)
b
14C
reference level
AD Fig. 5. (a) Variation of the 14 C content in atmospheric CO2 during the last 4000 years including the 14 C bomb peak.9 Deviations from the 14 C reference level (14 C/12 C = 1.2 × 10−12 ) are given. (b) Detail of the bomb peak during the last 50 years from measurements of 14 C in atmospheric CO2 in the northern and in the southern hemisphere.8 The decrease of 14 C due to radioactive decay is negligible (red dashed line) compared to the effect of CO2 exchange with the biosphere and the ocean.
levels in genomic DNA could be used to retrospectively establish the birth date of cells in the human body.” The task of actually measuring 14 C/12 C ratios in human DNA can be assessed from Table 2 which summarizes its basic constituents. Even though the DNA molecule is very large containing about 1011 carbon atoms, one needs DNA from 10 cells to obtain one 14 C atom. With DNA extracted from 15 million cells, one obtains 1.5 million 14 C atoms and 36 µg of carbon. Since about 2% of the 14 C atoms can be counted with AMS, there is enough signal to determine the 14 C/12 C ratio. However the small amount of total carbon available for these measurements is still a challenge to the AMS technique. In recent years, progress in sample preparation
November 25, 2010
12:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
10.02˙Kutschera
640 Table 2.
Composition of the DNA molecule and its
Basic composition of DNA: Chemical sum formula per bp: Molecular weight:. Mass of DNA per cell: Mass of carbon (40 wt% C): Total length of DNA per cell: C atoms of DNA per cell: 14 C/12 C: DNA of 10 cells: 15 million cells: C from DNA of 15 million cells: Tote 14 C AMS detection efficiency:
14 C
content (a physicist’s view)
Macromolecule with 3 × 109 base pairs (bp) C20 H23 N7 O13 P2 and C19 H22 N8 O13 P2 ∼630 Daltons (Da) per base pair, total ∼ 1.9 × 1012 Da 2 DNA per cell = 2 × 3 pg = 6 pg 2.4 pg 2 × 3 × 109 × (0.34 nm, distance between bp) = 2 m 2 × 3 × 109 × (20 C) = 1.2 × 1011 C atoms 1.2 × 10−12 ∼ 1 14 C atom 1.5 million 14 C atoms ∼ 36 µg C ∼ 2% →∼ 30, 000 14 C atoms detected
Birth
Fig. 6. The 14 C content of different human cells indicates the time after birth when the respective cells were formed.9 The vertical line shows the birth date of the individual.
and background reduction allows one to reliably measure 14 C in samples down to 10 µg carbon.25 An example of the different times when human cells are formed after a person’s birth is shown in Fig. 6. It is obvious from the figure that the 14 C content of the DNA from brain cells point to a time much closer to the birthdate than cells from the intestine, which are known to be frequently renewed. As for the brain itself,
November 25, 2010
12:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
10.02˙Kutschera
641
birth
birth
birth
birth
Fig. 7. Separation of neurons from non-neuronal cells show the different times after birth where they were formed.9 Two individuals (A, B) were born after the bomb peak, and two (C, D) were born before the bomb peak.
one can see that cells of the cerebellum are generated closer to the person’s birth date than the ones of the cortex. In Fig. 7 the brain of four different individuals were investigated, two of them born after the bomb peak (A and B), and two before the bomb peak. Differentiation into neuronal and non-neuronal cells shows that the cortical neurons are closest to the birth, whereas non-neuronal cells of the cortex point to a substantial production at a later stage in life. The method has been applied by the Stockholm group also for other questions such as the neurogenesis of the human neocortex,26 turnover of fat cells in humans,27 and the renewal of heart cells (cardiomyocites) in humans.28 4. Conclusion AMS is capable of tracing long-lived radionuclides at ultra-low levels in almost any domain on Earth. This allows one to study natural and anthropogenic radionuclides at unprecedented low levels, literally ‘atom by atom’. The two examples discussed here present two extreme applications of AMS. On the one hand, a search for longlived superheavy elements in nature did not confirm the positive reports on their existence. On the other hand, the 14 C bomb peak — once considered only a menace
November 25, 2010
12:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
10.02˙Kutschera
642
to man — turned out to be a valuable tool in studying one of the most interesting questions in molecular biology: When do cells in the human body form, and what is their renewal rate. AMS thus demonstrates its analytic power of studying questions about the beginning of our Solar System as well as of the intricacies of cell formation in the human body. Acknowledgment The continuing collaboration with the group of Kirsty Spalding at the Department of Molecular Biology of the Karolinska Institute in Stockholm on the 14 C bomb peak dating of human DNA is greatfully acknowledged. We also thank Robin Golser for discussions on the manuscript. References 1. W. Kutschera, Progress in isotope analysis at ultra-trace level by AMS, Int. J. Mass Spectrom. 242 (2005) 145. 2. G. N. Flerov, G. M. Ter-Akopian, Superheavy nuclei, Rep. Prog. Phys. 46 (1983) 817. 3. L. Lachner, I. Dillmann, T. Faestermann, G. Korschinek, M. Poutivtsev, G. Rugel, Search for long-lived isomeric states in neutron-deficient thorium isotopes, Phys. Rev. C 78 (2008) 064313. 4. F. Dellinger, O. Forstner, R. Golser, W. Kutschera, A. Priller, P. Steier, A. Wallner, G. Winkler, Search for a superheavy nuclide with A = 292 and neutron-deficient thorium isotopes in natural thorianite, Nucl. Instr. and Meth. B 268 (2010) 1287. 5. A. Marinov, I. Rodushkin, Y. Kashiv, L. Halicz , I. Segal, A. Pape, R. V. Gentry, H. W. Miller, D. Kolb, R. Brandt, Existence of long-lived isomeric states in naturallyoccurring neutron-deficient Th isotopes, Phys. Rev. C 76 (2007) 021303 (R). 6. A. Marinov, I. Rodushkin, A. Pape, Y. Kashiv, D. Kolb, R. Brandt, R. V. Gentry, H. W. Miller, L. Halicz, I. Segal, Existence of long-lived isotopes of a superheavy element in natural Au, Int. J. Mod. Phys. E 18 (2009) 621. 7. A. Marinov, I. Rodushkin, D. Kolb, A. Pape, Y. Kashiv, R. Brandt, R. V. Gentry, H. W. Miller, Evidence for the possible existence of a long-lived superheavy nucleus with atomic mass number A = 292 and atomic number Z × 122 in natural Th, Int. J. Mod. Phys. E 19 (2010) 131. 8. I. Levin, V. Hesshaimer, Radiocarbon – a unique tracer of global carbon cycle dynamics, Radiocarbon 42/1 (2000) 69. 9. K. L. Spalding, R. D. Bhardwaj, B. A. Buchholz, H. Druid, J. Fris´ n, Restrospective birth dating of cells in humans, Cell 122 (2005) 133. 10. W. D. Myers, W. J. Swiatecki, Nuclear masses and deformations, Nucl. Phys. 81 (1966) 1. 11. S. G. Nilsson et al., On the spontaneous fission of nuclei with Z near 114 and N near 184, Nucl. Phys. A 115 (1968) 545. 12. V. M. Strutinsky, Shells in deformed nuclei, Nucl. Phys. A 122 (1968) 1. 13. S. G. Nilsson et al., On the nuclear structure and stability of heavy and superheavy elements, Nucl. Phys. A 131 (1969) 1. 14. D. N. Schramm, W. A. Fowler, Synthesis of superheavy elements in the r-process, Nature 231 (1971) 103. 15. E. O. Fiset, J. R. Nix, Calculation of half-lives for superheavy nuclei, Nucl. Phys. A 193 (1972) 674.
November 25, 2010
12:23
WSPC - Proceedings Trim Size: 9.75in x 6.5in
10.02˙Kutschera
643
16. W. Stephens, J. Klein, R. Zurmhle, Search for naturally occurring superheavy element Z = 110, A = 294, Phys. Rev. C 21 (1980) 1664. 17. G. Herrmann, Superheavy-element research, Nature 280 (1979) 543. 18. E. Eliav, U. Kaldor, P. Schwerdtfeger, B. A. Hess, Y. Ishakawa, Ground state configuration of element 111, Phys. Rev. Lett. 73 (1994) 3203. 19. R. Eichler et al., Chemical characterization of element 112, Nature 447 (2007) 72. 20. F. Dellinger, PhD thesis, University of Vienna, in preparation. 21. A. Sobiczewski, K. Pomorski, Description of structure and properties of superheavy nuclei, Prog. Part. Nucl. Phys. 58 (2007) 292. 22. M. A. Stoyer, Island ahoy!, Nature 442 (2006) 876. 23. W. F. Libby, Atmospheric helium three and radiocarbon from cosmic radiation, Phys. Rev. 68 (1946) 671. 24. E. C. Anderson, W. F. Libby, S. Weinhouse, A. F. Reid, A. D. Kirshenbaum, A. V. Grosse, Natural radiocarbon from cosmic radiation, Phys. Rev. 72 (1947) 931. 25. J. Liebl, P. Steier, R. Avalos Ortiz, R. Golser, F. Handle, W. Kutschera, P. Steier, E. M. Wild, Studies on the preparation of small 14 C samples with an RGA and 13 C enriched material, Radiocarbon 52(2-3) (2010), in print. 26. R. D. Bhardwaj et al., Neocortical neurogenesis in humans is restricted to development, Proc. Nat. Acad. Sci. 103 (2006) 12564. 27. K. L. Spalding et al., Dynamics of fat cell turnover in humans, Nature 453 (2008) 783. 28. O. Bergman et al., Evidence for cardiomyocyte renewal in humans, Science 324 (2009) 98.
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 11, 2010
19:2
WSPC - Proceedings Trim Size: 9.75in x 6.5in
PART XI
Neutron Beta Decay
divided
December 22, 2010
14:24
WSPC - Proceedings Trim Size: 9.75in x 6.5in
divided
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
647
THE CRUCIAL ROLE OF NEUTRON β-DECAY EXPERIMENTS IN ESTABLISHING THE FUNDAMENTAL SYMMETRIES OF THE (V-A) DESCRIPTION OF WEAK INTERACTIONS J. BYRNE Department of Physics and Astronomy, University of Sussex, Falmer, Brighton, East Sussex, BN1 9QH, UK [email protected] Experimental data from unpolarized and polarized neutron beta -decay yield accurate values for the basic parameters of the P-violating T-conserving charged current weak interaction, thereby posing a potentially stringent unitarity test of the CKM quark mixing matrix. Experimental studies of the radiative (BR ∼3.10−3 ) and two-body (BR ∼ 4.10−6 ) decay branches are currently in progress.
1. Introduction 1.1. Why the neutron (a) Neutrons are produced in very large numbers with average fluxes up to ∼ 2× 109 cm−2 sec−1 . (b) Neutrons come in a wide range of energy from thermal (∼ 0.025 eV), through cold (< 5.10−5eV) to ultra-cold (< 2.10−7 eV). (c) Neutrons have magnetic moments allowing the production of∼100% longitudinal or transverse polarization. (d)The neutron decays weakly into an electron, proton and anti-neutrino, n → p + e− + ν¯e with a lifetime τn of approximately 15 minutes.This process has recently been the subject of two major reviews 1−2 which describe in detail how the basic parameters characterising nuclear β-decay can be determined to high accuracy, from the measured value of the lifetime, and from a number of angular and angular-polarization correlations defined with respect to the orientation of the neutron spin. (e)There are no nuclear structure effects to complicate the analysis of neutron decay phenomenology. 1.2. Kinematic parameters The main kinematic parameters governing neutron β-decay are:
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
648
(a) Σ = (mn +mp )c2 = 1877.83704 MeV; (b) ∆ = (mn -mp )c2 = 1.29332 MeV; (c) kinetic energy of electrons Te ≤ 0.783 MeV; (d) kinetic energy of protons Tp ≤ 751 eV. Because the recoil parameter δ = ∆/Σ < 10−3 is so small the momentum transfer dependence of all weak form factors may be neglected
1.3. Weak interactions in nuclei 1.3.1. Nuclear β-decay and the conserved vector current Within the standard model 1 the charged weak current is constructed from the purely left handed (V-A) admixture of polar vector (V) and axial vector (A) currents of quarks and leptons. In the context of nuclear physics vector interactions give rise to allowed Fermi β-transitions with coupling constant gV and spin-parity selection rules ∆I = 0,
no parity change
Axial interactions give rise to Gamow-Teller β-transitions with coupling constant gA and spin-parity selection rule: ∆I = 0, ±1,
no 0 → 0,
no parity change
Neglecting the strong interactions, the bare nucleonic electromagnetic current has the form Jµ (x)=ψ¯N γµ 12 (1 + τ3 )ψN , where τ is the isospin operator. This subdivides into an isoscalar term 21 ψ¯N γµ ψN which is conserved in the presence of the strong interactions because of baryon conservation, plus an isovector term ψ¯N 21 γµ ψN which is also conserved taking into account the anomalous magnetic moments of proton and neutron. Since the bare electromagnetic and weak vector currents are evidently members of the same isotriplet, the conserved vector current (CVC) hypothesis proposed that this property remains true in the presence of the strong interactions. Thus, to the extent that isospin is a good symmetry of the strong interactions,Vµ (x) is conserved. The CVC hypothesis is now an integral part of the standard model.
1.3.2. Superallowed β-decays within isospin multiplets An important sub-set of such decays are the pure Fermi decays with ∆I = 0 whose importance stems from the fact that only the vector current Vµ (x) contributes, and this current is conserved, neglecting (a) multiplet mass splittings 3 contributing corrections at the level of (δm/m)2 , and (b) Coulomb and radiative corrections at the level of a few per cent. The principal consequence is that in these hypercharge conserving (∆Y = 0) nuclear decays the vector coupling constant gV is given by
February 24, 2011
14:30
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11
649
gV = GF Vud , where GF /(~c)3 = 1.16637±0.00001 ×10−5 GeV−2 is the universal Fermi coupling constant determined from the lifetime of the muon combined with appropriate radiative corrections. Vud is the largest matrix element in the unitary CKM matrix 1 which rotates the quark mass eigenstates (d, s, b) into the weak ′ ′ ′ eigenstates (d , s ,b ) ′ d Vud Vus Vub d s′ = Vcd Vcs Vcb s b′ Vtd Vts Vtb b In relation to the question of the unitarity of the CKM matrix, the pure Fermi 0+ → 0+ superallowed β-transitions, e.g. 14 O→14 N, are of the greatest importance because according to CVC, their comparative half-lives or ft-values should be equal allowing for all symmetry-breaking corrections and experimental error. The QEC values for some twenty of these decays have been measured, and the weighted average ft-value of the 13 most precisely determined decays yields the value = 3072.08±0.79 sec. Combining the resultant value of |Vud | = 0.97425±0.00022, with the values |Vus | = 0.22534 ± 0.0.00093 derived from Ke2 decays and |Vub | = (3.93±0.35)×10−3,derived from b-quark decays gives the result |Vud |2 +|Vus |2 +|Vub |2 = 0.9999±0.00061 4 which is clearly consistent with unitarity of the CKM matrix. 1.3.3. Neutron decay However, by far the most important of the superallowed decays is neutron decay which is an I=1/2 → I=1/2 β-transition within an isospin doublet. The masssplitting correction is of order δ 2 ∼ 10−6 . The radiative corrections are at the level of 1% and have been intensively studied over decades. 5 Furthermore, since neutron decay is allowed by both Fermi and Gamow-Teller selection rules, the availability of polarized neutron beams permits easy observation of parity-violating phenomena associated with polar vector- axial vector interference. The most general form of the hadronic vector matrix element appropriate to neutron decay may be written in the form 1−2 < p|Vµ |n >=< u ¯p |f1 γµ − if2
~ ~ σµν qν + f3 qµ |un >, mc mc
where the invariant form factors f1 , f2 , f3 refer to vector ( f1 ), induced weak magnetism (f2 ) and induced scalar (f3 ) interactions. CVC theory alone rules out the induced scalar form factor since C-invariance of the strong interactions means that there is no equivalent term in the electromagnetic current. It is also ruled out on the grounds of being ‘second class’ since it transforms under the G-parity operation with the wrong sign as compared with the bare vector current, where G-parity is a symmetry of the strong interactions 6 .Thus the weak magnetism term alone survives as a correction of recoil order to the bare vector current. Therefore CVC
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
650
theory requires that: f1 = 1,
f2 = (κp − κn ),
f3 = 0
where κp and κn are the anomalous magnetic moments of proton and neutron expressed in units of the nuclear magneton. Similarly the most general form of the hadronic axial matrix element is1−2 ~ ~ σµν qν γ5 + g3 qµ γ5 |un > mc mc where the induced tensor term < p | − ig2 σµν qν γ5 |n >( 0 weak electricity0 ) is also ruled out as being second class. The induced pseudo-scalar term < p |g3 qµ γ5 |n > requires a change in parity and therefore cannot contribute to allowed decay although it does contribute to the weak capture of muons on protons. However various chiral QCD-theoretic results exist, e.g. the Goldberger-Treiman relation |λ| = fπ gπN N /mc2 which relate |λ| to g3 , to the pion decay constant fπ and to the pion coupling constant gπN N .6 The axial current Aµ (x) cannot be conserved, because otherwise the decay of pseudo-scalar pions into leptons would be forbidden. It follows that the ratio < p|Aµ |n >=< u ¯p |g1 γµ γ5 − ig2
λ = |λ| exp(iϕ) = g1 /f1 = gA /gV is a function of the strong interactions and must be determined experimentally. Results obtained with polarized neutrons show that the phase angle φ = 0, corresponding to the left-handed combination (V-A) in the weak interaction. Further, since under time reversal f1 → f1 ∗ and g1 → g1 ∗, it follows that, if λ is complex, then time reversal invariance is violated. 1.3.4. The neutron lifetime Within the standard model the comparative half-life of the neutron is given by fR t = (2π 3 ln(2)~7 /m5e c4 )/[G2F |Vud |2 /(1 + 3|λ|2 )] where t = τn ln(2), fR = f(1+δR ) =1.71489±0.00002 is the Coulomb corrected integrated Fermi phase space factor 2 , and δR is a term of order 1-2% which takes account of the outer radiative corrections 5 . It follows that, to determine the fundamental quantities |Vud |2 and |λ|2 , together with the sign of λ, by a route which is free of nuclear structure effects, the neutron lifetime τn must be measured accurately together with at least one other parameter determined from neutron decay. The process of neutron decay is also of direct interest in the field of solar astrophysics 7 since the β-decay of the free neutron is essentially the inverse of the weak interaction which initiates the proton-proton cycle of thermonuclear reactions in the sun: p+p→
2
H + e+ + νe .
However, since the deuteron 2 H can exist only in the triplet state and the protons are restricted by the Pauli principle to scatter only in the singlet state, it follows
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
651
that the leptons must carry off one unit of angular momentum. Thus only the axial current contributes to the weak interaction and the rate of the fusion process is proportional to |λ|2 . The availability of a precise value for the neutron lifetime itself is also of central importance in big-bang cosmology since it determines the rate at which hydrogen is converted into helium in the early Universe 8 . Adopting the current world average value, τn = 885 ± 0.8 sec .23 results in a theoretical relative helium abundance of about 25% in the present day Universe in very good agreement with observation. 1.4. Measuring the neutron lifetime 1.4.1. Measurement strategies Assuming a thermal flux of 2×109 neutrons cm−2 sec−1 one may estimate a count rate of ∼10 decays sec−1 per cm3 of neutron beam which, without the application of special methods, will be unobservable against the high γ−background present at all neutron sources.. In the 60 years which have elapsed since Robson first determined the neutron lifetime at Chalk River to an accuracy of ∼ 20% , two quite distinct techniques have emerged. These are (a) beam methods which involve measuring the number of decay electrons or protons emerging from a neutron beam, and of the many variations on this technique which have been tested the Penning trap method 9−10 has proved the most successful; and (b) bottle methods which rely on measuring the decay rate of ultra-cold neutrons trapped in material or magnetic bottles. In relation to the question of systematic error these two techniques are quite different in that the former records the number of neutron decays which occurred in a given time while the latter records the number of neutrons which survive after a given time. 1.4.2. The Penning trap method This method is based on an application of the differential relation dn(t) n(t) = , dt τn where protons from neutron decay with energy < 0.8 keV are stored in a Penning trap for periods of order 2-10 msec. On release from the trap these are accelerated to about 30 keV and counted for ∼100 µsec in a silicon surface barrier detector. This ratio of storage to counting time brings about a suppression of background by a factor of 100 or more The Penning trap, which is illustrated in Fig. 1 10 , is constructed from a 5T uniform axial magnetic field superimposed on an electrostatic quasi-square well potential of depth ∼1kV End effects are eliminated by varying the trap length using a system of segmented electrodes. The magnetic field at the exit point is bent through a 9o angle so that the proton detector sits outside the neutron beam.
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
652
Fig. 1.
The Penning trap used to measure the neutron lifetime.
10 .
The number of incident neutrons is determined by recording (n,α) reactions in an assayed target of a material which has a 1/v capture cross-section, e.g. 10 B(n, α)7 Li or 6 Li(n, α)3 He which provides a measure of the mean neutron density in the beam independent of the distribution in energy. This method of counting the number of decaying neutrons is the main weakness in the technique since the capture cross sections must be known to an accuracy better than 0.25%. 1.4.3. Bottle methods Bottle methods rely on the storage of ultra-cold neutrons over a period of time to determine the number of survivors at time t. Two types of storage bottles have been tested over the years. These are (a) magnetic bottles 11−12 and (b) material bottles 13−16 . Magnetic bottles are based on the force F = -µn .∇|B(r)| which is exerted on the neutron magnetic moment in an inhomogenous magnetic field B(r). Since the force acts to trap one sign of the spin only, it follows that spin-flipping of the trapped neutron could be interpreted as a β-decay. To date this technique, which has turned out to be exceptionally difficult to implement, has failed to prove competitive. Material bottles, sometimes combined with gravity as shown in Fig.2, rely on the fact that a material with positive scattering length (e.g. quartz) the Fermi pseudo-potential generates a force which is repulsive for neutrons of energy < 2.×10−7 eV. The method applies the integral form of the law of exponential decay N (t) = N (0) exp(−λn t − λw t) where λn =τn−1 and λw is an experimental number which takes account of neutron losses at the bottle walls. This technique has been gradually refined by (a) altering
February 24, 2011
14:30
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11
653
Fig. 2.
The gravitational bottle used to measure the neutron lifetime
16 .
Table 1. Published values of the neutron lifetime. The Particle Data Group (2008)23 recommends the value τn = 885 ± 0.8 sec , excluding the 2008 bottle measurement 16 , which has yet to be confirmed. Method Beam Penning Trap Penning Trap
Date 1988 1996 2005
UCN UCN UCN UCN
1989 1993 2000 2008
Bottle Bottle Bottle Bottle
Neutron Counter 197 Au(n, γ)198 Au 10 B(n, α)7 Li 6 Li(n, α)3 He
τn (sec.) 891±9 889.2±4.8. 886.3±3.4
References 17 18-19 20-21
3 He(n, p)3 H
887.6±3 882.6±3.3 885.0±1.0 878.5±0.78
13 14 15 16
3 He(n, p)3 H 3 He(n, p)3 H 3 He(n, p)3 H
the collision rate by varying the trapping volume; (b) scaling storage times in proportion to the mean free path; (c) coating the surfaces with hydrogen-free fomblin oil13−16 to suppress the loss rate by n-p capture on the walls and (d) counting the number of neutrons escaping from the trap by upscattering at the walls 15−16 . Table 1. Published values of the neutron lifetime. The Particle Data Group (2008)23 recommends the value τn = 885 ± 0.8 sec , excluding the 2008 bottle measurement 16 , which has yet to be confirmed.
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
654
1.5. Measuring the coupling constant ratio λ 1.5.1. Decay of a polarized neutron To determine the sign λ it is necessary to observe a pseudo-scalar interference effect between vector and axial vector matrix elements. This requires measuring a neutron spin - particle momentum correlation coefficient using polarized neutrons. Once the sign of λ is established, there are various routes to measuring its magnitude |λ |, and the most accurate of these rely on making a measurement of the anomaly (|λ | -1). The decay rate of a polarized neutron is given by the famous Jackson-TreimanWyld formula 2 pe × p ν bme + + Ee Eν Ee pν pe × p ν σe × p e pe +B +D +R + ......}] + < σn > ·{A Ee Eν Ee Eν Ee where σn and σe represent the spins of neutron and electron respectively.The Fierz interference coefficient b vanishes in the absence of scalar or tensor couplings but is sensitive to the coupling of a right-handed lepton to either handedness of quark The parity-violating angular-polarization coefficients A and B, and the parity-conserving angular correlation a, are given by dW (σn , pe , pν ) = F (Ee )dΩe dΩν [1 + a
A=
−2[|λ|2 + Re(λ)] , 1 + 3|λ|2
B=
2[|λ|2 − Re(λ)] 1 + 3|λ|2
a=
(1 − |λ|2 ) . 1 + 3|λ|2
The Particle Data Group 23 recommends the value A= -0.1173±0.0013, which is derived in the main from a measurement programme carried out continuously for over 20 years by the Heidelberg group at the ILL Grenoble using the spectrometer PERKEO and its successors 22 . This value of A corresponds to a value λ = −1.2695 ± 0.0029. However, whether such a result is consistent with the unitarity of the CKM matrix may be judged from Fig.3, since it depends to some degree on a resolution of the neutron lifetime problem. A first measurement of A using ultra-cold neutrons arrived at the value A = - 0.1138±0.005124. The B-coefficient is insensitive to the precise value of λ but is sensitive to the presence of intermediate vector bosons which couple to right-handed currents, and measurements of B set a limit of 284.3 GeV/c2 to the mass of such a boson 25 . The Particle Data Group recommends the value B = 0.9807±0.0030. However, instead of measuring A or B one could measure the correlation C< σn >pp , which , assuming the full range of proton energies is recorded and no lepton is detected, satisfies the conditions of Weinberg’s interference theorem 26 . Thus the correlation derives from the interference terms only and has the form C = 0.27484
4λ 1 + 3|λ|2
where the numerical coefficient comes from a sum over proton energies and an average over electron energies including Coulomb, recoil order and radiative corrections 2.
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
655
Fig. 3.
Plots of g
A
vs gV from neutron decay relating to CKM unitarity 2 .
The single available measurement gave the result C=-0.2377±0.0026. 27 The remaining two terms in the transition rate of a polarized neutron with coefficients D (P-conserving) and R (which contains P-conserving and P-violating contributions) are T-violating correlations which vanish in the standard model. The D-coefficient 2Im(λ) D = 2Im(λ) 1 + 3|λ|2 has recently been determined to high accuracy with the results D = (-0.6 ±1.3) × 10−3 ,28 and D = (-0.2.8±7.1)×10−4,29 both consistent with time reversal invariance. It should however be pointed out that the coefficient D is of second order in the T-violating phase in the CKM matrix. The R-coefficient is particularly sensitive to the presence of right-handed leptons and has recently been measured to have the value R=0.008±0.016 30 .
1.5.2. Decay of an unpolarized neutron One of the most serious problems encountered on working with polarized neutrons is making an accurate assessment of the degree of polarization which can vary both in energy and position. Thus the electron-antineutrino angular correlation coefficient a provides another route to |λ| which does not require polarized neutrons, but
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
656
Fig. 4. Momentum diagram illustrating the principle of an e-p delayed coincidence method for measuring the e-¯ ν angular correlation coefficient 35 .
which is hampered by the necessity for detecting the proton. However the proton spectrum itself contains a term proportional to a .The spectrum has been measured several times 31−33 giving a mean value a = −0.103 ± 0.004 23 , a result which is not competitive with the coefficient A in arriving at a value for |λ|. However the proton spectrum has recently been the subject of a new measurement 34 using the proton spectrometer aSPECT with the aim of improving the accuracy by a whole order of magnitude. ACORN is a competing project currently under way with similar ambitions but which is based on proton-counting in delayed coincidence with electrons. 35 The principle of the method is illustrated in the momentum diagram in Fig. 4. In this method the two solid angles corresponding to pe .pν > 0 and pe .pν < 0 are set equal by limiting the range of accepted transverse components of the proton momentum in an axial magnetic field. The longitudinal proton momenta for both angles are recorded in coincidence with the corresponding electron by reflecting those protons for which pe .pν < 0 in an electrostatic mirror. For an electron kinetic energy >250 keV the time delay incurred is detected by time of flight spectroscopy thereby separating the two groups of protons leading directly to a value of a. 1.6. Rare decay branches of the neutron 1.6.1. Radiative neutron decay In addition to its familiar three-body decay process, the neutron also undergoes two further decay processes, of which radiative neutron decay 36−37 n → p + e− + ν¯e + γ
BR ∼ 10−3
is included as a component in Sirlin’s universal outer radiative correction formula 36 . Although the photon can in principle be emitted by the electron or the proton
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
657
Fig. 5.
An e− - p-γ triple coincidence radiative neutron decay event
37−38 .
or directly from the weak vertex, however the imposition of full gauge invariance on a calculation based on heavy baryon chiral perturbation theory 38 shows that only photon emission by the electron makes a significant contribution. Radiative neutron decay has recently been detected 39−40 , by observing triple coincidences as shown in Fig 5, in an apparatus similar to that shown in Fig. 1. The branching ratio was measured equal to (3.09±0.32) × 10−3 over an energy range 15 keV to 340 keV, and an energy spectrum consistent with theory.
1.6.2. Two-body neutron decay The extremely rare decay of a neutron into an antineutrino and a hydrogen atom n → H + ν˜e
BR ∼ 4 × 10−6
is highly sensitive to small admixtures of scalar and tensor currents but has yet to be observed. In this case conservation of momentum requires that pH +pν¯ = 0, the antineutrino and the hydrogen atom therefore have the unique energies Eν¯ = 783 keV , and EH = 326.5 eV . Only the S-states of the hydrogen atom are populated since only these states have a non-zero position probability at the proton where
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
658
the electron is created. However the 2S state is metastable with a lifetime of 0.143 sec. for decay by 2γ emission to the ground state whereas all other S-states have lifetimes in a range < 3×10−7 sec. Thus only the 2S state, which accounts for 10.54% of the population , survives. Due to the hyperfine coupling between electron and proton the 2S splits into a singlet 1 S0 level |f=0, m=0> , and a three fold degenerate 3 S1 level | f=1, m=0,±1. These are separated in frequency units by an amount νh = 177.14 MHz,which corresponds to a magnetic field B=0.000634 T. When the magnetic field is increased by a factor of ∼ 100 f is no longer a good quantum number, and the hyperfine states go over into the Paschen-Back states I| 12 , 21 >, II| − 21 , 21 >, III | 21 , − 21 > and IV |- 21 , − 21 > with good quantum numbers me and mp , and relative populations WI = 0.62%, WII = 55.24%, WIII = 44.14% and WIV =0 respectively. This is an exceptionally difficult experiment to perform but nevertheless it is being attempted at Munich 41−42 with neutrons in a throughgoing reactor beam tube decaying into hydrogen atoms at a rate of about 3 sec −1 . These are state-selected in a Lamb-shift spin filter, ionized by a pair of CW lasers λ1 (2S→10P) and λ2 (10P→ 27D) into protons which are subsequently detected. The confirmed existence of a finite population in state IV would provide unambiguous proof of the presence of a righthanded neutrino. Acknowledgments It is a pleasure to acknowledge the benefit of many illuminating conversations on these topics with Tim Chupp, Robert Cooper, Scott Dewey, Gertrud Konrad, Jeff Nico, Wolfgang Schott, Nathal Severijns, Fred Wietfeldt, Boris Yerozolimsky and Oliver Zimmer. References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17.
N. Severijns at al, Rev. Mod. Phys. 78 (2006) 991. J.S. Nico, J. Phys. G 36 (2009) 104001. R.E. Behrends and A. Sirlin, Phys. Rev. Lett. 4 (1960) 1865. J.C. Hardy and I.S. Towner, Phys. Rev. C 79 (2009) 05502. A. Czamecki, W. Marciano and A. Sirlin, Phys. Rev. D 70 (2004) 093006. H. Pagels, Physics Reports (Physics Letters C) 16 (1975) 219. J.N Bahcall and R.M. May, Astrophys. J 155 (1969) 50110. R.J. Tayler, Rep. Prog. Phys. 43 (1980) 253. J. Byrne et al, in The Investigation of Fundamental Interactions with Cold Neutrons, ed. G.L. Greene (NBS Special Publication 711) (1986) page 48. J. Byrne, in Trapped Charged Particles and Related Fundamental Physics, eds. I. Bergstr¨ om et al, (World Scientific, Singapore 1995) page 311. W. Paul et al., Z. Phys. C 45 (1989) 29. P.R. Huffman et al, Nature 403 (2001) 62. W. Mampe et al, Phys. Rev. Lett. 63 (1989) 593. W. Mampe et al., JETP Lett. 57 (1993) 82. S. Arzumanov et al, Phys. Lett. B 483 (2000) 15. A. Serebrov et al, Phys. Rev. C 78 (2008) 035505. P.E. Spivak, Sov. Phys. JETP 94 (1988) 1.
November 25, 2010
13:40
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.01˙Byrne
659
18. 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33. 34. 35. 36. 37. 38. 39. 40. 41. 42.
J. Byrne et al, Phys. Rev. Lett. 65 (1990) 289-292. J. Byrne et al, Europhys Lett. 33 (1996) 187. M.S. Dewey et al, Phys. Rev. Lett. 91 (2003) 152302. J.S. Nico et al, Phys. Rev. C 71 (2005) 055502. H. Abele et al, Phys. Rev. Lett. 88 (2002) 211802. C. Amsler et al, Particle Data Group, Phys. Lett. B 667 (2008) 1. R.W. Pattie et al, Phys. Rev. Lett. 102 (2009 ) 012301. A.P. Serebrov et al, JETP 86 (1998) 1074. S. Weinberg, Phys. Rev. 115 (1959) 4816. M. Schumann et al, Phys. Rev. Lett. 100 (2008) 151801. L.J. Lising et al, Phys. Rev. C 62 (2000) 055501. T. Soldner et al, Phys. Lett. B 581 (2004) 49. A. Kozela et al, Phys. Rev. Lett. 102 (2009) 17230. V. Grigoryev et al, Sov. J. Nucl. Phys. 6 (1967) 329. C. Stratowa et al, Phys. Rev. D 18 (1978) 397010. J. Byrne et al, J. Phys. G 28 (2002) 1325. S. Baessler et al, Eur. Phys. J A 38 (2008) 17 and G. Konrad in these proceedings. F.E. Wietfeldt et al, Nucl. Instr. and Meth. A 545 (2005) 181. Y.V. Gaponev and R.U. Khafizov, Phys. Lett. B 379 (1996) 7. M. Beck et al, JETP Lett. 76 (2002) 332. V. Bernard et al, Phys. Lett. B 593 (2004) 105; 599 (2004) 348. J. Nico et al, Nature 444, (2006) 1059. R.L. Cooper et al, Phys.Rev. C 81 (2010) 035503. W. Schott et al, Eur. Phys. J. A 30 (2006) 603. W. Schott et al, Contribution to the Workshop on ”Neutron, Neutrino, Nuclear, Muon and Medical Physics at ESS”, 2-4 December 2009, Lund, Sweden http : //ess − scandinavia.eu/3n2mp/ (2009).
November 25, 2010
14:4
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.02˙Konrad
660
IMPACT OF NEUTRON DECAY EXPERIMENTS ON NON-STANDARD MODEL PHYSICS G. KONRAD∗ AND W. HEIL Institut f¨ ur Physik, Universit¨ at Mainz, Staudingerweg 7 55099 Mainz, Germany ∗ E-mail: [email protected] www.quantum.physik.uni-mainz.de ˇ ´ S. BAEßLER∗ AND D. POCANI C Department of Physics, University of Virginia Charlottesville, VA 22904, USA ∗ E-mail: [email protected] ¨ F. GLUCK IEKP, KIT, 76131 Karlsruhe, Germany KFKI, RMKI, H-1525 Budapest 114, Hungary E-mail: [email protected] This paper gives a brief overview of the present and expected future limits on physics beyond the Standard Model (SM) from neutron beta decay, which is described by two parameters only within the SM. Since more than two observables are accessible, the problem is over-determined. Thus, precise measurements of correlations in neutron decay can be used to study the SM as well to search for evidence of possible extensions to it. Of particular interest in this context are the search for right-handed currents or for scalar and tensor interactions. Precision measurements of neutron decay observables address important open questions of particle physics and cosmology, and are generally complementary to direct searches for new physics beyond the SM in high-energy physics. Free neutron decay is therefore a very active field, with a number of new measurements underway worldwide. We present the impact of recent developments. Keywords: Standard Model; Scalar and tensor interactions; Right-handed currents; Neutron beta decay; Neutrino mass; Neutrinoless double beta decay
1. Introduction Neutron decay, n → peν e , is the simplest nuclear beta decay, well described as a purely left-handed, V −A interaction within the framework of the Standard Model of elementary particles and fields. Thanks to its highly precise theoretical description, 1 neutron beta decay data can lead to limits on certain extensions to the SM. Neutron decay experiments provide one of the most sensitive means for determining the weak vector (LV GF Vud ) and axial-vector (LA GF Vud ) coupling constants,
November 25, 2010
14:4
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.02˙Konrad
661
and the element Vud of the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix. Extracted Vud , along with Vus and Vub from K-meson and B-meson decays, respectively, test the unitarity of the CKM matrix. GF is the Fermi weak coupling constant, evaluated from the muon lifetime.2 The value of LV is important for testing the conserved vector current (CVC) hypothesis. The size of the weak coupling constants is important for applications in cosmology (e.g., primordial nucleosynthesis), astronomy (e.g., solar cycle, neutron star formation), and particle physics (e.g., neutrino detectors, neutrino scattering).3–5 In the SM, the CVC hypothesis requires LV = 1 for zero momentum transfer. Therefore, neutron beta decay is described by two parameters only, λ = LA /LV and Vud . The neutron lifetime τn is inversely proportional to G2F |Vud |2 (1 + 3|λ|2 ). Hence, independent measurements of τn and of an observable sensitive to λ, allow the determination of Vud . The value of λ can be determined from several independent neutron decay observables, introduced in Sec. 2. Each observable brings a different sensitivity to non-SM physics, such that comparing the various values of λ provides an important test of the validity of the SM. Of particular interest is the search for scalar and tensor interactions, discussed in Secs. 4.1 and 4.2. These interactions can be caused, e.g., by leptoquarks or charged Higgs bosons.6 In Sec. 4.3 we discuss a particular kind of V +A interactions, the manifest left-right symmetric (MLRS) models, with the SU (2)L × SU (2)R × U (1)B−L gauge group and righthanded charged current, approximately realized with a minimal Higgs sector. 2. Measurable Parameters of Neutron Decay The matrix element M describing neutron beta decay can be constructed as a fourfermion interaction composed of hadronic and leptonic matrix elements. Assuming that vector (V ), axial-vector (A), scalar (S), and tensor (T ) currents are involved, the decay matrix element can be written as a sum of left-handed and right-handed matrix elements: X 2GF Vud 1 + γ5 1 − γ5 M= √ |νe i+Rj hp|Γj |nihe− |Γj |νe i. Lj hp|Γj |nihe− |Γj 2 2 2 j∈{V,A,S,T }
(1)
The four types of currents are defined by the operators: ΓV = γ µ ,
ΓA = iγµ γ5 ,
ΓS = 1,
and ΓT =
i[γµ , γν ] √ . 2 2
(2)
The coupling constants to left-handed (LH) and right-handed (RH) neutrinos are denoted by Lj and Rj , respectively. This parametrization was introduced in Ref. 7 in order to highlight the handedness of the neutrino in the participating V, A, S, T currents. The Lj and Rj coupling constants are linear combinations of the coupling constants, Cj and Cj0 , that were defined in earlier work:8 Cj =
GF Vud √ (Lj + Rj ), 2
Cj0 =
GF Vud √ (Lj − Rj ), 2
for
j = V, A, S, T.
(3)
November 25, 2010
14:4
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.02˙Konrad
662
We neglect effects of time-reversal violation, i.e., we consider the above 8 couplings to be real. In neutron decay experiments the outgoing spins are usually not observed. Summing over these spin quantities, and neglecting the neutrino masses, one can evaluate the triple differential decay rate to be:9 d3 Γ =
1 G2F |Vud |2 2 pe Ee (E0 − Ee ) dEe dΩe dΩν (2π)5 2 pe · pν me pe pν ×ξ 1 + a +b + sn A +B +... , Ee Eν Ee Ee Eν
(4)
where pe , pν , Ee , and Eν are the electron (neutrino) momenta and total energies, respectively, E0 is the maximum electron total energy, me the electron mass, sn the neutron spin, and the Ωi denote solid angles. Quantities a, A, and B are the angular correlation coefficients, while b is the Fierz interference term. The latter, and the neutrino-electron correlation a, are measurable in decays of unpolarized neutrons, while the A and B, the beta and neutrino asymmetry parameters, respectively, require polarized neutrons. The dependence of a, b, A, and B on the coupling constants Lj and Rj is described in Ref. 7. We mention that in the presence of LH e S and T couplings B depends on the electron energy: B = B0 + bν m Ee , where bν is 7,9 another Fierz-like parameter, similar to b. We note that a, A, and B0 are sensitive to non-SM couplings only in second order, while b and bν depend in first order on LS and LT . A non-zero b would indicate the existence of LH S and T interactions. Another observable is C, the proton asymmetry relative to the neutron spin. Observables related to the proton do not appear in Eq. (4). However, the proton is kinematically coupled to the other decay products. The connection between C and the coupling constants Lj and Rj is given in Refs. 7 and 10. + + We also use the ratio of the Ft0 →0 values in superallowed Fermi (SAF) decays to the equivalent quantity in neutron decay, Ftn : +
rF t =
Ft0 →0 Ftn
+
+
=
+
+
+
Ft0 →0 Ft0 →0 = , 0 n f t(1 + δR ) fR ln (2)τn
(5)
where f n = 1.6887 is a statistical phase-space factor.1 The nucleus-dependent 0 (outer) radiative correction δR , and O(α2 ) corrections,11–13 change f n by ∼ 1.5 % to fR = 1.71385(34)a. The corrections implicitly assume the validity of the V −A theory.15 The dependence of rF t on coupling constants Lj and Rj is given in Ref. 7. An electrically charged gauge boson outside the SM is generically denoted W 0 . The most attractive candidate for W 0 is the WR gauge boson associated with the left-right symmetric models,16,17 which seek to provide a spontaneous origin for parity violation in weak interactions. WL and WR may mix due to spontaneous most recently published value of fR = 1.71335(15)14 used f n = 1.6886, and did not include the corrections by Marciano and Sirlin.12 Applying the Towner and Hardy prescription for splitting the radiative corrections13 increases the uncertainty in fR slightly, to reproduce Eq. (18) in Ref. 12.
a The
November 25, 2010
14:4
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.02˙Konrad
663
symmetry breaking. The physical mass eigenstates are denoted as W1 = WL cos ζ + WR sin ζ,
and W2 = −WL sin ζ + WR cos ζ,
(6)
where W1 is the familiar W boson, and ζ is the mixing angle between the two mass eigenstates. In the MLRS model, there are only three free parameters, the mass ratio δ = m21 /m22 , ζ, and λ0 , while m1,2 denote the masses of W1,2 , respectively. Since LV = 1 (CVC) and LS = LT = RS = RT = 0, the coupling constants LA , RV , and RA depend on δ, ζ, and λ0 as described in Refs. 7,18. The dependence of a, A, B, C, and τn on δ, ζ, and λ0 follows from their respective dependence on LV , LA , RV , and RA . 3. Experimental Data We present results of least-squares fits, using recent experimental data as well as target uncertainties for planned experiments on neutron decay. The principle of nonlinear χ2 minimization is discussed, e.g., in Ref. 19. Figures 1–6 show the present and expected future limits from neutron decay, respectively. The confidence regions in 2 dimensions, or confidence intervals in 1 dimension, are defined as in Ref. 20. We first analyze the presently available data on neutron decay. As input for our study we used: a = −0.103(4) and B = 0.9807(30) (both from Ref. 2), as well as + + Ft0 →0 = 3071.81(83) s as the average value for SAF decays (from Ref. 21). We used our own averages for τn and A, as follows. The most recent result of Serebrov et al.22 , τn = 878.5(8) s, is not included in the PDG 2008 average. We prefer not to exclude this measurement without being convinced that it is wrong, and include it in our average to obtain τn = (881.8 ± 1.4) s. Our average includes a scale factor of 2.5, as we obtain χ 2 = 45 for 7 degrees of freedom. The statistical probability for such a high χ2 is 1.5×10−7. If our average were the true value of the neutron lifetime τn , both the result of Serebrov et al. and the PDG average would be wrong at the 2 − 3 σ level. Two beta asymmetry experiments have completed their analyses since the PDG 2008 review. The UCNA collaboration has published A = −0.1138(46)(21). 23 The last PERKEO II run has yielded a preliminary value of A = −0.1198(5).24 We include these new results in our average, and obtain A = −0.1186(9), which includes a scale factor of 2.3 based on χ2 = 28 for 5 degrees of freedom. The statistical probability for such a high χ2 is 5 × 10−5 , not much better than in the case of τn . Hence, we find that the relative errors are about 4 % in a, 1 % in A, and 0.3 % in B. We will not use C = −0.2377(26)2 in the analysis of present results, since the PERKEO II results for B and C are derived from the same data set.b b We
note that a recent experiment25 measured the neutron spin–electron spin correlation N in neutron decay. N is the coefficient of an additional term (+N sn se ), which appears in Eq. (4) if the electron spin is detected. The N parameter depends linearly on S, T couplings. We disregard the result, as it lacks the precision to have an impact on our analysis.
November 25, 2010
14:4
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.02˙Konrad
664
About a dozen new instruments are currently planned or under construction. For recent reviews see Refs. 3,26. We will discuss a future scenario which assumes the following improvements in precision in a couple of years. • ∆a/a = 0.1 %: Measurements of the neutrino-electron correlation coefficient a with the aSPECT27,28 , aCORN29 , Nab30 , and PERC experiments are projected or underway. • ∆b = 3 × 10−3 : The first ever measurement of the Fierz interference term b in neutron decay is planned by the Nab collaboration.30 In addition, the UCNb31 and PERC collaborations are exploring measurements of b. • ∆A/A = 3 × 10−4 : Measurements of the beta asymmetry parameter A with PERKEO III32 , UCNA33 , abBA34 , and PERC35 are either planned or underway. • ∆B/B = 0.1 %: The abBA34 and UCNB36 collaborations intend to measure the neutrino asymmetry parameter B. PERC is also exploring a measurement of B. • ∆C/C = 0.1 %: The aSPECT37 and PANDA38 collaborations plan measurements of the proton asymmetry parameter C; PERC may follow suit as well. • ∆τn = 0.8 s: Measurements of the neutron lifetime τn with beam experiments39,40 , material bottles41,42 , and magnetic storage experiments43–47 are planned or underway. Our assumptions about future uncertainties for a, A, B, and C reflect the goal accuracies in the proposals, while for τn we only assume the present discrepancy to be resolved. Our assumed ∆τn corresponds to the best uncertainty claimed in a previous experiment.22 Our scenario “future limits” assumes that the SM holds and connects the different observables. We used a = −0.10588, b = 0, B = 0.98728, C = −0.23875, and + + τn = 882.2 s derived from A = −0.1186 and Ft0 →0 = 3071.81 s. These values agree with the present measurements within 2 σ. 4. Searches for Physics Beyond the Standard Model Our fits are not conclusive if all 8 coupling constants Lj and Rj , for j = V, A, S, T , are treated as free parameters. We are more interested in restricted analyses presented below. Experiments quote a, A, B, and C after applying (small) theoretical corrections for recoil and radiative effects; we neglect any dependence on non-SM physics in these corrections. 4.1. Left-handed S, T currents Addition of LH S, T currents to the SM leaves LV = 1, LA = λ, LS , and LT as the non-vanishing parameters. Non-zero Fierz interference terms b and bν appear in this model; the direct determination of b through beta spectrum shape measurement is the most sensitive way to constrain the size of the non-SM currents. The experiments discussed above measure the correlation coefficients from the electron spectra and asymmetries, respectively. The published results on a, A, B, and C assume b = bν =
November 25, 2010
14:4
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.02˙Konrad
665
0. To make use of measured values of a in a scenario involving a non-zero value for the Fierz term b, we rewrite Eq. (4) for unpolarized neutron decay: ! pe · pν me me a pe · pν 3
d Γ∝ 1+a +b ≈ 1+b 1+ . Ee Eν Ee Ee 1 + bme Ee−1 Ee Eν (7) The value quoted for a is then taken as a measurement of a ¯, defined through a
, a ¯= (8) 1 + bme Ee−1 where h·i denotes the weighted average over the part of the beta spectrum observed in the particular experiment. This procedure has been also applied in Refs. 7,21,48. Reported experimental values of A, B, and C are interpreted as measurements of
B0 + bν me Ee−1 −xC (A + B0 ) − x0C bν A ¯ ¯
−1 , B =
−1 , C¯ =
, (9) A= 1 + bme Ee 1 + bme Ee 1 + bme Ee−1
where xC = 0.27484 and x0C = 0.1978 are kinematical factors, assuming integration over all electrons.c This procedure is not perfect. The presence of a Fierz term b might influence systematic uncertainties. For example, the background estimate in PERKEO II assumes the SM dependence of the measured A on Ee . The term
me Ee−1 depends on the part of the electron spectrum
−1 used in each experiment. We ¯ dominated have used the following values in our study: m = 0.5393 for A, e Ee
−1 49 ¯ by PERKEO II , me Ee = 0.6108 for B, dominated by Serebrov et al.50,51 and 52 −1 ¯ PERKEO II , and the total mean value me Ee = 0.6556 for a ¯ and C. Figure 1 shows the current limits from neutron decay. Free parameters λ, LS /LV , and LT /LA were fitted to the observables a, b, A, B, and C. Unlike Secs. 4.2 and 4.3, here we omit the neutron lifetime τn , since otherwise we would have to determine + + the possible influence of the Fierz term in Fermi decays, bF , on the Ft0 →0 values. A combined analysis of neutron and SAF beta decays will be published soon. Figure 2 presents the impact of projected measurements in our future scenario. For comparison, a recent combined analysis of nuclear and neutron physics data (see Ref. 48) finds LS /LV = 0.0013(13) and LT /LA = 0.0036(33), with 1 σ statistical errors. It includes the determination of the Fierz term bF from superallowed beta decays, updated in Ref. 21, which sets a limit on LS that is hard to improve with neutron decay alone. As in the recent survey of Severijns et al.,48 we do not include the limits on tensor couplings obtained53 from a measurement of the Fierz term bGT in the forbidden Gamow-Teller decay of 22 Na, due to its large log f t(=7.5) value. Neutron decay has the potential to improve the best remaining nuclear limit on LT as provided by a measurement of the longitudinal polarization of positrons emitted by polarized 107 In nuclei (log f t = 5.6).54,55 Limits from neutron decay are independent of nuclear structure. The stringent limit on LT in Ref. 48 stems mainly from c Note
that we define C with the opposite sign compared to Ref. 7 to adhere to the convention that a positive asymmetry indicates that more particles are emitted in the direction of spin.
November 25, 2010
14:4
WSPC - Proceedings Trim Size: 9.75in x 6.5in
11.02˙Konrad
666 0.3
Fig. 1. Present limits from neutron decay (only a, A, and B). The SM values are at the origin of the plot. Analogous limits extracted from muon decays are indicated. Other limits are discussed in the text. All bars correspond to single parameter limits. (Axes: LS/LV and LT/LA; contours at ∆χ2 = 2.30, 4.61, and 6.17, corresponding to 68.3 %, 90 %, and 95.4 % C.L.)
Fig. 2. Future limits from neutron decay assuming improved and independent measurements of a, b, A, B, and C. Analogous limits extracted from muon decays are not indicated since they exceed the scale of the plot. (Also shown: the survey of neutron and nuclear decays, superallowed 0+→0+ decays, and nuclear decays (P(107 In), 90 % C.L.); axes as in Fig. 1.)
New neutron decay experiments alone could lead to an accuracy of ∆(LT/LA) = 0.0023, competitive with the combined analysis of neutron and nuclear physics data,48 and ∆(LS/LV) = 0.0083, both at the 1 σ confidence level. Supersymmetric (SUSY) contributions to the SM can be discovered at this level of precision, as discussed in Ref. 56.
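The quoted single-parameter limits use ∆χ2 = 1, whereas the contours in Figs. 1–6 are drawn at ∆χ2 = 2.30, 4.61 and 6.17 for two jointly fitted parameters. A quick consistency check with scipy (illustrative only, not part of the original analysis) reproduces the corresponding confidence levels, including the 86.5 % (∆χ2 = 4) contour mentioned for Fig. 4.

# Check the Delta-chi^2 -> confidence-level mapping used for the contours.
from scipy.stats import chi2

for dchi2 in (2.30, 4.61, 6.17, 4.00):          # two-parameter contours
    print("df=2:", dchi2, "->", round(chi2.cdf(dchi2, df=2), 3))

print("df=1:", 1.00, "->", round(chi2.cdf(1.00, df=1), 3))   # single-parameter 1 sigma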
4.2. Right-handed S, T currents

Adding RH S and T currents to the SM yields LV = 1, LA = λ, RS, and RT as the remaining non-zero parameters. The observables depend only quadratically on RS and RT, i.e., the possible limits are less sensitive than those obtained for LH S, T currents. Figure 3 shows the present limits from neutron decay. A similar analysis of this scenario was recently published in Ref. 57. Free parameters λ, RS/LV, and RT/LA were fitted to the observables a, A, B, C, and τn. Additionally, to take into account uncertainties in the Ft values and in radiative corrections, we fitted Ft^{0+→0+} and fR to ‘data points’ 3071.81(83) s and 1.71385(34), respectively. The Fierz interference terms b and bν are zero in this model. Hence, measurements of b (or bF in SAF beta decays) can invalidate the model, but not improve its parameters. Figure 4 shows the projected improvement in our future scenario. The grey ellipse stems from a recent survey of the state of the art in nuclear and neutron beta decays.48 New neutron decay experiments alone could considerably improve the limits on RH S and T currents, to ∆(RS/LV) = 0.0275 and ∆(RT/LA) = 0.0173.
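The role of the auxiliary ‘data points’ can be sketched as follows: external inputs such as Ft^{0+→0+} and fR float as fit parameters but are pulled to their measured values by Gaussian penalty terms in the χ2. The toy model below only illustrates this construction; the observable model is a deliberately generic placeholder, not the actual SM relations between the couplings and a, A, B, C, and τn.

# Schematic chi^2 with an auxiliary "data point": the external quantity F floats
# in the fit but is constrained by a Gaussian penalty term.  The model and the
# numbers are placeholders, not the physics relations used in the paper.
import numpy as np
from scipy.optimize import minimize

y, sig_y = np.array([1.02, 0.98, 1.05]), np.array([0.03, 0.03, 0.04])
F0, sig_F = 3071.81, 0.83                 # e.g. the Ft 'data point' with its error

def chi2_fun(params):
    theta, dF = params                    # physics parameter and shift of F
    model = theta * (1.0 + dF / F0) * np.ones_like(y)
    pulls = (y - model) / sig_y
    return np.sum(pulls**2) + (dF / sig_F) ** 2   # auxiliary data point as penalty

res = minimize(chi2_fun, x0=[1.0, 0.0], method="Nelder-Mead")
print("theta =", res.x[0], " F =", F0 + res.x[1], " chi2_min =", res.fun)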
Fig. 3. Current limits from a, A, B, and τn in neutron decay. The SM prediction is at plot origin. As a comparison, we show limits from a survey of nuclear and neutron beta decays,48 and limits from muon decays and neutrino mass measurements. The muon limit on LS/LV is larger than the scale of the plot. (Axes: RS/LV and RT/LA.)
Fig. 4. Future limits from neutron decay, assuming improved measurements of a, A, B, C, and τn . The grey ellipse is the present 86.5 % contour from a recent survey of nuclear and neutron beta decays.48 Analogous tensor limits from muon decays are also shown—the scalar limits are larger than the scale of the plot (details in text).
4.3. Right-handed W bosons
Adding RH V and A currents to the SM leaves δ, ζ, and λ′ as the non-vanishing parameters. Figure 5 shows the current limits from neutron decay. The fit parameters δ, ζ, λ′, Ft^{0+→0+}, and fR were fitted to the observables discussed in Sec. 4.2.
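For orientation, the mass ratio δ can be converted into a lower bound on the mass m2 of the heavier, predominantly RH W boson. The sketch below assumes the common MLRS identification δ = (m1/m2)² with m1 ≈ 80.4 GeV (our assumption, not spelled out in this excerpt); it reproduces the m2 > 574 GeV obtained further below from the limit δ < 0.0196.

# Convert an upper limit on the mass ratio delta = (m1/m2)^2 into a lower limit
# on the heavy-W mass m2, taking m1 as the ordinary W-boson mass (assumption).
import math

m1 = 80.4              # GeV
delta_limit = 0.0196   # 68.3% C.L. upper limit quoted in the text

m2_min = m1 / math.sqrt(delta_limit)
print("m2 >", round(m2_min), "GeV")   # about 574 GeV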
Fig. 5. Current limits from a, A, B, and τn in neutron decay. The SM prediction is at plot origin. As a comparison, we show analogous limits from muon decays58,59, lepton scattering (deep inelastic ν-hadron, ν-e scattering and e-hadron interactions)60, and a direct search at DØ61. (Axes: mixing angle ζ and mass ratio δ, with the corresponding mass m2 in GeV indicated.)
Fig. 6. Projected future limits from neutron decay, assuming improved measurements of a, A, B, C, and τn. The value of |Vud| from superallowed 0+ → 0+ nuclear beta decays was used to set a limit on ζ, assuming that the CKM matrix for LH quarks is strictly unitary (see Ref. 21). (Axes as in Fig. 5.)
Measurements of the polarized observables, i.e., the electron, neutrino, or proton asymmetries, lead to important restrictions, but are at present inferior to limits on the mixing angle ζ from µ decays.59 They are also inferior to limits on the mass m2 from direct searches for extra W bosons.2 Comparison of beta decay limits with high-energy data is possible in our minimal MLRS model. For example, the comparison with W′ searches at the Tevatron61 assumes a RH CKM matrix identical to the LH one and identical couplings. In more general scenarios the limits are complementary to each other, since they probe different combinations of the RH parameters.62 Figure 6 shows the improvement from planned measurements in our future scenario. The χ2 minimization converges to a single minimum at mass m2 = ∞ with χ2 = 0, i.e., the mixing angle ζ is not defined at this minimum. The 68.3 % C.L. limit is δ < 0.0196, which yields m2 > 574 GeV. In the mass range > 1 TeV, not excluded by collider experiments, we would improve the limit on ζ from µ decays slightly. We emphasize that all presented RH coupling limits (RS, RT, δ, and ζ) assume that the RH (Majorana) neutrinos are light (m ≪ 1 MeV). The RH interactions are kinematically weakened by the masses of the predominantly RH neutrinos, if these masses are not much smaller than the electron endpoint energy in neutron decay (782 keV). If both the W boson and neutrino left-right mixing angles were zero, and if the RH neutrino masses were above 782 keV, RH corrections to neutron decay observables would be completely absent. In summary, new physics may be within reach of precision measurements in neutron beta decay in the near future.

5. Limits From Other Measurements

5.1. Constraints from muon and pion decays

Muon decay provides arguably the theoretically cleanest limits on non-(V−A) weak interaction couplings.2,63 Muon decay involves operators that are different from the ones encountered in neutron, and generally hadronic, decays. However, in certain models (e.g., the SUSY extensions discussed in Ref. 56, or in the MLRS), the muon and neutron decay derived limits become comparable.64 In order to illustrate the relative sensitivities of the muon and neutron sectors, we have attempted to translate the muon limits from Refs. 2 and 63 into corresponding neutron observables such as LS/LV, LT/LA, and RT/RA. In doing so we neglected possible differences in SUSY contributions to muon and quark decays, making the comparison merely illustrative. These limits are plotted in Figs. 1–6, as appropriate, showing that neutron decay measurements at their current and projected future sensitivity are not only complementary, but also competitive to the muon sector. Limits similar to the ones discussed in Sec. 5 can be extracted from pion decays (the added complexity of heavier meson decays limits their sensitivity). The presence of a tensor interaction would manifest itself in both the Fierz interference term in beta decays (e.g., of the neutron) and in a non-zero value of the tensor form factor for the pion. The latter was hinted at for well over a decade, but was recently
found to be constrained to −5.2 × 10−4 < FT < 4.0 × 10−4 with 90 % C.L.65 While values for b in neutron decay and for the pion FT are not directly comparable, in certain simple scenarios they would be of the same order.66 Thus, finding a non-zero value for b in neutron decay at the level of O(10−3) would be extremely interesting. Similarly, the π → eν decay (πe2) offers a very sensitive means to study non-(V−A) weak couplings, primarily through a pseudoscalar term in the amplitude. Alternatively, πe2 decay provides the most sensitive test of lepton universality. Thus, new measurements in neutron decay would complement the results of precision experiments in the pion sector, such as PIBETA67 and PEN68.

5.2. RH coupling constraints from 0ν double β decay, and mν

The most natural mechanism of neutrinoless (0ν) double beta decay is through virtual electron neutrino exchange between the two neutron decay vertices. The LH and RH νe may mix with mass eigenstate Majorana neutrinos Ni:69
νeL = Σ_{i=1}^{6} Uei (1 − γ5)/2 Ni ,   and   νeR = Σ_{i=1}^{6} Vei (1 + γ5)/2 Ni ,   (10)
where Uei and Vei denote elements of the LH and RH mixing matrices, respectively. The neutrinoless double beta decay amplitude with the virtual neutrino propagator has two parts.70 If the SM LH V−A coupling combines with LH coupling terms (LL interference), the amplitude contribution is proportional to the Majorana neutrino masses (weighted with the Uei^2 factors). Since from neutrino oscillations we have rather small lower limits for these masses (40 meV for the heaviest LH neutrino71), we get only weak constraints for the non-SM LH couplings. On the other hand, if the SM LH V−A coupling combines with RH non-SM terms (LR interference), the amplitude is proportional to the virtual neutrino momentum (instead of the neutrino mass); since the momentum can be quite large, we get constraints for the RH non-SM couplings. The latter part of the 0ν double beta decay amplitude is proportional to the effective RH couplings R̃j = Rj ε, for j = V, A, S, T, where69,72

ε = Σ_{i=1}^{6 (light)} Uei Vei ,   where “light” implies mi < 10 MeV.   (11)
According to Ref. 69 there are three different scenarios:
• D: all neutrinos are light Dirac particles =⇒ no constraints for non-SM couplings, because ε = 0.
• M-I: all neutrinos are light (< 1 MeV) Majorana particles =⇒ no constraints for non-SM couplings, because ε = 0 from the orthogonality condition (see the numerical sketch after this list).
• M-II: both light (< MeV) and heavy (> GeV) Majorana neutrinos exist =⇒ constraints for non-SM couplings: ε ≠ 0, because heavy neutrinos are missing from the sum; ε is on the order of the unknown, likely small, mixing angle θLR between LH and RH neutrinos.
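The different behaviour of the scenarios can be illustrated with a purely numerical toy model: take the ‘eL’ and ‘eR’ rows of one orthogonal 6 × 6 mixing matrix, so that the sum over all mass eigenstates vanishes by orthogonality (as in M-I), while restricting the sum to the light states only (as in M-II) leaves a non-zero remainder. This is only an illustration of the orthogonality argument, not a realistic neutrino mixing model.

# Toy illustration of eps (Eq. 11): zero when summed over all states, non-zero
# when the "heavy" states are excluded from the sum.  Not a physical mixing model.
import numpy as np

rng = np.random.default_rng(1)
O, _ = np.linalg.qr(rng.normal(size=(6, 6)))   # random real orthogonal matrix
U, V = O[0], O[1]                              # toy 'eL' and 'eR' rows

print("all 6 states: eps =", float(np.dot(U, V)))          # ~1e-16 (orthogonality)
print("3 light only: eps =", float(np.dot(U[:3], V[:3])))  # generically non-zero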
In the M-II scenario there are stringent constraints for the effective RH V, A, S, T couplings: |R̃j| < 10−8.70 These effective couplings are proportional to ε ∼ θLR.69,72 Since ε depends on specific neutrino mixing models, it is not possible to give model independent limits for the Rj couplings based on 0ν double β decay data. We have already mentioned in Sec. 4.3 that for heavy RH (Majorana) neutrinos the RH observables in neutron decay are kinematically weakened or, in special cases, completely suppressed. Assuming a 1 TeV effective RH neutrino mass scale within M-II, one obtains |ζ| < 4.7×10−3 and m2 > 1.1 TeV.72 For a larger RH neutrino mass scale these constraints become weaker. In Ref. 73 it is argued that neutrinoless double beta decay occurs in nature. If further experiments confirm this observation, one can be sure that the neutrinos are Majorana particles. The RH couplings can contribute to the neutrino mass through loop effects, leading to constraints on the RH coupling constants from neutrino mass limits.74 Using the absolute neutrino mass limit m(νe) < 2.2 eV from the Troitsk and Mainz tritium decay experiments,75,76 one obtains the 1 σ limits: |RS| < 0.01, |RT| < 0.1, and |RV − RA| < 0.1. With the model dependent limit m(νe) < 0.22 eV from cosmology77 (a similar neutrino mass limit is expected from the KATRIN experiment78), the above coupling constant limits become 10 times more restrictive. An intermediate neutrino mass upper limit of order 0.5 − 0.6 eV comes from neutrinoless double beta decay73 and from other cosmology analyses79.

Acknowledgments

This work was supported by the German Federal Ministry of Education and Research under Contract Nos. 06MZ989I and 06MZ170, the European Commission under Contract No. 506065, the Universität Mainz, and the National Science Foundation Grants PHY-0653356, -0855610, and -0970013.

References

1. A. Czarnecki, W. J. Marciano and A. Sirlin, Phys. Rev. D 70, p. 093006 (2004).
2. C. Amsler et al., Phys. Lett. B 667, p. 1 (2008), and 2009 partial update for the 2010 edition (URL: http://pdg.lbl.gov).
3. H. Abele, Prog. Part. Nucl. Phys. 60, p. 1 (2008).
4. J. S. Nico and W. M. Snow, Annu. Rev. Nucl. Part. Sci. 55, p. 60 (2005).
5. D. Dubbers, Prog. Part. Nucl. Phys. 26, p. 173 (1991).
6. P. Herczeg, Prog. Part. Nucl. Phys. 46, p. 413 (2001).
7. F. Glück, J. Joó and J. Last, Nucl. Phys. A 593, p. 125 (1995).
8. T. D. Lee and C. N. Yang, Phys. Rev. 104, p. 254 (1956).
9. J. D. Jackson, S. B. Treiman and H. W. Wyld, Phys. Rev. 106, p. 517 (1957).
10. F. Glück, Phys. Lett. B 376, p. 25 (1996).
11. D. H. Wilkinson, Nucl. Phys. A 377, p. 474 (1982).
12. W. J. Marciano and A. Sirlin, Phys. Rev. Lett. 96, p. 032002 (2006).
13. I. S. Towner and J. C. Hardy, Phys. Rev. C 77, p. 025501 (2008).
14. H. Abele et al., Eur. Phys. J. C 33, p. 1 (2004).
15. A. Sirlin, Phys. Rev. 164, p. 1767 (1967).
16. J. Pati and A. Salam, Phys. Rev. D 10, p. 275 (1974).
17. R. Mohapatra and J. Pati, Phys. Rev. D 11, p. 566 (1975).
18. M. A. Bég, Phys. Rev. Lett. 38, p. 1252 (1977).
19. W. T. Eadie et al., Statistical Methods in Experimental Physics (North-Holland Publishing Co., 1971).
20. W. H. Press et al., Numerical Recipes: The Art of Scientific Computing, 3rd edn. (Cambridge University Press, 2007), Sec. 15.6.
21. J. C. Hardy and I. S. Towner, Phys. Rev. C 79, p. 055502 (2009).
22. A. P. Serebrov et al., Phys. Lett. B 605, p. 72 (2005).
23. R. W. Pattie et al., Phys. Rev. Lett. 102, p. 012301 (2009).
24. H. Abele, private communication (2010).
25. A. Kozela et al., Phys. Rev. Lett. 102, p. 172301 (2009).
26. J. S. Nico, J. Phys. G: Nucl. Part. Phys. 36, p. 104001 (2009).
27. F. Glück et al., Eur. Phys. J. A 23, p. 135 (2005).
28. S. Baeßler et al., Eur. Phys. J. A 38, p. 17 (2008).
29. F. E. Wietfeldt et al., Nucl. Instr. and Meth. A (2009).
30. D. Počanić et al., Nucl. Instr. and Meth. A 611, p. 211 (2009).
31. K. P. Hickerson, in UCN Workshop, November 6-7 2009, Santa Fe, New Mexico, 2009. http://neutron.physics.ncsu.edu/UCN Workshop 09/Hickerson SantaFe 2009.pdf.
32. B. Märkisch et al., Nucl. Instr. and Meth. A 611, p. 216 (2009).
33. B. Plaster et al., Nucl. Instr. and Meth. A 595, p. 587 (2008).
34. http://nab.phys.virginia.edu/abba proposal 2007.pdf (2007).
35. D. Dubbers et al., Nucl. Instr. and Meth. A 596, p. 238 (2008).
36. W. S. Wilburn et al., Rev. Mex. Fís. 55, p. 119 (2009).
37. O. Zimmer, private communication (2010).
38. http://research.physics.lsa.umich.edu/chupp/panda.
39. M. Dewey et al., Nucl. Instr. and Meth. A 611, p. 189 (2009).
40. H. M. Shimizu, in Proceedings of the 7th International UCN Workshop, Saint Petersburg, Russia, 2009. http://cns.pnpi.spb.ru/ucn/articles/Shimizu.pdf.
41. S. Arzumanov et al., Nucl. Instr. and Meth. A 611, p. 186 (2009).
42. A. P. Serebrov, in Proceedings of the 7th International UCN Workshop, Saint Petersburg, Russia, 2009. http://cns.pnpi.spb.ru/ucn/articles/Serebrov3.pdf.
43. P. L. Walstrom et al., Nucl. Instr. and Meth. A 599, p. 82 (2009).
44. V. F. Ezhov et al., Nucl. Instr. and Meth. A 611, p. 167 (2009).
45. S. Materne et al., Nucl. Instr. and Meth. A 611, p. 176 (2009).
46. K. K. H. Leung and O. Zimmer, Nucl. Instr. and Meth. A 611, p. 181 (2009).
47. C. M. O’Shaughnessy et al., Nucl. Instr. and Meth. A 611, p. 171 (2009).
48. N. Severijns, M. Beck and O. Naviliat-Cuncic, Rev. Mod. Phys. 78, p. 991 (2006).
49. H. Abele et al., Phys. Rev. Lett. 88, p. 211801 (2002).
50. A. P. Serebrov et al., Zh. Eksp. Teor. Fiz. 113, p. 1963 (1998).
51. A. P. Serebrov et al., J. Exp. and Theor. Phys. 86, p. 1074 (1998).
52. M. Schumann et al., Phys. Rev. Lett. 99, p. 191803 (2007).
53. A. I. Boothroyd, J. Markey and P. Vogel, Phys. Rev. C 29, p. 603 (1984).
54. J. Camps, PhD thesis, Kath. Univ. Leuven (1997).
55. N. Severijns et al., Hyperfine Interact. 129, p. 223 (2000).
56. S. Profumo, M. J. Ramsey-Musolf and S. Tulin, Phys. Rev. D 75, p. 075017 (2007).
57. M. Schumann, arXiv:0705.3769v2 [hep-ph] (2007).
58. G. Barenboim et al., Phys. Rev. D 55, p. 4213 (1997).
59. R. P. MacDonald et al., Phys. Rev. D 78, p. 032010 (2008).
60. M. Czakon, J. Gluza and M. Zralek, Phys. Lett. B 458, p. 355 (1999).
61. V. M. Abazov et al., Phys. Rev. Lett. 100, p. 031804 (2008).
62. N. Severijns et al., Nucl. Phys. A 629, p. 429c (1998).
63. C. A. Gagliardi, R. Tribble and N. J. Williams, Phys. Rev. D 72, p. 073002 (2005).
64. V. Cirigliano, private communication (2009).
65. M. Bychkov et al., Phys. Rev. Lett. 103, p. 051802 (2009).
66. P. Herczeg, private communication (2004).
67. http://pibeta.phys.virginia.edu/.
68. http://pen.phys.virginia.edu/.
69. M. Doi, T. Kotani and E. Takasugi, Prog. Theor. Phys. Suppl. 83, p. 1 (1985).
70. H. Päs et al., Phys. Lett. B 453, p. 194 (1999).
71. R. Mohapatra and A. Y. Smirnov, Annu. Rev. Nucl. Part. Sci. 56, p. 569 (2006).
72. J. Hirsch, H. V. Klapdor-Kleingrothaus and O. Panella, Phys. Lett. B 374, p. 7 (1996).
73. H. V. Klapdor-Kleingrothaus and I. V. Krivosheina, Modern Phys. Lett. A 21, p. 1547 (2006).
74. T. M. Ito and G. Prezeau, Phys. Rev. Lett. 94, p. 161802 (2005).
75. V. M. Lobashev et al., Nucl. Phys. A 719, p. C153 (2003).
76. C. Kraus et al., Eur. Phys. J. C 40, p. 447 (2005).
77. E. Komatsu et al., Astrophys. J. Suppl. Ser. 180, p. 330 (2009).
78. J. Angrik et al., FZKA Scientific Report 7090 (2004), http://bibliothek.fzk.de/zb/berichte/FZKA7090.pdf.
79. M. Tegmark et al., Phys. Rev. D 69, p. 103501 (2004).
PART XII
Superheavy Elements
STUDY OF SHE AT GSI - STATUS AND PERSPECTIVES FOR THE NEXT DECADE

F. P. HESSBERGER

Department ‘Superheavy Elements’, GSI Helmholtzzentrum für Schwerionenforschung mbH, Planckstraße 1, 64291 Darmstadt, Germany
and
Section ‘Superheavy Elements - Physics’, Helmholtzinstitut Mainz, 55099 Mainz, Germany
E-mail: [email protected]

An extensive program on the synthesis of superheavy elements (SHE) and the investigation of their nuclear structure as well as their chemical properties has been performed at the UNILAC accelerator at GSI during the past three decades. Highlights of this research program were the identification of the new elements with atomic numbers Z = 107-112, detailed nuclear structure investigations and the discovery of new K isomers in the transfermium region, the first chemical characterization of element 108 (hassium), and the identification of the deformed doubly magic nucleus 270 Hs. Some of the latest results are presented and discussed. Current and planned upgrades of the facility and the experimental set-ups are also presented.
Keywords: Superheavy elements, heavy ion reactions, nuclear structure, alpha- and gamma-spectroscopy, recoil separators.
1. Introduction

Extrapolations of the nuclear shell model1,2 into regions far beyond the highest experimentally established spherical proton and neutron shell closures at Z=82 and N=126 led to the prediction of the next spherical shell closures at Z=114 and N=184.3 This marked the beginning of intensive theoretical studies to check the new magic proton and neutron numbers, to determine the shell strengths and the properties of the nuclei in the considered region (see e.g.4). The theories were based on the nuclear liquid drop model, originally suggested by C. F. von Weizsäcker,5 taking into account the effect of the shell structure on the nuclear ground-state masses in the form of small energy corrections calculated using a method suggested by V. M. Strutinsky.6 The latter are usually denoted as ‘shell effects’. Large negative values indicated a (local) increase of the nuclear binding energy, i.e. an enhanced stability against radioactive decay. In the region of transfermium nuclei, stabilization against spontaneous fission was of specific importance, since fission barriers calculated on the basis of the liquid-drop model (see7 for the first theoretical treatment)
Fig. 1. Ground-state shell correction energies predicted for nuclei in the region 99≤Z≤120 and 140≤N≤190 on the basis of microscopic-macroscopic calculations.8 See text for details.
were expected to vanish around Z ≈ 106, with the exact location depending on the parametrization of the Coulomb and the surface energies. Therefore spontaneous fission was regarded as the dominant decay mode at Z > 106, with decreasing half-lives at increasing atomic numbers. Around the Z=114, N=184 shell closures, however, due to the high shell correction energies, which convert into high fission barriers, a strong stabilization against spontaneous fission and thus long fission half-lives were expected. The nuclei in the considered region were expected to form an ’island of stability’ within a ’sea of instability’ and soon were denoted as ’superheavy’. Predicted half-lives, however, varied considerably, depending on the shell effects and the competition with α and β decay. So the maximum half-lives were not expected at the shell closures (Z=114, N=184), but at somewhat lower values of Z and N.4 About two decades later, improved models exhibited another feature: besides the spherical one at Z=114, N=184, another shell closure within the region of strongly prolate deformed nuclei (β2 ≈ 0.2) was predicted at Z=108, N=162, forming the backbone of a bridge of nuclei, with half-lives sufficiently long to be detected, between the actinides and the ’superheavies’. The situation, based on more recent calculations,8 is displayed in fig. 1. The full squares represent the nuclei that had been known at the time of the first prediction of superheavy elements (SHE), the open squares those that have been identified or claimed to be discovered since then. Superheavy nuclei claimed to be first synthesized in irradiations of actinide targets with 48 Ca projectiles at FLNR - JINR Dubna9,10 are marked by crosses. About a decade ago, first results on predictions of proton and neutron shell closures using a completely different approach were reported. They were obtained using self-consistent models like Skyrme-Hartree-Fock-Bogoliubov (SHFB) calculations
or relativistic mean-field models (RMF) using NL3, NL-Z2 or NL-Z parametrisations.11,12 The RMF calculations resulted in N = 172 as the magic neutron number, while the Skyrme-force based calculations resulted in N = 184. The proton shell appeared at Z = 120 in the RMF and most of the SHFB calculations, while the SkI4 parametrization resulted in Z = 114 and the SkP and SkM* parametrizations in Z = 126 as magic number.11 In the light of these different predictions, the situation clearly has to be clarified experimentally. Indeed, already the first theoretical predictions of SHE were followed by vast experimental programs at different laboratories to produce and identify those nuclei and to investigate their nuclear, atomic and chemical properties. Experimental techniques and sensitivity were developed and improved continuously and presently allow for the synthesis and identification of superheavy nuclei with production cross sections lower than 50 fb. At the UNILAC accelerator, installed at GSI - Helmholtzzentrum für Schwerionenforschung, Darmstadt, Germany, experiments aiming in this direction have been an essential part of the scientific program for more than three decades, using the velocity filter SHIP,13 chemical separation methods (see e.g.14) and, more recently, the gas-filled separator TASCA.15 Highlights of this research program were the identification of the new elements with atomic numbers Z = 107-112,16 detailed nuclear structure investigations17–19 and the discovery of new K isomers20–22 in the transfermium region, the first chemical characterization of element 108 (hassium)23 and the identification of the deformed doubly magic nucleus 270 Hs.24
2. Synthesis of Superheavy Elements

2.1. ’Cold’ fusion reactions

’Cold’ fusion reactions of lead and bismuth isotopes with medium-heavy projectiles ranging from 50 Ti to 70 Zn have been the most successful method to produce isotopes of the elements rutherfordium (Z=104) to copernicium (Z=112).19 These combinations allow the production of compound nuclei with low excitation energies (E∗ < 20 MeV) at the Coulomb barrier, thanks to the large negative shell correction energies of nuclei around the doubly magic 208 Pb. With increasing proton number of the compound nucleus (ZCN) the advantage of the high survival probability due to the low excitation energy is essentially cancelled by the drop of the fusion probability close to the Coulomb barrier at increasing proton numbers of the projectile.25 ’Hot’ fusion reactions using actinide isotopes as target material and 48 Ca ions as projectiles turned out to be more favourable for the synthesis of elements Z > 112 (see sect. 2.2). Nevertheless ’cold’ fusion reactions are still of high interest for the production of neutron-deficient isotopes of elements 102 ≤ Z ≤ 110, which, owing to their relatively high production cross-sections (σ > 10 pb), are presently candidates for detailed spectroscopic investigations. Since the latter purpose requires production rates as high as possible, the optimum reaction for the production of a specific isotope has
to be chosen, which requires a deep understanding of the fusion process. An illustrative example is shown in fig. 2. Here the measured 1n cross-sections (full symbols) and 2n cross-sections (open symbols) for the complete fusion reactions 54 Cr + 208 Pb (squares, full lines), 207 Pb (diamonds, dashed lines) and 206 Pb (dots, dotted lines), recently measured at SHIP, are compared with each other and with the results of HIVAP26 calculations. Fission barriers and the dynamical hindrance of fusion around the barrier were parameterized to reproduce the data for 54 Cr + 208 Pb. For the 2n channels, decreasing maximum cross-sections are obtained with decreasing neutron number due to the increasing fissility of the compound nuclei, in accordance with the HIVAP calculations. For the 1n channels, whose maxima occur far below the fusion barrier located at E∗ ≈ 23 MeV for the three systems, the situation is different. While for 54 Cr + 207 Pb the experimental data are reproduced within a factor of two by the calculations, for 54 Cr + 206 Pb the measured 1n cross-section is about a factor of seven higher than the value predicted by HIVAP and is even a factor of two higher than the 1n cross-section for 54 Cr + 207 Pb. The same isotope, 259 Sg, is produced by the reactions 206 Pb(54 Cr,1n)259 Sg and 207 Pb(54 Cr,2n)259 Sg. Since the cross-section for the latter case is fairly well reproduced by the HIVAP calculations, the discrepancy between the measured and predicted values for production by 1n deexcitation is certainly not due to an enhanced stability of the nucleus against prompt fission during the deexcitation process, but rather due to an enhanced fusion probability below the fusion barrier, evidently caused by the nuclear structure of the target nucleus. Investigations to understand such processes in detail with respect to the synthesis of SHE are in progress.
2.2. Actinide based (‘hot’) fusion reactions

The steep drop of cross-sections in ’cold’ fusion reactions with increasing ZCN, or equivalently with increasing proton number of the projectiles (ZProj) (see e.g.19), made it necessary to consider alternative reactions to produce isotopes of elements Z > 112. As the drop was understood to be essentially due to a decreasing fusion probability at energies around the barrier (see25 for a recent publication on this subject), it did not seem promising to test more symmetric reactions, although for specific target-projectile combinations still more favourable Q-values were expected; instead, the way forward was to come back to more asymmetric ’hot’ fusion reactions. Fusion was expected to be less hindered at the barrier; the price, however, is a higher excitation energy, requiring the emission of 3-4 neutrons during the de-excitation of the compound nucleus instead of only one as in the case of ’cold’ fusion. In order to minimize the excitation energy and to reach the most neutron-rich compound nuclei, the doubly magic 48 Ca (Z=20, N=28) was regarded as the ideal projectile. Indeed, reactions using 48 Ca projectiles and actinide isotopes as targets were tested already in the seventies and eighties of the last century to produce SHE. These experiments did not show positive results, ultimately due to relatively high upper cross-section limits of ≈200 pb for short-lived (T1/2 = 1 µs - 1 d) and (10 - 100) pb for long-lived (T1/2 ≈ 1 d - 1 y) isotopes.27
Fig. 2. Excitation functions for the production of seaborgium isotopes in reactions 54 Cr + 208 Pb (squares), 207 Pb (diamonds), 206 Pb (circles); full symbols represent the 1n deexcitation channels, open symbols the 2n channels. The lines represent the results of HIVAP calculations (see text for details).
Technical improvements of experimental set-ups, higher available beam intensities, and the development of rotating wheels for actinide targets made it possible to reach three orders of magnitude lower cross-section limits in experimental runs of reasonable duration. In this spirit a voluminous program on SHE research was started at FLNR-JINR Dubna, resulting in the claim of having synthesized and identified isotopes of elements 113-116 and 118,9 and recently also of element 117.10 Using 48 Ca projectiles one can reach elements only up to Z = 118, since the californium isotopes 249,252 Cf are presently the heaviest ones available in quantities high enough to produce target wheels. To synthesize isotopes having Z ≥ 119, heavier projectiles are necessary. Theoretical studies indicate that due to entrance channel limitations the most asymmetric combinations are the most promising.25 Nevertheless, in a first attempt to synthesize element 120 at SHIP the reaction 64 Ni + 238 U was chosen, although the maximum predicted cross-section was only about 5 fb for the 3n deexcitation channel.25 Besides the unproblematic technical feasibility due to the low radioactivity of 238 U compared to e.g. 248 Cm, the choice of the reaction was motivated by speculation on an enhanced survival probability of the compound nucleus due to higher fission barriers. In Ref. 25 cross-section calculations were performed using fission barriers from Ref. 28, delivering a value Bfiss = 7.3 MeV for the compound nucleus 302 120 (N = 182), which are based on Z = 114 as the magic proton number. Self-consistent calculations, predicting Z = 120 as magic number, result in fission barriers of up to 12 MeV (SLy6 force).29 HIVAP calculations26 result in an increase of the 3n cross-section at the expected maximum
(E∗ ≈ 35 MeV) by a factor of ≈5 for an increase of the fission barrier by 1 MeV. So a fission barrier of 10 MeV could raise the cross-section from the value of Ref. 25 to values of the order of 500 fb. However, another model, using a different ansatz for calculating the fusion probability, leads to a maximum cross-section of only 0.05 fb for the 3n channel in the complete fusion reaction 64 Ni + 238 U, but on the other hand to an unrealistically high value of ≈10 pb for the 3n channel in the reaction 54 Cr + 248 Cm.31 Nevertheless, a fission barrier of ≈12 MeV29 would increase the 3n cross section for the 64 Ni + 238 U reaction also in this case to ≈200 fb. In an experimental run of 116 days at SHIP no α decays or spontaneous fission events that could originate from 298,299,300 120 were observed, resulting in an upper cross-section limit of 90 fb.30 The negative result, however, does not disprove Z = 120 as magic proton number, since cross-sections at bombarding energies close to the barrier are extremely sensitive to the fusion probability, as seen from the comparison of the results of Refs. 25 and 31. Nevertheless it seems evident from the negative result of the SHIP experiment that fission barriers as high as published in Ref. 29 at Z = 120 are rather unrealistic. In other words, if a proton shell exists at Z = 120 it should be much weaker than predicted. Consequently the strategy at GSI for the near future is to proceed stepwise towards the upper end of the chart of nuclei using targets heavier than uranium. In a first experiment the isotopes 288,289 114 were synthesized in bombardments of 244 Pu with 48 Ca at TASCA, for which a transmission of ≈60% was estimated. As a new result, a so far unreported α-decay branch of 281 Ds was observed, leading to the new isotope 277 Hs, which in turn decays by spontaneous fission with a half-life of the order of 3 ms.32 An experiment to produce the isotopes 292,293 116 via the 4n and 3n evaporation channels in complete fusion reactions of 48 Ca projectiles with 248 Cm target nuclei is in preparation. The next milestones will certainly be attempts to synthesize element 119 in the reaction 50 Ti + 249 Bk and element 120 in the reactions 50 Ti + 249 Cf or 54 Cr + 248 Cm, for which maximum evaporation residue cross-sections around 40 fb and 25 fb, respectively, are predicted.25,33
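For orientation, the relation between a null result and an upper cross-section limit follows from simple counting: σ_limit ≈ N_up / (beam dose × target areal density × overall efficiency), with N_up ≈ 3 expected events for zero observed events at roughly 95 % C.L. The Python sketch below uses placeholder values for beam intensity, target thickness and efficiency (they are not the parameters of the SHIP experiment), merely to show how a sensitivity at the 100 fb scale can arise in a run of this length.

# Illustrative upper-limit estimate for a null result (all inputs are placeholders).
AVOGADRO = 6.022e23

beam_pps   = 3.0e12                      # particles/s (~0.5 particle-uA), assumed
dose       = beam_pps * 116 * 86400      # 116-day run, as quoted in the text
n_target   = 0.5e-3 * AVOGADRO / 238.0   # atoms/cm^2 for an assumed 0.5 mg/cm^2 238-U target
efficiency = 0.4                         # assumed transmission x detection efficiency
N_up       = 3.0                         # ~95% C.L. upper limit on the mean for 0 events

sigma_cm2 = N_up / (dose * n_target * efficiency)
print(sigma_cm2, "cm^2  =", sigma_cm2 / 1e-39, "fb")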
3. Nuclear Structure Investigations

Superheavy nuclei are stabilized solely by ’shell effects’, which are determined by the structure of the nucleus. Therefore, investigation of the nuclear structure is the key to understanding the existence and the stability of superheavy nuclei. Information on nuclear structure can be obtained from the study of radioactive decay. Due to the low production rates, for a long time the investigation of α decay was practically the only method to obtain limited information on the structure of nuclei in the transfermium region. The main observables were α-decay energies and line intensities, which allowed for the determination of hindrance factors. While α-particle energies allow, in cases where the ground-state decay has been observed, the determination of the excitation energy of the levels populated by the α transition, hindrance factors
Fig. 3. a) simplified decay schemes of the N = 151 isotones 249 Cf, 251 Fm and 253 No; b) γ spectrum observed in coincidence with α decays of 253 No.
reflect differences in the nuclear structure between mother and daughter states. Based on the systematic trends that will be discussed in sect. 3.1, in some cases (e.g. 255 No34) tentative assignments of spin, parity and Nilsson quantum numbers of excited daughter levels could be performed solely on the basis of α spectroscopy. In general, however, only information of limited quality could be obtained. Starting about ten years ago, the use of improved experimental set-ups and facilities, primarily characterized by the availability of highly intense beams (i ≈ 1000 pnA), allowing the production of considerably higher numbers of transfermium nuclei in reasonable irradiation times, and of highly efficient detectors (or detector arrangements) for measuring γ radiation, made it possible to extend detailed nuclear structure studies into a region up to Z ≈ 108, in other words, to nuclei with production cross-sections as low as ≈100 pb. Further improvements of the experimental facilities (see section 4) will make it possible within the next decade to investigate in detail nuclei with production cross-sections even an order of magnitude lower.

3.1. Decay spectroscopy

The availability of highly intense beams of ’medium-heavy’ projectiles (e.g. 48 Ca) and enhanced detector systems for measuring α and γ radiation have enabled detailed decay studies of nuclei in the transfermium region by means of α-γ spectroscopy during the past years. These studies allow the determination of the energies and ordering of low-lying Nilsson levels, essentially in the energy range E∗ < 500 keV. Since low-
Fig. 4. a) Gamma spectrum from the decay of the K isomer 252m No; the decay curve is shown in the inset; b) decay scheme of 252m No (data taken from Ref. 21).
lying levels in odd-mass nuclei are essentially determined by the unpaired nucleon, systematic trends are obvious in even-Z nuclei along the isotone lines and in odd-Z nuclei along the isotope lines.35 Although the production rates that are presently possible within experimental runs of reasonable duration (up to some weeks) are not sufficient to establish detailed level schemes, they allow for establishing systematic trends in the energies of low-lying Nilsson levels along the isotone or isotope lines. As an illustrative example, the decay schemes of the N = 151 isotones 249 Cf, 251 Fm (data taken from36), and 253 No (recent measurements at SHIP19) are presented in fig. 3a. The γ spectrum taken in prompt coincidence (∆t(α − γ) ≤ 1 µs) with α decays of 253 No is shown in fig. 3b. By far the strongest α transition is the unhindered one between the 9/2− ground-states of the N = 151 isotones and the corresponding levels in the daughter nuclei. The latter decay by three strong E1 transitions either into the 7/2+ ground-states or the 9/2+ and 11/2+ members of the rotational band built up on the ground-state. Weaker α transitions populate the 5/2+[622] and the 7/2−[743] levels in the daughter nuclei, which then decay via M1 and E1 transitions, respectively, into the ground-state.

3.2. K isomers

K isomers are specific forms of spin traps whose existence does not only depend on the magnitude of the nuclear spin vector but also on its orientation.37 They occur only in axially symmetric deformed nuclei. The quantum number K denotes the
Fig. 5. a) Symbolic coupling schemes for the occurrence of two-quasiparticle states in 252 No; b) comparison of calculated excitation energies of two-quasiparticle states in N = 150 isotones48 and experimental placement of the 8− K-isomers in 246 Cm, 250 Fm and 252 No; full lines represent two-quasineutron states, dashed lines two-quasiproton states.
projection of the total nuclear spin onto the symmetry axis. The first K isomers in the region of the heaviest nuclei (Z ≥ 100) had been identified in 250 Fm and 254 No already in 1973 by γ-recoil measurements, which, however, allowed only the determination of the half-life.38 First γ-decay studies of a K isomer in the heavy element region were performed for 256m Fm (T1/2 = 70 ns, E∗ = 1425.2 keV, Iπ = 7−), which was populated by β− decay of 256 Es,39 while a K isomer in 270 Ds was identified by its α decay.20 Improved detector set-ups allowed in recent years detailed γ-decay studies of K isomers in fermium (250 Fm40) and nobelium (251−254 No18,21,22,41–43) isotopes, and in 255 Lr,44–46 around the neutron subshell at N=152. Investigation of the decay of Iπ = 8− K isomers in 246 Cm, 250m Fm and 252m No allowed a systematic behavior of K isomers in even-even N=150 isotones to be established. As an illustrative example, the γ spectrum and the decay pattern of 252m No are shown in figs. 4a and 4b, respectively. Similar decay patterns were obtained for 246m Cm (see47) and 250m Fm.40 The isomeric Iπ = 8− state in these isotopes is interpreted as a two-quasineutron state. Empirically this behavior can be explained as illustrated in fig. 5a for the case of 252m No. A two-quasineutron configuration with Iπ = 8− can be produced by breaking the neutron pair in the 7/2+ state and exciting one neutron into the 9/2− state, while a two-quasiproton state with Iπ = 8− requires excitation of both protons in the 1/2− state: one into the 7/2− level and one into the 9/2+ level, respectively. The expectation that the first scenario may lead to a lower excitation energy is fully supported by the results from Hartree-Fock-Bogoliubov calculations48 as shown in fig. 5b. For all three cases the 8− (ν9/2−[734]↑ ⊗ ν7/2+[624]↓) configuration represents the lowest-lying two-quasiparticle state. The experimental excitation energies are
reproduced within 200 keV.

4. Technical and Experimental Developments

4.1. New accelerator concept

To proceed towards a successful production of new elements having formation cross-sections <<1 pb, and to investigate their nuclear structure and their chemical properties, more efficient separators and detection systems as well as considerably higher beam intensities are necessary. Concerning the latter, several upgrades are ongoing or planned at GSI. One project is the installation of a new high-charge-state injector for the existing UNILAC accelerator, consisting essentially of an advanced ECR ion source and an improved RFQ accelerator allowing for a duty cycle of 100%. The new RFQ has been installed in recent months. In addition to the existing 14-GHz GSI-CAPRICE II ECR source, a superconducting 28-GHz ECRIS source will be installed in the foreseeable future. A concept for a superconductive cw-linac49 with an output energy range of 3.5 to 7.5 AMeV has recently been proposed (participating centres are: GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt; Helmholtzinstitut Mainz; Stern Gerlach Zentrum SGZ (Institut für Angewandte Physik, Goethe Universität, Frankfurt)) and obtained excellent feedback from the HGF (Helmholtz Association) advisory board; financing, however, is not yet fixed. From the new accelerator facility a continuous beam with an intensity enhanced by a factor of about ten for medium-heavy projectiles in the range calcium to zinc, compared to the pulsed beam presently available at the UNILAC, can be expected.

4.2. TASISpec

The extension of nuclear structure investigations to heavier nuclei with lower production rates requires more efficient detection systems. While efficiencies for measuring α particles of ≈80% are already obtained using combined ’stop’ and ’box’ silicon detector systems,16 a significant increase of the efficiency for γ rays is necessary to enable more sensitive measurements. So far, for nuclear structure investigations at SHIP, γ rays were measured using a single four-fold clover detector placed in close geometry behind the silicon ’stop’ detector (see e.g.43). With such a setup γ-ray efficiencies of 12-15 %, depending on the size of the clover detector, can be obtained. To obtain a significant improvement in this direction, the highly efficient multi-coincidence spectrometer TASISpec (TASCA in Small Image mode Spectroscopy) has been developed.50 For α-particle measurements a ’stop’ and ’box’ detector combination of one DSSSD (used as the ’stop’ detector) and four SSSSDs (forming the ’box’ detectors) is used. The DSSSD is a 32 x 32 silicon strip detector with an active area of 58 x 58 mm2, while the SSSSDs are 32-strip silicon detectors with active areas of 60 x 60 mm2 each. A geometrical efficiency of 81% is estimated for α particles. γ rays are measured by a combination of four clover detectors, placed at the sides of, above and below the ’box’ detectors, and a cluster
Fig. 6. View of the TASISpec set-up with two (1,2) of the four clover detectors and the cluster detector (3); the chamber containing the stop and the box detectors is denoted by (4).
detector, consisting of seven Ge crystals, placed behind the ’stop’ detector. A photograph showing the chamber containing the ’stop’ and ’box’ detectors (4), two of the four clover detectors (1,2) and the cluster detector (3) is shown in fig. 6. A first experiment with the full set-up, devoted to decay spectroscopy of 247 Md, was performed at TASCA in April 2010. An absolute γ efficiency of ≈35% was obtained for the 210-keV γ line,51 which is a factor of 2.5 higher than the 14% obtained in a previous SHIP experiment using a single, big clover detector.17,52 During the commissioning phase TASISpec was also installed behind SHIPTRAP and its applicability for trap-assisted spectroscopy was successfully tested.53

4.3. New vacuum separator (‘SuperSHIP’)

The velocity filter SHIP13 has been the workhorse of SHE research at GSI for more than thirty years. The original lay-out was influenced by the prospect at that time that SHE could be produced in ’symmetric’ reactions, i.e. in complete fusion reactions using target-projectile combinations like 136 Xe + 170 Er, which had been tested in the early years of experiments at SHIP.54 Since in such reactions the fusion products are forward directed, the entrance aperture of SHIP had been kept small (2.7 msr) to suppress products from nuclear transfer reactions. The experimental work, however, showed that ’symmetric’ reactions are not suited for the synthesis of SHE due to strong fusion hindrance at energies around the Coulomb barrier, while the successful way was the use of ’cold’ and actinide based reactions as discussed in sect. 2.
Fig. 7. Sketch of a possible new vacuum separator (’SuperSHIP’) replacing the existing velocity filter SHIP at GSI in the long term.
Due to their lower linear momentum, fusion products from these reactions suffer a larger angular distribution from particle evaporation during the deexcitation process and from scattering in the target foil. Thus the small entrance aperture turned out to be a bottleneck limiting the transmission. Indeed, an improvement of about 50 % could be obtained by moving the target position closer to the first quadrupole triplet,55 but the price was a lower separation quality, which could be largely compensated by adding a 7.5° deflection dipole magnet.16 More than 95% of the background particles, for reactions leading to SHE typically (50-200)/s at beam currents of ≈1 pµA (6.2×10^12 projectiles/s), consist of scattered projectiles, target nuclei and products from few-nucleon transfer reactions having the same velocity as the fusion products. Beam currents of a factor of 5-10 higher intensity will also increase this background. In principle, background suppression can be improved by operating SHIP at a higher dispersion, but due to its construction this leads to a reduction of the transmission.55 Therefore, in the long term a new separator is desired. A new vacuum separator should be a complementary instrument to the gas-filled separator TASCA.15 A rough scheme of such a separator is shown in fig. 7. Its central part could be an improved velocity filter with a larger aperture and possibly a larger velocity and charge acceptance. Since decay chains starting in superheavy elements produced in actinide based reactions9,10 do not end in known isotopes, a new separator should also offer the possibility to determine atomic mass numbers by adding a mass analyzer; a resolution of ∆(M/q)/(M/q) ≈ 400 seems sufficient. It has been recognized during the years of SHIP operation that the intensity of the above-mentioned background is strongly dependent on the beam quality. Especially halos in the spatial intensity distribution are disturbing, since scattering of projectiles at the target frames is a major source of unwanted particles passing SHIP. Another source is particles that pass the accelerator out of phase, forming a low-energy tail in the projectile beam. Therefore an appropriate ion optical system
enabling a homogeneous intensity distribution with a sharp boundary is required, preferably in combination with a small velocity filter to obtain a clean beam energy. These constituents are denoted as ‘preseparator’ in Fig. 7.

5. Outlook

During the past decade ‘hot’ fusion reactions turned out to be the (most) successful way to synthesize elements beyond copernicium (Z = 112). Discovery of elements up to Z = 118 has been claimed so far using 48 Ca as projectiles. Synthesis of elements Z > 118 requires heavier projectiles. Theoretical studies as well as first experiments indicate that cross-sections drop by more than an order of magnitude compared to 48 Ca. Using highly efficient separators such as TASCA, or possibly a new vacuum separator (‘SuperSHIP’), and beams of high intensity from an upgraded accelerator facility (28-GHz ECR source, superconductive cw-linac), cross-section limits of some femtobarns will be within reach within reasonable irradiation times at GSI. So the synthesis of elements at least up to Z = 122 should be feasible within the next decade. The rapid progress in nuclear structure investigations of transfermium nuclei will profit from both enhanced beam intensities and highly efficient γ-detection set-ups. Detailed nuclear structure investigations will be extended to nuclides with production cross-sections an order of magnitude lower than feasible today. Another feature will be the identification of the atomic number of isotopes produced in ‘hot’ fusion reactions by measuring K X-rays following internal conversion, at least for one member within the α-decay chains.

It is a pleasure for me to acknowledge collaboration and fruitful discussions with D. Ackermann, S. Antalic, M. Block, C.E. Düllmann, S. Heinz, S. Hofmann, B. Kindler, J. Khuyagbaatar, I. Kojouharov, B. Lommel, R. Mann, A.G. Popeko, D. Rudolph, S. Saro, M. Schädel, A.V. Yeremin.

References
1. M.Göppert-Mayer, Phys. Rev. 74, 235 (1948).
2. O.Haxel et al., Phys. Rev. 75, 1769 (1949).
3. H.Meldner, Arkiv för Fysik 36, 593 (1967).
4. S.G.Nilsson et al., Nucl. Phys. A 115, 545 (1969).
5. C.F. von Weizsäcker, Z. Phys. 96, 461 (1935).
6. V.M.Strutinsky, Nucl. Phys. A 95, 442 (1967).
7. N.Bohr, J.A.Wheeler, Phys. Rev. 56, 426 (1939).
8. R.Smolanczuk, A.Sobiczewski, in Proc. EPS Conf. ’Low Energy Nuclear Dynamics’, St. Petersburg 1995, ed. Yu.Ts. Oganessian et al. (World Scientific, Singapore, New Jersey, London, Hong Kong, 1995) 313 (1995).
9. Yu.Oganessian, J. Phys. G: Nucl. Part. Phys. 34, R165 (2007).
10. Yu.Oganessian et al., Phys. Rev. Lett. 104, 142502 (2010).
11. K.Rutz et al., in Proc. of the Second Int. Conf. ’Fission and Properties of Neutron-Rich Nuclei’, St. Andrews 1999, ed. J.H.Hamilton et al. (World Scientific, Singapore, New Jersey, London, Hong Kong, 2000) 449 (2000).
12. M.Bender et al., Rev. Mod. Phys. 75, 121 (2003).
13. G.Münzenberg et al., Nucl. Instr. Meth. 161, 65 (1979).
14. M.Schädel et al., Eur. Phys. J. D 45, 67 (2007).
15. A.Semchenkov et al., Nucl. Instr. and Meth. in Phys. Res. B 266, 4153 (2008).
16. S.Hofmann, G.Münzenberg, Rev. Mod. Phys. 72, 733 (2000).
17. F.P.Heßberger et al., Eur. Phys. J. A 26, 233 (2005).
18. F.P.Heßberger et al., Eur. Phys. J. A 30, 561 (2006).
19. F.P.Heßberger, Eur. Phys. J. D 45, 33 (2007).
20. S.Hofmann et al., Eur. Phys. J. A 10, 5 (2001).
21. B.Sulignano et al., Eur. Phys. J. A 33, 327 (2007).
22. F.P.Heßberger et al., Phys. of At. Nuclei 70, 1445 (2007).
23. C.E.Düllmann et al., Nature 418, 859 (2002).
24. J.Dvorak et al., Phys. Rev. Lett. 97, 242501 (2006).
25. V.Zagrebaev, W.Greiner, Phys. Rev. C 78, 034610 (2008).
26. W.Reisdorf, Phys. Rev. C 69, 014307 (2004).
27. P.Armbruster et al., Phys. Rev. Lett. 54, 406 (1985).
28. P.Möller et al., At. Data and Nucl. Data Tab. 59, 185 (1995).
29. T.Bürvenich et al., Phys. Rev. C 69, 014307 (2004).
30. S.Hofmann et al., GSI Scientific Report 2008, GSI Report 2009-1, 131 (2009).
31. A.V.Nasirov et al., Phys. Rev. C 79, 024606 (2009).
32. C.E.Düllmann et al., Phys. Rev. Lett. (in press, 2010).
33. Z.H.Liu, Jing-Dong Bao, Phys. Rev. C 80, 054608 (2009).
34. P.Eskola et al., Phys. Rev. C 2, 1058 (1970).
35. R.-D.Herzberg, P.T.Greenlees, Progr. in Part. and Nucl. Phys. 61, 674 (2008).
36. R.B.Firestone et al. (eds.), Table of Isotopes (John Wiley & Sons, Inc., New York, Chichester, Brisbane, Toronto, Singapore, 1996).
37. P.Walker, G.Dracoulis, Nature 399, 35 (1999).
38. A.Ghiorso et al., Phys. Rev. C 7, 2032 (1973).
39. H.L.Hall et al., Phys. Rev. C 39, 1866 (1989).
40. P.Greenlees et al., Phys. Rev. C 78, 021303(R) (2008).
41. R.-D.Herzberg et al., Nature 442, 896 (2006).
42. S.K.Tandel et al., Phys. Rev. Lett. 97, 082502 (2006).
43. F.P.Heßberger et al., Eur. Phys. J. A 43, 55 (2010).
44. K.Hauschild et al., Phys. Rev. C 78, 021302(R) (2008).
45. S.Antalic et al., Eur. Phys. J. A 38, 219 (2008).
46. H.B.Jeppesen et al., Phys. Rev. C 80, 034424 (2009).
47. A.P.Robinson et al., Phys. Rev. C 78, 034308 (2009).
48. J.P.Delaroche et al., Nucl. Phys. A 771, 103 (2006).
49. S.Minaev et al., to be published.
50. L.-L.Andersson et al., submitted to Nucl. Instr. and Meth. (2010).
51. D.Rudolph, private communication, April 2010.
52. F.P.Heßberger et al., Eur. Phys. J. A 22, 417 (2004).
53. D.Rudolph et al., GSI Scientific Report 2010, in preparation.
54. P.Armbruster et al., GSI Jahresbericht 1977, GSI-J-1-78, 75 (1977).
55. F.P.Heßberger et al., Lecture Notes in Physics, ed. C.Signorini et al., 317, 289 (1988).
SYNTHESIS AND STUDY OF SUPERHEAVY ELEMENTS

A. G. POPEKO∗

Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research, Joliot-Curie str. 6, Dubna, 141980, Russia
∗E-mail: [email protected]
http://flerovlab.jinr.ru/flnr/index.html

Results of experiments on the synthesis of superheavy nuclei in 48 Ca-induced reactions are presented. The experiments were carried out at the Flerov Laboratory of Nuclear Reactions (FLNR) Dubna heavy ion cyclotron U400 in the framework of a large collaboration: FLNR (JINR, Dubna, Russia), IAR (Dimitrovgrad, Russia), LLNL (Livermore, USA), ORNL (Oak Ridge, USA). Enriched isotopes of U – Cf were used as targets. In the reactions studied in 2000 – 2010, decays of the heaviest isotopes of Rf – Cn and isotopes of six new elements 113 – 118 were observed.

Keywords: Superheavy elements, fusion–fission reactions, A > 220, transactinide elements.
1. Introduction

First qualitative predictions of the position of an “island of stability” of superheavy elements (SHE) were made after analyzing the level diagrams of heavy nuclei.1,2 Theoretical predictions of the next closed-shell numbers vary strongly depending on the model. Following the well-known proton and neutron shells with Z = 82 and N = 126 (208 Pb), the shell correction amplitude has a maximum for the superheavy nucleus 298 114 at N = 184 (spherical shells) in macro-microscopic models. An interesting consequence of these calculations was the appearance of a remarkable gap in the level density of deformed nuclei around N = 162 (deformed shell). In calculations performed using the Hartree–Fock–Bogoliubov (HFB) model or self-consistent relativistic mean-field models, the closed spherical proton shells are predicted at Z = 120 or 126.3 The prospect of discovering superheavy elements was very attractive, and at the beginning of the 1970s their synthesis seemed to be reachable in the near future without serious problems. A large number of projectile-target combinations have been studied in attempts to produce new elements around the predicted nuclear shell closures at atomic number 114 and neutron number 184. Multifarious sophisticated physical and chemical methods were employed for the
isolation and detection of superheavy elements.4-7 Among the reactions studied in the 1970s and 1980s were the fusion reactions 232Th + 86Kr, 248Cm + 40Ar and 248Cm + 48Ca, and the deep inelastic transfer reactions 76Ge + 238U, 136Xe + 238U and 238U + 248Cm. No evidence for the formation of superheavy nuclei was obtained. Figure 1 shows the upper limits on superheavy element production cross sections reached with on-line recoil separators and with fast on-line and off-line chemical techniques.
Fig. 1. Reached upper limits of superheavy element production cross sections (envelope from Ref. 7) for recoil fragment separators, fast on-line chemical separation and off-line chemistry; the "region of interest" for Act. + 48Ca reactions is indicated.
The most exotic combination studied in this "romantic" time was 254Es + 48Ca.8 The target consisted of 1.14·10^16 atoms/cm2 (about 5 µg/cm2). Both on-line and off-line chemical methods were used for the separation of eka-Hg, eka-Pb and eka-Rn from the reaction products. The lowest cross-section limit for SHE production in these experiments was 2.5·10^-31 cm2 (250 nb) for half-lives of a few days. The authors discussed the possible production of a target containing 40 µg (!) of 254Es.
The first transfermium elements, Md - Sg, were produced in fusion reactions of U, Pu and later of heavier targets up to Cf with ions from B to Ne. Reactions of this type have been called "hot" fusion reactions. The elements Bh - Cn9 were synthesized in the so-called "cold" fusion reactions, i.e. fusion of 40Ar - 70Zn projectiles with the magic nuclei 208Pb and 209Bi. Compound nuclei produced in this type of reaction have a minimum excitation energy compared with other target-projectile combinations,10 giving a substantial rise in the survival probability. But, as was found out later,9 the overall production cross sections of evaporation residues decrease drastically, by approximately a factor of 3 per unit increase in the charge Z of the compound nucleus. Figure 2 shows the production cross sections of transfermium nuclei in "hot" and
"cold" fusion reactions.

Fig. 2. Production cross sections for "hot" and "cold" fusion reactions.
During 2003 - 2008, the RIKEN group (Japan) performed an experiment to synthesize element 113 in the 209Bi + 70Zn reaction using the gas-filled separator GARIS.11 The beam-on-target time amounted to 7477 h and the total accumulated dose of 70Zn projectiles was 8.48·10^19. Two observed α-decay chains were assigned to the subsequent decays of 278 113. The production cross section corresponding to these two events was deduced to be 23 (+30/-15) fb (!). It seems that the natural limit for the production of superheavy elements in "cold" fusion reactions was reached in this experiment.
The progress in accelerator technology, especially in ion sources, together with new data on reaction mechanisms and on the properties of transactinide nuclei obtained during the past 20 years, allowed one to return, by the end of the 1990s, to the use of the doubly magic nucleus 48Ca for the synthesis of superheavy elements.

2. Experimental Approach and Set-Ups

The expected half-lives of the heaviest nuclei produced in fusion reactions of 48Ca with neutron-rich actinides can vary over a wide range: from a few µs up to tens of hours. The expected cross sections are calculated to be of the order of picobarns (10^-36 cm2) (see the "region of interest" in Fig. 1). Because of the extremely low expected production cross sections of superheavy elements, the cornerstone of the experiments was the production of a stable and intense ion beam of the rare and expensive (≈ 100 US$/mg) isotope 48Ca at a
minimal material consumption.12 Neutral atoms of metallic 48Ca, enriched to 70%, were injected as vapor into the plasma of the external 14 GHz ECR ion source, and the low-energy 48Ca5+ beam was axially injected into the center of the U400 accelerator chamber. In all experiments the accelerator was operated in continuous (DC) beam mode. The long-term average intensity of the ion beam on the target was about 0.6 pµA.
The evaporation residues (ERs) were separated in flight from beam particles and other reaction products by the Dubna Gas-Filled Recoil Separator (DGFRS)13 (Fig. 3). The separation efficiency for ERs from Act. + 48Ca reactions was estimated in preparatory experiments: about 35 - 45% of the ERs produced with the 48Ca projectiles could reach the separator's focal-plane detector. In the focal plane the background from the primary beam was reduced by a factor of > 10^17 and target-like products of incomplete fusion reactions were suppressed by a factor of > 10^5. Owing to these high suppression factors, ERs can be implanted directly into the focal-plane detectors.
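To illustrate why picobarn-level cross sections translate into only a handful of implanted atoms per month, a minimal rate estimate can be written down. The sketch below is not from the original paper; the 1 pb cross section and the approximation of the target as pure actinide (ignoring the oxide and the Ti backing) are illustrative assumptions, while the beam intensity, target thickness and separator transmission are the values quoted in the text above.

# Rough expected-event-rate estimate for a DGFRS-type experiment (illustrative sketch).
AVOGADRO = 6.022e23          # 1/mol

def events_per_day(sigma_pb, beam_pmua, thickness_mg_cm2, mass_number, transmission):
    """sigma_pb: production cross section in picobarns (1 pb = 1e-36 cm^2);
    beam_pmua: beam intensity in particle-microamperes (1 pmuA ~ 6.24e12 ions/s);
    thickness_mg_cm2: target thickness in mg/cm^2 (treated here as pure actinide);
    mass_number: target mass number in g/mol; transmission: separator efficiency."""
    sigma_cm2 = sigma_pb * 1e-36
    ions_per_s = beam_pmua * 6.24e12
    atoms_per_cm2 = thickness_mg_cm2 * 1e-3 / mass_number * AVOGADRO
    rate_per_s = sigma_cm2 * ions_per_s * atoms_per_cm2 * transmission
    return rate_per_s * 86400.0

# Numbers quoted in the text: 0.6 pmuA of 48Ca, ~0.35 mg/cm^2 of 244Pu, ~40% transmission.
print(events_per_day(sigma_pb=1.0, beam_pmua=0.6, thickness_mg_cm2=0.35,
                     mass_number=244, transmission=0.4))   # ~0.1 events per day

At this level, roughly one implanted evaporation residue per week to ten days would be expected for a 1 pb cross section, which is why beam doses of the order of 10^19 ions (months of irradiation) had to be accumulated.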
Fig. 3. Layout of the gas-filled recoil separator: beam, target wheel, 22.5° magnetic dipole, quadrupole lenses, time-of-flight detector and focal-plane detector (distance scale in metres, 0 - 4 m).
In the focal plane of the separator, a system of time-of-flight (TOF) detectors (multi-wire proportional chambers) and silicon position-sensitive multi-strip detector arrays was installed for the registration of ERs and their decays. The signals from the TOF detectors were used both for measuring the recoil velocity and for distinguishing decays of previously implanted nuclei (anticoincidence) from signals of arriving ions (coincidence). The time window for measuring decay chains could be widened up to several hours. A special "chopper" switched off the beam after a hardware request with selected parameters coming from the detection system, allowing practically background-free detection of correlated decays with long half-lives in the absence of the beam. The energies deposited in the stop or backward detectors, the times and positions of the signals, and auxiliary data were stored in list mode. The analysis of the events collected in an experiment was performed to find genetically linked decays of the implants within time intervals chosen according to the expected half-lives.
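The correlation search just described can be illustrated with a small sketch. It is not the collaboration's actual analysis code; the event format, the 1 mm position window and the example time window are assumptions made here purely for illustration.

# Minimal sketch of a position-time correlation search for implant -> alpha chains.
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # time (s)
    strip: int      # detector strip number
    pos: float      # position along the strip (mm)
    energy: float   # deposited energy (MeV)
    kind: str       # "ER" (recoil with TOF signal) or "alpha" (no TOF signal)

def find_chains(events, t_window, pos_window=1.0):
    """Return candidate chains [ER, alpha, alpha, ...] in which each decay occurs in
    the same strip, within pos_window mm of the implant, and within t_window s of
    the previous chain member."""
    alphas = sorted((e for e in events if e.kind == "alpha"), key=lambda e: e.t)
    chains = []
    for er in (e for e in events if e.kind == "ER"):
        chain, last = [er], er
        for cand in alphas:
            if (cand.t > last.t and cand.t - last.t < t_window
                    and cand.strip == er.strip
                    and abs(cand.pos - er.pos) < pos_window):
                chain.append(cand)
                last = cand
        if len(chain) > 1:
            chains.append(chain)
    return chains

# Example: a fake implant followed 0.2 s later by a 9.9 MeV alpha at the same position.
evts = [Event(10.0, 3, 12.4, 11.5, "ER"), Event(10.2, 3, 12.6, 9.9, "alpha")]
print(find_chains(evts, t_window=5.0))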
3. Experimental Results

The targets consisted of enriched isotopes of 233,238U, 237Np, 242,244Pu, 243Am, 245,248Cm, 249Bk and 249Cf in the form of oxides deposited with a thickness of ≈ 0.35 mg/cm2 on 1.5-µm Ti foils. The elementary targets, shaped as arc segments, were mounted on a disk that was rotated at ≈ 2000 rpm. The experimental conditions (excitation energy ranges covered by the targets, beam doses and observed evaporation residues), together with the target material producers, are listed in Table 1.
Table 1. Targets and experimental conditions in irradiations Act. + 48Ca.

Target   Producer     Excitation energy Ex (MeV)   Beam dose (x10^19)   Evaporation residues
233U     RIAR         32.7 - 37.1                  0.70                 no events
238U     -            29.3 - 41.9                  1.81                 282,283Cn
237Np    RIAR         36.9 - 41.2                  1.10                 282 113
242Pu    RIAR         30.4 - 47.2                  1.78                 286,287,288 114
244Pu    ORNL         29.8 - 54.7                  2.96                 287,288,289 114
243Am    RIAR/ORNL    38.0 - 46.5                  0.86                 287,288 115
245Cm    RIAR         30.9 - 44.8                  2.57                 290,291 116
248Cm    RIAR/ORNL    31.2 - 41.1                  3.00                 292,293 116
249Bk    ORNL         32.9 - 41.1                  4.40                 293,294 117
249Cf    RIAR/ORNL    26.6 - 36.1                  4.10                 294 118
In the reactions studied in 2000 - 2009,14 decays of the heaviest isotopes of Rf, Db, Bh, Hs, Mt, Ds, Rg and Cn and of 14 isotopes of the new elements with Z = 113 - 116 and 118 were observed among the products of complete fusion reactions involving 48Ca projectiles and actinide targets from U to Cm and Cf. The decay properties of the 45 new nuclides produced in experiments with 48Ca projectiles and actinide targets (including 249Bk + 48Ca15) are listed in Table 2.
Table 2. Decay properties of the heaviest isotopes produced in Act. + 48Ca reactions. Entries marked [15] are from Ref. 15.

Isotope        Decay mode   Eα (MeV)        T1/2       N of observed events
267Rf          SF           -               1.3 h      2
266Db          SF/EC        -               22 min     1
267Db          SF           -               1.2 h      1
268Db          SF/EC        -               29 h       18
270Db [15]     SF           -               23.1 h     1
271Sg          α/SF         8.54 ± 0.08     1.9 min    3
270Bh          α            8.93 ± 0.08     1.0 min    1
272Bh          α            9.02 ± 0.06     9.8 s      3
274Bh [15]     α            8.80 ± 0.10     0.9 min    1
275Hs          α            9.30 ± 0.06     0.19 s     3
274Mt          α            9.76 ± 0.10     0.45 s     2
275Mt          α            10.33 ± 0.09    9.7 ms     1
276Mt          α            9.71 ± 0.06     0.7 s      3
278Mt [15]     α            9.55 ± 0.19     7.6 s      1
279Ds          α/SF         9.7 ± 0.06      0.20 s     26
281Ds          SF           -               11.1 s     10
278Rg          α            10.69 ± 0.08    4.2 ms     2
279Rg          α            10.37 ± 0.16    0.17 s     1
280Rg          α            9.75 ± 0.06     3.6 s      3
281Rg [15]     SF           -               26.3 s     5
282Rg [15]     α            9.00 ± 0.10     0.5 s      1
282Cn          SF           -               0.8 ms     12
283Cn          α/SF         9.54 ± 0.06     3.8 s      22
284Cn          SF           -               0.1 s      19
285Cn          α            9.15 ± 0.05     29.0 s     10
282 113        α            10.63 ± 0.06    73.0 ms    2
283 113        α            10.12 ± 0.09    0.1 s      1
284 113        α            10.00 ± 0.06    0.5 s      3
285 113 [15]   α            9.74 ± 0.08     5.5 s      5
286 113 [15]   α            9.63 ± 0.10     20 s       1
286 114        α/SF         10.19 ± 0.06    0.13 s     24
287 114        α            10.02 ± 0.06    0.48 s     16
288 114        α            9.94 ± 0.06     0.80 s     18
289 114        α            9.82 ± 0.05     2.6 s      10
287 115        α            10.59 ± 0.09    32 ms      1
288 115        α            10.46 ± 0.06    87 ms      3
289 115 [15]   α            10.31 ± 0.09    0.22 s     5
290 115 [15]   α            9.95 ± 0.40     16 ms      1
290 116        α            10.84 ± 0.08    7.1 ms     10
291 116        α            10.74 ± 0.07    18 ms      3
292 116        α            10.66 ± 0.07    18 ms      5
293 116        α            10.54 ± 0.06    61 ms      4
293 117 [15]   α            11.03 ± 0.08    14.5 ms    5
294 117 [15]   α            10.81 ± 0.10    77.5 ms    1
294 118        α            11.65 ± 0.06    0.89 ms    3
4. Synthesis of a New Chemical Element with Z = 117

The synthesis of an element with Z = 117 was needed not so much to fill the gap between Z = 116 and 118 as, to a great extent, to produce data on the properties of about 15 new superheavy isotopes, which are expected to be observed in its decay chains. The most promising reaction for the synthesis of this missing element is
249Bk + 48Ca. Unfortunately, 249Bk cannot be produced and accumulated long in advance because of its short lifetime (T1/2 = 320 d), and after production it must be "used" immediately. The experiments aimed at the synthesis of the new element 117 in the complete-fusion reaction 249Bk + 48Ca were performed in July - December 2009, employing the gas-filled recoil separator of the FLNR JINR in collaboration with the laboratories of Oak Ridge (ORNL), Livermore (LLNL) and Vanderbilt University (USA).15
The target material was produced at ORNL through neutron irradiation of Am and Cm targets for 250 d in the High Flux Isotope Reactor. Six targets, each with an area of 6 cm2, were manufactured at RIAR Dimitrovgrad (Russia) by depositing BkO2 to a thickness of ≈ 0.3 mg/cm2 onto 1.5 µm Ti foils.
In the first experimental run the energy of the accelerated 48Ca ions delivered by the U400 cyclotron corresponded to an excitation energy of the compound nucleus 297 117 of about 39 MeV, at which the maximum production rate of the isotope 293 117, the product of the evaporation of 4 neutrons from the compound nucleus, was expected. The total accumulated beam dose of 48Ca was 2.4·10^19. Altogether 5 decay chains, each consisting of 3 α decays and terminated by spontaneous fission, were observed at this excitation energy (Fig. 4, average of the five events, right part).
Fig. 4. Observed decays of the isotopes 293,294 117, their lifetimes and α-particle energies; the numbers denote the deduced lifetimes (τ = T1/2 / ln 2) and α-particle energies.15 Left part: the single chain observed at Ex = 35 MeV (294 117); right part: the average of the five chains observed at Ex = 39 MeV (293 117).
At the end of October 2009 another experiment was started at a lower 48Ca energy, corresponding to an excitation energy of 297 117 of about 35 MeV, at which the
maximum production rate of the isotope 294 117, the product of the evaporation of 3 neutrons from the compound nucleus, was expected. The collected beam dose amounted to 2.0·10^19. One decay chain, consisting of 6 α decays and terminated by spontaneous fission, was observed at this excitation energy (Fig. 4, left part). The studied decays of 11 new isotopes significantly expand our knowledge of the properties of the most neutron-rich isotopes of the odd-Z elements 105 - 117.

5. Studying Chemical Properties of Superheavy Elements

The investigation of the chemical properties of the latest discovered elements is of separate interest in connection with the study of the chemical behavior of heavy and superheavy elements. Some of them have half-lives ranging from several seconds to ≈ 1 d, times reachable by radiochemical methods. As can be seen from Table 2, the isotopes most suitable for direct chemical studies are the Cn isotopes 283Cn (T1/2 ≈ 4 s) and 285Cn (T1/2 ≈ 30 s). According to its ground-state atomic configuration, Cn should belong to group 12 of the Periodic Table of the Elements as a heavier homologue of Hg, Cd and Zn. To what extent Cn is a homologue of Hg depends on the so-called "relativistic effects" in the electronic structure of a superheavy atom.
As shown in Ref. 16, Hg atoms can be transported with a neutral carrier gas (e.g. He, Ne or Ar) over a distance of more than 30 m, with a velocity of up to 5 m/s. Therefore, an investigation of Cn adsorption on metal surfaces was undertaken. To produce the isotope 283Cn, the reaction 242Pu(48Ca,3n)287 114, followed by the α decay 287 114 → 283Cn, was used. The chemical setup applied in these experiments was based on the thermochromatographic in situ volatilization and on-line detection technique (IVO) in combination with the cryo-on-line detector COLD.17 To the 242Pu target (99.93%) about 15 µg/cm2 of natNd was added to produce simultaneously the α-radioactive isotope 185Hg, which has a half-life of 49 s and served to monitor the production and separation processes.
The recoil nuclei leaving the target were stopped in a high-purity gaseous medium, He(70%) + Ar(30%). A self-drying closed gas-loop system was developed to keep the amount of trace gases such as oxygen and water in this carrier-gas mixture as low as possible. The stopped nuclei were transported to the detectors through an 8 m capillary tube (Fig. 5). The total transport time from the reaction chamber to the α detectors was 3.6 s, long enough for the decay 287 114 → 283Cn.
The thermochromatographic COLD detector array consisted of 32 pairs of ion-implanted planar silicon detectors facing each other, forming a narrow chromatographic channel. One side of the channel was covered with a 50 nm thick gold layer deposited directly on the silicon detector surface. A temperature gradient was established along the detector array using a thermostat at the entrance and a liquid-nitrogen cryostat at the exit, spanning a range from room temperature to -184 °C.
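As a quick consistency check on the statement above that a 3.6 s transport time is long enough for the 287 114 → 283Cn step, the surviving and decayed fractions follow directly from the 0.48 s half-life listed in Table 2. The few lines below are an illustrative calculation only, not part of the original analysis.

# Fraction of 287-114 nuclei that alpha-decay during the 3.6 s gas transport,
# assuming pure exponential decay with the half-life from Table 2.
import math

T_HALF = 0.48      # s, half-life of 287-114 (Table 2)
T_TRANSPORT = 3.6  # s, transport time from the recoil chamber to the COLD array

decayed = 1.0 - math.exp(-math.log(2.0) * T_TRANSPORT / T_HALF)
print(f"decayed fraction during transport: {decayed:.4f}")   # ~0.994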
Fig. 5. Schematic experimental set-up used to investigate the adsorption properties of Cn on a gold surface: beam, target and recoil chamber with window, beam dump, oven (650 °C) with quartz wool, Teflon capillary, He/Ar supply with drying system, membrane pump, and the 2 × 32 PIPS diodes of the detector array with liquid-nitrogen cooling.
In control experiments, α particles from decays of isotopes of the highly volatile elements 181-188Hg (from the natNd + 48Ca reaction) and of 219,220Rn (descendants of transfer products) were detected. Hg atoms are registered by the first detectors, and decays of the chemically inert Rn atoms by the last ones, which are at the lowest temperature. During the runs,17,18 the experimental conditions (gas flow rates and temperature gradients) were varied. One of the thermochromatographic deposition patterns obtained for 185Hg, 219Rn and 283Cn is depicted in Fig. 6; it represents a characteristic example of the gas-chromatographic behavior of single atoms.
Fig. 6. Thermochromatographic deposition patterns of 185Hg, 219Rn and 283Cn: relative yield per detector (%) versus detector number, together with the temperature gradient along the array (°C).
The statistical analysis of the deposition behavior of 283Cn (Fig. 6) yielded a standard adsorption enthalpy of element 112 on gold surfaces of ΔH_abs^Au(Cn) = 52 (+4/-3) kJ·mol^-1.17 The observed enhanced adsorption enthalpy indicates a metallic-bond character involved in the adsorption interaction between Cn and Au.
Applying the estimated value of the sublimation entropy, ΔS_subl = (106.5 ± 2.0) J·mol^-1·K^-1, Cn can be presumed to have a boiling point of (357 ± 110) K. These values indicate that element 112 is considerably more volatile than its lighter homologues Zn, Cd and Hg.
In the experiment of Ref. 18, aimed at the investigation of the chemical properties of Cn, one decay chain was rather unexpectedly observed which was unambiguously attributed to the decay of 287 114. Even more surprising was the observation of this decay chain on detector 19, held at a temperature of -88 °C. Dedicated thermochromatography experiments were therefore devoted to element 114.19 From the observation of three atoms of element 114 adsorbed on the gold surfaces of the COLD detector, its most probable standard adsorption enthalpy on gold was determined as ΔH_abs^Au(E114) = 34 (+20/-3) kJ·mol^-1.
So, the experimental data point to an "Hg-like" behavior of Cn and a rather "noble-gas-like" behavior of element 114. This observation is the first indication of the influence of relativistic effects on the properties of superheavy atoms. This problem is fundamental for modern chemistry. Experiments are in progress.
6. Summary and Outlook

What can we learn from the analysis of the whole set of data? In Act. + 48Ca reactions, 20 isotopes of the six new elements 113 - 118 have been produced, and among their decay products the 25 heaviest isotopes of the known elements Rf - Cn have been identified. Figure 7 shows the "north-east" corner of the chart of the experimentally investigated nuclides.
In reactions with 48Ca at bombarding energies close to the Coulomb barrier, the maximum yield is observed for the 3n- and 4n-evaporation channels. Products of evaporation channels accompanied by the emission of charged particles (protons, α particles) have not been observed. For all events of sequential α decays, the energies and decay probabilities obey the basic Geiger-Nuttall rule (e.g. in the form of Ref. 21), which connects the α-decay energy Qα and the half-life Tα, and thus imply decays of nuclei with large atomic numbers Z = 110 - 118. According to existing systematics, spontaneous fission events with TKE ~ 200 MeV are related to the decay of comparatively long-lived nuclei with Z ≥ 104, which in turn are descendants of even heavier nuclei.
Comparing the properties of the "light" isotopes produced in "cold" fusion reactions9,11 with those of the "heavy" ones produced in Act. + 48Ca reactions (Table 2), one can see that the addition of several neutrons leads to a significant increase in lifetimes (Table 3). The data in Table 3 indicate the presence of a neutron shell at higher neutron numbers. Comparing the half-lives of the longest-lived even-even isotopes, 284Cn (T1/2 = 0.1 s), 288 114 (T1/2 = 0.8 s), 292 116 (T1/2 = 18 ms) and 294 118 (T1/2 = 0.9 ms), one can suppose that Z = 114 probably corresponds to a proton shell.
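For reference, the Geiger-Nuttall systematics invoked above is commonly applied in the Viola-Seaborg parametrization of Ref. 21. The form below is the standard one; a, b, c and d denote empirically fitted constants whose numerical values are not quoted in this text.

% Viola-Seaborg form of the Geiger-Nuttall rule (Ref. 21), relating the alpha-decay
% half-life T_alpha to the decay energy Q_alpha and the charge Z of the parent nucleus.
\log_{10} T_{\alpha}\,[\mathrm{s}] \;=\; \frac{aZ + b}{\sqrt{Q_{\alpha}}} \;+\; cZ + d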
Fig. 7. The "north-east" corner of the chart of nuclides (proton number versus neutron number), showing half-lives, decay modes and α-particle energies of the heaviest known isotopes from Sg - Ds up to elements 113 - 118.
Table 3. Decay properties of the heaviest isotopes produced in "cold" fusion (Pb, Bi targets) and in Act. + 48Ca reactions.

"Light" isotope   T1/2      Δn    "Heavy" isotope   T1/2      Increasing factor
269Ds             0.18 ms   14    281Ds             11.1 s    6·10^4
272Rg             3.8 ms    9     281Rg             26.3 s    7·10^3
277Cn             0.7 ms    8     285Cn             29.0 s    4·10^4
278 113           1.8 ms    7     285 113           20.0 s    1·10^4
A similar conclusion can be drawn from an analysis of the production cross sections of superheavy isotopes (see Fig. 8). Despite the interplay of odd-even effects, mass asymmetries in the entrance channels, and differences in the formation and survival probabilities of the compound nuclei, the clear maximum in the vicinity of Z = 114 is also an indication of a closed shell around this proton number. There is also an indication that this shell could be a spherical one. Within the limits of the detector energy resolution, all α decays of 288 114 can be attributed to decays from the ground state, which is populated after the decay of the evaporation residue or after the α decay of the parent nucleus 292 116. This can be compared with the decays of 277Cn, which populate different levels in 273 110, a nucleus deformed due to the neutron shell at N = 162.
Fig. 8. Measured maxima of the excitation functions of xn-evaporation channels of Act. + 48Ca reactions, compared with "hot" fusion reactions.
Similarly, the α transitions observed in the two decays of 278 113 (Ref. 11) differ by 0.15 MeV.
The relatively long half-lives of the isotopes with Z = 108 - 114 obtained in 48Ca-induced reactions open up new opportunities for the investigation of the influence of relativistic effects on the chemical properties of superheavy elements. Because of the relatively long lifetimes of the new isotopes, the experimental approach to their investigation can be changed: quasi-on-line mass separation or chemical separation can be employed. These methods have substantial advantages in the effective target thickness (a factor of ~ 15) and in the acceptable beam intensity.

7. Investigation of Reactions Promising for the Synthesis of SHE

The heaviest isotope that can realistically be used as a target for the synthesis of SHE is 249Cf. The advance to isotopes of elements heavier than Z = 118 requires the use of heavier projectiles, e.g. 50Ti, 54Cr, 58Fe and so forth. For example, the three reactions 238U + 64Ni, 244Pu + 58Fe and 248Cm + 54Cr can be used for the synthesis of isotopes of the element with Z = 120, all leading to the same compound nucleus, 302 120 (N = 182). The first two reactions were studied in preliminary experiments at the velocity filter SHIP (GSI) and at the DGFRS (FLNR). No events were detected. The sensitivity of both experiments corresponded approximately to 0.4 pb for the detection of a single decay. The more promising reaction, 248Cm + 54Cr, is next in the queue.
Since 249Cf is the heaviest target material realistically available for the synthesis of
superheavy isotopes, the limit on the neutron excess of the compound nucleus (NCN = 180), reached in the fusion of commercially available 248Cm with 48Ca, cannot be overcome. In principle, a step to more neutron-rich nuclei could be made using the isotopes 250Cm (T1/2 = 9700 a) and 251Cf (T1/2 = 898 a). However, the separation of these isotopes requires special electromagnetic separators, which are not yet available. Thus, obtaining nuclei close to the N = 184 neutron shell in Act. + HI reactions requires studies of the reaction mechanisms, so as to determine the optimal conditions and to provide realistic estimates of the probability of producing compound nuclei in such reactions. Another possibility is to look for reactions other than Act. + HI.
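The compound-nucleus bookkeeping quoted above (Z_CN = 120 and N_CN = 182 for all three proposed reactions, and N_CN = 180 for 248Cm + 48Ca) is simple arithmetic. The short sketch below just adds the proton and neutron numbers of projectile and target and is purely illustrative.

# Compound-nucleus proton and neutron numbers for projectile-target pairs
# mentioned in the text; N = A - Z, before any neutron evaporation.
NUCLIDES = {                 # symbol: (Z, A)
    "48Ca": (20, 48), "54Cr": (24, 54), "58Fe": (26, 58), "64Ni": (28, 64),
    "238U": (92, 238), "244Pu": (94, 244), "248Cm": (96, 248),
}

def compound(projectile, target):
    zp, ap = NUCLIDES[projectile]
    zt, at = NUCLIDES[target]
    z, a = zp + zt, ap + at
    return z, a - z           # (Z_CN, N_CN)

for pair in [("64Ni", "238U"), ("58Fe", "244Pu"), ("54Cr", "248Cm"), ("48Ca", "248Cm")]:
    print(pair, "-> Z_CN, N_CN =", compound(*pair))
# The first three give (120, 182), i.e. the compound nucleus 302-120; the last gives (116, 180).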
8. Reactions Other Than HI + Act

Another, very attractive possibility could be the use of neutron-rich radioactive magic nuclei as beams, e.g. the fission fragments 132Sn (T1/2 = 40 s), 133Sb (2.5 min) and 134Te (42 min), which determine the fission mode and the fragment mass distribution of weakly excited nuclei. Fusion with stable nuclei, for instance with 176Yb, could lead to compound nuclei with ZCN = 120 - 122 and NCN = 188. The use of complementary fission fragments like 93Kr (T1/2 = 1.3 s), which are produced in nuclear fission with comparable yields, could also be promising for producing even more neutron-rich nuclei. For instance, in the reaction 226Ra + 93Kr a compound nucleus with ZCN = 124 and NCN = 195 could be formed which, after the 3n evaporation channel, would decay down to the isotope 296 114 (N = 182).
The reactions 82Se + 208Pb and 86Kr + 208Pb were studied using the velocity filter SHIP at GSI with a sensitivity of 1 pb, but no events that could be attributed to the formation of isotopes with Z = 116 or 118 were observed. However, it seems that the sensitivity of these experiments was insufficient.
In principle, the use of symmetric reactions between deformed nuclei like 150Nd + 150Nd should also be considered. In this case the effect of the orientation of the interacting nuclei at the touching point can play an important role in enhancing the compound-nucleus formation cross section. However, from the experimental data it is known that the increased Coulomb repulsion in symmetric reactions enhances the quasi-fission channel and thus hinders the formation of a compound nucleus. Since calculations of these limitations are very uncertain, it seems reasonable to estimate them using test reactions with known nuclei. In this connection, first experiments aimed at the study of the symmetric reactions 136Xe(136Xe,xn)272-xHs and 136Xe(124Sn,xn)260-xRf were conducted radiochemically at the FLNR and with the LISE-III spectrometer at GANIL. In both cases only an upper limit of ≤ 10 pb was reached.
9. Search for Superheavy Elements in Nature

The possible existence in Nature of elements heavier than 238U depends on two conditions: a mechanism leading to the formation of superheavy elements must exist in the Universe, and at least one of the superheavy nuclides must have a lifetime comparable with the age of the Earth (about 4 - 5·10^9 y). The search for superheavy elements in terrestrial samples, in meteorites and in cosmic rays was one of the extensive experimental investigations of the 1970s and 1980s.22 When choosing the objects of such studies, it was assumed that the most stable nuclei were located in the vicinity of the closed shells Z = 114 and N = 184 (corresponding to the chemical behavior of Pb), where the maximum shell effect was expected. In all experiments, only upper limits on the superheavy element concentration in the studied samples were determined. All this resulted in a pessimistic view of the possible existence of SHE in nature, and the search experiments practically stopped in the mid-1980s.
Fig. 9. Predicted half-lives T1/2 of isotopes of elements 108 and 114 against spontaneous fission (left panel) and α decay (right panel), shown as log10 T1/2 (s) versus neutron number.
The experimental data accumulated, and the development of modern microscopic models, during the past 30 years have stimulated a new approach to the search for promising objects. A noticeable increase in T1/2(α) and T1/2(SF) may be expected in the region of nuclei with Z ≤ 110,14 which has not yet been explored. Calculated half-lives T1/2 of isotopes of elements 108 and 114 against spontaneous fission and α decay, obtained in a macro-microscopic model,20 are shown in Fig. 9. Considering different nuclei as objects for such studies, it turns out that for element 108, Hs,14 the chemical homologue of Os, the chances of being found in terrestrial samples could be favorable.
The search for rare decays may be undertaken with a raw metallic sample of Os, in which atoms of Hs could be present. Spontaneous fission in an Os sample (up to 1 kg) can be registered by detecting the multiple emission of prompt neutrons.22 Such an experiment is now running in the underground laboratory in Modane (France), protected by a 4000 m water-equivalent overburden, and it will be continued for several years.
Another possibility for choosing objects promising for the SHE search follows from the discovered high volatility of Cn and of element 114. These elements may be (noble) gases at normal conditions; the boiling temperature of Cn is (360 ± 100) K. Thus one can look for SHE in the heavy fractions of Xe production.
The problem of the existence of superheavy elements belongs to the most fundamental in the natural sciences, because it affects nuclear and atomic physics, quantum chemistry, the electrodynamics of strong fields, astrophysics and cosmology, and the efforts to solve it will undoubtedly be actively continued.
References
1. A. Sobiczewski, F. A. Gareev, B. N. Kalinkin, Phys. Lett. 22, 500 (1966).
2. H. B. Meldner, Ark. Fys. 36, 593 (1967).
3. A. Sobiczewski, K. Pomorski, Prog. Part. Nucl. Phys. 58, 292 (2007).
4. S. G. Nilsson, S. G. Thompson and C. F. Tsang, Phys. Lett. 28, 458 (1969).
5. G. N. Flerov et al., Nucl. Phys. A 267, 359 (1976).
6. Yu. Ts. Oganessian et al., Nucl. Phys. A 294, 213 (1978).
7. P. Armbruster et al., Phys. Rev. Lett. 54, 406 (1985).
8. R. W. Lougheed et al., Phys. Rev. C 32 (1985).
9. S. Hofmann, G. Münzenberg, Rev. Mod. Phys. 72, 733 (2000).
10. Yu. Ts. Oganessian et al., Nucl. Phys. A 239, 353 (1975).
11. K. Morita, Prog. Part. Nucl. Phys. 62, 325 (2009).
12. Yu. Ts. Oganessian et al., Eur. Phys. J. A 5, 63 (1999).
13. Yu. Ts. Oganessian et al., Phys. Rev. Lett. 83, 3154 (1999).
14. Yu. Ts. Oganessian, J. Phys. G: Nucl. Part. Phys. 34, R165 (2007).
15. Yu. Ts. Oganessian et al., Phys. Rev. Lett. 104, 142502 (2010).
16. A. B. Yakushev et al., Radiochim. Acta 89, 743 (2001).
17. R. Eichler et al., Nature 447, 72 (2007).
18. R. Eichler et al., Angew. Chem. Int. Ed. 47, 3262 (2008).
19. R. Eichler et al., Radiochim. Acta 98, 133 (2010).
20. R. Smolańczuk, Phys. Rev. C 56, 812 (1997).
21. V. E. Viola Jr., G. T. Seaborg, J. Inorg. Nucl. Chem. 28, 741 (1966).
22. G. N. Flerov, G. M. Ter-Akopian, Rep. Prog. Phys. 46, 817 (1983).
PART XIII
General Relativity
ON THE THRESHOLD OF GRAVITATIONAL WAVE ASTRONOMY

P. AUFMUTH
Max Planck Institute for Gravitational Physics (Albert Einstein Institute)
Callinstr. 38, 30167 Hannover, Germany
[email protected]
www.aei-hannover.de

The first generation of interferometric gravitational wave detectors operates continuously at the design sensitivity. Up to now, no gravitational wave signal has been detected. However, the analysis of the data obtained so far gives upper limits that allow one to make statements about, e.g., the ellipticity of neutron stars and to decide on theories predicting properties of the cosmic background radiation. The next generation of gravitational wave detectors is under construction. These will be able (by about 2014) to observe a thousand times larger volume of the Universe than now and to start gravitational wave astronomy.

Keywords: General relativity, astrophysics, astronomy, gravitational waves.
1. Introduction

The first generation of interferometric gravitational wave detectors has reached its design sensitivity (goal: to detect bursts with amplitudes of 10^-21). They operate as a network, taking data continuously over months or years. Their potential sources include, e.g., the coalescence of black hole binaries at distances of up to about 150 Mpc and spinning neutron stars in our galaxy with ellipticities larger than about 10^-6. Detections at this initial level are not expected to be frequent, but all the detectors are to be upgraded by 2014. This will result in a tenfold improvement over the current sensitivity, increasing the observable volume by a factor of a thousand. These second-generation detectors can be expected to open a new branch of astronomy: gravitational wave astronomy.
This paper reviews the results obtained so far and the plans for future detectors. Section 2 gives a short introduction to the properties of gravitational waves (for a more comprehensive account see Ref. 1 or Ref. 2). Section 3 gives an overview of the first-generation detectors and the first data-taking campaigns, while Sec. 4 gives an account of the results obtained so far. Section 5 reviews the plans for the next generations of detectors. In Sec. 6 the importance of gravitational wave astronomy is emphasized.
2. Gravitational Waves

In metric theories of gravity, space-time is a deformable quantity. Since any signal can propagate only with finite velocity, every disturbance of space-time automatically produces a wave-like phenomenon: a gravitational wave. General Relativity is Einstein's metric theory of gravity. Soon after the completion of his theory in November 1915, Einstein showed the existence of wave-like solutions of his field equations.3 Gravitational waves are ripples in the geometry of space-time (described by the metric gik), produced by accelerated masses if they possess a time-varying mass quadrupole moment. These waves are transverse, with two polarization modes, and propagate with the velocity of light c. In a co-moving coordinate system they appear as small changes hik of the flat background metric ηik:
g_{ik} = \eta_{ik} + h_{ik}
       = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
       + \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & h_+ & h_\times & 0 \\ 0 & h_\times & -h_+ & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
for a wave propagating in the z-direction; h_+ and h_x are the amplitudes of the two polarization modes. This changes the x and y components of the line element ds^2, e.g. for h_+, in the following way:

ds^2 = -c^2 dt^2 + (1 + h_+) dx^2 + (1 - h_+) dy^2 + dz^2.

Thus, the wave is stretching and compressing the proper distance between two objects in the (x, y)-plane. The amplitude h depends on the temporal variation of the quadrupole moment Q_{ik} of the source:4

h_{ik} = \frac{2G}{r c^4} \frac{d^2 Q_{ik}}{dt^2},

where G is Newton's gravitational constant and r is the distance to the source. Only sources with large quadrupole moments that change very fast produce observable gravitational wave amplitudes. Such sources are compact astrophysical objects like neutron stars or black holes and astrophysical processes like a supernova core collapse, the merger of two compact objects or the Big Bang (see Fig. 1). Since gravitational waves have nearly no interaction with matter, the whole Universe is transparent to gravitational radiation.
The theory of gravitational radiation already makes an important contribution to the understanding of astronomical systems. The famous Hulse-Taylor pulsar PSR B1913+16 consists of two neutron stars orbiting in a close eccentric orbit. Since all the relevant parameters of the system can be measured accurately, the orbital shrinking due to gravitational radiation back-reaction can be deduced independently; e.g., the decay of its orbital period is consistent with the general relativistic prediction to within a tenth of a percent.5 Another spectacular system is OJ287, in which two supermassive black holes orbit each other.6
Fig. 1. Sources and frequencies of gravitational waves. The ordinate gives the frequency f on a logarithmic scale in Hz. The upper part lists possible sources (NS = neutron star, BH = black hole), the lower part displays methods to detect them (CMB = cosmic microwave background). PLANCK and LISA are satellite missions. The broken line encloses detectors for the direct observation of gravitational waves.
Gravitational wave detectors on Earth and in space will observe the frequency range from about 10^-4 Hz to 10 kHz. At lower frequencies one can use observations of timing irregularities of single pulsars caused by gravitational waves to set upper limits on the background gravitational-wave field ("pulsar timing"). At the lowest end of the gravitational wave spectrum, the PLANCK satellite will look for polarization patterns in the cosmic microwave background produced by gravitational waves in the early Universe.

3. First-Generation Detectors

The aim of the first generation of gravitational wave detectors was to be able to observe at least supernovae in the Milky Way and neutron star mergers in the Virgo cluster. Theory and numerical simulations predicted that a detector had to be sensitive enough to measure a relative amplitude h = δl/l of 10^-21 to match these conditions (δl = length change due to the gravitational wave, l = characteristic length of the detector). For a detector with 1 km length, δl amounts to 10^-18 m. Because gravitational wave detectors have wide antenna patterns, signals can be detected from essentially everywhere in the sky.

3.1. Resonant mass antennae

A gravitational wave acts like a tidal force across an extended rigid object, so that the object is stretched and compressed. If the resonance frequency of this object matches the gravitational wave frequency, a small amount of energy is absorbed, exciting its longitudinal vibration modes. The first gravitational wave detectors were large bars of an aluminum alloy with a resonance frequency of about 1 kHz
("Weber bars").7 The noise level of present detectors corresponds to an amplitude of 1.4 × 10^-19 with a bandwidth of about 100 Hz.8 Since the mid-1990s a network of five bar detectors has been operating:
• ALLEGRO at Baton Rouge, LA (USA) with a mass of 2296 kg Al,
• AURIGA at Legnaro (Italy), m = 2230 kg Al,
• EXPLORER at CERN, Geneva (Switzerland), m = 2270 kg Al,
• NAUTILUS at Frascati (Italy), m = 2260 kg Al, and
• NIOBE at Perth (Australia) with a 1500 kg Nb cylinder
working together in the International Gravitational Event Collaboration (IGEC) to coordinate observation time and data analysis.9,10 The performance of resonant detectors is limited by thermal and quantum noise, but also by their small size and bandwidth. So, after the arrival of interferometric detectors (see Sec. 3.2), bar detectors have lost importance. Two of them (ALLEGRO and NIOBE) are no longer funded, but the remaining network will be able to observe supernova events within the Milky Way and to watch the skies while the interferometers are being upgraded.

3.2. Interferometers

A gravitational wave affects the proper distance l between two objects in space. It changes two perpendicular distances by the same amount δl, but with opposite sign, if the orientation of these objects is optimal. These changes produce a phase shift δφ between two light beams of wavelength λ travelling along these distances: δφ = 4πδl/λ. A Michelson interferometer is the perfect instrument to detect this phase shift as a change in the interference pattern at the output port. The sensitivity depends on the arm length and on the amount of light energy stored in the arms. The advantages of laser interferometers compared with resonant detectors are the broad detection band, from 10 Hz to 5 kHz, and the higher attainable sensitivity.
As an example of an interferometric gravitational wave detector, Fig. 2 shows the optical layout of the GEO600 detector.11 The light from a master-slave laser system is filtered by two sequential mode cleaners (MC1 and MC2). The stabilized light with a power of 3.2 W is then injected (mirror BDIPR) into the Michelson interferometer (BS = beam splitter) with folded arms (mirrors MCn and MCe) and an optical round-trip length of 2400 m (end mirrors MFn and MFe). The operating point is chosen to be at a dark fringe, such that all the light goes back towards the laser, where it can be recycled (power-recycling mirror MPR, transmission T = 0.09%). This enhances the circulating light power within the interferometer by a factor of about a thousand, to 2.7 kW. Only signal sidebands and control sidebands leave the interferometer towards the output port, which hosts an output mode cleaner and a signal-recycling mirror (MSR with T = 1.9%). With detuned signal recycling, the gravitational wave signal has optimally combined frequency-dependent components in both output quadratures P(t) and Q(t).
Fig. 2. Simplified optical layout of the GEO600 detector.11
Five large interferometric gravitational wave detectors are in operation:
• GEO600 near Hannover (Germany) with an arm length of 600 m,
• LIGO, two detectors (H1 and L1) at Hanford, WA and Livingston, LA (USA) with 4 km arm lengths (and an additional 2 km interferometer (H2) within the same vacuum system at Hanford),
• TAMA300 at Tokyo (Japan) with 300 m arm length, and
• Virgo near Pisa (Italy) with an arm length of 3 km.
All the detectors use Fabry-Perot cavities in the arms, apart from GEO600, which uses a delay line instead. Up to now, GEO600 is the only instrument applying signal recycling. This technique resonantly enhances a periodic signal and allows one to tune the detector. These detectors work together in the LIGO Scientific Collaboration and the Virgo Collaboration in order to coordinate observation time and data analysis.
Data taking started in 2001 (TAMA300: 1999); since then several data-taking runs have been performed, with increasing sensitivity. LIGO (see Fig. 3) and GEO600 achieved their design sensitivity in 2005, Virgo in 2007. LIGO completed a two-year-long science run (called "S5") in October 2007, with the participation of GEO600 from May to October 2006 and Virgo from May to October 2007. At present, all the detectors are undergoing several upgrades.
Interferometers and bar detectors also perform joint searches or compare their data. E.g., in 2007 LIGO data were cross-correlated with ALLEGRO data in order to search for the stochastic background, providing the best upper limits at that time.13 AURIGA and LIGO have done a joint search for gravitational wave bursts.14
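As a rough numerical illustration of the phase-shift relation δφ = 4πδl/λ quoted in Sec. 3.2, the sketch below plugs in GEO600-like numbers. The 1064 nm Nd:YAG wavelength is an assumption made here (it is not stated in the text above), and the effective arm length is taken simply as half the 2400 m optical round trip, ignoring any power- or signal-recycling gain.

# Order-of-magnitude phase shift in a GEO600-like Michelson interferometer
# for a gravitational wave of amplitude h (illustrative, no recycling gains).
import math

h = 1e-21                 # target strain amplitude quoted in the text
l_eff = 2400.0 / 2.0      # m, effective arm length (half the optical round trip)
wavelength = 1.064e-6     # m, Nd:YAG laser wavelength (assumption)

delta_l = h * l_eff                              # length change delta-l = h * l
delta_phi = 4.0 * math.pi * delta_l / wavelength
print(f"delta_l = {delta_l:.2e} m, delta_phi = {delta_phi:.2e} rad")
# ~1e-18 m and ~1e-11 rad: this is why high circulating power and recycling are needed.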
Fig. 3. Sensitivity of the LIGO detectors during the S5 run (upper curve: H2; lowest curve: H1; slightly above: L1). The figure gives the strain noise density h̃(f) (the square root of the power spectrum) in Hz^-1/2. The strain amplitude is obtained by multiplication with the square root of the detection bandwidth.12
4. First Results

Over the past decade, gravitational wave detectors have accumulated a wealth of data. In parallel, data evaluation algorithms have been refined in order to extract signals from these data. Theoretical work has provided a much better understanding of possible sources and of systems like neutron star binaries or cataclysmic variables. Up to now, all these efforts have produced dozens of papers but no direct detection. On the other hand, even this null result provides information about the astrophysics of neutron stars and the population of possible sources, as well as about the origin of the cosmological background.15
The most sensitive data have been collected during the S5 run of LIGO. During this run, all three LIGO detectors had sensitivities near their design goals. At signal frequencies near 100 Hz the strain sensitivity obtained for periodic signals was smaller than 10^-24. The data were acquired and digitized at a rate of 16384 Hz.12,16

4.1. Periodic signals

Rotating neutron stars in our Galaxy are the prime target for the search for periodic gravitational waves. These can be emitted by means of a static deformation or internal matter oscillations. The observed spin-down rate of known pulsars provides an upper limit on their possible gravitational wave losses.
From the first year of the S5 run, data segments containing at least 30 min of continuous interferometer operation have been evaluated over the frequency range from 50 to 1100 Hz. The results of the analysis with the PowerFlux algorithm allowed strict all-sky frequentist upper limits to be set on the strength of continuous-
wave gravitational radiation of linear and circular polarization. An analysis of coincidence candidates with a signal-to-noise ratio (SNR) larger than 6.25 did not yield a detection. This limits the detection sensitivity for unknown neutron stars to equatorial ellipticities of about 10^-6 up to a distance of 500 pc. The results imply the following portrait of the galactic population of neutron stars spinning down primarily due to gravitational radiation: a birth rate of less than one per 30 yr, typical ellipticities less than 10^-6, and spin periods greater than 10 ms.16
The Crab pulsar (PSR J0534+2200), at a distance of only about 2 kpc, has long been regarded as one of the most promising sources of gravitational wave emission. Its energy loss rate is estimated to be about 4 × 10^31 W and its ellipticity to be approximately 10^-4. If all the spin-down were due to gravitational radiation, a gravitational strain of 1.4 × 10^-24 would result, detectable by LIGO by integrating several months of data. Since no signal has been detected, the fraction of energy lost by the Crab pulsar to gravitational waves can be at most about 6%. The vast majority of the observed loss is due to electromagnetic emission (magnetic dipole radiation), particle acceleration in the magnetosphere and the expansion of the Crab nebula.17

4.2. Stochastic background

The Universe should have an isotropic background of gravitational radiation from the Big Bang. It could be produced by the amplification of vacuum fluctuations. This background is expected to be very weak, but it carries information about the very early Universe, from a time about 10^-30 s after its birth.18 The gravitational wave signal has the form of random noise with a characteristic power spectrum. It can be distinguished from instrumental noise by cross-correlating two or more detectors. It is usual to characterize the intensity of a random field of gravitational waves by its energy density as a function of frequency:

\Omega_{\mathrm{GW}}(f) = \frac{f}{\rho_c} \frac{d\rho_{\mathrm{GW}}}{df},

where dρ_GW is the energy density of gravitational radiation contained in the frequency range from f to f + df and ρ_c is the critical density of the Universe.
Theories about the early Universe give tight constraints on the gravitational wave energy density. A large Ω_GW would alter the abundances of light nuclei produced in the Big Bang and also the observed cosmic microwave background and matter power spectra. Big Bang nucleosynthesis predicts Ω_GW^BBN < 1.1 × 10^-5 and the CMB gives Ω_GW^CMB < 9.5 × 10^-6. With 95% confidence, the upper limit from LIGO (S5) is Ω_GW < 6.9 × 10^-6. This result also constrains models of cosmic (super)strings formed during phase transitions in the early Universe.19

4.3. Gravitational collapse

In the early days of gravitational wave detection a type II supernova was expected to be the most promising source of gravitational radiation. Since then, refined sim-
ulations predict amplitudes a hundred times lower than first believed. In a typical case gravitational waves might extract between about 10^-7 and 10^-5 of the total available mass-energy, producing a burst with a frequency between 200 and 1000 Hz.2 A rough estimate of the amplitude is

h = 6 \times 10^{-21} \left( \frac{E}{10^{-7} M_\odot} \right)^{1/2} \left( \frac{1\ \mathrm{ms}}{T} \right)^{1/2} \left( \frac{1\ \mathrm{kHz}}{f} \right) \left( \frac{10\ \mathrm{kpc}}{r} \right)
for a supernova in our galaxy, 10 kpc away, emitting the energy equivalent of 10^-7 solar masses at a frequency of 1 kHz during 1 ms. This amplitude is large enough for current ground-based detectors to observe the signal, but the event rate within 10 kpc is expected to be far too small to make a detection likely.
The first year of LIGO S5 data has been searched for gravitational wave bursts in the frequency range from 64 Hz to 2000 Hz. The analysis used 269 days of data during which two or more LIGO detectors were in science mode. After evaluation of the data quality and a background estimation (false trigger rate from detector noise and artifacts), three algorithms were used to analyze the data. In addition, many simulated signals (e.g., sine-Gaussians) were injected into the data stream in order to simulate the passage of gravitational wave bursts and to obtain an estimate of the efficiency of the search.20 The overall sensitivity is expressed in terms of the root-sum-square strain amplitude

h_{\mathrm{rss}} = \sqrt{ \int_{-\infty}^{+\infty} \left( |h_+(t)|^2 + |h_\times(t)|^2 \right) dt }
calculated from both polarization component amplitudes at the Earth. For the S5 run, h_rss was in the range of 6 × 10^-22 Hz^-1/2 to a few 10^-21 Hz^-1/2. No signals were observed, and a frequentist upper limit of 3.75 events per year on the rate of strong bursts was placed at the 90% confidence level. One can estimate what amount of mass, converted into gravitational wave burst energy at a distance of 10 kpc, is needed for a detection with 50% efficiency. For a 153 Hz sine-Gaussian this is 1.9 × 10^-8 M_⊙. For a source in the Virgo cluster, 16 Mpc away, an energy emission of about 0.05 M_⊙ c^2 is needed to produce the same h_rss, a value not to be expected in a supernova event. The most realistic simulations of gravitational wave forms are performed for type II supernovae; the astrophysical reach for such an event today is 24 kpc.
The network of resonant bars has also performed long data runs, from 1997 to 2000 ("IGEC-1", with five detectors with resonance frequencies between 890 and 935 Hz) and from May to November 2005 ("IGEC-2", with three detectors sensitive to frequencies between 850 and 960 Hz). In IGEC-1 four bars ran jointly for 29 days, and three for 178 days. The overall sensitivity was h_rss > 8 × 10^-19 Hz^-1/2. The analysis of the first run set a rate limit of 1.5 events per year at the 95% confidence level for bursts near the resonance frequency of the bars.9 The second run was a factor of about 3 more sensitive.10 No burst candidates were found.
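To make the h_rss definition above concrete, the sketch below numerically evaluates it for a linearly polarized sine-Gaussian injection of the kind mentioned in the burst search. The amplitude, Q factor and the 153 Hz central frequency used here are illustrative choices, not values taken from the actual analysis.

# h_rss of a linearly polarized sine-Gaussian burst,
# h_plus(t) = h0 * sin(2*pi*f0*t) * exp(-t^2/tau^2), h_cross = 0,
# evaluated from the definition h_rss = sqrt( integral (|h_+|^2 + |h_x|^2) dt ).
import numpy as np

f0, Q, h0 = 153.0, 9.0, 1e-21          # central frequency (Hz), quality factor, peak amplitude
tau = Q / (np.sqrt(2.0) * np.pi * f0)  # one common convention relating Q and the Gaussian width

t = np.linspace(-8 * tau, 8 * tau, 200001)
dt = t[1] - t[0]
h_plus = h0 * np.sin(2 * np.pi * f0 * t) * np.exp(-(t / tau) ** 2)
h_cross = np.zeros_like(t)             # linear polarization: no cross component

h_rss = np.sqrt(np.sum(h_plus**2 + h_cross**2) * dt)
print(f"h_rss = {h_rss:.2e} Hz^-1/2")  # of order 1e-22 for these illustrative numbers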
4.4. Binary star mergers

The most promising sources are binary neutron stars emitting strong gravitational radiation during the inspiral phase just before merging. This energy loss causes the orbit to shrink continuously, thus producing a signal with increasing amplitude and frequency (called a "chirp"). Today, these systems are detectable out to tens of Mpc. Systems consisting of a stellar black hole and a neutron star, or stellar black hole binaries, can be detected at much greater distances. The inspiral waveforms can be reliably predicted by post-Newtonian perturbation theory. These systems are so well understood that it is even possible to determine the distance to the source directly.2
As in the previous cases, data from the first year of the S5 run have been analyzed. The detectors were sensitive to coalescences as far as 150 Mpc from the Earth for systems with a total mass of about 28 M_⊙. The data were matched-filtered through a bank of about 7000 templates covering a total mass region of 2 M_⊙ < M < 35 M_⊙. If signals passed a preset SNR threshold, one searched for triggers coincident in time and template masses between two or three detectors.21 No gravitational wave signals were observed above the expected background. The cumulative 90% confidence upper limits on the rates of binary coalescences of neutron stars, black holes and black hole-neutron star systems are 1.4 × 10^-2, 7.3 × 10^-4 and 3.6 × 10^-3 yr^-1 L_10^-1, respectively. L_10 means a galaxy with 10^10 times the blue light luminosity of the Sun; this value is proportional to the stellar birth rate in nearby spiral galaxies.22

5. Future Detectors

We are literally on the threshold of gravitational wave astronomy. The detectors are able to observe a signal, but the event rates are still too low. If the detectors were sensitive enough to cover the Virgo cluster, e.g., an event rate of 30 per year for type II supernovae could be expected.
In 2008, LIGO and Virgo were upgraded with a new high-power laser (P = 35 W) from the GEO600 collaboration. Since July 2009, "enhanced LIGO" and "Virgo+" have again been collecting data (the "S6/VSR2" science run). After this, a major upgrade of both detectors in the years 2011 to 2014 is expected to result in a sensitivity one order of magnitude better than that of the current instruments. This corresponds to a volume of the Universe a thousand times larger than that presently scanned by gravitational wave detectors.

5.1. Advanced detectors

Advanced LIGO and Advanced Virgo represent the next generation of gravitational wave detectors, which will replace the initial ones.
Fig. 4. Sensitivity of present and future ground-based gravitational wave detectors. The figure gives the strain noise density h̃(f).30
They use the existing vacuum systems, but will include improved seismic isolation and suspensions, larger optics and high-power lasers. Installation of the improved hardware is planned for early 2011, and first observations could begin as early as 2014.23
Advanced LIGO, e.g., will use 40 kg silica optics, 34 cm in diameter and 20 cm thick. This large size reduces radiation pressure noise. The substrates will be made from Heraeus Suprasil 3001, the lowest-absorption fused silica. The test masses will be suspended by fused silica fibers in a four-stage pendulum system. This design ("monolithic suspension") is based on the GEO600 suspensions and reduces thermal noise below the level of radiation pressure noise. The three-stage Nd:YAG laser system with an output power of 180 W has been developed by members of the GEO600 collaboration (Albert Einstein Institute and Laser Zentrum Hannover). Advanced LIGO will also use signal recycling and an output mode cleaner. Most of these techniques have been developed and used in GEO600, which was in a certain sense already an advanced detector (and could therefore compete with the larger detectors).
In July 2009 GEO600 started an upgrade program called GEO-HF. The sensitivity will be improved by tuned signal recycling, DC readout, injection of squeezed vacuum states into the anti-symmetric port and a high-power laser.24 GEO-HF aims to be more sensitive at higher frequencies, above 1 kHz. A new 10 m prototype interferometer at Hannover will operate at the standard quantum limit and test quantum non-demolition techniques and the entanglement of macroscopic test masses.
LCGT (Large-Scale Cryogenic Gravitational-Wave Telescope) is planned as a TAMA-like 3 km interferometer with signal recycling, cooled to cryogenic temperatures in order to reduce thermal noise. It will be placed underground in a mine
in Gifu (Japan) to maintain an environment of low seismic noise.25 A prototype facility is CLIO, the Cryogenic Laser Interferometer Observatory, a 100 m baseline cryogenic underground interferometer. It has been built in the Kamioka mine to demonstrate the feasibility of a cryogenic mirror system and the benefits of an underground location.26
New design schemes for resonant mass detectors have been proposed, too. It is possible to construct spheres of a size similar to that of existing cylinders (1 to 3 m diameter), thus increasing the mass of the detector. A sphere gets rid of the dependence on the direction of incidence of the wave, since five quadrupole modes are excited. This allows a single detector to determine the position of the source and the polarization of the wave. The sensitivity may be pushed below 10^-21. A spherical prototype (MiniGRAIL) has been operated in the Netherlands.27 Recently proposed nested cylinders (DUAL) have the potential to reach the sensitivity of the advanced interferometers at higher frequencies with a larger bandwidth.28

5.2. Third-generation detectors

The sensitivity of the advanced interferometers is expected to guarantee the detection of gravitational wave signals. But in order to open the era of routine gravitational wave astronomy we need a third generation of detectors with another factor of 10 improvement in sensitivity. The European Commission has funded a design study for such a detector under the Framework Programme 7 (FP7, Grant Agreement 211743): the Einstein Telescope (ET). The arm length will probably be about 10 km, but the usual L-shaped geometry is replaced by a triangular one, where the arms form a 60° angle. Three co-located detectors of this kind can be accommodated in a triangular underground site; this will be useful to extract additional information from the gravitational wave. The detector will use cryogenically cooled heavy test masses, a high-power laser (1 kW) and squeezed-light injection. Multiple-stage pendulums push the low-frequency limit toward 1 Hz.29 The first data taking of ET will not take place before 2025. Figure 4 shows the sensitivity curves for present and future ground-based gravitational wave detectors.30
But even with ET, we will never be able to detect gravitational waves in the millihertz range, because of seismic noise. Only space missions can open this observational window. ESA and NASA have agreed to collaborate on the LISA (Laser Interferometer Space Antenna) project, consisting of three identical drag-free spacecraft placed at the corners of an equilateral triangle with a side length of 5 million km. This constellation is to revolve around the Sun in an Earth-like orbit, about 20° behind the Earth (Fig. 5). Each spacecraft has two separate lasers that are phase-locked so as to represent the beam splitter of a Michelson interferometer. The distances are measured between test masses (Au/Pt alloy cubes) freely floating within the spacecraft. The sensitivity h is about 3 × 10^-24. Launch is scheduled for about 2022.
Fig. 5. LISA orbit; the triangle is drawn one order of magnitude too large.1
the feasibility of the technology and of the measurement scheme of LISA. The launch is scheduled for 2012. The final orbit is around the Sun–Earth Lagrangian point L1.31 DECIGO (Decihertz Interferometer Gravitational Wave Observatory) is a future space antenna with an observation frequency band around 0.1 Hz. Four interferometer units will orbit the Sun along the Earth's orbit. Each unit will be formed by three drag-free spacecraft separated by 1000 km from one another. DECIGO will fill the gap between the sensitivity curves of LISA and ET and will be launched in about 2027.32

6. Conclusion

In the last five years gravitational wave detectors have not succeeded in a direct detection of a signal, but the first detection could occur at any time. Even now, the analyses of the accumulated data are placing meaningful constraints on astrophysical objects and events, source populations and the total energy density of gravitational waves in the Universe. Advanced detectors in the near future and third-generation observatories in more than one decade will open the possibility of performing gravitational wave astronomy. This will give us completely new views on fundamental physics and astrophysics. One can look forward to detailed comparisons of black hole mergers with theory, to the relationship between compact-object mergers and gamma-ray bursts, to a precise calibration-free measurement of the Hubble constant, and to population studies of neutron stars and black holes.2 Speculations about the amplification of relic gravitational waves during the inflationary era, the existence of cosmic strings or the nature of dark matter can be put to the test.

References
1. P. Aufmuth: The Search for Gravitational Waves - Status and Perspectives, in Beyond the Desert 2003, ed. by H. V. Klapdor-Kleingrothaus (Springer 2004) pp. 1055–1076
2. B. S. Sathyaprakash, B. F. Schutz: Living Rev. Relativity 12, 2 (2009)
3. A. Einstein: Sitzungsber. Preuss. Akad. Wiss. Berlin 688 (1916)
4. A. Einstein: Sitzungsber. Preuss. Akad. Wiss. Berlin 154 (1918)
5. J. M. Weisberg, J. H. Taylor: ASP Conf. Ser. 328, 25 (2004)
6. L. J. Valtonen et al.: Nature 452, 851 (2008)
7. J. Weber: Phys. Rev. 117, 306 (1960)
8. V. Fafone: Class. Quantum Grav. 23, S223 (2006)
9. P. Astone et al.: Phys. Rev. D 68, 022001 (2003)
10. P. Astone et al.: Phys. Rev. D 76, 102001 (2007)
11. B. Willke: Class. Quantum Grav. 24, S389 (2007)
12. B. P. Abbott et al.: Rep. Prog. Phys. 72, 076901 (2009)
13. B. Abbott et al.: Phys. Rev. D 76, 022001 (2007)
14. L. Baggio et al.: Class. Quantum Grav. 25, 095004 (2008)
15. P. S. Shawhan: Class. Quantum Grav. 27, 084017 (2010)
16. B. P. Abbott et al.: Phys. Rev. Lett. 102, 111102 (2009)
17. B. Abbott et al.: Astrophys. J. Lett. 683, L45 (2008)
18. M. Maggiore: Phys. Rep. 331, 283 (2000)
19. B. P. Abbott et al.: Nature 460, 990 (2009)
20. B. P. Abbott et al.: Phys. Rev. D 80, 102001 (2009)
21. B. P. Abbott et al.: Phys. Rev. D 79, 122001 (2009)
22. B. P. Abbott et al.: Phys. Rev. D 80, 047101 (2009)
23. G. M. Harry: Class. Quantum Grav. 27, 084006 (2010)
24. H. Grote: Class. Quantum Grav. 27, 084003 (2010)
25. K. Kuroda: Class. Quantum Grav. 27, 084004 (2010)
26. S. Kawamura: Class. Quantum Grav. 27, 084001 (2010)
27. L. Gottardi et al.: Phys. Rev. D 76, 102005 (2007)
28. M. Bonaldi et al.: Phys. Rev. D 74, 022003 (2006)
29. M. Punturo et al.: Class. Quantum Grav. 27, 084007 (2010)
30. S. E. Whitcomb: Class. Quantum Grav. 25, 114013 (2008)
31. M. Armano et al.: Class. Quantum Grav. 26, 094001 (2009)
32. M. Ando et al.: Class. Quantum Grav. 27, 084010 (2010)
SPHERICAL ACCRETION OF RELATIVISTIC FLUID ONTO SUPERMASSIVE BLACK HOLE INCLUDING BACK-REACTION

M. C. RICHTER∗, G. B. TUPPER, R. D. VIOLLIER
Department of Physics, University of Cape Town, Cape Town, Western Cape 8005, South Africa
∗E-mail: [email protected]

A new general framework for studying the relativistic spherical accretion of a self-gravitating fluid onto a central black hole is introduced in stationary coordinates for an observer at infinity. The important feature of the gravitational back-reaction of the self-gravitating fluid on the metric is included in the model. The model is solved numerically for the simplest case of a polytropic fluid and compared to analytical solutions, and the implications of these findings are discussed. Finally, the model is applied to the accretion of a relativistic Fermi gas and the implications this might have for the rapid growth of supermassive black holes in the early universe.

Keywords: Supermassive black holes, accretion, quasars, back-reaction, dark matter
1. Introduction

The current theory of supermassive black holes (SMBHs) and their origin seems rather incomplete, since it does not describe very accurately the timescale on which we observe these colossal objects in the early universe. There are many gaps in our knowledge regarding the origin of SMBHs, some of which are more massive than three billion Suns. A most peculiar observation1 of the early universe is that these SMBHs are observed at red-shift values of z ∼ 6, which corresponds to a time of t = 850 million years (Myr) after the Big Bang. What is puzzling about this observation is that such an early appearance cannot be reconciled with a theoretical picture of a solar-mass black hole growing to a supermassive size by accreting ordinary baryonic matter. That kind of accretion is limited in growth rate by the Eddington limit, which slows the in-fall of ionized matter through the emission of radiation. Calculations of such growth times yield a prediction for the emergence of SMBHs at least twice as late as is observed. Further confounding evidence is the fact that supermassive black holes in the form of quasars emerge well ahead of the smaller-scale SMBHs observed as active galactic nuclei (AGN), seemingly pointing to an anti-hierarchical nature of the accretion process, which is also poorly explained by many contemporary theories. The currently available accretion theories, based on work done by Lyttleton, Hoyle and Bondi,2–4 and later by Michel5 and Shapiro and Teukolsky,6 also fail to address an important issue of the accretion process,
namely the effect that the surrounding fluid has on the spacetime and hence on the accretion rate. This effect is also known as the self-gravity of the fluid, or the back-reaction. A detailed look at a relativistic accretion theory is necessary to describe the origin of quasars and active galactic nuclei, which are believed to be SMBHs. A solution to these problems with the early formation of SMBHs is proposed by the symbiotic scenario set out in Ref. (7), whereby ordinary solar-mass seed black holes grow to the supermassive scale by accreting dark matter. In this theory, the dark matter is composed of degenerate sterile neutrinos8 that form supermassive structures early in the universe. Baryonic matter, in the form of molecular hydrogen clouds, interacts gravitationally with the sterile-neutrino dark matter and readily coalesces at the centre of these proposed sterile-neutrino balls. In this way a large star can form and eventually undergo gravitational collapse to a black hole of several solar masses. This seed black hole would then accrete the surrounding sterile-neutrino dark matter. This process is not hampered by the Eddington limit, so the central body can grow much more quickly than it could by accreting ordinary baryonic matter. Calculations of the accretion rate within the non-relativistic theory agree well with the early appearance of the quasars mentioned above.

2. Presentation of the Theory

To describe the accretion of a perfect fluid, choose a spherically symmetric metric tensor akin to the Schwarzschild metric,
\[ ds^2 = e^{\nu}\,dt^2 - e^{\lambda}\,dr^2 - r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right). \qquad (1) \]
In this instance, ν and λ are functions of both t and r. The function λ is defined in the Schwarzschild sense (with units of c = 1),
\[ e^{\lambda} = \frac{1}{1 - \dfrac{2GM}{r}}, \qquad (2) \]
made time-dependent through the fact that the enclosed mass M = M(r, t). Ultimately, through the Einstein field equations, the set of equations
\[ \frac{dM}{dr} = 4\pi r^2\,T_0^{\;0}, \qquad (3) \]
\[ \frac{dM}{dt} = -4\pi r^2\,T_0^{\;r}, \qquad (4) \]
\[ \frac{1}{2}\frac{d\nu}{dr} = \frac{G}{r^2}\,\frac{M - 4\pi r^3\,T_r^{\;r}}{1 - \dfrac{2GM}{r}} \qquad (5) \]
is obtained. As these equations are still very general, a specific matter content must still be imposed. This is done by choosing the stress-energy tensor of a perfect fluid, $T_\mu^{\;\nu} = (\rho + P)\,U_\mu U^\nu - \delta_\mu^{\;\nu} P$. It must be stipulated that this is a model of stationary accretion flow, such that $\dot{\rho} = \dot{P} = 0$.
At this point it would also be prudent to define a new flow velocity v to parameterise the four-velocity $U^\mu$, such that
\[ U^0 = \frac{e^{-\nu/2}}{\sqrt{1-v^2}}, \qquad U^r = -\frac{v\,e^{-\lambda/2}}{\sqrt{1-v^2}}, \qquad (6) \]
where again $\dot{v} = 0$. These specifications bring about the relativistic accretion equations
\[ \frac{dM}{dr} = 4\pi r^2\,\frac{\rho + v^2 P}{1-v^2}, \qquad (7) \]
\[ \frac{dM}{dt} = 4\pi r^2\,(\rho+P)\,\frac{v}{1-v^2}\left(1-\frac{2GM}{r}\right)^{1/2} e^{\nu/2}, \qquad (8) \]
\[ \frac{1}{2}\frac{d\nu}{dr} = \frac{G}{r^2}\,\frac{M + 4\pi r^3\,\dfrac{P+v^2\rho}{1-v^2}}{1-\dfrac{2GM}{r}}, \qquad (9) \]
where ν/2 is a general relativistic analogue of the Newtonian potential. The generalised version of the Tolman–Oppenheimer–Volkoff (TOV) equation9,10 is derived from the conservation of energy, such that
\[ \frac{v}{1-v^2}\frac{dv}{dr} + \frac{1}{\rho+P}\frac{dP}{dr} + \frac{G}{r^2}\,\frac{M + 4\pi r^3 P}{1-\dfrac{2GM}{r}} = 0. \qquad (10) \]
Combining the generalised TOV relation Eq. (10) and the requirement of stationary flow produces the two first-order relations
\[ \frac{c_s^2 - v^2}{\rho+P}\,\frac{d\rho}{dr} = -\frac{G}{r^2}\,\frac{M\left(1-v^2\right) + 4\pi r^3\left(P+v^2\rho\right)}{1-\dfrac{2GM}{r}} + \frac{2v^2}{r}, \qquad (11) \]
\[ \frac{c_s^2 - v^2}{1-v^2}\,\frac{1}{v}\frac{dv}{dr} = \frac{G}{r^2}\,\frac{M\left(1-c_s^2\right) + 4\pi r^3\left(P+c_s^2\rho\right)}{1-\dfrac{2GM}{r}} - \frac{2c_s^2}{r}, \qquad (12) \]
where the speed of sound of the accreting matter is introduced as $c_s^2 = \partial P/\partial\rho$. Equations (7), (11) and (12) completely describe the stationary radial accretion flow as a system of nonlinear, coupled, first-order equations.
3. Test Cases for the Relativistic Accretion Theory

The system of relativistic accretion equations requires one further specification to yield a feasible test case. For this, the polytropic equation of state approximating ultra-relativistic matter, $P = c_s^2\rho$, is chosen. The system of equations is solved numerically by integrating from the observer at infinity toward the central object. Two values of the parameter $c_s^2$ admit analytical solutions: fluids modeling ultra-relativistic stiff matter and radiation.
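For concreteness, the inward integration described above can be sketched as follows. This is only a minimal illustration (not the authors' code), written in Python and assuming geometrized units (G = c = 1), the polytropic equation of state P = c_s²ρ, and arbitrary illustrative outer boundary values; r_out, r_in, M_out, rho_out and v_out are hypothetical numbers, not taken from the paper. The right-hand side implements Eqs. (7), (11) and (12) as written above.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0           # geometrized units (c = G = 1)
CS2 = 1.0 / 3.0   # polytropic equation of state P = cs2 * rho (radiation-like case)

def accretion_rhs(r, y, cs2=CS2):
    """Right-hand side of the stationary accretion system, Eqs. (7), (11), (12)."""
    M, rho, v = y
    P = cs2 * rho
    D = 1.0 - 2.0 * G * M / r                    # Schwarzschild-like factor
    grav = G / (r**2 * D)
    dM_dr = 4.0 * np.pi * r**2 * (rho + v**2 * P) / (1.0 - v**2)              # Eq. (7)
    drho_dr = (rho + P) / (cs2 - v**2) * (
        -grav * (M * (1.0 - v**2) + 4.0 * np.pi * r**3 * (P + v**2 * rho))
        + 2.0 * v**2 / r)                                                      # Eq. (11)
    dv_dr = v * (1.0 - v**2) / (cs2 - v**2) * (
        grav * (M * (1.0 - cs2) + 4.0 * np.pi * r**3 * (P + cs2 * rho))
        - 2.0 * cs2 / r)                                                       # Eq. (12)
    return [dM_dr, drho_dr, dv_dr]

# Illustrative (hypothetical) outer boundary conditions far from the central object.
r_out, r_in = 1.0e4, 4.0                  # integrate inward, staying outside r = 2GM
M_out, rho_out, v_out = 1.0, 1.0e-10, 1.0e-4

sol = solve_ivp(accretion_rhs, (r_out, r_in), [M_out, rho_out, v_out],
                method="RK45", rtol=1e-8, atol=1e-14)
print("flow velocity at the innermost integrated radius:", sol.y[2, -1])
```

Note that Eqs. (11) and (12) become singular where the flow velocity approaches the sound speed, so in practice the integration has to be handled carefully near the sonic point; this is where the critical-point and shock behaviour discussed in the test cases below originates.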
3.1. Ultra-relativistic stiff matter accretion

Firstly, for the case of relativistic stiff matter accretion, the speed of sound equals the speed of light, so that $c_s^2 = 1$. The accretion of stiff matter is therefore not expected to yield transonic flow, since the speed of sound coincides with the speed of light and is hence the limiting speed of the fluid flow. This means that a smooth flow of the accreting material toward the central body is expected in the stationary model. The numerical solution agrees to high accuracy with the analytical solution,11 with an error of order 10−6.
3.2. Radiation accretion

In the second case that allows an analytical solution, radiation accretion, the equation-of-state parameter is set to $c_s^2 = 1/3$. The theoretical analysis of Michel5 predicts a critical point in the flow at $x_c = 3$, where x = r/M is the radial coordinate in units of the total mass. At this particular point the flow velocity of the fluid reaches the speed of sound of the accreting material. When the fluid reaches the speed of sound at a radius larger than the critical point, a departure from smooth fluid flow is expected; this comes about in systems that are overly dense at the boundary and creates discontinuities, or shocks, in the fluid flow. Figure 1 shows the numerical solutions side by side with the analytical solution given by Ref. (11).
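As a quick cross-check of the quoted value $x_c = 3$, one can use the critical-point conditions of the test-fluid (Michel-type) problem; these relations are assumed here in their standard form from the general literature and are not spelled out in the text above:
\[
u_c^2 = \frac{GM}{2r_c}, \qquad c_s^2 = \frac{u_c^2}{1-3u_c^2}
\;\;\Longrightarrow\;\;
c_s^2 = \tfrac{1}{3}:\quad 1-3u_c^2 = 3u_c^2 \;\Rightarrow\; u_c^2 = \tfrac{1}{6}
\;\Rightarrow\; r_c = \frac{GM}{2u_c^2} = 3\,GM,
\]
so that, in the geometrized units used above, $x_c = r_c/M = 3$, consistent with the radius at which the numerical flow becomes transonic.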
Fig. 1. (left) For the "test-fluid" case of radiation accretion, the analytical solution of Ref. (11) is reproduced with an error similar to the stiff matter case. (right) When cases with non-trivial halo mass are calculated, the numerical solution (solid) deviates from the analytical solution (dashed), showing shock waves in the flow as the fluid exceeds the speed of sound prior to reaching the critical point. (Both panels show the flow velocity v against the radial coordinate over total mass, r/M, for w = 1/3.)
To gather a complete picture of the accretion rate for all possible ratios of the surrounding halo mass to total mass, the accretion curve is generated and shown in Fig. 2.
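The comparison curve quoted in the legends of Figs. 2 and 3, an accretion rate scaling as $m_H\,(1-m_H)^2$ with the halo mass fraction $m_H$, is easy to tabulate directly; the short sketch below (plain Python, using nothing from the paper beyond that quoted scaling) also locates its maximum, which lies at $m_H = 1/3$.

```python
import numpy as np

m_h = np.linspace(0.0, 1.0, 1001)      # fraction of the total mass in the halo
rate = m_h * (1.0 - m_h) ** 2          # Bondi-type scaling quoted from Malec et al.
rate /= rate.max()                      # normalise the curve to its peak value

peak = m_h[np.argmax(rate)]
print(f"normalised accretion rate peaks at m_H = {peak:.3f}")   # -> 0.333
```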
Fig. 2. (left) The density at the outer boundary is shown to be directly proportional to the mass of the central accreting body, in line with the work done by Malec et al.12 (Axes: mass of the central object, m_C, versus density at the outer boundary, ρ∞.) (right) The accretion rate calculated by the relativistic accretion model (points) reproduces the results by Malec et al.12 (dashed, ∝ m_H(1 − m_H)²) well. The numerical solution varies near the peak of the accretion curve due to the shock phenomenon in the flow. (Axes: accretion rate normalised to the fitted curve versus the fraction of mass in the halo, m_H, surrounding the central accreting object.)
4. Accretion of Relativistic Degenerate Fermi Gas

The symbiotic scenario described in Ref. (7) relies on the idea that SMBHs grew rapidly in the early universe owing to an abundance of supermassive dark-matter sterile-neutrino balls, which they accreted in a process not limited by the Eddington rate. Thus it would be prudent to use this new relativistic accretion theory to model the growth of a central body feeding on a surrounding fluid composed of a relativistic Fermi gas. To model the accretion of a degenerate Fermi gas, as would be needed to describe dark matter in the form of degenerate sterile neutrinos falling into a central black hole, an appropriate equation of state must be utilised. The equation of state for a degenerate ideal Fermi gas has the parameterisation6 for the pressure P and the energy density ρ
\[ P(x) = k\left\{ x\left(1+x^2\right)^{1/2}\left(\frac{2x^2}{3}-1\right) + \ln\!\left[x+\left(1+x^2\right)^{1/2}\right] \right\}, \qquad (13) \]
\[ \rho(x) = \frac{k}{c^2}\left\{ x\left(1+x^2\right)^{1/2}\left(2x^2+1\right) - \ln\!\left[x+\left(1+x^2\right)^{1/2}\right] \right\}, \qquad (14) \]
where x, the dimensionless Fermi momentum, and the constant k are given by
\[ x = \frac{p_F}{m_\nu c} \qquad \text{and} \qquad k = \frac{m_\nu^4 c^5}{8\pi^2\hbar^3}, \qquad (15) \]
and the mass of the fermion is represented by $m_\nu$. Solving the subsequent system of equations in the relativistic accretion theory yields an accretion rate that is depicted in Fig. 3.
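A minimal numerical sketch of this equation of state is given below (Python; the fermion mass value is a placeholder assumption chosen purely for illustration and is not a value taken from the paper). It evaluates Eqs. (13)–(15) and the resulting sound speed, $c_s^2 = dP/d\rho$, which is the quantity that enters the accretion equations of Sec. 2.

```python
import numpy as np

# Physical constants (SI); the fermion mass below is a placeholder assumption.
HBAR = 1.054_571_8e-34        # J s
C = 2.997_924_58e8            # m / s
M_NU = 1.0e-35                # kg  (hypothetical fermion mass, for illustration only)

K = M_NU**4 * C**5 / (8.0 * np.pi**2 * HBAR**3)   # constant k of Eq. (15)

def pressure(x):
    """Pressure of the degenerate ideal Fermi gas, Eq. (13)."""
    s = np.sqrt(1.0 + x**2)
    return K * (x * s * (2.0 * x**2 / 3.0 - 1.0) + np.log(x + s))

def energy_density(x):
    """Energy density rho * c^2 of the degenerate ideal Fermi gas, cf. Eq. (14)."""
    s = np.sqrt(1.0 + x**2)
    return K * (x * s * (2.0 * x**2 + 1.0) - np.log(x + s))

# Dimensionless Fermi momentum x = p_F / (m_nu c), Eq. (15)
x = np.logspace(-2, 2, 400)
P, eps = pressure(x), energy_density(x)

# Sound speed squared in units of c^2: cs^2 = dP / d(eps)
cs2 = np.gradient(P, x) / np.gradient(eps, x)
print("cs^2 ranges from", cs2.min(), "to", cs2.max())   # tends to 1/3 for x >> 1
```

For small x this describes a non-relativistic degenerate gas with a small sound speed, while for x ≫ 1 the equation of state approaches the radiation-like limit P → ρc²/3 used in the test case of Sec. 3.2.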
5. Conclusion

A coherent relativistic framework that describes the spherical accretion of a self-gravitating fluid around a central accreting body has been presented.

Fig. 3. Accretion rate for a relativistic Fermi gas falling into a central black hole, plotted against the fraction of the mass in the halo, m_H, surrounding the central accreting object. The numerical solution (points) shows similarities to the test case of radiation accretion (dashed, ∝ m_H(1 − m_H)²), albeit with a shifted peak and subtle changes in curve shape.
The test cases of the polytropic fluids of ultra-relativistic stiff matter and radiation have yielded agreeable and feasible numerical results for the test-fluid scenarios,11 which serve as a link to Bondi-type accretion neglecting the back-reaction of the fluid on the surrounding spacetime. Furthermore, the radiation accretion rate compares well to that calculated by similar theories12 which include the all-important self-gravitation of the accreting fluid. Under this scrutiny, the proposed model behaves well and reproduces familiar physics. This solid foundation of tests gives good reason to extend the model to as-yet-unexplored scenarios, such as the accretion of a relativistic Fermi gas, in order to better describe the symbiotic scenario.7

References
1. C. J. Willott, R. J. McLure and M. J. Jarvis, Astrophys. J. 587, L15 (2003).
2. R. A. Lyttleton and F. Hoyle, The Observatory 63, p. 39 (1940).
3. F. Hoyle and R. A. Lyttleton, Proc. Cam. Phil. Soc. 36, p. 424 (1940).
4. H. Bondi and F. Hoyle, Mon. Not. Roy. Astron. Soc. 104, p. 273 (1944).
5. F. C. Michel, Astrophys. Space Sci. 15, p. 153 (1972).
6. S. L. Shapiro and S. A. Teukolsky, Black Holes, White Dwarfs and Neutron Stars (Wiley-Interscience, New York).
7. M. C. Richter, G. B. Tupper and R. D. Viollier, JCAP 12, p. 015 (2006), astro-ph/0611552.
8. T. Asaka, S. Blanchet and M. Shaposhnikov, Phys. Lett. B 631, 151 (2005).
9. R. C. Tolman, Relativity, Thermodynamics and Cosmology (Clarendon Press, Oxford).
10. J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. 55, p. 374 (1939).
11. E. Babichev, V. Dokuchaev and Y. Eroshenko, Phys. Rev. Lett. 93, p. 021102 (2004).
12. J. Karkowski, B. Kinasiewicz, P. Mach, E. Malec and Z. Świerczyński, Phys. Rev. D 73, p. 021503(R) (2006).
List of Participants
Aharonian, Felix A. Max-Planck-Institut fuer Kernphysik D-69117 Heidelberg GERMANY Tel: +49(0)6221-516 485 Email: [email protected] http://www.mpi-hd.mpg.de/ aharon Allen, Roland E. Department of Physics Texas A&M Univ., College Station Texas 77843-4242, USA Tel: 1-979-845-4341 Fax: 1-979-845-2590 Email: [email protected] http://faculty.physics.tamu.edu/allen Antoniadis, Ignatios Physics Department CERN - Theory Division CH-1211 Geneva 23 SWITZERLAND Tel: + 41 22 767 3201 Fax: + 41 22 767 3850 Email: [email protected] Appelquist, Thomas Department of Physics, Sloane Laboratory Yale University, New Haven, Connecticut 06520, USA Email: [email protected] Antusch, Stefan Max-Planck-Institut f¨ ur Physik Werner-Heisenberg-Institut F¨ ohringer Ring 6 80805 M¨ unchen GERMANY Email: [email protected]
Aufmuth, Peter Albert-Einstein-Institut Callinstrasse 38, 30167 Hannover GERMANY Tel: + 49 0511-7622386 Fax: +49-511-762-2784 Email: [email protected] www.geo600.uni-hannover.de/ aufmuth/ Bernabei, Rita Dipartimento di Fisica Universita’ di Roma ”Tor Vergata” and INFN - Sezione di Roma Tor Vergata Via della Ricerca Scientifica, 1 I-00133 Roma, ITALY Tel: +39-0672594542 Fax: +39-0672594825 Email: [email protected] http://people.roma2.infn.it/dama/ Beckwith, Andrew American Institute of Beam Energy Propulsion 71 Lakewood Court, Apt. 7 Moriches, NY 11955 USA Email: ab [email protected] Bilic, Neven Rudjer Boskovic Institute P.O.Box 180, Bujenicka 54 10002 ZAGREB CROATIA Tel: + 385-1-4680 (Secretary:+385-1-4561) Fax: + 385-1-4680 223 Email: [email protected] Bravar, Alessandro Department of Nuclear and Particle Physics University of Geneva Geneve 23, CH-1211 SWITZERLAND Email: [email protected]
Bross, Alan David MS 231 Fermilab Batavia IL 60510 USA Tel: (630)840-4880 (office) Tel: (630)667-3061 (Cell) Fax: (630)840-3061 Email: [email protected] Byrne, Jim Department of Physics and Astronomy University of Sussex Falmer, Brighton, East Sussex, BN1 9QH UNITED KINGDOM Email: [email protected] Tel: 44-1273-678557 Fax: 44-1273-503610 Capelli, Silvia Physics Departement and INFN Universit` a di Milano Bicocca Milano, 20126 ITALY Email: [email protected] www.unimib.it Clarkson, Chris Centre for Astrophysics, Cosmology and Gravitation and Department of Mathematics and Applied Mathematics University of Cape Town, Rondebosch 7701 SOUTH AFRICA Email: [email protected] Cooray, Asantha University of California Irvine, CA92697 USA Email: [email protected]
Costantini, Silvia Department of Physics and Astronomy University of Ghent, Ghent BELGIUM and CERN-PH-EP 354-2002 CH-1211 Geneva 23 SWITZERLAND Email: [email protected] Tel: +41-22-767-7964 Fax: +41-22-767-8940 D’Angelo, Davide Istituto Nazionale di Fisica Nucleare sez. di Milano via Celoria 16 - 20133 Milano ITALY Tel: +390250317369 Fax: +390250317617 Email: [email protected] skype: davidedangelo Desiati, Paolo IceCube Research Center University of Wisconsin Madison, WI 53703 USA Tel: +1 608 890 0546 Fax: +1 608 262 2309 Email: [email protected] http://icecube.wisc.edu Feinstein, Fabrice Laboratoire de Physique Th´eorique et Astroparticules Universit´e Montpellier II, CNRS/IN2P3 CC 70, Place Eug`ene Bataillon F-34095 Montpellier Cedex 5 FRANCE Tel: +33 4 67 14 9330 Fax: +33 4 67 14 4190 Email: [email protected] Mobile : +33 6 31 78 84 79
Flaminio, Vincenzo Physics Department University of Pisa and INFN-Pisa Pisa, 56127 ITALY Email: [email protected] Hessberger, Fritz Peter Department ’Superheavy Elements’ GSI Helmholtzzentrum f¨ ur Schwerionenforschung mbH Planckstraße 1 64291 Darmstadt GERMANY and Section ’Superheavy Elements - Physics’ Helmholtzinstitut Mainz 55099 Mainz GERMANY Tel: ++49 (0)6159-712735 Fax: ++49 (0)6159-712902 Email: [email protected] http://www.gsi.de/forschung/kp/kp2/ship/people/hessberger.html
http://www.gsi.de/forschung/kp/kp2/ship/index.html Huh, Ji-Haeng Department of Physics and Astronomy Seoul National University Gwanakro Sillim-dong, Gwanak-gu Seoul, 151-747 KOREA Email: [email protected] Tel: +82-2-880-6587 Guyot, Claude SPP/IRFU, C.E. Saclay 91191 Gif-sur-Yvette Cedex FRANCE Tel: 33 169085574 Email: [email protected] Giuliani, Andrea Universita‘ dell’Insubria and INFN - Sezione di Milano-Bicocca Via Valleggio 11 - 22100, Como
ITALY Tel: +39 031 238 6217 Tel: mob. +39 347 3200638 Fax: +39 031 238 6209 Email: [email protected] www.uninsubria.it Skype: andrea.ernesto Gladyshev, Alexey V. Bogoliubov Laboratory of Theoretical Physics Joint Institute for Nuclear Research 6 Joliot-Curie, 141980 Dubna Moscow Region RUSSIA and Institute of Theoretical and Experimental Physics 25 Bolshaya Cheremushkinskaya 117218 Moscow RUSSIA Email: [email protected] Kawasaki, Takeo Niigata University, Department of Physics Ikarashi Ni-no-cho 8050, Nishi-ku Niigata 950 2181 JAPAN Tel: +81-25-262-6124 Fax: +81-25-262-6138 Email: [email protected] Kazakov, Dmitri Bogoliubov Laboratory of Theoretical Physics Joint Institute for Nuclear Research 6 Joliot-Curie, 141980 Dubna Moscow Region RUSSIA Email: [email protected] Kießig, Clemens P. Max-Planck-Institut f¨ ur Physik (Werner-Heisenberg-Institut) F¨ ohringer Ring 6, D-80805 M¨ unchen GERMANY Tel: ++49-89-32354-318 Tel: ++49-89-32354-304
Email: [email protected] http://wwwth.mppmu.mpg.de/members/ckiessig http://mpp.mpg.de/ ckiessig/ Kelley, John Department of Astrophysics, IMAPP Radboud University Nijmegen Nijmegen, The NETHERLANDS Tel: +31 24 365 2135 Fax: +31 24 365 2807 Email: [email protected] http://www.astro.ru.nl/ jkelley Kim, J.E. Department of Physics and Astronomy and Center for Theoretical Physics Seoul National University Seoul 151-747 KOREA Email: [email protected] Klapdor-Kleingrothaus, Hans Volker Stahlbergweg 12 74931 Lobbach, GERMANY Tel: (49) 6226-41088 Tel: (49) 157 7974 3109 Email: [email protected] Home-page: http://www.klapdor-k.de Skype: genius19412 Konrad, Gertrud Institut f¨ ur Physik Johannes Gutenberg-Universit¨ at Mainz Staudingerweg 7 55099 Mainz GERMANY Tel: ++49 (0)6131 39-23675 Fax: ++49 (0)6131 39-23428 Email: [email protected] Home-page: www.quantum.physik.uni-mainz.de
Krivosheina, Irina Vladimirovna Radiophysical Research Institute (NIRFI) ul. Bolshaja-Pecherskaja 25 603005, Nishnij-Novgorod RUSSIA Email: [email protected] Home-page: http://www.klapdor-k.de Skype: evidencebb Kutschera, Walter Faculty of Physics, Isotope Research University of Vienna Vienna Environmental Research Accelerator (VERA) Waehringer Strasse 17 A-1090 Vienna AUSTRIA Tel: + 43-1-4277-51700 Fax: + 43-676-8437-66620 Email: [email protected] Law, Sandy S.C. Department of Physics Chung-Yuan Christian University 200 Chung-Pei Rd. Chung-Li, 320 TAIWAN Tel: +886-3-265-3250 Email: [email protected] Leubner, Manfred P. Institute for Astro- & Particle Phys. University of Innsbruck Technikerstr. 25, A-6020 Innsbruck, AUSTRIA Tel: +43-(0)512-507-6054 Fax: +43-(0)512-507-2923 Email: [email protected] http://homepage.uibk.ac.at/∼c706102/ Lindner, Manfred Max-Planck-Institut f¨ ur Kernphysik Saupfercheckweg 1 69117 Heidelberg, GERMANY Email: [email protected]
Lin, Shin-Ted Institute of Physics Academia Sinica NanKang Taipei, TAIWAN 11529 R.O.C. Tel: +886-2-2789-8943 Fax: +886-2-2788-9828 Email: [email protected] Losada, Marta Centro de Investigaciones Universidad Antonio Nari˜ no Cra 3 Este No 47A-15, Bloque 4 Bogot´ a, COLOMBIA Email: [email protected] www.uan.edu.co Mankoˇ c Borˇ stnik, Norma Susana Department of Physics, FMF University of Ljubljana Jadranska 19, 1000 Ljubljana SLOVENIA Email: [email protected] http://www.fmf.uni-lj.si Mazini, Rachid Institute of Physics, Academia Sinica CERN,CH-1211 Geneva 23 SWITZERLAND Email: [email protected] McIntyre, Peter Department of Physics & Astronomy, Texas A&M University College Station, TX 77845, USA Email: [email protected] Tel: 979-255-5531 www.tamu.edu Miramonti, Lino Physics Department Milan University & INFN Via Celoria 16, Milano 1-20133 ITALY Email: [email protected]
Mondal, Naba Tata Institute of Fundamental Research, Mumbai INDIA Email: [email protected] Moulin, Emmanuel CEA - Saclay, DSM/IRFU/SPP Gif-sur-Yvette, 91191 FRANCE Tel: +33 (0)1 69082960 Tel: 6428 Email: [email protected] www.irfu.cea.fr Nardulli, Jacopo Science and Technology Facility Council Rutherford Appleton Laboratory Didcot, OX 11, 0QX UNITED KINGDOM and Les Jardins de Chevry 18.32 01170 Chevry, FRANCE Email: [email protected] Email: [email protected] Tel: +33950059118 Fax: +41227677379 Neubert, Matthias Institut f¨ ur Physik (WA THEP) Johannes-Gutenberg-Universit¨ at 55099 Mainz GERMANY Tel: +49613139 23 681 Fax: +49613139 24 611 Tel: [email protected] http://wwwthep.physik.uni-mainz.de/site/people/neubert/ Niedner, Malcolm Bowen Laboratory for Exoplanets and Stellar Astrophysics NASA Goddard Space Flight Center Greenbelt, MD 20771 USA Tel: (301) 286-5821 Fax: (301) 286-1753 Email: [email protected]
Novella, Pau CIEMAT (Fisica Altas Energas) Av. Complutense 22 28040, Madrid SPAIN Email: [email protected] Tel: 0034 91 496 25 39 Skype: paunoga Ohlsson, Tommy Department of Theoretical Physics Royal Institute of Technology (KTH) AlbaNova University Center SE-106 91, Stockholm SWEDEN Tel: +46-8-5537 8161 Fax: +46-8-5537 8216 Email: [email protected] Osipowicz, Alexander University of Appl. Sciences Marquardstr. 35 36039 Fulda GERMANY Tel: +49 661 9640 556 Fax: +49 661 9640 559 Email: [email protected] Email: [email protected] http://www.hs-fulda.de Patrizzii, Laura INFN and Physics Department of the University of Bologna v.le Berti Pichat 6/2 I-40127 Bologna, ITALY Email: [email protected] www.bo.infn.it Popeko, Andrej G. Flerov Laboratory of Nuclear Reactions Joint Institute for Nuclear Research Joliot-Curie str., 6, Dubna, 141980 RUSSIA [email protected] http://flerovlab.jinr.ru/flnr/index.html
Prodanov, Emil M. School of Mathematical Sciences Dublin Institute of Technology IRELAND Tel: [email protected] www.maths.dit.ie Roy, Probir Saha Institute of Nuclear Physics C/O Director’s office block AF, Sector I Bidhannagar Kolkata 700 064 INDIA Tel: 033-2337 5345-5349, extn 1425 Fax: 033-2337 4637 Email: [email protected] Regis, Marco Astrophysics, Cosmology and Gravity Centre (ACGC) Department of Mathematics and Applied Mathematics, University of Cape Town Rondebosch 7701, Cape Town SOUTH AFRICA and Centre for High Performance Computing 15 Lower Hope St, Rosebank Cape Town SOUTH AFRICA Email: [email protected] Richter, Max Department of Physics University of Cape Town Private Bag X3 Rondebosch 7701 Cape Town SOUTH AFRICA Tel: +27216503344 Fax: +27865512294 Email: [email protected] Skype: maximilian c Rueckl, Reinhold Institute for Theoretical Physics and Astrophysics, University of W¨ urzburg Am Hubland, D-97074 W¨ urzburg GERMANY Tel: +49 931 31-85878
Fax: +49 931 31-87138 Email: [email protected] http://theorie.physik.uni-wuerzburg.de/TP2/ Schlosser, Wolfhard Astronomisches Institut der Ruhr-Universit¨ at Bochum D-44780 Bochum GERMANY Email: [email protected] Signorelli, Giovanni INFN Sezione di Pisa Largo B. Pontecorvo 3 Edif. C I-56127 Pisa ITALY Email: [email protected] Tel: +39 050 2214 425 Fax: +39 050 2214 317 Skype: g signorelli Shaposhnikov, Mikhail Institut de Th´eorie des Ph´enom`enes Physiques EPFL, CH-1015 Lausanne SWITZERLAND Email: [email protected] ˇ Simkovic, Fedor Bogoliubov Laboratory of Theoretical Physics JINR Dubna, 141 960 Dubna Moscow region, RUSSIA Email: [email protected] Tel: + 7 49621 65084 Fax: + 7 49621 62473 http://theor.jinr.ru and Department of Nuclear Physics and Biophysics, Comenius University 842 48 Bratislava, SLOVAKIA Email: [email protected] Tel: + 421 2 60295543 Fax: + 421 2 65412305 http://www.fmph.uniba.sk/ skype: fedordbd
Stanco, Luca INFN, Via Marzolo, 8 Padova I-35131 ITALY Tel: +39-049-827-7076 Tel: +39-347-2545-250 Email: [email protected] www.pd.infn.it Stephenson, Jr. Gerard J. Department of Physics and Astronomy MSC 07 4220 1 University of New Mexico Albuquerque NM 87131-0001 USA Tel: 505-277-7389 Tel: 505-296-9238 (Home) Tel: 505-269-2928 (Cell) Fax: 505-277-1520 Email: [email protected] Email: [email protected] Stewart, Ewan Davidson Korea Advanced Institute of Science & Technology (KAIST) 373-1 Kusong-dong, Yusong-gu, Taejon South Korea Email: [email protected] Suhonen, Jouni Department of Physics University of Jyv¨ askyl¨ a P.O. Box 35 (YFL) FI-40014 University of Jyv¨ askyl¨ a FINLAND Tel: +358-142602380 Fax: +358-142602351 Email: [email protected] www.jyu.fi/fysiikka
Takubo, Yosuke Department of Physics, Tohoku University Sendai, Miyagi, 980-8578 JAPAN Email: [email protected] Tel: +81-022-795-5730 Fax: +81-022-795-6729 Tupper, Gary B. Department of Physics University of Cape Town Private Bag X3 Rondebosch 7701 Cape Town SOUTH AFRICA Email: [email protected] Urban, Federico R. Department of Physics & Astronomy University of British Columbia 6224 Agricultural Road Vancouver B.C. V6T 1Z1 CANADA Email: [email protected] Tel: +1 (604) 822-3853 Fax: +1 (604) 822-5324 Viollier, Raoul D. Department of Physics University of Cape Town Private Bag X3 Rondebosch 7701 Cape Town SOUTH AFRICA Email: [email protected] Volkas, Raymond R. FAA School of Physics The University of Melbourne Victoria 3010 AUSTRALIA Tel: +61 3 8344 5464 (office) Fax: +61 3 9347 4783 (fax) Email: [email protected]
Ward, Bennie F.L. Department of Physics, Baylor University, Waco, TX 76798 USA Email: BFL [email protected] Home-page: www.baylor.edu Wolschin, Georg Institut f¨ ur Theoretische Physik Universit¨ at Heidelberg, Philosophenweg 16 69120, Heidelberg GERMANY Email: [email protected] Home-page: wolschin.uni-hd.de Wolf, Peter LNE-SYRTE, CNRS UMR8630 UPMC Observatoire de Paris 61 Av. de l’Observatoire 75014 Paris FRANCE Tel: +33 1 40512324 Fax: +33 1 43255542 Email: [email protected] Yasuda, Osamu Department of Physics Tokyo Metropolitan University 1-1 Minami-Osawa, Hachioji Tokyo 192-0397 JAPAN Tel: +81-426-77-2522 Fax: +81-426-77-2483 Email: [email protected] Zeeshan, Ahmed California Institute of Technology 1200 E. California Blvd. MC 367-17 Pasadena CA 91125 USA Tel: +1-626-395-2635 Fax: +1-626-395-2366 Email: [email protected]
Authors Index
Adam J. 406 Allen R.E. 199 Alsing P.M. 471 ANTARES collaboration 347 Antoniadis I. 155 Antusch S. 177 Appelquist T. 3 ATLAS collaboration 26, 43 Asano M. 91 Asakawa E. 91 Aufmuth P. 707 Baeßler S. 660 Bai X. 406 Baldini A. 406 Baracchini E. 406 Barchiesi A. 406 Baumann J.P. 177 Beckwith A. 491 Bemporad C. 406 Belli P. 511 Bernabei R. 511 Bilic N. 503 Boca G. 406 Borexino collaboration 362 Boyarsky A. 155 Byrne J. 647 Bravar A. 337 Bross A.D. 82 Cappella F. 511 Capelli S. 286 Cattaneo P.W. 406 Cavoto G. 406 CDMS-II collaboration 530 Cecchet G. 406 Cei F. 406 Cerri C. 406
Cerulli R. 511 Clarkson C. 455 CMS collaboration 17 Costantini S. 17 CUORE collaboration 286 CUORICINO collaboration 286 Dai C.J. 511 Dafinei I. 256 d'Angelo A. 511 d'Angelo D. 362 De Bari A. 406 Dellinger F. 633 Desiati P. 376 De Gerone M. 406 Doke T. 406 Dussoni S. 406 Dutta K. 177 Double Chooz collaboration 318 Egger J. 406
Feinstein F. 579 Ferroni F. 256 Fiasson A. 579 Flaminio V. 347 Fujii K. 91 Galli L. 406 Gallucci G. 406 Gatti F. 406 Giacomelli G. 417 Giuliani A. 256 Gladyshev A.V. 60 Glück F. 660 Golden B. 406 Goldman T. 471
Grassi M. 406 Grigoriev D.N. 406 Guyot C. 26 Haruyama T. 406 He H.L. 511 Hessberger F.P. 675 Heil W. 660 Hildebrandt M. 406 Hisamatsu Y. 406 Huh J.-H. 597 IceCube collaboration 376 Ignatov F. 406 Incicchitti A. 406 Iwamoto T. 406
Kaneko D. 406 KATRIN collaboration 261 Kawasaki T. 318 Kazakov D.I. 60 Kelley J.L. 571 Kettle P.-R. 406 Khazin B.I. 406 Kießig C.P. 146 Kim J.E. 166 Kiselev O. 406 Klapdor-Kleingrothaus H.V. 231 Konrad G. 660 Korenchenko A. 406 Kostka P. M. 177 Kravchuk N. 406 Krivosheina I.V. 231 Kuang H.H. 511 Kusano T. 91 Kutschera W. 633 Law S.S.C. 135 Leubner M.P. 482 LHCb collaboration 54 Liebl J. 633 Lin S.-T. 537 Losada M. 122
Ma X.H. 511 Mankoč Borštnik N.S. 543 Maki A. 406 Matsumoto S. 91 Mazini R. 43 McIntyre P. 100, 112 MEG collaboration 406 McKellar B.H.J. 471 Mihara S. 406 Molzon W. 406 Montecchia F. 511 Moulin E. 557 Mori T. 406 Mzavia D. 406 Mustonen M.T. 267 Natori H. 406 Nardò R. 406 Nardulli J. 54 Nicolò D. 406 Niedner M.B. 605 Nishiguchi H. 406 Nishimura Y. 406 Novella P. 314 Nozzoli F. 511 Ohlsson T. 496
Ootani W. 406 Osipowicz A. 261 Panareo M. 406 Papa A. 406 Patrizii L. 417 Pazzi R. 406 Paucar M.G. 60 Pierre Auger collaboration 571 Piredda G. 406 Pirro S. 256 Plümacher M. 146 Počanić D. 660 Popeko A.G. 689 Popov A. 406
Previtali E. 256 Prodanov E.M. 432 Prosperi D. 511 Regis M. 588 Renga F. 406 Richter M.C. 720 Ritt S. 406 Rossella M. 406 Roy P. 293 Ruchayskiy O. 155 Rückl R. 211
Sahnoun Z. 417 Sasaki R. 91 Sattarov A. 100, 112 Sawada R. 406 Schlosser W. 625 Schneebeli M. 406 Sergiampietri F. 406 Shaposhnikov M. 219 Sheng X.D. 511 Signorelli G. 406 Šimkovic F. 276 Stanco L. 325 Stephenson G.J. 471 Steier P. 633 Suhonen J. 267 Suzuki S. 406 T2K collaboration 337 TEXONO collaboration 537
Uchiyama Y. 406 Urban F.R. 441 Valle R. 406 Viollier R.D. 503, 720 Voena C. 406 Volkas R.R. 393
Wang R.G. 511 Ward B.F.L. 188 Wolschin G. 69 Wong H.T. 537 Xiao F. 406
Yamada S. 406 Yamamoto A. 406 Yamamoto H. 91 Yamashita S. 406 Yasuda O. 300 Ye Z.P. 511 Yudin Yu.V. 406 Zanello D. 406 Zeeshan A. 530