CORRELATIONS & FLUCTUATIONS IN QCD

edited by
N G Antoniou, F K Diakonos & C N Ktorides
University of Athens, Greece

Proceedings of the 10th International Workshop on Multiparticle Production
Crete, Greece, 8 - 15 June 2002
World Scientific New Jersey London Singapore Hong Kong
Published by World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: Suite 202, 1060 Main Street, River Edge, NJ 07661
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
CORRELATIONS AND FLUCTUATIONS IN QCD
Proceedings of the 10th International Workshop on Multiparticle Production
Copyright © 2003 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN 981-238-455-3
Printed in Singapore by B & JO Enterprise
ORGANIZING COMMITTEE Nikos Antoniou
University of Athens Athens, Greece
Fotis Diakonos
University of Athens Athens, Greece
Christos Ktorides
University of Athens Athens, Greece
Martha Spyropoulou-Stassinaki
University of Athens Athens, Greece
SPONSORS Hellenic Ministry of Education Hellenic Ministry of Culture University of Athens Hellenic Organization of Tourism (EOT) Bullet S.A.
PREFACE The 2002 International Workshop on Multiparticle Production, tenth in the series, was held in Crete, Greece from June 8 through June 15, 2002. It was hosted by the Department of Nuclear and Particle Physics of the University of Athens and its specific scientific topic was: “Correlations and Fluctuations in Quantum Chromodynamics”. The first meeting in the series was held in Aspen (1986) and subsequently, the workshops have been held in Jinan (1987), Perugia (1988), Santa Fe (1990), Ringberg (1991), Cracow (1993), Nijmegen (1996), Matrahaza (1998), Torino (2000) and now Crete (2002). According to tradition, the Workshop in Crete was a meeting of a small number of researchers (approximately 50): experimentalists and theorists, gathered together with the aim to present their latest findings in the field of multiparticle production and to discuss new ideas, measurements and methods in our effort to understand the complex structure of the QCD vacuum. New phenomena and novel theoretical developments, ranging from critical fluctuations in the QCD phase diagram to quantum correlations (HBT) in Z⁰ decays and from nonextensive entropy to chaotic field theory, were discussed thoroughly in the inspiring environment of Istron Bay, on the island of Crete. An invited talk on the discovery of neutrino masses, given by Professor Norbert Schmitz, added to the programme of the Workshop a very interesting presentation of the most important development in Particle Physics during the last few years. The smooth as well as pleasant running of the Workshop was in large part due to the efforts and devotion of Dr. Maria Diakonou and Mrs. Heleni Holeva. Finally, the enthusiasm, skill and patience of our students and collaborators have contributed greatly to the success of the meeting. These Proceedings are dedicated to the memory of Bo Andersson. Athens, December 2002 Nikos Antoniou Fotis Diakonos Christos Ktorides
CONTENTS

Preface  vii

Bo Andersson (1937-2002)
W. Kittel  1

The Discovery of Neutrino Masses
N. Schmitz  5

Sessions on Correlations and Fluctuations in e+e-, hh Collisions
Chairpersons: C. N. Ktorides, B. Buschbeck, A. Giovannini, L. Liu, and I. Dremin

Scaling Property of the Factorial Moments in Hadronic Z Decay
G. Chen, F. Hu, W. Kittel, L. S. Liu, and W. J. Metzger  23

Rapidity Correlations in Quark Jets and the Study of the Charge of Leading Hadrons in Gluon and Quark Fragmentation
B. Buschbeck and F. Mandl  33

Genuine Three-Particle Bose-Einstein Correlations in Hadronic Z Decay
J. A. Van Dalen, W. Kittel, and W. J. Metzger  43

Like-Sign Particle Genuine Correlations in Z⁰ Hadronic Decays
E. K. G. Sarkisyan  53

Measurement of Bose-Einstein Correlations in e+e- → W+W- Events at LEP
J. A. Van Dalen, W. Kittel, W. J. Metzger, and S. Todorova-Nova  63

On the Scale of Visible Jets in High Energy Electron-Positron Collisions
L. S. Liu, G. Chen, and J. H. Fu  73

Experimental Evidence in Favour of Lund String with a Helix Structure
S. Todorova-Nova  79

Bose-Einstein Correlations in the Lund Model for Multijet Systems
S. Mohanty  89

Power Series Distributions in Clan Structure Analysis: New Observables in Strong Interactions
R. Ugoccioni and A. Giovannini  99

Scale Factors from Multiple Heavy Quark Production at the LHC
A. Del Fabbro  108

On Truncated Multiplicity Distributions
I. M. Dremin  115

Forward-Backward Multiplicity Correlations in e+e- Annihilation and pp Collisions and the Weighted Superposition Mechanism
A. Giovannini and R. Ugoccioni  123

Soft Photon Excess Over the Known Sources in Hadronic Interactions
M. Spyropoulou-Stassinaki  132

A Study of Soft Photon Production in pp Collisions at 450 GeV/c at CERN-SPS
A. Belogianni, W. Beusch, T. J. Brodbeck, F. S. Dzheparov, B. R. French, P. Ganoti, J. B. Kinson, A. Kirk, V. Lenti, I. Minashvili, V. Perepelitsa, N. Russakovich, A. V. Singovsky, P. Sonderegger, M. Spyropoulou-Stassinaki, and O. Villalobos-Baillie  143

QCD and String Theory
G. K. Savvidy  154

Are Bose-Einstein Correlations Emerging from Correlations of Fluctuations?
O. V. Utyuzh, G. Wilk, M. Rybczynski, and Z. Wlodarczyk  162

Session on Phase Transitions in QCD
Chairperson: N. Schmitz

Theory versus Experiment in High Energy Nucleus Collisions
R. D. Pisarski  175

Prospects of Detecting the QCD Critical Point
N. G. Antoniou, Y. F. Contoyiannis, F. K. Diakonos, and A. S. Kapoyannis  190

Locating the QCD Critical Point in the Phase Diagram
N. G. Antoniou, F. K. Diakonos, and A. S. Kapoyannis  201

Baryonic Fluctuations at the QCD Critical Point
K. S. Kousouris  213

Non-Equilibrium Phenomena in the QCD Phase Transition
E. N. Saridakis  225

Sessions on Correlations and Fluctuations in Heavy Ion Collisions
Chairpersons: G. Wilk and T. Trainor

Correlations and Fluctuations in Strong Interactions: A Selection of Topics
A. Bialas  239

Long Range Hadron Density Fluctuations at Soft pT in Au + Au Collisions at RHIC
M. L. Kopytine  249

The Correlation Structure of RHIC Au-Au Events
T. A. Trainor  259

Particle Spectra and Elliptic Flow in Au + Au Collisions at RHIC
S. Margetis  269

A Model for the Color Glass Condensate versus Jet Quenching
A. P. Contogouris, F. K. Diakonos, and P. K. Papachristou  279

Wavelet Analysis in Pb + Pb Collisions at CERN-SPS
G. Georgopoulos, P. Christakoglou, A. Petridis, and M. Vassiliou  282

Heavy Quark Chemical Potential as Probe of the Phase Diagram of Nuclear Matter
P. G. Katsas, A. D. Panagiotou, and T. Gountras  293

Gap Analysis for Critical Fluctuations
R. C. Hwa  304

Session on Complexity and Strong Interactions
Chairperson: R. C. Hwa

Turbulent Fields and their Recurrences
P. Cvitanovic and Y. Lan  313

Nonextensive Statistical Mechanics - Applications to Nuclear and High Energy Physics
C. Tsallis and E. P. Borges  326

Traces of Nonextensivity in Particle Physics Due to Fluctuations
G. Wilk and Z. Wlodarczyk  344

Chaos Criterion and Instanton Tunneling in Quantum Field Theory
V. I. Kuvshinov and A. V. Kuzmin  354

Session on Correlations and Fluctuations (Methods and Applications)
Chairperson: M. Spyropoulou-Stassinaki

Brief Introduction to Wavelets
I. M. Dremin  369

Multiparticle Correlations in Q-Space
H. C. Eggers and T. A. Trainor  386

Fluctuations in Human Electroencephalogram
R. C. Hwa and T. C. Ferree  396

List of Participants  405
BO ANDERSSON (1937-2002)

Official obituaries have already been published and a little conference has been held in honor of Bo just a couple of days before this Workshop, thus allowing me to try and sketch this outstanding personality here from a few rather personal impressions. When once being shown this little cartoon on the left visualizing the “Eternal Search”, Bo stood startled for a while, but then broke out “That’s me, but Suzy, you know, that’s me!” Can you hear him!? And yes, it indeed resembles him and his never exhausted interest in experimental observations and struggle for deep understanding. The typical charm he put into this confession contained both his sense of humor, that allowed him to laugh about himself, and, at the same time, his being absolutely serious about it. I knew him before, but I think I got to appreciate Bo as a friend in 1987, when becoming overwhelmed by China and the Chinese together with him on a post-conference tour, in fact after the
second in this series of International Workshops on Multiparticle Production. We saw a lot and talked a lot, and both did not remain limited to Physics. Besides all that very serious experience, at one occasion, climbing the holiest mountain Tai Shan and looking down to the grave of Kung Futse in deep respect, we came across the most secular sign depicted in Fig. 2, obviously trying to suggest not to litter. Bo really got the kick out of this and could not stop laughing and insisted to also get this picture taken.

Figure 2.

However, he himself, together with his colleagues and students, DID “put his papers about”, and among his countless ones are, according to the definition of a particular index, thirteen “well-known” ones (number of citations 50 or more) plus eleven “famous” ones (100 or more) plus ..., well, 500 or more would be “renowned”, but for 1000 or more an appropriate superlative still has to be invented: on a 1983 paper, Bo with Gösta, Gunnar and Torbjörn hit the 1400! Bo luckily refrained from disposing them where the sign on Tai Shan perhaps still suggests, but instead collected them into his most precious book on the Lund Model. Whenever he came and talked at this series of Multiparticle Production workshops (or at any other occasion, for that matter), it was another step of a giant: FRITIOF color dipole dynamics (87), Bose-Einstein Correlations in the Lund Model (95) (Bo: “the most difficult work I have ever participated in”), the helix-shaped color force field (98), and the recent reformulation of
the original string model in terms of the so-called directrix that stretches along the partonic energy-momenta (2000), later turned into a Monte-Carlo code and so beautifully completed here (2002) by his brilliant student Sandipan Mohanty. How did he manage? “Well, it is simple, you know! You just have to attract good people and then force them to do what THEY want.” Oh, ... but we know it takes that charismatic personality of Bo to attract those good people. However, Bo did not only talk, he also did his share of listening. In fact, he was the greatest listener, at least among the particle theorists I know, and just look at Fig. 5 to see how deeply dedicated he could in fact listen.
Figure 5.

The way I will remember him was one of my last encounters with him, at the 2001 School in Zakopane. I had just finished my Bose-Einstein lecture with the L3 results on three-particle correlations granting a phase consistent with fully chaotic pion emission, at least in conventional interpretation, but not immediately evident from his more recent view. To cool down, I was walking through the little park behind the hotel when spotting Bo smoking his pipe and deeply in thoughts on a balcony above. He, nevertheless, noticed me down there, took his pipe out of his mouth and called “Wolfram, why is it always you who is sending me off with new homework?” “Well, you know”, he added after some protest from my side, “perhaps you and Eddi and Brigitte and recently also Šárka”. I consider his continuous confrontation of his ideas with our data the most beautiful compliment in my life as a physicist. His life so sadly proved too short to allow him to complete the answer this time. His students will! In the meantime, let me close these few lines with two quotations of an International Evaluation of Elementary Particle Physics in Sweden (C. Callan et al., NFR, Nov. 1988) to which I had the honor and the pleasure to contribute, and which are valid today as they were then: “The phenomenological impact of the work of Andersson and his coworkers at Lund has been nothing short of amazing”. “The small group of Andersson and Gustafson has attracted a particularly large number of graduate students ... well trained to confront ideas with facts.” Beyond all that, Bo was passionately concerned with fundamental questions of life, desperate questions without answers. He was a fighter, sometimes lonely, but from time to time he was able to open himself to give you the privilege of sharing.
Wolfram Kittel
Illustrations: S.K.-Habock
THE DISCOVERY OF NEUTRINO MASSES
NORBERT SCHMITZ
Max-Planck-Institut für Physik, Föhringer Ring 6, D-80805 München
E-mail: [email protected]

The recent observation of neutrino oscillations with atmospheric and solar neutrinos, implying that neutrinos are not massless, is a discovery of paramount importance for particle physics and particle astrophysics. This invited lecture discusses - hopefully in a way understandable also for the non-expert - the physics background and the results mainly from the two most relevant experiments, Super-Kamiokande and SNO. It also addresses the implications for possible neutrino mass spectra. We restrict the discussion to three neutrino flavours (ν_e, ν_μ, ν_τ), not mentioning a possible sterile neutrino.
1. Introduction

Until recently one of the fundamental questions in particle physics has been whether neutrinos have a mass (m_ν > 0, massive neutrinos) or are exactly massless (like the photon). This question is directly related to the more general question whether there is new physics beyond the Standard Model (SM): In the minimal SM, neutrinos have fixed helicity, always H(ν) = -1 and H(ν̄) = +1. This implies m_ν = 0, since only massless particles can be eigenstates of the helicity operator. m_ν > 0 would therefore transcend the simple SM. Furthermore, if m_ν is of the order of 1 - 10 eV, the relic neutrinos from the Big Bang (n_ν ≈ 340/cm³) would noticeably contribute to the dark matter in the universe. Direct kinematic measurements of neutrino masses, using suitable decays, have so far yielded only rather loose upper limits, the present best values being

  m(ν_e) < 3 eV              (from tritium β decay)
  m(ν_μ) < 190 keV (90% CL)  (from π⁺ decay)
  m(ν_τ) < 18.2 MeV (95% CL) (from τ decays).    (1)
Another and much more sensitive access to neutrino masses is provided by neutrino oscillations². They allow, however, to measure only differences of masses squared, δm²_ij ≡ m²_i - m²_j, rather than masses directly. For completeness we summarize briefly the most relevant formulae for neutrino oscillations in the simplest case, namely in the vacuum and for only two flavours (ν_a, ν_b), e.g. (ν_e, ν_μ) (two-flavour formalism). The generalization to three (or more) flavours is straight-forward in principle, but somewhat more involved in practice, unless special cases are considered, e.g. m_1 ≈ m_2 ≪ m_3 ¹.
Figure 1. Scheme of a neutrino oscillation experiment: a ν_a is produced at A and detected at B after travelling a baseline L.
The two flavour eigenstates (ν_a, ν_b) are in general related to the two mass eigenstates (ν_1, ν_2) with masses (m_1, m_2) by a unitary transformation:

  ν_a =  cos θ · ν_1 + sin θ · ν_2
  ν_b = -sin θ · ν_1 + cos θ · ν_2    (2)
where θ is the mixing angle. If m_1 ≠ m_2, the two mass eigenstates evolve differently in time, so that for θ ≠ 0 the given original linear superposition of ν_1 and ν_2 changes with time into a different superposition. This means that flavour transitions (oscillations) ν_a → ν_b and ν_b → ν_a can occur with certain time-dependent oscillatory probabilities. In other words (Fig. 1): If a neutrino is produced (or detected) at A as a flavour eigenstate ν_a (e.g. ν_μ from π⁺ → μ⁺ν_μ), it is detected, after travelling a distance (baseline) L, at B with a probability P(ν_a → ν_b) as flavour eigenstate ν_b (e.g. ν_e in ν_e n → p e⁻). The transition probability P(ν_a → ν_b) = P(ν̄_a → ν̄_b) = P(ν_b → ν_a) is given by

  P(ν_a → ν_b) = sin²2θ · sin²(δm² L / 4E)   for ν_a ≠ ν_b (flavour change)    (3)

  P(ν_a → ν_a) = 1 - P(ν_a → ν_b)   (survival of ν_a)

where δm² = m²_2 - m²_1 and E = neutrino energy. Thus the probability oscillates when varying L/E, with θ determining the amplitude (sin²2θ) and δm² the frequency of the oscillation. The smaller δm², the larger L/E values are needed to see oscillations, i.e. significant deviations of P(ν_a → ν_b) from zero and of P(ν_a → ν_a) from unity. Notice the two necessary conditions for ν oscillations: (a) m_1 ≠ m_2, implying that not all neutrinos are massless, and (b) non-conservation of the lepton-flavour numbers. In (3), L and E are the variables of an experiment, and θ and δm² the parameters (constants of Nature) to be determined. The original situation (P(ν_a → ν_b) = δ_ba) is restored if in (3) the distance L is an integer multiple of the oscillation length L_osc, which is given by

  L_osc = 4πE / δm² = 2.48 km · (E/GeV) / (δm²/eV²) .    (4)
The masses m(ν_a) and m(ν_b) of the flavour eigenstates are expectation values of the mass operator, i.e. linear combinations of m_1 and m_2:

  m(ν_a) = cos²θ · m_1 + sin²θ · m_2
  m(ν_b) = sin²θ · m_1 + cos²θ · m_2 .    (5)
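The two-flavour transition probability (3) and the oscillation length can be evaluated numerically; a minimal sketch in practical units (function names are ours, not from the lecture; 1.27 is the usual conversion factor for δm² in eV², L in km, E in GeV):

```python
import math

def osc_prob(sin2_2theta, dm2_eV2, L_km, E_GeV):
    # Two-flavour vacuum transition probability, Eq. (3), in practical units:
    # P(nu_a -> nu_b) = sin^2(2 theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

def osc_length_km(dm2_eV2, E_GeV):
    # Oscillation length, Eq. (4): L_osc = 4 pi E / dm^2
    # ~= 2.48 km * (E/GeV) / (dm^2/eV^2)
    return 2.48 * E_GeV / dm2_eV2
```

With the atmospheric best-fit δm² = 3.2 · 10⁻³ eV² quoted later in this lecture, `osc_length_km` reproduces L_osc = 775 km · E/GeV, and the transition probability vanishes again after one full oscillation length.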
2. Flavour change of atmospheric neutrinos
The most convincing evidence for a flavour change of atmospheric neutrinos after first indications were was found in 1998 by S~per-Kamiokande~>~, observed by some earlier experiments (Kamiokande 5 , IMB 6 , Soudan 2 ’). Atmospheric neutrinos are created when a high-energy cosmic-ray proton (or nucleus) from outer space collides with a nucleus in the earth’s atmosphere, leading to an extensive air shower (EAS) by cascades of secondary interactions. Such a shower contains many T* (and K’) mesons (part of) which decay according to T + , K+
-+ p+vp 4 e+v,F,
T-,
K - -+ p-FP 4e-Devp ,
(6)
yielding atmospheric neutrinos. From (6) one would expect in an underground neutrino detector a number ratio of
if all p* decayed before reaching the detector. This is the case only at rather low shower energies whereas with increasing energy more and more
8
μ± survive due to relativistic time dilation and may reach the detector as background (atmospheric μ). Consequently the expected μ/e ratio rises above 2 (fewer and fewer ν_e, ν̄_e) with increasing ν energy. For quantitative predictions Monte Carlo (MC) simulations, which include also other (small) ν sources, have been performed, using measured μ fluxes as input, modelling the air showers in detail, and yielding the fluxes of the various neutrino species (ν_e, ν̄_e, ν_μ, ν̄_μ) as a function of the ν energy⁸. Atmospheric neutrinos reaching the underground Super-K detector can be registered by neutrino reactions with nucleons inside the detector, the simplest and most frequent reactions being CC quasi-elastic scatterings:

  (a) ν_e n → p e⁻ ,  ν̄_e p → n e⁺
  (b) ν_μ n → p μ⁻ ,  ν̄_μ p → n μ⁺ .    (8)
Figure 2. Schematic view of Super-Kamiokande⁹.
Super-K (Fig. 2)⁹ is a big water-Cherenkov detector in the Kamioka Mine (Japan) at a depth of 1000 m. It consists of 50 ktons (50 000 m³) of ultra-purified water in a cylindrical tank (diameter = 39 m, height = 41 m). The inner detector volume of 32 ktons is watched by 11 146 photomultiplier tubes (PMTs, diameter = 20") mounted on the volume's surface and providing a 40% surface coverage. The outer detector, which tags entering and exiting particles, is a 2.5 m thick water layer surrounding the inner volume and looked at by 1885 smaller PMTs (diameter = 8"). A high-velocity charged particle passing through the water produces a cone of Cherenkov light which is registered by the PMTs. The Cherenkov image of a particle starting and ending inside the inner detector is a ring; the image of a particle starting inside and leaving the inner detector is a disk. A distinction between an e-like event (8a) and a μ-like event (8b) is possible (with an efficiency of ≳ 98%) from the appearance of the image: an e± has an image with a diffuse, fuzzy boundary, whereas the boundary of a μ± image is sharp. The observed numbers of μ-like and e-like events give directly the observed ν-flux ratio (μ/e)_obs (eq. 7), which is to be compared with the MC-predicted ratio (μ/e)_MC (for no ν oscillations) by computing the double ratio

  R = (μ/e)_obs / (μ/e)_MC .    (9)
Agreement between observation and expectation implies R = 1. The events are separated into fully contained events (FC, no track leaving the inner volume, ⟨E_ν⟩ ≈ 1 GeV) and partially contained events (PC, one or more tracks leaving the inner volume, ⟨E_ν⟩ ≈ 10 GeV). For FC events the visible energy E_vis, which is obtained from the pulse heights in the PMTs, is close to the ν energy. With this in mind, the FC sample is subdivided into sub-GeV events (E_vis < 1.33 GeV) and multi-GeV events (E_vis > 1.33 GeV). In the multi-GeV range the ν direction can approximately be determined as the direction of the Cherenkov-light cone, since at higher energies the directions of the incoming ν and the outgoing charged lepton are close to each other.
Table 1. Results on the double ratio R. The first error is statistical, the second systematic (kty = kilotons · years).

  Super-K (70.5 kty):  R = 0.652 ± 0.019 ± 0.051   sub-GeV   (E_vis < 1.33 GeV)
                       R = 0.661 ± 0.034 ± 0.079   multi-GeV (E_vis > 1.33 GeV)
  Soudan 2 (5.1 kty):  R = 0.68 ± 0.11 ± 0.06
Recent results on R from Super-K⁴ and Soudan 2¹⁰ are given in Tab. 1. All three R values are significantly smaller than unity ("atmospheric neutrino anomaly"), which is due, as it turns out (see below), to a deficit of ν_μ, ν̄_μ and not to an excess of ν_e, ν̄_e in (μ/e)_obs. A natural explanation of this deficit is that some ν_μ, ν̄_μ have oscillated into (ν_e, ν̄_e) or (ν_τ, ν̄_τ) according to (3) before reaching the detector. This explanation has become evident, with essentially only ν_μ → ν_τ remaining (see below), by a study of the ν fluxes as a function of the zenith angle Θ between the vertical (zenith) and the ν direction. A ν with Θ ≈ 0° comes from above (down-going ν) after travelling a distance of L ≲ 20 km (effective thickness of the atmosphere); a ν with Θ ≈ 180° reaches the detector from below (up-going ν) after traversing the whole earth with L ≈ 13000 km.
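The baselines quoted here (L ≲ 20 km overhead, L ≈ 13000 km from below) follow from simple chord geometry. A sketch; the 15 km mean production height is our assumed round number, not a value from the lecture:

```python
import math

R_EARTH_KM = 6371.0   # mean earth radius
H_PROD_KM = 15.0      # assumed mean production height of atmospheric neutrinos

def baseline_km(zenith_deg):
    # Distance from the production point in the atmosphere to a detector at the
    # surface, as a function of the zenith angle Theta (0 deg = from above,
    # 180 deg = from below, through the whole earth).
    theta = math.radians(zenith_deg)
    R, h = R_EARTH_KM, H_PROD_KM
    return math.sqrt((R + h) ** 2 - (R * math.sin(theta)) ** 2) - R * math.cos(theta)
```

`baseline_km(0)` gives the ~15-20 km overhead distance and `baseline_km(180)` about 2R ≈ 12800 km, i.e. L spans almost three orders of magnitude over the zenith-angle range, which is what makes the zenith-angle distributions below so sensitive to oscillations.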
Figure 3. Zenith-angle distributions of (a) sub-GeV e-like, (b) multi-GeV e-like, (c) sub-GeV μ-like, and (d) multi-GeV μ-like + PC events. (The PC events turned out to be practically all ν_μ events.) The points show the data, the full histograms the MC predictions for no oscillations, and the dotted histograms the best fit by ν_μ ↔ ν_τ oscillations. From Super-K⁴.
The zenith angular distributions (zenith angle of the charged lepton) as measured by Super-K⁴ are shown in Fig. 3 for e-like and μ-like events, in each event class separately for sub-GeV and multi-GeV events. The full histograms show the MC predictions for no oscillations. The e-like distributions (a) and (b) are both seen to be in good agreement with the predictions, which implies that there is no ν_e excess and no noticeable ν_μ → ν_e transition. The μ-like distributions (c) and (d) on the other hand both show a ν_μ deficit with respect to the predictions. For multi-GeV μ-like events (d), for which the ν and μ directions are well correlated (see above), the deficit increases with increasing zenith angle, i.e. increasing flight distance L of the ν between production and detection; it is absent for down-going muons (Θ ≈ 0°) and large for up-going muons (Θ > 90°). For sub-GeV μ-like events (c) the dependence of the deficit on Θ is much weaker, owing to the only weak correlation between the ν and μ directions. In conclusion, all four distributions of Fig. 3 are compatible with the assumption that part of the original ν_μ change into ν_τ (thus not affecting the e-like distributions), if their flight distance L is sufficiently long (L ≳ L_osc). This conclusion is supported by a Super-K measurement of the zenith angular distribution of up-going muons with Θ > 90° that enter the detector from outside. Because of their large zenith angle they cannot be atmospheric muons - those would not range so far into the earth - but are rather produced in CC reactions by energetic up-going ν_μ, ν̄_μ in the rock surrounding the detector. A clear deficit is observed for upward muons stopping in the detector (⟨E_ν⟩ ≈ 10 GeV), whereas it is much weaker for upward through-going muons (⟨E_ν⟩ ≈ 100 GeV). A deficit of atmospheric ν_μ, ν̄_μ has also been observed by the MACRO collaboration¹² in the Gran Sasso Underground Laboratory in a similar measurement, their ratio of the numbers of observed to expected events being N_obs/N_exp = 0.72 ± 0.13 (three errors added in quadrature) for upward through-going muons (⟨E_ν⟩ ≈ 100 GeV).
A two-flavour oscillation analysis, with sin²2θ and δm² as free parameters, has been carried out by the Super-K collaboration, using their data on (partially) contained events (Fig. 3) and including also their data on up-going muons. A good fit with χ²/NDF = 135/152 has been obtained⁴ for ν_μ ↔ ν_τ, the best-fit parameters being:

  δm² = 3.2 · 10⁻³ eV² ,  sin²2θ = 1 .    (10)

Fig. 4 shows the allowed regions with 68%, 90% and 99% CL in the parameter plane. The best fit is also shown by the dotted histograms in Fig. 3, where excellent agreement with the data points is observed. From (4) and (10) one obtains an oscillation length of L_osc = 775 km · E/GeV.
Figure 4. Regions (to the right of the curves) allowed at 68%, 90% and 99% CL in the (sin²2θ, δm²) plane for ν_μ ↔ ν_τ oscillations. From Super-K⁴.
Thus, a flavour-change signal is not expected, because of L ≪ L_osc, (a) for neutrinos with Θ ≈ 0° (i.e. L ≲ 20 km) and E ≳ 1 GeV (Fig. 3d), and (b) for neutrinos producing upward through-going muons with E ≈ 100 GeV (see above), so that L_osc ≈ 80000 km - much larger than the diameter of the earth. No good fit could be obtained for ν_μ ↔ ν_e oscillations. In addition, ν̄_e disappearance (ν̄_e → ν̄_X) has not been observed by two long-baseline reactor experiments (CHOOZ¹³ and Palo Verde¹⁴) with L ≈ 1 km and ⟨E⟩ ≈ 3 MeV, which rule out δm² > 0.7 · 10⁻³ eV² for sin²2θ = 1, and sin²2θ > 0.1 for large δm². In summary: Atmospheric neutrinos have yielded convincing evidence, mostly contributed by Super-K, that ν_μ ↔ ν_τ oscillations take place with parameters given by Fig. 4 and Eq. (10). There is no other hypothesis around that can explain the data. One therefore has to conclude that not all neutrinos are massless.
3. Flavour change of solar neutrinos
Very exciting discoveries regarding neutrino masses have recently been made with solar neutrinos, in particular by the Sudbury Neutrino Observatory (SNO). Solar neutrinos¹⁵ come from the fusion reaction

  4p → He⁴ + 2e⁺ + 2ν_e    (11)
inside the sun, with a total energy release of 26.7 MeV after two e⁺e⁻ annihilations. The ν energy spectrum extends up to about 15 MeV with an average of ⟨E_ν⟩ = 0.59 MeV. The total ν flux from the sun is Φ_ν = 1.87 · 10³⁸ s⁻¹, resulting in a flux density of 6.6 · 10¹⁰ cm⁻² s⁻¹ on earth. Reaction (11) proceeds in various steps in the pp chain or CNO cycle, the three most relevant out of eight different ν_e sources being:

  pp :  p + p → D + e⁺ + ν_e     (E_ν < 0.42 MeV, 0.91)
  Be7:  Be⁷ + e⁻ → Li⁷ + ν_e     (E_ν = 0.86 MeV, 0.07)
  B8 :  B⁸ → Be⁸ + e⁺ + ν_e      (E_ν < 14.6 MeV, ~10⁻⁴)    (12)
The second number in each bracket gives the fraction of the total solar ν flux. Energy spectra of the ν_e fluxes from the various sources and rates for the various detection reactions have been predicted in the framework of the Standard Solar Model (SSM)¹⁶,¹⁷. With respect to these predictions a ν_e deficit from the sun has been observed in the past by various experiments, as listed in Tab. 2 (see ratios Result/SSM). These deficits, the well-known "solar neutrino problem", could be explained by ν oscillations ν_e → ν_X into another flavour X (ν_e disappearance), either inside the sun (matter oscillations, Mikheyev-Smirnov-Wolfenstein (MSW) effect²³) or on their way from sun to earth (vacuum oscillations, L ≈ 1.5 · 10⁸ km), see below.

Table 2. The five previous solar ν experiments and their results (adopting a recent compilation in Table 8 of Ref. 17). The SSM is BP2000¹⁷.

  Experiment           Reaction              Threshold [MeV]  Result (Result/SSM)
  Homestake¹⁸          Cl³⁷(ν_e, e⁻)Ar³⁷     E_ν > 0.814      2.56 ± 0.23 SNU (0.34 ± 0.06)
  GALLEX + GNO¹⁹       Ga⁷¹(ν_e, e⁻)Ge⁷¹     E_ν > 0.233      74 ± 7 SNU (0.58 ± 0.07)
  SAGE²⁰               Ga⁷¹(ν_e, e⁻)Ge⁷¹     E_ν > 0.233      75 ± 8 SNU (0.59 ± 0.07)
  Kamiokande²¹         ν_e e → ν_e e         E_e > 7.5        (2.80 ± 0.38) · 10⁶ cm⁻² s⁻¹ (0.55 ± 0.13)
  Super-Kamiokande²²   ν_e e → ν_e e         E_e > 5.5        (2.40 ± 0.03 +0.08 -0.07) · 10⁶ cm⁻² s⁻¹ (0.48 ± 0.09)

  1 SNU (Solar Neutrino Unit) = 1 ν_e capture per 10³⁶ target nuclei per sec
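The Result/SSM ratios in Table 2 can be checked from the measured rates once the SSM predictions are put in. A sketch; the BP2000 rates used below (7.6 SNU for chlorine, 128 SNU for gallium) are quoted from the SSM literature, not from this text, so treat them as our assumption:

```python
# Measured rates from Table 2 (SNU) divided by assumed BP2000 predictions.
SSM_CL_SNU = 7.6    # chlorine (Homestake) SSM rate, BP2000 (assumed here)
SSM_GA_SNU = 128.0  # gallium (GALLEX/GNO, SAGE) SSM rate, BP2000 (assumed here)

ratio_homestake = 2.56 / SSM_CL_SNU  # should be close to 0.34
ratio_gallex = 74.0 / SSM_GA_SNU     # should be close to 0.58
ratio_sage = 75.0 / SSM_GA_SNU       # should be close to 0.59
```

The recomputed ratios agree with the Result/SSM column, illustrating that the deficit is a factor ~2-3 for chlorine and ~1.7 for gallium.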
Figure 5. Schematic drawing of the SNO detector.

We now discuss the new results from SNO²⁴,²⁵. The SNO detector²⁶ (Fig. 5) is a water-Cherenkov detector, sited 2040 m underground in an
active nickel mine near Sudbury (Canada). It comprises 1000 tons of ultra-pure heavy water (D₂O) in a spherical transparent acrylic vessel (12 m diameter), serving as a target and Cherenkov radiator. Cherenkov photons produced by electrons in the sphere are detected by 9456 20-cm photomultiplier tubes (PMTs) which are mounted on a stainless steel structure (17.8 m diameter) around the acrylic vessel. The vessel is immersed in ultra-pure light water (H₂O) providing a shield against radioactivity from the surrounding materials (PMTs, rock). SNO detects the following three reactions induced by solar B⁸ neutrinos above an electron threshold of 5 MeV for the SNO analysis (d = deuteron):

  ν_e + d → p + p + e⁻    (CC)   E_thresh = 1.44 MeV
  ν_X + d → p + n + ν_X   (NC)   E_thresh = 2.23 MeV ;  n + d → H³ + γ(6.25 MeV) ,  γ + e⁻ → γ + e⁻
  ν_X + e⁻ → ν_X + e⁻     (ES)    (13)

where the Cherenkov-detected electron is indicated by bold printing. The charged-current (cc) reaction (CC) can be induced only by ν_e, whereas the neutral-current (nc) reaction (NC) is sensitive, with equal cross sections, to all three neutrino flavours ν_e, ν_μ, ν_τ. Also elastic ν_X e scattering (ES) is sensitive to all flavours, but with a cross section relation

  σ(ν_μ e) = σ(ν_τ e) = ε · σ(ν_e e)    (14)
where ε = 0.154 above 5 MeV according to the electroweak theory. (ε ≠ 1 since ν_μ,τ e scattering goes only via nc, whereas ν_e e scattering has, in addition to nc, also a contribution from cc.) Data taking by SNO began in summer 1999. For each event (electron) the effective kinetic energy T, the angle θ_sun with respect to the direction from the sun, and the distance (radius) R from the detector center were measured. The principle of the analysis goes as follows: The three measured distributions N(x) of x = T, cos θ_sun, R³ from 2928 events with 5 < T < 20 MeV can be fitted by three linear combinations

  N(x) = N_CC · w_CC(x) + N_NC · w_NC(x) + N_ES · w_ES(x) + N_BG · w_BG(x)    (15)

where the w_i(x) are characteristic probability density functions known from Monte Carlo simulations (e.g. w_ES(cos θ_sun) is strongly peaked in the direction from the sun, i.e. towards cos θ_sun = 1), and the parameters N_i are the numbers of events in the three categories (13) (and in the background), to be determined from the fit. A good extended maximum likelihood fit to the measured distributions was obtained, yielding (errors symmetrized):

  N_CC = 1967.7 ± 61.4 ,  N_NC = 576.5 ± 49.2 ,  N_ES = 263.6 ± 26.0 .    (16)
From each of these event numbers N_i a B⁸-neutrino flux Φ^SNO_i was determined, using the known cross sections for reactions (13) and the SSM B⁸-ν spectrum. The exciting result (in units of 10⁶ cm⁻² s⁻¹) is²⁴ (statistical and systematic errors added in quadrature):

Φ^SNO_CC = 1.76 ± 0.10,  Φ^SNO_NC = 5.09 ± 0.62,  Φ^SNO_ES = 2.39 ± 0.26
(17)
where Φ^SNO_ES has been computed using σ(ν_e e), i.e. assuming no ν_e oscillations. It agrees nicely with the Super-K result²⁷ Φ^SK_ES = 2.32 ± 0.09, computed with the same assumption. Φ^SNO_CC is the genuine ν_e flux Φ(ν_e) arriving at earth. For the case that the ν_e created in the sun arrived at earth all as ν_e, i.e. there were no ν_e oscillations, one would expect Φ_CC = Φ_NC = Φ_ES. The SNO result (17) shows that this is obviously not the case, i.e. there is significant direct evidence for a non-ν_e component in the solar ν flux arriving at earth. The two fluxes Φ(ν_e) and Φ(ν_μτ) (= ν_μ + ν_τ flux) and the total ν flux Φ_tot have been determined from (17) by a fit using the three relations
Φ_CC = Φ(ν_e)
Φ_NC = Φ(ν_e) + Φ(ν_μτ) = Φ_tot
Φ_ES = Φ(ν_e) + ε Φ(ν_μτ),  with ε = 0.154
(18)
with the result

Φ(ν_e) = 1.76 ± 0.10  and  Φ(ν_μτ) = 3.41 ± 0.65.
(19)
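The extraction of the two fluxes from the three relations (18) can be reproduced with a small weighted least-squares calculation. This is a sketch of the fit idea only, not the full SNO likelihood analysis; the inputs are the measured fluxes and errors of Eq. (17), in units of 10⁶ cm⁻² s⁻¹.

```python
# Weighted least-squares extraction of phi_e = Phi(nu_e) and phi_mt = Phi(nu_mu,tau)
# from the relations (18): CC = phi_e, ES = phi_e + eps*phi_mt, NC = phi_e + phi_mt.
eps = 0.154
# rows: (coefficient of phi_e, coefficient of phi_mt, measured value, error)
data = [(1.0, 0.0, 1.76, 0.10),   # CC
        (1.0, eps, 2.39, 0.26),   # ES
        (1.0, 1.0, 5.09, 0.62)]   # NC

# Build and solve the 2x2 weighted normal equations (A^T W A) x = A^T W b
S11 = S12 = S22 = T1 = T2 = 0.0
for a1, a2, b, sig in data:
    w = 1.0 / sig**2
    S11 += w * a1 * a1
    S12 += w * a1 * a2
    S22 += w * a2 * a2
    T1 += w * a1 * b
    T2 += w * a2 * b
det = S11 * S22 - S12 * S12
phi_e = (T1 * S22 - S12 * T2) / det
phi_mt = (S11 * T2 - S12 * T1) / det
print(phi_e, phi_mt)   # close to the fit result (19): 1.76 and 3.41
```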
Notice that Φ(ν_μτ) is different from zero by 5.3σ, which is clear evidence for some (≈ 66 %) of the original ν_e having changed their flavour. Furthermore, the measured value (17) Φ^SNO_NC = 5.09 ± 0.62 (or the value Φ_tot = Φ(ν_e) + Φ(ν_μτ) = 5.17 ± 0.66 from the fit result (19)) agrees nicely (within the large errors) with the SSM value¹⁷ Φ^SSM_tot = 5.05^{+1.01}_{-0.81}; this agreement is a triumph of the Standard Solar Model. The SNO analysis is summarized in the [Φ(ν_e), Φ(ν_μτ)] plane, Fig. 6. The four bands show the straight-line relations (with their errors):
Φ^SNO_CC = Φ(ν_e) = 1.76 ± 0.10
Φ^SNO_ES = Φ(ν_e) + 0.154 · Φ(ν_μτ) = 2.39 ± 0.26
Φ^SNO_NC = Φ(ν_e) + Φ(ν_μτ) = 5.09 ± 0.62
Φ^SSM_tot = Φ(ν_e) + Φ(ν_μτ) = 5.05^{+1.01}_{-0.81}
(20)
Full consistency of the three measurements (17) amongst themselves and with the SSM is observed, the four bands having a common intersection.

Table 3. Best-fit values for the five solutions from Ref. 29.

Solution      δm² (eV²)       tan²θ        χ²
LMA (MSW)     4.2 × 10⁻⁵      2.6 × 10⁻¹   29.0
SMA (MSW)     5.2 × 10⁻⁶      5.5 × 10⁻⁴   31.1
LOW (MSW)     7.6 × 10⁻⁸      7.2 × 10⁻¹   36.0
Just So²      5.5 × 10⁻¹²     1.0 × 10⁰    36.1
VAC           1.4 × 10⁻¹⁰     3.8 × 10⁻¹   37.5
A two-flavour oscillation analysis (ν_e ↔ ν_μ or ν_τ) has been carried out by the SNO collaboration²⁵. Prior to SNO, several global oscillation analyses were performed using all available solar neutrino data, including the Super-K measurements of the electron energy spectrum and of the day-night asymmetry (which could originate from a regeneration of ν_e in the earth at night)²⁷,²⁸. Five allowed regions (e.g. at 3σ, i.e. 99.7 % CL) in the (tan²θ, δm²) plane were identified, their best-fit values e.g. from Ref. 29 being listed in Table 3. These solutions, apart from Just So², were also found by SNO²⁵ when only using their own data (measured day and night energy spectra), Fig. 7a. When including also the data from the previous experiments as well as SSM predictions in their analysis, only the
large-mixing-angle (LMA) MSW solution is strongly favoured (Fig. 7b), the best-fit values being
δm² = 5.0 × 10⁻⁵ eV²,  tan²θ = 0.34  (θ = 30°).
(21)
The elimination of most of the other solutions is based on the Super-K measurements of the energy spectra during the day and during the night²⁷,³⁰. However, the issue seems not completely settled yet³¹.
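The LMA best-fit mixing of Eq. (21) can be translated into the more familiar sin²2θ, the parameter that appears in the standard two-flavour vacuum survival probability P(ν_e → ν_e) = 1 − sin²2θ · sin²(1.27 δm²[eV²] L[km]/E[GeV]). A short sketch of the conversion (note that LMA is an MSW solution, so matter effects, not included here, matter for the sun itself):

```python
import math

# Convert the LMA best fit tan^2(theta) = 0.34 of Eq. (21) into theta and
# sin^2(2*theta), using the identity sin^2(2t) = 4*tan^2(t)/(1 + tan^2(t))^2.
tan2 = 0.34
theta = math.atan(math.sqrt(tan2))          # mixing angle in radians
sin2_2theta = 4 * tan2 / (1 + tan2) ** 2

print(math.degrees(theta), sin2_2theta)     # ~30 degrees, ~0.76
```

The ~30° result reproduces the angle quoted in parentheses in Eq. (21).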
Figure 6. Fluxes of B⁸-neutrinos as determined by SNO²⁴. The bands show the flux Φ_μτ of (ν_μ + ν_τ) vs. the flux Φ_e of ν_e according to each of the three experimental relations and the SSM relation¹⁷ in (20). The intercepts of these bands with the axes represent the flux errors. The point in the intersection of the bands indicates the best-fit values (19). The ellipses around this point represent the 68 %, 95 % and 99 % joint probability contours for Φ_e, Φ_μτ. From Ref. 24.
In summary: Solar neutrinos have yielded strong evidence for ν_e ↔ ν_x (x = μ, τ) oscillations. In particular the recent SNO measurements show explicitly that the solar ν flux arriving at earth has a non-ν_e component. These measurements and their good agreement with the SSM have solved the long-standing solar neutrino problem; they are evidence, in addition to the results from atmospheric neutrinos, for neutrinos having mass.

4. Possible neutrino mass schemes
With two independent δm² values, namely (10) δm²_atm ≈ 3.2 × 10⁻³ eV² and (21) δm²_sol ≈ 5.0 × 10⁻⁵ eV², one needs three neutrino mass eigenstates ν_i =
Figure 7. Regions allowed at the indicated confidence levels in the (tan²θ, δm²) parameter plane as determined from a χ² fit (a) to the SNO day and night energy spectra alone, and (b) with the addition of data from the other solar experiments and of SSM predictions¹⁷. The star in the LMA solution indicates the best fit (21). From SNO²⁵.
ν₁, ν₂, ν₃ with masses m₁, m₂, m₃ obeying the relation δm²₂₁ + δm²₃₂ + δm²₁₃ = 0, where δm²_ij = m_i² − m_j². The neutrino flavour eigenstates ν_α = ν_e, ν_μ, ν_τ are then linear combinations of the ν_i and vice versa, ν_α = Σ_i U_αi ν_i, in analogy to (2). The absolute neutrino mass scale is still unknown, since a direct measurement of a neutrino mass has not yet been accomplished. Several possible mass schemes have been proposed in the literature. The two main categories are:
– A hierarchical mass spectrum, e.g. m₁ << m₂ << m₃. In this case the hierarchy may be normal or inverted, as shown in Fig. 8. If e.g. for the normal hierarchy one assumes m₁ ≈ 0, then

m₃ ≈ √(δm²_atm) ≈ √(3.2 × 10⁻³) eV ≈ 6 × 10⁻² eV.
– A democratic (nearly degenerate) mass spectrum with m₁ ≈ m₂ ≈ m₃ >> √(δm²). In this case almost any m_ν value below 3 eV (upper limit of m(ν_e), eq. (1)) is possible. In particular, with m_ν = O(1 eV) neutrinos could contribute noticeably to the dark matter in the universe.
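For the hierarchical case the numbers follow directly from the two measured splittings quoted in the text; a two-line numerical illustration (the approximation m₁ ≈ 0 is the one stated above):

```python
import math

# Rough masses in the normal hierarchy with m1 ~ 0:
# m2 ~ sqrt(dm2_sol), m3 ~ sqrt(dm2_atm), using the splittings quoted in the text.
dm2_atm = 3.2e-3   # eV^2, atmospheric, Eq. (10)
dm2_sol = 5.0e-5   # eV^2, solar LMA best fit, Eq. (21)

m2 = math.sqrt(dm2_sol)
m3 = math.sqrt(dm2_atm)
print(m2, m3)   # ~7e-3 eV and ~6e-2 eV
```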
Figure 8. Schematic drawing of the normal and inverted hierarchical mass spectrum of the 3 neutrino mass eigenstates ν_i (i = 1, 2, 3). The shadings show the admixtures |U_ei|² (white), |U_μi|² (grey) and |U_τi|² (black) of the 3 flavour eigenstates ν_e, ν_μ and ν_τ, respectively. Adapted from Ref. 32.
Acknowledgements
I am grateful to the organizers of the 10th International Workshop on Multiparticle Production, and in particular to Nikos Antoniou, for a very fruitful and enjoyable meeting with interesting talks and lively discussions on an island that is famous for its outstanding history and culture as well as for its beautiful nature. I also would like to thank Mrs. Sybille Rodriguez for typing the text and for preparing and arranging the figures.

References

1. K. Hagiwara et al. (Particle Data Group): Phys. Rev. D66 (2002) 010001.
2. B. Kayser: ref. 1, p. 392; S.M. Bilenky, B. Pontecorvo: Phys. Rep. 41 (1978) 225; S.M. Bilenky, S.T. Petcov: Rev. Mod. Phys. 59 (1987) 671; 60 (1988) 575; 61 (1989) 169 (errata); J.D. Vergados: Phys. Rep. 133 (1986) 1; N. Schmitz: Neutrinophysik, Teubner, Stuttgart, 1997.
3. Y. Fukuda et al. (Super-Kamiokande): Phys. Rev. Lett. 81 (1998) 1562; Phys. Lett. B433 (1998) 9; B436 (1998) 33; T. Kajita, Y. Totsuka: Rev. Mod. Phys. 73 (2001) 85; B. Schwarzschild: Physics Today, Aug. 1998, p. 17.
4. H. Sobel (Super-Kamiokande): Nucl. Phys. Proc. Suppl. B91 (2001) 127; S. Fukuda et al. (Super-Kamiokande): Phys. Rev. Lett. 85 (2000) 3999.
5. Y. Fukuda et al. (Kamiokande): Phys. Lett. B335 (1994) 237.
6. R. Becker-Szendy et al. (IMB): Phys. Rev. D46 (1992) 3720.
7. W.W.M. Allison et al. (Soudan 2): Phys. Lett. B391 (1997) 491; B449 (1999) 137.
8. M. Honda et al.: Phys. Rev. D52 (1995) 4985; V. Agrawal et al.: Phys. Rev. D53 (1996) 1314; T.K. Gaisser et al.: Phys. Rev. D54 (1996) 5578; P. Lipari et al.: Phys. Rev. D58 (1998) 073003; G. Fiorentini et al.: Phys. Lett. B510 (2001) 173; G. Battistoni et al.: Astropart. Phys. 12 (2000) 315; hep-ph/0207035.
9. K. Nakamura et al.: in Physics and Astrophysics of Neutrinos, M. Fukugita, A. Suzuki eds., Springer, Tokyo etc., 1994, p. 249; A. Suzuki: ibidem, p. 388; Y. Suzuki: Prog. Part. Nucl. Phys. 40 (1998) 427.
10. W.A. Mann (Soudan 2): Nucl. Phys. Proc. Suppl. B91 (2001) 134.
11. Y. Fukuda et al. (Super-Kamiokande): Phys. Rev. Lett. 82 (1999) 2644; Phys. Lett. B467 (1999) 185.
12. B.C. Barish (MACRO): Nucl. Phys. Proc. Suppl. B91 (2001) 141; M. Ambrosio et al. (MACRO): Phys. Lett. B478 (2000) 5; B517 (2001) 59.
13. M. Apollonio et al. (CHOOZ): Phys. Lett. B466 (1999) 415.
14. F. Boehm et al. (Palo Verde): Phys. Rev. D64 (2001) 112001.
15. K. Nakamura: ref. 1, p. 408; M. Altmann et al.: Rep. Prog. Phys. 64 (2001) 97; T. Kirsten: Rev. Mod. Phys. 71 (1999) 1213.
16. J.N. Bahcall: Neutrino Astrophysics, Cambridge University Press, Cambridge etc., 1989.
17. J.N. Bahcall et al.: Astrophys. J. 555 (2001) 990 (BP2000).
18. B.T. Cleveland et al. (Homestake): Astrophys. J. 496 (1998) 505.
19. E. Bellotti (GALLEX + GNO): Nucl. Phys. Proc. Suppl. B91 (2001) 44; M. Altmann et al. (GALLEX + GNO): Phys. Lett. B490 (2000) 16.
20. V.N. Gavrin (SAGE): Nucl. Phys. Proc. Suppl. B91 (2001) 36; J.N. Abdurashitov et al. (SAGE): Phys. Rev. C60 (1999) 055801; JETP 95 (2002) 181.
21. Y. Fukuda et al. (Kamiokande): Phys. Rev. Lett. 77 (1996) 1683.
22. Y. Suzuki (Super-Kamiokande): Nucl. Phys. Proc. Suppl. B91 (2001) 29.
23. S.P. Mikheyev, A.Yu. Smirnov: Nuovo Cimento 9C (1986) 17; Prog. Part. Nucl. Phys. 23 (1989) 41; L. Wolfenstein: Phys. Rev. D17 (1978) 2369; D20 (1979) 2634.
24. Q.R. Ahmad et al. (SNO): Phys. Rev. Lett. 89 (2002) 011301.
25. Q.R. Ahmad et al. (SNO): Phys. Rev. Lett. 89 (2002) 011302.
26. J. Boger et al. (SNO): Nucl. Instrum. Meth. A449 (2000) 172.
27. S. Fukuda et al. (Super-Kamiokande): Phys. Rev. Lett. 86 (2001) 5651.
28. S. Fukuda et al. (Super-Kamiokande): Phys. Rev. Lett. 86 (2001) 5656.
29. J.N. Bahcall et al.: JHEP 05 (2001) 015; see also: JHEP 04 (2002) 007.
30. S. Fukuda et al. (Super-Kamiokande): Phys. Lett. B539 (2002) 179.
31. A. Strumia et al.: Phys. Lett. B541 (2002) 327.
32. A.Yu. Smirnov: Nucl. Phys. Proc. Suppl. B91 (2001) 306.
Sessions on Correlations and Fluctuations in e+e-, hh Collisions Chairpersons: C. N. Ktorides, B. Buschbeck, A. Giovannini, L. Liu, and I. Dremin
SCALING PROPERTY OF THE FACTORIAL MOMENTS IN HADRONIC Z DECAY

G. CHEN†, Y. HU, W. KITTEL, L.S. LIU†, W.J. METZGER
PRESENTED BY W. KITTEL
HEFIN, University of Nijmegen/NIKHEF, Toernooiveld 1, 6525 ED Nijmegen, NL
FOR THE L3 COLLABORATION

Three-dimensional, as well as one-dimensional, studies of local multiplicity fluctuations in hadronic Z decay are performed using data of the L3 experiment at LEP. Normalized factorial moments in rapidity, transverse momentum and azimuthal angle with respect to the thrust axis are found to exhibit power-law scaling when partitioning with the same number of bins in each direction, indicating that the fluctuations are isotropic. This is confirmed by a detailed study of the second-order factorial moments in one dimension. Such scaling corresponds to a self-similar fractal, i.e., the associated branching process is self-similar. On the contrary, two-jet subsamples are found to have self-affine branching. These features are reproduced by the Monte Carlo model JETSET and qualitatively also by HERWIG.
The dynamics of a QCD branching cascade [1] involving q → qg, g → gg and g → qq̄, like other branching processes, [2] leads to fractal behavior. [3] This fractal behavior manifests itself in the form of power-law scaling of final-state multiplicity fluctuations with increasing resolution in phase space. [4] Experimentally, approximate power-law scaling is indeed observed for e⁺e⁻ collisions and, be it of reduced strength, also for all other types of collisions. [5] As a possible distinction, it has been observed [6] that QCD branching may correspond to a self-similar fractal, in contrast to the self-affine fractal observed in hadron-hadron collisions at lower center-of-mass energies (22–27 GeV). [7] Dynamical multiplicity fluctuations can be studied using the normalized factorial moments (NFM) defined by [4]
F_q(M) = (1/M) Σ_{m=1}^{M} ⟨n_m(n_m − 1) · · · (n_m − q + 1)⟩ / ⟨n_m⟩^q
(1)

where a region Δ in 1-, 2- or 3-dimensional momentum space is partitioned into M cells, n_m is the particle multiplicity in cell m, and ⟨. . .⟩ denotes an
t Visitor from Inst. of Particle Physics, Huazhong Normal University, Wuhan, China, sponsored by the Scientific Exchange between China (MOST) and The Netherlands (KNAW), projects OlCDP017 and OBCDPO11.
average over the event sample. If the underlying dynamical fluctuations are intermittent rather than continuous, the Fq will exhibit power-law scaling [4]:
F_q(M) ∝ M^{φ_q}  (M → ∞).
(2)
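A minimal numerical check of the factorial-moment machinery of Eqs. (1)–(2): for purely statistical (Poisson) bin contents with no dynamical correlations the normalized factorial moments stay flat at F_q = 1 for every M, so any rise of F_q with M signals genuine dynamical fluctuations. The bin-averaged NFM convention, bin mean, and sample size below are illustrative choices, not those of the L3 analysis.

```python
import math, random

random.seed(7)

def poisson(mu):
    # Knuth's multiplication method, adequate for small mu
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def f2(events):
    """Bin-averaged second-order NFM: sum_m <n_m(n_m-1)> / sum_m <n_m>^2."""
    n_bins = len(events[0])
    num = den = 0.0
    for m in range(n_bins):
        col = [ev[m] for ev in events]
        mean = sum(col) / len(col)
        num += sum(n * (n - 1) for n in col) / len(col)
        den += mean * mean
    return num / den

# toy sample: 20000 "events", 10 bins, independent Poisson contents (mean 2)
events = [[poisson(2.0) for _ in range(10)] for _ in range(20000)]
val = f2(events)
print(val)   # close to 1 for uncorrelated Poisson bins
```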
If this power-law scaling (called intermittency) is observed, then the corresponding hadronic system is a scaling fractal. [2] In higher dimensions, the observation or non-observation of power-law scaling, Eq. (2), of the NFM depends on how the underlying space is partitioned. For example, if in the two-dimensional case ((a, b) plane) power-law scaling of the NFM is observed when the space is divided equally in both directions, i.e., when the numbers of partitions M_a and M_b in directions a and b are equal, then the dynamical fluctuations are isotropic and the corresponding fractal is self-similar. On the other hand, if the power-law scaling of the NFM is observed when and only when phase space is divided differently in the two directions, i.e., when M_a ≠ M_b, then the dynamical fluctuations are anisotropic and the corresponding fractal is self-affine. [7,8] The degree of anisotropy can be characterised by the log-ratio of M_a and M_b,

H_ab = ln M_a / ln M_b,
(3)
which is called the Hurst exponent.ᵃ [2] The dynamical fluctuations are isotropic in the (a, b) plane if H = 1 and otherwise anisotropic. [8] The farther the Hurst exponent departs from unity, the stronger the degree of anisotropy. When evaluated in one dimension, the NFM saturate at large M_a (and M_b) due to projection. [9] The saturation of the second-order NFM can be parametrized as
F₂^{(a)}(M_a) = A_a − B_a M_a^{−γ_a},
(4)
where A_a and B_a are positive constants and γ_a = (ln B_a − ln C)/ln A_a, [10] and similarly for F₂^{(b)}(M_b) with the same value of C, which is positive and smaller than both A_a and A_b. Therefore, the Hurst exponent in the (a, b) plane is related to the exponents γ_a and γ_b through

H_ab = (1 + γ_b)/(1 + γ_a).
(5)
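The relation between the saturation exponents and the Hurst exponent can be illustrated numerically. Note that the algebraic form H_ab = (1 + γ_b)/(1 + γ_a) is the reconstruction adopted here for Eq. (5) (it gives H_ab = 1 exactly when γ_a = γ_b, as the following paragraph requires); the γ values below are round illustrative numbers, not the measured ones.

```python
# Hurst exponent from the one-dimensional saturation exponents, assuming the
# reconstructed form of Eq. (5): H_ab = (1 + gamma_b) / (1 + gamma_a).
def hurst(gamma_a, gamma_b):
    return (1.0 + gamma_b) / (1.0 + gamma_a)

print(hurst(1.0, 1.0))   # 1.0 : equal exponents -> isotropic (self-similar)
print(hurst(1.2, 0.8))   # < 1 : unequal exponents -> anisotropic (self-affine)
```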
ᵃ In observing the power-law scaling property of a system, the phase space region in direction i is partitioned into λ_i bins, and then each bin is further partitioned into λ_i sub-bins, etc. After ν steps, the number of partitions in direction i is M_i = λ_i^ν. In this process, it is the log-ratio of M_a and M_b rather than their ratio itself that remains constant and can be used as a characteristic quantity.
Consequently, the condition H_ab = 1, which implies isotropic dynamical fluctuations in the (a, b) plane, is equivalent to the condition γ_a = γ_b. The method is easily extended to the study of the three possible H_ij parameters of a three-dimensional analysis. In this paper, we quantitatively study multiplicity fluctuations in hadronic Z decay using data obtained by the L3 detector [11] at LEP. The primary product of the Z decay considered here is a quark-antiquark pair, moving back-to-back in the Z rest frame. This implies a cylindrical symmetry about the quark direction. An appropriate frame to study the three-dimensional development of the qq̄ system is therefore defined by this direction, and appropriate variables are the longitudinally Lorentz-invariant variables rapidity y, transverse momentum p_t and azimuthal angle φ. The qq̄ direction is approximated by the thrust axis, and the major and minor directions can be used as the other axes. Since the definitions only determine the directions of the thrust and major axes up to a sign, we choose the signs at random. We refer to the frame having its z- and x-axes along the thrust and major directions, respectively, as the major-thrust frame. The major axis is determined by the direction of emission of the hardest gluon. Using the major axis as the x-axis therefore means that the azimuthal angle of that gluon is fixed to 0. Thus, multiplicity fluctuations, if any, will be largely reduced in this variable. [6] To relax this limitation, and to create a situation similar to the random choice of azimuthal angle in the case of hadron-hadron collisions, we also apply a rotation of the coordinate system around the thrust axis by a random angle. We refer to this frame as the random-thrust frame.
Since the thrust axis is only an approximation of the qq̄ axis, we furthermore present the NFM after a Monte Carlo correction for this, which is a multiplicative factor given by the ratio, determined at generator level, of the F_q using the qq̄ direction as the z-axis to that using the thrust direction as the z-axis. We refer to this frame as the qq̄ frame. We first investigate the 3-D NFM, partitioning phase space isotropically. Observation of a linear dependence of ln F_q on ln M is then a direct indication of the isotropy of the dynamical fluctuations. More quantitative evidence is found by fitting Eq. (4) to the three 1-D F₂'s. Isotropy of the dynamical fluctuations will manifest itself in an equality of the three γ's, or equivalently, in a unit value of the three Hurst exponents calculated from the γ's via Eq. (5). Besides studying the full data sample, we also analyze the scaling property in two-jet sub-samples to investigate its dependence on the jet resolution parameter y_cut. Varying y_cut changes the relative dependence of particle production on parton branching and hadronization. The data used in the analysis were collected by the L3 detector [11] in
1994 at a center-of-mass energy √s ≈ 91.2 GeV. The resolution of the L3 detector for the difference in y, p_t, and φ between two tracks is estimated to be 0.05, 0.03 GeV, and 0.03 radians, respectively. [12] The widths of the smallest bins used (M = 40) are roughly 3–5 times these values, so that no migration is expected. The analysis uses nearly the entire phase space: −5 ≤ y ≤ 5, −π ≤ φ < π, p_t ≤ 3 GeV. All variables are transformed into their corresponding cumulative forms. [13,14] An NFM calculated from the data is corrected for detector effects by a correction factor determined from two Monte Carlo (MC) samples. The first is a generator-level MC sample generated by the JETSET 7.4 parton shower program. [15] It is generated without initial-state photon radiation (ISR); Bose-Einstein (BE) correlations are included using the so-called BE₀ algorithm. [16] It contains all charged final-state particles with a lifetime cτ > 1 cm. The second MC sample is also generated by JETSET, but includes ISR as well as BE correlations. It is passed through a full detector simulation, [17] including time-dependent variations of the detector response based on continuous detector monitoring and calibration, and is reconstructed with the same program as the data and passed through the same selection procedure. It is referred to as detector-level MC. From these two MC samples a correction factor is found: R_q = F_q^gen/F_q^det, where F_q^gen and F_q^det are the values of the NFM of order q calculated from the generator-level and detector-level MC, respectively. The corrected NFM is then given by F_q = R_q F_q^raw, where F_q^raw is the NFM calculated directly from the data. These corrections, which increase with M and with q, are about 1–8%. Systematic uncertainties on the factorial moments have been assigned [12] for the following sources: event selection, track selection, and Monte Carlo modeling for the detector correction.
For the comparison of experimental data to MC models, the systematic errors of the models are calculated by changing their parameters by one standard deviation from their L3-tuned values. [18] Systematic errors on fit results are determined by repeating the analysis using charged tracks, rather than calorimeter clusters, for event selection and to determine the thrust axis.
Results for the full data sample

The results for the 3-D NFM, using the same number of bins in each direction, are shown in Fig. 1 for the major-thrust, random-thrust, and qq̄ frames. The error bars include both statistical and systematic uncertainties. The F_q^3D are highly correlated for neighboring values of M. Disregarding the first point,
Figure 1. The three-dimensional factorial moments as a function of the number of partitions M = M_y M_{p_t} M_φ, with M_y = M_{p_t} = M_φ, compared to JETSET with BE (a–c) and to HERWIG (d–f) in the (a,d) major-thrust, (b,e) random-thrust, and (c,f) qq̄ frames, respectively. The error bars include both statistical and systematic uncertainties.
Figure 2. The 1-D second-order factorial moments as a function of the number of partitions M in the random-thrust frame (squares) and in the major-thrust frame (open circles) compared to JETSET with BE₀ (band) and HERWIG (dashed line). The error bars on the data are the combined statistical and systematic uncertainties.
which is heavily influenced by momentum conservation, [19] the ln F_q^3D appear to depend linearly on ln M for this isotropic partitioning, as is expected if the dynamical fluctuations in hadronic Z decay are isotropic. This appears true for the major-thrust frame as well as for the random-thrust and qq̄ frames, although the slopes are smaller in the major-thrust frame. The NFM for JETSET 7.4 and HERWIG 5.9 are also shown in the figure. Both agree well with the data. In order to investigate the isotropy of the fluctuations more quantitatively, we study the one-dimensional NFM for the three variables y, p_t and φ separately. They are plotted in Fig. 2 for the major-thrust and random-thrust frames, and compared to JETSET and HERWIG. Since these frames have the same z-axis, only F₂(φ) depends on which frame is used. It is clear from
the figure that the fluctuations in φ are strikingly different in the two frames, being nearly absent in the major-thrust frame. This difference is attributed to the limitation, in the major-thrust frame, of the direction of the first hard gluon emission to the (x, z) plane, as discussed in the introduction. As a consequence of the reduced fluctuations in φ, the fluctuations in three dimensions are also reduced, resulting in the lower values of the NFM in the major-thrust frame observed in Fig. 1a. The results for the qq̄ frame (not shown) resemble those for the random-thrust frame. In all frames, the data are well described by JETSET, both with and without BE, but less so by HERWIG. Equation (4) is fit to the data using the full covariance matrix for the statistical and systematic errors. The results for the random-thrust and qq̄ frames are shown in Table 1. We find that the values of γ_y, γ_{p_t} and γ_φ are equal, as expected for isotropic fluctuations, although the values are shifted somewhat lower in the qq̄ frame.

Table 1. The fit parameters of the 1-D NFM for the data. The first error is the combined statistical and systematic uncertainty from event and track selection and MC modeling; the second combines the systematic uncertainties arising from the choice of event selection method and of thrust axis determination.
Frame           Variable   γ                         χ²/dof
Random-thrust   y          1.002 ± 0.050 ± 0.034     28/34
Random-thrust   p_t        1.088 ± 0.038 ± 0.068     21/35
Random-thrust   φ          0.917 ± 0.072 ± 0.056     43/36
qq̄             y          1.025 ± 0.041 ± 0.055     27/35
qq̄             p_t        0.908 ± 0.101 ± 0.066     39/36
qq̄             φ          0.915 ± 0.071 ± 0.052     41/34
From these values of γ, the Hurst exponents are calculated by Eq. (5). The resulting values in the qq̄ frame are displayed in Table 2, where they are compared to the values obtained for JETSET and HERWIG. They are, for both data and MCs, consistent with unity, in agreement with isotropic dynamical fluctuations. Since the dynamical fluctuations appear to be isotropic, the 3-D F_q can be fit by

F_q^3D = b_q M^{φ_q},
(6)

where M = M_y M_{p_t} M_φ with M_y = M_{p_t} = M_φ, to obtain the intermittency indices φ_q. The results of the fits, using the full covariance matrix, are shown in Fig. 1 and listed for the qq̄ frame in Table 3. The first uncertainty includes both the statistical error and the systematic uncertainties from event and track
Table 2. Hurst exponents compared to JETSET and HERWIG in the qq̄ frame. The uncertainties on the data are as in Table 1. The systematic uncertainties on the MC values follow from the uncertainties on parameters of the MC models.

H_{y,p_t}:
Data – major-thrust frame    0.918 ± 0.055 ± 0.063
Data – random-thrust frame   0.918 ± 0.055 ± 0.063
Data – qq̄ frame             0.942 ± 0.070 ± 0.044
JETSET BE₀                   0.964 ± 0.020 ± 0.028
JETSET noBE                  1.019 ± 0.021 ± 0.030
HERWIG                       0.975 ± 0.016 ± 0.057

H_{p_t,φ}:
Data – major-thrust frame    0.976 ± 0.061 ± 0.058
Data – random-thrust frame   0.959 ± 0.044 ± 0.037
Data – qq̄ frame             0.946 ± 0.057 ± 0.026
JETSET BE₀                   0.950 ± 0.017 ± 0.035
JETSET noBE                  0.981 ± 0.012 ± 0.039
HERWIG                       0.959 ± 0.012 ± 0.032

H_{y,φ}:
Data – major-thrust frame    1.063 ± 0.078 ± 0.073
Data – random-thrust frame   1.045 ± 0.061 ± 0.054
Data – qq̄ frame             1.004 ± 0.084 ± 0.031
JETSET BE₀                   0.986 ± 0.029 ± 0.020
JETSET noBE                  0.962 ± 0.026 ± 0.047
HERWIG                       0.983 ± 0.021 ± 0.051
selection as well as from Monte Carlo modeling. The second is the systematic uncertainty from the method of event selection and from the definition of the thrust axis. In the major-thrust frame the φ_q are much lower than in the other frames, as a consequence of the dependence of this frame on the QCD dynamics, as discussed above. We also note that the φ_q are systematically higher in the qq̄ frame than in the random-thrust frame, from which we conclude that the difference between the thrust and qq̄ axes, which also depends on the QCD dynamics, also serves to decrease the observed fluctuations.
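The fit of Eq. (6) is a straight-line fit in log-log space, ln F_q = ln b_q + φ_q ln M. A minimal sketch on synthetic moments with a known intermittency index (the generated φ, noise level, and M values are invented; the real fit also uses the full covariance matrix, which is omitted here):

```python
import math, random

# Ordinary least-squares fit of ln F = ln b + phi * ln M on synthetic moments.
random.seed(3)
PHI_TRUE, B = 0.23, 1.0
Ms = [2 ** k for k in range(1, 11)]                 # M = 2 .. 1024
lnM = [math.log(m) for m in Ms]
lnF = [math.log(B) + PHI_TRUE * x + random.gauss(0.0, 0.01) for x in lnM]

n = len(Ms)
mx, my = sum(lnM) / n, sum(lnF) / n
phi_fit = (sum((x - mx) * (y - my) for x, y in zip(lnM, lnF))
           / sum((x - mx) ** 2 for x in lnM))
print(phi_fit)   # close to the generated value 0.23
```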
Results for 2-jet sub-samples

Now we turn to the study of 2-jet sub-samples. Jets are identified using the Durham jet algorithm. [20] The jet resolution parameter y_cut used in this algorithm is essentially the relative transverse momentum squared of the jets that are being combined, [21] k_t² = y_cut · s, where the center-of-mass energy, √s, is equal to 91.2 GeV in our case. The fraction R₂ of 2-jet events obtained
Table 3. Intermittency indices, φ_q, from fits of Eq. (6) to the three-dimensional factorial moments in the qq̄ frame. The uncertainties are as in Table 1.

q    φ_q                         χ²/dof
2    0.234 ± 0.007 ± 0.003       5.6/9
3    0.730 ± 0.022 ± 0.013       7.0/9
4    1.437 ± 0.035 ± 0.020       10.0/9
5    2.207 ± 0.066 ± 0.030       8.9/9
6    2.856 ± 0.093 ± 0.037       6.5/7
7    3.374 ± 0.181 ± 0.083       4.5/4
8    3.889 ± 0.297 ± 0.357       4.3/3
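The relation k_t² = y_cut · s quoted above can be made concrete with a two-line sketch (the y_cut values chosen below are illustrative):

```python
import math

# Transverse-momentum scale corresponding to a Durham jet resolution y_cut,
# k_t = sqrt(y_cut * s), with sqrt(s) = 91.2 GeV as in the text.
def kt_from_ycut(ycut, roots=91.2):
    """Transverse-momentum scale (GeV) for a given Durham y_cut."""
    return math.sqrt(ycut) * roots

print(kt_from_ycut(1.2e-4))   # ~1 GeV, the scale where R2 vanishes
print(kt_from_ycut(0.01))     # 9.12 GeV
```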
Figure 3. The variation of γ_i (i = y, p_t, φ) with R₂ and k_t in the random-thrust frame for (a) data, (b) JETSET with BE₀, and (c) HERWIG.
with the jet algorithm depends on the parameter y_cut. It is worthwhile noting that R₂ tends to zero at around k_t = 1 GeV. When k_t increases, the fraction R₂ of 2-jet events increases, and more and more parton branchings are included in the jets. In the extreme case, when y_cut is very large, the "2-jet sample" coincides with the full sample and contains all the information on hard parton branching as well as on soft hadronization. The 1-D NFM are calculated from the 2-jet samples for different y_cut values. The random-thrust frame is used throughout. The variation with R₂ of the exponents γ_i (i = y, p_t, φ) is plotted in Fig. 3 for data and for JETSET with BE₀ and HERWIG. At low R₂ and k_t, where almost all genuine two-jet events are split up and only the narrowest survive, all three exponents γ are small, indicating only
marginal dynamical fluctuations. On the contrary, at R₂ near 1, all three exponents are large and equal, in agreement with the strong and isotropic fluctuations in the full sample discussed in the previous section. While γ increases with k_t (or y_cut) for each of the variables, the dependence on y_cut is very different for the different variables y, p_t and φ. Somewhat similar behavior is seen for JETSET and HERWIG.

Conclusions
The scaling property of the three-dimensional factorial moments of the hadronic system produced in Z decay is examined using the high-statistics data from the L3 experiment at LEP. The results show power-law scaling when momentum space is partitioned isotropically, indicating the existence of self-similar (isotropic) dynamical fluctuations. This self-similarity is confirmed quantitatively by fits to the one-dimensional F₂. This is in sharp contrast to the self-affine (anisotropic) fluctuations observed in hadron-hadron collision experiments. However, when 2-jet events are selected, the fluctuations are found to be anisotropic. The degree of anisotropy depends on the resolution scale (y_cut) used to identify the 2-jet events. Having observed the isotropy of the dynamical fluctuations, the intermittency indices are obtained by fitting Eq. (6) to the 3-D factorial moments. Both JETSET and, to a lesser extent, HERWIG give reasonable descriptions of both the 3- and 1-dimensional F_q.

References
1. See for example Yu. Dokshitzer, V.A. Khoze, A.H. Mueller and S.I. Troyan, Basics of Perturbative QCD (Editions Frontières, Gif-sur-Yvette, 1991); G. Altarelli, The Development of Perturbative QCD (World Scientific, Singapore, 1994). 2. B. Mandelbrot, The Fractal Geometry of Nature (W.H. Freeman, New York, 1983); B. Mandelbrot, in Dynamics of Fractal Surfaces, eds. F. Family and T. Vicsek (World Scientific, Singapore, 1991). 3. G. Veneziano, in Proc. Third Workshop on Current Problems in High Energy Particle Theory, Florence 1979, eds. R. Casalbuoni et al. (Johns Hopkins Univ., Baltimore, 1979) p. 45; K. Konishi, A. Ukawa and G. Veneziano, Phys. Lett. 78B (1978) 243; Nucl. Phys. B 157 (1979) 45; A. Giovannini, in Proc. Xth Int. Symp. on Multiparticle Dynamics, Goa 1979, eds. S.N. Ganguli, P.K. Malhotra and A. Subramanian (Tata Inst.) p. 364.
4. A. Bialas and R. Peschanski, Nucl. Phys. B 273 (1986) 703; B 308 (1988) 857. 5. E.A. De Wolf, I.M. Dremin, W. Kittel, Phys. Reports 270 (1996) 1. 6. Liu Fuming, Liu Lianshou and Liu Feng, Phys. Rev. D 59 (1999) 114020. 7. N.M. Agababyan et al. (NA22 Collab.), Phys. Lett. B 382 (1996) 305; N.M. Agababyan et al. (NA22 Collab.), Phys. Lett. B 431 (1998) 451; S. Wang, Z. Wang and C. Wu, Phys. Lett. B 410 (1997) 323. 8. Wu Yuanfang and Liu Lianshou, Phys. Rev. Lett. 70 (1993) 3197. 9. W. Ochs, Phys. Lett. B 247 (1990) 101. 10. Wu Yuanfang and Liu Lianshou, Science in China A 38 (1995) 435; Wu Yuanfang, Zhang Yang and Liu Lianshou, Phys. Rev. D 51 (1995) 6576. 11. B. Adeva et al. (L3 Collab.), Nucl. Instr. Meth. A 289 (1990) 35. 12. Y. Hu, PhD thesis, Univ. of Nijmegen, 2002. 13. W. Ochs, Z. Phys. C 50 (1991) 339. 14. A. Bialas and M. Gazdzicki, Phys. Lett. B 252 (1990) 483. 15. T. Sjostrand, Comp. Phys. Comm. 82 (1994) 74. 16. L. Lonnblad and T. Sjostrand, Phys. Lett. B 351 (1995) 293. 17. The L3 detector simulation is based on GEANT, see R. Brun et al., CERN report CERN DD/EE/84-1 (Revised), 1987, and uses GHEISHA to simulate hadronic interactions, see H. Fesefeldt, RWTH Aachen report PITHA 85/02, 1985. 18. Sw. Banerjee, S. Banerjee, L3 Note 1978 (1996); J. Casaus, L3 Note 1946 (1996); S. Banerjee, D. Duchesneau, S. Sarkar, L3 Note 1818 (1995). 19. Liu Lianshou, Zhang Yang and Deng Yue, Z. Phys. C 73 (1997) 535. 20. Yu.L. Dokshitzer, J. Phys. G 17 (1991) 1537; S. Bethke, Z. Kunszt, D.E. Soper and W.J. Stirling, Nucl. Phys. B 370 (1992) 310. 21. Yu.L. Dokshitzer, G.D. Leder, S. Moretti and B.R. Webber, JHEP 08 (1997) 001.
RAPIDITY CORRELATIONS IN QUARK JETS AND THE STUDY OF THE CHARGE OF LEADING HADRONS IN GLUON AND QUARK FRAGMENTATION

B. BUSCHBECK AND F. MANDL, DELPHI COLLABORATION
Institute for High Energy Physics of the ÖAW, Nikolsdorfergasse 18, A-1050 Wien, Austria
E-mail: [email protected]

The study of rapidity correlations in quark jets in e⁺e⁻ reactions at LEP has demonstrated that the hadronisation process is reproduced well by string models like e.g. JETSET. However, our understanding of gluon fragmentation is less complete. In this study gluon- and quark-jet enriched samples are selected in 3-jet events at √s = 91 GeV in the DELPHI experiment. The leading systems of the two kinds of jets are defined and their sum of charges is studied. Whereas for gluon jets a significant excess of leading systems with total charge zero is found when comparing to Monte Carlo simulations with JETSET, the corresponding leading systems of quark jets do not exhibit such an excess. Checks are performed to rule out possible trivial origins of this observation. The mass spectra of the leading systems with total charge zero are studied.
1 Introduction
Studying rapidity correlations in quark jets provides remarkable insight into the mechanism of hadronisation. It gives strong evidence for chain-like, charge-ordered particle production in jets, in excellent agreement with string Monte Carlo models such as JETSET 1. This is shown in particular by several contributions of the DELPHI experiment at LEP 2,3,4,5. In ref. 2, charge ordering of final hadrons is observed in rapidity along the thrust axis. In ref. 3, baryon pair production and the corresponding rapidity correlations are investigated: due to the small number of baryons (B) produced in hadronic Z0 decays, their study offers the possibility of a detailed understanding of hadronisation. In ref. 4, a sensitive function is found which reveals in even more detail the rapidity-rank structure of pp̄ pairs. Finally, it is shown in ref. 5 that particle pairs (π+π−, K+K− and pp̄) are aligned in rapidity with respect to the primary quark-to-antiquark direction ('rapidity alignment'). Despite the excellent agreement between these data and the Monte Carlo predictions in quark jets, it has been shown recently that gluon jets are much less understood. In particular, it has been demonstrated that the sum of charges (SQ) of the leading particles shows a surplus of events with SQ=0 compared to the JETSET model. In quark jets such a surplus does not show up.
The present contribution contains the major parts of refs. 6,7. It includes, in addition, checks to rule out possible trivial origins of the observed SQ=0 effect in gluon jets. The influence of the popcorn parameter in JETSET on the SQ distribution is investigated. The paper follows a suggestion of Minkowski and Ochs 8,9 to search for colour octet neutralization (and for glueballs and gluonic mesons) in gluon jets produced in 3-jet events in e+e− reactions. We define leading systems in quark and gluon jets by demanding a separation in rapidity Δy from the rest of the jet, compare the leading systems of both types of jets (as proposed) to each other, and also compare both to the predictions of the Monte Carlo model JETSET, which does not include the mechanism of octet neutralization. The following is a short reminder of how parton fragmentation into hadrons (the non-perturbative regime of QCD) is handled in JETSET for quarks and gluons. For quark fragmentation, the colour triplet field of the quark is neutralized by the creation of qq̄ pairs (and, less frequently, by diquark-antidiquark pairs). Hadrons are then produced according to a recursive scheme. For gluon fragmentation in a qq̄g event, the Lund model stretches a string from the q to the g and on to the q̄. The string fragments, e.g. by the creation of qq̄ pairs, similarly to what happens for quark fragmentation. Thus the JETSET model regards gluon fragmentation as a double colour triplet fragmentation. Another possibility is the octet neutralization of a gluon in combination with another gluon a. Since octet neutralization is not included in the Monte Carlo model, gluonic bound states are not predicted. Their signature (and that of octet neutralization in general) is a surplus of uncharged leading systems, due to the requirement that the sum of charges of the decay products (leading particles) is zero.
The first important step is therefore to compare the sum of charges (SQ) in the leading particle system of gluon jets with the respective Monte Carlo prediction and to search for a surplus of events with SQ=0. Furthermore, the same investigation for quark jets should not result in a surplus of any charge. In ref. 12 it has been observed that there were no significant deviations from Monte Carlo predictions for resonance production - in particular that of the η - in quark and gluon jets. Therefore one can expect that octet neutralization is a relatively rare process - if it exists at all. In refs. 8,9 it is proposed to enhance the contribution of this process by selecting events where a leading

a This mechanism has already been considered by Peterson and Walsh 10. Other references can be found in ref. 8. A different mechanism - colour reconnection - can produce similar effects 11.
particle system is separated from the rest of the low-energy particles by a large rapidity gap (Δy) empty of hadrons. In this situation of a hard isolated gluon, the octet field is expected not to be distorted by multiple gluon emission and by related colour neutralization processes of small rapidity ranges 9. The price to pay for such a selection is, however, a strong reduction of the number of events because of the Sudakov form factor 13.
2 Data sample and 3-jet event selection
The data sample used has been taken by the DELPHI experiment at the LEP collider at √s = 91 GeV in the years 1992-1995. About 220000 3-jet events have been selected, obtained by using the appropriate cuts for track quality and for the hadronic event type 14, as well as by applying a kt cluster algorithm (Durham) 15 with y_cut = 0.015. For the jet determination, all topologies with Θ2, Θ3 = 135° ± 35° have been used, where the jets are numbered with respect to the energy calculated from the inter-jet angles Θi, i.e. E3 ≤ E2 ≤ E1. The inter-jet angles are numbered according to the jets opposing them 16,17,18. The jet with the highest energy E1 (jet-1) is in most cases a quark jet, that with the smallest energy E3 (jet-3) the gluon jet. Monte Carlo simulations show, for the above-mentioned conditions, a quark-jet contribution of ≥ 90% for jet-1 and a gluon-jet contribution of about 70% for jet-3. This is, e.g., in agreement with the numbers quoted by L3 with similar jet energies and selections 19 (E1 = 41.4 GeV, E2 = 32.2 GeV and E3 = 17.7 GeV) b. Heavy quark (b- and c-quark) events are classified using an impact parameter technique 20,21. In the present study, events are only accepted if they do not exhibit a b-quark signal. The intention is to compare gluon jets only to 'light-quark' jets. A corresponding sample of Monte Carlo simulations (JETSET) of about twice the event statistics has been created for comparisons. All comparisons are done with data not corrected for the detector performance; the corresponding Monte Carlo includes the full simulation of the detector effects.
b Although the mean energies of jet-1 and jet-3 differ by more than a factor 2, the maximum possible rapidities and mean multiplicities differ much less (e.g. ⟨n_jet-3⟩ = 9, ⟨n_jet-1⟩ = 11.6).
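The "energy calculated from the inter-jet angles" used for the jet ordering above follows from the angles alone under massless, planar 3-jet kinematics. A minimal sketch, assuming the standard relation Ei = √s sinΘi / (sinΘ1 + sinΘ2 + sinΘ3), with Θi the inter-jet angle opposite jet i (function names are ours, not DELPHI code):

```python
import math

def jet_energies(sqrt_s, theta1, theta2, theta3):
    """Jet energies of a planar 3-jet event, calculated from the
    inter-jet angles (radians) assuming massless kinematics.
    theta_i is the inter-jet angle opposite jet i, so the most
    energetic jet sits opposite the largest angle."""
    sines = [math.sin(theta1), math.sin(theta2), math.sin(theta3)]
    norm = sum(sines)
    return [sqrt_s * s / norm for s in sines]

# symmetric ("Mercedes") topology: all inter-jet angles 120 degrees
e1, e2, e3 = jet_energies(91.0, *([2.0 * math.pi / 3.0] * 3))
```

For the symmetric topology each jet carries √s/3; the asymmetric topologies selected in the text give the hierarchy E3 ≤ E2 ≤ E1 used to tag the gluon-jet candidate.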
3 Preliminary Results for 3-Jet Events
3.1 Sum of Charges in the Leading System with Rapidity Gap
Figure 1. Sum of the charges, leading system, gluon-jets.

Figure 2. Sum of the charges, leading system, quark-jets.
After the selection of 3-jet events and the determination of enriched quark- and gluon-jet samples, the leading hadronic system of a jet is singled out by requiring that all its particles (charged or neutral) have a rapidity with respect to the jet axis of y ≥ 2 and that the region 0 ≤ y ≤ 2 be empty of any hadrons assigned to this jet. For the charged particles the momenta are required to be larger than 0.2 GeV, for the neutrals this requirement is 0.5 GeV. The requirement that the rapidity interval Δy ≥ 2 below the leading system be empty of hadrons reduces the number of jets drastically. Only 6700 gluon jets and 7200 quark jets meet this condition c. The sum of charges of the particles belonging to the leading system defined as above is given in Fig. 1 for gluon jets and in Fig. 2 for quark jets (full circles) and compared to JETSET Monte Carlo simulations (open circles). The numbers P(SQ) in the upper plots are defined as the number of events with a certain SQ divided by the total number of selected events. They are therefore an estimate of the probability of an event to have a certain SQ. The SQ distribution of the leading system for the gluon jet (Fig. 1) shows for SQ=0 a striking difference between data and simulation. There is a significant enhancement of the data at SQ=0 over the Monte Carlo, as expected (see Section 1, refs. 8,9) when the process of colour octet neutralization is present (as mentioned above, octet neutralization is absent in the simulation). On the other hand, there is no significant difference between the SQ distributions of the data and the JETSET Monte Carlo simulation in the case of quark jets (Fig. 2). The lower parts of Figs. 1 and 2 show the difference of the P(SQ) between the data and the JETSET Monte Carlo simulation. For the gluon jet (Fig. 1) this difference amounts to about 9%, which is more than 4 standard deviations from zero; for the quark jet (Fig. 2) this difference is compatible with zero!
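The leading-system selection and the SQ observable just described can be sketched in a few lines. A toy sketch with hypothetical particle records (rapidity y with respect to the jet axis, charge q, momentum p in GeV), not DELPHI analysis code; the cut values are the ones quoted in the text:

```python
def leading_system(jet_particles):
    """Select the leading system of a jet: leading particles must have
    rapidity y >= 2 w.r.t. the jet axis, and the region 0 <= y <= 2
    must be empty of hadrons.  Momentum cuts as in the text:
    p > 0.2 GeV (charged), p > 0.5 GeV (neutral).
    Returns the leading particles, or None if the jet fails."""
    accepted = [h for h in jet_particles
                if h['p'] > (0.2 if h['q'] != 0 else 0.5)]
    if any(0.0 <= h['y'] <= 2.0 for h in accepted):
        return None                    # the rapidity gap is not empty
    leading = [h for h in accepted if h['y'] >= 2.0]
    return leading or None

def sum_of_charges(leading):
    """SQ of a leading system."""
    return sum(h['q'] for h in leading)

# toy jets: the first passes the gap cut with SQ = 0,
# the second fails because a neutral hadron sits inside the gap
jet_pass = [{'y': 3.1, 'q': +1, 'p': 4.0}, {'y': 2.6, 'q': -1, 'p': 2.5}]
jet_fail = jet_pass + [{'y': 1.0, 'q': 0, 'p': 0.8}]
```

P(SQ) as plotted in Figs. 1 and 2 is then simply the fraction of selected jets whose leading system has a given SQ.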
3.2 Sum of Charges of the two fastest Particles

In order to increase statistics, the sum of charges (SQ) of the two fastest charged particles, without the requirement of a rapidity gap, is plotted in Fig. 3 for the gluon jet ('jet-3') and in Fig. 4 for the quark jet ('jet-1'). Compared to the 9% effect for the leading system with rapidity gap Δy ≥ 2 (Fig. 1b), there is still a 3.5% enhancement of data over Monte Carlo in Fig. 3b, and, because of the small statistical error, even with a higher significance. This enhancement is again not seen for the quark jet in Fig. 4. Requiring for the two fastest tracks a rapidity gap of Δy ≥ 1 (figures not shown), the enhancement rises to a 6% effect, but with reduced statistical significance.

c Choosing a rapidity gap Δy ≥ 2 is a compromise between the requirement that the gap should be as large as possible and that the number of events should remain reasonable.
Figure 3. Sum of the charges, 2 fastest charged particles, gluon-jets
Figure 4. Sum of the charges, 2 fastest charged particles, quark-jets
3.3 Checks

The following sources of systematic errors have been considered:

i. Quality of event reconstruction. Due to loss of tracks and badly reconstructed tracks in the detector, and due to errors when assigning particles to the three jets by the algorithm, there is usually a difference of several GeV between the jet energy (E_calc) calculated from the angles between jets 22 and the sum of energies of all particles assigned to the jet (E_sum). An event quality cut has been applied, cutting away the roughly 30% of observed jets with the largest difference E_calc - E_sum. No significant change of the signals at SQ=0, in either gluon or quark jets, has been observed.

ii. The influence of track finding efficiency in the detector. In order to investigate the influence of track finding efficiency, the effect of a reduction of the efficiency by 1% has been simulated. No significant change in the signals at SQ=0 has been observed.

iii. To investigate whether the good agreement between data and Monte Carlo in quark jets is accidental and only due to the larger particle momenta with less measurement quality, in a test run only particles with momenta below 30 GeV have been accepted in jet-1. The agreement with the Monte Carlo (at detector level) persists.

iv. In the present JETSET simulation, the popcorn parameter is set to PARJ(5)=0.5. However, in ref. 3 it is shown that this value is too large and does not reproduce baryon correlations. We compared SQ distributions with PARJ(5) between 0.5 and 0.01 without explaining the surplus at SQ=0.

Finally, it should be remarked that in a study of OPAL 23, done in a different context, a surplus of SQ=0 events is also visible in tagged gluon jets. It has, however, not been commented on there.
3.4 Mass Spectra

Fig. 5 shows, for both the gluon jet and the quark jet, the effective mass distribution M of the leading system, without neutral particles, with a required rapidity gap Δy ≥ 2 for the charged particles and with the total charge of the system being zero. The number of charged particles in the leading system has to be 2, 4, 6 etc. Several peaks can be observed for the gluon jet (Fig. 5a). One peak around M ≈ 0.8 GeV might be attributed to the ρ resonance, another at M ≲ 0.5 GeV to a reflection of η, η′ and ω. The latter statement is corroborated by the fact that in events with no neutrals the peak at M ≲ 0.5 GeV vanishes. Other peaks (M ≈ 1 GeV, M ≈ 1.4 GeV, ...) show up with only weak statistical significance. The difference between data and simulation of the M distributions of the leading system is given in Fig. 5b. The ρ region is slightly overestimated by the simulation; the enhancement at M ≈ 0.5 GeV seems to be understood by the simulation. Besides the region at very low
Figure 5. Effective mass distribution of the leading system for both (a) gluon-jets and (c) quark-jets, as well as the respective differences to the Monte Carlo (b) and (d).
M ≲ 0.3 GeV, the two enhancements in the M distribution of the leading system of the gluon jet at M ≈ 1 GeV and M ≈ 1.4 GeV are not reproduced by
the JETSET Monte Carlo simulation. Fig. 5c shows the corresponding mass distribution of quark jets and Fig. 5d the difference between data and simulation of the M distributions of the leading system for the quark jet. Compared to the gluon jet, there is no enhancement at M ≈ 1 GeV in Fig. 5c, and in Fig. 5d no excess over the simulation at M ≈ 1 GeV and M ≈ 1.4 GeV. Since only charged particles are used for the mass spectra, since reflections are expected from resonances/clusters decaying partly into neutral particles, and because of the limited statistics, no decisive conclusions can be drawn yet from the mass distributions.
4 Summary and Conclusions
In the present study, first efforts have been undertaken to search for the existence of octet neutralization in the fragmentation of gluon jets. The full statistics of 1992-1995 at √s = 91.1 GeV obtained by the DELPHI collaboration is used to select 3-jet events and to single out quark jets (purity ≥ 90%) and gluon jets (purity about 70%) thereof. A leading system of a jet is defined which is separated from the rest of the low-energy particles by a rapidity gap of width Δy ≥ 2 empty of hadrons. The sum of charges of this leading system is studied. For the gluon jets, an enhancement of neutral leading systems over the Monte Carlo prediction of about 9% is seen (more than 4 standard deviations above zero); on the other hand, no such enhancement is seen in the quark jet! This could be due to the existence of colour octet neutralization in gluon jets. An even more significant deviation is revealed for the sum of charges of the 2 fastest particles without demanding a rapidity gap (a 3.5% effect, but with very high significance). Several sources of possible systematic errors have been considered. None can explain the observations. In order to assess the existence of colour octet neutralization, further checks have to be done, e.g. examination of the ability of modified Monte Carlos to explain the observations, determination of the quantum numbers of the leading system, better separation of the gluon jet, better insight into the role of the neutrals, etc. It can, however, be argued that the present JETSET Monte Carlo simulation (which has been tuned to various quantities measured by the DELPHI experiment 24) has an intrinsic shortcoming in describing the sum of charges in the leading system of the gluon jet! The effective mass of the leading system with sum of charges SQ = 0 is studied. An enhancement for the gluon jet at M ≈ 1 GeV, which is seen neither in the simulation nor for the quark jet, and another at M ≈ 1.4 GeV, are as yet of only weak statistical significance.
Acknowledgements

We thank W. Ochs for encouraging us to start the above study and for discussions in the course of it, O. Klapp for technical support for the jet selection and M. Siebel for valuable comments. We thank T. Sjoestrand for valuable discussions and clarifications.
References

1. T. Sjoestrand, Comp. Phys. Comm. 82 (1994) 74.
2. P. Abreu et al. (DELPHI Collab.), Phys. Lett. B407 (1997) 174.
3. P. Abreu et al. (DELPHI Collab.), Phys. Lett. B416 (1998) 247.
4. P. Abreu et al. (DELPHI Collab.), Phys. Lett. B490 (2000) 61.
5. DELPHI-Note 283, CERN-EP 2002-023.
6. F. Mandl, Proceedings 31st Int. Symp. on Multiparticle Dynamics, eds. ... (WSPC, Singapore, 2002).
7. DELPHI Collab., DELPHI-Note 2002-053-conf-587.
8. P. Minkowski and W. Ochs, Phys. Lett. B485 (2000) 139.
9. P. Minkowski and W. Ochs, Proceedings 30th Int. Symp. on Multiparticle Dynamics, eds. R. Csorgo et al. (WSPC, Singapore, 2001).
10. C. Peterson and T. F. Walsh, Phys. Lett. B91 (1980) 455.
11. T. Sjoestrand, private communication.
12. P. Abreu et al. (DELPHI Collab.), Eur. Phys. J. C17 (2000) 207; L3 Collab., CERN-PPE/92-83.
13. V.V. Sudakov, Sov. Phys. JETP 3 (1956) 65; W. Ochs and T. Shimada, Proc. XXIV Int. Symp. on QCD and Multiparticle Production, 1999.
14. P. Abreu et al. (DELPHI Collab.), Phys. Lett. B355 (1995) 415.
15. S. Catani et al., Phys. Lett. B269 (1991) 432; S. Bethke et al., Nucl. Phys. B370 (1992) 310.
16. P. Abreu et al. (DELPHI Collab.), Eur. Phys. J. C4 (1998) 1.
17. K. Hamacher et al., ICHEP'98, paper 147, DELPHI 98-86 CONF 154; ICHEP'99, DELPHI 99-127 CONF 314.
18. P. Abreu et al. (DELPHI Collab.), Z. Phys. C70 (1996) 179.
19. L3 Collab., Phys. Lett. B407 (1997) 38.
20. DELPHI Collab., Nucl. Instr. and Meth. A378 (1996) 57; G. Borisov and C. Mariotti, Nucl. Instr. and Meth. A372 (1996) 181.
21. P. Abreu et al. (DELPHI Collab.), Phys. Lett. B462 (1999) 425; O. Klapp, Thesis, WUB-DIS 99-16.
22. P. Abreu et al. (DELPHI Collab.), Eur. Phys. J. C13 (2000) 573.
23. P.D. Acton et al. (OPAL Collab.), Phys. Lett. B302 (1993) 523.
24. P. Abreu et al. (DELPHI Collab.), Z. Phys. C73 (1996) 11.
GENUINE THREE-PARTICLE BOSE-EINSTEIN CORRELATIONS IN HADRONIC Z DECAY

J.A. VAN DALEN, W. KITTEL, W.J. METZGER
PRESENTED BY W. KITTEL

HEFIN, University of Nijmegen/NIKHEF, Toernooiveld 1, 6525 ED Nijmegen, NL
FOR THE L3 COLLABORATION

We measure three-particle Bose-Einstein correlations in hadronic Z decay with the L3 detector at LEP. Genuine three-particle Bose-Einstein correlations are observed. By comparing two- and three-particle correlations we find that, in the conventional interpretation, the data are consistent with what is expected from fully incoherent pion production. Alternative explanations are, however, not excluded.
Introduction

The shape and size in space-time of a pion source can be determined from the shape and size of the correlation function of two identical pions in energy-momentum space. [1] Recent results from LEP are given in [2-4]. Additional information can be derived from higher-order correlations. Furthermore, such correlations constitute an important theoretical issue for the understanding of Bose-Einstein correlations (BEC). [5] Three-particle correlations are sensitive to asymmetries in the particle production mechanism [6,7] which cannot be studied by two-particle correlations. In addition, the combination of two- and three-particle correlation analyses gives access to the degree of coherence of pion production, [8,9] which is very difficult to investigate from two-particle correlations alone due to the effect of long-lived resonances on the correlation function. The DELPHI [10] and OPAL [11] collaborations have both studied three-particle correlations but did not investigate the degree of coherence.
The Data and Monte Carlo

The data used in this analysis were collected by the L3 detector [12] in 1994 at a centre-of-mass energy of 91.2 GeV and correspond to a total integrated luminosity of 48.1 pb−1. The Monte Carlo (MC) event generators JETSET [13] and HERWIG [14] are used to simulate the signal process. Within JETSET, BEC are simulated using the BE0 algorithm. [15,16] The generated events are passed through the L3 detector simulation program, which is based on the GEANT [17] and GHEISHA [18] programs, reconstructed and subjected to the same selection criteria as the data. The event selection is identical to that presented in [3], resulting in about one million hadronic Z decay events, with an average track multiplicity of about 12. Two additional cuts are performed in order to reduce the dependence of the detector correction on the MC model used: tracks with measured momentum greater than 1 GeV are rejected, as are pairs of like-sign tracks with opening angle below 3°. This results in an average track multiplicity of about 7. For the computation of three-particle correlations, each possible triplet of like-sign tracks is used to compute the variable Q3, defined by Q3² = Q12² + Q23² + Q31², where Qij is the absolute four-momentum difference between particles i and j. Since Qij, and thus Q3, depends both on the energy of the particles and on the angle between them, small Qij can be due to small angles or low energies. In a MC generator with BE effects, the fraction of pairs at small Qij with small angle is larger than in one without. Consequently, the estimated detection efficiency depends on the MC model used. The momentum and opening angle cuts reduce this model dependence. After selection, the average triplet multiplicity is about 6. In the region of interest, Q3 < 1 GeV, the loss of triplets by the momentum and opening angle cuts is about 40%. The momentum cut improves the resolution of Q3 by a factor three with respect to that for the full momentum spectrum. Using MC events, its average is estimated to be 26 MeV for triplets of tracks with Q3 < 0.8 GeV. We choose a bin size of 40 MeV, somewhat larger than this resolution.
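The construction of Q3 from the four-momenta of a like-sign triplet can be made concrete in a few lines. A sketch, not L3 analysis code; it also verifies the equal-mass identity Q3² = M123² − 9mπ² used later in the text:

```python
import math

M_PI = 0.13957  # GeV, charged-pion mass

def q_inv(p, q):
    """Absolute four-momentum difference Q_ij = sqrt(-(p_i - p_j)^2),
    metric (+,-,-,-); four-vectors are (E, px, py, pz) tuples."""
    dE, dx, dy, dz = (a - b for a, b in zip(p, q))
    return math.sqrt(max(dx * dx + dy * dy + dz * dz - dE * dE, 0.0))

def q3(p1, p2, p3):
    """Q3 with Q3^2 = Q12^2 + Q23^2 + Q31^2."""
    return math.sqrt(q_inv(p1, p2) ** 2 + q_inv(p2, p3) ** 2
                     + q_inv(p3, p1) ** 2)

def pion(px, py, pz):
    """On-shell charged-pion four-vector from its three-momentum (GeV)."""
    return (math.sqrt(M_PI ** 2 + px * px + py * py + pz * pz), px, py, pz)

# toy triplet: invariant mass M123 of the three pions
a, b, c = pion(0.10, 0.00, 0.30), pion(0.12, 0.02, 0.25), pion(0.08, -0.03, 0.33)
e, x, y, z = (sum(v) for v in zip(a, b, c))
m123 = math.sqrt(e * e - x * x - y * y - z * z)
```

For equal masses, Qij² = 2 pi·pj − 2mπ² and M123² = 9mπ² + ΣQij², so the two definitions of Q3 agree exactly.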
The Analysis

The three-particle number density ρ3(p1,p2,p3) of particles with four-momenta p1, p2 and p3 can be described in terms of single-particle and two-particle number densities and genuine three-particle correlations as

ρ3(p1,p2,p3) = ρ1(p1) ρ1(p2) ρ1(p3) + Σ(3) { ρ1(p1) [ρ2(p2,p3) − ρ1(p2) ρ1(p3)] } + C3(p1,p2,p3) ,   (1)

where the sum is over the three possible permutations and C3 is the third-order cumulant. The ρ1ρ2 terms contain all the two-particle correlations. In order to focus on the correlation due to BE interference, we replace products of single-particle densities by the corresponding two- or three-particle density, ρ0, which would occur in the absence of BEC, and define the correlation functions

R3 = ρ3/ρ0 ,   R2 = ρ2/ρ0 .   (2)

Genuine three-particle correlations are then given by the normalized cumulant correlation function

K3 = C3/ρ0 .   (3)
The kinematical variable Q3 is used to study three-particle correlations. For a three-pion system, Q3 = √(M123² − 9mπ²), with M123 the invariant mass of the pion triplet and mπ the mass of the pion. ρ3 is defined as

ρ3(Q3) = (1/Nev) dn_triplets/dQ3 ,   (4)
with Nev the number of selected events and n_triplets the number of triplets of like-sign tracks; ρ2 is defined analogously. Conventionally, assuming totally incoherent production of particles and a source density f(x) in space-time with no dependence on the four-momentum of the emitted particles, the BE correlation functions are related to the source density by [8,19]

R2(Qij) = 1 + |F(Qij)|² ,   (5)

R3(Q12,Q23,Q31) = 1 + |F(Q12)|² + |F(Q23)|² + |F(Q31)|² + 2 Re{F(Q12) F(Q23) F(Q31)} ,   (6)

K3(Q12,Q23,Q31) = 2 Re{F(Q12) F(Q23) F(Q31)} ,   (7)

where F(Qij) is the Fourier transform of f(x). R2 does not depend on the phase φij contained in F(Qij) ≡ |F(Qij)| exp(iφij). However, this phase survives in the three-particle BE correlation functions, Eqs. (6) and (7). Assuming fully incoherent particle production, the phase φij can be non-zero only if the space-time distribution of the source is asymmetric and Qij > 0. Defining

ω = K3(Q12,Q23,Q31) / (2 √[K2(Q12) K2(Q23) K2(Q31)]) ,  with K2(Qij) ≡ R2(Qij) − 1 ,   (8)

then for an incoherent source Eqs. (5) and (7) imply that ω = cos φ, where φ = φ12 + φ23 + φ31. Furthermore, as Qij → 0, φij → 0, and hence ω → 1. For Qij > 0, a deviation from unity can be caused by an asymmetry in the production. However, this will only result in a small (a few percent) reduction of ω, [6,7] and this only in the case where the asymmetry occurs around the point of highest emissivity. It is important to emphasize that for (partially) coherent sources, ω can still be defined by Eq. (8), but Eqs. (5-7) are no longer valid; in that case more complicated expressions are needed, [7] and one can no longer deduce that ω = cos φ, or that ω → 1 as Qij → 0. In at least one type of model, one can make the stronger statement that the limit ω = 1 at Qij → 0 can only be reached if the source is fully incoherent. [20]
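The identity ω = cos φ for an incoherent source can be checked numerically for any choice of moduli and phases of F. A toy sketch (arbitrary illustrative values, not a physical source model):

```python
import cmath
import math

def omega(F12, F23, F31):
    """Eq. (8): omega = K3 / (2 sqrt(K2(Q12) K2(Q23) K2(Q31))),
    with K2(Qij) = |F(Qij)|^2 and K3 = 2 Re{F(Q12) F(Q23) F(Q31)} (Eq. 7)."""
    k3 = 2.0 * (F12 * F23 * F31).real
    denom = 2.0 * abs(F12) * abs(F23) * abs(F31)  # 2 * sqrt of the K2 product
    return k3 / denom

# arbitrary toy moduli and phases
phi12, phi23, phi31 = 0.3, 0.1, -0.2
F12 = 0.8 * cmath.exp(1j * phi12)
F23 = 0.5 * cmath.exp(1j * phi23)
F31 = 0.6 * cmath.exp(1j * phi31)
w = omega(F12, F23, F31)   # equals cos(phi12 + phi23 + phi31)
```

The moduli cancel between numerator and denominator, leaving exactly cos φ; with all phases zero (a symmetric source), ω = 1, as stated in the text.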
Determination of R3 and K3

The reference sample, from which ρ0 is determined, is formed by mixing particles from different data events; Q3 is calculated for each triplet of like-sign tracks, resulting in the density ρmix(Q3). This mixing procedure removes more correlations than just those of BE. This effect is taken into account using a MC model without BE effects (JETSET or HERWIG) at generator level and using pions only:

ρ0(Q3) = ρmix(Q3) Cmix(Q3) ,  where  Cmix(Q3) = [ρ3(Q3)/ρmix(Q3)]_{MC, no BE} .   (9)
The density ρ3, measured in the data, must be corrected for detector resolution, acceptance, efficiency and for particle misidentification. For this we use a multiplicative factor, Cdet, derived from MC studies. Since no hadrons are identified in the analysis, Cdet is given by the ratio of the three-pion correlation function found from MC events at generator level to the three-particle correlation function found using all particles after full detector simulation, reconstruction and selection. Combining this correction factor with Eqs. (2) and (9) results in

R3(Q3) = [ρ3(Q3) / (ρmix(Q3) Cmix(Q3))] Cdet(Q3) .   (10)

The genuine three-particle BE correlation function, K3, is obtained via

K3 = R3 − R1,2 ,   (11)

where R1,2 ≡ (Σ ρ1ρ2)/ρ0 − 2 is the contribution due to two-particle correlations, as may be seen from Eqs. (1) and (2). The product of densities Σ ρ1(p1) ρ2(p2,p3) is determined by a mixing procedure in which two like-sign tracks from the same event are combined with one track of the same charge from another event with the same multiplicity. [21] The ratio (Σ ρ1ρ2)/ρ0 is also corrected for detector effects, as is ρ3/ρmix. In our analysis, we use JETSET without BEC and HERWIG to determine Cmix, and JETSET with and without BEC as well as HERWIG to determine Cdet. These six MC combinations serve to estimate systematic uncertainties. The corrections are largest at small Q3. At Q3 = 0.16 GeV, these corrections to R3 are Cmix ≈ 5-30% and Cdet ≈ 20-30%, depending on which MC is
Figure 1. (a) The three-particle BE correlation function, R3, (b) the contribution of two-particle correlations, R1,2, and (c) R2. In (c) the dashed and full lines show the fits of Eqs. (13) and (14), respectively.

Figure 2. The genuine three-particle BE correlation function R3^genuine. In (a) the full line shows the fit of Eq. (12), the dashed line the prediction of completely incoherent pion production and a Gaussian source density in space-time, derived from parametrizing R2 with Eq. (13). In (b) Eqs. (15) and (14) are used, respectively.
used. The corrections for R3 and R1,2 are correlated and largely cancel in calculating K3 by Eq. (11). To correct the data for two-pion Coulomb repulsion in calculating ρ2, each pair of pions is weighted by the inverse Gamow factor. [22] It has been shown [23] that this Gamow factor is an approximation suitable for our purposes. For ρ3, the weight of each triplet is taken as the product of the weights of the three pairs within it. For Σ ρ2ρ1 we use the same weight, but with G2(Qij) = 1 when particles i and j come from different events. At the lowest Q3 values under consideration, the Coulomb correction is approximately 10%, 3% and 2% for ρ3, Σ ρ1ρ2 and ρ2, respectively.
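The Coulomb weights can be sketched as follows. The point-like Gamow form G2(Q) = 2πη/(e^{2πη} − 1) with η = α mπ/Q for a like-sign pion pair is the standard approximation (the specific η formula is our assumption, not spelled out in the text):

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
M_PI = 0.13957          # GeV, charged-pion mass

def gamow(q):
    """Gamow factor for a like-sign pion pair with four-momentum
    difference q (GeV): G2 = 2*pi*eta / (exp(2*pi*eta) - 1), with
    eta = alpha * m_pi / q.  G2 < 1: Coulomb repulsion depletes pairs."""
    x = 2.0 * math.pi * ALPHA * M_PI / q
    return x / math.expm1(x)

def triplet_weight(q12, q23, q31):
    """Coulomb weight of a like-sign triplet: the product of the
    inverse Gamow factors of its three pairs, as described in the text."""
    return 1.0 / (gamow(q12) * gamow(q23) * gamow(q31))
```

The correction is sizeable only at small Q, matching the few-percent effects quoted above at the lowest Q3 values.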
Results

The measurements of R3, R1,2 and R2 are shown in Fig. 1. The full circles correspond to the averages of the data points obtained from the six possible MC combinations used to determine Cmix and Cdet. The error bars include both the statistical uncertainty and the systematic uncertainty of the MC modeling. Fig. 1a shows the existence of three-particle correlations, and from Fig. 1b it is clear that about half is due to two-particle correlations. Figure 1c shows the two-particle correlations. As a check, R3, R1,2 and R2 are also computed for MC models without BEC, both HERWIG and JETSET, after detector simulation, reconstruction and selection. The results are shown in Fig. 1 as open circles and, as expected, flat distributions around unity are observed. Figure 2a shows the genuine three-particle BE correlation function R3^genuine = K3 + 1. The data points show the existence of genuine three-particle BE correlations. The open circles correspond to MC without BEC and form a flat distribution around unity, as expected.
Gaussian Parametrization

A fit from Q3 = 0.16 to 1.40 GeV, using the covariance matrix including both the statistical and the systematic uncertainty due to the MC modeling, is performed on the data points with the parametrization [8,10,11,21]

R3^genuine(Q3) = γ [1 + 2 λ̃^(3/2) exp(−R̃² Q3²/2)] (1 + ε̃ Q3) ,   (12)

where γ is an overall normalization factor, λ̃ measures the strength of the correlation, R̃ is a measure of the effective source size in space-time, and the term (1 + ε̃Q3) takes into account possible long-range momentum correlations. The form of this parametrization is a consequence of the assumptions that ω = 1 and that |F(Qij)| = √λ exp(−R²Qij²/2), as would be expected for a Gaussian source density. The fit results are given in the first column of Table 1 and shown as the full line in Fig. 2a. In addition to the MC modeling, we investigate four other sources of systematic uncertainty on the fit parameters: the influence of a different mixing sample is studied; systematic uncertainties related to track and event selection and to the choice of the fit range are evaluated; the analysis is repeated with stronger and weaker selection criteria; finally, we study the influence of removing like-sign track pairs with small polar and azimuthal opening angles. The total systematic uncertainty due to these four sources is obtained by adding the four uncertainties in quadrature. To measure the ratio ω, we also need to determine the two-particle BE correlation function

R2(Q) = γ [1 + λ exp(−R² Q²)] (1 + ε Q) .   (13)
Table 1. Values of the fit parameters.

                       Gaussian                                Edgeworth
                R3^genuine, Eq. (12)   R2, Eq. (13)     R3^genuine, Eq. (15)   R2, Eq. (14)
γ               0.96±0.03±0.02         0.98±0.03±0.02   0.95±0.03±0.02         0.96±0.03±0.02
λ̃ (λ)           0.47±0.07±0.03         0.45±0.06±0.03   0.75±0.10±0.03         0.72±0.08±0.03
R̃ (R), fm       0.65±0.06±0.03         0.65±0.03±0.03   0.72±0.08±0.03         0.74±0.06±0.02
ε̃ (ε), GeV⁻¹    0.02±0.02±0.02         0.01±0.01±0.02   0.02±0.02±0.02         0.01±0.02±0.02
κ̃ (κ)           -                      -                0.79±0.26±0.15         0.74±0.21±0.15
χ²/NDF          29.9/27                60.2/29          17.7/26                26.0/28
The parametrization starts at Q = 0.08 GeV, consistent with the study of R3 from Q3 = 0.16 GeV. The fit results are given in the second column of Table 1 and in Fig. 1c. If the space-time structure of the pion source is Gaussian and the pion production mechanism is completely incoherent, λ̃ and R̃, as derived from the fit by Eq. (12), measure the same correlation strength and effective source size as λ and R of Eq. (13). The values of λ̃ and R̃ are indeed consistent with λ and R, as expected for fully incoherent production of pions (ω = 1). Using the values of λ and R instead of λ̃ and R̃ in Eq. (12), which is justified if ω = 1, results in the dashed line in Fig. 2a. It is only slightly different from the result of the fit by Eq. (12), indicating that ω is indeed near unity. Another way to see how well R3^genuine corresponds to a completely incoherent pion production interpretation and a Gaussian source density in space-time is to compute ω with Eq. (8) for each bin in Q3 (from 0.16 to 0.80 GeV), using the measured R3^genuine and the R2 derived from the parametrization of Eq. (13). The result is shown in Fig. 3a. At low Q3, ω appears to be higher than unity.
Edgeworth Parametrization. However, the assumption of a Gaussian source density is only a rough approximation. Deviations from a Gaussian can be studied by means of an Edgeworth expansion [25]
R2(Q) = γ [1 + λ exp(−R²Q²)(1 + κ H3(√2 RQ)/6)] (1 + εQ), (14)
where κ measures the deviation from the Gaussian and H3(x) ≡ x³ − 3x is the third-order Hermite polynomial. The fit results for the two-particle BE
correlation function with this parametrization are given in the fourth column of Table 1. Using Eq. (14) and Eq. (8), and assuming w = 1, Eq. (12) becomes the corresponding Edgeworth parametrization of R2genuine(Q3), Eq. (15),
where the approximation is made that Qij = Q3/2. The effect of this approximation on R2genuine is small compared to the statistical uncertainty. The results of a fit by Eq. (15) are given in the third column of Table 1. For both R2genuine and R2, a better χ²/NDF is found using the Edgeworth expansion, and the values of λ are significantly higher. The values of λ and R from Eq. (15) are still consistent with the corresponding λ and R from Eq. (14), as would be expected for a fully incoherent production mechanism of pions. In Fig. 2b, as in Fig. 2a, we observe good agreement between the fit of R2genuine by the parametrization of Eq. (15) and the prediction of a completely incoherent pion production mechanism, derived from parametrizing R2 with Eq. (14), over the full range of Q3. In Fig. 3b, no deviation from unity is observed for the ratio w. This indicates that the data agree with the assumption of fully incoherent pion production. Fits to samples generated with JETSET with BE effects modelled by BE0 or BE32 [16] result in values of R in agreement with the data but in significantly higher values of λ.
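The effect of the Edgeworth term can be sketched numerically. The minimal example below (not part of the analysis) evaluates Eq. (14) with the fitted κ = 0.74 from Table 1 and, for comparison, with κ = 0, where it reduces to the Gaussian form of Eq. (13).

```python
import math

def h3(x):
    """Third-order Hermite polynomial, H3(x) = x^3 - 3x."""
    return x ** 3 - 3.0 * x

def r2_edgeworth(q, gamma=0.96, lam=0.72, r=0.74 / 0.1973, eps=0.01, kappa=0.74):
    """Eq. (14): Gaussian times first-order Edgeworth correction.
    q in GeV; r in GeV^-1 (0.74 fm converted with hbar*c = 0.1973 GeV fm)."""
    gauss = math.exp(-(r * q) ** 2)
    edge = 1.0 + kappa * h3(math.sqrt(2.0) * r * q) / 6.0
    return gamma * (1.0 + lam * gauss * edge) * (1.0 + eps * q)

# With kappa = 0 the expression is purely Gaussian; the Edgeworth term
# modifies the shape of the correlation peak at small Q.
print(r2_edgeworth(0.1, kappa=0.0))
print(r2_edgeworth(0.1))
```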
Other Experiments. UA1 [26] finds cumulants K3 and K2 leading to a ratio w larger than unity. NA22 [27] also finds w larger than, but consistent with, unity within its large errors. In agreement with earlier observations from the cumulant moments, NA44 finds ⟨w⟩ = 0.20 ± 0.02 ± 0.19 for SPb, i.e. no genuine three-particle correlations are found outside the (large) errors. [29] What is particularly remarkable, however, is that the same experiment with the same methodology finds an average ⟨w⟩ = 0.85 ± 0.02 ± 0.21 for PbPb collisions, and that this is supported by a value of ⟨w⟩ = 0.606 ± 0.005 ± 0.179 earlier reported by WA98. [28] So, if we trust NA44 (and we have no reason not to) and try to stick with conventional pion interferometry, we end with a beautiful dilemma: i) e+e− collisions are consistent with fully incoherent production (w ≈ 1)! ii) SPb collisions are consistent with coherent pion production (w ≈ 0)!
"1, 0
Gaussian , ,
0.2
,
,
0.4
, , 0.6
Q, [GeW
,
,
I
0.8
0 . 20 5 1 E d g e w o f l h , , 0.2
6:[GeiY
,
,
0.8
Figure 3. The ratio w as a function of Q3 assuming R2 is described (a) by the Gaussian, and (b) by the first-order Edgeworth expansion of the Gaussian.
iii) PbPb is somewhere in between! It could not be more opposite to any reasonable expectation from conventional interferometry. The hint for an alternative interpretation comes from so-called dilution. What conventional interferometry calls the cosine of a phase may in fact have nothing to do with a phase: it is simply the ratio of K3 and twice K2^(3/2). It will be a challenge for the string model to explain why this is unity for an e+e− string. If that can be explained, the rest looks easy and very much in line with the unexpected behavior of the strength parameter λ observed for heavy-ion collisions. The ratio w = K3/(2 K2^(3/2)) decreases with the number N of independent sources like N/(2 N^(3/2)) ∝ N^(−1/2). As λ does, it decreases with increasing atomic mass number A up to SPb collisions. A saturation or increase of λ at and above this A value has been explained by percolation [30] of strings. Exactly the same explanation can be used to understand an initial decrease of ⟨w⟩ with increasing A, followed by an increase between SPb and PbPb collisions. [31]
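The N^(−1/2) dilution of w for N independent, identical sources can be verified in a few lines: cumulants of independent sources add, so K2 = N k2 and K3 = N k3. The single-source values k2, k3 below are illustrative only, not measured numbers.

```python
def w_ratio(n_sources, k2=1.0, k3=2.0):
    """w = K3 / (2 K2^(3/2)) for n_sources independent identical sources;
    cumulants are additive, so K_q = N * k_q."""
    K2 = n_sources * k2
    K3 = n_sources * k3
    return K3 / (2.0 * K2 ** 1.5)

# Quadrupling the number of sources halves w: w scales as N^(-1/2).
print(w_ratio(1), w_ratio(4), w_ratio(16))  # 1.0 0.5 0.25
```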
References
1. G. Goldhaber et al., Phys. Rev. 120, 300 (1960); D.H. Boal, C.K. Gelbke and B.K. Jennings, Rev. Mod. Phys. 62, 553 (1990).
2. DELPHI Collab., P. Abreu et al., Phys. Lett. B 286, 201 (1992); Z. Phys. C 63, 17 (1994); ALEPH Collab., D. Decamp et al., Z. Phys. C 54, 75 (1992); OPAL Collab., G. Alexander et al., Z. Phys. C 72, 389 (1996).
3. L3 Collab., M. Acciarri et al., Phys. Lett. B 458, 517 (1999).
4. OPAL Collab., G. Abbiendi et al., Eur. Phys. J. C 16, 423 (2000); DELPHI Collab., P. Abreu et al., Phys. Lett. B 471, 460 (2000).
5. M. Biyajima et al., Progr. Theor. Phys. 84, 931 (1990).
6. H. Heiselberg and A.P. Vischer, Phys. Rev. C 55, 874 (1997) and Preprint nucl-th/9707036 (1997).
7. U. Heinz and Q. Zhang, Phys. Rev. C 56, 426 (1997).
8. B. Lorstad, Int. J. Mod. Phys. A 4, 2861 (1989).
9. I.V. Andreev, M. Plümer and R.M. Weiner, Int. J. Mod. Phys. A 8, 4577 (1993).
10. DELPHI Collab., P. Abreu et al., Phys. Lett. B 355, 415 (1995).
11. OPAL Collab., K. Ackerstaff et al., Eur. Phys. J. C 5, 239 (1998).
12. L3 Collab., B. Adeva et al., Nucl. Instr. Meth. A 289, 35 (1990); G. Basti et al., Nucl. Instr. Meth. A 374, 293 (1996).
13. T. Sjostrand, Comp. Phys. Comm. 82, 74 (1994).
14. G. Marchesini and B.R. Webber, Nucl. Phys. B 310, 461 (1988); G. Marchesini et al., Comp. Phys. Comm. 67, 465 (1992).
15. L. Lonnblad and T. Sjostrand, Phys. Lett. B 351, 293 (1995).
16. L. Lonnblad and T. Sjostrand, Eur. Phys. J. C 2, 165 (1998).
17. R. Brun et al., CERN report CERN DD/EE/84-1 (1984), revised 1987.
18. H. Fesefeldt, RWTH Aachen report PITHA 85/02 (1985).
19. V.L. Lyuboshitz, Sov. J. Nucl. Phys. 53, 514 (1991).
20. T. Csorgo et al., Eur. Phys. J. C 9, 275 (1999).
21. NA22 Collab., N.M. Agababyan et al., Z. Phys. C 68, 229 (1995).
22. M. Gyulassy, S. Kauffmann and L.W. Wilson, Phys. Rev. C 20, 2267 (1979).
23. E.O. Alt et al., Eur. Phys. J. C 13, 663 (2000).
24. L3 Collab., P. Achard et al., Phys. Lett. B 524, 55 (2002).
25. F.Y. Edgeworth, Trans. Cambridge Phil. Soc. 20, 36 (1905); T. Csorgo and S. Hegyi, Phys. Lett. B 489, 15 (2000).
26. H.C. Eggers, P. Lipa and B. Buschbeck, Phys. Rev. Lett. 79, 197 (1997).
27. N.M. Agababyan et al. (NA22), Z. Phys. C 68, 229 (1995).
28. M.M. Aggarwal et al. (WA98), Phys. Rev. Lett. 85, 2895 (2000).
29. H. Boggild et al. (NA44), Phys. Lett. B 455, 77 (1999); I.G. Bearden et al. (NA44), Phys. Lett. B 517, 25 (2001).
30. M.A. Braun, F. del Moral and C. Pajares, Eur. Phys. J. C 21, 557 (2001).
31. W. Kittel, Acta Phys. Pol. B 32, 3927 (2001); M.A. Braun, F. del Moral and C. Pajares, hep-ph/0201312.
LIKE-SIGN PARTICLE GENUINE CORRELATIONS IN Z0 HADRONIC DECAYS

EDWARD K.G. SARKISYAN (for the OPAL Collaboration)
CERN, EP Division, CH-1211 Geneve 23, Switzerland
and University of Antwerpen, Universiteitsplein 1, B-2610 Wilrijk, Belgium

Correlations among hadrons with the same electric charge produced in Z0 decays are studied using the high-statistics data collected with the OPAL detector at LEP. The method of normalized factorial cumulants is applied to measure the multidimensional genuine correlations up to fourth order. Both all-charge and like-sign particle combinations show strong positive correlations. The rise of the cumulants for all-charge multiplets is found to be increasingly driven by that of like-sign multiplets. The PYTHIA-implemented algorithms to simulate Bose-Einstein effects are found to reproduce reasonably well the measured second- and higher-order correlations among same-charge and among all-charge hadrons.
1. Introduction
Over many decades, correlations in momentum space between hadrons produced in high energy interactions have been extensively studied in different contexts.1 The correlations provide detailed information on the hadronisation dynamics, complementary to that derived from inclusive one-particle and global event-shape spectra. In the present analysis we use the normalized factorial cumulant technique, which allows statistically meaningful results to be obtained down to very small phase space cells. The cumulants of order q are a direct measure of the stochastic interdependence among groups of exactly q particles emitted in the same phase space cell.2,3 Therefore, they are well suited for the study of true or "genuine" correlations between hadrons. Experimental studies of hadron correlations are given in reviews.1,4 Those studies show that the correlations between hadrons with the same charge play an increasingly important role as the cell size Δ decreases, thus pointing to the influence of Bose-Einstein (BE) interference effects. In contrast, correlations in multiplets composed of particles with different charges, which are more sensitive to multiparticle resonance decays than like-sign ones, tend to saturate in small phase space domains.1 It is to be noted that the subject has acquired particular importance in connection with high-precision measurements of the W-boson mass at LEP-II.5 For these, better knowledge of correlations in general is needed, as well as realistic Monte Carlo (MC) modelling of BEC. The OPAL collaboration recently reported an analysis of the Δ-dependence of factorial cumulants in hadronic Z0 decays, using much larger statistics than in any previous experiment.6 No distinction was made between multiplets of like-charge particles and those of mixed charge. Clear evidence was seen for large positive genuine correlations up to fifth order. Hard jet production was found to contribute significantly to the observed particle fluctuation patterns. However, MC models (JETSET and HERWIG) gave only a qualitative description of the Δ-dependence of the cumulants. Quantitatively, the models studied, which did not explicitly include BE-type correlation effects, significantly underestimated the correlations between hadrons produced in relatively small cells in momentum space.

2. Factorial cumulant method
The normalized factorial cumulant moment technique3 is used to measure genuine multiparticle correlations. The factorial cumulant moments, or "cumulants", are computed as earlier.6 A D-dimensional phase space is partitioned into M^D cells of equal size Δ. From the number of particles counted in each cell, n_m (m = 1, ..., M^D), event-averaged unnormalized factorial moments, ⟨n[q]⟩, and unnormalized cumulants, k_q^(m), are derived, using their interrelations.2 For example, for q = 2 and 3 one has

k2^(m) = ⟨n_m(n_m − 1)⟩ − ⟨n_m⟩²,
k3^(m) = ⟨n_m(n_m − 1)(n_m − 2)⟩ − 3⟨n_m(n_m − 1)⟩⟨n_m⟩ + 2⟨n_m⟩³.

Here, n[q] ≡ n(n − 1)···(n − q + 1) and the brackets ⟨·⟩ indicate that the average over all events is taken. Normalized cumulants K_q are calculated using the expression of Ref. 7, in which N_m is the number of particles in the mth cell summed over all N events in the sample, N_m = Σ_{j=1}^{N} (n_m)_j, and a horizontal bar indicates averaging over the M^D cells in each event, (1/M^D) Σ_{m=1}^{M^D}.
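The unnormalized cumulants above can be computed directly from per-event cell counts; a minimal sketch (the toy counts are illustrative only, not data):

```python
def factorial_moment(counts, q):
    """Event-averaged unnormalized factorial moment <n[q]>,
    with n[q] = n(n-1)...(n-q+1)."""
    total = 0.0
    for n in counts:
        term = 1
        for i in range(q):
            term *= (n - i)
        total += term
    return total / len(counts)

def k2(counts):
    """Second-order factorial cumulant: <n(n-1)> - <n>^2."""
    return factorial_moment(counts, 2) - factorial_moment(counts, 1) ** 2

def k3(counts):
    """Third-order factorial cumulant: <n[3]> - 3<n[2]><n> + 2<n>^3."""
    n1 = factorial_moment(counts, 1)
    return (factorial_moment(counts, 3)
            - 3.0 * factorial_moment(counts, 2) * n1
            + 2.0 * n1 ** 3)

# Hypothetical per-event counts in one phase space cell:
counts = [2, 0, 1, 3]
print(k2(counts), k3(counts))  # -0.25 -0.75
```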
averaging over the M” cells in each event, ( l / M D )Cz:,. Whereas (dq]) depends on all correlation functions of order 1 < p < q , k, is a direct measure of stochastic dependence in multiplets of exactly q
55
particles: k, vanishes whenever a particle within the q-tuple is statistically independent of one of the others. Non-zero cumulants therefore signal the presence of the “genuine” correlations. In the following, data are presented for “all-charge” and for “like-sign” multiplets. For the former, the cell-counts n, are determined using all charged particles in an event, irrespective of their charge. For the latter, the number of positive particles and the number of negative particles in a cell are counted separately. The corresponding cumulants are then averaged to obtain those for like-sign multiplets. 3. Experimental details
The present analysis uses a sample of approximately 4.1 × 10^6 hadronic Z0 decays collected in 1991-1995 with the OPAL detector8 at LEP. A sample of over 2 million events was generated with JETSET 7.4/PYTHIA 6.1,9 including a full simulation10 of the detector. The model parameters were previously tuned to OPAL data11 but Bose-Einstein effects were not explicitly incorporated. These events were used to determine the efficiencies of track and event selection and for correction purposes. In addition, for the evaluation of systematic errors, over 1.1 million events were simulated with PYTHIA including BEC with the algorithm(a) BE32. The event selection criteria are based on the multihadronic event selection algorithms.6 The cumulant analysis is performed in the following kinematic variables (all calculated with respect to the sphericity axis):

- Rapidity, −2.0 ≤ y ≤ 2.0, defined as y = 0.5 ln[(E + p∥)/(E − p∥)], with E and p∥ the energy (assuming the pion mass) and longitudinal momentum of the particle, respectively.
- The logarithm of the transverse momentum, −2.4 ≤ ln pT ≤ 0.7, used instead of pT itself to reduce the dependence of the cumulants on cell size arising from the nearly exponential shape of the pT distribution.
- The azimuthal angle, 0 ≤ Φ < 2π, calculated with respect to the eigenvector of the momentum tensor having the smallest eigenvalue in the plane perpendicular to the sphericity axis.

The cumulants have been corrected using correction factors, U_q(M), evaluated as earlier6 using the JETSET/PYTHIA MC without BEC.

(a) We used the algorithm BE32 (Ref. 12) in subroutine PYBOEI with parameters MSTJ(51)=2, MSTJ(52)=9, PARJ(92)=1.0, PARJ(93)=0.5 GeV.
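For illustration, the three variables can be computed from a particle's momentum components as follows. This is a simplified sketch: the z-axis here stands in for the sphericity axis, whereas the real analysis works in the event's sphericity frame.

```python
import math

M_PI = 0.13957  # charged pion mass in GeV, assumed for every track

def kinematic_variables(px, py, pz):
    """Return (y, ln pT, phi) with the z-axis playing the role of the
    sphericity axis (simplified with respect to the actual analysis)."""
    e = math.sqrt(px * px + py * py + pz * pz + M_PI * M_PI)
    y = 0.5 * math.log((e + pz) / (e - pz))      # rapidity
    pt = math.sqrt(px * px + py * py)            # transverse momentum
    phi = math.atan2(py, px) % (2.0 * math.pi)   # azimuthal angle in [0, 2*pi)
    return y, math.log(pt), phi

print(kinematic_variables(0.3, 0.4, 1.0))  # arbitrary example momentum in GeV
```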
As systematic uncertainties, we include the following contributions:

- The statistical error of the U_q(M) factors. Statistical errors due to the finite statistics of the MC samples are comparable to those of the data.
- Variation of the track and event selection criteria, as in the earlier study.6 The changes modify the results by no more than a few percent in the smallest cells and do not affect the conclusions.
- The difference between cumulants corrected with the U_q(M) factors derived from MC with and without Bose-Einstein simulation. The correction factors in these two cases differ by at most 5% in the smallest bins.
- The difference between cumulants corrected with U_q(M) factors for all-charge combinations and those calculated for like-sign ones. The correction factors coincide within 1%.
4. Results
4.1. Like-sign and all-charge cumulants

The fully corrected normalized cumulants K_q (q = 2, 3, 4) for all-charge and like-sign particle multiplets, calculated in one-dimensional (y and Φ) (1D), two-dimensional y × Φ (2D) and three-dimensional y × Φ × ln pT (3D) phase space cells, are displayed in Fig. 1 and Fig. 2. From Fig. 1 it is seen that, even in 1D, positive genuine correlations among groups of two, three and four particles are present: K_q > 0. The cumulants increase rapidly with increasing M for relatively large domains but saturate rather quickly. For K2 this behaviour follows from the shape of the second-order correlation function, which is known to be approximately Gaussian1 in the two-particle rapidity difference Δ = δy. The rapid rise and subsequent saturation can be understood from hard gluon jet emission. In contrast to the 1D cumulants, those in 2D and 3D (Fig. 2) continue to increase towards small phase space cells. Moreover, the 2D and 3D cumulants are of similar magnitude at fixed M, indicating that the correlations in pT are small. This can be understood from the importance of multi-jet production in e+e− annihilation, which is most prominently observed in y × Φ space.6 Indeed, the 1D cumulants in pT are found to be close to zero and are therefore not shown. The 1D cumulants of all-charge and of like-sign multiplets (Fig. 1) show
Figure 1. The cumulants K_q in one-dimensional domains. The inner error bars are statistical and the outer ones are statistical and systematic errors added in quadrature.
a similar dependence on M. The latter, however, are significantly smaller, implying that, for all M, correlations among particles of opposite charge are important in one-dimensional phase space projections. This can be expected in general from local charge conservation and in particular from resonance decays. In 2D and 3D (Fig. 2), the like-sign cumulants increase faster and approach the all-charge ones at large M. It can be verified that K2 for unlike-charge pairs remains essentially constant for M larger than about 6. Consequently,
Figure 2. The cumulants K_q in 2- and 3-dimensional domains. The inner error bars are statistical and the outer ones are statistical and systematic errors added in quadrature.
as the cell-size becomes smaller, the rise of all-charge correlations is increasingly driven by that of like-sign multiplets. 4.2. Model comparison
In this section, we compare the cumulant data with predictions of the PYTHIA MC event generator (version 6.158) without and with BE effects. Samples of about 10^6 multihadronic events were generated at the Z0 energy. The model parameters not related to BEC were set at values obtained from
a previous tune to OPAL data on event-shape and single-particle inclusive distributions11 without BE effects. We concentrate on the algorithm BE32, using the BE parameter values PARJ(93) = 0.26 GeV and PARJ(92) = 1.5. These values were determined by independently varying PARJ(93) and PARJ(92) within the ranges 0.2-0.5 GeV and 0.5-2.2, respectively, in steps of 0.05 GeV and 0.1, until satisfactory agreement with the measured cumulants K2 for like-sign pairs was reached.(b) We find that calculations with PARJ(93) in the range 0.2-0.3 GeV, and the corresponding PARJ(92) in the range 1.7-1.3, provide an acceptable description of the second-order like-sign cumulants. The dashed lines in Figs. 1 and 2 show the PYTHIA predictions for like-sign multiplets for the model without BEC. Model and data agree for small M (large phase space domains), indicating that the multiplicity distribution in those regions is well modelled. However, for larger M, the predicted cumulants are too small, the largest deviations occurring in 2D and 3D. The model predicts negative values for K4(Φ), which are not shown. The solid curves in Figs. 1 and 2 show a very significant improvement of the data description when one uses the predictions for like-sign multiplets based on the BE32 algorithm. Now not only two-particle but also higher-order correlations in 1D y-space are well accounted for. In Φ-space (Fig. 1), K3 and especially (the very small) K4 are less well reproduced. Figure 2 also shows that the predicted 2D and 3D cumulants agree well with the data. Whereas the BE algorithm used implements pair-wise BEC only, it is noteworthy that the procedure also induces like-sign higher-order correlations of approximately correct magnitude. This seems to indicate that the high-order cumulants are, to a large extent, determined by the second-order one (see further Sect. 4.3).
It is not clear, however, whether the agreement is accidental or implies that the physics of n-boson (n > 2) BE effects is indeed correctly simulated. We find that the like-sign BE-type correlations influence the correlations in all-charge multiplets (not shown): the large discrepancies between data and MC without BE, already discussed, almost disappear, especially in the 2D and 3D cases, when the BE effects are included using the BE32 algorithm.
(b) Non-BEC related model parameters were set at the following OPAL tuned values: PARJ(21)=0.4 GeV, PARJ(42)=0.52 GeV^-2, PARJ(81)=0.25 GeV, PARJ(82)=1.9 GeV.
4.3. The Ochs-Wosiek relation for cumulants
The success of the PYTHIA model with BEC in predicting both the magnitude and the domain-size dependence of the cumulants has led us to consider the inter-dependence of these quantities in more detail. In Fig. 3 we plot K3 and K4 in 2D and 3D, for each value of M, as a function of K2. We observe that the 2D and 3D data for all-charge as well as for like-sign multiplets follow approximately, within errors, the same functional dependence. The solid lines are simple fits to the function ln K_q = a_q + r_q ln K2. The fitted slope values are r3 = 2.3 and r4 = 3.8, showing that the slope r_q increases with the order of the cumulant. Figure 3 suggests that the cumulants of different orders obey simple so-called "hierarchical" relations, analogous to the Ochs-Wosiek relation first established13 for factorial moments. Interestingly, all-charge as well as like-sign multiplets are seen to follow, within errors, the same functional dependence. Hierarchical relations of similar type are common in various
Figure 3. The Ochs-Wosiek type plot in 2D and 3D domains.
branches of many-body physics,4 but a satisfactory explanation within particle production phenomenology or QCD remains to be found. Simple relations among the cumulants of different orders exist for certain probability distributions. For example, for the Negative Binomial (NB) distribution, one of the most successful parametrisations of hadron spectra in restricted bins, one has K_q = (q − 1)! K2^(q−1) (q = 3, 4, ...). This shows that the cumulants of order q > 2 are here solely determined by K2. This relation is shown in Fig. 3 (dashed line). Comparing to the data, we conclude that the multiplicity distribution of all charged particles, as well as that of like-sign particles, deviates strongly from an NB one in small phase space domains. Recently, this and other much studied multiplicity distributions have been discussed in the present context.14 The Ochs-Wosiek type of relation exhibited by the data in Fig. 3 may explain why the BE algorithms in PYTHIA generate higher-order correlations of (approximately) the correct magnitude. Assuming that the hadronization dynamics is such that higher-order correlation functions can be constructed from second-order correlations only, methods that are designed to ensure agreement with the two-particle correlation function could then automatically generate higher-order ones of the correct magnitude.
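The NB hierarchy K_q = (q − 1)! K2^(q−1) implies exact Ochs-Wosiek slopes r_q = q − 1, which a few lines of arithmetic confirm (the two K2 values below are hypothetical, chosen only to evaluate the slope):

```python
import math

def nb_cumulant(q, k2):
    """Normalized cumulant of order q for a Negative Binomial distribution:
    K_q = (q-1)! * K2^(q-1)."""
    return math.factorial(q - 1) * k2 ** (q - 1)

# Slope of ln K3 versus ln K2 between two hypothetical K2 values:
k2_a, k2_b = 0.1, 0.4
slope_r3 = (math.log(nb_cumulant(3, k2_b)) - math.log(nb_cumulant(3, k2_a))) / (
    math.log(k2_b) - math.log(k2_a))
print(slope_r3)  # q - 1 = 2 for the NB relation, below the fitted r3 = 2.3
```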
5. Summary and conclusions

Here, we have presented a comparative study of like-sign and all-charge genuine correlations between two and more hadrons produced in e+e− annihilation at the Z0 energy. The high-statistics data on hadronic Z0 decays recorded with the OPAL detector in 1991 to 1995 were used to measure normalized factorial cumulants as a function of the domain size, Δ, in D-dimensional domains (D = 1, 2, 3) in rapidity, azimuthal angle and (the logarithm of) transverse momentum, defined in the event sphericity frame. Both all-charge and like-sign multiplets show strong positive genuine correlations up to fourth order. They are stronger in rapidity than in azimuthal angle. One-dimensional cumulants initially increase rapidly with decreasing Δ but saturate rather quickly. In contrast, the 2D and especially the 3D cumulants continue to increase and exhibit intermittency-like behaviour. Comparing all-charge and like-sign multiplets in 2D and 3D phase space cells, we observe that the rise of the cumulants for all-charge multiplets is increasingly driven by that of like-sign multiplets as Δ becomes smaller. This points to the likely influence of Bose-Einstein correlations. The 2D and 3D cumulants K3 and K4, considered as a function of K2,
follow approximately a linear relation of the Ochs-Wosiek type, ln K_q ∝ ln K2, independent of D and the same for all-charge and for like-sign particle groups. This suggests that, for a given domain Δ, correlation functions of different orders are not independent but determined, to a large extent, by two-particle correlations. The data have been compared with predictions from the Monte Carlo event generator PYTHIA. The model describes well dynamical fluctuations in large phase space domains, e.g. those caused by jet production, and shorter-range correlations attributable to resonance decays. However, the results of the present analysis, together with earlier less precise data, show that these ingredients alone are insufficient to explain the magnitude and domain-size dependence of the factorial cumulants. To achieve a more satisfactory data description, short-range correlations of the Bose-Einstein type between identical particles need to be included.
References
1. E.A. De Wolf, I.M. Dremin and W. Kittel, Phys. Rep. 270, 1 (1996).
2. M.G. Kendall and A. Stuart, The Advanced Theory of Statistics, Vol. 1 (C. Griffin and Co., London, 1969); A.H. Mueller, Phys. Rev. D 4, 150 (1971).
3. P. Carruthers and I. Sarcevic, Phys. Rev. Lett. 63, 1562 (1989); E.A. De Wolf, Acta Phys. Pol. B 21, 611 (1990).
4. P. Bozek, M. Ploszajczak and R. Botet, Phys. Rep. 252, 101 (1995).
5. For a recent review, see e.g. W. Kittel, Acta Phys. Polon. B 32, 3927 (2001).
6. OPAL Collab., G. Abbiendi et al., Eur. Phys. J. C 11, 239 (1999).
7. K. Kadija and P. Seyboth, Z. Phys. C 61, 465 (1994).
8. OPAL Collab., P.P. Allport et al., Nucl. Instr. Meth. A 346, 476 (1994), and refs. therein.
9. T. Sjostrand, Comp. Phys. Comm. 82, 74 (1994); T. Sjostrand et al., Comp. Phys. Comm. 135, 238 (2001).
10. J. Allison et al., Nucl. Instr. Meth. A 317, 47 (1992).
11. OPAL Collab., G. Alexander et al., Z. Phys. C 69, 543 (1996).
12. L. Lonnblad and T. Sjostrand, Eur. Phys. J. C 2, 165 (1998).
13. W. Ochs and J. Wosiek, Phys. Lett. B 214, 617 (1988); W. Ochs, Z. Phys. C 50, 339 (1991).
14. E.K.G. Sarkisyan, Phys. Lett. B 477, 1 (2000).
MEASUREMENT OF BOSE-EINSTEIN CORRELATIONS IN e+e− → W+W− EVENTS AT LEP

J.A. VAN DALEN, W. KITTEL, W.J. METZGER
PRESENTED BY S. TODOROVA-NOVA
HEFIN, University of Nijmegen/NIKHEF, Toernooiveld 1, 6525 ED Nijmegen, NL
FOR THE L3 COLLABORATION

Bose-Einstein correlations in W-pair production at LEP are investigated in a data sample of 629 pb^-1 collected by the L3 detector at √s = 189-209 GeV. No evidence is found for Bose-Einstein correlations between hadrons coming from different W's in the same event.
Introduction

In hadronic Z decay, Bose-Einstein correlations (BEC) are observed as an enhanced production of identical bosons at small four-momentum difference. [1,2] BEC are also expected within hadronic W decay (intra-W BEC). At LEP energies, in fully-hadronic W+W− events (qq̄qq̄) the W decay products overlap in space-time. Therefore, it is also natural to expect [3,4] BEC between identical bosons originating from different W's (inter-W BEC). A comparison of BEC in fully-hadronic W+W− events with those in semi-hadronic W+W− events (qq̄ℓν) serves as a probe to study inter-W BEC. Together with colour reconnection, [5,6] inter-W BEC form a potential bias in the determination of the W mass at LEP.
Data and Monte Carlo

The data used in this analysis were collected by the L3 detector [7] at √s = 189-209 GeV and correspond to a total integrated luminosity of 629 pb^-1. Fully-hadronic and semi-hadronic W+W− events are selected with criteria similar to those described in [8]. An additional requirement for the fully-hadronic channel is a cut on the neural network output [8] to further separate the signal from the dominant e+e− → qq̄(γ) background. In total, about 3,800 semi-hadronic and 5,100 fully-hadronic events are selected. The event generator KORALW [9] with the BEC algorithm BE32 [4] is used to simulate the signal process. The values of the BE32 parameters are found by tuning the Monte Carlo (MC) to Z-decay data depleted in b-quark events. Both the BEC and the fragmentation parameters are tuned simultaneously. Systematic studies are made using an alternative set of parameter
values, obtained by tuning to Z-decay data of all flavours and used in [10]. The background processes e+e− → qq̄(γ), e+e− → ZZ and e+e− → Ze+e− are generated using PYTHIA. [11] For the qq̄(γ) channel, KK2f [12] is also used. BEC are included in both programs. The generated events are passed through the L3 detector simulation program, [13] reconstructed and subjected to the same selection criteria as the data. The selection efficiencies of the channels qq̄eν, qq̄μν, qq̄τν and qq̄qq̄ are found to be 83%, 75%, 50% and 86%, respectively. The purities of these channels are around 95%, 95%, 85% and 80%, respectively, varying by a few percent between the different energy bins. The selection efficiency of fully-hadronic events changes by less than 0.5% when BEC (inter-W, or both intra-W and inter-W) are excluded. The charged pions used for the BEC study are detected as tracks in the central tracker, using selection criteria similar to those of [10]. About 82% of the tracks selected in MC samples are pions. This selection yields about one million pairs of like-sign particles in the fully-hadronic channel and about 200,000 pairs in the semi-hadronic channel.

Analysis Method
BEC can be studied in terms of the two-particle correlation function

R2(p1, p2) = ρ2(p1, p2) / ρ0(p1, p2), (1)

where ρ2(p1, p2) is the two-particle number density of particles with four-momenta p1 and p2, and ρ0(p1, p2) the same density in the absence of BEC. The largest BEC occur at small absolute four-momentum difference Q = √(−(p1 − p2)²), and R2 is parametrized in this one-dimensional distance measure by defining

ρ2(Q) = (1/Nev) dnpairs/dQ, (2)

where Nev is the number of selected events and npairs the number of like-sign track pairs in the Nev events. If there is no inter-W interference, we can write [14]

ρ2WW(p1, p2) = 2 ρ2W(p1, p2) + 2 ρ1W(p1) ρ1W(p2), (3)
where the assumption is made that the densities for the W+ and W− bosons are the same. The terms ρ2WW and ρ2W of Eq. (3) are measured in the fully-hadronic and the semi-hadronic events, respectively. To measure the
product of the single-particle densities, ρ1W(p1) ρ1W(p2), a two-particle density ρ2mix(p1, p2) is used. It is obtained by pairing particles originating from two different semi-hadronic events. By construction, particles in these pairs are uncorrelated. The event-mixing procedure is explained in detail in [10] and [15]. The hypothesis that the two W's decay independently can be directly tested using Eq. (3). In particular, the following test statistics are defined as the difference and the ratio of the left- and right-hand sides of Eq. (3) in terms of Q:

Δρ(Q) = ρ2WW(Q) − 2 ρ2W(Q) − 2 ρ2mix(Q) (4)

and

D(Q) = ρ2WW(Q) / [2 ρ2W(Q) + 2 ρ2mix(Q)]. (5)
This method gives access to inter-W correlations directly from the data, with no need of MC. [14] In the absence of inter-W correlations, Δρ = 0 and D = 1. To study inter-W BEC, deviations from these values are examined at small values of Q for like-sign particles. The influence of other correlations or of potential bias on these quantities is studied by analysing unlike-sign pairs and MC events. The event-mixing procedure could introduce artificial distortions, or not fully account for some correlations other than BEC or for some detector effects, causing a deviation of Δρ from zero or of D from unity for data as well as for a MC without inter-W BEC. These possible effects are reduced by using the double ratio
D′(Q) = D(Q)data / D(Q)MC,nointer, (6)

where D(Q)MC,nointer is derived from a MC sample without inter-W BEC.
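The test statistics of Eqs. (4)-(6) are straightforward bin-by-bin operations on the measured densities. A minimal sketch with made-up bin contents, constructed to satisfy Eq. (3) (i.e. no inter-W correlations, so Δρ = 0 and D = 1 by construction):

```python
def delta_rho(rho_ww, rho_w, rho_mix):
    """Eq. (4): difference of the two sides of Eq. (3), bin by bin in Q."""
    return [ww - 2.0 * w - 2.0 * mix for ww, w, mix in zip(rho_ww, rho_w, rho_mix)]

def d_ratio(rho_ww, rho_w, rho_mix):
    """Eq. (5): ratio of the two sides of Eq. (3)."""
    return [ww / (2.0 * w + 2.0 * mix) for ww, w, mix in zip(rho_ww, rho_w, rho_mix)]

def d_prime(d_data, d_mc_nointer):
    """Eq. (6): double ratio, normalizing to a MC without inter-W BEC."""
    return [a / b for a, b in zip(d_data, d_mc_nointer)]

# Hypothetical densities in three Q bins, built to satisfy Eq. (3) exactly:
rho_w = [3.0, 2.0, 1.0]
rho_mix = [1.0, 1.0, 1.0]
rho_ww = [2.0 * w + 2.0 * mix for w, mix in zip(rho_w, rho_mix)]
print(delta_rho(rho_ww, rho_w, rho_mix))  # [0.0, 0.0, 0.0]
print(d_ratio(rho_ww, rho_w, rho_mix))    # [1.0, 1.0, 1.0]
```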
Results

To obtain the density function ρ2, Eq. (2), for the W+W− events, the background is subtracted by replacing ρ2(Q) by

ρ2(Q) → [ρ2(Q) − (1/Nev) dnbg/dQ] / P, (7)

where P is the purity of the selection and nbg is the number of pairs of tracks corresponding to the (1 − P)Nev background events. This density is further corrected for detector resolution, acceptance, efficiency and for particle misidentification with a multiplicative factor derived from MC. Since no hadrons are identified, this factor is the two-pion density found from MC events at generator level divided by the two-particle density found using all particles after full detector simulation, reconstruction and selection. For this detector correction, the no inter-W scenario with the BE32 algorithm is used.
Figure 1. Distributions for uncorrected data at √s = 189-209 GeV of (a) Δρ(±, ±) and (b) Δρ(+, −). Also shown are the MC predictions of KORALW with and without inter-W BEC.
Figure 1 shows the distribution of Δρ, Eq. (4), for like-sign, (±, ±), and for unlike-sign, (+, −), particle pairs. Figure 2 shows the distributions of D and D′, Eqs. (5) and (6), for like-sign and unlike-sign pairs. For the double ratio D′ we use the no inter-W scenario of KORALW as the reference sample. The distributions of Δρ, D and D′ are not corrected for detector effects, but the background is estimated from MC and subtracted, according to Eq. (7), from ρ2W and ρ2WW. Also shown in Figs. 1 and 2 are the predictions of KORALW
Figure 2. Distributions for uncorrected data at √s = 189-209 GeV of (a) D(±,±), (b) D(+,-), (c) D'(±,±) and (d) D'(+,-). Also shown are the MC predictions of KORALW with and without inter-W BEC.
after full detector simulation, reconstruction and selection. Both the inter-W and no-inter-W scenarios are shown. The inter-W scenario shows an enhancement at small values of Q in the Δρ, D and D' distributions for like-sign pairs. We also observe a small enhancement for unlike-sign pairs due to the MC implementation of BEC, which shifts the vector momentum of particles, affecting both the like-sign and the unlike-sign particle spectra. The no-inter-W scenario describes the Δρ(±,±), D(±,±) and D'(±,±) distributions, while the inter-W scenario is disfavoured.
Table 1. Contributions to the systematic uncertainty of J(±,±).
Source                                  Contribution
Track selection                         0.084
Event selection                         0.068
Background contribution                 0.055
Mixing procedure                        0.065
Neural network cut                      0.038
Energy calibration                      0.024
Track misassignment in qqτν channel     0.038
For quantitative comparisons, the integral

J(±,±) = ∫_0^{Qmax} Δρ(±,±) dQ   (8)

is computed. Also, the D'(±,±) distribution is fitted from Q = 0 to 1.4 GeV, using the full covariance matrix, with the parametrization
D'(Q) = (1 + δQ)(1 + Λ exp(-k²Q²)) ,   (9)
where δ, Λ and k are the fit parameters. Both J(±,±) and Λ measure the strength of inter-W BEC. The systematic uncertainties on J(±,±) and on Λ are listed in Tables 1 and 2, respectively. In addition to the track and event selections, the amount of background is varied, and different MCs, using both sets of MC parameter values, are used to generate the background events. Furthermore, contributions to the systematic uncertainty on Λ are obtained by varying the choice of MC for the reference sample in D', using PYTHIA and KORALW, both with no BEC at all and with only intra-W BEC. MCs without BEC are used to estimate the effect of residual intra-W BEC. The effect of various models for colour reconnection^a is included. Changes in the fit range (±400 MeV), in the bin size (from 40 to 80 MeV) and in the parametrization of Eq. (9) (removing the factor (1 + δQ) from the fit) also contribute to the systematic uncertainty on Λ. In the mixing procedure, a semi-hadronic event is allowed to be combined with all possible other semi-hadronic events. To be sure that this does not
^a The so-called SKI (with reconnection probability of about 30%), SKII and SKII' [6] models, as implemented in PYTHIA, are used.
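The fit of the parametrization of Eq. (9) can be reproduced with standard least-squares tools. This sketch fits synthetic points with scipy.optimize.curve_fit, as an illustrative stand-in for the full-covariance-matrix fit used in the analysis; the true parameter values below are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def dprime(Q, delta, Lam, k):
    # Eq. (9): D'(Q) = (1 + delta*Q) * (1 + Lambda * exp(-k^2 * Q^2))
    return (1.0 + delta * Q) * (1.0 + Lam * np.exp(-(k ** 2) * (Q ** 2)))

# synthetic points over the fit range 0-1.4 GeV (35 bins of 40 MeV)
Q = np.linspace(0.02, 1.4, 35)
truth = dprime(Q, 0.01, 0.10, 2.0)
popt, pcov = curve_fit(dprime, Q, truth, p0=(0.0, 0.05, 1.0))
print(popt)  # recovered (delta, Lambda, k)
```

In the real analysis the bin-to-bin correlations enter through the covariance matrix; curve_fit accepts this via its sigma argument.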
Table 2. Contributions to the systematic uncertainty of the Λ parameter.
Source                                  Contribution
Track selection                         0.0029
Event selection                         0.0049
Background contribution                 0.0042
Alternative MC as a reference           0.0060
Colour reconnection                     0.0026
Fit range                               0.0018
Rebinning                               0.0020
Fit parametrization                     0.0017
Mixing procedure                        0.0044
Neural network cut                      0.0033
Energy calibration                      0.0017
Track misassignment in qqτν channel     0.0022
Total                                   0.012
introduce a bias, the analysis is repeated for a mixed sample in which every semi-hadronic event is used at most once. The influence of the mixing procedure is also studied by combining not only oppositely charged W's, but also like-sign W's. The influence of an extra momentum cut [10] used in the event mixing is also included as a systematic effect. The effect of these three changes in the mixing procedure is also given in Tables 1 and 2. Moreover, the analysis is repeated removing the cut on the neural-network output for the mixed events. Furthermore, the effect of uncertainties in the energy calibration of the calorimeters is studied. Finally, the influence of incorrect assignment of tracks to the τ or qq̄ systems in the qqτν channel is investigated. The value of J(±,±) is computed using the full covariance matrix, taking Qmax = 0.68 GeV, the value where the two MC scenarios have converged to less than one standard deviation. The results for each centre-of-mass energy, displayed in Figure 3a, are consistent with each other. Combining all J(±,±) values results in
J(±,±) = 0.03 ± 0.33 ± 0.15 ,

where the first uncertainty is statistical and the second systematic. Using KORALW with the inter-W scenario gives J(±,±) = 1.38 ± 0.10, where the uncertainty is statistical only. In Figure 3a this value is shown as a vertical band. It disagrees with the value of the data by 3.6 standard deviations. For unlike-sign pairs we obtain J(+,-) = 0.33 ± 0.36 ± 0.16, consistent with zero.
Figure 3. Values of (a) the integral J(±,±) and (b) the Λ parameter, at different centre-of-mass energies, and their average. The uncertainties are statistical only. The wide bands show the average value of the data including the systematic uncertainties. Also shown are the MC predictions of KORALW with inter-W BEC.
The value of the fit parameter Λ, Eq. (9), is shown in Figure 3b for each energy bin. Combining all Λ values results in
Λ = 0.008 ± 0.018 ± 0.012 ,

where the first uncertainty is statistical and the second systematic. The value of k is found to be 0.4 ± 0.4 ± 0.3 fm, and the correlation coefficient between Λ and k is 0.45. A similar fit is performed for the KORALW MC sample with inter-W BEC, resulting in Λ = 0.098 ± 0.008, where the uncertainty is statistical only. In Figure 3b this value is shown as a vertical band. It disagrees with the value of the data by 3.8 standard deviations. Using the alternative set of MC parameters results in J(±,±) = 1.78 ± 0.10 and Λ = 0.126 ± 0.008, where the uncertainties are statistical only. To summarize, an excess at small values of Q in the distributions of Δρ(±,±), D(±,±) and D'(±,±) is expected from inter-W BEC, but none is seen. These distributions agree well with KORALW using BE32 without inter-W BEC, but not when inter-W BEC are included. We thus find no evidence for BEC between identical pions originating from different W's.
References

1. DELPHI Collab., P. Abreu et al., Phys. Lett. B 286, 201 (1992); DELPHI Collab., P. Abreu et al., Z. Phys. C 63, 17 (1994); ALEPH Collab., D. Decamp et al., Z. Phys. C 54, 75 (1992); OPAL Collab., G. Alexander et al., Z. Phys. C 72, 389 (1996); OPAL Collab., G. Abbiendi et al., Eur. Phys. J. C 16, 423 (2000); DELPHI Collab., P. Abreu et al., Phys. Lett. B 471, 460 (2000); L3 Collab., P. Achard et al., Phys. Lett. B 524, 55 (2002).
2. L3 Collab., M. Acciarri et al., Phys. Lett. B 458, 517 (1999).
3. A. Ballestrero et al. in "Physics at LEP2", eds. G. Altarelli et al., CERN 96-01 (1996) 141; L. Lönnblad, T. Sjöstrand, Phys. Lett. B 351, 293 (1995) and Eur. Phys. J. C 2, 165 (1998); V. Kartvelishvili, R. Kvatadze, R. Møller, Phys. Lett. B 408, 331 (1997); S. Jadach, K. Zalewski, Acta Phys. Pol. B 28, 1363 (1997); K. Fialkowski, R. Wit, Acta Phys. Pol. B 28, 2039 (1997); K. Fialkowski, R. Wit, J. Wosiek, Phys. Rev. D 58, 094013 (1998); S. Todorova-Nová, J. Rameš, Strasbourg preprint IReS 97-29 (1997).
4. L. Lönnblad, T. Sjöstrand, Eur. Phys. J. C 2, 165 (1998).
5. G. Gustafson, U. Pettersson, P. Zerwas, Phys. Lett. B 209, 90 (1988); T. Sjöstrand, V.A. Khoze, Phys. Rev. Lett. 72, 28 (1994); V.A. Khoze, T. Sjöstrand, Eur. Phys. J. C 6, 271 (1999); G. Gustafson, J. Häkkinen, Z. Phys. C 64, 659 (1994); C. Friberg, G. Gustafson, J. Häkkinen, Nucl. Phys. B 490, 289 (1997); L. Lönnblad, Z. Phys. C 70, 107 (1996); B.R. Webber, J. Phys. G 24, 287 (1998).
6. T. Sjöstrand, V.A. Khoze, Z. Phys. C 62, 281 (1994).
7. L3 Collab., B. Adeva et al., Nucl. Instr. Meth. A 289, 35 (1990); M. Chemarin et al., Nucl. Instr. Meth. A 349, 345 (1994); M. Acciarri et al., Nucl. Instr. Meth. A 351, 300 (1994); G. Basti et al., Nucl. Instr. Meth. A 374, 293 (1996); I.C. Brock et al., Nucl. Instr. Meth. A 381, 236 (1996); A. Adam et al., Nucl. Instr. Meth. A 383, 342 (1996).
8. L3 Collab., M. Acciarri et al., Phys. Lett. B 496, 19 (2000).
9. KORALW version 1.42 is used; S. Jadach et al., Comp. Phys. Comm. 119, 272 (1999).
10. L3 Collab., M. Acciarri et al., Phys. Lett. B 493, 233 (2000).
11. PYTHIA version 6.156 is used; T. Sjöstrand et al., Comp. Phys. Comm. 135, 238 (2001).
12. KK2f version 4.14 is used; S. Jadach et al., Comp. Phys. Comm. 130, 260 (2000).
13. The L3 detector simulation is based on GEANT3, see R. Brun et al., CERN report CERN DD/EE/84-1 (1984), revised 1987, and uses GHEISHA to simulate hadronic interactions, see H. Fesefeldt, RWTH Aachen report PITHA 85/02 (1985).
14. S.V. Chekanov, E.A. De Wolf, W. Kittel, Eur. Phys. J. C 6, 403 (1999).
15. J.A. van Dalen, Ph.D. Thesis, University of Nijmegen (2002).
16. F.Y. Edgeworth, Trans. Cambridge Phil. Soc. 20, 36 (1905). See also, e.g., Harald Cramér, "Mathematical Methods of Statistics", Princeton Univ. Press, 1946; T. Csörgő, S. Hegyi, Phys. Lett. B 489, 15 (2000).
17. L3 Collab., P. Achard et al., "Measurement of genuine three-particle Bose-Einstein correlations in hadronic Z decay", Phys. Lett., in press, hep-ex/0206051.
ON THE SCALE OF VISIBLE JETS IN HIGH ENERGY ELECTRON-POSITRON COLLISIONS

LIU LIANSHOU, CHEN GANG AND FU JINGHUA
Institute of Particle Physics, Huazhong Normal University, Wuhan 430079, China
PRESENTED BY LIU LIANSHOU
E-mail: [email protected]

A study of the dynamical fluctuation property of jets is carried out using the Monte Carlo method. The results suggest that the anisotropy of dynamical fluctuations in the hadronic system inside jets changes abruptly with the variation of the cut parameter ycut. A transition point exists, where these fluctuations behave like those in soft hadronic collisions, i.e., are circular in the transverse plane with respect to dynamical fluctuations.
The presently most promising theory of the strong interaction, Quantum Chromodynamics (QCD), has the special property of both asymptotic freedom and colour confinement. For this reason, in any process, even though the energy scale Q² is large enough for perturbative QCD (pQCD) to be applicable, there must be a non-perturbative hadronization phase before the final-state particles can be observed. Therefore, the transition or interplay between hard and soft processes is a very important problem. An ideal "laboratory" for studying this problem is hadron production in moderate-energy e+e- collisions, e.g. at c.m. energies in the range [10, 100] GeV. The initial condition in these processes is simple and clear: it can safely be considered as a quark-antiquark pair moving back to back with high momenta. On the contrary, in other processes, e.g. in hadron-hadron collisions, the initial condition is complicated by the problem of hadron structure. Theoretically, the transition between perturbative and non-perturbative QCD is at a scale Q0 ~ 1-2 GeV. Experimentally, the transition between hard and soft processes is determined by the identification of jets through some jet-finding procedure, e.g. the Durham algorithm. In these procedures there is a parameter, ycut, which, in the case of the Durham algorithm, is essentially the relative transverse momentum kt squared scaled by s: ycut = kt²/s. From the experimental point of view, kt can be taken as the transition scale between hard and soft. Its value depends on the definition of "jet". Historically, the discovery in 1975 of a two-jet structure in e+e- annihilation at c.m. energies of about 6 GeV has been taken as an experimental confirmation of the parton model,2 and the observation in 1979 of a third jet in e+e- collisions at 17-30 GeV has been recognised as the first experimental evidence of the gluon.3 These jets, being directly observable in experiments as "jets of particles", will be called "visible jets". Our aim is to find the scale corresponding to these visible jets and to discuss its meaning. For this purpose, let us recall that the qualitative difference between the typically soft process (moderate-energy hadron-hadron collisions) and the typically hard process (high-energy e+e- collisions) can be observed most clearly in the properties of the dynamical fluctuations therein.4 The latter can be characterized, as usual, by the anomalous scaling of normalized factorial moments (NFM):5
F_q(M) = (1/M) Σ_{m=1}^{M} <n_m(n_m - 1) ··· (n_m - q + 1)> / <n_m>^q ,   (1)

where a region Δ in 1-, 2- or 3-dimensional phase space is divided into M cells, n_m is the multiplicity in the m-th cell, and <···> denotes the vertical average over the event sample. Note that when the fluctuations exist in higher-dimensional (2-D or 3-D) space, the projection effect will cause the second-order 1-D NFM to go to saturation according to the rule
F_2^(a)(M_a) = A_a - B_a M_a^{-γ_a} ,   (2)

where a = 1, 2, 3 denotes the different 1-D variables. The parameter γ_a describes the rate of approach to saturation of the NFM in direction a and is the most important characteristic of the higher-dimensional dynamical fluctuations. If γ_a = γ_b, the fluctuations are isotropic in the (a,b) plane. If γ_a ≠ γ_b, the fluctuations are anisotropic in this plane. The degree of anisotropy is characterized by the Hurst exponent H_ab, which can be obtained from the values of γ_a and γ_b as H_ab = (1 + γ_b)/(1 + γ_a). The dynamical fluctuations are isotropic when H_ab = 1, and anisotropic when H_ab ≠ 1. For the 250 GeV/c π(K)p collisions from NA22, the Hurst exponents are found to be H_ptφ = 0.99 ± 0.01, H_ypt = 0.48 ± 0.06, H_yφ = 0.47 ± 0.06, which means that the dynamical fluctuations in these moderate-energy hadron-hadron collisions are isotropic in the transverse plane and anisotropic in the longitudinal-transverse planes. This is what should be expected,10 because
^a In order to eliminate the influence of momentum conservation, the first few points (M = 1, 2 or 3) should be omitted when fitting the data to Eq. (2).
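A minimal numerical sketch of the second-order NFM, Eq. (1), on a toy event sample; the normalization written in the code is one common convention, and the flat sample is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def f2(samples, M):
    # Second-order NFM, Eq. (1) with q = 2, in one common normalization:
    # F2(M) = M * <sum_m n_m(n_m - 1)> / <sum_m n_m>^2 ,
    # where <...> is the vertical average over the event sample.
    counts = np.array([np.histogram(ev, bins=M, range=(0.0, 1.0))[0] for ev in samples])
    num = np.mean(np.sum(counts * (counts - 1), axis=1))
    den = np.mean(np.sum(counts, axis=1)) ** 2
    return M * num / den

# toy sample: 20 particles per event, uniformly distributed (no dynamical fluctuations)
events = [rng.uniform(0.0, 1.0, 20) for _ in range(2000)]
print(round(f2(events, 10), 2))  # close to 1 for a purely statistical sample
```

For a sample with genuine dynamical fluctuations, F2(M) rises with M; fitting its 1-D projections with Eq. (2) yields the γ_a used in the Hurst-exponent analysis.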
Fig. 1. The variation of the parameter γ with ycut (kt).

there are almost no hard collisions at this energy and the direction of motion of the incident hadrons (the longitudinal direction) should be privileged. In high energy e+e- collisions, the longitudinal direction is chosen along the thrust axis, which is the direction of motion of the primary quark-antiquark pair. Since this pair of quark and antiquark moves back to back with very high momenta, the magnitude of the average momentum of final-state hadrons is also anisotropic due to momentum conservation. However, the dynamical fluctuations in this case come from the QCD branching of partons,11 which is isotropic in nature. Therefore, in this case the dynamical fluctuations should be isotropic in 3-D phase space. A Monte Carlo study for e+e- collisions at 91.2 GeV confirms this assertion. Also the presently available experimental data on e+e- collisions at 91.2 GeV show isotropic dynamical fluctuations in 3-D.12 Now we apply this technique to the "2-jet" sub-sample of e+e- collisions obtained from a certain, e.g. Durham, jet algorithm with some definite value of ycut. Doing the analysis for different values of ycut, the dependence of the dynamical-fluctuation property of the "2-jet" sample on the value of ycut can be investigated. Two event samples are constructed from the Jetset7.4 and Herwig5.9 generators, each consisting of 400 000 e+e- collision events at c.m. energy 91.2 GeV. The variation of the γ's of the 2-jet sample with ycut (kt) is shown in Figs. 1(a) and (b), respectively. It shows an interesting pattern. When ycut (kt) is very small, the three γ's are separate. As ycut (kt) increases, γ_pt and γ_φ approach each other and cross over sharply at a certain point. After that, the three γ's approach a common value. The latter is due to the fact that when ycut is very large, the "2-jet" sample coincides with the full sample, and the dynamical fluctuations in the latter are isotropic.
We will call the point where γ_pt crosses γ_φ the transition point. It has
the unique property γ_pt = γ_φ ≠ γ_y, i.e., the jets at this point are circular in the transverse plane with respect to dynamical fluctuations. These jets will, therefore, be called circular jets. The above-mentioned results are qualitatively the same for the two event generators, but the ycut (kt) values at the transition point are somewhat different. The cut parameter ycut, the values of γ, the corresponding Hurst exponents H and the relative transverse momentum kt at the transition point are listed in Table I.

Table I. γ, H, ycut and kt (GeV/c) at the transition point

            ycut              γ_y            γ_pt           γ_φ            H_ypt        H_yφ         H_ptφ        kt (GeV/c)
Jetset 7.4  0.0048 ± 0.0007   1.074 ± 0.037  0.514 ± 0.080  0.461 ± 0.021  0.73 ± 0.06  0.70 ± 0.06  0.96 ± 0.10  6.32 ± 0.03
Herwig 5.9  0.0022 ± 0.0008   1.237 ± 0.066  0.633 ± 0.064  0.637 ± 0.051  0.73 ± 0.05  0.73 ± 0.05  1.00 ± 0.07  4.28 ± 0.02
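The Durham relation between the cut parameter and the transverse-momentum scale, kt = √ycut · √s, can be checked directly against the Table I entries (√s = 91.2 GeV); a one-line sketch:

```python
import math

def kt_from_ycut(ycut, sqrt_s):
    # Durham algorithm: ycut is essentially kt^2 / s, so kt = sqrt(ycut) * sqrt(s)
    return math.sqrt(ycut) * sqrt_s

print(round(kt_from_ycut(0.0048, 91.2), 2))  # Jetset 7.4 transition point: 6.32 GeV/c
print(round(kt_from_ycut(0.0022, 91.2), 2))  # Herwig 5.9 transition point: 4.28 GeV/c
```

Both values reproduce the kt column of Table I.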
It is natural to ask: is there any relation between the circular jets determined by the condition γ_pt = γ_φ ≠ γ_y and the visible jets directly observable in experiments as "jets of particles"? In order to answer this question, we plot in Fig. 2 the ratios R2 and R3 of "2-jet" and "3-jet" events as functions of the relative transverse momentum kt at different c.m. energies. Let us consider the point where a third jet starts to appear. Historically, a third jet was first observed in e+e- collisions at c.m. energy 17 GeV. It can be seen from Fig. 2 that, for √s = 17 GeV, R3 starts to appear at around kt = 8-10 GeV/c, cf. the dashed vertical lines in Fig. 2. This value of kt is consistent with the kt value (4.3-6.3 GeV/c) of a circular jet within a factor of 2, cf. Table I. Thus, we see that the circular jet, defined as a jet circular in the transverse plane with respect to dynamical fluctuations, and the visible jet, defined as a jet directly observable in experiments as a "jet of particles", have about the same scale, kt ~ 5-10 GeV/c.
Table II. ycut and kt at the transition point for √s = 50 and 30 GeV

√s (GeV)   ycut              kt (GeV/c)
50         0.0186 ± 0.0012   6.82 ± 0.03
30         0.059 ± 0.002     7.28 ± 0.03
In order to check how sensitively the magnitude of this scale depends on the c.m. energy of e+e- collisions, a similar analysis is carried out for √s = 50 and 30 GeV using Jetset7.4, cf. Figs. 1(c,d). It can be seen that, although
Fig. 2. The ratios R3 and R2 of 3- and 2-jet events as functions of kt at different c.m. energies: (a) from Jetset7.4; (b) from Herwig5.9.

the shape of γ_i versus ycut (kt) (i = y, pt, φ) changes considerably with energy, the qualitative trend is the same for these energies. In particular, the transition point where γ_pt crosses γ_φ exists in all cases. The values of ycut and kt at the transition point are listed in Table II. It can be seen that the kt values are also in the range 5-10 GeV/c. This shows that the scale kt ~ 5-10 GeV/c for the circular jet is universal, at least for moderate-energy e+e- collisions. This scale is to be compared with the scale kt ~ 1-2 GeV/c, which is the scale for the transition between the perturbative and non-perturbative domains. It is interesting also to see what happens in the results of the jet algorithm at this scale. It can be seen from Fig. 2a (Jetset7.4) that, at this scale (kt ~ 1-2 GeV/c) the ratio R2 of "2-jet" events tends to vanish almost independently of energy, provided the latter is not too low. This can be explained as follows. Consider, for example, an event with only two hard partons, having no perturbative branching at all. Even in this case, the two partons will still undergo non-perturbative hadronization to produce final-state particles. If kt is chosen to be less than 1-2 GeV/c, then the non-perturbative hadronization with small transverse momentum will also be counted as the production of new "jets", and this should-be 2-jet event will be taken as a "multi-jet" event too. This means that, when kt < 1-2 GeV/c, events with small transverse momentum will also become "multi-jet" ones, and R2 vanishes. However, even when kt < 1-2 GeV/c, a few 2-jet events may still survive if the hadronization is almost collinear. This effect becomes observable when the energy is very low, see, e.g., the R2 curve for √s = 6 GeV in Fig. 2a. A similar picture holds also for the results from Herwig5.9, cf. Fig. 2b, but the almost-collinear hadronization appears earlier. Let us give some comments on the physical picture behind the above-mentioned two scales. A circular (or visible) jet originates from a hard parton. The production of this parton is a hard process. Its evolution into final-state
particles includes perturbative branching and subsequent hadronization. The hadronization is a soft process. The perturbative branching (sometimes called parton shower) between the hard production and the soft hadronization connects these two processes. This perturbative branching inside a circular jet is certainly not soft, but is also not so hard. This kind of process is sometimes given the name semi-hard in the literature. The isotropic property of dynamical fluctuations provides a criterion for the discrimination between the hard production of circular jets and the (semi-hard) parton shower inside these jets.
Acknowledgments Supported in part by the National Natural Science Foundation of China (NSFC) under Project 19975021. The authors are grateful to Wolfram Kittel, Wu Yuanfang and Xie Qubin for valuable discussions.
References
1. G. Hanson et al., Phys. Rev. Lett. 35, 1609 (1975).
2. J. Ellis et al., Nucl. Phys. B 111, 253 (1976).
3. R. Brandelik et al. (TASSO Coll.), Phys. Lett. B 86, 243 (1979); D.P. Barber et al. (Mark J Coll.), Phys. Rev. Lett. 43, 830 (1979); Ch. Berger et al. (PLUTO Coll.), Phys. Lett. B 86, 418 (1979); W. Bartel et al. (JADE Coll.), Phys. Lett. B 91, 142 (1980).
4. Liu Feng, Liu Fuming and Liu Lianshou, Phys. Rev. D 59, 114020 (1999).
5. A. Białas and R. Peschanski, Nucl. Phys. B 273, 703 (1986); ibid. 308, 857 (1988).
6. W. Ochs, Phys. Lett. B 247, 101 (1990).
7. Liu Lianshou, Zhang Yang and Deng Yue, Z. Phys. C 73, 535 (1997).
8. Wu Yuanfang and Liu Lianshou, Science in China (Series A) 38, 435 (1995).
9. N.M. Agababyan et al. (NA22 Coll.), Phys. Lett. B 382, 305 (1996); N.M. Agababyan et al. (NA22 Coll.), Phys. Lett. B 431, 451 (1998).
10. Wu Yuanfang and Liu Lianshou, Phys. Rev. Lett. 70, 3197 (1993).
11. G. Veneziano, Momentum and colour structure of jets in QCD, talk given at the 3rd Workshop on Current Problems in High Energy Particle Theory, Florence, 1979.
12. P. Abreu et al. (DELPHI Coll.), Nucl. Phys. B 386, 471 (1992).
EXPERIMENTAL EVIDENCE IN FAVOUR OF LUND STRING WITH A HELIX STRUCTURE
S. TODOROVA-NOVÁ
HEFIN, University of Nijmegen/NIKHEF, The Netherlands
(on leave from FZU Prague, Czech Republic)
E-mail: nova@mail.cern.ch

The idea of an ordered structure at the end of the parton cascade is reviewed. An alternative parameterization for the string structure is proposed and the experimental evidence for the latter is discussed.
1. Introduction - why helix?
The idea of an ordered gluon field forming the QCD string was proposed several years ago 1. It was based on the following considerations:

- due to helicity conservation, emission of gluons from quarks (or gluons) leaves an empty region around the emitting parton;
- due to a relatively large effective coupling, there is a tendency to emit as many (soft) gluons as possible.
The associated numerical study has shown that the optimal packing of emitted gluons in phase space corresponds to a helix-like structure (which minimizes the colour connections between gluons). It was also shown that the helix structure is to be viewed as an internal structure of the Lund string rather than an "excitation" on the string; only gluons with kt > 1.6 GeV can effectively create such an excitation, so that the non-perturbative scenario, where a large number of soft gluons with small kt is emitted, takes over. In consequence, the colour field of the string should be treated as a continuous stream of colour-connected, helix-like ordered, soft gluons with similar kt. The existence of such an internal structure would have a non-trivial implication for the fragmentation of the Lund string: the transverse momentum of the hadron would stem from the integration over the transverse momenta of field quanta (see Fig. 1), and no longer from a non-zero transverse momentum of qq̄ pairs created via tunneling, as in the conventional Lund model. In the fragmentation, the internal structure of the string would impose a correlation, Eq. (1), between the longitudinal and transverse momentum of the hadron, with r the radius of the helix.

Figure 1. Left: Helix-like ordered field quanta; the recoiling parton (quark) spins around the longitudinal direction (string axis). Right: Transverse momentum of a hadron, obtained by integration of the kt of the gluons forming the corresponding part of the string.
The exact relation between the transverse and longitudinal components of the hadron momentum depends on the parameterization of the helix form of the field. The possible solutions and their impact on observable features of fragmentation are discussed below.

2. Parameterization of the helix string
In this section, the original proposal 1 for the helix-like string is briefly recounted and its observable features are discussed. A modification of the original model is proposed.

2.1. Lund helix string; screwiness
The original proposal related the helix form to the rapidity difference along the string. The rapidity at a given point along the string is defined as

y = (1/2) ln[(k+ p+) / (k- p-)] ,   (2)

where p+, p- stand for the light-cone momenta of the endpoint partons, and the fractions k+, k- define the position of the point on the string (see the space-time diagram of the string evolution in Fig. 2). The difference in the azimuthal angle between two points along the helix field is parameterized by
ΔΦ = Δy / r ,   (3)
with r being a parameter of the model. The dotted lines in Fig. 2 link points with the same phase Φ, tracing the evolution of the helix field with time.
Figure 2. Lund helix model. Left: Space-time diagram of the string evolution; the fractions k+, k- of the light-cone momenta of the endpoint quarks define the rapidity along the string. Thin lines indicate the evolution of the helix field parametrized by Eq. (3). Right: The cos(Φ) of the helix field as a function of k+, k- (for an arbitrary parameter r).
In search of an observable effect allowing to verify the model, a variable called screwiness was defined as

S(ω) = Σ_e P_e |Σ_j exp(i(ω y_j - Φ_j))|² ,   (4)

where the first sum runs over the set of events and the second over the hadrons in each event; y and Φ stand for the rapidity and azimuthal angle of the hadron, respectively. The presence of a helix structure of the type described by Eq. (3), for not too small values of r, would manifest itself as a peak in a scan over the ω parameter, see Fig. 3. The presence of such an effect was promptly checked by experimentalists, with a negative result 3, which
Figure 3. Screwiness S(ω) for r = 0.3, 0.5 and 0.7; MC estimate.
temporarily stopped the discussion about the helix-like string. However, there are several reasons to give it a second thought, and it is the aim of this contribution to point out some interesting features supported by the experimental data.

2.2. Modified Lund helix string
To investigate possible alternatives to the helix parameterization of Eq. (3), it is useful to recount the basic assumptions used in the construction of the helix model. In the process of multiple emission of soft gluons during the creation of the string, helicity conservation imposes a restriction on the minimal mass of colour-connected quark-gluon (gluon-gluon) dipoles. Assuming that the emitted gluons have similar energy/transverse momentum, the mass of the dipoles depends on the difference in gluon rapidity and/or azimuthal angle. The original proposal for the helix string put the accent on the rapidity difference. Here we take another approach, based on the separation of gluons in azimuthal angle, and show that this leads to a viable alternative helix-like string structure. Imagining the colour field created by a quark and an antiquark as a stream of gluons with similar kt and rapidity, ordered in the helix structure optimizing the packing of gluons in phase space, the parameterization one arrives at is
ΔΦ = 0.5 ω (Δk+ + Δk-) M ,   (5)
where M stands for the mass of the string, and Δk+, Δk- define the (length of the) corresponding string piece. The resulting helix field is shown in
Fig. 4. The difference between the two types of helix string is illustrated by the comparison of Figs. 2 and 4:
Figure 4. Modified Lund helix model. Left: Space-time diagram of the string evolution; the fractions k+, k- of the light-cone momenta of the endpoint quarks define the rapidity along the string. Dotted lines indicate the evolution of the helix field parametrized by Eq. (5). Right: The cos(Φ) of the helix field as a function of k+, k- (for an arbitrary parameter ω).

- homogeneity of the string field is achieved in the modified version of the helix;
- the evolution of the helix phase at a given point of the string is suppressed in the modified version of the model (the modified helix structure is static);
- the modified helix model solves the problem of handling the divergence in the definition of the helix winding at the endpoints of the string (k+(k-) → 0 in Eq. (3)). This problem, not addressed in 1, is of special importance for the extension of the model to strings with hard gluon kinks (3- and more-jet events).
It can be argued on theoretical grounds that the modification of the helix string prescription brings numerical stability to the definition of the helix field for an arbitrary string topology and better fits the picture of fragmentation of a uniform string field. However, the main interest of the modified helix scheme lies in its impact on the observable features of fragmentation, discussed in the next section.
3. Observables
The lack of a signal for screwiness as defined by Eq. (4) is in agreement with the expectations from the modified helix model (see Fig. 5), where the direction of the transverse momentum is not strongly related to the rapidity of the hadrons. Therefore, this variable is not suited to test the alternative helix model.
Figure 5. Screwiness S(ω) in the Lund helix parameterization (r = 0.3 and 0.7) (a), and in the modified helix parameterization (ω = 0.2) compared to the standard, non-helix Lund string (b).
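The screwiness of Eq. (4) is straightforward to evaluate numerically. The toy sketch below (synthetic events with a built-in helix-like y-Φ ordering; event counts, multiplicities and the smearing width are all illustrative, not taken from any experiment) shows how a peak in the scan over ω would signal a Lund-type helix:

```python
import numpy as np

def screwiness(events, omegas):
    # Eq. (4): S(omega) = sum_e P_e | sum_j exp(i(omega*y_j - Phi_j)) |^2,
    # here with equal event weights P_e = 1/N_events
    s = np.zeros(len(omegas))
    for y, phi in events:
        for k, w in enumerate(omegas):
            s[k] += np.abs(np.sum(np.exp(1j * (w * y - phi)))) ** 2
    return s / len(events)

# synthetic events with a built-in helix-like ordering Phi ~ omega0 * y
rng = np.random.default_rng(0)
omega0 = 3.0
events = []
for _ in range(200):
    y = rng.uniform(-2.0, 2.0, 15)
    phi = (omega0 * y + rng.normal(0.0, 0.3, 15)) % (2.0 * np.pi)
    events.append((y, phi))

omegas = np.linspace(0.5, 6.0, 12)  # scan grid in omega
S = screwiness(events, omegas)
print(omegas[np.argmax(S)])  # the scan peaks near omega0
```

In the modified helix model the phase is tied to energy rather than rapidity, so no such peak is expected, consistent with the null result discussed above.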
The modified helix, however, implies a tight relation between the transverse momentum and the energy of the final hadrons. It follows from Eqs. (1) and (5):

|pt(hadron)| = 2r |sin(ω E(hadron)/2)| ,   (6)

r, ω being the parameters (radius, winding) of the helix string. These large correlations are shown in Fig. 6 for hadrons stemming from the fragmentation of a simple qq̄ string at E_cm = 91.22 GeV. They are somewhat diluted by resonance decays, but are still visible until the parton shower is switched on (Fig. 7). In the presence of hard gluons, the thrust axis of the event (used to define the longitudinal direction) no longer coincides with the string axis, and the correlations become (unfortunately) unobservable, even after a strict cut on the thrust value, selecting essentially 2-jet events. Even though a 'direct' observation of the helix structure in E-pt spectra seems unlikely, its presence can nevertheless be traced, indirectly, in the inclusive pt spectra. The rather poor description of pt distributions in Z0 decay by the standard fragmentation codes, despite an extended tuning
Figure 6. Correlation between transverse and total momentum of final hadrons from the fragmentation of a simple qq̄ string with helix structure defined by Eq. (5). Parameters of the helix are r = 0.5 GeV and ω = 0.4 rad/GeV. Left: direct hadrons only. Right: all final hadrons, including decay products of resonances.
Figure 7. Dilution of the visible correlations between transverse and total momentum of final hadrons from the fragmentation of a string with helix structure defined by Eq. (5), due to the presence of strings with hard gluon kinks after parton-shower evolution. The transverse/longitudinal direction is defined with respect to the thrust axis of the event. Parameters of the helix are the same as in Fig. 6. Left: Inclusive Z0 sample. Right: qq̄ sample after 2-jet selection (Thrust > 0.97).
effort, is a well known (even if not widely publicised) fact, illustrated by Fig.8.
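The functional form of Eq. (6) is easy to explore numerically. A minimal sketch (illustrative only, not the generator implementation), using the helix parameters quoted for Fig. 6 (r = 0.5 GeV, ω = 0.4 rad/GeV):

```python
import math

def helix_pt(E, r=0.5, omega=0.4):
    """Transverse momentum implied by the modified helix string, Eq. (6):
    |p_t(hadron)| = 2 r |sin(omega E(hadron) / 2)|,
    with E in GeV, r in GeV and omega in rad/GeV."""
    return 2.0 * r * abs(math.sin(omega * E / 2.0))

# p_t oscillates with the hadron energy and is bounded by 2r = 1 GeV/c:
for E in (1.0, 2.0, 5.0, 10.0):
    print(round(helix_pt(E), 3))
```

The bound |p_t| ≤ 2r and the oscillation with E are exactly the large E-p_t correlations shown in Fig. 6.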
Figure 8. Data-MC comparison of transverse momentum distributions in the inclusive Z⁰ sample. p_t^maj (p_t^min) is the projection of the particle momentum on the major (minor) axis of the event. None of the standard fragmentation models gives a satisfactory description of the data.
Figure 9. DELPHI data-MC comparison of transverse momentum distributions in the inclusive Z⁰ sample, for the standard Pythia 6.156, and Pythia with the tuned modified Lund helix model.
As seen in Fig. 9, the modified helix structure of the string describes the p_t spectrum better (even if not completely) than the conventional Lund string. In particular, it removes the characteristic 'bump' at low p_t (around 0.5 GeV/c). At the same time, the agreement with data in scaled momentum and various event shape variables remains (after retuning) basically unaffected (more information is to be found in [5]), which is a non-trivial conclusion given the number of degrees of freedom removed from the fragmentation by Eq.(5). Another interesting property of the helix-like string is that it influences the 2-particle spectra. According to preliminary estimates, the existence of the internal structure of the string may account for a 10% enhancement of the 2-particle correlation function (see Fig. 10), which may explain the observation of positive correlations between non-identical particles.
Figure 10. Left: MC comparison of 2-particle densities in the inclusive Z⁰ sample,
for the standard Pythia 6.125 (full lines), and Pythia with tuned modified Lund helix model (points). Right: The contribution of the internal helix structure to the 2-particle correlation function.
4. Conclusions
The possibility of the existence of an internal helix structure of the color field and its consequences for observable phenomena are discussed. It is shown that at least one particular form of the helix field (not identical to the one proposed in [1]) is supported by the experimental data. The introduction of a tight relation between the longitudinal and transverse components of the hadron momenta yields a better description of inclusive p_t spectra in hadronic Z⁰ decay. Another non-trivial consequence of the QCD string having such a structure is the appearance of small, but not negligible, 2-particle correlations. In particular, the helix string can explain most of the low-Q correlations observed between unlike-sign pairs of particles, and it reduces the amount of correlations between identical particles which can be attributed to genuine Bose-Einstein correlations. A further study of the model, oriented on the investigation of the flavour dependence of the helix field, is under way.

References
1. B. Andersson, G. Gustafson, J. Häkkinen, M. Ringnér, P. Sutton: Is there a screwiness at the end of the QCD cascades? JHEP 09, 014 (1998).
2. Š. Todorova-Nová: About the helix structure of the Lund string, CERN-EP 99-108.
3. A. De Angelis, Proc. of 28th ISMD, Delphi, Greece, 6-11 Sept. 1998, Eds. N. Antoniou et al. (World Scientific, Singapore 2000) p. 336.
4. DELPHI Coll., Z. Phys. C 73, 11 (1996), CERN-PPE/96-120.
5. O. Devroede, PhD thesis (annexe), in preparation.
BOSE-EINSTEIN CORRELATIONS IN THE LUND MODEL FOR MULTIJET SYSTEMS
SANDIPAN MOHANTY
The Department of Theoretical Physics, Sölvegatan 14 A, 223 62 Lund, Sweden.
E-mail: sandipan@thep.lu.se

The interference based analysis of Bose-Einstein Correlations in the Lund Model has hitherto been limited to simple strings without gluonic excitations. A new fragmentation method based on the Area Law in the Lund Model allows such an analysis to be extended to multigluon strings.
1. Introduction
The Bose-Einstein effect, or the enhancement of the two particle correlation function for identical bosons with very similar energy-momenta, is well known in hadronic interactions. Since hadronisation is mostly described through phenomenological models and Monte Carlo simulations, which are based on classical probabilistic concepts, quantum mechanical effects such as the Bose-Einstein Correlations (BEC) pose a problem. In the event generator PYTHIA, where hadronisation is handled through the Lund string fragmentation model, this effect is mimicked by introducing an attractive interaction between identical bosons in the final state. The purpose behind this is to parametrise the effect, rather than to provide a physical model for it. A physical model for describing the BEC effect within the string fragmentation scenario was developed by Andersson and Hofmann in [1], which was later extended by Andersson and Ringnér in [2]. They showed that associating an amplitude with the decay of a string into a set of hadrons in the Lund Model leads to interference effects which enhance the probability for identical bosons to form a shade closer in phase space than would be expected in a purely classical treatment, and identical fermions a shade farther apart. But their formulation was limited to the simplest string configuration,
i.e., a string stretched between a quark and an antiquark with no gluonic excitations. Comparison with direct experimental data on BEC was not feasible, since a proper description of the properties of hadronic jets requires parton showers, and subsequent fragmentation of multigluon strings. Even though PYTHIA implements one approach towards multigluon string fragmentation, the interference based model for the Bose-Einstein effect of Andersson and Ringnér could not be extended to the multigluon string fragmentation scheme in PYTHIA. Recently, an alternative way to fragment the multigluon string has been developed in [3]. Unlike the approach in PYTHIA, this method does not try to follow the complicated surface of a multigluon string. It is based on the observation that the string surface is a minimal area surface in space-time, and hence it is completely determined by its boundary. An attempt was made to reformulate string fragmentation as a process along this boundary, called the "directrix". The result was a new scheme for string fragmentation, with a simple generalisation to multigluon strings. This method of hadronisation has been implemented in an independent Monte Carlo routine called "ALFS" (for "Area Law Fragmentation of Strings")ᵃ. Particle distributions from ALFS are in agreement with those of PYTHIA on the average, but there are differences on an exclusive event-to-event basis, which may show up in higher moments of the distributions. It was also understood that the interference based model for the BEC effect can be extended to multigluon string fragmentation in ALFS. In Sec. 2 this new fragmentation scheme will be summarised very briefly. A brief description of the basic physics of the interference based approach to the BEC appears in Sec. 3. In Sec. 4 the concept of coherence chains will be introduced which allows the extension of the analysis of BEC in the Lund Model to multigluon strings.
Finally, some preliminary plots obtained by using this method to analyze two particle correlations will be presented in Sec. 5.

2. String Fragmentation as a process along the directrix
We recall that the probability for the formation of a set of hadrons from a given set of partons in the Lund Model is given by what is known as the "Area Law". It states that this probability is the product of the final state phase space and the negative exponential of the area spanned by the string before it decays into the hadrons (cf. Figure 1):

dP({p_j}; P_total) ∝ [Π_j N d²p_j δ(p_j² − m_j²)] δ²(Σ_j p_j − P_total) exp(−bA).    (1)

ᵃAvailable on request from the author.
Figure 1. Fragmentation of a String without gluonic excitations in the Lund Model.
An iterative process based on the result in Eq. (1) is fairly straightforward to construct for systems without gluons. In the Lund Model, gluons are thought of as internal excitations on the string. A string with many such excitations traces complicated surfaces consisting of a large number of independent planar regions in space-time. One example can be seen in Figure 6 in Sec. 3, which illustrates the surface of a string with just one gluon. Calculating the energy momenta of the hadrons resulting from a decay of strings with many gluons is rather difficult. But since the world surface of a string is a minimal area surface, it has many important symmetry properties which may be exploited while considering its decay into a set of hadrons. Minimal surfaces are completely specified by their boundaries. For a string in the Lund Model, this boundary, called the "directrix", is the trajectory of the quark or the antiquark (one of the end points). Since the directrix determines the string surface, it is possible to formulate string fragmentation as a process along the directrix, as shown in [3]. The directrix for a string, which can be thought to originate at a single point in space-time, is particularly simple and easy to visualize. This curve can be constructed by placing the energy-momentum vectors of the partons one after the other in colour order as shown (schematically) in Figure 2. The fragmentation process developed in [3] identifies the area in the area law with the area between the directrix and the "hadronic curve"ᵇ,

ᵇThe string constant κ, or the energy per unit length in the string, will be set to unity
Figure 2. Schematic representation of the directrix for a configuration with a few resolved partons.
i.e., the curve obtained by placing the hadron energy momenta one after the other in rank order. The area used in the area law can be partitioned into contributions from the formation of each hadron in many different ways. Figure 3 shows one possible partitioning where a triangular region is associated with one particle (shaded region in the upper left part of the figure). This figure also illustrates the connection between the area in Figure 1 and the area between the directrix and the hadronic curve. The upper left part of the figure shows the same set of breakup vertices and hadrons as Figure 1. The vectors q_j in the lower half of the figure are obtained from the vertex vectors x_j by inverting one light-cone component, and are "dual" to the vectors x_j in this sense. They represent the energy momentum transfer between the two parts of the string formed because of the breakup at x_j. The triangular regions in the upper part of the figure can be geometrically mapped to the triangular regions in the lower part. But the sum of the triangular areas in the lower part is the area between the directrix and the hadronic curve, whereas in the upper part it is the area as used in Eq. (1) (ignoring a dynamically uninteresting constant contribution proportional to m² for each hadron of mass m). The hadronisation process in ALFS associates a quadrangular "plaquette", bounded by the hadron energy momentum vector, two 'vertex' vectors, and a section of the directrix, with the hadron. These plaquettes are not simple geometrical projections of the triangular areas shown in Figure 3, but their areas are related in such a way that the sum of the areas of the plaquettes is the same as the sum of the areas of the triangles. The 'vertex' vectors in ALFS indeed do correspond to the space-time locations where quark-antiquark pairs form along the string during fragmentation, for a flat string. But in a more general context, it is better to think of them as
Figure 3. One possible way to partition the area of a fragmenting string into contributions for each hadron. It shows the connection between the area in the area law and the area between the directrix and the hadronic curve.
somewhat more complex dynamical variables. String fragmentation (especially as implemented in ALFS) could be thought of in terms of energy momentum transfer or "ladder" diagrams like in Figure 4. A quark momentum k_q branches into a hadron momentum p₁ and an energy momentum transfer q₁, which then branches into a hadron vector p₂ and a new energy momentum transfer q₂, and so on. At each stage the hadron momentum forms from the energy momentum transfer vector coming into that stage and another independent vector which serves to define a longitudinal plane in space-time. This other vector is just the anti-quark vector for a flat string. More generally it is a section of the directrix. This completes our brief overview of string fragmentation
Figure 4. The Lund Model can be thought of in terms of a ladder diagram involving energy-momentum exchanges.
in ALFS. For a detailed treatment and the exact expressions the reader is referred to [3].
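The ladder picture of Figure 4 amounts to a simple recursion: each energy-momentum transfer is what remains after the previous hadron is split off, q_j = q_{j−1} − p_j with q_0 = k_q. A toy sketch (schematic (E, p_z) two-vectors with made-up numbers, not the ALFS implementation):

```python
def ladder_transfers(k_q, hadrons):
    """Chain of energy-momentum transfers q_1, q_2, ... of the ladder
    diagram: q_j = q_{j-1} - p_j, starting from the quark momentum k_q.
    Momenta are (E, p_z) pairs; subtraction is component-wise."""
    qs, q = [], k_q
    for p in hadrons:
        q = (q[0] - p[0], q[1] - p[1])
        qs.append(q)
    return qs

# Toy numbers: three hadrons split off a quark of momentum (10, 10).
qs = ladder_transfers((10.0, 10.0), [(4.0, 3.5), (3.0, 2.5), (2.0, 1.0)])
print(qs[-1])  # (1.0, 3.0): what is left for the rest of the string
```

Each q_j is fixed by energy-momentum conservation at the corresponding rung of the ladder.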
3. Physics of Bose-Einstein Correlations in the Lund Model

There is a formal similarity between the Area Law in Eq. (1) and quantum mechanical transition probabilities. And even though hadronisation is a quantum mechanical process, the semiclassical approach in the Lund Model has been very successful in describing experimental data. It is not impossible, therefore, that the underlying quantum mechanical process might have an amplitude which when squared resembles the area law. In [2] Andersson and Ringnér argued that one can associate an amplitude of the form

μ = e^{i(κ + ib/2)A},    (2)
Figure 5. Interchanging rank order of two identical particles in the final state would require a different set of breakup vertices and a different area under the string.
where κ is the string constant, with the decay of a string into a set of hadrons. This amplitude trivially reproduces the Area Law in Eq. (1). But it also introduces interference effects for final states involving two or more identical particles, since for such final states the string fragmentation model allows many different ways to produce the same final state from a given initial state, as illustrated in Fig. 5. The figure shows two sets of breakup vertices which could lead to the same set of final state particles in the same flavour order. The particles labeled "1" and "2", assumed identical, have interchanged rank orders between the two schemes. The two schemes clearly involve different areas, and hence will have different amplitudes according to Eq. (2). This means the total squared amplitude for forming such a final state (assuming there are no other identical particles in the event) should be |μ|² = |μ₁ + μ₂|², where μ₁ and μ₂ are the amplitudes of the two schemes shown in Figure 5. But a probabilistic Monte Carlo simulation would assign a probability |μ₁|² + |μ₂|² to such a state, which does not account for the interference term. Thus, to associate the right probability with the events we may weight this event with an event weight

w = |μ₁ + μ₂|² / (|μ₁|² + |μ₂|²).    (3)
The result can be generalised to the case of many identical particles, and to include the effect of transverse momentum generation during hadronisation, as described in [2]. Treatment of string states with gluonic excitations presents new problems. Since the multiplicity of the events rises with the number of gluonic excitations, the number of identical particles expected is larger. This presents a computational problem. More importantly though, in this case it is not always possible to find a string fragmentation scheme with only the rank order of two identical particles interchanged. When an exchanged scheme exists the calculation of true area differences and transverse momentum contributions to the amplitude is rather involved, if the exchanged particles were originally produced in different planar regions. But in string fragmentation, the particle energy momenta are constructed from local momentum flow along the string world surface in the neighbourhood of the breakup vertices. Therefore, most of the energy momentum of a hadron is along the local longitudinal directions relative to the string. Figure 6 once again shows two identical particles formed in different regions in the string. But this time they do not belong to the same planar region on the string. It is clear that the “exchanged” scheme (shown to the right) would be highly unlikely to emerge from this string as the energy momenta are no longer nearly aligned with the local longitudinal directions.
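For a single exchanged pair, the event weight follows directly from the amplitude of Eq. (2). A minimal sketch (illustrative κ, b and area values, not the actual analysis code):

```python
import cmath

def amplitude(area, kappa=1.0, b=0.3):
    """Lund amplitude of Eq. (2): mu = exp(i (kappa + i b/2) A),
    so that |mu|^2 = exp(-b A) reproduces the Area Law."""
    return cmath.exp(1j * (kappa + 0.5j * b) * area)

def be_weight(a1, a2, kappa=1.0, b=0.3):
    """Interference weight |mu1 + mu2|^2 / (|mu1|^2 + |mu2|^2)
    for two schemes related by exchanging two identical particles."""
    m1 = amplitude(a1, kappa, b)
    m2 = amplitude(a2, kappa, b)
    return abs(m1 + m2) ** 2 / (abs(m1) ** 2 + abs(m2) ** 2)

print(be_weight(5.0, 5.0))  # equal areas: fully constructive, weight 2
print(be_weight(5.0, 8.0))  # area difference: weight oscillates around 1
```

Large area differences suppress the interference term through the real factor exp(−b ΔA/2), which is what confines the effect to configurations with small area (and hence small momentum) differences.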
Figure 6. We show here the surface traced by a string in a system consisting of a quark, a single gluonic excitation, and an antiquark. Interchanging rank order of two identical particles in different string planes seems unnatural. The interchanged schemes would have very low probabilities to be produced during string fragmentation. It may help to think of the two surfaces represented here like two chairs facing the reader, for visualisation.
It was mentioned earlier that the fragmentation scheme in ALFS does not depend on explicit representations of the string surface such as the one in Figure 6. In that approach, it is sometimes possible to find another partonic configuration which may result in the exchanged scheme as one possible event. But if the partonic state is held fixed, such an exchange would be improbable for the reasons just mentioned. As a first approximation therefore, it is reasonable to calculate BEC on multigluon strings by considering particle permutations in the planar regions of the string surface and ignoring the effects of exchange of particles across gluon corners. But the number of gluons and the size of planar regions on the string depend on the cut-off scale in the parton cascade used to generate the partons in an event generator. It would therefore seem that by making the cut-off sufficiently small we can make the planar regions so small that there would not be any instances of identical particles in one planar region anywhere in the event. To address this, we introduce the concept of coherence chains.
4. Coherence Chains
When the cut-off scale in the ordering variable (gluon transverse momentum, for example) is made small, softer and softer gluons are resolved. For a relatively soft gluon, the two planes in Figure 6 will be only slightly inclined with respect to each other, and the exchanged scheme would not appear so unnatural. If such exchanges are permitted in ALFS, the new partonic states created will not be outrageously different from the one we started with. However, parton showers are probabilistic in the Monte Carlos. Information about the phases involved with different partonic configurations is "lost". To analyse permutations of identical hadrons across gluon corners, we need to consider interference effects between results of hadronisation from two slightly different partonic configurations. This appears to be problematic as we need both the phase information from the string fragmentation and the phase information from the partonic stage while calculating the interference terms and event weights. Infrared stability of string fragmentation, on the other hand, suggests that the detailed properties of the hadronic states should not be extremely sensitive to gluon emission around hadronic mass scales. In a sense the string state is resolved at the hadron mass scales by the fragmentation process. One interesting consequence of this was observed for the set of
hadrons emerging out of the fragmentation of multigluon strings in ALFS. The energy momenta of the hadrons could be collected into sets, such that inside each set, the energy momenta are aligned in a plane in space-time up to a small scale in transverse momentum fluctuations. This suggests that at least some aspects of the hadronic phenomena might be insensitive to the softest gluons generated by the parton showers. With an analysis of BEC in mind we call these groups of particles in the final state "Coherence Chains". They describe the regions on the string over which coherent interference effects between hadrons should be considered. As we have seen, it seems quite unnatural to consider symmetrisation across hard gluon corners, cf. Fig. 6, whereas symmetrisation across soft gluons is necessary. The transverse momentum resolution scale used to define the coherence chains should be chosen such that it distinguishes between these situations. The approximation being made in the analysis of BEC through the coherence chains could be stated as follows: we ignore the possible effects on BEC of the slightly different amplitudes of different partonic states which may give rise to one coherence chain after hadronisation. To calculate BE weights, we treat the hadronic state as if it came from a simpler string state which has only those planes in it which are present in the coherence chains. Symmetrisation is then carried out separately for each plane, the squared amplitudes multiplied, and a suitable event weight calculated. The hadron energy momenta are not directly altered as in PYBOEI (the BE subroutine in PYTHIA), but different events receive different weights. There is a tendency for events with higher multiplicity to yield higher weights.
Since multiplicity is a function of gluonic activity, it is not possible to retune parameters pertaining to hadronisation to compensate for the multiplicity dependence of weights, unless we associate a total of one hadronic state for each partonic configuration. This leaves only the possibility of a rejection weighting on the hadronized states based on their BE weights in a Monte Carlo. This procedure is much slower than PYBOEI. But the purpose of this exercise is to provide a physical picture for the phenomenon inside the Lund Model.
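The rejection step described above can be sketched in a few lines (a toy stand-in for the generator and the weight calculation, not the actual code):

```python
import random

def rejection_sample(generate_event, weight, w_max, rng):
    """Keep a hadronic state with probability w / w_max, so that the
    accepted sample is distributed according to the BE weights."""
    while True:
        event = generate_event(rng)
        if rng.random() < weight(event) / w_max:
            return event

rng = random.Random(42)
# Toy check: two event types generated 50/50 with BE weights 0.5 and
# 1.5; after rejection, type 1 should occur about three times as often.
sample = [rejection_sample(lambda r: r.randint(0, 1),
                           lambda e: 0.5 + e, 1.5, rng)
          for _ in range(4000)]
print(sum(sample) / len(sample))  # close to 0.75
```

This accept/reject loop, which generates events only to sometimes discard them, is what makes the procedure slower than PYBOEI.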
5. Preliminary Results and Concluding Remarks

The interference based analysis of BEC in the Lund Model has been extended so as to be applicable to multigluon string fragmentation as implemented in ALFS. Modules for BEC calculations have also been introduced
into ALFS. A preliminary analysis shows the expected enhancement of the two particle correlation function at small momentum differences. For events with a few prominent jets, BEC tends to decrease with the number of jets if the total λ-measure for the strings is kept fixed, cf. Figure 7. No significant correlation is seen between oppositely charged pions, cf. Figure 8. A detailed study of the properties of coherence chains, how they affect the analysis of BEC, and further studies of this model for BEC itself will be presented elsewhere.
Figure 7. Two particle correlation function from ALFS for systems with few jets. The "string length" or λ-measure was kept fixed.
Figure 8. This plot shows that no significant correlation effects are expected between oppositely charged pions in this model.
Acknowledgements

This project began as a collaboration between the late Prof. Bo Andersson, my colleague Fredrik Söderberg and myself. Even though it is still an unfinished project and new developments are being made, we are indebted to Prof. Andersson for the numerous insights he provided while he was with us.
References
1. B. Andersson and W. Hofmann, Phys. Lett. B 169, 364 (1986).
2. B. Andersson and M. Ringnér, Nucl. Phys. B 513, 627-644 (1998).
3. B. Andersson, S. Mohanty and F. Söderberg, Eur. Phys. J. C 21, 631-647 (2001).
POWER SERIES DISTRIBUTIONS IN CLAN STRUCTURE ANALYSIS: NEW OBSERVABLES IN STRONG INTERACTIONS

R. UGOCCIONI AND A. GIOVANNINI
Dipartimento di Fisica Teorica and I.N.F.N. - sezione di Torino, via P. Giuria 1, I-10125 Torino, Italy

We present a new thermodynamical approach to multiparticle production in high energy hadronic interactions, making use of the formalism of infinitely divisible power series distributions. This approach allows us to define new observables, linked to the system fugacity, which characterise different classes of events.
1 Introduction

The phenomenological analysis of many-particle final states in hadron-hadron collisions in the GeV region has been successfully carried out¹ using a two-component model: each event is assigned to one of two classes, called 'soft' and 'semi-hard', which correspond to events without mini-jets and events with mini-jets, respectively. We also assume that the multiplicity distribution (MD) in each class is described by a negative binomial (Pascal) distribution (NBD), of course with different parameters n̄, k in each class. This model was successful in describing the shoulder in MD's,² the oscillations of high rank moments thereof¹ and also forward-backward multiplicity correlations.¹ There are also experimental indications that the two classes behave differently in the TeV region.³ In order to extrapolate to the LHC region we have to make some assumptions on the behaviour of the parameters of the two NBD's, which we summarise as follows:
* The overall average multiplicity grows as ln²√s.
* The soft component average multiplicity grows as ln√s and the MD obeys KNO scaling (thus k_soft is approximately constant).
* The average multiplicity in the semi-hard component is approximately twice as large as in the soft one. Three scenarios have been examined for the behaviour of k_semi-hard:
1. same behaviour as for the soft component, i.e., it is constant (therefore KNO scaling is satisfied);
2. k_semi-hard ∝ ln√s, implying a strong violation of KNO scaling;
3. it follows a QCD-inspired behaviour; KNO scaling is attained only asymptotically. This last scenario is intermediate between the first two.
Of course, using NBD's means we can explore the clan structure:⁴,⁵ the average number of clans and the average number of particles per clan are defined by

N̄ = k ln(1 + n̄/k),    n̄_c = n̄/N̄.    (1)
It turns out that the second and third scenarios show a number of clans which is rapidly decreasing with c.m. energy (accompanied by a fast increase of the average number of particles per clan). This is surprising, and in this paper we try to understand the implications of this result at parton level using thermodynamical concepts. To connect the hadronic and partonic levels we use the generalised local parton-hadron duality (GLPHD),⁶ which says that all inclusive distributions are proportional at the two levels of investigation:

Q_{n,hadrons}(y₁, . . . , y_n) = ρⁿ Q_{n,partons}(y₁, . . . , y_n),    (2)

which corresponds for NBMD parameters to

k_hadron = k_parton,    n̄_hadron = ρ n̄_parton.    (3)
GLPHD will be applied separately to soft and semi-hard components.
2 A new thermodynamical approach

The thermodynamical approach to multiparticle production has a long history which cannot be summarised here. We would just like to attract the reader's attention to the result⁷ that, to leading order in the allowed rapidity range, the generating function (GF) for the MD has the form of an infinitely divisible distribution (IDD). Keeping in mind this result, we propose the following approach. The partition function in the canonical ensemble, Q_n(V, T), for a system with n particles, volume V and temperature T, is linked to the partition function in the grand-canonical ensemble with fugacity z, Q(z, V, T), by the well known relation

Q(z, V, T) = Σ_n zⁿ Q_n(V, T).    (4)
Quite in general, in the grand-canonical treatment the probability of finding n particles in a system is given by

P(n) = zⁿ Q_n(V, T) / Q(z, V, T).    (5)
That is to say, for a thermodynamical system the MD belongs to the class of power series distributions (PSD's), and is indeed characterised by the following form:

P(n) = a_n bⁿ / Σ_m a_m bᵐ,    (6)
with constants a_n, b. We therefore propose, given a MD in power series form, the following correspondence with Eq. (5):

a_n = Q_n(V, T),    b = z,    Σ_m a_m bᵐ = Q(z, V, T).    (7)
When the PSD is also IDD, then we know it can be cast in the form of a compound Poisson distribution, such that

P(0) = e^(−N̄).    (8)
In a two-step approach, N̄ is the average number of objects (clans) generated in the first step. This way of describing the partonic cascade is well known:⁵ the ancestors (first step) are independent intermediate gluon sources; it is their thermodynamic properties which we want to explore. In our thermodynamical approach, N̄ becomes of fundamental importance since Eqs. (7) and (8) imply
N̄ = −ln P(0) = ln Q;    (9)
all thermodynamical properties can be obtained by differentiating N̄. From the standard relation PV = k_B T ln Q, we obtain the equation of state
PV = N̄ k_B T,    (10)
which says that clans form a classical ideal gas. The negative binomial (Pascal) distribution belongs to both classes, power series and IDD. The standard form, from which the correspondence with the partition function can be obtained, is the following:

P(n) = [k(k + 1) · · · (k + n − 1)/n!] (n̄/(n̄ + k))ⁿ (k/(n̄ + k))ᵏ.    (11)
The identification we propose in our approach is

a_n = k(k + 1) · · · (k + n − 1)/n!,    b = n̄/(n̄ + k).    (12)
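The identification is easy to verify numerically: with these a_n and b one has Σ_n a_n bⁿ = (1 − b)^(−k), so P(n) = a_n bⁿ (1 − b)ᵏ is normalised with mean n̄. A sketch with illustrative parameter values (not taken from the paper):

```python
def nbd_pmf(n, nbar, k):
    """NB (Pascal) MD in power-series form: P(n) = a_n b^n (1-b)^k,
    with a_n = k(k+1)...(k+n-1)/n! and b = nbar/(nbar + k)."""
    b = nbar / (nbar + k)
    a_n = 1.0
    for i in range(n):          # builds k(k+1)...(k+n-1)/n!
        a_n *= (k + i) / (i + 1)
    return a_n * b ** n * (1.0 - b) ** k

nbar, k = 10.0, 3.0
probs = [nbd_pmf(n, nbar, k) for n in range(400)]
print(sum(probs))                               # ~1.0: normalisation
print(sum(n * p for n, p in enumerate(probs)))  # ~10.0: mean = nbar
```

The same function also satisfies the NBD recurrence relation used in Sec. 3, (n + 1) P(n + 1)/P(n) = a + b n with a = k b.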
Notice that k(V, T) is the canonical partition function for a system with 1 particle; it is in our approach an unknown function of V and T. Finally notice that b is the fugacity z:

z = b = n̄/(n̄ + k).    (13)
When the ancestors are created early in the evolution, at larger virtualities and with higher temperature, they tend to follow a quasi-classical behaviour, as the production of a new ancestor is competitive with the increase in gluon population within each clan. This results in a relatively large value of the k parameter, i.e., a small amount of aggregation. When the number of partons per clan is very small (close to 1; k is very large) then essentially each parton is a clan, and the equation of state reduces basically to that of an ideal gas (quasi-classical behaviour):

PV ≈ n̄ k_B T.    (14)
Via GLPHD, we expect a similar situation to hold at hadron level. This behaviour is qualitatively close to that of soft events as well as of scenario-1 semi-hard events. When the ancestors are created later in the evolution, at lower virtualities and with lower temperature, they tend to remember their quantum nature, as newly produced gluons prefer to stay together with other clan members rather than initiate a new clan. This results in a relatively small value of the k parameter, i.e., a larger aggregation and larger two-particle correlations. When the number of partons per clan begins to grow, the equation of state for partons becomes more and more different (quasi-quantum behaviour), but that for clans remains that of an ideal gas.
PV = N̄ k_B T = k ln(1 + n̄/k) k_B T.    (15)
Via GLPHD, at hadron level we recognise the behaviour of scenario-2 and scenario-3 semi-hard events. It is interesting now to calculate some thermodynamical quantities. The Helmholtz free energy can be rewritten in a form symmetric in n̄ and k:

A = n̄μ − PV = k_B T [ n̄ ln(n̄/(n̄ + k)) + k ln(k/(n̄ + k)) ].    (16)

The average internal energy U and the entropy S follow by differentiation; the entropy coincides with −A/T in the limit (∂k/∂T)_V → 0, since then also U → 0. For further discussion of thermodynamical quantities, see Ref. 8.
3 Clan behaviour as a function of fugacity

Relying on GLPHD, we analyse first experimental data on the fugacity and the related a parameter: the NBD satisfies the recurrence relation

(n + 1) P(n + 1)/P(n) = a + b n,

where

a = k b.
From Eq. (12) it is seen that b is the fugacity. In Figure 1 we show for each component and each scenario the energy variation of the parameters a and b. The points come from NB fits to experimental MD's, the lines show the predictions from the extrapolation mentioned in the introduction. The a parameter corresponds to the average multiplicity for a classical (Poisson) system. The relative behaviour of b and a = kb as the c.m. energy increases can be considered an indication of the relative importance of a behaviour closer to a quantum one (i.e., harder) with respect to a behaviour closer to a quasi-classical one (i.e., softer) for a class of events. A very slow increase of b with c.m. energy and an almost constant behaviour of a is the main characteristic of the class of soft events and of scenario-1 semi-hard events. A very fast decrease of a in scenarios 2 and 3 and larger values of the fugacity b characterise harder events: the assumption of strong KNO scaling violation for the semi-hard component (an extreme point of view with respect to that of scenario 1) implies a completely new panorama. Then we explore the dependence of clan parameters on the fugacity b, induced by its energy evolution:
N̄ = k ln(1 + n̄/k) = −k ln(1 − b);

n̄_c = n̄ / [k ln(1 + n̄/k)] = b / [(b − 1) ln(1 − b)].
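These clan relations are easy to evaluate; note in particular that n̄_c = b/((b−1) ln(1−b)) depends on the fugacity alone, while N̄ = −k ln(1−b) scales with k. A minimal sketch (our function names):

```python
import math

def nbar_clans(b, k):
    """Average number of clans: Nbar = k ln(1 + nbar/k) = -k ln(1 - b)."""
    return -k * math.log(1.0 - b)

def nbar_per_clan(b):
    """Average particles per clan, nc = b / ((b - 1) ln(1 - b)); independent of k."""
    return b / ((b - 1.0) * math.log(1.0 - b))

for b in (0.3, 0.5, 0.7, 0.9):
    row = ", ".join(f"Nbar(k={k}) = {nbar_clans(b, k):6.2f}" for k in (1, 3, 7, 30))
    print(f"b = {b}: nc = {nbar_per_clan(b):.3f}; {row}")
# nc grows slowly at small b and quickly as b -> 1, as described in the text.
```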
Notice that the average number of particles per clan only depends on the fugacity b. In Figure 2 we show for each component and each scenario the clan parameters as a function of the fugacity. Again, the points come from fits to experimental data, the solid lines are our extrapolations. The dashed grey lines show the variation of clan parameters with b at fixed k (that is, at fixed V and T) for the following values of
Figure 1. Fugacity, b = n̄/(n̄ + k), and a = kb parameter dependence on c.m. energy (GeV).

Figure 2. Clan parameters dependence on fugacity b.
k: 1 (lowest curve), 3, 7, 30 (highest curve); n̄_c being independent of k, only one dashed line is visible in the corresponding graphs. For the soft and scenario-1 semi-hard components, N̄ is shown to be a very slowly growing function of the fugacity of the system throughout the ISR region (b < 0.7), after which it starts to grow quickly; n̄_c as a function of the fugacity has a similar behaviour, going from ≈ 1.5 to ≈ 3. The decrease of the average number of clans in scenarios 2 and 3 leads again to the conclusion that this behaviour is closer to that of a quantum system than to a classical one, favouring as it does the production of larger clans and therefore of regions of higher particle density. For a discussion of other parameters, like the void scaling function, see again Ref. 8.

4
Conclusions
By defining a new thermodynamical approach to multiparticle production at parton level we have given the physical meaning of fugacity to a parameter previously used only to describe deviations from Poisson behaviour in multiplicity distributions. On this basis, we revisited our previous extrapolations to the TeV region of inelastic hadron-hadron collisions and examined the different behaviours of the two classes of events ('soft' and 'semi-hard'). In the first class, i.e., soft events, the ancestors of the clans are produced earlier, at higher virtuality and when the temperature is higher. Ancestors in these conditions generate little (clans are small). This results in a behaviour closer to that of a classical system (ideal gas). In the second class, i.e., semi-hard events, the ancestors are produced later in the cascade, at lower virtualities and when the temperature is lower. Ancestors in these conditions are more prolific (clans become larger). This results in a behaviour closer to that of a quantum system (stimulated emission); high density regions exist. Although we used explicitly in the illustration the NB(Pascal)MD, our result is extensible in principle to any infinitely divisible distribution which also belongs to the class of power series distributions. The results discussed in this paper bring into the spotlight the concept of clans, which up to now was only applied in a statistical framework. At this point, it becomes important to investigate other physical properties of clans, in order to answer questions like the following: can clans be considered observable objects? If so, what are their quantum numbers? Do they start to interact among themselves in the TeV region? How would this possibility modify the ideal gas equation of state? Work in this direction has already begun [9], by studying clan masses, with preliminary indications that the answer to the first question (observable clans) is positive.
This can be extremely relevant for the new heavy ion machines where the standard examination of events with tens of thousands of particles may be very problematic.
References
1. A. Giovannini and R. Ugoccioni, Phys. Rev. D 59, 094020 (1999).
2. A. Giovannini and R. Ugoccioni, Phys. Rev. D 66, 034001 (2002).
3. D. Acosta et al. (CDF Collaboration), Phys. Rev. D 65, 072005 (2002).
4. A. Giovannini and L. Van Hove, Z. Phys. C 30, 391 (1986).
5. A. Giovannini and L. Van Hove, Acta Phys. Pol. B 19, 495 (1988).
6. L. Van Hove and A. Giovannini, Acta Phys. Pol. B 19, 917 (1988).
7. D.J. Scalapino and R.L. Sugar, Phys. Rev. D 8, 2284 (1973).
8. A. Giovannini, S. Lupia and R. Ugoccioni, Phys. Rev. D 65, 094028 (2002).
9. A. Giovannini and R. Ugoccioni, preprint DFTT 25/2002 (hep-ph/0209040), Torino University.
SCALE FACTORS FROM MULTIPLE HEAVY QUARK PRODUCTION AT THE LHC

A. DEL FABBRO

Dipartimento di Fisica Teorica dell'Università di Trieste and INFN, Sezione di Trieste, Strada Costiera 11, Miramare-Grignano, I-34014 Trieste, Italy. E-mail:
[email protected]

The scale factors are geometrical dimensional coefficients that characterize multiparton collision processes and are related to the spatial distribution of partons in the proton. We point out that one should be able to measure these factors in multiple heavy quark production at the LHC.
1
Introduction
As a consequence of the high partonic luminosity at the LHC we expect a large number of events with two or more heavy quark pairs produced contemporarily in the same proton-proton collision. The high efficiency in detecting heavy quarks and the capability of the ALICE detector to measure at very low transverse momenta, where the effects of multiparton interactions are more pronounced, suggest studying multiple heavy quark production as an example of multiparton processes with the ALICE detector, and measuring the scale factors, which are related to the spatial density of partons in the proton [1]. The simplest case of a multiparton scattering process is the double parton collision, where two disconnected parton scatterings take place in the same hadron-hadron event. In the hypothesis of no correlations between the momentum fractions and of factorization of the transverse degrees of freedom in the two-body parton distribution, the cross section to produce four heavy quarks in a double parton scattering is proportional to the product of two single parton scattering cross sections. Hence in order to get the double parton scattering cross section we need to compute the heavy quark QQ̄ cross section, which is usually calculated in the QCD collinear approximation at next-to-leading order [2]. The cross section is affected by several large theoretical uncertainties, and recent experimental data on beauty production at the Tevatron have shown that next-to-leading-order perturbation theory fails, in
fact the NLO pQCD calculations underestimate the cross section by a factor ∼ 2–3, so that large K-factors are needed to fit the data. The bottom cross section at high energy can also be calculated in the k_t-factorization approach [4, 5], where the interaction is factorized into unintegrated structure functions and off-shell matrix elements. The k_t-factorization gives results consistent with HERA and Tevatron data and, therefore, one hopes that it provides us with the K-factors for heavy quark production at the LHC. We observe that some distributions, obtained either using the k_t-factorization approach or calculating the cross section at NLO pQCD, have the same shape and differ only by a normalization factor from those obtained with the parton model lowest order calculation. Therefore the effect of higher order corrections, in these simplest cases, is simply to rescale the parton model results by a K-factor: K = σ(QQ̄)/σ_LO(QQ̄). To obtain predictions on processes where two pairs or three pairs of heavy quarks are produced, we limit our considerations to these cases. With regard to the 2 → 4 processes, the higher order corrections are not known and we simply assume that the K(2 → 4) factors are equal to those of the K(2 → 2) processes.

2
k_t-factorization and K-factors
By comparing the total cross section at the LHC calculated in the k_t-factorization approach with the lowest order calculation in pQCD we obtain the K-factor. To compute the inclusive QQ̄ production cross section in the k_t-factorization we use two different prescriptions for constructing the unintegrated distributions from the usual integrated parton densities. The first prescription is based on the conventional DGLAP evolution equations [6] and the second one is obtained from the leading order BFKL equation [7]. To generate the unintegrated structure functions we use the parton distribution set GRV94 with factorization scale μ₀² = 1 GeV². The production cross section is then expressed as [4, 5]

σ_{pp→QQ̄} = ∫ d²q_{t1} d²q_{t2} dx₁ dx₂ F(x₁, q_{t1}, μ) F(x₂, q_{t2}, μ) σ̂(x₁, q_{t1}; x₂, q_{t2}; μ),   (1)

where F(x, q_t, μ) is the unintegrated structure function, representing the probability to find a parton with momentum fraction x and transverse momentum q_t at the factorization scale μ, while σ̂ is the off-shell partonic cross section for the subprocess g*g* → QQ̄. In the bottom production we get the value K ≈ 5.5, using the set MRS99, with factorization and renormalization scale equal to the transverse mass of the heavy quark. In Fig. 1 we plot our results and the D0 experimental data for the cross section of bb̄ production at the Tevatron as a function of p_t^min of the b-quark. We also present the predictions for the ALICE detector.
3
Double and triple parton scatterings at LHC
The leading order QCD process pp → QQ̄QQ̄ corresponds to a single parton scattering at fourth order in the coupling constant, and the competing mechanism is the double parton scattering. The double parton cross section is

σ_D(QQ̄QQ̄) = (1/2) Σ_{ij,kl} σ_{ij}(QQ̄) Θ_{ij,kl} σ_{kl}(QQ̄),   (2)

where the inclusive cross section σ_{ij}(QQ̄) refers to the partonic process ij → QQ̄. The geometrical factors Θ_{ij,kl} have the dimension of an inverse cross section; they result from the overlap of two-body parton distributions in transverse space and may depend on the kind of partons involved in the reaction. Heavy quark production at the LHC comes almost entirely from gluon fusion; hence we can use the simplest expression

σ_D(QQ̄QQ̄) = σ²(QQ̄) / (2 σ_eff),   (3)

where for σ_eff we have taken the value of 14.5 mb reported by CDF. Given the large cross section of charm production at the LHC, which may be of the order of, or larger than, the effective cross section, we expect a considerable production of cc̄ pairs in double parton collisions and even in triple parton collisions. To work out the triple parton scattering cross section we make use of the expression

σ_T(QQ̄QQ̄QQ̄) = σ³(QQ̄) / (3! τ σ_eff²),   (4)

which is obtained within the same simplifying hypotheses as the factorization of the double parton cross section. The parameter τ is an unknown geometrical, dimensionless quantity of the order of unity [9]. In the calculations we fix τ = 1. In the production of three cc̄ pairs the competing mechanism to triple parton scattering, at low p_t, is not provided by the single parton process gg → cc̄cc̄cc̄ but is given by a double parton mechanism with cross section

σ_D(cc̄cc̄cc̄) = σ(cc̄cc̄) σ(cc̄) / σ_eff.   (5)
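The scaling structure of these cross-section formulas can be illustrated with a small numerical sketch; the single-scattering value σ(cc̄) used below is purely illustrative, not a prediction of the text:

```python
from math import factorial

SIGMA_EFF_MB = 14.5   # effective cross section, the CDF value quoted in the text

def sigma_double(sigma_mb, sigma_eff_mb=SIGMA_EFF_MB):
    """Double parton scattering: sigma_D = sigma^2 / (2 sigma_eff)."""
    return sigma_mb ** 2 / (2.0 * sigma_eff_mb)

def sigma_triple(sigma_mb, tau=1.0, sigma_eff_mb=SIGMA_EFF_MB):
    """Triple parton scattering: sigma_T = sigma^3 / (3! tau sigma_eff^2)."""
    return sigma_mb ** 3 / (factorial(3) * tau * sigma_eff_mb ** 2)

sigma_cc = 5.0   # hypothetical single cc-bar cross section in mb, for illustration only
print(f"sigma_D = {sigma_double(sigma_cc):.3f} mb, sigma_T = {sigma_triple(sigma_cc):.3f} mb")
# sigma_D ~ sigma^2 and sigma_T ~ sigma^3, so a K-factor on the single
# scattering rescales them by K^2 and K^3 respectively.
```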
We present some results for the production of 4b, 4c and 6c quarks at the LHC. Since σ_D and σ_T are proportional to σ² and σ³, the effect of higher order corrections is to rescale the parton model result by a factor K² or K³. In Fig. 2 we plot the total cross section for ALICE at the LHC center-of-mass energy of 14 TeV as a function of p_t^min between 0 and 10 GeV. The double parton cross section decreases with p_t^min more rapidly than the single parton cross section, and at 8–10 GeV they are of the same order. We also show the results for √s = 5.5 TeV, typical of nucleon-nucleon interactions in nucleus-nucleus collisions. In Fig. 3 we plot the cross sections to produce 4c and 6c quarks for the ALICE detector at the center-of-mass energy of 14 TeV as a function of p_t^min.
The large values of the double parton cross section, and even of the triple parton cross section in multi-charm production, indicate that it would be important to isolate and measure these processes at ALICE. These multiparton processes provide us with a unique tool to gain new information on the proton structure which cannot be obtained with single parton interaction processes.
Acknowledgment This work was partially supported by the Italian Ministry of University and of Scientific and Technological Research (MIUR) through the Grant COFIN2001.
References
1. A. Del Fabbro and D. Treleani, arXiv:hep-ph/0207311.
2. P. Nason, S. Dawson and R. K. Ellis, Nucl. Phys. B 303 (1988) 607; Nucl. Phys. B 327 (1989) 49.
3. B. Abbott et al. [D0 Collaboration], Phys. Lett. B 487 (2000) 264 [arXiv:hep-ex/9905024].
4. S. Catani, M. Ciafaloni and F. Hautmann, Nucl. Phys. B 366 (1991) 135.
5. J. C. Collins and R. K. Ellis, Nucl. Phys. B 360 (1991) 3.
6. M. A. Kimber, A. D. Martin and M. G. Ryskin, Eur. Phys. J. C 12 (2000) 655 [arXiv:hep-ph/9911379].
7. J. Blumlein, Report No. DESY 95-121, hep-ph/9506403.
8. A. Del Fabbro and D. Treleani, Phys. Rev. D 63, 057901 (2001) [arXiv:hep-ph/0005273].
9. G. Calucci and D. Treleani, Phys. Rev. D 60, 054023 (1999) [arXiv:hep-ph/9902479].
Figure 1. pp → bb̄ cross section as a function of p_t^min at √s = 1.8 TeV, with the b-quark within |y_b| < 1, experimental data from Ref. 3, and at √s = 14 TeV with the b-quark within |y| < 0.9. Dotted (dashed) lines correspond to the BFKL (DGLAP) prescriptions. Continuous lines represent the parton model result rescaled by the K-factor.

Figure 2. bb̄bb̄ cross section at √s = 14 TeV and at √s = 5.5 TeV as a function of p_t^min with all four b-quarks in |y| < 0.9. Continuous (dashed) lines correspond to double (single) parton scattering.

Figure 3. cc̄cc̄ and cc̄cc̄cc̄ cross sections with the equal-sign c-quarks in |y| < 0.9 at √s = 14 TeV. In the 2 → 4 process the continuous (dashed) line corresponds to double (single) parton scattering, while in the 2 → 6 process the continuous (dashed) line corresponds to triple, Eq. (4) (double, Eq. (5)), parton scattering.
ON TRUNCATED MULTIPLICITY DISTRIBUTIONS
I.M. DREMIN
Lebedev Physical Institute, Leninsky pr. 53, Moscow 119991, Russia
E-mail: dremin@lpi.ru

In experiment, the multiplicity distributions of inelastic processes are truncated due to finite energy, insufficient statistics or a special choice of events. It is shown that the moments of such truncated multiplicity distributions possess some typical features. In particular, the oscillations of cumulant moments at high ranks and their negative values at the second rank can be considered the ones most indicative of the specifics of these distributions. They allow one to distinguish between distributions of different types.
1. Introduction

Studies of multiplicity distributions of high-energy inelastic processes have produced many important and sometimes unexpected results (for reviews see, e.g., [1, 2, 3]). The completely new region of very high multiplicities will be opened with the advent of the RHIC, LHC and TESLA accelerators. Theoretical approaches to multiplicity distributions in high-energy processes usually have to deal with analytic expressions at (pre)asymptotic energies which only approximately account for the energy-momentum conservation laws, or with purely phenomenological expressions of probability theory. The multiplicity range extends in this case from zero to infinity. In experiment, however, one has to consider distributions truncated at some multiplicity values in one or another way. These cuts can appear due to energy limitations, low statistics of experimental data, or because of special conditions of an experiment. Energy limitations always impose an upper cutoff on the tail of the multiplicity distribution. Low statistics can truncate these distributions from both ends if it is insufficient to detect rare events with very low and/or very high multiplicity. Similar truncations appear in some specially designed experiments [4], when events within some definite range of multiplicities have been chosen.
It would be desirable even in these cases to compare the distributions within those limited regions with underlying theoretical distributions. Straightforward fits are sometimes not accurate enough to distinguish between various possibilities because the probability values vary by many orders of magnitude. A more rigorous approach is to compare different moments of the truncated distributions. The simpleminded χ²-fits are less sensitive and provide less information. The cumulant moments K_q seem to be most sensitive to slight variations (and, especially, cuts and shoulders) of the distributions. They often reveal such tiny details of the distributions which are otherwise hard to notice. In particular, QCD predicts quite peculiar behaviour of cumulant moments as functions of their rank q. According to solutions of the equations for the generating functions of the multiplicity distributions in the asymptotic energy region, the ratio of cumulant moments K_q to factorial moments F_q, usually denoted H_q = K_q/F_q, behaves as q⁻², and at preasymptotic energy values reveals a minimum at q = 5 with subsequent oscillations at higher ranks [6, 7]. Such a behaviour has been found in experiment at presently available energies [8, 9]. The solutions of the corresponding equations for fixed coupling QCD also indicate similar oscillations [10]. Asymptotically, the oscillations should disappear and H_q becomes a smoothly decreasing and positive definite function of q, as mentioned above. None of the distributions of probability theory possesses these features. Among them, the negative binomial distribution (NBD) happens to be one of the most successful in the description of global features of multiplicity distributions [11]. Let us recall that the negative binomial distribution is defined as

P_n = [Γ(n+k) / (n! Γ(k))] a^n / (1+a)^{n+k},   (1)

where a = ⟨n⟩/k, ⟨n⟩ is the mean multiplicity, k is an adjustable parameter, and the normalization condition reads

Σ_{n=0}^{∞} P_n = 1.   (2)

Its generating function is

G(z) = Σ_{n=0}^{∞} P_n (1+z)^n = (1 − ⟨n⟩ z / k)^{−k}.   (3)
The integer rank factorial and cumulant moments, and their ratio, are

F_q = Γ(k+q) / (Γ(k) k^q),   (4)

K_q = Γ(q) / k^{q−1},   (5)

H_q = K_q / F_q = Γ(q) Γ(k+1) / Γ(k+q).   (6)

The H_q-moments at parameter k = 2 behave as 2/(q(q+1)), i.e. with a power-law decrease reminding at large q that of QCD, however with a different weight factor. Therefore, at first sight, it could be considered a reasonably good analytic model for the asymptotic behaviour of multiplicity distributions. It has been proclaimed [12, 13, 14, 15] that the superposition of two NBDs with different parameters and their cutoff at high multiplicities can give rise to oscillations of H_q and to better fits of experimental data at preasymptotic energies. Nevertheless, the fits have not been perfect enough. Let us compare first the asymptotic QCD predictions with NBD fits at different values of the adjustable parameter k. The values of D_q = q² H_q as functions of q for asymptotic QCD are identically equal to 1. For the NBD at k = 2, they exceed 1, tending to 2 at large q. At larger values of k, all D_q are less than 1, except D₂ = 1 at k = 3. Surely, the identity D₁ = 1 is valid for any k due to the normalization condition. To get the asymptotic QCD results with all D_q ≡ 1 from expressions similar to the NBD, one would need to modify the NBD in such a way that the parameter k becomes a function of n. Thus some effective values of k should be used to get the QCD moments D_q = 1 at various q. They are obtained as the solutions of the equation

q² Γ(q) Γ(k+1) = Γ(k+q),  i.e.,  ∏_{n=1}^{q−1} (k+n) = q² (q−1)!,   (7)
which follows from Eq. (6) for H_q = q⁻². They show that k somewhat decreases from 3 to some values exceeding 2 with the increase of q. This reflects the well known fact that the tails of distributions are underestimated in NBD-fits [14] compared to experimental data in the preasymptotic region. Also, the amplitude of oscillations and their periodicity are not well reproduced by a single truncated NBD [14], and one has to use the sum of at least two NBDs to get a better fit. However, rather large values of k were obtained in these fits. It implies, in fact, that the fit is done with the help
of two distributions very close to Poissonian shapes, because the Poisson distribution is obtained from the NBD in the limit k → ∞. Therefore, the tails are suppressed very strongly. Here, we will focus our efforts on qualitative changes of moments when the NBD is truncated, especially as applied to studies of very high multiplicities. We omit all figures, which can be found in hep-ph/0207068 [16]. In QCD considerations based on the equations for the generating functions for quark and gluon jets, the preasymptotic (next-to-leading order, etc.) corrections give rise to oscillations of H_q. Even though they are of higher order in the coupling strength, they appear mainly due to the account of energy conservation in the vertices of Feynman diagrams, and not due to considering the higher order diagrams which are summed in the modified perturbation theory series (see [3]). In the phenomenological approach, this would effectively correspond to a cutoff of the multiplicity distribution at some large multiplicity. Therefore, we intend here to study how strongly such a cutoff influences the NBD-moments, whether it produces oscillations of the cumulant moments and how strong they are, and, as a more general case, to consider the moments of the NBD truncated both at low and at high multiplicities. This would help answer the question of whether the shape of the distribution in a limited region can be accurately restored from the behaviour of its moments. It could become especially helpful if only events with very high multiplicities are considered in a given experiment, because of the above mentioned underestimation of tails in the NBD-fits.
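The NBD moment ratios quoted in this section can be evaluated from the closed form H_q = Γ(q)Γ(k+1)/Γ(k+q), which reduces to 2/(q(q+1)) at k = 2; a minimal check (our function names):

```python
from math import exp, lgamma

def H_q(q, k):
    """NBD ratio H_q = K_q / F_q = Gamma(q) Gamma(k+1) / Gamma(k+q)."""
    return exp(lgamma(q) + lgamma(k + 1) - lgamma(k + q))

for q in range(2, 7):
    hq = H_q(q, 2.0)
    print(f"q={q}: H_q = {hq:.6f}, 2/(q(q+1)) = {2 / (q * (q + 1)):.6f}, "
          f"D_q = q^2 H_q = {q * q * hq:.4f}")
# at k = 2, D_q = q^2 H_q exceeds 1 and tends to 2 at large q,
# whereas asymptotic QCD would give D_q identically equal to 1
```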
2. Truncated NBD and its moments
In real situations, the multiplicity distribution is sometimes measured in some interval of multiplicities, and one can try to fit by the NBD the data available only in the restricted multiplicity range. Therefore, we shall consider the negative binomial distribution within the interval of multiplicities m ≤ n ≤ N, called P_n^{(t)} and normalized to 1, so that

Σ_{n=m}^{N} P_n^{(t)} = 1.   (8)
Moreover, due to the above reasoning and to simplify formulas, we consider here only the case k = 2. The generalization to arbitrary values of k is straightforward. The generating function of the truncated distribution G_t(z) can be easily found as

G_t(z) = Σ_{n=m}^{N} P_n^{(t)} (1+z)^n = G(z) (1+z)^m f(z̃)/f(0),   (9)

where

f(z̃) = 1 + m(1 − z̃) − [1 + (N+1)(1 − z̃)] z̃^{N−m+1},   (10)

z̃ = b(1+z),   b = a/(1+a).   (11)

Correspondingly,

f(0) = f(z̃)|_{z=0} = f(z̃ = b).   (12)
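The reconstructed Eqs. (9)–(12) can be cross-checked numerically against a direct sum over the truncated k = 2 distribution (function names are ours):

```python
def G_t_formula(z, nbar, m, N):
    """Truncated k = 2 NBD generating function via Eqs. (9)-(12)."""
    b = nbar / (nbar + 2.0)
    zt = b * (1.0 + z)                                # z-tilde of Eq. (11)
    def f(x):                                         # Eq. (10)
        return 1 + m * (1 - x) - (1 + (N + 1) * (1 - x)) * x ** (N - m + 1)
    G = ((1.0 - b) / (1.0 - zt)) ** 2                 # untruncated NBD, k = 2
    return G * (1.0 + z) ** m * f(zt) / f(b)

def G_t_direct(z, nbar, m, N):
    """The same object computed as a direct sum over P_n ~ (n+1) b^n."""
    b = nbar / (nbar + 2.0)
    ns = range(m, N + 1)
    w = [(n + 1) * b ** n for n in ns]
    return sum(wn * (1.0 + z) ** n for n, wn in zip(ns, w)) / sum(w)

print(G_t_formula(0.1, 10.0, 3, 60), G_t_direct(0.1, 10.0, 3, 60))
# the two values coincide, supporting the reconstruction
```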
Using the above formulas for the factorial moments, one gets a formula, Eq. (13), for the moments of the truncated distribution, expressed in terms of the NBD-moments (4),
where ⟨n⟩_t is the mean multiplicity of the truncated distribution. It is related to the mean multiplicity ⟨n⟩ of the original distribution as

⟨n⟩ − ⟨n⟩_t = (1−b) [(N+1)(N+2) b^{N−m+1} − m(m+1)] / {1 + m(1−b) + b^{N−m+1} [(N+1)b − N − 2]}.   (14)
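Equation (14) can be verified against a direct numerical evaluation of the truncated mean (a sketch for k = 2, where P_n ∝ (n+1) b^n; function names are ours):

```python
def mean_shift_formula(nbar, m, N):
    """<n> - <n>_t from Eq. (14), for the k = 2 NBD (b = <n>/(<n> + 2))."""
    b = nbar / (nbar + 2.0)
    num = (1 - b) * ((N + 1) * (N + 2) * b ** (N - m + 1) - m * (m + 1))
    den = 1 + m * (1 - b) + b ** (N - m + 1) * ((N + 1) * b - N - 2)
    return num / den

def mean_shift_direct(nbar, m, N):
    """Same quantity from the truncated distribution itself, P_n ~ (n+1) b^n."""
    b = nbar / (nbar + 2.0)
    ns = range(m, N + 1)
    w = [(n + 1) * b ** n for n in ns]
    return nbar - sum(n * wn for n, wn in zip(ns, w)) / sum(w)

print(mean_shift_formula(10.0, 5, 40), mean_shift_direct(10.0, 5, 40))
# the two numbers agree, confirming the algebra of Eq. (14)
```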
Inserting formula (4) in (13), one gets the explicit expression (15) for the truncated moments; for computing, it is more convenient to use formula (15) in an equivalent product form, Eq. (16), given explicitly in Ref. 16.
These expressions can be used also for distributions truncated on one side only, by setting m = 0 or N = ∞. The cumulant moments can be calculated, once the factorial moments are known from Eq. (13), according to the identities

K_q = F_q − Σ_{i=1}^{q−1} C(q−1, i) K_{q−i} F_i.   (17)
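The identity relating cumulant to factorial moments, K_q = F_q − Σ_{i=1}^{q−1} C(q−1, i) K_{q−i} F_i, can be implemented directly for the truncated k = 2 NBD; the sketch below also reproduces the sign change of H₂ for a faraway window, discussed in the next section (illustrative parameter values are ours):

```python
import math
from math import comb

def truncated_nbd_Hq(nbar, m, N, qmax=4):
    """H_q = K_q / F_q for the k = 2 NBD truncated to m <= n <= N."""
    b = nbar / (nbar + 2.0)
    ns = list(range(m, N + 1))
    w = [(n + 1) * b ** n for n in ns]               # unnormalized k = 2 NBD weights
    Z = sum(w)
    mean = sum(n * wn for n, wn in zip(ns, w)) / Z
    F = [1.0, 1.0]                                   # normalized F_0, F_1
    for q in range(2, qmax + 1):
        fq = sum(wn * math.prod(n - i for i in range(q)) for n, wn in zip(ns, w))
        F.append(fq / Z / mean ** q)
    K = [1.0, 1.0]                                   # K_1 = 1 (K_0 unused)
    for q in range(2, qmax + 1):
        K.append(F[q] - sum(comb(q - 1, i) * K[q - i] * F[i] for i in range(1, q)))
    return [K[q] / F[q] for q in range(2, qmax + 1)]

print("wide range:", truncated_nbd_Hq(10.0, 0, 300))   # H_2 > 0 (close to 1/3)
print("far window:", truncated_nbd_Hq(10.0, 30, 50))   # H_2 < 0
```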
This formula is a simple relation between the derivatives of a function and of its logarithm (see Eqs. (4) and (5)). Therefore it is valid for both the original and the truncated distributions. For the Poisson distribution the ratios H_q are identically equal to zero; for the NBD they are given by Eq. (6), while truncation induces new features. At the beginning, we consider the abrupt cutoff of the very high multiplicity tail only, i.e., the case m = 0 and N > ⟨n⟩. This mimics the energy-momentum conservation limits. Such a cutoff induces oscillations of H_q. The farther the cutoff is from the mean multiplicity, the weaker the oscillations. This quite expected result has been known for long [12, 13]. The closer the cutoff is to ⟨n⟩, the stronger the low-rank moments are damped. For a faraway cutoff, the period of oscillations increases. This increase is larger for lower mean multiplicity. At N/⟨n⟩ = const, one observes approximate scaling of H_q.

3. Very high multiplicities
With the advent of RHIC, LHC and TESLA we are approaching the situation when average multiplicities become very high and the tails of multiplicity distributions reach extremely large values. The events with extremely high multiplicities at the tail of the distribution can be of special interest. The tails of particular channels usually die out very fast, and a single channel dominates at the very tail of the distribution. Mostly soft particles are created there. Thus one hopes to get direct access to very low-x physics. A QCD interpretation in terms of the BFKL equation (or its
generalization) can be attempted. Also, the hadronic densities are rather high in such events, and the thermodynamical approach can be applied [17]. However, these events are rather rare, and the experimental statistics is quite poor until now. The Poisson distribution has a tail which decreases mainly like an inverse factorial. According to the NBD (1), the tail is exponentially damped with a power-increasing preexponential factor. At the same time, QCD predicts an even somewhat slower decrease. This is important for future experiments in the very high multiplicity region. To study these events within the truncated NBD with k = 2 according to Eqs. (15), (16), let us choose a multiplicity interval of constant length and place it at various distances from the mean multiplicity. The most dramatic feature is the negative value of H₂ and the subsequent change of sign of H_q at each q in the case when the lower cutoff m is noticeably larger than ⟨n⟩ (m/⟨n⟩ ≥ 2). This reminds of the behaviour of H_q for the fixed multiplicity distribution and shows that the NBD-tail decreases quite fast, so that the multiplicity m dominates in the moments of these truncated distributions. The same features have been demonstrated for different average multiplicities and different positions of the fixed window. Again, the sign-changing characteristics remind those for the fixed multiplicity distribution. Another possibility to study the tail of the distribution with the help of the H_q-ratios is their variation with the varying length of the chosen tail. At the same mean multiplicity, we calculate moments for intervals starting at a fixed multiplicity and ending at different values. The values of H_q at rather low ranks q = 2, 3, 4, 5 are very sensitive to the interval length and vary by an order of magnitude.
4. Conclusions
In connection with some planned experiments, our main concern here was to learn whether the H_q-ratios can be used to judge the behaviour of the tail of the multiplicity distribution. Using the NBD as an example, we have shown that the H_q behave in a definite way depending on the size of the chosen multiplicity interval and on its location. Comparing the corresponding experimental results with the NBD-predictions, one would be able to show whether the experimental distribution decreases more slowly (as predicted by QCD) or faster than the NBD. In particular, the negative values of H₂ noted above are of special interest because they show directly how strong the decrease of the tail is. NBDs
at different k values would predict different variations of H₂, with more negative H₂ for larger k. Also, the nature of the oscillations of the H_q-moments at larger values of q reveals how steeply the tail drops. Let us stress that the choice of high multiplicities for such a conclusion could be better than a simpleminded fit of the whole distribution. As one hopes, in this case there are fewer transitions between different channels of the reaction (e.g., from jets with light quarks to heavy quarks), and the underlying low-x dynamics can be revealed.
Acknowledgements This work is supported by the RFBR grants N 00-02-16101 and 02-02-16779.
References
1. I.M. Dremin, Phys.-Uspekhi 37, 715 (1994).
2. E.A. DeWolf, I.M. Dremin and W. Kittel, Phys. Rep. 270, 1 (1996).
3. I.M. Dremin and J.W. Gary, Phys. Rep. 349, 301 (2001).
4. V.A. Nikitin, Talk at the III International Workshop on Very High Multiplicities, Dubna, June 2002.
5. I.M. Dremin, Phys. Lett. B 313, 209 (1993).
6. I.M. Dremin and V.A. Nechitailo, Mod. Phys. Lett. A 9, 1471 (1994); JETP Lett. 58, 881 (1993).
7. S. Lupia, Phys. Lett. B 439, 150 (1998).
8. I.M. Dremin, V. Arena, G. Boca et al., Phys. Lett. B 336, 119 (1994).
9. SLD Collaboration, K. Abe et al., Phys. Lett. B 371, 149 (1996).
10. I.M. Dremin and R.C. Hwa, Phys. Rev. D 49, 5805 (1994); Phys. Lett. B 324, 477 (1994).
11. A. Giovannini, Nuovo Cim. A 15, 543 (1973).
12. R. Ugoccioni, A. Giovannini and S. Lupia, in M.M. Block and A.R. White (Eds.), Proc. XXIII Int. Symposium on Multiparticle Dynamics, Aspen, USA, 1993, WSPC, Singapore, 1994, p. 297.
13. B.B. Levtchenko, in B.B. Levtchenko (Ed.), Proc. VIII Workshop on High Energy Physics, Zvenigorod, Russia, 1993, MSU, 1994, p. 68.
14. A. Giovannini, S. Lupia and R. Ugoccioni, Phys. Lett. B 388, 639 (1996); 342, 387 (1995).
15. R. Ugoccioni and A. Giovannini, Nucl. Phys. Proc. Suppl. 71, 201 (1999).
16. I.M. Dremin and V.A. Nechitailo, hep-ph/0207068.
17. J. Manjavidze and A. Sisakian, Phys. Rep. 346, 1 (2001).
FORWARD-BACKWARD MULTIPLICITY CORRELATIONS IN e⁺e⁻ ANNIHILATION AND pp COLLISIONS AND THE WEIGHTED SUPERPOSITION MECHANISM

A. GIOVANNINI AND R. UGOCCIONI

Dipartimento di Fisica Teorica and I.N.F.N. - sezione di Torino, via P. Giuria 1, I-10125 Torino, Italy

Forward-backward multiplicity correlations in symmetric collisions are calculated independently of the detailed form of the corresponding multiplicity distribution. Applications of these calculations to e⁺e⁻ annihilation and pp collisions confirm the existence of the weighted superposition mechanism of different classes of substructures or components. When applied to pp collisions in particular, the clan concept and its particle leakage from one hemisphere to the opposite one become of fundamental importance. The increase with c.m. energy of the correlation strength, as well as the behaviour of the average number of backward particles vs. the number of forward particles, are correctly reproduced.
1
Essentials on forward-backward multiplicity correlation in symmetric collisions
The average number of charged particles generated in different events in the backward hemisphere (B), n̄_B, is a function of the number of particles occurring in the forward hemisphere (F), n_F, controlled by the correlation strength b_FB:

n̄_B(n_F) = a + b_FB n_F.   (1)
In hadron-hadron collisions the correlation strength parameter is rather large with respect to e⁺e⁻ annihilation and grows with c.m. energy in the total sample of events, as shown in Table 1. In addition, in e⁺e⁻ annihilation at LEP energies it has been found [5] that b_FB ≈ 0 in the separate two- and three-jet samples of events. No information is available on the correlation strength in the separate samples of soft (no minijets) and semi-hard (with minijets) events in hadron-hadron collisions.

2
The problem
We want to calculate the parameter

b_FB = [⟨n_F n_B⟩ − ⟨n_F⟩⟨n_B⟩] / [⟨n_F²⟩ − ⟨n_F⟩²],  with n_B + n_F = n,   (2)

for the multiplicity distribution
Table 1. Experimental results on forward-backward correlation strength.

  reaction   experiment   b_FB                         c.m. energy
  pp̄        UA5          0.43 ± 0.01 (1 < |η| < 4)    546 GeV
  pp̄        UA5          0.58 ± 0.01 (0 < |η| < 4)    546 GeV
  pp         ISR          0.155 ± 0.013                63 GeV
  e⁺e⁻       OPAL         0.103 ± 0.007                LEP
  e⁺e⁻       TASSO        0.080 ± 0.016                22 GeV
where n_F and n_B are random variables and P_total(n_F, n_B) is the joint distribution for the weighted superposition of different classes of events, i.e.,

P_total(n_F, n_B) = α P₁(n_F, n_B) + (1 − α) P₂(n_F, n_B),   (3)

α being the weight of class-1 events with respect to the total.

3
The general solution
b_FB = [α b₁ D²_{n,1}(1+b₂) + (1−α) b₂ D²_{n,2}(1+b₁) + ½ α(1−α)(n̄₂ − n̄₁)²(1+b₁)(1+b₂)] /
       [α D²_{n,1}(1+b₂) + (1−α) D²_{n,2}(1+b₁) + ½ α(1−α)(n̄₂ − n̄₁)²(1+b₁)(1+b₂)],   (4)
where b_i are the correlation strengths of class 1 (i = 1) and class 2 (i = 2) events, D_{n,i} are the multiplicity distribution dispersions, and n̄_i the corresponding average charged multiplicities. In case b₁ and b₂ are zero (as in the two separate samples of events in e⁺e⁻ annihilation) one finds

b_FB = ½ α(1−α)(n̄₂ − n̄₁)² / [α D²_{n,1} + (1−α) D²_{n,2} + ½ α(1−α)(n̄₂ − n̄₁)²].   (5)
It should be pointed out that the above formulas are independent of any specific form of the multiplicity distributions P_1 and P_2: they depend only on the weight α and on the average charged multiplicities and dispersions of the two classes of events.
4. Applications of Eqs. (4) and (5)

4.1. An intriguing application of Eq. (5) to e+e− annihilation
The OPAL collaboration has found [5] that forward-backward multiplicity correlations are non-existent in the separate two- and three-jet samples of events, i.e. b_1 and b_2 in the general formula (4) are zero, while the correlation strength of the total sample of 2-jet and 3-jet events is equal to 0.103 ± 0.007. Using a fit to OPAL data with conditions similar to those of the jet finder algorithm for the separate samples of events, we can determine all parameters in formula (5) and test its prediction against the experimental finding. It turns out that the values of the parameters needed in (5) are α = 0.463, n̄_1 = 18.4, n̄_2 = 24.0, D²_1 = 25.6, D²_2 = 44.6, and the predicted value of b_FB is 0.101, in extraordinary agreement with the experimental data!
4.2. A suggestive application of Eq. (4) to pp̄ collisions
The application of (5) to pp̄ collisions leads to unsatisfactory results but opens a new perspective: forward-backward multiplicity correlations cannot be neglected in the separate components. Accordingly, Equation (4) and not (5) should be used. Repeating the approach used in e+e− annihilation for calculating b_FB (i.e., assuming that in the separate samples of events FB multiplicity correlations are absent, b_1 = b_2 = 0) in the case of pp̄ collisions at 546 GeV c.m. energy, and using Fuglesang's fit [9] to soft and semihard events (accordingly α = 0.75, n̄_1 = 24.0, n̄_2 = 47.6, D²_{n,1} = 106, D²_{n,2} = 209), one finds b_FB = 0.28, to be compared with the experimental value b_FB = 0.58. The theoretical prediction in this case is too small! It is clear that our working hypothesis was not correct in this case. In conclusion, forward-backward multiplicity correlations are needed in each class of events, i.e., b_1 and b_2 should be different from zero, and after their determination the general formula (4) and not formula (5) should be used! Results in 4.1 and 4.2 are a striking test of the existence of the weighted superposition effect, only a guess up to now.
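Both applications can be reproduced directly from Eq. (5) (a numerical sketch; the parameter values are those quoted in Secs. 4.1 and 4.2):

```python
def b_fb_uncorrelated(alpha, n1, n2, d1_sq, d2_sq):
    # Eq. (5): superposition of two event classes with b1 = b2 = 0
    cross = 0.5 * alpha * (1 - alpha) * (n2 - n1) ** 2
    return cross / (alpha * d1_sq + (1 - alpha) * d2_sq + cross)

# e+e- (OPAL fit parameters, Sec. 4.1): predicted b_FB close to 0.10,
# matching the measured 0.103 +- 0.007
b_ee = b_fb_uncorrelated(0.463, 18.4, 24.0, 25.6, 44.6)

# pp-bar at 546 GeV (Fuglesang's fit, Sec. 4.2): predicted b_FB close
# to 0.28, far below the measured 0.58
b_pp = b_fb_uncorrelated(0.75, 24.0, 47.6, 106.0, 209.0)
```

The same few lines thus reproduce both the success of Eq. (5) in e+e− annihilation and its failure in pp̄ collisions.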
5. A new theoretical problem
Following the above conclusions, the next problem is how to determine b_1 and b_2 when explicit data on forward-backward multiplicity correlations in the two separate samples of events are lacking and b_FB of the total sample is known from experiment.
The generality of Equation (4) should be constrained by introducing additional assumptions inspired by our phenomenological knowledge of the particle emission process in the collision under examination. Assuming for instance that (a) particles are independently produced in the collision and (b) binomially distributed in the forward and backward hemispheres, it is found that

    b_i = (D²_{n,i} − n̄_i) / (D²_{n,i} + n̄_i) ,      (6)
where D_{n,i} and n̄_i are the dispersion and the average charged multiplicity of the overall multiplicity distribution of each component, as usual with i = 1, 2. Assuming next that (c) the multiplicity distribution in each i-component is NB (Pascal) with parameters n̄_i and k_i (an assumption suggested by the success of the weighted superposition mechanism of NB (Pascal) MDs in describing the shoulder effect in charged particle multiplicity distributions and the H_q vs q oscillations, and which we would hardly like to abandon), we find

    b_i = n̄_i / (n̄_i + 2 k_i) .      (7)
Accordingly, b_i values can be calculated by using again Fuglesang's fit parameters for the two components at 546 GeV c.m. energy. After inserting these parameters in the general formula (4) we find b_FB = 0.78, a too large value with respect to the experimental one (b_FB = 0.58)! This result leads to the following question: which one of the above mentioned, apparently quite reasonable, assumptions should be modified? Our guess is that charged particle FB multiplicity correlation is not compatible with independent particle emission, but is compatible with production in clusters, i.e., clans, within the NB (Pascal) MD framework: an idea which we propose to develop and to explore in the following.
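Assumptions (a) and (b) give for each component the standard independent-emission result b_i = (D²_{n,i} − n̄_i)/(D²_{n,i} + n̄_i); inserting these values into the general formula (4) with Fuglesang's fit parameters reproduces the too-large prediction quoted above. A minimal numerical sketch (parameter values as quoted in the text):

```python
def b_component(nbar, d_sq):
    # independent emission + symmetric binomial FB split:
    # b_i = (D^2 - nbar) / (D^2 + nbar)
    return (d_sq - nbar) / (d_sq + nbar)

def b_fb_general(alpha, b1, b2, n1, n2, d1_sq, d2_sq):
    # Eq. (4) of the text
    cross = 0.5 * alpha * (1 - alpha) * (n2 - n1) ** 2 * (1 + b1) * (1 + b2)
    num = (alpha * b1 * d1_sq * (1 + b2)
           + (1 - alpha) * b2 * d2_sq * (1 + b1) + cross)
    den = alpha * d1_sq * (1 + b2) + (1 - alpha) * d2_sq * (1 + b1) + cross
    return num / den

# Fuglesang's fit parameters for pp-bar at 546 GeV (soft / semihard)
b_soft = b_component(24.0, 106.0)
b_semi = b_component(47.6, 209.0)
b_total = b_fb_general(0.75, b_soft, b_semi, 24.0, 47.6, 106.0, 209.0)
# b_total comes out near 0.78, well above the measured 0.58
```

This makes the failure of the independent-emission hypothesis quantitative and motivates the clan-based alternative developed next.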
6. Clan concept is of fundamental importance
The successive steps of our argument [10] are: i) the joint distribution P_total(n_F, n_B) is written as the convolution over the number of produced clans and over the partitions of forward and backward produced particles among clans;
ii) the forward-backward hemisphere symmetry property is used;
iii) a leakage parameter p is introduced: it controls the probability that a binomially distributed particle, generated by a clan lying in one hemisphere, leaks into the opposite hemisphere; q is the leakage parameter working in the symmetric direction, with p + q = 1 (notice that p = 1, i.e. q = 0, means no leakage; the variation domain of p is 0.5 ≤ p < 1, and when p < 0.5 the clan is classified in the wrong hemisphere). iv) the covariance γ = ⟨(μ_F − μ̄_F)(μ_B − μ̄_B)⟩ of the μ_F forward and μ_B backward particles within a clan, for forward and backward binomially distributed particles generated by clans, is also introduced. v) clans are binomially produced in the forward and backward hemispheres with the same probability, and particles within a clan are independently distributed in the two hemispheres. It follows for each i-component
    b = [ 4⟨d²_{N_F}(N)⟩(p − q)² − 4N̄γ/n̄_c² + 2N̄D²_c/n̄_c² − D²_N/N̄ − D²_c/n̄_c
          − 4⟨d²_{N_F}(N)⟩(p − q)² n̄_c/N̄ + 4γ/n̄_c ]
      / [ D²_N/N̄ + D²_c/n̄_c + 4⟨d²_{N_F}(N)⟩(p − q)² n̄_c/N̄ − 4γ/n̄_c ] ,      (10)

where N̄ is the average number of clans, n̄_c the average number of particles per clan, D_N and D_c the corresponding dispersions, and ⟨d²_{N_F}(N)⟩ the average dispersion of the forward clan partition at fixed N.
Eq. (10), assuming NB (Pascal) behavior with characteristic n̄_i and k_i parameters for each component, binomial clan distribution in the two hemispheres, and binomial distribution in the two hemispheres of the logarithmically produced particles from each clan (according to clan structure analysis), gives the corresponding expression for b_i in terms of n̄_i, k_i and the leakage parameter p_i (Eq. (11)). Accordingly, the problem is reduced to determining the leakage parameters p_i in the two classes of events! Notice that in the limit n̄_i → ∞, for decreasing k_i, b_i depends on p_i only.
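The role of the leakage parameter can be illustrated with a toy Monte Carlo of the clan picture sketched in steps i)-v). This is an illustrative sketch only: Poisson-distributed clans, logarithmically distributed particles per clan, and per-particle binomial leakage; all numerical values are invented and not taken from the fits discussed in the text.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's algorithm, adequate for small means
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def logarithmic(rng, a):
    # sample P(n) = -a**n / (n * ln(1 - a)), n >= 1, by CDF inversion
    u, n, cdf = rng.random(), 1, 0.0
    norm = -1.0 / math.log(1.0 - a)
    while True:
        cdf += norm * a**n / n
        if u <= cdf or n > 10_000:
            return n
        n += 1

def estimate_b(n_events=5_000, nbar_clans=10.0, a=0.9, p_leak=0.8, seed=1):
    """Estimate b as the regression slope of n_B on n_F in a toy clan
    model: each clan sits in one hemisphere (probability 1/2 each); each
    of its particles stays there with probability p_leak and leaks to the
    opposite hemisphere with probability q = 1 - p_leak."""
    rng = random.Random(seed)
    nF, nB = [], []
    for _ in range(n_events):
        f = b = 0
        for _ in range(poisson(rng, nbar_clans)):
            clan_forward = rng.random() < 0.5
            for _ in range(logarithmic(rng, a)):
                stays = rng.random() < p_leak
                in_forward = clan_forward if stays else not clan_forward
                if in_forward:
                    f += 1
                else:
                    b += 1
        nF.append(f)
        nB.append(b)
    mF, mB = sum(nF) / len(nF), sum(nB) / len(nB)
    cov = sum((x - mF) * (y - mB) for x, y in zip(nF, nB)) / len(nF)
    var = sum((x - mF) ** 2 for x in nF) / len(nF)
    return cov / var

b_estimate = estimate_b()
```

Because the particle multiplicity within a clan fluctuates, clans whose particles straddle the hemisphere boundary induce a positive forward-backward correlation even though the clans themselves are produced independently, which is the mechanism exploited in this section.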
7. A phenomenological argument for determining leakage parameters p_i
By assuming that the semihard component is negligible at 63 GeV c.m. energy and knowing b_FB from experiment at that energy, equation (11) allows one to determine p_soft (0.78); the relatively small variation of n̄_c,soft from 63 GeV to 900 GeV (it goes from ≈ 2 to ≈ 2.44) leads to the conclusion that the leakage parameter for the soft component can be considered nearly constant in the GeV domain, i.e., p_soft = 0.78: therefore the correlation strength for the soft component at 546 GeV c.m. energy, b_soft(546 GeV), can easily be determined. The germane equation for b_semihard(546 GeV) contains of course the unknown parameter p_semihard at the c.m. energy of 546 GeV. By inserting in equation (4) for b_FB(total) the value b_soft(546 GeV) and b_semihard(546 GeV) as given by equation (11) with unknown p_semihard parameter, p_semihard at 546 GeV can be calculated from the experimental value of b_FB(total) = 0.58. It is found p_semihard(546 GeV) = 0.77. Since n̄_c,semihard does not vary too much in the GeV region (it goes from 1.64 at 200 GeV c.m. energy to 2.63 at 900 GeV c.m. energy, a relatively small variation which will hardly affect the corresponding leakage parameter in this domain), it is not hazardous to take p_semihard ≈ constant in the same region. Under the just mentioned assumptions: a. the correlation strength c.m. energy dependence is correctly reproduced in the GeV energy range, from ISR up to the UA5 top c.m. energy, and follows the phenomenological formula b_FB = −0.019 + 0.061 ln s (see Fig. 1). b. when extrapolated to the TeV energy domain in the scenarios discussed in Ref. 7, with the same value of p_soft obtained in the GeV region (n̄_c,soft(14 TeV) being ≈ 2.98 makes this guess acceptable) and p_semihard also constant (a too strong assumption of course), a clean bending effect in b_FB vs. ln s is predicted. The bending effect is enhanced or reduced by allowing p_semihard to increase (less leakage from clans and more bending) or to decrease logarithmically with c.m. energy (more leakage from clans and less bending). An energy dependence of the leakage parameter for the semihard component is clearly expected in the TeV region in a scenario with strong KNO scaling violation, in view of the quite large average number of particles per clan with respect to that found at 900 GeV c.m. energy (it goes from 2.63 at 900 GeV up to 7.36 at 14 TeV). See again Fig. 1. c. in addition, the n̄_B(n_F) behavior at 63 GeV c.m. energy (ISR data) is quite well described in terms of the soft component (single NB) only, and at 900 GeV c.m. energy (UA5 data) in terms of the weighted superposition of soft
Figure 1. Predictions for the correlation coefficients for each component (soft and semihard) and for the total distribution in pp̄ collisions in scenario 2. Three cases are illustrated, corresponding to the three numbered branches: leakage increasing with √s (upper branch), constant leakage (middle branch) and leakage decreasing with √s (lower branch). Leakage for the soft component is assumed constant at all energies. The dotted line is a fit to experimental values.
and semihard components, i.e., of the superposition of two NB (Pascal) MDs (see Fig. 2, where the second case is shown).

8. Conclusions
The weighted superposition mechanism of two samples of events describes forward-backward multiplicity correlations in e+e− annihilation independently of the specific form of the charged particle MD in the different classes of events: only the average numbers of particles and related dispersions, in addition to the weight factor, are needed. In order to describe forward-backward multiplicity correlations in pp̄ collisions, the lack of information on FB multiplicity correlations in the separate components demands that the form of the particle multiplicity distributions of the two components be specified.
Figure 2. Results of our model for n̄_B(n_F) vs. n_F (soft and semihard components) compared to experimental data in the pseudo-rapidity interval |η| < 4 at 900 GeV.
The choice of the NB (Pascal) MD for each component (supported by its success in describing the shoulder effect and the H_q vs q oscillations) outlines the role of clan properties in this framework and allows one to determine correctly the b_FB energy dependence for the total sample of events in the GeV region. Its bending in the TeV region, within possible scenarios discussed in the literature, is predicted. The n̄_B(n_F) vs n_F trend is also nicely reproduced at 63 GeV (only the soft component is assumed to contribute) and at 900 GeV (the superposition of soft and semihard components is used), and its behavior in the TeV energy range is predicted. Last but not least, we have found that our study of FB multiplicity correlations in pp̄ collisions, when extended to the TeV energy region assuming KNO scaling violation for the semihard component, enhances the intriguing connection, already shyly anticipated in the GeV region, between particle populations within clans, particle leakage from clans in one hemisphere to the
opposite one, and the superposition effect between different components. The clan concept appears in this framework as a powerful tool which goes far beyond its simple statistical interpretation and raises the question of its real physical significance: an interesting but also compulsory question for future experimental work in pp̄ collisions, and not only there.

References

1. K. Alpgård et al. (UA5 Collaboration), Phys. Lett. B 123, 361 (1983).
2. S. Uhlig et al., Nucl. Phys. B132, 15 (1978).
3. V.V. Aivazyan et al. (NA22 Collaboration), Z. Phys. C 42, 533 (1989).
4. T. Alexopoulos et al. (E735 Collaboration), Phys. Lett. B 353, 155 (1995).
5. R. Akers et al. (OPAL Collaboration), Phys. Lett. B 320, 417 (1994).
6. W. Braunschweig et al. (TASSO Collaboration), Z. Phys. C 45, 193 (1989).
7. A. Giovannini and R. Ugoccioni, Phys. Rev. D 59, 094020 (1999).
8. A. Giovannini, S. Lupia and R. Ugoccioni, Phys. Lett. B 374, 231 (1996).
9. C. Fuglesang, in Multiparticle Dynamics: Festschrift for Léon Van Hove, edited by A. Giovannini and W. Kittel (World Scientific, Singapore, 1990), p. 193.
10. A. Giovannini and R. Ugoccioni, Phys. Rev. D 66, 034001 (2002).
SOFT PHOTON EXCESS OVER THE KNOWN SOURCES IN HADRONIC INTERACTIONS
MARTHA SPYROPOULOU-STASSINAKI
Nuclear and Particle Physics Section, Physics Department, University of Athens, 157 71 Panepistimioupolis, Ilissia, Athens, Greece
E-mail: mspyrop@cc.uoa.gr
Presented at CF2002, 8th-15th June 2002, Crete, Greece

In this presentation the excess of soft photons over all known sources of photons produced in hadronic interactions, reported during the last twenty years, is reviewed. In a number of experiments, with different beam particles and beam energies, the measured sample of the unexpected photons is concentrated mainly in the kinematic region 0.2 < E_γ < 1 GeV and P_T < 60 MeV/c, and in the forward rapidity region (Y_cms > 0). Our group in the Physics Department of Athens University has been contributing actively throughout this period to the experimental effort of understanding the phenomenon.
1. Introduction
During the last two decades, experimental observations of direct soft photon production (photons from the main interaction with transverse momentum less than 60 MeV/c) over the radiative hadronic decays have been reported (see refs. 1 to 10) in hadronic collisions over a wide range of hadron (pion, kaon and proton) beam momenta. The excess signal is compared with the Inner Hadronic Bremsstrahlung (InBr), the only known source of γ's of non-hadronic origin. In two cases (see refs. 1 and 7) the signal was found to be compatible with InBr, while in the other cases the excess was several times bigger than the expected InBr. In this article the summary of published results is an introduction to the following presentation of a study of WA102* data by P. Ganoti. The experiments which have studied soft photon production are listed below; Table 1 summarizes the reported soft photon excess.
1) K+p at 70 GeV/c (WA27, CERN, BEBC, 1984) [2]
2) K+p and π+p at 250 GeV/c (CERN, NA22, EHS, 1991) [5]
3) π−p at 280 GeV/c (CERN, WA83, OMEGA, 1993) [6]
4) pBe at 450 GeV/c (CERN, HELIOS, 1993) [7]
5) π−p at 280 GeV/c (CERN, WA91*, OMEGA, 1997) [8]
6) the same as above (CERN, WA91*, OMEGA, 2002) [9]
7) pp at 450 GeV/c (CERN, WA102*, OMEGA, 2002) [10]
The first observation of a soft photon signal in excess of the hadronic Bremsstrahlung predictions was reported by the WA27 experiment, and this result gave the motivation for the WA83 proposal [3]. In this proposal it was shown that, if the excess of photons has similar features to the Internal Hadronic Bremsstrahlung, the kinematic region where the effect will be most pronounced (over a small background) is low energy and low P_T (both at the same time), or low energy and small X_F, or low energy and small emission angle of the photons, or low energy and forward rapidity of the produced photons. In this kinematic region the contribution of gammas from hadronic decays (the main source of background inside the soft photon signal) is very small. Several theoretical models (see refs. 11-18) have been proposed in order to explain the above phenomenon, sometimes in relation with the production of soft dileptons. Until now, there is no explanation of the soft photon excess, at least for P_T less than 10 MeV/c where the signal is most pronounced.
2. The WA83/SOPHIE experiment
The WA83/SOPHIE experiment (π−p interactions at 280 GeV/c) was performed at the CERN OMEGA spectrometer equipped with two electromagnetic calorimeters [6]. Two sub-samples of events have been fully analysed: the events with information from the inner electromagnetic calorimeter (called PLUG, see ref. 6) and the events with a photon materialised inside the hydrogen target and reconstructed as an e+e− pair inside the OMEGA spectrometer (see refs. 6d and 6e). The aim was to investigate soft photon production with two different samples, under the same experimental conditions and covering a very similar kinematic region. The first sample had high statistics and good photon detection efficiency, but the direction of the photons was unmeasurable. The second sample had low statistics, but the γ's had a well defined point of origin. Comparison of the two samples has provided an internal consistency check of these results.
2.1. WA83/SOPHIE Inner Calorimeter (PLUG) data
Information about the construction and the characteristics of this calorimeter can be found in reference 6 and the references therein. The PLUG calorimeter was located 11 meters from the center of the target and recorded photons emitted inside a cone of 20 mrad half angle around the beam direction. An excess of directly produced photons by a factor of 7.9 ± 1.4 over the Inner Hadronic Bremsstrahlung calculation was measured in the kinematic region 0.2 < E_γ < 1 GeV, P_T < 10 MeV/c [6]. In Figure 1 the P_T distribution of photons (corrected for the reconstruction efficiency) is compared with the InBr predictions and the Monte Carlo calculations for γ's from hadronic decays.
2.2. WA83/SOPHIE e+e− pairs
The sample of events including at least one V0 was used to search for photons converted into e+e− pairs inside the 1 meter long H2 target and measured with the OMEGA MWPCs, which were located inside a 1.1 Tesla magnetic field. The reconstructed e+e− pairs had a contamination of less than 5% in the region E_pair
3. WA91* e+e- pairs
In experiment WA83 the soft photons (with a lower energy limit of 200 MeV) were detected with a Pb-scintillating fibre calorimeter. The origin of the photons was assumed to be the main interaction vertex for the determination of their otherwise unmeasurable direction. The calculated background from various sources in the apparatus was found to be negligible inside the geometrical acceptance of the above calorimeter. It was obvious that a direct measurement of the photon production point would give an experimental check of this type of background. The WA91* experiment, using a photon materialisation technique, was motivated by this idea. The data of WA91* were collected using the set-up of the WA91 experiment at the CERN OMEGA spectrometer. A 280 GeV/c π− beam, incident on a 60 cm long hydrogen target, was used to reproduce conditions similar to the WA83 exposure. All interaction triggers (minimum bias) were recorded. A 50 × 50 cm² Pb sheet of 1 mm thickness was placed just before the B set of MWPCs, at a distance of 73 cm downstream from the center of the hydrogen target. The photons were detected via their conversion in the Pb sheet into an electron-positron pair. Information about the reconstruction of photons and the analysis algorithm can be found in references 8 and 9. Only those photons were kept for which the spatial separation of their e+e− vertex from any charged track at the lead sheet was greater than 3 mm in the x-y plane. This was useful in order to remove soft photons produced in the Pb sheet by electrons or positrons originating upstream of the Pb sheet. The method is called the 'isolation criterion'. Figure 3a shows the efficiency corrected P_T spectrum of photons emitted inside 240 mrad around the beam direction. It can be seen that the data follow well the expected P_T distribution of photons coming from hadronic decays above a P_T of 50 MeV/c, where the contribution coming from inner bremsstrahlung is small.
For P_T < 50 MeV/c an excess exists which rises rapidly towards zero P_T. As can be seen by comparing figures 3a and 3c, this excess, in the energy range 0.2 < E_lab < 1 GeV, is essentially concentrated at angles θ < 20 mrad, and it was measured to be 5.3 ± 1.0 times the Inner Bremsstrahlung prediction [9].

4. WA102* e+e− pairs
The WA102* experiment was motivated by the idea of checking soft photon production at higher beam momentum and with a different kind of beam
particle. A 450 GeV/c proton beam, incident on a 60 cm long hydrogen target, was used to reproduce conditions similar to the WA83 and WA91* experiments. All interaction triggers were collected. A 50 × 50 cm² Pb sheet of 1 mm thickness was placed just before the B set of MWPCs, at a distance of 66 cm downstream from the center of the hydrogen target. The photons were detected via their materialisation in the Pb sheet into an electron-positron pair. The requirements for accepting a V0 as a photon candidate are described in reference 10. The same 'isolation criterion' (as in references 8 and 9) of the γ's from any charged track has been used in this analysis. The P_T spectrum of reconstructed photons, emitted inside a cone of half angle 225 mrad around the beam direction and with energy 0.2 < E_γ < 1.0 GeV, corrected for efficiency, is shown in figure 4. "Brems" stands for the inner hadronic bremsstrahlung; "BG" means the full Monte Carlo non-direct photon background. In figs. 4c and 4d an additional restriction of θ < 20 mrad has been applied. The errors on the plots are statistical. The excess measured from these plots is 4.1 ± 0.8. In conclusion, this experiment, which was performed at higher energy (√s = 29 GeV) and with a different beam particle (p instead of π−) than WA83 and WA91 (√s = 23 GeV), also reports the observation of an excess of soft photons in the low energy and low P_T region.

References

1. A.T. Goshaw et al., Phys. Rev. Lett. 43 (1979) 1065.
2. P.V. Chliapnikov et al., Phys. Lett. 141B (1984) 276.
3. WA83 Proposal and Letter of Intent: Investigation of soft photon production in hadronic collisions using the OMEGA spectrometer, CERN/SPSC 85-64, SPSC/P219, and references therein; CERN/SPSC 84-60, SPSC/I 156.
4. J.J. Aubert et al., Phys. Lett. 218B (1989) 248.
5. F. Botterweck et al., Z. Phys. C51 (1991) 541.
6. a) S. Banerjee et al., Phys. Lett. B305 (1993) 182; b) S. Banerjee et al., preprint CERN-PPE/92-218; c) Irene Vichou, Ph.D. Thesis, University of Athens, 1993; d) A. Belogianni, Ph.D. Thesis, University of Athens, 1996; e) M. Spyropoulou-Stassinaki, Proceedings of: Quark Matter 1990, Menton, France, Nucl. Phys. A 525 (1991) 487; Hadron Structure 1992, Stara Lesna, Czechoslovakia; Int. Symp. on Multiparticle Production (Soft Physics and Fluctuations), Cracow 1993 (World Scientific, p. 51); XXV Int. Symp. on Multiparticle Dynamics, Stara Lesna, Slovakia, 1995 (World Scientific, p. 670); Int. Symp. on Correlations and Fluctuations, Nijmegen, The Netherlands, 1996 (World Scientific, p. 63); XXVIII Int. Symp. on Multiparticle Dynamics, Delphi, Greece, 1998 (World Scientific 2000, p. 318).
7. J. Antos et al., Z. Phys. C59 (1993) 547.
8. A. Belogianni et al., Phys. Lett. B408 (1997) 487.
9. A. Belogianni et al., Phys. Lett. B548 (2002) 122.
10. A. Belogianni et al., Phys. Lett. B548 (2002) 129.
11. L. Van Hove, Ann. of Phys. 192 (1989) 66.
12. V. Balek, N. Pisutova and J. Pisut, Acta Phys. Pol. B21 (1990) 149.
13. V. Cerny et al., Z. Phys. C31 (1986) 163.
14. B. Andersson et al., Soft Photons in the Lund model, LU-TP 88-1, 1988.
15. E. V. Shuryak, Phys. Lett. B231 (1990) 175.
16. P. Lichard and L. Van Hove, Phys. Lett. B245 (1990) 485.
17. P. Lichard, SUNY-NTG-94-41; Phys. Rev. D 50 (1994) 6824.
Table 1. A summary of experimental results on direct soft photon observation.

    Ref.  Beam and target    Photon kinematic range                                 Signal / Inner bremsstrahlung
    1     10.5 GeV/c         0 < X_F < 0.01, E_γ > 30 MeV, P_T < 20 MeV/c           1.25 ± 0.25
    2     K+p, 70 GeV/c      −0.001 < X_F < 0.008, E_γ > 70 MeV, P_T < 60 MeV/c     4.0 ± 0.8
    5     K+p, 250 GeV/c     −0.001 < X_F < 0.008, E_γ > 70 MeV, P_T < 40 MeV/c     6.4 ± 1.6
    5     π+p, 250 GeV/c     the same                                               6.9 ± 1.3
    6     π−p, 280 GeV/c     1.4 < y_cms ≤ 5, 0.2 < E_γ < 1 GeV, P_T < 10 MeV/c     7.9 ± 1.4
    7     pBe, 450 GeV/c     −1.4 < y_cms ≤ 0, E_γ < 150 MeV, P_T < 10 MeV/c        < 1.5-3 (at 90% C.L.)
    8     π−p, 280 GeV/c     1.4 < y_cms ≤ 5, 0.2 < E_γ < 1 GeV, P_T < 20 MeV/c     7.8 ± 1.5
    9     π−p, 280 GeV/c     1.4 < y_cms ≤ 5, 0.2 < E_γ < 1 GeV, P_T < 20 MeV/c     5.3 ± 1.0
    10    pp, 450 GeV/c      1.2 < y_cms ≤ 5, 0.2 < E_γ < 1 GeV, P_T < 20 MeV/c     4.1 ± 0.8
Figure 1. The efficiency corrected P_T spectrum of photons measured with the PLUG calorimeter (WA83 fibre calorimeter data, 0.2 < E_γ < 20 GeV), upon which are superimposed the FRITIOF Monte Carlo predictions for γ's from hadronic decays and the QED inner bremsstrahlung calculation.
Figure 2. WA83: Comparison in P_T bins of the ratios (data/simulation of hadronic decays) for the WA83 e+e− pairs (open triangles) and for the WA83 calorimeter (PLUG) showers (full dots), in the photon energy range 0.2 < E_γ < 1 GeV and restricted inside the angle θ < 20 mrad.
Figure 3. a) WA91*: P_T distribution for e+e− pairs (selected with the isolation criterion) with energy 0.2 < E_γ < 1 GeV, θ < 240 mrad, corrected for detection efficiency. "Data" means the experimental data, "BG" the full non-direct photon background and "Brems" the inner hadronic bremsstrahlung; b) same with the background subtracted; c, d) same as a), b) but within the interval θ < 20 mrad.
Figure 4. a) WA102*: P_T distribution for e+e− pairs (selected with the isolation criterion) for photons with energy 0.2 < E_γ < 1 GeV, θ < 225 mrad, corrected for detection efficiency. "Brems" stands for the inner hadronic bremsstrahlung, "BG" means the full Monte Carlo non-direct photon background; b) same with the background subtracted; c, d) same as a), b) but with the additional restriction θ < 20 mrad.
A STUDY OF SOFT PHOTON PRODUCTION IN pp COLLISIONS AT 450 GeV/c AT CERN-SPS
PARASKEVI GANOTI
University of Athens
E-mail: [email protected]
Presented at CF2002, 8-15 June 2002, Crete, Greece
A. Belogianni(1), W. Beusch(2), T.J. Brodbeck(3), F.S. Dzheparov(4), B.R. French(2), P. Ganoti(1), J.B. Kinson(5), A. Kirk(5), V. Lenti(2), I. Minashvili(6), V. Perepelitsa(4), N. Russakovich(6), A.V. Singovsky(7), P. Sonderegger(2), M. Spyropoulou-Stassinaki(1), O. Villalobos-Baillie(5)
(1) Athens University, Physics Department, Athens, Greece.
(2) CERN, European Organization for Nuclear Research, Geneva, Switzerland.
(3) Department of Physics, University of Lancaster, Lancaster, U.K.
(4) ITEP, Moscow, Russia.
(5) University of Birmingham, Physics Department, Birmingham, U.K.
(6) JINR, Dubna, Russia.
(7) IHEP, Protvino, Russia.

Photons produced in fixed target pp interactions at 450 GeV/c were detected by reconstructing the e+e− pairs of photon conversions in a 1 mm thick lead sheet placed in front of the MWPCs of the OMEGA spectrometer at CERN. The soft e+e− pairs were analysed in the forward kinematic region, and a signal of prompt photons (data after subtraction of the background of hadronic decays and secondary γ's) was found to exceed the expected level of the inner hadronic bremsstrahlung by a factor of 6.1 ± 1.3.
1. Introduction
During the last 20 years an unexplained excess of soft photons over all known sources, produced in the forward region of hadronic interactions at high energy, has been reported in K+p, π+p and π−p [1-8] and recently in pp interactions [9]. The present study of soft photons is based on the same data as in [9],
namely pp interactions at 450 GeV/c, using the same method of photon detection and inside the same kinematic region. The analysis has been performed by following the development of all known sources through the experimental process, including those suppressed in [9] by the application of an additional cut (the isolation cut) introduced there to make the photon sample cleaner. The advantages of this analysis are its better statistical accuracy and smaller experimental bias.

2. A brief description of the experiment
The data for this analysis were collected during special data taking periods of the WA102 experiment, performed at the CERN OMEGA spectrometer, the layout of which is similar to that used in the WA91 experiment [6]. A proton beam of 450 GeV/c was incident on a 60 cm long hydrogen target. A 50 × 50 cm² Pb sheet of 1 mm thickness was placed at a distance of 66 cm downstream from the center of the target, thus restricting the angular acceptance of the γ conversions on this sheet to a 225 mrad cone. The magnetic field (B = 1.2 T) direction was along the z (vertical) axis of the OMEGA coordinate system, in which the beam was along the x axis. Minimum bias (interaction) triggers were collected. Out of them, 3.4 × 10^6 events with an interaction vertex within the beam spot inside the target and having fewer than 8 charged tracks have been used in this analysis. The last requirement was necessary in order to select the cleaner events, for which the pattern recognition results are reliable. The photons were detected via their materialisation in the lead sheet into an electron-positron pair. The e+e− were reconstructed as V0s from the digitizations produced in the MWPCs of the OMEGA, using a modified version of the standard TRIDENT reconstruction program which enabled reconstruction of tracks originating from the lead sheet with momenta down to 50 MeV/c. It was thus possible to determine the line of flight of the photon with an average error of ±10 mrad by measuring its momentum. This error is mainly due to the multiple scattering of the electrons and positrons in the lead sheet. However, for the calculation of the photon polar angle θ and the associated transverse momentum p_T, we preferred the more precise direction given by using the reconstructed interaction point in the target and the photon materialisation point in the lead sheet. The requirements for accepting a positive and negative particle pair (V0) as a materialised photon candidate were:
The requirements €or accepting a positive and negative particle pair ( V o )as a materialised photon candidate were: (1) that each track had at least 4 reconstructed space points;
(2) that the effective mass of the V0 found by the TRIDENT reconstruction program, assuming electron masses for the tracks, was less than 70 MeV/c²;
(3) that the x coordinate of the photon apex, defined to be at the position where the two tracks have zero angle between them, was within ±3 cm of the middle of the lead sheet;
(4) that the distance between the two tracks in the x-y plane, at the position where they had zero angle between them, was less than 3 mm.
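The four requirements, together with the geometric direction determination described above, can be sketched as a simple filter. This is illustrative only: the candidate record fields are invented, while the thresholds are those quoted in the text, and the beam is taken along the x axis.

```python
import math
from dataclasses import dataclass

@dataclass
class V0Candidate:
    points_pos: int     # reconstructed space points on the positive track
    points_neg: int     # ... and on the negative track
    mass: float         # effective e+e- mass (electron hypothesis), MeV/c^2
    apex_x: float       # x coordinate of the photon apex, cm
    track_sep: float    # x-y track separation at zero opening angle, mm

LEAD_SHEET_X = 66.0     # cm downstream of the target center

def accept_photon(v0: V0Candidate) -> bool:
    """Cuts (1)-(4) quoted in the text."""
    return (v0.points_pos >= 4 and v0.points_neg >= 4        # cut (1)
            and v0.mass < 70.0                               # cut (2)
            and abs(v0.apex_x - LEAD_SHEET_X) <= 3.0         # cut (3)
            and v0.track_sep < 3.0)                          # cut (4)

def theta_and_pt(vertex, conversion, e_gamma):
    """Photon polar angle from the line joining the interaction vertex to
    the conversion point (beam along x), and p_T = E * sin(theta) for a
    massless photon; energies in GeV, coordinates in cm."""
    dx, dy, dz = (c - v for c, v in zip(conversion, vertex))
    theta = math.atan2(math.hypot(dy, dz), dx)
    return theta, e_gamma * math.sin(theta)
```

For example, a 0.5 GeV photon converting 0.5 cm off-axis at the lead sheet has θ ≈ 7.6 mrad and p_T ≈ 3.8 MeV/c, i.e. well inside the soft region of interest.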
In works [8,9] the above set of selection criteria was followed by the isolation cut. With this cut, the photons accepted for analysis were those for which the spatial separation of their e+e− vertex from any charged track at the lead sheet was greater than 3 mm in the x-y plane.

3. The reconstruction efficiency of the e+e− pairs
The efficiency for reconstructing photons was determined by a method of implantation of simulated γ's into the raw data. This method generates photons with a bremsstrahlung-like spectrum (1/E), uniformly distributed in θ, converts them in the lead sheet, transports the resulting e+e− pairs through the lead sheet and the MWPCs using the EGS4 code [11], and simulates clusters in these chambers at the positions where the e+ and e− cross them. After digitization, these clusters are implanted into actual events, which are then passed through the TRIDENT reconstruction program followed by the standard selection and analysis algorithm. The efficiency has been studied as a two dimensional function of photon energy (E) and emission angle θ (Fig. 1). The validity of the efficiency correction has been assessed by comparing the efficiency corrected photon p_T spectrum, in a region where photons from hadronic decays dominate, with the predictions of the Monte Carlo code for the hadronic background. This resulted in an estimate of the systematic uncertainty due to efficiency corrections of below 10%.

4. Simulation through FRITIOF and EGS M.C.
Detailed simulation of the experiment has been done with FRITIOF [12] used as an event generator and EGS [11] used to transport electromagnetic particles through the experimental setup. A generator of inner bremsstrahlung photons was added to FRITIOF, producing them according to the Low and Haissinski formulas [13,14].
Figure 1. a) The reconstruction efficiency as a function of Eγ for the kinematic region 0.2 < Eγ < 1 GeV and θ < 20 mrad. b) The efficiency as a function of θ in the energy region 0.2 < Eγ < 1 GeV.
Thus, at the generator level there are two sources of photons: a background from hadronic decays (π0, η, ω, ...), which is called the hadronic decay background (HDB), and the expected signal of the inner hadronic bremsstrahlung (InBr). In the energy region of interest (0.2 < Eγ < 1 GeV) these two sources give the soft photon rates of table 1, after accounting for conversion probability (only the e+e- pairs having both tracks with momenta above 10 MeV/c are taken into account).
Table 1. Number of soft photons per event (%) for the primary photon sources InBr and HDB in the soft photon kinematic region, as calculated by MC.

Photon source   0.2 < Eγ < 1 GeV, θ < 225 mrad   0.2 < Eγ < 1 GeV, θ < 20 mrad
InBr            3.0                              1.4
HDB             61.1                             1.7
From this table it is clear that, restricting the analysis to the forward region (θ < 20 mrad), the hadronic background is of the same order of magnitude as the expected contribution of inner bremsstrahlung. This is the main reason why the forward region has been selected for the analysis [15]. In addition to these primary photons (HDB and InBr) there is a number of secondaries, originating from the primary ones as they pass through the experimental setup. These photons are the external bremsstrahlung (ExBr): when a high energy photon generates an e+e- pair in the material upstream of the lead sheet (target, target walls, air between the target and lead sheet), the pair particles may radiate bremsstrahlung photons, which can enter our kinematic region. Table 2 shows the respective rates for ExBr, as in table 1.
Table 2. Number of secondary soft photons (ExBr) per event (%) in the soft photon kinematic region, as calculated by MC.

Photon source   0.2 < Eγ < 1 GeV, θ < 225 mrad   0.2 < Eγ < 1 GeV, θ < 20 mrad
ExBr            7.0                              3.5
The ExBr background is expected to be similar in shape to InBr and, together with it and HDB, produces the pT distribution shown in Fig. 2 by open circles. It can be seen from the table above that in the forward region of our experimental setup the ExBr rate exceeds the sum of the InBr and HDB rates. Additionally, pairs from hadronic background photons of Eγ > 1 GeV converted in the lead sheet can degrade to energies below 1 GeV due to bremsstrahlung radiation. Let us call this background DHDB (degraded hadronic decays background). The pT distribution of photons with the DHDB added is shown in Fig. 2 by triangles. Comparing the two distributions, an increase of the total background is clear for pT < 50 MeV/c. Fig. 3 demonstrates how the initial spectra of HDB change (in two angular ranges, θ < 225 mrad and θ < 20 mrad) if degraded hadronic photons are
Figure 2. The evaluation of the soft photon background as calculated by the simulation in the kinematic region 0.2 < Eγ < 1 GeV and θ < 225 mrad. Open circles: the pT distribution of converted photons (from InBr, HDB and ExBr sources). Triangles: same, with hadronic decay γ's degraded from a higher energy range added. Squares: same, with further addition of the combinatorial background.
taken into account. However, besides the sources described above there is an additional background resulting from tracks of different photons converted in the lead sheet at close y and z coordinates, which can be combined into a fake photon by the reconstruction program, satisfying the photon selection criteria. These fake combinations are added to the soft photon source with the higher source index (a, b, c, d of table 3). For example, a fake e+e- pair in which the e+ belongs to source (a) and the e- to source (c) will be attributed to source (c). The result is that the InBr is kept clean, while the wrong combinations are added to the sources (b), (c) and (d). This new background is called the combinatorial background.
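The bookkeeping rule for fake pairs described above can be sketched as follows; the source labels mirror table 3, while the function name itself is illustrative:

```python
# ordering of the soft-photon sources as in table 3: a) InBr < b) HDB < c) DHDB < d) ExBr
SOURCE_RANK = {"InBr": 0, "HDB": 1, "DHDB": 2, "ExBr": 3}

def fake_pair_source(src_plus: str, src_minus: str) -> str:
    """A fake e+e- pair is attributed to the higher-index of its two track
    sources, so that InBr stays clean of combinatorial background."""
    return max(src_plus, src_minus, key=SOURCE_RANK.get)

print(fake_pair_source("InBr", "DHDB"))  # the example from the text -> "DHDB"
```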
In addition, when treating the e+e- pairs at this latest stage, a spread in the energy of the e+e- pairs and in the position of the reconstructed photon vertex, as coming from the experimental procedure, is applied in order to imitate the experimental conditions. The resulting total background distribution, after the spreads and the combinatorial background are accounted for, is given in Fig. 2 by squares. The photon rates for all sources described above (before and after the spreads and combinatorial background are added), in the kinematic range 0.2 < Eγ < 1 GeV, θ < 20 mrad, are given in table 3, accounting for conversion probability. The quoted errors are statistical. The systematic errors were evaluated to be about 5% for all soft photon sources in the table.

Table 3. Number of soft photons per event, for all soft photon sources in the kinematic region 0.2 < Eγ < 1 GeV and θ < 20 mrad (MC results).
Soft photon source   Number of γ's per event (%),   Number of γ's per event (%),
                     before adding experimental     after adding experimental spreads
                     spreads and combinatorial      to all sources and combinatorial
                     background                     background to sources b to d
a) InBr              1.45 ± 0.02                    1.43 ± 0.02
b) HDB               1.74 ± 0.02                    2.12 ± 0.02
c) DHDB              1.18 ± 0.02                    1.20 ± 0.02
d) ExBr              3.33 ± 0.03                    5.52 ± 0.04
Sum over b to d      6.25 ± 0.05                    8.84 ± 0.05
The photon rates given in table 3 are valid for photons satisfying the selection criteria described in section 2.

5. Experimental results

Fig. 4 shows the pT distribution of the WA102 data corrected for reconstruction efficiency (triangles), together with the pT distribution of HDB + DHDB + ExBr (open circles) and, separately, the pT distribution of the InBr (stars). The combinatorial background is included in HDB, DHDB and ExBr. In this figure, the data follow well the expected pT distribution of photons coming from the background above a pT of 40 MeV/c, where the contribution coming from the inner bremsstrahlung is small. For pT < 40 MeV/c a clear excess exists, which rises towards small pT. Restricting the angular range to θ < 20 mrad and subtracting from the data the sum of sources (b) to (d) (table 3, last column), the pT distribution
Figure 3. The Eγ distributions of HDB photons on the Pb sheet a) for θ < 225 mrad; b) for θ < 20 mrad. Dashed lines are plotted before accounting for the degraded photon yield, solid lines after accounting for it (an increase of 14% in the range 0.2 < Eγ < 1 GeV).
of Fig. 5, shown by triangles, is produced. The expected distribution of inner bremsstrahlung is shown by open circles in the same figure. As can be seen, the data exceed strongly the inner bremsstrahlung predictions below 20 MeV/c. The ratio of photons in this kinematic region (0.2 < Eγ < 1 GeV, θ < 20 mrad) to the number of interactions is (17.6 ± 0.1)%, accounting for conversion probability. The quoted error is statistical. The systematic error, mainly due to uncertainty in the reconstruction efficiency, is 10% of
Figure 4. The pT distribution of the WA102 soft photon data corrected for efficiency in the kinematic region 0.2 < Eγ < 1 GeV and θ < 225 mrad, together with the Monte Carlo predictions.
the rate. Reducing the photon rate by the sum of contributions (b) to (d) of table 3, we find that, in the defined kinematic region, the signal of the prompt soft photons is (8.7 ± 0.1 ± 1.8)%. The ratio of the observed signal to the expected inner bremsstrahlung rate, in the current analysis, is 6.1 ± 1.3, where the main contribution to the error comes from the uncertainty in the efficiency corrections.

6. Conclusion
This analysis confirms the existence of an anomalous soft photon signal in pp interactions at 450 GeV/c in the kinematic region 0.2 < Eγ < 1 GeV and θ < 20 mrad, at a level (6.1 ± 1.3) times that expected from inner hadronic bremsstrahlung, which is to be compared with that reported in [10], i.e. 4.1 ± 0.8.
Figure 5. The pT distribution of the WA102 soft photon data corrected for reconstruction efficiency in the kinematic region 0.2 < Eγ < 1 GeV and θ < 20 mrad, after the subtraction of the sources b to d (last column of table 3), together with the Monte Carlo predictions for InBr.
Acknowledgments
The Greek co-authors have been supported by the Special Research Account through two projects: 70/4/5834 and 70/3/4969.

References
1. P.V. Chliapnikov et al., Phys. Lett. B 141 (1984) 276.
2. F. Botterweck et al., Z. Phys. C 51 (1991) 541.
3. S. Banerjee et al., Phys. Lett. B 305 (1993) 182.
4. M. Spyropoulou-Stassinaki: S. Abatzis et al., Nucl. Phys. A 525 (1991) 487; Proceedings of Hadron Structure 92, Stara Lesna, Czechoslovakia, September 1992; Proceedings of the Cracow Workshop on Multiparticle Production (Soft Physics and Fluctuations), Cracow, 1993 (World Sci., Singapore) p. 51; Proc. XXV Int. Symp. on Multiparticle Dynamics (Stara Lesna, Slovakia, September 1995) (World Sci., Singapore) p. 670; Proc. XXVIII Int. Symp. on Multiparticle Dynamics (Delphi, Greece, September 1998) (World Sci., Singapore, 2000) p. 318; Proc. Correlations and Fluctuations, Nijmegen, The Netherlands, June 1996 (World Sci., Singapore) p. 120.
5. T.J. Brodbeck, Proceedings of the Cracow Workshop on Multiparticle Production (Soft Physics and Fluctuations), Cracow, 1993 (World Sci., Singapore) p. 63.
6. A. Belogianni et al., Phys. Lett. B 408 (1997) 487.
7. V. Perepelitsa, Proc. XXVIII Int. Symp. on Multiparticle Dynamics (Delphi, Greece, September 1998) (World Sci., Singapore, 2000) p. 375.
8. A. Belogianni et al., Phys. Lett. B 548 (2002) 122.
9. A. Belogianni et al., Phys. Lett. B 548 (2002) 129.
10. J.C. Lassalle, F. Carena, S. Pensotti, NIM 176 (1980) 371.
11. W.R. Nelson, H. Hirayama and D.W.O. Rogers, SLAC preprint SLAC-265 (1985).
12. B. Andersson, G. Gustafson, B. Nilsson-Almqvist, Nucl. Phys. B 281 (1987) 289.
13. F. Low, Phys. Rev. 110 (1958).
14. J. Haissinski, LAL 87-11, 1987.
15. CERN/SPSC 85-22, SPSC/P212, 6 March 1985.
QCD AND STRING THEORY
G. K. SAVVIDY
National Research Center Demokritos, Ag. Paraskevi, GR-15310 Athens, Hellenic Republic
E-mail: [email protected]

A few natural basic principles allow one to extend the Feynman integral over paths to an integral over surfaces in such a way that the two coincide at the long-time scale, that is, when the surface degenerates into a single-particle world line. In the classical approximation the loop Green functions have perimeter behaviour, which corresponds to free quarks. Quantum fluctuations of the surface generate a nonzero string tension, that is, the area law, and we have quark confinement. In this string theory confinement and asymptotic freedom can coexist.
1. Introduction

There are a few important facts constituting our knowledge of the strong interaction. Hadrons are made up of quarks, and the quarks are free to propagate together through space-time. However, it is impossible to separate quarks without creating new hadrons, and, opposite to that picture, in deep-inelastic processes, over short times and short distances, quarks behave like free pointlike particles [1,2,3,4,5,6,7]. A hadron, propagating as a whole through space-time, sweeps out a world line, a narrow strip, and the Feynman path integral perfectly describes this propagation if we neglect the internal motion of the quarks. To describe the fine structure of hadrons one should take into account the internal motion of the quarks. Perhaps the gauge field keeping the quarks together compresses into a flux line, and the propagation of a hadron through space-time forms a world surface rather than a line [7,8,9,10,11]. Thus, to describe the propagation of real hadrons with internal structure one should extend the Feynman integral over paths to an integral over surfaces. This should be done in such a way that both amplitudes coincide at the long-time scale, that is, in the cases when the surface degenerates into a single-particle world line [12].
To incorporate these properties into the theory one should require that:
(1) long narrow strips of the world surface must have an amplitude proportional to the average length of the strip [12];
(2) two surfaces distinguished by a small deformation of shape should have close amplitudes [13,12].
These principles, especially the continuity principle (2), allow one to define the action of the theory.

2. Action
Indeed, the Feynman path integral is defined by summing over all piecewise linear random walks from point X to Y, with transition amplitude proportional to the length of the path, A(L) [14],
while the surface is identified with piecewise flat triangles (polygons) glued together through their sides, and, as we stress in (1), the amplitude should be proportional to the length. Thus it must be proportional to a linear combination of the lengths of all surface edges,

    A(M) = Σ_{<ij>} Θ_{i,j} · |X_i − X_j|,   (2)

where X_i denotes the vertex coordinate, the summation is over all edges <X_i, X_j> (i and j are nearest neighbors), and Θ_{i,j} is an unknown factor, which can be defined by use of the continuity principle (2). Indeed, if we impose a new vertex X on a given flat triangle <X_1, X_2, X_3>, then for that new surface we will get an extra contribution Σ_i Θ_i · |X − X_i| to A(M), and we will get more extra terms by imposing new vertexes, despite the fact that the surface has not changed. To exclude such contributions we should choose the unknown factor Θ_{i,j} such that it vanishes in the flat cases. This can be done by use of the edge angles [13,12],

    Θ_{i,j} = Θ(α_{i,j}),

where

    Θ(π) = 0,   (3)
    Θ(2π − α) = Θ(α),   (4)
    Θ(α) ≥ 0   (5)
and α_{i,j} is the angle between the neighbor triangles in R^d having a common edge <X_i, X_j>. The first condition (3) guarantees that property (2) is fulfilled; the second (4) that the action (2) is local, that is, one can measure the angle α_{i,j} irrespective of the side of the surface; and the third (5) that the action is positive. These conditions do not define the weight factor Θ(α) completely; a continuous "number" of possibilities still remains. Generally speaking, a suitable selection of the factor Θ(α) can be made if we require convenient scaling behavior of the physical quantities. In the next section we will see that the scaling behavior of the string tension indeed depends on Θ(α) and partly defines the latter. Thus we should further tune this string theory [12].
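As a concrete illustration of the action (2) with an angle-dependent weight, the sketch below evaluates A(M) = Σ Θ(α_ij)·|X_i − X_j| on a small triangulated surface, using the parameterization Θ(α) = |π − α|^ζ (one choice satisfying conditions (3)-(5)); the mesh representation (vertex list plus triangles) is an assumption of the sketch. Flat configurations give zero action, while folding produces a positive contribution:

```python
import math

def sub(p, q): return tuple(pi - qi for pi, qi in zip(p, q))
def dot(p, q): return sum(pi * qi for pi, qi in zip(p, q))
def norm(p): return math.sqrt(dot(p, p))

def edge_angle(xi, xj, a, b):
    """Angle α between triangles <xi,xj,a> and <xi,xj,b> sharing edge <xi,xj>;
    α = π when the two triangles are coplanar (flat surface)."""
    e = sub(xj, xi)
    def perp(p):                     # component of (p - xi) orthogonal to the edge
        w = sub(p, xi)
        t = dot(w, e) / dot(e, e)
        return tuple(wi - t * ei for wi, ei in zip(w, e))
    u, v = perp(a), perp(b)
    c = max(-1.0, min(1.0, dot(u, v) / (norm(u) * norm(v))))
    return math.acos(c)

def theta(alpha, zeta=1.0):
    """Weight factor Θ(α) = |π − α|^ζ, vanishing on flat edges as required by (3)."""
    return abs(math.pi - alpha) ** zeta

def action(vertices, triangles, zeta=1.0):
    """A(M) = Σ Θ(α_ij) |X_i − X_j| over interior edges of the triangulation."""
    edge_faces = {}
    for tri in triangles:
        for k in range(3):
            i, j = sorted((tri[k], tri[(k + 1) % 3]))
            other = [v for v in tri if v not in (i, j)][0]
            edge_faces.setdefault((i, j), []).append(other)
    total = 0.0
    for (i, j), opp in edge_faces.items():
        if len(opp) == 2:            # interior edge shared by two triangles
            alpha = edge_angle(vertices[i], vertices[j],
                               vertices[opp[0]], vertices[opp[1]])
            total += theta(alpha, zeta) * norm(sub(vertices[i], vertices[j]))
    return total

flat = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]   # flat square, two triangles
tris = [(0, 1, 2), (0, 2, 3)]
print(round(action(flat, tris), 6))                   # flat surface: action vanishes
bent = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)]   # fold one vertex out of plane
print(action(bent, tris) > 0.01)                      # folding costs action
```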
3. Quark Masses

To understand the physical meaning of the factor Θ(α) let us consider a surface with boundaries created by external sources or by virtual quarks. In accordance with the continuity principle (2) we should take α_{i,j} on the boundary edges equal to zero or to 2π. Then from (2) it follows that the boundary part of the action is simply proportional to the full length of the boundary,

    A_boundary = L · Θ(0).   (6)

In (6) the total length of the boundary L is multiplied by the factor Θ(0); thus Θ(0) plays the role of the quark mass,

    Θ(0) = m_q,   (7)

and if quarks are infinitely heavy, then

    Θ(0) → ∞.   (8)
This observation makes it clear that the quantity Θ(0) is connected with the quark masses and should be considered as a free external parameter of the theory. In fact, if we take Θ(0) to be very large, then we will have tremendous simplifications, because α) the probability of quark loop creation tends to zero and β) crumpled surfaces with folds are strongly suppressed [12]. This limit corresponds to pure QCD, that is, the quarks are not dynamical. When Θ(0) is not very large, the quarks are dynamical. We will use this freedom to describe the physical properties of quarks in the next section.

4. Quark Loops
Let us consider the vacuum expectation value of two currents ⟨J(X), J(Y)⟩ built from quark fields. In this approach it should be represented by sums over all quark paths and sums over all surfaces connecting these paths. All possible virtual quark loops must be summed over too, that is, surfaces with holes (rimmed by quark loops):

    ⟨J(X), J(Y)⟩ = Σ_{paths} Σ_{surfaces} e^{−A(M)}.

The action A(M) can always be written as a sum of two terms,

    A(M) = A_boundary + A_inside(M),

and, as is easy to see from (2)-(3), for minimal surfaces we get A_inside(M) = 0. This means that the value of the action on the classical trajectory coincides with A_boundary:

    A(M) = A_boundary.

Now from (6) it follows that the amplitude associated with the quark paths can be written in the form

    e^{−Θ(0)·P},   (9)

where P is the total length of the quark lines, including the internal quark loops. Therefore, before quark path averaging, the loop Green function can be represented in the form

    W(X, Y) = e^{−Θ(0)·P} · Σ_T ∫ ∏_i dX_i e^{−A_inside(M)},   (10)

where T denotes the set of triangulations and the integration is over all vertexes inside the fixed boundaries. The classical part, the first term in (10), represents
the perimeter law and describes the free propagation of quarks [7]. The second term in (10) is associated with quantum fluctuations of the surface and will produce quantum corrections to the perimeter law (9). As we will see, these fluctuations generate confinement [12], that is, the area law. To simplify the proof let us consider the case of infinitely heavy quarks (8). Indeed, when Θ(0) → ∞, the crumpled surfaces with folds are strongly suppressed, because for them α_{i,j} is close to zero and Θ(α ≈ 0) is very large. The folds are totally suppressed if Θ(α) increases sufficiently fast near α = π [12]. Therefore the surface fluctuates near edge angles close to α = π, that is, near flat surfaces. In the limit of infinitely heavy quarks (8) we also suppress the virtual quark loops, as follows from (9), and therefore only the valence quark and antiquark paths connecting X and Y will remain. Thus, in this limit, we are left with almost flat surfaces without holes. In fact, this means that the quasiclassical expansion should be done around the minimal flat surface, and small transverse fluctuations are regulated by the action A_inside(M). If X_⊥ is the amplitude of the transverse fluctuations, then the edge angles fluctuate in a region of order X_⊥/(P/√T), where T is the number of vertexes, and the corresponding action (11) is obtained with the following parameterization of Θ(α) near α ≈ π:

    Θ(α) = |π − α|^ζ.   (12)

For ζ ≤ 1 the factor Θ(α) increases sufficiently fast near α = π and suppresses large transverse fluctuations. Expressing the exponent η in the form

    η = (d − 2)/(d + 2ζ),

one obtains the transverse fluctuations (13). Substituting (13) into (11) yields a sum over triangulations of the form

    Σ_T exp[ −a·T·ln(P²/T) − (d + 2ζ)·T·(1 − ln(...)) ],

which converges for ζ = 0; hence the area law

    W ∼ e^{−σ(β)·S},

where S is the area and the string tension σ(β) is equal to

    σ(β) ≈ (d/a²)·(1 − ln(d/β)).

So the string tension scales, and the corresponding exponent ν coincides with the one for the random walk integral (1); this is expected, because both theories are close in nature. The physical reason why this model generates confinement is very simple and general. As follows from the continuity principle (2), the vertexes which lie on the flat minimal surface do not contribute to the action; that is, they "prefer" to fluctuate near the flat minimal surface, each occupying an area of order P²/T. The fluctuations in the transverse direction are regulated by the "transverse" action in (11), and they are small if Θ(α) increases sufficiently fast near α = π. Thus the Wilson integral is proportional to the volume of the narrow layer around the minimal surface, and confinement appears as a natural consequence of the basic principles (1) and (2).
5. Particle Production

Suppose now that the quarks are rather light, and thus Θ(0) is not very large. Then fluctuations with acute edge angles, α_{i,j} ≈ 0, will have large amplitude, and crumpled surfaces with folds are no longer totally suppressed. If we now consider a new surface which has cuts along these acute edges (these cuts correspond to virtual quark loops), then it is easy to see that the two surfaces have almost the same amplitude. Indeed, this follows from the continuity principle (2): the factor Θ(α) is unique for all surfaces, and therefore there is no big difference between the contributions from boundary edges, where α_{i,j} = 0, and from the acute edges, where α_{i,j} ≈ 0. This means that when the quarks are rather light, virtual quark loops can easily be created, and, as we see, they are created in the places where the surface crumples, producing folds with acute angles. The number of virtual quark loops is equal to the average number of folds appearing on the surface. In principle this observation can help to compute the structure functions of hadrons. Together with our previous result (15), that the "fat" surfaces are strongly suppressed and quark paths are unlikely to be well separated, it means that when we separate the quarks the surface easily tears into pieces along the folds, producing long narrow strips, the mesons, which are free to propagate through space-time with the amplitude (1), as follows from (1) and (2). This is in accord with the Wilson picture of confinement [7], because we have the perimeter law for nearby quark-antiquark pairs, while exponential damping is associated with large-size loops having large area. This picture is very close to what we expect from QCD.
We do not know how to introduce baryons into the theory in a unique way, but if one considers a baryon as a surface having three strips glued together through one common boundary, therefore forming the letter Y, one should only define the extra contribution coming from the common side edges. Again using the continuity principle (2), we should take these common edges with the weight

    Σ_{common edges} |X_i − X_j| · (1/2)·(Θ(α_{i,j}) + Θ(β_{i,j}) + Θ(γ_{i,j})),   (18)

where α, β, γ are the angles between the strips. It is easy to consider the cases when this surface describes the decay of the baryon. We should stress that this definition introduces an effective self-avoidance
into the theory, because, as is easy to see from (18), self-intersection extends the action. One can also consider the intersection of many strips glued together through one common side, just adding new "angle" terms in (18). This extends the action much more and suppresses the high-order self-intersections. As is evident from the previous discussion, the continuity principle (2) plays a very special role in our construction and helps to find expressions for different surface amplitudes. This approach supplements the lattice QCD approach, where the strong coupling expansion produces certain geometrical rules for the surface amplitudes [15]. Our approach can help to find the interrelation between these two theories.
Acknowledgments
I wish to thank the organizers of the conference in Crete, and especially Nikos Antoniou, for the invitation and for arranging an interesting and stimulating meeting. This work was supported in part by the EEC Grant no. HPRN-CT-1999-00161.
References
1. M. Gell-Mann, Phys. Lett. 8, 214 (1964); G. Zweig, CERN Reports No. TH-401 and TH-412 (1964).
2. R.P. Feynman, Phys. Rev. Lett. 23, 1415 (1969).
3. J.D. Bjorken, Phys. Rev. 179, 1547 (1969).
4. J.D. Bjorken and E.A. Paschos, Phys. Rev. 185, 1975 (1969).
5. D.J. Gross and F. Wilczek, Phys. Rev. Lett. 30, 1343 (1973).
6. H.D. Politzer, Phys. Rev. Lett. 30, 1346 (1973).
7. K. Wilson, Phys. Rev. D 10, 2445 (1974).
8. G. 't Hooft, Nucl. Phys. B 72, 461 (1974).
9. J. Schwinger, Phys. Rev. 128, 2425 (1962).
10. J. Kogut and L. Susskind, Phys. Rev. D 11, 395 (1975).
11. A. Casher, J. Kogut and L. Susskind, Phys. Rev. Lett. 31, 792 (1973).
12. G.K. Savvidy and K.G. Savvidy, Int. J. Mod. Phys. A 8, 3993 (1993).
13. R.V. Ambartzumian, G.K. Savvidy, K.G. Savvidy and G.S. Sukiasian, Phys. Lett. B 275, 99 (1992).
14. R.P. Feynman, Rev. Mod. Phys. 20, 367 (1948).
15. M. Creutz, Rev. Mod. Phys. 50, 561 (1978).
16. E. Brezin, C. Itzykson, G. Parisi and J.B. Zuber, Comm. Math. Phys. 59, 35 (1978).
ARE BOSE-EINSTEIN CORRELATIONS EMERGING FROM CORRELATIONS OF FLUCTUATIONS?

O.V. UTYUZH AND G. WILK
The Andrzej Sołtan Institute for Nuclear Studies; Hoża 69; 00-689 Warsaw, Poland
E-mail: [email protected] and [email protected]
M. RYBCZYŃSKI AND Z. WŁODARCZYK
Institute of Physics, Świętokrzyska Academy; Konopnickiej 15; 25-405 Kielce, Poland
E-mail: [email protected] and [email protected]

We demonstrate how Bose-Einstein correlations emerge from the correlations of fluctuations, allowing for their extremely simple and fast numerical modelling. Both the advantages and limitations of this new method of implementing BEC in the numerical modelling of high energy multiparticle processes are outlined and discussed. First applications to the description of e+e− data are given.
1. Introduction
Bose-Einstein correlations (BEC) between identical bosons have long been recognized as an indispensable tool in the search for the dynamics of multiparticle production processes, because of their potential ability to provide some space-time information about them^a. However, serious investigation of such processes can be performed only by means of involved numerical modelling, using for this purpose one of the specially designed Monte Carlo event generators (MCEG). Their a priori probabilistic structure prevents genuine BEC from occurring, because BEC are of purely quantum-statistical origin. The best one can do is either to change accordingly the output of such MCEGs, to provide the necessary bunching of identical bosonic particles (mostly π's) in phase space [4,5,6,7], or to try to incorporate somehow their bosonic character (or at least some specific features distinguishing bosonic and purely classical particles) into the MCEG itself, i.e., to construct an MCEG providing on output particles already showing the bunching mentioned above [8]. Our proposition discussed here (based on our works [9,10]) can in this respect be regarded as belonging to the first category but, at the same time, it also uses ideas explored in [8] and, to some extent, can be regarded as an improvement of [4,5,6,7]

^a The "infinite" number of references on BEC will be projected here onto some selected number only, starting from the pedagogical review [1] and followed by a selected number of more recent, mostly complementary in their approach to BEC, reviews [2].
(at least in what concerns the numerical performance). Whatever one is doing, the final objective is always the same: to reproduce the characteristic signals of BEC obtained experimentally, which for the case of 2-particle BEC means that the following two-particle correlation function,

    C2(Q = |p_i − p_j|) = N2(p_i, p_j) / [N1(p_i) · N1(p_j)],   (1)

defined as the ratio of the two-particle distribution to the product of single-particle distributions, increases towards C2 = 2 when Q approaches zero. Notice that (1) depends explicitly on the measured momenta (p_i, p_j) of the (like) particles; the searched-for space-time information can be obtained only when treating it as a specific Fourier transform of the distribution ρ(r) of the production points, in which case C2 is usually (schematically) written as [1,2]

    C2(Q) = 1 + |ρ̃(Q)|²,   (2)

where ρ̃(Q) denotes the Fourier transform of ρ(r).
It allows one (in principle) to translate the (observed) width of the peak in C2(Q) into the (deduced) size of the region of emission ρ(r)^b. From that point of view, the first approach to the numerical modelling of BEC mentioned above addresses (in one form or another) directly (2), whereas the second (and ours as well) provides only (1), which can later be analysed (much in the same way as the experimental data are) to obtain the information on ρ(r).

2. BEC - quantum-statistical approach
In what follows we shall treat BEC as arising from correlations of some specific fluctuations present in the physical system under consideration (known as the photon bunching effect in quantum optics [11], where similar correlations are also known under the name of the HBT effect). Notice that the main ingredient of C2 is the correlator ⟨n1 n2⟩, which can be written as^c [1,12]

    ⟨n1 n2⟩ = ⟨n1⟩⟨n2⟩ + ⟨(n1 − ⟨n1⟩)(n2 − ⟨n2⟩)⟩ = ⟨n1⟩⟨n2⟩ + ρ·σ(n1)·σ(n2).   (3)
^b In reality all this is, of course, much more complicated, but this will not be our point of interest here. Notice only that what is being deduced in this way is not the whole interaction region but rather the region where the like-particles with similar momenta are produced. We could call it the elementary emitting cell, introducing in this way the idea used later in this talk.
^c Here σ(n) is the dispersion of the multiplicity distribution P(n) of the produced secondaries and ρ is the correlation coefficient depending on the type of particles produced: ρ = +1, −1, 0 for bosons, fermions and Boltzmann statistics, respectively.
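Numerically, the ratio (1) is commonly estimated by dividing a same-event like-pair Q histogram by a mixed-event reference built from particles of different events. The toy sketch below (one-dimensional "momenta", arbitrary binning and statistics, and artificially extreme bunching of the particles within an event) only illustrates this construction; the function name and event layout are assumptions of the sketch:

```python
import random
from collections import Counter

def c2_histogram(events, qbin=0.05, nbins=20, seed=0):
    """Toy estimate of C2(Q): the same-event like-pair Q = |p_i - p_j| histogram
    divided, bin by bin, by a mixed-event (event-uncorrelated) reference."""
    same, mixed = Counter(), Counter()
    for ev in events:
        for i in range(len(ev)):
            for j in range(i + 1, len(ev)):
                same[min(int(abs(ev[i] - ev[j]) / qbin), nbins - 1)] += 1
    rng = random.Random(seed)
    for _ in range(10 * sum(same.values())):
        a, b = rng.sample(events, 2)   # one particle from each of two different events
        mixed[min(int(abs(rng.choice(a) - rng.choice(b)) / qbin), nbins - 1)] += 1
    ns, nm = sum(same.values()), sum(mixed.values())
    return [(same[k] / ns) / (mixed[k] / nm) if mixed[k] else float("nan")
            for k in range(nbins)]

# toy "events": the particles of one event share almost the same momentum
# (extreme bunching), so the ratio is strongly enhanced in the lowest Q bin
rng = random.Random(1)
events = []
for _ in range(300):
    base = rng.random()
    events.append([base + 0.002 * rng.random() for _ in range(3)])

h = c2_histogram(events)
print(h[0] > 1.5)   # strong enhancement at small Q
```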
Therefore the two-particle correlation function (1) is entirely given in terms of the covariances (3), explicitly showing its stochastic character:

    C2(Q) = ⟨n1 n2⟩ / (⟨n1⟩⟨n2⟩) = 1 + ρ·σ(n1)·σ(n2) / (⟨n1⟩⟨n2⟩).   (4)

In eq. (4) above, C2(Q) is just a measure of the correlation of the fluctuations present in the system under consideration, which is maximal (i.e., C2 = 2) whenever the fluctuations are maximal (i.e., for σ(n) = ⟨n⟩, which happens for the geometrical distribution of the produced particles, which in turn happens in a natural way when they are bosons and as such show a maximal tendency to bunch in the same state (or cell) in phase space). This feature is the cornerstone of the only MCEG providing particles already showing C2(Q) > 1 without any additional procedures [8]^d. Those who would like to start with the symmetrization of the corresponding multiparticle wave function should realize [13] that such symmetrization is equivalent to the change from Maxwell-Boltzmann statistics (classical, distinguishable particles) to Bose-Einstein statistics (indistinguishable quantum particles). To do this, one has to select as independent subsystems not the individual particles but rather the groups of particles in the consecutive states. In effect the original Poissonian distribution of particles goes over to the geometrical one mentioned above. This can best be visualised with the following simple example, which will also represent the main ideas of [9]^e. Suppose that a mass M (at rest) is going to hadronize (for simplicity into one kind of neutral particles of mass m each). When one selects each time a particle with energy E_i according to the simple statistical distribution P(E_i) = exp(−E_i/T), until the whole M is used up, one gets the multiplicity distribution of the produced secondaries, P(n), in Poissonian form (with ⟨n⟩ depending on the parameter T). Suppose now that one changes the algorithm in the following way: after selecting the first energy, E_1, one adds, with a probability P, to the particle chosen in this way another particle of the same energy E_1, and does so until the first failure (i.e., until the random number selected is greater than P).
Then one selects a new particle with energy E_2 (i.e., in fact one selects a new energy E_2) and repeats the above procedure again and again, until the whole mass M is used up. It is straightforward to realize that what one gets is a number of Poisson-like distributed cells (with ⟨n_cell⟩ depending on T), each containing a number of geometrically distributed secondaries (with ⟨n⟩ in a given cell given by the parameter P). Taken together, this convolution

^d The nearest in spirit to [8] is the unpublished attempt presented in [14], providing BEC in a not so natural way as [8]. The two others are [15].
^e Those interested in details should consult [9] directly.
of poissonian and geometrical distributions results in a well defined Negative Binomial (NB) distribution of the produced secondaries^f. However, in order to get C2(Q) with a characteristic width ΔQ ~ 1/R (corresponding to a "radius" R) one has to allow for a spreading ΔE of the energies Ei of particles belonging to the k-th cell. Let us stress at this point that it is precisely this spreading which finally translates into the dimensional parameter R, the main subject of interest when interpreting experimental results for C2 [4,5,6,7]. The MCEG proposed in [8] has the same structure, with particles selected according to the Bose-Einstein distribution, P(Ei) ∝ exp[ni(μ - Ei)/T] (ni is their multiplicity and Ei their energies; the two parameters, "temperature" T and "chemical potential" μ, correspond to the previous T and P), and with like particles located in cells of a fixed size δy (in the longitudinal, i.e., rapidity y, space), which is the third parameter, corresponding to the ΔE above. It turns out that in this approach one gets at the same time both the correct BEC pattern (i.e., correlations) and fluctuations (as characterized by the observed intermittency pattern). This is a very strong advantage of this model, which is so far the only example of a hadronization model in which Bose-Einstein statistics is not only included from the very beginning on a single-event level, but is also properly used in getting the final secondaries. In all other approaches at least one of the above elements is missing. The serious shortcomings of the method are, however, the numerical difficulty of keeping energy-momentum conservation as exact as possible, and its limitation to this specific event generator only^g.
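The toy hadronization scheme described above is easy to simulate. The sketch below is our own minimal illustration, not the generator of the text: the function names and the parameter values M, m, T, P are invented for the example. It only checks the qualitative point made above: switching on the geometric bunching enhances the multiplicity fluctuations.

```python
import math
import random

def hadronize(M, m, T, P, rng):
    """Toy hadronization of a mass M at rest into neutral particles of mass m.

    Energies are drawn with Boltzmann weight exp(-E/T) until M is used up.
    With P = 0 each particle is drawn independently (Poisson-like counts);
    with P > 0 every selected energy is repeated with probability P until
    the first failure, i.e. geometric bunching mimicking bosons."""
    used, n = 0.0, 0
    while True:
        E = m - T * math.log(1.0 - rng.random())  # shifted exponential, E >= m
        if used + E > M:
            return n
        used += E
        n += 1
        # bunching: add further particles of the *same* energy E
        while rng.random() < P and used + E <= M:
            used += E
            n += 1

def moments(P, trials=2000, rng=None):
    """Mean and variance of the event multiplicity for a given bunching P."""
    ns = [hadronize(100.0, 0.14, 0.2, P, rng) for _ in range(trials)]
    mean = sum(ns) / len(ns)
    var = sum((x - mean) ** 2 for x in ns) / len(ns)
    return mean, var

rng = random.Random(42)
m0, v0 = moments(0.0, rng=rng)   # no bunching: variance/mean stays modest
m1, v1 = moments(0.7, rng=rng)   # bunching on: variance/mean grows markedly
```

With bunching switched on, the relative width of P(n) grows, which is precisely the tendency towards C2 = 2 of the geometrical distribution discussed above.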
3 BEC - our proposal: general ideas
In [10] we have proposed a new method of numerical modelling of BEC, which we shall shortly introduce now (some of its elements were already formulated in [9]). It was conceived as an additional element of any MCEG describing multiparticle production processes. So far it has been tested only on a special hadronization scheme proposed by us (the CAS model [18]) and on the
f) This can shed new light on NB distributions and their different applications as discussed during this conference, with, for example, our cells being the equivalent of clans in the standard approach to NB distributions. A similar concept of elementary emitting cells has also been proposed in [9].
g) Notice that in our simple example, which apparently did not show these shortcomings, we considered only one type of produced secondaries. When one attempts to incorporate all charges and to keep energy-momentum conserved, our example also becomes intractable in practice.
JETSET model [20] for e+e- processes (to be shown below). We aimed at a fast and universal algorithm providing BEC among the secondaries produced by a given MCEG already on an event-by-event basis.
Our way of reasoning was as follows: BEC is a quantum mechanical phenomenon, whereas all MCEGs are of classical character. Therefore, to mimic BEC one has to give up a part of the information provided by the MCEG, which usually gives us the energy-momenta, space-time positions and charges (Qi) of the produced secondaries. All methods [1,2,4,5,6,7] in one way or another change the first two^h. We propose to keep them intact and to change, instead, the original charge assignment provided by the MCEG. This is done in the following way [10]. Suppose that our MC event generator provides us with N(+), N(-) and N(0) positive, negative and neutral particles, uniformly distributed and showing no BEC pattern (cf. Fig. 1, left panel). We now change their charge allocation (keeping the same N(+), N(-) and N(0)), obtaining the picture shown in the right panel of Fig. 1. The like charges are in a visible (albeit strongly exaggerated) way bunched (correlated) together, leading to a signal of BEC. What we have done is the following: (a) we have given up the (not directly measurable) part of the information provided by the event generator concerning the charge allocation to the produced particles, and (b) we have allocated charges anew in such a way as to keep the like charges as near in phase space as possible (keeping also the total charge of each kind the same as the original one). In this way we have formed objects which we shall call in what follows elementary emitting cells (EEC) [9], each containing only particles of the same sign and belonging to the same state. In Fig. 1 they are shown as separate in phase space, but in principle they can overlap.
Particles belonging to such a cell are supposed to be in the same state (the fact that their momenta differ from each other reflects only the natural spreading of such a state in momentum space, and precisely this spreading is the source of the analogous complementary spreading in position space and eventually of the observed structure of C2(Q)). The actual implementation of our proposition (algorithm) is presented
h) This is especially visible in works claiming to describe BEC from the very beginning in a quantum mechanical way [17].
in detail in [10]. Here we shall discuss some further points of the proposed approach and present new results of its application to the known algorithms for hadronization applied to the e+e- data. We would like to stress the point that the basic entity here is not so much the hadronizing source but the EEC mentioned above, and C2(Q) is in our approach directly sensitive to the number of such cells and to their mean occupation. There is no direct dependence on the "size of the hadronizing source"^i. On the other hand, our approach automatically contains BEC of all orders (in practice, the highest order accounted for is given by the highest multiplicity one can reach in a given EEC).
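The charge-reallocation step itself can be sketched in a few lines. This is our own schematic reading of the idea described above, not the code of [10]: the cell-joining probability P, the use of rapidity as the ordering variable, and all names are assumptions made for the illustration.

```python
import random

def reassign_charges(rapidities, charges, P, rng):
    """Reallocate the *existing* charges among the same particles so that
    like charges cluster in phase space (here: along rapidity).

    Momenta are kept intact; only the charge labels move.  Particles are
    visited in rapidity order; each joins the EEC currently being filled
    with probability P, otherwise a new cell is opened with a fresh charge.
    The multiplicities N(+), N(-), N(0) are conserved by construction."""
    pool = list(charges)          # multiset of charges still to hand out
    rng.shuffle(pool)
    order = sorted(range(len(rapidities)), key=lambda i: rapidities[i])
    new = [0] * len(rapidities)
    current = None
    for i in order:
        # open a new cell when there is no cell yet, on the first failure,
        # or when the current charge species is exhausted
        if current is None or rng.random() >= P or current not in pool:
            current = pool[0]
        pool.remove(current)
        new[i] = current
    return new

rng = random.Random(1)
ys = [rng.uniform(-3.0, 3.0) for _ in range(30)]   # rapidities from some MCEG
qs = [+1] * 10 + [-1] * 10 + [0] * 10              # original charge labels
new_qs = reassign_charges(ys, qs, 0.6, rng)
```

By construction each EEC occupies a contiguous stretch in rapidity, so like charges end up bunched while every charge multiplicity stays exactly as the original generator produced it.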
4 BEC - our proposal: numerical results
To illustrate how our prescription works we present in Fig. 2 a sample of results using as the initial source of secondaries the CAS hadronization model developed by us some time ago [18] and the standard JETSET model [20]. This very preliminary attempt, limited in scope (for example, for simplicity only direct pions were produced by both the CAS and JETSET MCEGs), shows clearly the main points of our method mentioned before. The EECs were formed according to the algorithm given in [10], using as a parameter the probability that a newly selected particle will join the particular EEC currently being filled. This probability was kept constant in the case of CAS (and equal to P, as indicated in Fig. 2(a)), whereas in the case of JETSET it was given by P = exp(-E/T) (where E is the energy of the particle considered at a given moment) with the value of the parameter T as indicated in Fig. 2(b). In the case of CAS the so-called "two split-type sources" (cf. [10] for details) were used^j. Together with fits to the sample of DELPHI data [19] are shown the corresponding distributions of EECs, P(Ncell), and of particles in such cells, P(npart).
i) Notice that the "size" R (or its variants) discussed in all known analyses of BEC (both data and formulas) represents in the first place the distance between the two emitting points, R = r1 - r2, and is therefore only indirectly sensitive to the "true" size of the source. Nevertheless, it is customarily referred to as the "size of the source".
j) It means that the original source was supposed to consist of two sub-sources of equal mass, each cascading independently; our algorithm was then applied to all particles without discriminating which source they came from. In effect one obtains a much denser source (i.e., particles are packed more closely in phase space) than in the case of a single cascade only [10]. The slope of C2(Q) turns out to be very sensitive to this [10].
The opposite situation, in which particles from different sources are supposed not to show BEC, is referred to as "indep-type sources" and can, for example, easily explain the BEC puzzle in the W+W- production data [10].
Figure 2. Application of our method to fit DELPHI data [19] on e+e- annihilation using the CAS [18] (a) and JETSET 7.4.10 (with standard parameters) [20] (b) hadronization models. Panels (c) and (d) show the corresponding distributions of EECs, P(Ncell), and of the like-charge particles allocated to such cells, P(npart). The best poissonian and geometrical fits to, respectively, P(Ncell) and P(npart) are also shown.
Notice that, to very good accuracy, the distribution of particles in a single EEC is of geometrical type (i.e., according to our discussion before, corresponding to bosonic particles). This follows directly from the construction of our algorithm [10] (one adds new particles to the EEC with probability P until the first failure; in the ideal situation ⟨npart⟩ = P/(1 - P)). The strength of BEC will in our case depend on how many particles (on average) are allocated to an EEC and how many EECs one has (on average) in an event. One can see in Fig. 2 that the first number is rather low, ⟨npart⟩ ≈ 1.2 - 1.3; the probability to get 3 or more particles in a single EEC is only of the order of 1% (or less). It means that multi-particle BEC are not a very prominent feature under normal circumstances and, if present at all, should probably be looked for in very high multiplicity events (which, on the other hand, are very rare [21]). The second number is of the order of 7. Notice that the EECs are distributed essentially in a Poisson-like manner. It is then interesting to notice that, according to what we have already mentioned, this means that the total multiplicity distribution is of NB type [16]. Actually, this is the distribution provided by the original MCEG we are working with (in our case CAS [18] and JETSET [20]), which is then reproduced by the action of our generator modelling the BEC (as it should be)^k.
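The convolution just described, a Poisson-like number of EECs each holding a geometrically distributed number of particles, can be checked numerically. The sketch below is self-contained and uses the convention that each opened cell holds at least one particle; the values of lam and p are illustrative choices of our own, picked to give roughly 7 cells and a mean occupancy near 1.3, as in the text.

```python
import math
import random

def event_multiplicity(lam, p, rng):
    """One event: a Poisson(lam) number of EECs, each holding a
    geometrically distributed number (>= 1) of like-charge particles."""
    # Poisson sampler via Knuth's product-of-uniforms method
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod < L:
            break
        k += 1
    n = 0
    for _ in range(k):
        cell = 1
        while rng.random() < p:   # add particles until the first failure
            cell += 1
        n += cell
    return n

rng = random.Random(7)
lam, p = 7.0, 0.25                # ~7 cells, mean occupancy 1/(1-p) ~ 1.33
ns = [event_multiplicity(lam, p, rng) for _ in range(20000)]
mean = sum(ns) / len(ns)
var = sum((x - mean) ** 2 for x in ns) / len(ns)
# compound Poisson-geometric (NB-like): Var/mean = (1 + p)/(1 - p) > 1,
# whereas a pure Poisson multiplicity would give Var/mean = 1
```

The overdispersion Var/mean = (1 + p)/(1 - p) is the NB signature: the like-charge bunching inside the cells broadens the total multiplicity distribution beyond poissonian.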
5 Summary
Our presentation of BEC is surely not the orthodox one, i.e., we do not see BEC as the immediate source of information on the space-time characteristics of the hadronizing object. We approach the problem from the quantum-statistical point of view, stressing rather the behaviour of the like-charged bosons in momentum space (which, after all, is what is really measured!), which must be incorporated in the MCEG whenever one aims to model the true BEC. Our particles feel the BEC only when they are in the same state (this allows for sizeable differences in their momenta because of the spreading of the momenta of the wave packets representing such states). It means that our primary objects are EECs rather than the whole emitting hadronic source itself. And it is the property of the average EEC which is responsible for the finally observed structure of the C2(Q) function. Most of our presentation deals with a kind of universal algorithm [10], which
k) It is worth stressing at this point that, according to our discussion in [9], a constant number k of EECs with a geometrical distribution of particles in each of them leads to an NB (and, in the limit of large k, to a Poisson) distribution, whereas binomially distributed EECs, with k limited for some reason, lead to the so-called modified negative binomial (MNB) multiplicity distributions [22], characterised by oscillating cumulant moments [23].
should be suitable as a simple addition to (almost) any MCEG^l. However, it is very likely that such an approach will fail when subjected to more serious scrutiny than has been applied so far (in the sense that its action will have to be accompanied by a re-parameterization of the original MCEG in order to get the right results, i.e., to correct for the deviations introduced by the BEC implementation [2,4,5,6,7]). In such a case, the only solution left seems to be to construct the MCEG from the very beginning in a way assuring that it models correctly the bosonic character of the produced secondaries, i.e., to follow and improve the program started with the work [8], along the lines presented at the beginning of this talk.
Acknowledgments
GW would like to thank Prof. N.G. Antoniou and all the Organizers of the X-th International Workshop on Multiparticle Production, "Correlations and Fluctuations in QCD", for financial support and kind hospitality.
l) What physical changes the action of our algorithm brings to the original MCEG used is discussed in full in [10].
References
1. W.A. Zajc, A pedestrian's guide to interferometry, in "Particle Production in Highly Excited Matter", eds. H.H. Gutbrod and J. Rafelski, Plenum Press, New York 1993, p. 435.
2. R.M. Weiner, Phys. Rep. 327, 249 (2000); Bose-Einstein Correlations in Particle and Nuclear Physics (a collection of selected articles), J. Wiley (1997); and Introduction to BEC and Subatomic Interferometry, Wiley (1999); U.A. Wiedemann and U. Heinz, Phys. Rep. 319, 145 (1999); T. Csorgo, in Particle Production Spanning MeV and TeV Energies, eds. W. Kittel et al., NATO Science Series C, Vol. 554, Kluwer Acad. Pub. (2000), p. 203 (see also: hep-ph/0001233); G. Baym, Acta Phys. Polon. B29 (1998) 1839; W. Kittel, Acta Phys. Polon. B32 (2001) 3927; K. Zalewski, Acta Phys. Polon. B32 (2001) 3973.
3. K.J. Eskola, Nucl. Phys. A698 (2002) 78.
4. L. Lonnblad and T. Sjostrand, Eur. Phys. J. C2 (1998) 165.
5. A. Bialas and A. Krzywicki, Phys. Lett. B354 (1995) 134; K. Fialkowski and R. Wit, Eur. Phys. J. C2 (1998) 691; K. Fialkowski, R. Wit and J. Wosiek, Phys. Rev. D58 (1998) 094013; T. Wibig, Phys. Rev. D53 (1996) 3586.
6. J.P. Sullivan et al., Phys. Rev. Lett. 70 (1993) 3000; K. Geiger, J. Ellis, U. Heinz and U.A. Wiedemann, Phys. Rev. D61 (2000) 054002.
7. B. Andersson, Acta Phys. Polon. B29 (1998) 1885 and references therein.
8. T. Osada, M. Maruyama and F. Takagi, Phys. Rev. D59 (1999) 014024.
9. M. Biyajima, N. Suzuki, G. Wilk and Z. Wlodarczyk, Phys. Lett. B386 (1996) 297.
10. O.V. Utyuzh, G. Wilk and Z. Wlodarczyk, Phys. Lett. B522 (2001) 273 and Acta Phys. Polon. B33 (2002) 2681; cf. also: O.V. Utyuzh, Fluctuations, correlations and non-extensivity in high-energy collisions, PhD Thesis, available at http://www.fuw.edu.p1/ smolan/p8phd.html.
11. See, for example, R. Loudon, The Quantum Theory of Light (2nd ed.), Clarendon Press, Oxford, 1983, or J.W. Goodman, Statistical Optics, John Wiley & Sons, 1985.
12. K. Fialkowski, in Proc. of the XXX ISMD, Tihany, Hungary, 9-13 October 2000, eds. T. Csorgo et al., World Scientific 2001, p. 357; M. Stephanov, Phys. Rev. D63 (2002) 096008.
13. K. Zalewski, Lecture Notes in Physics 539 (2000) 291; cf. also Acta Phys. Polon. B33 (2002) 2643 and references therein.
14. J.G. Cramer, Event Simulation of High-Order Bose-Einstein and Coulomb Correlations, University of Washington preprint (1996), unpublished.
15. W. Zajc, Phys. Rev. D35 (1987) 3396; R.L. Ray, Phys. Rev. C57 (1998) 2532.
16. See the talks by A. Giovannini and R. Ugoccioni and references therein.
17. H. Merlitz and D. Pelte, Z. Phys. A351 (1995) 187 and Z. Phys. A357 (1997) 175; U.A. Wiedemann et al., Phys. Rev. C56 (1997) R614; T. Csorgo and J. Zimanyi, Phys. Rev. Lett. 80 (1998) 916 and Heavy Ion Phys. 9 (1999) 241.
18. O.V. Utyuzh, G. Wilk and Z. Wlodarczyk, Phys. Rev. D61 (2000) 034007 and Czech. J. Phys. 50/S2 (2000) 132 (hep-ph/9910355).
19. P. Abreu et al. (DELPHI Collab.), Phys. Lett. B286 (1992) 201.
20. T. Sjostrand, P. Eden, Ch. Friberg, L. Lonnblad, G. Miu, S. Mrenna and E. Norrbin, Comp. Phys. Commun. 135 (2001) 238; and T. Sjostrand, L. Lonnblad and S. Mrenna, PYTHIA 6.2 - Physics and Manual, LU TP 01-21, hep-ph/0108264.
21. See the talk by J. Manjavidze and references therein.
22. N. Suzuki, M. Biyajima and G. Wilk, Phys. Lett. B268 (1991) 447.
23. N. Suzuki, M. Biyajima, G. Wilk and Z. Wlodarczyk, Phys. Rev. C58 (1998) 1720; M. Rybczynski, Z. Wlodarczyk, G. Wilk, M. Biyajima and N. Suzuki, hep-ph/9909380. See also the talk by I. Dremin and references therein.
Session on Phase Transitions in QCD
Chairperson: N. Schmitz
THEORY VERSUS EXPERIMENT IN HIGH ENERGY NUCLEUS COLLISIONS
ROBERT D. PISARSKI
High Energy Theory, Bldg. 510A, Brookhaven National Laboratory, Upton, NY 11973, USA
e-mail: pisarski@quark.phy.bnl.gov
I compare the present status of theory versus experiment for the collisions of nuclei at high energy. While some models can describe some aspects of the data, there are several notable features, first seen at RHIC, which cannot be described by any single model.
1. Introduction
The collisions of large nuclei may tell us what happens to QCD at high temperature (and/or density). In this talk I review the present status of how theory compares to experiment. For experiment, I take the status of things as they stood during Quark Matter 2002. To the best of my knowledge, no results which I quote have changed significantly since that time. For reasons of length, I concentrate on the results for the largest nuclei, with atomic number A ≈ 200. It is clear, however, that one cannot claim to understand AA collisions without a complete understanding of both pp and pA collisions as well.

2. Theoretical Expectations
Our understanding of QCD at nonzero temperature rests upon numerical simulations on the Lattice [1]. While dynamical quarks must certainly be included in any realistic simulation of QCD, at present the Lattice is not near the continuum limit if light, dynamical quarks are included. In part this is because of the nonlocal effects which dynamical quarks introduce, and which necessitate much greater computing resources, by at least several orders of magnitude. For the most part, however, it is due to a lack of human resources. The simulations are extremely involved, requiring
dedication over many years. And this dedication has not been rewarded, at least recently. Even without reliable data for QCD, this should not obscure what an important advance it is to know, with measurable certainty in the continuum limit, the thermodynamic behavior of the pure glue theory with three colors. Especially near the phase transition, the theory is in a strong coupling regime; there is no small parameter in which we can expand, or tune in some manner. It is only the Lattice upon which we can count. In the pure gauge theory, from the work of 't Hooft, Polyakov, and Susskind [2], the deconfining phase transition is rigorously associated with the spontaneous breaking of a global Z(3) symmetry, above a transition temperature Tc. Taking the string tension to be (400 MeV)^2, Tc ≈ 270 MeV, within errors which are, at present, ≈ ±5%. The errors in this ratio are not due to where Tc occurs; rather, they are due to uncertainties in measuring the string tension on the Lattice. Thus it is very possible that the errors could be reduced to ≈ ±1% with present techniques. The standard expectation for the phase transition is a confined phase of hadrons below Tc, and a deconfined "Quark-Gluon Plasma" above Tc. By a Quark-Gluon Plasma (QGP), I mean a gas of nearly ideal quark and gluon quasiparticles. These quasiparticles are dressed, by the interactions, into fields which propagate at less than the speed of light. (This is termed "thermal masses", in a somewhat confusing terminology, as these masses do not violate gauge invariance.) To a first approximation, the thermodynamics of quarks and gluons is very different from that of hadrons. Thus if the theory went directly from a hadronic phase to a QGP phase, it would be most natural to expect a strongly first order phase transition. This might be modeled, for example, by a bag model, in which a bag constant is added to the free energy of the quarks and gluons, and the hadronic pressure is neglected.
In a bag model, the jump in the energy density, versus that for free quarks and gluons, is 4/3. One would also not expect any notable increase in correlation lengths near Tc. Thus the string tension just below Tc would be near its value at zero temperature. Above Tc, gauge invariant correlation lengths should be slowly varying with temperature, say with the logarithms expected from perturbation theory in an asymptotically free theory. For a strongly first order transition, above Tc hadrons discover that it is thermodynamically favorable to dissociate into their constituents, so they just fall apart, in a manner which produces a discontinuity in the first derivative of the free energy (which is the jump in the energy density).
The great surprise from the Lattice is that for three colors, this is not what seems to occur. In terms of the latent heat, the jump in the energy density, versus that of a free theory, is ≈ 1/3, instead of 4/3 in a bag model. This decrease is not that dramatic, and so one might think that not much interesting is going on. To understand the theory, it is necessary to measure the change in correlation lengths about Tc. In the hadronic phase, one finds that as the temperature increases, the string tension decreases by about an order of magnitude as one goes from zero temperature to just below Tc. In the deconfined phase, one can define a (gauge-invariant) screening mass from the two-point function of Polyakov loops. From temperatures of 1.2Tc on up, this mass changes slowly, as one would expect from the logarithms in an asymptotically free theory. (Although close agreement with perturbation theory, without resummation, does not occur until much higher temperatures.) But as one goes from 2Tc to just above Tc, the screening mass drops by an order of magnitude. Thus it appears that there is a third region, sandwiched between the hadronic phase and the QGP. Using the terminology of 't Hooft, about Tc the modes which become light are electric Z(3) glueballs; these are related to Polyakov loops. Magnetic Z(3) glueballs, related to 't Hooft loops, remain heavy about Tc, with masses which change little with temperature. (In this discussion, any mass is divided by temperature, so there is no cheating involved.) This is really the only way in which one can have a weakly first order transition: there must be a nearly critical mode which controls the transition from one phase to another. That this is the appropriate mode is well known from the analysis of Svetitsky and Yaffe [3]. In particular, for two colors, the deconfining transition, as studied by Damgaard, Engels, and others [4], seems to be truly second order.
Critical exponents, as measured directly from the four dimensional gauge theory, agree with the prediction of the Ising universality class to within two significant figures. To gain more insight, one can ask what happens for more than three colors. The standard expectation is that the transition is strongly first order. For example, consider the limit where the number of colors becomes infinite, N → ∞. As noted first by Thorn [5], the hadronic pressure is of order one, while the pressure in the deconfined phase is of order N^2. So one would expect a strongly first order transition, with a latent heat which is also of order N^2. This agrees with a Lattice analysis of Gocksch and Neri [6], who argue, from a reduced Eguchi-Kawai model, that hadronic masses are constant in the confined phase.
Lattice simulations for four colors appear to support this. Simulations find a jump in the energy density, divided by the ideal value, which is ≈ 2/3 [7]; this is twice as large as for three colors, and half way to the value of 4/3 in the bag model. I suggest, however, that this may be a hasty conclusion. After all, even for three colors it took ten years, until the results from APE and Columbia, before it was clear that the transition is weakly, and not strongly, first order. Simulations on small lattices consistently found latent heats which were much larger than on large lattices. The same may well be true for four colors, where other technical problems (such as a nearby bulk transition) also intrude. So what might happen for four or more colors? Let l be the order parameter for the deconfining transition. In a related spin model, Kogut, Snow, and Stone [8] have shown that the potential in mean field theory is of a special form:
V = m^2 |l|^2 + λ (l^N + (l*)^N) + ...   (1)
Near the phase transition, the mass for the l field vanishes. The absolutely remarkable feature of this potential is that many terms vanish: for large N, there are no terms proportional to (|l|^2)^2, (|l|^2)^3, etc. In mean field theory, this potential is very flat about Tc. The jump in l may be large, but this is misleading, as the height of the barrier is small. This was a mean field analysis [8], in a special spin model. So perhaps it is just a peculiarity. However, perhaps it is more general; maybe the potential above is generic. If so, it leads to the following conjecture: that for N ≥ 4, the transition is of second order, but with a width of the critical region which shrinks like 1/N, measured in units of the reduced temperature, |T - Tc|/Tc. There is some support for this conjecture. In comparing the second order transition for two colors to the nearly second order transition for three colors, one finds that the critical region is smaller for three colors than for two. This is seen both below and above Tc. Below Tc, the point at which the string tension has decreased to one half its value at zero temperature occurs at a lower reduced temperature for two colors. Above Tc, the spike in the energy density minus thrice the pressure, divided by T^4, occurs at a higher reduced temperature for two colors. In any case, the conjecture certainly yields a testable prediction for four colors: one has to go much closer to Tc to see what might be critical behavior. Without care, one would conclude that the transition is of first order. Why should one care at all about the possibility of a nearly critical
region for three colors? Even optimistically, the critical region is at most within 20% of Tc; to be pessimistic, let us call it within 10% of Tc. That is, below ≈ 0.9Tc, the hadronic description is appropriate; above ≈ 1.1Tc, there is some sort of QGP. Polyakov loops matter, at best, only in a really narrow window about Tc. The reason why this matters is not only because Polyakov loops control the order of the transition. Rather, it is because they manifestly control hadronization, which occurs at Tc. Further, if they do control the region about Tc, and if their potential changes rapidly, there can be characteristic signals. There is a hint of this in the RHIC data, from the HBT radii, discussed below [10]. More to the point, what about QCD, where the effects of dynamical quarks cannot be neglected? At present, the Lattice can only simulate dynamical quarks if the pions are too heavy. Estimates are Tc ≈ 175 MeV, with no true phase transition in the thermodynamic limit, but only crossover behavior. It is possible that a first order transition reappears for physical pions, closer to the continuum limit [9]. In principle, the transition can be more strongly first order than in the pure glue theory, since with the addition of three flavors of massless quarks, the ideal gas pressure goes up by a factor of three. (It is useful to compare to the ideal gas pressure, since asymptotic freedom implies that the pressure is ideal at infinite temperature.) Even so, gluons may dominate the thermodynamic behavior of QCD, in a manner which is less obvious. Present simulations with dynamical quarks demonstrate an approximate universality, termed "flavor independence". This is the observation that, as a function of T/Tc, the ratio of the true pressure to the corresponding value in an ideal gas of quarks and gluons is nearly universal, independent of the number of flavors. This is remarkable, given that Tc is a factor of two smaller, and the ideal gas term a factor of three larger.
The reason this happens is that in the confined phase, the pressure of pions, and other Goldstone bosons, remains small relative to that of the quarks and gluons. Because the phase transition is weakly first order, a small pressure from the pions can wash it out completely. Even with light dynamical quarks, and no true phase transition, an approximate Tc is easy to define, from the rapid increase of the pressure in the deconfined phase. This suggests that the order parameter, and indeed the potential for gluons, still controls the dynamics with dynamical quarks. All of these aspects are only known through numerical simulations.
There would be no reasonable way to anticipate either the weakly first order transition for the pure glue theory with three colors, nor how flavor independence might apply with dynamical quarks. There have also been recent advances with simulating systems at nonzero (quark) chemical potential p. The Lattice can generate a thermal distribution of quarks, but with standard Monte Carlo techniques, it cannot fill a Fermi sea. However, for a quark with energy E , if the temperature T is much greater than p, then the Fermi-Dirac distribution function at p # 0 is approximately that for p = 0: 1 e(E-P)IT
+1
1
w @IT
(2)
+1
In other words, at high temperature it doesn't matter much whether or not you fill the Fermi sea. Thus the Lattice can compute at T >> μ. Considering just the free quark propagator, temperature enters as πT, versus μ for the chemical potential. Thus the method may well work up to rather high values of μ, perhaps πT/2 or so. Numerical simulations find that by μ ~ 200 MeV, Tc has only decreased a small amount, to ~160 MeV [11]. This Lattice data is extremely exciting. Of course it only applies for large temperatures, and tells us nothing about what happens at zero temperature. But we have data at zero temperature: a chemical potential doesn't matter until it exceeds the mass of the particle. For hadrons, then, nuclear matter does not condense until the quark chemical potential exceeds one third the mass of the nucleon, μ ≥ 313 MeV. One has to correct for the binding energy of nuclear matter, but this is infinitesimal on this scale, at most 5 MeV. If Tc is still 160 MeV at μ ~ 200 MeV, it is a hint that the transition at T = 0 is significantly higher than ≈ 313 MeV. While it is too long a story to go into here [9], if the transition at T = 0 occurs at large μ, say ≈ 400 MeV or greater, then it is very possible that there is a new class of hadronic stars, composed primarily of quark matter.
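The estimate behind eq. (2) is easy to verify numerically. The energies below (in MeV) are illustrative values of our own, not numbers taken from the text; the point is only that the mu-dependence of the occupation number fades once T is well above mu.

```python
import math

def fermi_dirac(E, T, mu=0.0):
    """Fermi-Dirac occupation number at temperature T, chemical potential mu."""
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

def rel_dev(E, T, mu):
    """Relative error made by setting mu = 0 in the distribution."""
    full = fermi_dirac(E, T, mu)
    return abs(full - fermi_dirac(E, T, 0.0)) / full

# As T grows past mu, filling the Fermi sea hardly changes the occupation:
# this is what lets the Lattice compute at T >> mu.
E, mu = 1000.0, 200.0                 # MeV, illustrative
low_T = rel_dev(E, 300.0, mu)         # T comparable to mu: sizeable deviation
high_T = rel_dev(E, 900.0, mu)        # T >> mu: mu barely matters
```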
3. Experimental Overview

The SPS at CERN covers energies from √s/A: 5 → 17 GeV. There are two notable results for AA collisions in this energy regime [12]:
J/ψ suppression: the number of J/ψ's is smaller in the most central collisions, versus the extrapolation from peripheral collisions, or from collisions with smaller A. The effect is most striking for the largest nuclei.
Excess dileptons below the ρ: the rate of e+e- pairs exceeds that in conventional hadronic models. The excess can be explained by a ρ meson
whose width increases due to interactions. The effect may be due to density, however, and not to temperature, since it is more prominent not at high, but at lower, energies. This also supports interest in going to even lower energies, such as at the proposed GSI collider at Darmstadt, which has recently been approved. At BNL, RHIC has run at energies of √s/A = 55 GeV (briefly), at 130 GeV during Run I, and at 200 GeV during Run II. Results from Run I were first presented at Quark Matter 2001; those from Run II, at Quark Matter 2002. There is one notable change expected between the SPS and RHIC, which was predicted many years ago by Bjorken. At the SPS, the particle multiplicity in AA collisions is a single peak about zero rapidity. By RHIC energies, a Central Plateau is expected to open up, in which physics is (approximately) boost invariant, independent of rapidity. Away from the incident nucleons of the fragmentation region, the Central Plateau is where a system at nonzero temperature, and almost zero quark density, might emerge. Perhaps even deconfined matter, as the fabled Quark-Gluon Plasma. At RHIC, if one looks just at the multiplicities for identified particles, then one does find that a Central Plateau opens up. In all, particles are spread out over ≈ ±5 units of rapidity. Looking just at the multiplicities for identified particles, they are nearly constant over ≈ ±1 unit of rapidity [13,14]. However, if one looks at, say, the average transverse momentum, pt, for pions, it is only constant over a region half as large, ±0.5 units of rapidity [13]. Why the number of pions is constant, but their average momentum changes, is not clear. This certainly shows that for AA collisions, one has to go to higher energies in order to see a larger Central Plateau. But as in many other cases, it also shows that experimental reality is often much more complex and interesting than indicated by naive theoretical expectations.
It certainly demonstrates that at RHIC, one wants to study properties of identified particles over all rapidities. Several common prejudices, held before the RHIC data came out, are now extinct. Cascade models had tended to predict large increases in multiplicity. This is not seen; the increase in multiplicity is relatively small, suggesting a logarithmic increase typical of hadronic collisions. It was also believed that the QCD phase transition might be strongly first order. If so, and the system went through long-lived metastable states as it supercooled, there would be large increases in the HBT radii; this is also not observed. At the SPS, the real surprises were from electromagnetic data. As of
yet, there is not much electromagnetic data from RHIC. Even so, there are four notable features of the data: "High"-pt suppression: the number of particles with transverse momentum pt between 2 - 10 GeV is suppressed, relative to that in pp times the number of binary collisions.15,16,17 The overall suppression is by factors of 2 - 4. The suppression is now seen to be approximately constant for these pt.16 While the effect was predicted before RHIC turned on, it was not expected that it would be constant over such a large range of pt. This is opposite to what happens at the SPS, where high-pt particles are not suppressed, but enhanced by factors of 2 - 3, through the Cronin effect. Elliptic flow: this is a measure of momentum anisotropy in non-central collisions. Hydrodynamics predicts elliptic flow is linear in pt for pions. This is seen up to pt ≈ 1.5 GeV, as is the hydrodynamic behavior of protons. For pt: 2 → 6 GeV, though, the (total) elliptic flow is flat.18,19 This is not expected from hydrodynamics, or indeed any other model, and is one of the great surprises of the RHIC data. HBT radii: pion interferometry gives a measure of the spatial size(s) of the system. Hydrodynamics predicts that a certain ratio of two sizes, Rout/Rside, is greater than one, and increases as pt does. Instead, experiment finds that Rout/Rside decreases with increasing pt, and is about one by pt ≈ 400 MeV.19,20 HBT radii indicate that hadronization can be modeled as a type of "blast" wave.20 This description was due to the experimentalists because of the data, and was not anticipated beforehand. Jet absorption: at these energies, jets are seen in pp collisions, but an angular correlation finds that in AA collisions, the backward jet is strongly suppressed.21 That is, in AA collisions there is "stuff" which eats jets. It must be emphasized that there is striking agreement between different experiments for many quantities of interest.
This is a testimony to the experiments themselves, who as usual do not believe in the setup or analysis of others. It is also an important principle for the field to remember: new results can only be believed when measured by different groups. The other notable feature of the experiments is their precision. Consider, for example, HBT radii. If one measures them only to, say, ±50%, one cannot differentiate between hydrodynamic behavior and something new. When the experimental errors have been beaten down to ±5%, then it becomes possible to rule out many models. The really important challenge to theory is to incorporate all of these measurements into one consistent framework. The tendency of the field is to have one model to describe one feature of the data, another model to
describe another feature, with little overlap. While this may be important in understanding the data as it first appears, it cannot remain as the favored approach. Of course, it must be admitted that my interest in the experimental results is from a rather distant theoretical perspective. And it is always easier to criticize than to construct.
4. Statistical Models, and the Cretan Test
For the most central collisions at zero rapidity, an amazing summary of the single-particle spectra is a thermal fit.13 Fits in which chemical freeze-out occurs at the same temperature as kinetic freeze-out are favored, with T ≈ 165 MeV and μ ≈ 14 MeV.22 There is an excess of pions at low momentum in central collisions, although not in peripheral collisions. This excess is usually described as due to resonance decays, but this can't be right, as the same would apply for peripheral collisions. To describe the pion excess, a chemical potential for pions must be introduced. This is manifestly a parameter used to describe non-equilibrium effects. The approximate equality of the temperatures for chemical and kinetic freeze-out is peculiar. Any scattering in a hadronic phase produces chemical freeze-out at a higher temperature than that for kinetic freeze-out, so the data suggest that both temperatures are really one of hadronization, with little rescattering in a hadronic phase. This is one hint of possible non-equilibrium behavior at RHIC. The temperature for chemical freeze-out is consistent with data at lower √s/A. From energies of √s/A of a few GeV on up, chemical freeze-out occurs along a curve on which the energy per particle is constant, about ≈ 1 GeV. In the plane of T and μ, even if the hadronization temperature agrees with Tc at μ = 0, it is distinctly lower than Tc(μ) for μ ≠ 0. For example, chemical freeze-out at AGS energies gives about μ = 200 MeV, and a hadronization temperature which is at most ≈ 120 MeV; from the Lattice, though, at this μ the corresponding Tc is much higher, ≈ 160 MeV.11 To describe the behavior of particles with increasing mass, it is necessary to assume that all hadrons are emitted with respect to a local moving rest frame. At RHIC, the radial velocities of this local rest frame go up to ≈ 2/3 c; averaged over radius, they are about ≈ 1/2 c.
This can be seen by eye: versus pt, single-particle distributions for pions turn up, while those for protons (say) turn down. The radial dependence of the velocity of the local rest frame is not constrained by the data, and is fit to agree with the observed spectrum.
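The turn-up/turn-down behavior can be illustrated with a standard boosted-thermal ("blast-wave") form of Schnedermann-Sollfrank-Heinz type; the temperature and boost below are illustrative numbers of the size quoted in the text, not the actual fits of the references:

```python
import numpy as np
from scipy.special import i0, k1

T = 0.165    # freeze-out temperature in GeV, as quoted in the text
rho = 0.62   # boost angle: tanh(rho) ~ 0.55, roughly the mean radial velocity ~ 1/2 c

def spectrum(pt, m, rho):
    """dN/(mt dmt) up to normalization; rho = 0 recovers a static thermal source."""
    mt = np.sqrt(pt**2 + m**2)
    return mt * i0(pt * np.sinh(rho) / T) * k1(mt * np.cosh(rho) / T)

# Ratio of yields at pt = 1.0 vs 0.5 GeV: radial flow flattens the spectra,
# and more so for the heavier particle (protons "turn down" at low pt).
ratios = {}
for m in (0.140, 0.938):   # pion and proton masses in GeV
    ratios[m] = (spectrum(1.0, m, rho) / spectrum(0.5, m, rho),
                 spectrum(1.0, m, 0.0) / spectrum(0.5, m, 0.0))
    print(m, ratios[m])
```

With flow turned on, the proton spectrum is much flatter than the static thermal one, and flatter (in this relative sense) than the pion spectrum, which is the qualitative behavior described above.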
The same is true of hydrodynamical models.13 This is why they are fits. For example, consider how the single-particle spectra change with rapidity, or centrality. While the temperature might be the same, the local flow velocity now depends not just upon the radius, but also upon the rapidity, centrality, etc. It is untenable to consider only zero rapidity, and ignore the rest. A statistical model implies not only what the chemical composition is, but, as well, the pt-dependence of the single-particle spectrum. Of course a thermal distribution should only hold up to some upper scale, perhaps 1 - 2 GeV. It would be interesting to compute the ratios of moments of transverse momenta:

r_n = ⟨pt^n⟩_exp / ⟨pt^n⟩_th − 1 .
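Such moment ratios can be sketched numerically; the thermal moments and the mock "data" below are assumptions (a toy pt·exp(−pt/T) thermal shape plus a small hard tail), not actual RHIC spectra:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
T = 0.165   # GeV; moments below assume dN/dpt ~ pt exp(-pt/T), a toy choice

def thermal_moment(n):
    # <pt^n> for dN/dpt ~ pt exp(-pt/T) is T^n (n+1)!
    return T**n * factorial(n + 1)

# Mock "experimental" sample: thermal bulk plus a few hard, power-law particles
soft = rng.gamma(2.0, T, 100000)      # gamma(2, T) has density ~ pt exp(-pt/T)
hard = 2.0 + rng.pareto(4.0, 1000)    # small power-law tail above 2 GeV
pt = np.concatenate([soft, hard])

rs = {}
for n in range(1, 6):
    rs[n] = np.mean(pt**n) / thermal_moment(n) - 1.0
    print(n, round(rs[n], 2))
# r_n stays small at low n and grows large at high n,
# where fluctuations from hard momentum processes dominate
```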
I term this the "Cretan" test, since I thought of it at this meeting. Here exp and th denote, respectively, moments computed from experiment, versus a thermal distribution (with some assumed velocity profile). By definition, if the overall number of particles is thermal, r_0 = 0. For n > 1, r_n is a dimensionless series of pure numbers; the fit is good until r_n is no longer small. This must happen at some large n, since eventually fluctuations from hard momentum processes dominate. It would be interesting to determine these ratios from experiment, for all collisions in which a thermal fit works. This ratio is identical to that used in perturbation theory, where one compares theory to experiment to form a dimensionless number. As such, it is a much more stringent test than is usually applied; what is usually plotted is the number of particles on a logarithmic plot, and one can hide a lot on a log scale.

5. Hydrodynamics and Elliptical Flow
A dynamical realization of a thermal fit is a hydrodynamical model. A measure of hydrodynamic behavior is given by elliptic flow. For a peripheral collision, in which the two nuclei only partially overlap, an "almond" is formed in the plane perpendicular to the reaction plane. As the system hadronizes, this spatial anisotropy turns into a momentum anisotropy, with the average momentum larger along the narrow part of the almond than along the long part. This elliptical anisotropy has been measured18,19 as a function of centrality and pt; overall, the values at RHIC are about twice as large as at the SPS. By geometry, elliptic flow vanishes for zero centrality, as nuclei which completely overlap cannot have any anisotropy.
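Operationally, elliptic flow v2 is the second Fourier coefficient of the azimuthal distribution relative to the reaction plane, v2 = ⟨cos 2(φ − Ψ)⟩. A minimal sketch, with an assumed input anisotropy of the size seen at RHIC:

```python
import numpy as np

rng = np.random.default_rng(1)
v2_true = 0.06   # illustrative magnitude; Psi (reaction plane) = 0 by construction

# Sample dN/dphi ~ 1 + 2 v2 cos(2 phi) by accept-reject
phi = rng.uniform(-np.pi, np.pi, 400000)
weight = 1.0 + 2.0 * v2_true * np.cos(2.0 * phi)
keep = rng.uniform(0.0, 1.0 + 2.0 * v2_true, phi.size) < weight
v2_est = np.mean(np.cos(2.0 * phi[keep]))
print(v2_est)   # recovers ~ 0.06
```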
Hydrodynamic models predict that for pions, the elliptic flow depends linearly on the transverse momentum. The local flow velocity also predicts the behavior of elliptic flow for heavier particles, such as protons. Both predictions are borne out by the experimental data, for momenta up to pt ≈ 1.5 GeV. Versus centrality, as measured by the number of participants, hydrodynamics predicts that the elliptic anisotropy is linear near zero centrality, which is observed. When the number of participants is half the maximum value, though, hydrodynamics significantly overpredicts the elliptic flow. The assumption of ideal hydrodynamics is not supported by estimates of the viscosity, nor does it exclude the possibility of fits to the single-particle spectra with non-ideal hydrodynamics.24 Experimentally, it is unremarkable that hydrodynamics fails above pt ≈ 1.5 GeV. Hydrodynamics should break down at short distances; that it works down to ≈ 0.13 fm is actually pretty good. Rather, the surprise is that the elliptic anisotropy is approximately constant for pt: 2 → 6 GeV. In QCD, one expects cross sections to peak at some momentum scale on the order of a few GeV, and then to fall off with the powers characteristic of QCD. It is very difficult to imagine how anything flat in pt could ever emerge.
6. A “Blast” Wave from HBT Radii
For identical particles, a length scale can be determined by pion interferometry through the Hanbury-Brown-Twiss (HBT) effect.20 This length scale is related to the surface at which the pions last interacted. Since there is axial symmetry to a heavy ion collision, there are three distances, corresponding to along the beam direction, Rlong, along the line of sight, Rout, and perpendicular to that, Rside. One of the big surprises from RHIC is that the HBT radii did not grow much between √s/A = 17 and 200 GeV. The change in Rlong Rside Rout is, more or less, the same as the increase in multiplicity, ≈ 50%. This can be taken as direct experimental evidence for the absence of a strongly first order phase transition in QCD, completely independent from the Lattice. If the transition were strongly first order, as it went through Tc the system would supercool and grow in size. Estimates of the sizes of the system before QM'01 ranged up to tens of fermi, which are not seen. Unfortunately, putting a bound on the latent heat of the transition is manifestly a model dependent exercise. Still, it would be an amusing
exercise. The details of the HBT radii, however, have proven to be much more interesting than expected. Before the RHIC data, it was thought that the hadronic firetube from an AA collision might be like a "burning log". But instead of smouldering, the RHIC data suggests that the log blows up. In particular, the results from RHIC appear to contradict any hydrodynamic description.19 Versus experiment, hydrodynamics gives values of Rlong and Rout which are too large, and an Rside which is too small. Of especial interest is the ratio Rout/Rside: hydrodynamics predicts this ratio should be ≈ 1.5 → 2, and that it increases with pt. At RHIC, the ratio decreases as pt goes up, and is about one, 0.85 ≲ Rout/Rside ≲ 1.15.19 The HBT data can be parametrized as a type of "blast" wave, with a velocity ≈ 3/4 c.20 This may indicate a type of "explosive" behavior, a term first used by the experimentalists.
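The interferometric length scales above come from the width of the two-pion correlation function; as a sketch, a Gaussian source of radius R gives C2(q) = 1 + λ exp(−q²R²) in each direction (the radius and chaoticity λ below are illustrative, and the real analysis fits Rout, Rside, Rlong separately):

```python
import numpy as np

HBARC = 0.1973          # GeV fm
R = 5.0                 # fm; an illustrative source size
lam = 0.5               # chaoticity parameter (assumed)

def c2(q_gev, R_fm, lam=0.5):
    """Toy two-pion correlation vs relative momentum q for a Gaussian source."""
    return 1.0 + lam * np.exp(-(q_gev * R_fm / HBARC) ** 2)

# The correlation falls to 1 + lam/e at q = hbar*c / R, i.e. ~ 40 MeV for R = 5 fm:
q_e = HBARC / R
print(q_e, c2(q_e, R, lam))
```

The measured radius is thus read off from how fast C2(q) falls toward one: larger sources give narrower correlations.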
7. Suppression of Particles at High-pt
From the first RHIC data, it was clear that the spectra for "high"-pt particles, meaning above, say, 2 GeV, are qualitatively different in central AA collisions, versus pp collisions at the same energy. Dividing by the number of participants, the number of particles at high pt is significantly less in central AA collisions than in pp, by overall factors of 2 - 4.15,16,17 This is quantified through the ratio RAA, which is the number of particles in central AA collisions, divided by that in pp (scaled by the number of binary collisions), as a function of pt. The suppression begins above pt ≈ 2 GeV; above 4 GeV, RAA ≈ 1/3 - 1/4 for charged hadrons, and RAA ≈ 1/5 - 1/6 for pions.16 A surprise of the Run II data is that for pt: 2 → 9 GeV, RAA is approximately constant, up to at least 9 GeV.16 This suppression of high-pt particles is opposite to what is observed at the SPS. There, due to what is known as the Cronin effect, the ratio RAA is greater than one, going up to ≈ 2.5 by pt ≈ 3 GeV. This change in the spectrum must be considered as one of the most dramatic features of the RHIC data. The usual explanation of high-pt suppression is energy loss.17 Bjorken originally noted that a fast quark (or gluon) loses energy as it traverses a thermal bath, in just the same way that any charged particle does in matter. Single particle distributions can be explained using parton models.17 The observed constancy of RAA for pt: 2 → 9 GeV is surprising; perturbative models of QCD do not give constant behavior. The apparent constancy also reflects changes in particle composition: while pions dominate below pt ≈ 2 GeV, unlike pp collisions, there are as many protons as pions above pt ≈ 2 GeV.
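The definition of RAA can be sketched with toy spectra (a power-law pp spectrum, an illustrative number of binary collisions, and a flat factor-4 quench above 2 GeV, mimicking the constancy described above):

```python
import numpy as np

pt = np.linspace(1.0, 10.0, 19)        # GeV
pp = pt ** -6.0                        # toy pp spectrum
n_coll = 1000                          # binary collisions in a central event (illustrative)
quench = np.where(pt > 2.0, 0.25, 1.0) # flat suppression above 2 GeV, as in the data
aa = n_coll * pp * quench              # toy central AA spectrum

# R_AA: AA yield over the binary-scaled pp yield
R_AA = aa / (n_coll * pp)
print(R_AA)   # 1 below 2 GeV (no suppression), 0.25 above
```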
8. Saturation Models
Another surprise from the first RHIC data was that the multiplicity did not grow as rapidly as predicted, at least on the basis of various cascade models. One explanation for this is given by models of saturation.25 The application of saturation to AA collisions is, at the most basic level, purely a kinematical effect. Consider a nucleus-nucleus collision, in the rest frame of one of the nuclei. For atomic number A ≈ 200, in its rest frame the incident nucleus has a diameter no greater than ≈ 2A^{1/3} ≈ 15 fm. By Lorentz contraction, this distance gets shrunk down by a factor which is about 1/(√s/A). Eventually, the color charge of the incident nucleus looks not like a nucleus, but just like a very thin pancake, with a big color charge ∼ A^{1/3}. Assuming that distances on the order of 1/3 → 1/4 fm are small on hadronic scales, the incident nucleus looks like a thin pancake when √s/A: 45 → 60 GeV. It is amusing that a simple estimate gives an energy right near where a Central Plateau, in which the particle density is constant with rapidity, first appears. In detail, saturation is a dynamic criterion. It states that at sufficiently small Bjorken-x, quark and gluon distribution functions are dominated by gluons, which peak at a characteristic momentum scale, termed the "saturation" momentum, psat. (This gluon dominance is reminiscent of flavor independence for thermodynamics.) For any perturbative approach to work, psat cannot be less than at least 1 GeV. The above kinematic argument suggests that psat² ∼ A^{1/3}: thus one can probe smaller x values with large nuclei at RHIC, say, than in ep collisions at HERA. What is most important about saturation is, again, almost a kinematical effect: it resets the "clock" for heavy ion collisions. In the Bjorken picture which dominated before the RHIC data, one assumed that hadronization occurred at time scales ≈ 1 fm/c; after all, what other time scale is there?
Thus in the Bjorken picture, there seemed as if there was little time for even the largest nuclei, only 7 fm in radius, to thermalize. (Unless, again, there were a strongly first order transition, which is why it was so popular before RHIC.) With saturation, however, the natural scale of the clock is given by 1/psat; for psat ≈ 1 GeV, this is already ≈ 0.2 fm/c. That is, saturation makes the hadronic "clock" run at least five times faster! The possibility of interesting things happening is far more likely. Saturation is realized in the Color Glass model:25 the gluon fields from the incident nucleus are described as classical color sources, reacting much quicker than the fields in the target nucleus. Taking a gluon field to scale with the QCD coupling constant g as A ∼ 1/g, one concludes that the action, and indeed all quantities - such as particle multiplicity, average energy, etc. - scale like 1/g². In an asymptotically free regime, then, all quantities grow like 1/αs(psat) ≈ log(psat). This small, logarithmic growth in the multiplicity agrees qualitatively with the RHIC data (although one really needs the increase to LHC energies to make this quantitative). This picture is only approximate. Even if a gauge field is ∼ 1/g, the action need not scale like 1/g². In AA collisions, at initial stages there is a screening mass generated along the beam direction, but not transverse, with a mass squared ∼ αs psat² at leading order.17 Such a dynamically generated mass scale changes integral powers of 1/αs ∼ log(psat) into fractional powers.17 Modulo these theoretical quibbles, it seems plausible that saturation describes the initial state of AA collisions at high energies. Fits to the particle multiplicity, including the dependence upon centrality and rapidity, agree approximately with the data.25 It is not evident how to turn gluons into hadrons; sometimes the mysteries of "parton-hadron duality" are invoked. It is surprising that such models work over a wide region of centrality and rapidity, since saturation (valid at small Bjorken-x) should not work well in the fragmentation region (which is large Bjorken-x). The particle density in such fits has a peak at zero rapidity.26 Saturation does not describe other basic features of the data, though.
The most serious problem is that the average pt in saturation is ⟨pt⟩ ≈ 2 psat; even with psat as low as 1 GeV, this is an average pt ∼ 2 GeV. In contrast, at RHIC ⟨pt⟩ ≈ 550 MeV. The average energy from saturation will decrease due to inelastic processes and the generation of entropy. Assuming that this fixes the overall constant, one is still at a loss to explain why the average pt changes by at most ≈ 2 - 3% between √s/A = 130 GeV and 200 GeV, while the multiplicity changes by at least 15%.13 In saturation models, the average pt grows with multiplicity. A related problem is the chemical composition. Parton-hadron duality is really gluon-pion duality; but if the average gluon momentum is large, why don't the hard gluons become kaons? Instead, at RHIC kaons are
much less numerous than pions, only about 15% as much. Of course if saturation describes the initial state, and not the final state, then there is no problem with the above features of the data.
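The kinematic estimate at the start of this section can be checked numerically; the nuclear radius parameter R = 1.2 A^{1/3} fm and the nucleon mass mN = 0.94 GeV are standard values assumed here:

```python
# A gold-like nucleus, A ~ 200, seen in the rest frame of the other nucleus:
A = 197
diameter = 2 * 1.2 * A ** (1.0 / 3.0)   # ~ 14 fm before contraction

for roots in (45.0, 60.0):              # sqrt(s)/A in GeV, as quoted in the text
    gamma = roots / 0.94                # Lorentz factor ~ E / m_N
    print(roots, diameter / gamma)      # contracted thickness in fm
```

The contracted thickness comes out to ≈ 0.22 - 0.29 fm, i.e. the 1/4 - 1/3 fm "pancake" regime is indeed reached for √s/A ≈ 45 - 60 GeV.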
Acknowledgments

This work was supported by DOE grant DE-AC02-98CH10886.
References
1. K. Kanaya, hep-ph/0209116.
2. G. 't Hooft, Nucl. Phys. B 138, 1 (1978); ibid. 153, 141 (1979); A. M. Polyakov, Phys. Lett. B 72, 477 (1978); L. Susskind, Phys. Rev. D 20, 2610 (1979).
3. B. Svetitsky and L. G. Yaffe, Nucl. Phys. B 210, 423 (1982).
4. P. H. Damgaard, Phys. Lett. B 194, 107 (1987); J. Engels and T. Scheideler, Phys. Lett. B 394, 147 (1997); Nucl. Phys. B 539, 557 (1999).
5. C. Thorn, Phys. Lett. B 99, 458 (1981); R. D. Pisarski, Phys. Rev. D 29, 1222 (1984).
6. A. Gocksch and F. Neri, Phys. Rev. Lett. 50, 1099 (1983); M. Billo, M. Caselle, A. D'Adda, and S. Panzeri, Intl. Jour. Mod. Phys. A 12, 1783 (1997).
7. S. Ohta and M. Wingate, Phys. Rev. D 63, 094502 (2001); R. V. Gavai, Nucl. Phys. Proc. Suppl. B 106, 480 (2002); hep-lat/0203015.
8. J. B. Kogut, M. Snow, and M. Stone, Nucl. Phys. B 200 [FS4], 211 (1982).
9. J. Schaffner-Bielich, E. Fraga, and R. D. Pisarski, Phys. Rev. D 63, 121702 (2001).
10. A. Dumitru, nucl-th/0209001.
11. Z. Fodor, hep-lat/0209101.
12. H. Satz, hep-ph/0209181.
13. T. Ullrich, nucl-ex/0211004.
14. G. Van Buren, nucl-ex/0211021.
15. G. Kunde, nucl-ex/0211018.
16. S. Mioduszewski, nucl-ex/0210021.
17. R. Baier, hep-ph/0209038; A. Mueller, hep-ph/0208278; X. N. Wang, nucl-th/0208079.
18. S. Voloshin, nucl-ex/0210014.
19. P. Huovinen, nucl-th/0210024.
20. S. Pratt, unpublished.
21. D. Hardtke, nucl-ex/0212004.
22. W. Florkowski, nucl-th/0208061.
23. U. Heinz, nucl-th/0209027.
24. D. Teaney, nucl-th/0209024, nucl-th/0301099.
25. D. Kharzeev, nucl-th/0211083; E. Iancu, hep-ph/0210236.
26. P. V. Ruuskanen, nucl-th/0210005; K. Tuominen, hep-ph/0209102.
PROSPECTS OF DETECTING THE QCD CRITICAL POINT
N. G. ANTONIOU, Y. F. CONTOYIANNIS, F. K. DIAKONOS AND A. S. KAPOYANNIS Department of Physics, University of Athens, 15771 Athens, Greece
We investigate the possibility to observe the QCD critical point in A + A collisions at the SPS. Guided by the QCD phase diagram expressed in experimentally accessible variables, we suggest that the process C + C at 158 GeV/n freezes out very close to the critical point. We perform an analysis of the available preliminary experimental data for a variety of SPS processes. The basic tool in our efforts is the reconstruction of the critical isoscalar sector which is formed at the critical point. Our results strongly support our proposition regarding the C + C system.
1. Critical properties of QCD
The study of the QCD phase diagram in the baryonic chemical potential-temperature plane is a subject of rapidly increasing interest in the last decade. Recent investigations suggest that in the real world, where the u and d quarks have a small current mass (O(10 MeV)) and the strange quark is much heavier (O(100 MeV)), there is a second order critical point as endpoint of a first order transition line. This critical endpoint is located at low baryonic density (compared to the baryonic density of nuclear matter) and high temperature (O(100 MeV)). The order parameter characterizing the critical behaviour has isoscalar quantum numbers, and the underlying symmetry which breaks spontaneously at the critical point is the Z(2) symmetry, classifying the QCD critical point in the 3-D Ising universality class. However, this symmetry does not represent an obvious symmetry of the original QCD Lagrangian but is rather an invariance of the effective thermal QCD action. The fluctuations of the condensate formed at the critical point correspond to isoscalar particles which are distributed in phase space, producing a characteristic self-similar pattern with fractal geometry determined by the isothermal critical exponent of the 3-D Ising universality class. The properties of the isoscalar condensate σ(x) are strongly affected by the
baryonic environment:

σ_ρ ≈ (ρ/ρ_c)^λ σ_0    (1)
where ρ is the baryonic density in the critical region, ρ_c is the critical baryonic density and λ is a dimensionless parameter of order one. Eq.(1) relates the isoscalar condensate at zero baryonic density (σ_0) with its value at baryonic density ρ. The form of eq.(1) suggests that the difference ρ − ρ_c can be considered as an alternative order parameter (besides the isoscalar condensate σ) characterizing the QCD critical point. Projecting the baryonic density onto the rapidity space and using the scaling properties of the critical baryonic fluid formed in an A + A collision process, one obtains the relation 5:
A_⊥^{-2/3} n_b = Ψ(z_c, μ/μ_c)    (2)
where A_⊥ is the total number of nucleons of the A + A system in the plane transverse to the beam, n_b is the net baryon density at midrapidity and Ψ is a scaling function. The variable z_c is defined as z_c = A_⊥^{2/3} A_t L^{-1}, with A_t the total number of participating nucleons in the A + A collision and L the size of the system in rapidity space. The scaling function Ψ depends also on the ratio of the chemical potentials and for μ = μ_c simplifies to a function of z_c alone.
In fact the scaling relation (2) represents an alternative description of the QCD phase diagram in terms of measurable quantities 5. In Fig. 1 we present a plot of eq.(2) in the (z_c, ζ) plane (we use the notation ζ = A_⊥^{-2/3} n_b). In the same plot we also show the coordinate pairs (z_{c,i}, ζ_i) for various heavy-ion processes.
Figure 1. The QCD phase diagram in experimentally accessible variables according to eq. (2). The various processes in recent and future heavy-ion collision experiments are also displayed.
2. Statistical description of the isoscalar condensate
The isoscalar condensate formed at the critical point can be described as a critical (Feynman-Wilson) fluid in local thermal equilibrium 6. Universality class arguments determine the effective action for the dynamics of the condensate σ(x) at energies ≈ T_c in 3-D as:
Γ_c[σ] = T_c^{-1} ∫ d³x [ ½ (∇σ)² + G T_c⁴ (T_c^{-1} σ)^{δ+1} ]    (3)
Eq.(3) leads to the correct equation of state:

∂V/∂σ = G (δ + 1) T_c³ (T_c^{-1} σ)^δ

where δ = 5 is the isothermal critical exponent of the 3-D Ising model. The coupling G has been calculated for the 3-D Ising model on the lattice, leading to G ≈ 2. The field σ(x) in eq.(3) is in fact macroscopic, i.e. the quantum fluctuations are integrated out to get the effective action (3), and therefore it possesses classical properties. Following ref. (5) we recall here that the experimentally accessible quantity is not the field σ itself but the quantity ⟨σ²⟩, which represents density fluctuations of the σ-particles created at T = T_c. Based on the effective action (3) we can now proceed to determine the partition function of the condensates as a functional integral:

Z = ∫ D[σ] exp(−Γ_c[σ])
The path summation in the above equation is dominated by the saddle points of the corresponding action, which have an instanton-like form. Within this approximation we can determine the density-density correlation of the critical system both in configuration as well as in momentum space. Then, using the calculated density-density correlation function, we find the distribution of the corresponding σ-particles in phase space. We end up with a pattern formed through the overlap of several self-similar clusters with fractal mass dimension determined by the isothermal critical exponent δ. Using the Fourier transform of the spatial density-density correlation function we obtain the corresponding quantity in momentum space. A similar pattern occurs also in momentum space. The number of clusters as well as the multiplicity within each cluster are the same in both spaces, while the local fractal dimension differs. Another property determining the geometrical features of the critical system is the shape of its evolution. For a cylindrical evolution the number of clusters is in general greater than one, while in the case of spherical evolution the system consists of a single cluster. Also the corresponding fractal dimensions are influenced by the geometrical shape of the evolving system 6. A less influenced property is the fractal dimension of the transverse momentum space, which turns out to be ≈ 0.7 for cylindrical systems and ≈ 1 for spherical systems.
The Critical Monte Carlo (CMC) event generator

Using the results of the saddle point approximation to the partition function of the critical system, one can develop a Monte-Carlo algorithm to simulate the production of the critical σ-particles in an A + A collision. We restrict our interest to the distribution of the sigmas in momentum space, since the coordinates in this space are experimentally accessible. The momentum coordinates of the centers of the σ-clusters are treated as random variables
distributed according to an exponential decay law, with range determined by the critical temperature. Within each cluster the particles are strongly correlated and possess a fractal geometry. The corresponding fractal dimension is given in terms of the exponent δ, while the multiplicity within each cluster is determined through the transverse radius of the entire system, its size in rapidity, the critical coupling G and the critical temperature T_c. The momenta of the sigma-particles within each cluster are generated using the tool of Lévy walks. Exactly at the critical temperature T_c the mass of the sigma particles is zero (for an infinite system). As the system freezes out the sigma mass increases, and when it overcomes the two-pion threshold the sigmas decay into pions, which constitute the experimentally observable sector of the critical system. Unfortunately there is no theoretical description of this process based on first principles. A possible treatment of the σ-decay into pions is to introduce the probability density P(m) for a sigma to have mass m, and then use pure kinematics to determine the momenta of the produced pions. The mass m is assigned to the decaying sigmas randomly. A more detailed description of the whole algorithm can be found in the references.

3. SPS data analysis (preliminary)
If the mass of the decaying sigma is well above the two-pion threshold, the momenta of the produced pions are very distorted with respect to the momentum of the initial sigma, and the fractal geometry of the critical condensate is not transferred to the pionic sector. Therefore, in order to reveal the critical fluctuations in an analysis of the final pions, one has to isolate the part of phase space for which the mass of the decaying sigmas is very close to the two-pion threshold. In this case the fractal properties of the sigma momenta are transferred to the final pions. Our proposal is to perform an event-by-event analysis in an A + A dataset, forming for each event all the pairs of pions with opposite charge and filtering out those pairs with invariant mass within a narrow window just above the value 2m_π:
2m_π ≤ m_{π+π−} ≤ 2m_π + ε ;    m²_{π+π−} = (p_{π+} + p_{π−})²    (5)
In (5), p_{π±} are the four-momenta of the positive (negative) charged pions respectively. The parameter ε is assumed to be very small compared to the pion mass. We apply first our analysis to a large set of CMC generated events (100000). The CMC input parameters have been chosen to meet the properties of the C + C system at the SPS:
- The size in rapidity Δ = 6, corresponding to √s = 158 GeV/n
- The transverse radius R_⊥ = 15 fm. With this choice we can fix the mean multiplicity of charged pions to be ≈ 50, close to the corresponding value in the C + C system at the SPS
- The critical temperature T_c ≈ 140 - 170 MeV
- The self-coupling G = 2 and the isothermal critical exponent δ = 5, determined by the universality class of the transition
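The invariant-mass pair filter of eq. (5) can be sketched as follows; the helper names and toy momenta are illustrative, not part of the actual CMC code:

```python
import numpy as np

M_PI = 0.13957   # GeV, charged pion mass

def inv_mass(p1, p2):
    """Invariant mass of a pair of (E, px, py, pz) four-momenta."""
    E, px, py, pz = p1 + p2
    return np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def sigma_candidates(pos, neg, eps=0.004):
    """Keep pi+pi- pairs with 2 m_pi <= m <= 2 m_pi + eps (eps = 4 MeV, as in the text);
    each accepted pair is treated as a sigma four-momentum."""
    out = []
    for p in pos:
        for n in neg:
            if 2 * M_PI <= inv_mass(p, n) <= 2 * M_PI + eps:
                out.append(p + n)
    return out

def four_momentum(px, py, pz, m=M_PI):
    return np.array([np.sqrt(px**2 + py**2 + pz**2 + m**2), px, py, pz])

# Two nearly collinear pions sit just above the 2 m_pi threshold and pass the cut;
# a back-to-back pair has a large invariant mass and is rejected.
pos = [four_momentum(0.10, 0.0, 0.0)]
neg = [four_momentum(0.12, 0.0, 0.0), four_momentum(-0.30, 0.0, 0.0)]
print(len(sigma_candidates(pos, neg)))
```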
The essential parameters in our approach are the transverse radius R_⊥, which controls the mean pion multiplicity in the simulation of an A + A process, the size in rapidity Δ, which controls the total energy of the system, and the isothermal exponent δ, determining the fractal geometry of the isoscalar fluctuations. Having produced the 10^5 CMC events, we calculate the factorial moments in transverse momentum space of the final produced charged pions. For the decay of the σ's into pions we use a Gaussian σ-mass probability distribution with a large mean value (300 MeV) and large standard deviation (100 MeV). With such a deviation in mass we expect that the critical fluctuations present in the sigma sector will be strongly suppressed in the charged-pion sector. This choice of P(m_σ) may be quite conservative, but it constitutes a good test for the efficiency of our data-analysis algorithm. Then, using the charged pion momenta of each event, one can form π+π− pairs with invariant mass very close to the two-pion threshold. In our actual calculations we have used as a window ε = 4 MeV, to filter out pion pairs with invariant mass in the range 2m_π ≤ m_{π+π−} ≤ 2m_π + ε. Then we consider each charged pion pair as a sigma-particle. Performing the factorial moment analysis in the new momenta (of the reconstructed sigmas) we expect a partial restoration of the critical fluctuations. Indeed this characteristic behaviour is clearly shown in Fig. 2a, where we present the results of the calculation of the second factorial moment in transverse momentum space for both the negative pions as well as the sigmas. The effect of the restoration of the critical fluctuations in the reconstructed sigma sector, combined with the large suppression of the fluctuations in the negative pions, as predicted, is impressive. The theoretical expectation, for an infinite critical system, for the corresponding intermittency index is s_{2,cr}^{(2D)} ≈ 0.67, while our analysis leads to s_2^{(2D)} ≈ 0.54.
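The second factorial moment used in this analysis can be sketched on toy point sets (uniform versus artificially clustered points, not CMC events): F_2 stays near one at all bin sizes for uncorrelated points, while for a self-similar or clustered pattern it grows with the number of bins, and the growth rate is the intermittency slope.

```python
import numpy as np

rng = np.random.default_rng(2)

def f2(points, M):
    """Horizontally averaged second factorial moment in M x M bins of the unit square."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=M, range=[[0, 1], [0, 1]])
    n = counts.ravel()
    return np.mean(n * (n - 1)) / np.mean(n) ** 2

# Uncorrelated (Poisson-like) points: F2 ~ 1 at every scale -> no intermittency
uniform = rng.random((20000, 2))

# Clustered points: F2 grows with the number of bins -> positive intermittency slope
centers = rng.random((40, 2))
cluster = (centers[rng.integers(0, 40, 20000)]
           + 0.01 * rng.standard_normal((20000, 2))) % 1.0

for M in (2, 4, 8, 16, 32):
    print(M, round(f2(uniform, M), 2), round(f2(cluster, M), 2))
```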
We have applied the same analysis to data sets obtained from the NA49 experiment at the SPS. We have analysed 13731 events of the C+C system at 158 GeV/n, 76065 events of the Si+Si system at the same energy, 13420 events of the
Pb+Pb system at 40 GeV/n, 384 events of the same system at 80 GeV/n and finally 5584 events of the Pb+Pb system at 158 GeV/n. It must be noted that all the data sets used in our analysis are only preliminary and there is a need for further investigations with improved data. In Fig. 2b we show the results for the second moment in transverse momentum space in the C+C system. There is an impressive agreement between simulated and real data. In fact the slope s2^(2D) of the C+C system turns out to be ≈ 0.58.
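The second scaled factorial moment in transverse momentum space, on which the quoted slopes are based, can be computed along the following lines. This is a schematic sketch with hypothetical names; the binning of the (px, py) plane into M × M cells follows the standard horizontal-moment definition:

```python
import numpy as np

def second_factorial_moment(events, M, pt_max=1.0):
    """Horizontal F2(M): the (px, py) square [-pt_max, pt_max]^2 is
    divided into M x M cells and F2 = <n(n-1)> / <n>^2, with the
    averages taken over cells and over events; `events` is a list of
    (N_i, 2) arrays of transverse momenta."""
    num = den = 0.0
    for ev in events:
        counts, _, _ = np.histogram2d(
            ev[:, 0], ev[:, 1], bins=M,
            range=[[-pt_max, pt_max], [-pt_max, pt_max]])
        num += np.mean(counts * (counts - 1.0))
        den += np.mean(counts)
    num /= len(events)
    den /= len(events)
    return num / den**2

# the intermittency index s2 is the slope of log F2(M) versus log M^2
```

For uncorrelated, uniformly distributed particles F2(M) stays close to one; an intermittent rise of F2 with M signals the critical fluctuations.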
[Figure 2. The second factorial moment F2 in transverse momentum space for negative pions and for reconstructed σ's: (a) CMC events simulating C+C at 158 GeV; (b) preliminary SPS data for C+C at 158 GeV. The fitted slopes for the reconstructed σ's are ≈ 0.53 (CMC) and ≈ 0.58 (SPS).]
In Fig. 3 we show the second factorial moment for all the available NA49 experimental data sets, both for negative pions as well as sigmas. A gradual increase of the slope s2^(2D) as we approach the C+C system - and, according to Fig. 1, the critical point - is observed, close to our theoretical expectations. For all the systems the effect of the reconstruction of the critical fluctuations in the σ-sector is clearly seen. The analysis described so far concerns a finite kinematic window above the two-pion threshold. It is interesting to extrapolate the properties of the various systems exactly at the two-pion threshold. In this case no distortion due to the σ-decay into pions will be present and we expect to reproduce the theoretically expected results for the critical system. Therefore we have to take the limit ε → 0. In order to extract this information one has to calculate s2^(2D) for various values of the kinematical window ε and use an interpolating function to extrapolate to ε = 0. The obtained value s2,ε→0^(2D) can be directly compared with the theoretically expected value s2,cr^(2D). To be able to perform this analysis one has to study a system with very large charged pion multiplicity per event and/or to use a very large dataset. For this reason we have applied our approach to two systems: (i) the 5584 Pb+Pb events at 158 GeV/n and (ii) the 10^5 CMC generated critical events (simulating the C+C system at 158 GeV/n). The results of our calculations are presented in Fig. 4. The solid circles are the values of s2^(2D) for the Pb+Pb system while the open triangles describe the CMC results for various values of ε. The dashed lines present a corresponding exponential fit. For the CMC events we find s2,ε→0^(2D) = 0.69 ± 0.03, a value which is very close to the expected s2,cr^(2D) = 0.67, while for the Pb+Pb system at 158 GeV/n we get s2,ε→0^(2D) = 0.34.
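The ε → 0 extrapolation with an exponential interpolating function amounts to a linear fit in (ε, ln s2); a minimal sketch (our own illustration, with made-up data points in the usage example):

```python
import numpy as np

def extrapolate_slope(eps, s2):
    """Fit s2(eps) = s2(0) * exp(-c * eps), i.e. a straight line in
    (eps, log s2), and return the extrapolated value s2(0)."""
    slope, intercept = np.polyfit(eps, np.log(s2), 1)
    return np.exp(intercept)
```

For instance, synthetic points generated as s2 = 0.69 exp(−0.1 ε) are extrapolated back to 0.69 at ε = 0.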
The last value corresponds to a strong effect, owing to the fact that the Pb+Pb system lies in the scaling region around the critical point. However it is clearly smaller than the theoretical value at the endpoint, in accordance with the fact that this system freezes out at a distance from the critical point in terms of the variables in Fig. 1. In summary, we have introduced an algorithm to detect critical fluctuations related to the formation of an isoscalar condensate in A+A collisions. A first analysis, using preliminary SPS-NA49 data, indicates the proximity of the freeze-out area to the critical point in the collisions with nuclei of medium size (C+C or Si+Si).
Figure 3. The second factorial moment in transverse momentum space for all the analysed SPS processes (C+C and Si+Si at 158 GeV; Pb+Pb at 40, 80 and 158 GeV). Represented are only the results obtained after the reconstruction of the isoscalar sector.
Acknowledgments The authors thank the NA49 Collaboration for supplying the preliminary experimental data from the SPS.
Figure 4. The slope s2^(2D) for different values of the kinematic window ε, both for the 10^5 CMC events as well as for the 5584 Pb+Pb events at 158 GeV/n, using preliminary SPS-NA49 data.
References
1. M. A. Stephanov, K. Rajagopal and E. Shuryak, Phys. Rev. Lett. 81, 4816 (1998).
2. S. Gavin, A. Gocksch and R. D. Pisarski, Phys. Rev. D49, R3079 (1994); M. A. Halasz, A. D. Jackson, R. E. Shrock, M. A. Stephanov and J. J. M. Verbaarschot, Phys. Rev. D58, 096007 (1998); K. Fukushima, hep-ph/0209270.
3. F. Karsch, E. Laermann and C. Schmidt, Nucl. Phys. Proc. Suppl. B106, 423 (2002).
4. N. G. Antoniou, Y. F. Contoyiannis, F. K. Diakonos and C. G. Papadopoulos, Phys. Rev. Lett. 81, 4289 (1998); N. G. Antoniou, Y. F. Contoyiannis and F. K. Diakonos, Phys. Rev. E62, 3125 (2000).
5. N. G. Antoniou, Acta Phys. Pol. B33, 1521 (2002).
6. N. G. Antoniou, Y. F. Contoyiannis, F. K. Diakonos, A. I. Karanikas and C. N. Ktorides, Nucl. Phys. A693, 799 (2001).
7. M. M. Tsypin, Phys. Rev. Lett. 73, 2015 (1994).
8. P. Alemany and D. Zanette, Phys. Rev. E49, R956 (1994).
LOCATING THE QCD CRITICAL POINT IN THE PHASE DIAGRAM
N. G. ANTONIOU, F. K. DIAKONOS AND A. S. KAPOYANNIS
Department of Physics, University of Athens, GR-15771 Athens, Greece
It is shown that the hadronic matter formed at high temperatures, according to the prescription of the statistical bootstrap principle, develops a critical point at nonzero baryon chemical potential, associated with the end point of a first-order, quark-hadron phase-transition line. The location of the critical point is evaluated as a function of the MIT bag constant.
1. Introduction
Quantum Chromodynamics is unquestionably the microscopic theory of strong interactions and offers an accurate description of quark-gluon matter. The formation of hadronic matter is still an open problem in the context of QCD. This theory predicts however the existence of a critical point at nonzero baryon chemical potential, which is the end point of a quark-hadron critical line of first order [1]. This singularity is associated with the formation of hadronic matter at high temperatures and its location in the QCD phase diagram is of primary importance. On the other hand the hadronic side of matter can be treated as a thermally and chemically equilibrated gas. The inclusion of interactions among hadrons is crucial in order to reveal the possibility of a phase transition. A model that allows for the thermodynamical description of interacting hadrons is the Statistical Bootstrap Model (SBM), which was first developed by Hagedorn [2-5]. In what follows we investigate the possibility of the formation of a critical point within the framework of the statistical bootstrap hypothesis.

2. The hadronic matter
The SBM is based on the hypothesis that the strong interactions can be simulated by the presence of hadronic clusters. In the context of SBM the strongly interacting hadron gas is replaced by a non-interacting infinite-component cluster gas. The hadronic states of clusters are listed in a mass spectrum ρ, so that ρ dm represents the number of hadronic states in the mass interval {m, m + dm}. The mass spectrum can be evaluated if the clusters, as well as their constituents, are treated on the same footing by introducing an integral bootstrap equation (BE). In the bootstrap logic clusters are composed of clusters described by the same mass spectrum. This scheme proceeds until clusters are reached whose constituents cannot be divided further. These constituents are the input hadrons and the known hadronic particles belong to this category. The BE leads to the adoption of an asymptotic mass spectrum of the form [7]

ρ(m², {λ}) ≈ C({λ}) m^(−α) exp[β*({λ}) m],  m → ∞.  (1)
The underlying feature of SBM is that the mass spectrum rises exponentially as m tends to infinity. β* is the inverse maximum temperature allowed for hadronic matter and depends on the existing fugacities {λ}. α is an exponent which can be adjusted to different values, allowing for different versions of the model. The manipulation of the bootstrap equation can be significantly simplified through suitable Laplace transformations. The Laplace transformed mass spectrum leads to the introduction of the quantity G(β, {λ}). The same transformation can be carried out to the input term of SBM, leading to the quantity φ(β, {λ}). Then the BE can be expressed as

φ(β, {λ}) = 2G(β, {λ}) − exp[G(β, {λ})] + 1.  (2)

The above BE exhibits a singularity at

φ(β, {λ}) = ln 4 − 1.  (3)

The last equation imposes a constraint among the thermodynamic variables which represents the boundaries of the hadronic phase. Hadronic matter can exist in all states represented by variables that lead to a real solution of the BE, or equivalently in all states for which temperatures and fugacities lead to

φ(β, {λ}) ≤ ln 4 − 1.  (4)
In the general form of SBM the following four improvements can be made, which allow for a better description of hadronic matter:
1) The inclusion of all the known hadrons with masses up to 2400 MeV in the input term of the BE and also the inclusion of strange hadrons. This leads to the introduction of the strangeness fugacity λs in the set of fugacities [6,7]. Another fugacity which is useful for the analysis of the experimental data in heavy ion collisions is γs. This fugacity allows for partial strangeness equilibrium and can also be included in the set of fugacities of SBM [8].
2) Different fugacities can be introduced for u and d quarks. In this way the thermodynamic description of systems which are not isospin symmetric becomes possible. Such systems can emerge from the collision of nuclei with different numbers of protons and neutrons [9].
3) The choice of the exponent α in (1) has important consequences, since every choice leads to a different physical behaviour of the system. The usual SBM choice was α = 2, but more advantageous is the choice α = 4. With this choice a better physical behaviour is achieved as the system approaches the hadronic boundaries. Quantities like pressure, baryon density and energy density, even for point-like particles, no longer tend to infinity as the system tends to the bootstrap singularity. It also allows for the bootstrap singularity to be reached in the thermodynamic limit [10], a necessity imposed by the Lee-Yang theory. Another point in favour of the choice α = 4 comes from the extension of SBM to include strangeness [6,7]. The strange chemical potential equals zero in the quark-gluon phase. With this particular choice of α, μs acquires smaller positive values as the hadronic boundaries are approached. After choosing α = 4 the partition function can be written down and for point-like particles it assumes the form
where B is the energy density of the vacuum (bag constant) and is the only free parameter of SBM which is left after fixing α = 4 [6,7].
4) The contributions due to the finite size of hadrons, accounting for the repulsive interaction among hadrons, can be introduced via a Van der Waals treatment of the volume. The negative contributions to the volume can be avoided if the following grand canonical pressure partition function is used

Ẑ(ξ, β, {λ}) = ∫₀^∞ dV e^(−ξV) Z(V, β, {λ}),  (6)

where ξ is the Laplace conjugate variable of the volume. All values of ξ are allowed if Gaussian regularization is performed [11]. The value ξ = 0 corresponds to a system without external forces [10,11] and it will be used throughout our calculations. With the use of (6) and the SBM point-particle
partition function (5) one obtains

ν_HG(ξ, β, {λ}) = λ ∂f(β + ξ/4B, {λ})/∂λ / [1 − (1/4B) ∂f(β + ξ/4B, {λ})/∂β],  (7)

where λ is the fugacity corresponding to the particular density, and

P_HG(ξ, β, {λ}) = (1/β) f(β + ξ/4B, {λ}) / [1 − (1/4B) ∂f(β + ξ/4B, {λ})/∂β].  (8)

The dependence of the pressure on the volume can be recovered if, for a given set of parameters ξ, β, {λ}, the density ν_b of the conserved baryon number ⟨b⟩ is calculated. Then the volume is retrieved through the relation

V = ⟨b⟩ / ν_b.  (9)
By using the SBM with all the above improvements the possibility of a phase transition of hadronic matter can be traced. The study of the pressure-volume isotherm curve is then necessary. When this curve is calculated one important feature of SBM is revealed. This curve has a part (near the boundaries of the hadronic domain) where pressure decreases while volume also decreases (see Fig. 1). This behaviour is due to the formation of bigger and bigger clusters as the system tends to its boundaries. Such a behaviour is a signal of a first order phase transition, which in turn is connected with the need of a Maxwell construction.
Figure 1. Isotherm pressure-volume curve for SBM and IHG (both with Van der Waals volume corrections using the pressure ensemble). B is constant.
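The equal-area Maxwell construction invoked above can be carried out numerically: one searches for the horizontal line P* that makes the signed area between the isotherm and the line vanish between its outermost crossings. A self-contained sketch on a toy non-monotonic isotherm (not the SBM curve itself; all names are ours):

```python
import numpy as np
from scipy.optimize import brentq

def maxwell_pressure(P, v_lo, v_hi, p_lo, p_hi):
    """Find the coexistence pressure P* in [p_lo, p_hi] for which the
    signed area between the isotherm P(v) and the line P* vanishes
    between the outermost crossings (equal shaded surfaces)."""
    def signed_area(p_star):
        v = np.linspace(v_lo, v_hi, 4001)
        dif = P(v) - p_star
        idx = np.where(np.diff(np.sign(dif)) != 0)[0]
        a, b = v[idx[0]], v[idx[-1] + 1]          # outermost crossings
        vv = np.linspace(a, b, 4001)
        y = P(vv) - p_star
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(vv)))
    return brentq(signed_area, p_lo, p_hi)

# toy cubic isotherm with a van der Waals-like loop around v = 2
p_star = maxwell_pressure(lambda v: (v - 2.0)**3 - (v - 2.0) + 2.0,
                          0.8, 3.2, 1.7, 2.3)
```

For this symmetric toy loop the construction returns P* = 2 to numerical accuracy.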
If on the contrary the interaction included in SBM is not used, then no such behaviour is exhibited. This can be verified if the Ideal Hadron Gas model is used. Then for this model the equation that corresponds to eq. (5) is
where g_ai are degeneracy factors due to spin and isospin and a runs over all hadronic families. This function can be used in eq. (6) to calculate the Ideal Hadron Gas (IHG) pressure partition function in order to include Van der Waals volume corrections. The result is that the pressure is always found to increase as volume decreases, for constant temperature, allowing for no possibility of a phase transition. The comparison of SBM with the IHG (with volume corrections) is displayed in Fig. 1, where ν₀ is the normal nuclear density ν₀ = 0.14 fm⁻³. In both cases (SBM or IHG) the constraints ⟨S⟩ = 0 (zero strangeness) and ⟨b⟩ = 2⟨Q⟩ (isospin symmetric system, i.e. the net numbers of u and d quarks are equal) have been imposed. Also strangeness is fully equilibrated, which amounts to setting γs = 1.
3. The quark-gluon matter

We may now proceed to the thermodynamical description of the quark-gluon phase. The grand canonical partition function of a system containing only u and d massless quarks and gluons is [13]

ln Z_QGP(V, β, λ_q) = (g/6π²) V β⁻³ { [(ln λ_q)⁴/4 + (π²/2)(ln λ_q)²](1 − 2α_s/π) + (7π⁴/60)(1 − 50α_s/21π) }   (quark term)
 + V (8π²/45) β⁻³ (1 − 15α_s/4π)   (gluon term)
 − B β V.   (vacuum term)  (11)

This partition function is calculated to first order in the QCD running coupling constant α_s. The fugacity λ_q is related to both u and d quarks. B is again the MIT bag constant and g equals the product of spin states, colours and flavours available in the system, g = N_s N_c N_f = 12. Using this
partition function, the QGP baryon density and pressure can be calculated through the relations

ν_b^QGP = (λ_q/3V) ∂ln Z_QGP/∂λ_q,  (12)

P_QGP = (1/βV) ln Z_QGP.  (13)
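As a numerical cross-check of the partition function (11), the sketch below evaluates the QGP pressure P = ln Z/(βV) for massless u, d quarks and gluons in the α_s = 0 limit; at μ_b = 0 it reduces to the Stefan-Boltzmann value (37π²/90) T⁴ minus the bag constant. Units with ħ = c = 1 are assumed and the function name is our own:

```python
import numpy as np

def qgp_pressure(T, mu_b, B, g=12.0):
    """Ideal-QGP pressure from eq. (11) with alpha_s = 0:
    massless quarks with ln(lambda_q) = mu_b / (3 T), gluons,
    and the vacuum (bag) term -B; units with hbar = c = 1."""
    x = mu_b / (3.0 * T)                     # ln lambda_q
    quark = (g / (6.0 * np.pi**2)) * T**4 * (
        x**4 / 4.0 + (np.pi**2 / 2.0) * x**2 + 7.0 * np.pi**4 / 60.0)
    gluon = (8.0 * np.pi**2 / 45.0) * T**4
    return quark + gluon - B
```

The bag term simply shifts the whole pressure curve down, which is what allows the hadronic pressure to win at low temperature.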
If the strange quarks are also included, the quarks assume their current masses and α_s = 0, then the following partition function can be used:

ln Z_QGP(V, β, {λ}) = (V/2π²) β⁻³ Σ_i g_i Σ_{n=1}^∞ (−1)^(n+1) (λ_i^n/n⁴)(n m_i β)² K₂(n m_i β)   (quark term)
 + V (8π²/45) β⁻³   (gluon term)
 − B β V.   (vacuum term)  (14)

The index i runs over all quarks and antiquarks. The current masses are taken m_u = 5.6 MeV, m_d = 9.9 MeV and m_s = 199 MeV [14]. The fugacities are λ_ū = λ_u⁻¹, λ_d̄ = λ_d⁻¹ and λ_s = λ_s̄ = 1 (since strangeness is set to zero). The baryon density is then

ν_b^QGP = (1/3V) Σ_i N_i λ_i ∂ln Z_QGP/∂λ_i,  (15)

where i includes only u, ū, d and d̄ quarks and N_i = 1 for u and d quarks and N_i = −1 for ū and d̄ quarks. The pressure is

P_QGP = (1/βV) ln Z_QGP.  (16)
In order to study the effect of the inclusion of strange quarks we can use the partition function (11) and add the part of the quark term of (14) which corresponds to the strange quarks.

4. Matching the two phases
After completing a thermodynamic description for the hadronic and for the quark-gluon phase we can trace whether a phase transition can occur between the two phases. Similar situations have been studied in [10,12,13], but here, apart from the use of the SBM incorporating all four improvements, we shall focus our calculations on the location of the critical point. So no value of B or α_s will be selected a priori. If α_s and ξ are fixed, then the only free parameter left is the MIT bag constant B. If a value of B is chosen, the pressure-volume isotherms of Hadron Gas and QGP can be calculated for a specific temperature. Then the point where the two isotherms meet corresponds to equal volumes and equal pressures for the two phases. Assuming that the baryon number is a conserved quantity in both phases, the equality of volumes leads to the equality of baryon densities. When performing calculations on the location of the point where the two phases meet, with fixed MIT bag constant, what is found is that at a low temperature the QGP and SBM pressure-volume isotherms meet at a point where the Hadron Gas pressure is decreasing while volume decreases. This is reminiscent of the need of a Maxwell construction. So at that point the phase transition between Hadron Gas and QGP must be of first order. As the temperature rises, a certain temperature is found for which the QGP isotherm meets the SBM isotherm at the point which corresponds to the maximum Hadron Gas pressure for this temperature. So no Maxwell construction is needed. It is important to notice that this point is located at finite volume, or finite baryon density, and it can be associated with the QCD critical point. Then, as temperature continues to rise, the QGP isotherms meet the SBM isotherms at points of even greater volume. Again no Maxwell construction is needed and this region belongs to the crossover area. These situations are depicted in Fig. 2(a), where all curves have been calculated for B^(1/4) = 210 MeV. The dotted curved lines correspond to SBM, while the almost straight dotted lines correspond to QGP.
For the calculations three quark flavours have been used with their corresponding current masses and α_s = 0. The thick lines are the resulting pressure-volume curves for the Hadron Gas-QGP system. A Maxwell construction is needed for the low temperature isotherm. This is depicted by the horizontal line, which is drawn so that the two shaded surfaces are equal, and represents the final pressure-volume curve after the completion of the Maxwell construction. In the same figure the isotherm that leads the pressure curves of the two phases to meet at the maximum hadron gas pressure, forming a critical point, is also drawn. Finally, for higher temperatures the two curves meet at a point such that the resulting pressure curve is always increasing as volume decreases, without the need of a Maxwell construction (crossover
Figure 2. (a) Three isotherm pressure-volume curves for Hadron Gas (using SBM) and QGP phase (using the partition function including u, d and s quarks at their current masses and α_s = 0). The low temperature isotherm needs a Maxwell construction, the middle temperature isotherm corresponds to the critical point and the high temperature isotherm corresponds to crossover. B is constant. (b) A similar case as in (a). The boundaries of the Maxwell construction are displayed with the slashed line.
area).
A more detailed figure of the previous one is Fig. 2(b), where more curves that need a Maxwell construction are displayed. The coexistence region of the two phases is represented by the horizontal Maxwell constructed curves. The slashed line represents the boundaries of the Maxwell construction and so the boundaries of the coexistence region.

5. Locating the Critical Point

To locate the critical point with the choice (14) for the QGP partition function, for a given B, one has to determine the parameters (β, λ_u, λ_d, λ_s, λ_u^q, λ_d^q) which solve the following set of equations:

ν_b^SBM(β, λ_u, λ_d, λ_s) = ν_b^QGP(β, λ_u^q, λ_d^q),  (17)

P^SBM(β, λ_u, λ_d, λ_s) = P^QGP(β, λ_u^q, λ_d^q),  (18)

φ(β, λ_u, λ_d, λ_s) = ln 4 − 1,  (19)

⟨S(β, λ_u, λ_d, λ_s)⟩_SBM = 0,  (20)

⟨b(β, λ_u, λ_d, λ_s)⟩_SBM − 2⟨Q(β, λ_u, λ_d, λ_s)⟩_SBM = 0,  (21)

⟨b(β, λ_u^q, λ_d^q)⟩_QGP − 2⟨Q(β, λ_u^q, λ_d^q)⟩_QGP = 0.  (22)

Eq. (19) is equivalent to P^SBM = P^SBM_max when all the rest of the equations are valid. Eq. (20) imposes zero strangeness in the HG phase. Eqs. (21) and (22) account for isospin symmetry in the HG and QGP phase, respectively. Also we have set γ_s = 1, assuming full strangeness equilibrium. With the choice (11) for the QGP partition function only the equations (17)-(21) have to be solved, since only one fugacity λ_u^q = λ_d^q = λ_q is available in the QGP phase.
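Numerically, the system (17)-(22) is a standard multidimensional root-finding problem. The following toy sketch illustrates the strategy with two made-up equations of state (they are not the SBM and QGP expressions) and two unknowns, matching pressure and baryon density between the phases:

```python
import numpy as np
from scipy.optimize import fsolve

B_TOY = 7.0 / 450.0   # toy "bag constant", chosen so the root is (sqrt(2/15), 1)

# purely illustrative equations of state, NOT the SBM/QGP forms
def p_hg(T, mu):   return T**4 + 0.3 * mu**2 * T**2
def n_hg(T, mu):   return 0.6 * mu * T**2            # = dP/dmu
def p_qgp(T, mu):  return 3.0 * T**4 + mu**4 / 50.0 - B_TOY
def n_qgp(T, mu):  return 0.08 * mu**3               # = dP/dmu

def matching(x):
    """Pressure and baryon-density equality, the analogue of the
    first two conditions of the system above."""
    T, mu = x
    return [p_hg(T, mu) - p_qgp(T, mu),
            n_hg(T, mu) - n_qgp(T, mu)]

T_c, mu_c = fsolve(matching, x0=[0.36, 1.0])
```

In the full problem the remaining conditions (strangeness neutrality, isospin symmetry, the bootstrap boundary) enlarge the system to six equations in six unknowns, but the numerical strategy is the same.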
,,,,,,, , , , , ,,,, ,,,, ,,,, , , , ,
, I , ,
,,
. . . mu=m,+J
quarks nn included a&, mp5.6 MeV, m . 4 9 MeV. m,=199 MeV
-
md=9.9 MeV. %=I99 MeV
145MeV
ow ~ ~ ~ ~ ~ ~ ~ " ' ~ ~ " ' ~ ~ " " ~ " " ~ " ~ ' ~ " ' ' ~ ' 100
110
120
130
Id0
150
160
Critical Point Temperature, T (MeV) (a)
170
140
160
180
200
220
240
MIT Bag Constant, Bu4 (MeV)
(b)
Figure 3. (a) The baryon density at the critical point versus the critical temperature for different values of B and for different types of the QGP partition function. (b) The critical temperature as a function of the MIT bag constant for different types of the QGP partition function.
The calculations for the position of the critical point for different values of B are presented in Figs. 3-4. The range of values B^(1/4) = (145 − 235) MeV [14,15] has been used for these calculations. In Fig. 3(a) we depict the critical temperature as a function of the critical baryon density. The dotted curves correspond to the QGP partition function with massless u and d quarks, without strange quarks and for different values of α_s. The thick solid curve corresponds to the QGP partition function with massive
u, d and s quarks and α_s = 0. The slashed curve corresponds to the QGP partition function with massless u, d, massive s quarks and α_s = 0.1. Fig. 3(b) presents the connection of the MIT bag constant with the baryon density of the critical point, divided by the normal nuclear density.
Figure 4. Critical temperature versus critical baryon chemical potential for different values of B and for different types of QGP partition functions. The bootstrap singularity lines for the maximum and minimum values of B, as well as the critical points corresponding to these values (filled circles), are also displayed.
In Fig. 4 the critical temperature is plotted versus the critical baryon chemical potential. The coding of the lines is as in Fig. 3. In this graph the lines representing the bootstrap singularity, that is the boundaries of the maximum space allowed to the hadronic phase, for the maximum and minimum values of B, are also depicted (slashed-dotted curves). The filled circles represent the positions of the critical point for the different choices of the QGP partition functions for these maximum and minimum values of B. As can be seen, the critical point is placed within the hadronic phase, close to the bootstrap singularity. Every modification made to external parameters drives the critical point in parallel to the bootstrap singularity line. Typical values for the position of the critical point are listed in Table 1.
Table 1. Some values for the position of the critical point for different values of B and different QGP partition functions.

α_s = 0, m_u = m_d = 0, s-quarks not included:

B^(1/4) (MeV)   ν_b,cr.p. (fm⁻³)   T_c (MeV)   μ_c (MeV)
235             0.2158             171.2       299.4
180             0.1361             127.9       544.5
145             0.0690             102.6       623.4

α_s = 0, m_u = 5.6 MeV, m_d = 9.9 MeV, m_s = 199 MeV:

B^(1/4) (MeV)   ν_b,cr.p. (fm⁻³)   T_c (MeV)   μ_c (MeV)
235             0.3110             159.1       451.1
180             0.1489             121.2       598.6
145             0.0721             98.4        651.9
6. Concluding Remarks
From our study we may conclude that, as B increases, the critical point moves to higher baryon density, smaller baryon chemical potential and higher temperature, until a certain value of B is reached. If B is increased further, then the critical point moves quickly to zero baryon density and zero baryon chemical potential, while the temperature keeps increasing slowly. The inclusion of strange quarks always moves the critical point to higher baryon density and higher baryon chemical potential (for fixed values of B and α_s). As α_s is increased (for the same QGP partition function), the critical point moves to smaller baryon density, smaller baryon chemical potential and higher temperature, while the move of the critical point towards zero chemical potential takes place at smaller values of B. From the last two remarks we can infer that the calculation with massive quarks and α_s = 0 represents the highest baryon density, highest baryon chemical potential and smallest temperature (for a given B) that the critical point can acquire. So this particular QGP partition function can give us an upper limit for the position of the critical point in baryon density or baryon chemical potential. From Fig. 4 it is evident that the critical point is positioned near the bootstrap singularity curve. So this curve can represent, to a good approximation, the first-order transition line between the hadron and quark-gluon phase. From Table 1 we observe that in the minimal, two flavour version of the quark-gluon description (α_s = 0) and in the chiral limit (m_u = m_d = 0),
where the critical point becomes tricritical, the location of the singularity may come close to the freeze-out area of the SPS experiments (typically: T_c ≈ 171 MeV, μ_c ≈ 300 MeV). On the contrary, the Lattice QCD solution [16] with unphysically large values of the quark masses m_u, m_d drives the critical baryon chemical potential to higher values (T_c ≈ 160 MeV, μ_c ≈ 725 MeV). In order to bridge this discrepancy one needs an improvement in both approaches. In the bootstrap approach a realistic partition function of the quark-gluon matter is needed, based not on perturbation theory but on the knowledge of the quark-gluon pressure on the lattice for nonzero chemical potential. At present, there exist lattice results for the pressure only for μ = 0 [17]. In the lattice search for the critical point, on the other hand, the solution for small quark masses (chiral limit) is needed before any quantitative comparison, both with the bootstrap solution and the location of the freeze-out area in heavy-ion collisions, could be made.

References
1. F. Wilczek, hep-ph/0003183; J. Berges, K. Rajagopal, Nucl. Phys. B538, 215 (1999).
2. R. Hagedorn, Suppl. Nuovo Cimento III, 147 (1965).
3. R. Hagedorn and J. Ranft, Suppl. Nuovo Cimento VI, 169 (1968); R. Hagedorn, Suppl. Nuovo Cimento VI, 311 (1968).
4. R. Hagedorn, Nuovo Cimento LVI A, 1027 (1968).
5. R. Hagedorn and J. Rafelski, Phys. Lett. 97B, 136 (1980).
6. A. S. Kapoyannis, C. N. Ktorides and A. D. Panagiotou, J. Phys. G23, 1921 (1997).
7. A. S. Kapoyannis, C. N. Ktorides and A. D. Panagiotou, Phys. Rev. D58, 034009 (1998).
8. A. S. Kapoyannis, C. N. Ktorides and A. D. Panagiotou, Phys. Rev. C58, 2879 (1998).
9. A. S. Kapoyannis, C. N. Ktorides and A. D. Panagiotou, Eur. Phys. J. C14, 299 (2000).
10. J. Letessier and A. Tounsi, Nuovo Cimento 99A, 521 (1988).
11. R. Hagedorn, Z. Phys. C17, 265 (1983).
12. R. Fiore, R. Hagedorn and F. d'Isep, Nuovo Cimento 88A, 301 (1985).
13. J. Rafelski and R. Hagedorn: From hadron gas to quark matter II. In: Statistical mechanics of quarks and hadrons, H. Satz (Ed.), Amsterdam: North Holland (1981).
14. Cheuk-Yin Wong: Introduction to High-Energy Heavy-Ion Collisions, World Scientific Publishing (1994).
15. W. C. Haxton and L. Heller, Phys. Rev. D22, 1198 (1980); P. Hasenfratz, R. R. Horgan, J. Kuti and J. M. Richard, Phys. Lett. 95B, 199 (1981).
16. Z. Fodor and S. D. Katz, hep-lat/0106002.
17. F. Karsch, E. Laermann and A. Peikert, Phys. Lett. 478B, 447 (2000).
BARYONIC FLUCTUATIONS AT THE QCD CRITICAL POINT
K. S. KOUSOURIS
National Research Center "Demokritos", Institute of Nuclear Physics, Ag. Paraskevi, GR-15310 Athens, Greece
E-mail: kousouris@inp.demokritos.gr
The existence of the QCD critical point at finite baryon density is supported by theoretical evidence. In this case the isoscalar condensate, which is the natural order parameter of the phase transition, is directly related to the baryon density. Therefore, the critical fluctuations of the order parameter (⟨q̄q⟩) manifest themselves in the baryon 'liquid', generating dynamical fluctuations of the net baryon density. We have investigated the properties of these fluctuations and demonstrated this effect through a Monte-Carlo simulation.
1. Introduction
Theoretical investigations of the thermal aspects of QCD clearly indicate the existence of a critical point at finite temperature and chemical potential for the case of two light quarks [1]. Assuming nonzero bare quark masses for the lightest quarks u, d, the proper order parameter is the σ ∼ ⟨q̄q⟩ field, which has zero expectation value ⟨σ⟩ = 0 at high temperature (T > T_critical) and becomes nonzero at lower temperature (T < T_critical): ⟨σ⟩ ≠ 0. In this case, the order parameter of the phase transition is an isoscalar and the critical system can be argued to belong to the 3-d Ising universality class [3]. The heavy ion experiments offer the opportunity to explore the QCD phase transition and study the properties of the quark matter in the vicinity of the critical point. Yet, it should be emphasized that the quark-hadron phase transition takes place in a multiparticle environment and therefore we also have to consider the effects caused by the existence of the baryonic background. At high energies, due to the nuclear transparency, the net baryon number in the central rapidity region is almost zero and is carried away in the fragmentation region. However, at lower energies, the net baryon density (in central rapidity) is non-zero, modifying the isoscalar condensate.
The dependence of the order parameter on temperature, at zero chemical potential or baryon density, is given by the relation [1]:

⟨q̄q⟩_T / ⟨q̄q⟩_0 = 1 − T²/(8 f_π²) + …,  (1)

where f_π is the pion decay constant, while the 'in medium', zero temperature behavior of the chiral condensate is also known and given by the model independent relation [2]:

⟨q̄q⟩_ρ / ⟨q̄q⟩_0 = 1 − σ_N ρ/(f_π² m_π²),  (2)

where σ_N is the nucleon sigma-term and m_π is the pion mass. In the light of the previous relations, which hold provided that we have low enough density and the temperature satisfies T²/(8 f_π²) << 1, we can write the following relation for the isoscalar condensate:

⟨q̄q⟩_{T,ρ} / ⟨q̄q⟩_0 ≈ 1 − T²/(8 f_π²) − λ σ_N ρ/(f_π² m_π²),
with λ being a dimensionless constant of order unity. It is clear that the critical fluctuations of the order parameter are directly related to fluctuations of the local baryonic density ρ(r). If these fluctuations could be measured in heavy ion collisions, they would serve as a possible signature of the critical point.

2. Effective action
According to the previous discussion, the appropriate, dimensionless order parameter of the phase transition is

φ(r) = β_c³ σ(r),

where β_c = 1/T_c is the inverse (critical) temperature. For practical reasons we are going to decompose the order parameter further, isolating its baryonic part:

m(r) = β_c³ |ρ(r) − ρ_c|.  (7)
It can be argued that the effective action describing the properties of the order parameter at T = T_c, assuming local thermal equilibrium, is [6]:

Γ[m] = ∫ d³r [ (1/2)(∇m)² + g m^(δ+1) ],  (8)

where δ is the critical isothermal exponent and g is an effective coupling driving the equation of state gφ^δ, which has been calculated [4] for the 3-d Ising model. In order to describe the critical system properly it is necessary to adjust it to the geometry of heavy ion collisions and the Bjorken expansion scenario. Assuming cylindrical expansion and introducing the rapidity variable ξ, we write for a local observer the longitudinal space element as dx_∥ ≈ τ_c dξ, with τ_c being the proper time. The physical length scale of the system is defined by the inverse temperature β_c and thus we can take τ_c = C_A β_c, where the parameter C_A is fixed by the size of the colliding ions with A nucleons: C_A ~ A^(1/3). Inserting the new parameters into the effective action we end up with the corresponding expression (9), adapted to the cylindrical geometry.
3. Rapidity projection
The thermodynamics of the critical system is determined through the partition function, which is the basic quantity used for the calculation of thermal expectation values:

Z = ∫ Dm e^(−Γ_c[m])    (10)
where the integral runs over all possible configurations of the field m(r). In order to proceed further we rely upon the assumption that this functional integral is dominated by field configurations which can be projected onto a longitudinal profile and a transverse one, with the final picture being the Cartesian synthesis of the two projections. Starting with the rapidity projection, we assume that the order parameter depends only on the rapidity variable, allowing us to integrate over the transverse space of radius R_⊥. In this case, the effective action becomes:
and after collecting the constants we have:
In order to calculate the expectation value of any observable we actually have to take into account all possible configurations, each weighted by the Boltzmann factor e^(−Γ_c). However, for g_1^(1) >> 1 the saddle-point approximation is justified and the basic contribution to the partition function comes from the configurations which minimize the action. The instanton-like solutions of the Euler-Lagrange equation δΓ_c/δm = 0 are of the form10

m(ξ) = [ 2 / ( g_2^(1) (δ − 1)² (ξ − ξ_0)² ) ]^(1/(δ−1))    (13)

where ξ_0 should be identified as the "instanton" size. It can be proved that the solutions of the above form that give a non-vanishing contribution to the partition function are those for which ξ_0 >> ξ, meaning that the anomaly at ξ_0 lies outside the system. So, the saddle-point configurations can be classified according to the parameter ξ_0:
The basic assumption for the geometrical structure of the critical system is that it consists of several clusters. Supposing that we have a cluster of size δ, we can easily calculate its contribution to the action. The observable we are interested in is the mean value M(δ) = (1/δ) ∫_0^δ m(ξ) dξ of the order parameter in the thermal environment. The "size" ξ_0 mentioned earlier serves as the appropriate measure for the functional integration over the field configurations. Taking this into account we can perform the integrations and eventually obtain:
where
is a rapidity scale and

δ_c = 2(δ + 1)/(δ − 1)²

is the maximum size of a cluster in rapidity; it also determines the number of critical clusters. The function f(x) is a slowly varying function of x, and for large argument it takes the asymptotic value

f(x) ≈ Γ(1/(δ + 1)).
Notice that if δ < δ_c there is a power law describing the dependence of the mean multiplicity on the cluster size, and therefore δ_c serves as an upper bound for the appearance of correlated self-similar fluctuations. The fact that a power law describes the fluctuations in the clusters clearly indicates that there exist correlations on every scale (up to δ_c), which leads to the conclusion that the fluctuations' structure resembles a fractal of dimension d_F^(1) = δ/(δ + 1). It should be clear, however, that the physical observable is the net-baryon number, the local density of which fluctuates; therefore the picture of the whole system is rather a fractal distribution (due to dynamical correlations) on a flat background which exhibits fluctuations of statistical origin. The relation of the net-baryon number to the above-mentioned multiplicity is straightforward and leads to the relation:
4. Transverse space projection
The next step towards the full description of the critical system is the projection onto the transverse space. We use as a starting point the original effective action, considering now the fact that the baryon density is constant in rapidity (total range in rapidity = Δ):

Γ_c = A C_A F ∫ d²x_⊥ [ ½ (∇_⊥m)² + g T_c² F^(δ−1) m^(δ+1) ]

This expression can be reformulated as

Γ_c = g_1^(2) ∫ d²x_⊥ [ ½ (∇_⊥m)² + g_2^(2) m^(δ+1) ]    (22)
where g_1^(2) = A C_A F and g_2^(2) = g T_c² F^(δ−1). Following exactly the steps of the previous section and assuming azimuthal symmetry, we obtain results similar to those of the rapidity analysis. The mean multiplicity in a cluster of radius R in the transverse space again follows a power law in R. The constant R_c serves as the upper bound of a cluster's size, so as to have a power-law distribution. The fractal structure of the transverse fluctuations is also justified, leading to a fractal dimension d_F^(2) = 2δ/(δ + 1). The net-baryon fluctuation in a cluster of radius R is ⟨δN_b⟩ = A C_A ⟨M(R)⟩, and we can also calculate the fluctuation's density-density correlation inside the cluster:
An accessible physical observable in heavy-ion collisions is the transverse momentum distribution. In order to compare our results to what is directly measured in experiments, we have to express the previous relations in momentum space. The procedure relies on the fact that the momentum correlation function is the Fourier transform of the transverse-space correlation function, that is:
The multiplicity ⟨M(p_⊥)⟩ turns out to follow a power law in p_⊥, which leads to a fractal dimension d̃_F = 2/(δ + 1) in momentum space.
5. Monte-Carlo simulation
The study of the critical fluctuations has revealed that they satisfy a proper power law and should be distinguished from the random statistical fluctuations. If they could be measured in an experimental procedure, they would
serve as a direct signature of the continuous phase transition. In the following we will use the factorial moments5 to study the baryonic density fluctuations. Though the method has so far been used for several sets of monofractal data, we claim that it can be used for fat fractals (fractal fluctuations on a background) as well. In order to demonstrate the geometry of the system and support our claims, we have performed a Monte-Carlo simulation of the critical system, matching the conditions of Au+Au collisions at RHIC. The major theoretical assumption is that the quark-hadron phase transition belongs to the 3-d Ising universality class, determining the isothermal exponent to be δ ≈ 5 and the universal parameter4 g ≈ 2. The set of input parameters is rather limited: Δ ≈ 11, β_c ≈ 0.81 fm, τ_c ≈ 6 fm, R_⊥ ≈ 12 fm, C_A ≈ 7.4, F ≈ 100. The net-baryon number is approximately8 N_0 = 165. Inserting the above set into the relations we find that the size of a critical cluster in rapidity is δ_c ≈ 0.66 and in transverse space R_c ≈ 30 fm, leading to N_∥ ≈ 8 clusters in rapidity and N_⊥ ≈ 1 in transverse space. We finally calculate the total net-baryon fluctuation: ⟨δN_b⟩ ≈ 120.

Starting from the transverse momentum, the simulation algorithm consists of the following steps:

• At every event N_0 = 165 baryons are distributed in transverse momentum space according to the background spectrum, and the momentum vector is oriented uniformly.
• In order to develop the fluctuations, we perform a Levy flight of ⟨δN_b⟩ steps in momentum space, using a test function of the kind P(x) ∼ x^(−1−d_F), which creates a fractal set of the desired dimension. At each step of the flight we choose randomly between the insertion of a new baryon and its removal. In the latter case, the baryon closest to the point is erased.
• Event-by-event factorial moment analysis is performed on the final distribution.

The rapidity simulation is more complicated due to the existence of more than one cluster. The basic ideas, however, are the same:

• The background baryons are placed uniformly.
• The centers of the N_∥ clusters are also distributed uniformly in rapidity. The size of each one is decided according to the overlap with the neighbouring clusters. A configuration is accepted if the total net-baryon fluctuation is exactly ⟨δN_b⟩ = 120.
• At each cluster we perform a Levy flight around the center, with as many steps as the local net-baryon fluctuation; the insertion or removal of a baryon is decided randomly, with equal probability.
• The factorial moment analysis is again performed on the final distribution.
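The Levy-flight construction used in both projections can be sketched in code. This is an illustrative one-dimensional version, not the authors' implementation: step lengths are drawn from P(x) ∼ x^(−1−d_F) by inverse-CDF sampling, with x_min an assumed short-distance cutoff.

```python
import random

def levy_flight_positions(n_steps, d_fractal, x_min=0.01, seed=0):
    """Visit points along a 1-d Levy flight whose step lengths follow
    P(x) ~ x**(-1 - d_fractal) for x >= x_min; for 0 < d_fractal < 1
    the visited set approximates a fractal of dimension d_fractal."""
    rng = random.Random(seed)
    pos = 0.0
    points = [pos]
    for _ in range(n_steps):
        u = 1.0 - rng.random()                    # uniform in (0, 1]
        step = x_min * u ** (-1.0 / d_fractal)    # Pareto (power-law) step
        direction = 1 if rng.random() < 0.5 else -1
        pos += direction * step
        points.append(pos)
    return points
```

With d_fractal = δ/(δ + 1) = 5/6 (the rapidity value implied by the quoted slope 0.167) the visited points cluster self-similarly; randomly inserting or removing baryons at these points, as in the steps above, imprints the fractal correlations on top of the flat background.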
6. Results-discussion
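Before quoting the numbers, the event-by-event analysis applied in this section can be sketched. The function below is a minimal illustration of the horizontally averaged second factorial moment; the binning convention is one common choice, assumed here rather than taken from the text. For purely statistical (uniform, independent) particles F₂ ≈ 1, while intermittent critical fluctuations give F₂(M) ∼ M^φ₂.

```python
import random

def second_factorial_moment(events, m_bins, lo=0.0, hi=1.0):
    """F2(M) = M * <sum_m n_m (n_m - 1)> / <N>**2, averaged over events,
    where n_m counts the particles of an event falling in bin m."""
    width = (hi - lo) / m_bins
    num_sum = 0.0
    mult_sum = 0
    for event in events:
        counts = [0] * m_bins
        for x in event:
            idx = min(int((x - lo) / width), m_bins - 1)
            counts[idx] += 1
        num_sum += sum(n * (n - 1) for n in counts)
        mult_sum += len(event)
    n_events = len(events)
    mean_mult = mult_sum / n_events
    return m_bins * (num_sum / n_events) / mean_mult ** 2

# purely statistical reference sample: uniform particles, no intermittency
_rng = random.Random(0)
uniform_events = [[_rng.random() for _ in range(50)] for _ in range(2000)]
```

Repeating the computation for several bin numbers M and fitting ln F₂ against ln M yields the slope φ₂ discussed below.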
The Monte-Carlo simulation of the critical events, which generates dynamical fluctuations according to the cluster description, has been performed in both the rapidity and the transverse-space projection. The subsequent event-by-event factorial moment analysis has verified that the critical fluctuations can be identified, and the fractal dimensions measured were in very good agreement with our theoretical expectations. More specifically, in the rapidity projection, having used 20000 events for better statistics, we can identify strong intermittent behaviour, and the second factorial moment gives us a slope φ₂ = 0.172, which is very close to the theoretical expectation d − d_F = 1/6 ≈ 0.167. In figure 3 we can see the distribution of the slopes s₂, centered at φ₂ ≈ 0.163, also in good agreement. In the transverse momentum space we used 1000 events and measured φ₂ = 1.47, close to the expected value of 1.67. The difference is probably due to the fact that the background density as well as finite-size effects reduce the intermittent behaviour. Further analysis of the higher moments gave the values φ₃ = 0.75±0.01, φ₄ = 1.60±0.03 in rapidity and φ₃ = 3.03±0.02, φ₄ = 4.59±0.03 in the transverse momentum.

7. Conclusions
In heavy-ion collisions it is possible to fine-tune the experimental parameters so as to drive the freeze-out of the system close to the critical point, the location of which is roughly known according to the latest studies of QCD's phase diagram7,12. Through the years several signatures have been proposed for the identification of the phase transition from hadronic matter to quark-gluon plasma. We claim that a clear sign of the system passing
through the critical point is due to the critical fluctuations of the net-baryon density in the central rapidity region, originating from the direct connection to the σ field, which is the natural order parameter of the phase transition. By using universality arguments we have been able to treat the effective action describing the system at the critical point, and we have shown that it is dominated by self-similar clusters with well defined fractal indices related to the isothermal critical exponent. Through a Monte-Carlo simulation, adjusted to Au+Au collisions at RHIC, it has been possible to analyze the net-baryon density fluctuations and to verify that the dynamical ones dominate over the statistical, giving direct evidence of criticality.
Acknowledgments
I would like to thank the organizers of the CF2002 workshop for their kind hospitality. I am also grateful to N. G. Antoniou and F. K. Diakonos for the fruitful discussions on this work.
References
1. H. M. Ortmanns, Rev. Mod. Phys. 68, 473 (1996).
2. R. Brockmann, W. Weise, Phys. Lett. B367, 40 (1996).
3. M. Stephanov, K. Rajagopal, E. Shuryak, Phys. Rev. Lett. 81, 4816 (1998).
4. M. M. Tsypin, Phys. Rev. Lett. 73, 2015 (1994).
5. A. Bialas, R. Peschanski, Nucl. Phys. B273, 703 (1986).
6. M. Reuter, N. Tetradis, C. Wetterich, Nucl. Phys. B401, 567 (1993).
7. Z. Fodor, S. D. Katz, JHEP 0203, 014 (2002).
8. N. G. Antoniou, Nucl. Phys. B92, 26 (2001).
9. N. G. Antoniou, Y. F. Contoyiannis, F. K. Diakonos, C. G. Papadopoulos, Phys. Rev. Lett. 81, 4289 (1998).
10. N. G. Antoniou, Y. F. Contoyiannis, F. K. Diakonos, Phys. Rev. E62, 3125 (2000).
11. K. S. Kousouris, MSc. Thesis 'Baryonic Fluctuations at the QCD Critical Point', University of Athens, Physics Department (2002).
12. N. G. Antoniou, Acta Phys. Pol. B33, 1521 (2002).
Figure 1. Final rapidity distribution in a single event
Figure 2. Second factorial moment for 20000 events
Figure 3. The p(s₂) distribution for 20000 events
Figure 4. Background distribution for a single event (p_⊥ in GeV)
Figure 5. Fluctuated distribution for a single event
Figure 6. Second factorial moment for 1000 events
NON-EQUILIBRIUM PHENOMENA IN THE QCD PHASE TRANSITION
E. N. SARIDAKIS
Physics Department, University of Athens, 15771 Athens, Greece

Within the context of the linear σ-model, we investigate some non-equilibrium phenomena that may occur during the two-flavour QCD chiral phase transition in heavy-ion collisions. We assume that the chiral symmetry breaking is followed by a rapid quench and the system falls out of thermal equilibrium. We study the mechanism for the amplification of the pion field during the oscillations of the σ-field towards and around its new minimum. We show that the pion spectrum can acquire a zone pattern with pronounced peaks at low momenta, which corresponds to clustering behaviour in momentum space.
1. The model
Experiments at RHIC and LHC are expected to probe many questions in strong-interaction physics. One major area of interest concerns the chiral phase transition. For given baryon-number chemical potential μ there exists a critical temperature T_c above which the system lies in the chirally symmetric state. As the temperature decreases below T_c the system moves into the chirally broken phase. It is believed that, for zero quark masses, there is a 1st-order phase transition line on the (T, μ) surface at large μ. This line ends at a tricritical point beyond which the phase transitions become 2nd order. The line of 2nd-order transitions ends on the μ = 0 axis. In the case of non-zero quark masses, the 1st-order line ends at a critical point, beyond which the 2nd-order transitions are replaced by analytical crossovers. This phase diagram has been discussed within various frameworks. Our interest lies in the study of possible non-equilibrium phenomena that may occur during the phase transition. In particular we would like to study the possibility that the system falls out of thermal equilibrium through rapid expansion. This is a realistic possibility in the framework of heavy-ion collisions.
The scenario we have in mind assumes an initial thermalization at a sufficiently high temperature for the system to move into the chirally symmetric phase. The subsequent fast expansion generates deviations from thermal equilibrium. We model this process by a quench during which the volume of the system increases instantaneously by a certain factor, with the number densities of the various particles decreasing by the same factor. We consider only the two lightest flavours and neglect the effects of the strange quark. As an effective description of the chiral theory we use the σ-model 1. The Lagrangian density is

L = ½ (∂_μσ ∂^μσ + ∂_μπ⃗ · ∂^μπ⃗) − V(σ, π⃗)    (1)

with the potential

V(σ, π⃗) = (λ²/4)(σ² + π⃗² − v²)² + (m_π²/2)(σ² + π⃗² − 2vσ + v²).    (2)
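As a quick numerical check of the parameter values quoted next (m_π ≈ 139 MeV, m_σ ≈ 600 MeV, v ≈ 87.4 MeV): at tree level the curvature of the potential (2) at its minimum σ = v gives m_σ² = 2λ²v² + m_π². A short sketch (the helper name is ours):

```python
def lambda_squared(m_sigma=600.0, m_pi=139.0, v=87.4):
    """Coupling lambda**2 implied by the tree-level relation
    m_sigma**2 = 2*lambda**2*v**2 + m_pi**2 (all masses in MeV)."""
    return (m_sigma ** 2 - m_pi ** 2) / (2.0 * v ** 2)
```

The quoted λ² ≈ 20 corresponds to rounding; the inputs above give λ² ≈ 22, and conversely λ² = 20 corresponds to m_σ ≈ 570 MeV.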
The last term in the potential accounts for the explicit chiral symmetry breaking by the quark masses. The scalar field σ together with the pseudoscalar fields π⃗ = (π⁺, π⁰, π⁻) form a chiral field Φ = (σ, π⃗). When the symmetry is restored at high temperatures, in the absence of the second term in the potential, the system lies in the symmetric state ⟨Φ⟩ = 0, i.e. ⟨σ⟩ = 0, ⟨π⃗⟩ = 0. However, in the presence of the explicit symmetry-breaking term in (2), the expectation value of the σ-field approaches zero but never vanishes, i.e. chiral symmetry is never completely restored. After symmetry breaking, the expectation values of the fields become ⟨σ⟩ = v = f_π and ⟨π⃗⟩ = 0, where f_π is the pion decay constant. We fix the parameters of the Lagrangian using the phenomenological values m_π ≈ 139 MeV, m_σ = √(2λ²v² + m_π²) ≈ 600 MeV, v ≈ 87.4 MeV, which yields λ² ≈ 20.

2. Equations of Motion
The equations of motion resulting (semi-classically) from (1) are:

∂_μ∂^μσ + λ²(σ² + π⃗² − v²)σ + m_π²σ = v m_π²
∂_μ∂^μπ⃗ + λ²(σ² + π⃗² − v²)π⃗ + m_π²π⃗ = 0.    (3)
We neglect the fluctuations of σ, while we treat π⃗(x⃗, t) as a quantum field:
The creation and annihilation operators a_k,i, a†_k,i are defined in the interaction picture at the vacuum corresponding to the minimum v of the potential, and f_ki(t) are the mode functions of the pion field. Furthermore, we will work in the frame of the Hartree approximation, using:

π⃗²(x⃗, t) ≈ ⟨π⃗²(x⃗, t)⟩ = ⟨π⃗²(t)⟩,
π⃗²(x⃗, t) π⃗(x⃗, t) ≈ (5/3) ⟨π⃗²(t)⟩ π⃗(x⃗, t).
The second approximation can be derived by considering the three components π_i(x⃗, t) of π⃗(x⃗, t). The term π_i² π_j is replaced by 3⟨π_i²⟩π_j for j = i, and by ⟨π_i²⟩π_j for j ≠ i. Substituting the above approximations into (3) we get:
σ̈(t) + λ² ( σ²(t) + ⟨π⃗²(t)⟩ − v² ) σ(t) + m_π² σ(t) = v m_π²    (5)

f̈_ki(t) + [ k² + λ²(σ²(t) − v²) + (5/3) λ² ⟨π⃗²(t)⟩ + m_π² ] f_ki(t) = 0.    (6)
In (5), (6), ⟨π⃗²(t)⟩ is given by
The quantum field π⃗(x⃗, t) can be expanded in terms of creation and annihilation operators a_k,i(t) and a†_k,i(t). These are related to the initial ones through a Bogoliubov transformation. Instead of considering the time-dependent field π_i(x⃗, t), we may use the Schrödinger picture around the vacuum state at the minimum v of the potential. The time evolution of π_i(x⃗, t) is replaced by the time evolution of the ground state in this picture. The particle-number operator has a non-zero expectation value, which can be expressed in terms of the Bogoliubov coefficients. The particle density per momentum mode, for each component i of the pion field, is 7,9:
with ω_k = √(k² + m_π²). For the total number of pions of all species π⃗ = (π⁺, π⁰, π⁻) we have:
where V is the volume of our system, that is, the volume of the fireball in a heavy-ion collision experiment.

3. Initial Conditions
Our choice of the vacuum at v as our reference state has the advantage that the particle interpretation of the field π⃗(x⃗, t) is close to the experimentally observable quantities. It requires, however, some care with respect to our choice of initial conditions for the evolution described by eqs. (5), (6). We assume that initially the fireball created by the collision is in local thermodynamic equilibrium, or that it has separated into Disoriented Chiral Condensates (DCCs), each one in its own local thermodynamic equilibrium. If the second case is realised, we focus our treatment inside one of these DCCs. The expectation value σ₁ of the σ-field is small, but non-zero because of the explicit chiral symmetry breaking. For our calculation we use σ₁ = 0.1v at T = 140 MeV. For the pions we expect initially a thermalised gas that follows a Bose-Einstein distribution with

n_ki = 1 / ( e^(ω_k/T) − 1 ).
We assume the dispersion relation around the vacuum at σ = v, ω_k² = k² + m_π², even though the pion mass depends on the temperature. The justification for this approximation is provided by the explicit study in 12 of the pion mass during the chiral phase transition. There, it is shown that m_π stays approximately constant from T = 0 up to T ~ 100 MeV. The mode functions f_ki(t), for a configuration corresponding to a non-interacting pion gas in thermal equilibrium, are

f_ki(t) = √( (n_ki + 1/2) / ω_k ) e^(−iω_k t)

in agreement with (8). In the following we assume large occupation numbers and neglect the factor 1/2, related to the zero-point energy. In our scenario we assume an instantaneous expansion of the fireball by a volume factor A (a quench). This means that the number densities of the pion gas must be reduced by the same factor. In addition, in order to be consistent with the conservation of energy, the initial value of the σ-field has to change according to the relation
where σ₁ is the value before the quench, and σ_A the one after. We remark that this assumption is rather crude, as it neglects possible fluctuations of σ. However, it guarantees the minimal requirement of energy conservation. The above discussion implies that the natural initial conditions for the evolution of the fields are
and
These initial conditions are different from those assumed for particle production through inflaton decay in cosmology 3, and from those in some works in QCD 4,5, as in those cases the initial particle number is zero.
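The thermal initial occupations just described can be sketched numerically; a minimal helper, assuming the free dispersion relation around σ = v discussed above (all quantities in MeV):

```python
import math

def bose_einstein_occupation(k, temperature=140.0, m_pi=139.0):
    """Initial thermal occupation n_k = 1/(exp(omega_k/T) - 1) with
    omega_k = sqrt(k**2 + m_pi**2); k, T and m_pi in MeV."""
    omega = math.sqrt(k * k + m_pi * m_pi)
    return 1.0 / math.expm1(omega / temperature)
```

In the quench scenario these occupation numbers are then reduced by the volume factor A before the evolution starts.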
4. Non-Equilibrium Evolution
Before solving the equations of motion numerically, let us make some comments about their general form. Immediately after the quench, when the σ-field starts rolling down from the maximum of the potential towards the minimum, the curvature of the effective potential, i.e. the squared effective pion mass, is negative. This leads to the amplification of the low-momentum modes, a phenomenon characterized as Spinodal Decomposition (SD). At later stages, when the field oscillates around the vacuum, another mechanism becomes effective. It is more transparent for λ²v² >> m_π², when the term m_π², coming from the explicit symmetry breaking, is negligible. If the initial value of ⟨π⃗²(t)⟩ is small compared to v², equation (5) can be solved in terms of Jacobi functions. Substitution of σ(t) in (6) gives a Lamé equation for the evolution of the mode functions f_ki(t). The Lamé equation has solutions that are unstable in some momentum zones, mostly at low momenta. This means that, while σ(t) oscillates around its new minimum after the quench, it leads the mode functions of the pion field in particular momentum zones to exponential growth. This phenomenon, called Parametric Resonance (PR) 7, occurs as a result of a particular relation between k and σ(t) which makes the effective frequency in (6) imaginary. As time increases, ⟨π⃗²(t)⟩ grows and cannot be neglected anymore. It is this term that terminates the resonance. In general, whether SD and/or PR will take place depends on the parameters of the model and the initial conditions (mainly σ(0) and ⟨π⃗²(0)⟩).
The two mechanisms can operate simultaneously, so that it may not be clear which one drives the pion production. In condensed-matter physics spinodal instabilities are well known 11, while during the preheating stage of the Universe PR is the only mechanism we know that could rapidly amplify the matter fields and reheat the Universe 3,7,10. In our study, the initial conditions play a significant role in determining the relative contributions from SD and PR. In particular, if the value σ_A (determined by energy conservation through equation (12)) is larger than the value of σ for which the curvature of the potential turns positive, no SD effects are expected. This means that the details of the quench are important for the form of the resulting pion spectrum. However, we believe that our approach, based mainly on energy conservation, captures the most important elements of the process. A common property of both mechanisms of pion amplification is the enhancement of the pion spectrum at low momenta. This may provide a characteristic signature of these non-equilibrium QCD phenomena in heavy-ion collisions.
We solve equations (5) and (6) numerically, using a fourth-order Runge-Kutta algorithm for the differential equations and an 11-point Newton-Cotes integrator to compute the momentum integral:
We calculate the pion density in momentum space using (8), and the total number of produced pions N_tot using (9). We define ρ(k) through

N_tot = ∫ dk ρ(k).    (16)
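The time-stepping part of the scheme can be sketched as follows; an illustrative fourth-order Runge-Kutta integration of eq. (5) for the σ zero mode alone, with the ⟨π⃗²⟩ back-reaction switched off. This is a simplification made here for clarity — the full calculation couples (5) to the mode functions of (6) through the Newton-Cotes momentum integral.

```python
def evolve_sigma(sigma0, t_max, dt=1e-4, lam2=20.0, v=87.4, m_pi=139.0):
    """RK4 evolution of sigma'' = -lam2*(sigma**2 - v**2)*sigma
    - m_pi**2*sigma + v*m_pi**2, starting at rest from sigma0.
    Fields and masses in MeV, time in MeV**-1; returns sigma(t_max)."""
    def acc(s):
        return -lam2 * (s * s - v * v) * s - m_pi * m_pi * s + v * m_pi * m_pi
    s, sdot, t = sigma0, 0.0, 0.0
    while t < t_max:
        k1s, k1v = sdot, acc(s)
        k2s, k2v = sdot + 0.5 * dt * k1v, acc(s + 0.5 * dt * k1s)
        k3s, k3v = sdot + 0.5 * dt * k2v, acc(s + 0.5 * dt * k2s)
        k4s, k4v = sdot + dt * k3v, acc(s + dt * k3s)
        s += dt * (k1s + 2 * k2s + 2 * k3s + k4s) / 6.0
        sdot += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
    return s
```

Note that σ = v is an exact fixed point of this reduced equation, while σ₀ < v produces the oscillations around the new minimum discussed in the text.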
We assume an initial temperature T ≈ 140 MeV and use r_0 ≈ 10 fm for the fireball radius before the quench. We then run our program for various expansion factors A (1 ≤ A ≤ 4, so that the final radius is r_f ≤ 15 fm). Some typical results are presented in Figs. 1-5. In Fig. 1 the initial distribution of pions ρ(k) for A = 3 (r_f = 14.4 fm) is shown. In Figs. 2 and 3 (again for A = 3) we observe that during the first two oscillations of σ(t) we have a fast increase of N_tot, mainly because of PR. This lasts for about 5 fm and subsequently N_tot only fluctuates around a
Figure 1. Initial distribution ρ(k), according to (16), for the pions, for initial radius 10 fm and expansion factor A = 3 (r_f = 14.4 fm).
mean value. The duration of the non-equilibrium effect is smaller than the typical duration of the collision, which is of the order of 60 fm. In Fig. 4 we present the final distribution of pions ρ(k) at time t = 15 fm. In the same graph we depict the results obtained from the distribution of the pion momenta in a Monte Carlo generated event, through simulation of the density ρ(k). We observe the large enhancement of the spectrum at low momenta, and the formation of a zone pattern with specific peaks, characteristic of the non-equilibrium amplification. The transfer of energy to the pions is consistent with the decrease of the σ-oscillation amplitude in Fig. 2. We can also observe a shift of the maximum to lower k compared to Fig. 1. This implies an additional energy transfer from hard to soft pion modes
Figure 2. σ evolution for A = 3.
through the mode-mode coupling (the ⟨π⃗²(t)⟩ term) in (6). Lastly, in Figure 5 we demonstrate the distribution of the pion momenta in a single event, which we extracted from a Monte Carlo simulation. We observe the clustering behaviour corresponding to the zone pattern of Fig. 4.
6. Conclusions and Future Investigations
Non-equilibrium phenomena, like the ones we have investigated, may occur in the QCD phase transition. The dynamics of the system may amplify pion mode functions in certain momentum zones while the σ-field moves towards and around the (chirally broken) vacuum after the quench. These phenomena produce a significant number of new pions, mostly at low momenta. The pion spectrum has a zone pattern and the pions form clusters in momentum space. The presence of initial thermal pions and the explicit symmetry-breaking term decrease the production of new pions, which is not as pronounced as expected in the literature 4,5,8. In future work we will elaborate on the pion spectrum as a signature of a
Figure 3. N_tot vs time for A = 3.
phase transition out of equilibrium, using factorial moments and fluctuation analysis. We will study the pion clusters that arise from non-equilibrium effects and their distinction from clusters of different origin. We would also like to investigate the finite-time quench and the expanding fireball, and estimate the induced modifications to the pion spectrum. Finally, the role of σ-fluctuations must be taken into account in the non-equilibrium phenomena.

Acknowledgments
The author thanks N.G. Antoniou, F.K. Diakonos and N. Tetradis for helpful discussions and collaboration.

References
1. M. Gell-Mann and M. Levy, Nuovo Cim. 16, 705 (1960).
2. K. Rajagopal and F. Wilczek, Nucl. Phys. B399, 395 (1993).
3. D. Boyanovsky et al., (hep-ph/9608205), (1996).
4. D.I. Kaiser, (hep-ph/9801307), (1998).
5. A. Dumitru and O. Scavenius, (hep-ph/0003134), (2000).
Figure 4. Final distribution ρ(k) for t = 15 fm, for A = 3 (numerical integration vs. MC simulation).
6. N.G. Antoniou et al., Nucl. Phys. A693, 799 (2001).
7. L. Kofman, A. Linde and A. Starobinsky, Phys. Rev. Lett. 73, 3195 (1994).
8. D. Boyanovsky et al., (hep-ph/9701304), (1997).
9. D. Boyanovsky, D. Lee and A. Singh, Phys. Rev. D 48, 800 (1993).
10. Y. Shtanov, J. Traschen and R. Brandenberger, Phys. Rev. D 51, 5438 (1995).
11. G. Brown, (cond-mat/9905343), (1999).
12. J. Berges, D.U. Jungnickel and C. Wetterich, Phys. Rev. D 59, 034010 (1999).
Figure 5. Final distribution of pion momenta in a single event, created by a Monte Carlo simulation, for t = 15 fm and A = 3.
Sessions on Correlations and Fluctuations in Heavy Ion Collisions Chairpersons: G. Wilk and T. Trainor
CORRELATIONS AND FLUCTUATIONS IN STRONG INTERACTIONS: A SELECTION OF TOPICS
A. BIALAS
M. Smoluchowski Institute of Physics, Jagellonian University, Cracow, Poland
E-mail: [email protected]

Invited talk at the 10th Workshop on Multiparticle Production: Correlations and Fluctuations in QCD. It contains a short account of (i) event-by-event fluctuations and their relation to inclusive distributions; (ii) fluctuations of the conserved charges; (iii) coincidence probabilities and Renyi entropies; and (iv) HBT correlations in the presence of flow.
1. Introduction
The title of this talk is a compromise between the very general formulation proposed by organizers (the first part of the title) and the reality. First, it does not seem possible to cover such a broad subject in one hour. Second, it seems reasonable to avoid a repetition of the topics which were already covered by other speakers. The points discussed here being only loosely related to each other, let me just start without any further ado. 2. Event-by-event fluctuations and inclusive distributions
An increased interest has been shown recently in event-by-event fluctuations, particularly in studies of heavy ion collisions 1-5. Here I would like to bring to your attention the results of the paper by Volker Koch and myself, in which we have explained the relation between the event-by-event fluctuations and inclusive distributions. It is clear that such a relation must exist, because full knowledge of all inclusive multiparticle distributions gives complete information about the particle spectra, and thus about any possible observable, including the event-by-event fluctuations. Thus the real question is whether this relation can be useful, i.e., whether it does not involve an infinite number of inclusive distributions (as is the case, e.g., for the relation between the exclusive and inclusive particle spectra). Fortunately
it turns out that the result obtained is fairly simple. To formulate the problem, let us consider a quantity S_m[x] defined for each event (labelled by the subscript m) as a sum

S_m[x] = Σ_{i=1}^{N_m} x_m(p_i)    (1)
where p_i denotes the particle momentum, the sum runs over all particles in a given phase-space region, and x_m(p_i) is any single-particle variable. One sees that S_m[x] defined in this way is an extensive quantity^a. Eq. (1) defines a single-particle extensive quantity. One may also consider a similar construction for two or more particles, e.g.
S_m[x] = Σ_{i≠j} x(p_i, p_j)    (2)
where now the sum runs over all pairs of particles in a given phase-space region and x(p_i, p_j) is any variable depending on the momenta of the pair^b. The main result of 6 can be summarized as follows. The moments of any k-particle extensive quantity can be expressed as a linear combination of a finite number of moments of inclusive distributions. For the moment of rank r, one needs the inclusive distributions up to order q, where
q = rk.    (3)

I shall not bother you with the derivation of this result, which is not difficult and may be found in 6. Let me simply quote the result for the first two moments of any single-particle extensive quantity. It reads

⟨S[x]⟩ = ∫ dp ρ₁(p) x(p)    (4)

⟨S²[x]⟩ = ∫ dp ρ₁(p) x²(p) + ∫ dp₁ dp₂ ρ₂(p₁, p₂) x(p₁) x(p₂)    (5)
where ρ₁ and ρ₂ are the single-particle and two-particle inclusive distributions, respectively. One special case of these relations, corresponding to the choice x(p) = 1, is well known. In this case S_m[x] = N_m, the multiplicity. We thus find

⟨N⟩ = ∫ dp ρ₁(p);    ⟨N²⟩ = ⟨N⟩ + ∫ dp₁ dp₂ ρ₂(p₁, p₂)
a) When a system is made K times bigger (i.e. if it consists of K identical systems), S_m[x] is multiplied by K.
b) Note that in this case we must have i ≠ j.
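The special case x(p) = 1 can be exercised with a toy source; a sketch assuming an uncorrelated Poisson multiplicity, for which the two-particle density integral factorizes and ⟨N²⟩ − ⟨N⟩ should equal ⟨N⟩² (function names are ours):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's algorithm for a Poisson-distributed multiplicity."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def moments_check(n_events=20000, mean_mult=10.0, seed=1):
    """Return (<N>, <N**2> - <N>, <N>**2); the last two should agree
    for an uncorrelated source."""
    rng = random.Random(seed)
    ns = [poisson_sample(mean_mult, rng) for _ in range(n_events)]
    mean_n = sum(ns) / n_events
    mean_n2 = sum(n * n for n in ns) / n_events
    return mean_n, mean_n2 - mean_n, mean_n ** 2
```

Any genuine two-particle correlation would show up as a deviation between the last two numbers, which is exactly what the relations above quantify.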
Thus the result of 6 can be considered as a generalization of the well-known relations connecting the factorial moments of the multiplicity distributions and the integrals of the multiparticle inclusive distribution functions. We conclude that the event-by-event fluctuations of extensive quantities give direct information on multiparticle densities. This tool becomes rather effective when fluctuations of multiparticle quantities are measured, as is seen clearly from Eq. (3). Indeed, one can in this way obtain information on high-order densities (and thus high-order correlations), not easy to reach otherwise. It would be interesting, I think, to exploit these relations fully in future data analysis. It is also often interesting to consider the intensive quantities, normalized "per particle" (see, e.g., 7):

s_m[x] = S_m[x] / N_m.    (6)

Unfortunately, there are no such simple rules for them. In this case one must rely on additional assumptions or work at a fixed multiplicity. Some examples are discussed in 6.

3. Fluctuations of conserved charges
It was pointed out recently that fluctuations of conserved charges can provide interesting information on the structure of the system created in a high-energy collision. The idea is based on the observation that this system undergoes very fast expansion in the longitudinal direction, which makes it approximately boost-invariant. Consequently, a fixed interval in rapidity corresponds to a fixed part of the longitudinal extension of the system. Consider now an interval δy in rapidity. The net charge contained in this interval is conserved, provided there is no leakage through the boundary. Consequently, if the leakage is neglected, the distribution of charge is independent of the history of the system. Thus by measuring this distribution one can obtain information on the system at very early stages of the collision. The problem of leakage through the boundary can be dealt with in two ways. First, if the length of the interval, δy, is very large, the effect of the boundary is expected to be minimized. This can be verified by performing measurements with varying δy. Another possibility is to estimate the leakage in various models. To calculate the fluctuations of net charge, let us consider a system consisting of different species of particles (labelled by the subscript i). Denoting the corresponding charges by q_i, we have for the total net charge

Q = Σ_i q_i n_i,    (7)
where n_i is the number of particles of type i. From this definition we can calculate the average value and the dispersion of Q:
< Q > = Σ_i q_i < n_i >,    (8)

< [ΔQ]² > = < [Q − < Q >]² > = Σ_{i,j} q_i q_j [ < n_i n_j > − < n_i >< n_j > ].    (9)
The last equation can be rewritten in the form

< [ΔQ]² > = Σ_i q_i² < n_i > + Σ_{i,j} q_i q_j < n_i >< n_j > C_ij,    (10)

where C_ij are the normalized two-particle correlations:

C_ij = [ < n_i n_j > − < n_i >< n_j > − δ_ij < n_i > ] / ( < n_i >< n_j > ).    (11)

If the particles are weakly correlated, the second term can be neglected and we obtain

< [ΔQ]² > = Σ_i q_i² < n_i >.    (12)
One sees that < [ΔQ]² > depends strongly on the charges of the particles which form the system. Since the result depends also on < n_i >, it is convenient to consider the ratio

D = 4 < [ΔQ]² > / < N_ch >,    (13)

where N_+ and N_− denote the numbers of positively and negatively charged particles (N_ch = N_+ + N_−). For a pure pion gas, where all charges equal ±1, one obtains < [ΔQ]² > = < N_ch > and thus D = 4. For the resonance gas, the decays of the neutral resonances contribute to the denominator but cannot increase the numerator. If one considers only two-body decays one obtains^c a reduction of D from 4 to about 3.

^c The case of the resonance gas can be treated as the pion gas with non-negligible correlations between pions, i.e., using Eq. (10).
It is of course very interesting to compare these numbers with the result one may expect for partonic systems. For a system made of up and down quarks, antiquarks and gluons we have

< [ΔQ]² > = (1/9) [ 4 ( < n_u > + < n_ubar > ) + < n_d > + < n_dbar > ] = (5/18) < N_q >,    (14)

where in the second equality we have assumed that the abundances of all quarks and antiquarks are the same and equal to < N_q >/4. Gluons of course do not contribute to < [ΔQ]² >. To obtain D, one has to estimate < N_ch >. For the quark-gluon plasma it was argued that < N_ch > ≈ < N_q >. This follows from estimates of the entropy of the system (which is very large because of the large number of degrees of freedom of gluons). Using (14) this implies D ≈ 10/9 ≈ 1. The existing preliminary data 10 give results close to 4, consistent with the pion, or hadron resonance, gas and in serious disagreement with the expectations from the quark-gluon plasma. Another possibility 11 is to consider a system consisting of (constituent) quarks and antiquarks. In such a system hadrons are created by coalescence of the quark-antiquark pairs 12. Consequently, the average number of all hadrons is, approximately, equal to < N_q >/2. Assuming equal charge distribution we thus obtain < N_ch > ≈ < N_q >/3, i.e., D ≈ 10/3, a result close to that obtained for the hadron resonance gas and thus close to the preliminary experimental data.
4. Coincidence probabilities
In this section I shall briefly outline a new, recently advocated 13 method to study fluctuations in multiparticle systems by measuring the so-called coincidence probabilities. The coincidence probability of rank k is defined as

C_k = N(k) / [ N(N−1)...(N−k+1) ],    (15)

where N(k) is the total number of observed k-plets of identical events^d and N is the total number of events considered^e. It is clear that the C_k are sensitive

^d Since the observed events are labelled by particle momenta, which are continuous variables, the definition (15) makes sense only after discretization. The result will depend on the way the discretization is made.
^e For illustration: in the simplest case of k = 2, N(k) is the number of pairs of identical events. The denominator, N(N−1), is the total number of pairs of events considered.
to event-by-event fluctuations. If there were no fluctuations whatsoever, i.e. if all events were identical, all C_k = 1. In case of wild fluctuations the C_k are expected to be small. To quantify this a little better, let us observe that, as any statistical system, a multiparticle system is defined by a set of states |i> (i = 1, 2, ..., r) and the probabilities p_i to occupy these states. To learn about the system one may draw a number of samples (so-called Bernoulli trials) and investigate their properties. In case of a multiparticle system, such samples are represented by events, each one representing a possible state of the system in question. It is not difficult to show that the coincidence probabilities defined in (15) are simply related to the moments of the probability distribution p_i:

C_k = Σ_i p_i^k.    (16)

In the case where all probabilities are equal to each other (i.e. for a microcanonical system at equilibrium) we have p_i = 1/r, hence C_k = 1/r^{k−1}. This shows that the coincidence probabilities are related to the number of states of the system, i.e. to its entropy. This observation can be made more precise by introducing the Renyi entropies

H_k = log C_k / (1 − k).    (17)

One can easily show that in the limit k → 1, H_k → S, where S is the Shannon entropy, S = − < log p >. There are several attractive features of this measure of fluctuations. First, the result depends on all multiparticle correlations present in the system and thus allows one to investigate the effects of correlations of very high orders, which are difficult to access by the standard methods. Second, the relation of this measurement to the entropy, or the number of states of the system, shows that it may be a very useful tool in assessing the very basic nature of the systems produced in multiparticle collisions. Finally, as can be shown by considering more closely the technique of Bernoulli sampling, the relative error of this measurement behaves as
Thus C_2 is indeed the probability to find a pair of identical events in the whole sample, which explains the name.
which shows that at small k and relatively large r, the number of events needed to obtain a decent error is strongly reduced compared to the N ≫ r needed for an accurate measurement of the probabilities p_i. Unfortunately, little is known so far about how sensitive this method is for uncovering multiparticle correlations. More studies, most likely through MC simulations, are needed to verify this. I believe it is worthwhile to undertake a serious effort in this direction, starting, e.g., by comparison of the data with the standard MC codes.
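The counting behind the definition of C_k can be sketched directly. A minimal illustration (my own): a state occurring n times in the sample contributes n(n−1)...(n−k+1) ordered k-plets of identical events:

```python
from collections import Counter
from math import prod

# Illustration: estimate C_k = N(k) / [N(N-1)...(N-k+1)] from a sample of
# discretized events. N(k) counts ordered k-plets of identical events, so a
# state occurring n times contributes n(n-1)...(n-k+1) of them.
def coincidence_probabilities(events, kmax=3):
    counts = Counter(events)
    n_tot = len(events)
    c = {}
    for k in range(2, kmax + 1):
        num = sum(prod(n - i for i in range(k)) for n in counts.values())
        den = prod(n_tot - i for i in range(k))
        c[k] = num / den
    return c

# If all events are identical, C_k = 1 (no fluctuations).
assert coincidence_probabilities(["a"] * 10)[2] == 1.0

# Two equally occupied states, 5 'a' and 5 'b' out of N = 10 events:
# C_2 = (5*4 + 5*4) / (10*9) = 4/9, an unbiased estimate of sum_i p_i^2.
assert abs(coincidence_probabilities(["a"] * 5 + ["b"] * 5)[2] - 4 / 9) < 1e-12
```

The falling-factorial denominator is what makes the estimator of Σ_i p_i^k unbiased, which is why the definition (15) is preferred over naive frequency counting.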
5. HBT parameters in presence of flow
One of the most puzzling results at RHIC is that the measured HBT parameters are very similar to those obtained at lower energies. The transverse radii, in particular, turn out amazingly energy-independent, contrary to original expectations. Indeed, since there is strong evidence that the initial energy density grows with incident energy, the expansion is expected to last longer and thus the size of the freeze-out volume is expected to grow with energy as well. This qualitative expectation is confirmed by several hydrodynamic calculations. We are thus confronted with a serious problem which is widely debated and known as "the HBT puzzle" 14. When addressing this problem one should keep in mind that the measured HBT radii can be interpreted as a measure of the size of the particle source only if the momentum distribution of particles and the positions at which they were emitted are uncorrelated 15. In actual high-energy experiments, however, one expects that the system expands, and this in turn implies that particle momenta and positions at freeze-out are correlated. Thus the observed "HBT puzzle" indicates that the effects of the increasing size and those of the "flow" cancel each other in the effective HBT parameters. The question is whether this can be understood in a natural way. Such a natural explanation cannot be excluded, because both the increasing size of the system and the flow are induced by the same effect, the expansion. Thus the question may be formulated as a constraint on the character of the expansion process, which must be such that the measured HBT radii remain independent of the initial energy density in the system. Below I shall illustrate all these problems by considering a simple two-dimensional Gaussian model of the (transverse) distribution at freeze-out. The model assumes that there are no multiparticle correlations except those induced by Bose-Einstein statistics.
We thus consider a single-particle two-dimensional Wigner function^f of the form

W(X, P) ∝ exp{ − [ X²/R² + P²/Δ² − 2u X·P/(RΔ) ] / [ 2(1 − u²) ] }.    (19)
The parameters Δ and R describe the size of the system in the (transverse) momentum and configuration space, respectively. This can be seen by integrating (19) over either d²X or over d²P. One obtains the distribution in P and in X, respectively:
dN/d²P ∝ exp( − P²/(2Δ²) );    dN/d²X ∝ exp( − X²/(2R²) ),    (20)

so that we have

< X² > = 2R²;    < P² > = 2Δ².    (21)

The parameter u is responsible for the correlation between P and X, as can be seen from the relation

< X·P > = 2RΔu.    (22)

From (19) one can obtain the single-particle density matrix by performing the Fourier transform with respect to X:
where
If there are no correlations between particles (at this level of the argument), the two-particle density matrix is simply a direct product of two matrices (23). As is well known, Bose-Einstein symmetrization implies that the two-particle distribution reads 16

dN/(d²q1 d²q2) = ρ2(q1, q2; q1, q2) + ρ2(q1, q2; q2, q1),    (25)

and thus it can be written as
^f The Wigner function (sometimes called in this context the source function) is a Fourier transform of the single-particle density matrix. It represents the best approximation to the momentum and position distribution consistent with quantum mechanics.
The second term in the bracket represents the HBT correlation. It is of the form exp[ −(q1 − q2)² R²_HBT ], where R_HBT is the HBT radius given by

R²_HBT = R²(1 − u²) + 1/(4Δ²).    (27)
This formula shows explicitly that the presence of momentum-position correlations (u ≠ 0) implies a reduction of the HBT radius as compared to the actual size of the system (represented by R). It also shows that it is indeed possible to compensate an increasing R with an increasing correlation parameter u in such a way that R_HBT remains constant. The real question, however, is to find a physical mechanism in which this compensation comes out naturally. The distribution (19) implies the presence of flow in the system, as can be seen from the formula
< P(X) > = uΔ X/R.    (28)
This is a radial flow with momentum proportional to the distance from the center ("Hubble flow"). We can thus relate the parameter u to the velocity v of the flow at |X| = R:

v = uΔ / M_⊥,    (29)

where M_⊥² = M² + < P²_side > and M is the mass of the particle. The momentum fluctuations in the direction perpendicular to X (usually called "side") can be related to the temperature of the system. We obtain

< P²_side(X) > = Δ²(1 − u²).    (30)
One sees that the temperature deduced from this formula is independent of X, i.e. uniform in the whole volume of the system. Furthermore, the temperature is reduced in the presence of a flow. An interesting consequence of these formulae is the dependence of the HBT radius of the system on the mass of the particles used to measure the HBT correlations. Since the temperature and the flow velocity are expected to be the same for all particles, it is seen that Δ and u must depend on the particle mass. Consequently, R_HBT also depends on M. Simple algebra shows that the HBT radius is expected to decrease with increasing particle mass 17. Of course this simple model cannot be treated as a serious candidate for an explanation of the HBT puzzle. But it convincingly illustrates, I hope,
the fundamental idea that the presence of the flow profoundly modifies the naive interpretation of the HBT radii.
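The compensation mechanism can be illustrated numerically. This is a sketch of my own, assuming the Gaussian-model form R_HBT² = R²(1 − u²) + 1/(4Δ²) as reconstructed above; all parameter values are illustrative, not fitted to data:

```python
import math

# Sketch (my own) of the size/flow compensation in the Gaussian model.
# Assumed form: R_HBT^2 = R^2 (1 - u^2) + 1/(4 Delta^2).
def r_hbt(R, delta, u):
    return math.sqrt(R * R * (1.0 - u * u) + 1.0 / (4.0 * delta * delta))

def u_for_fixed_r_hbt(R, delta, r_target):
    # invert the assumed formula for the momentum-position correlation u
    u2 = 1.0 - (r_target ** 2 - 1.0 / (4.0 * delta ** 2)) / R ** 2
    return math.sqrt(u2)

delta, r_target = 0.3, 6.0          # illustrative numbers only
for R in (7.0, 9.0, 12.0):          # a growing source size...
    u = u_for_fixed_r_hbt(R, delta, r_target)
    assert 0.0 < u < 1.0            # ...needs a growing flow correlation u
    assert abs(r_hbt(R, delta, u) - r_target) < 1e-9
```

The larger the source, the stronger the momentum-position correlation required to keep the measured HBT radius constant, which is the quantitative content of the "HBT puzzle" in this model.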
Acknowledgments
I would like to thank Nikos Antoniou for the invitation to the Workshop, financial support and encouragement. This investigation was supported in part by the Subsydium of Foundation for Polish Science NP 1/99 and by the KBN Grant No 2 P03B 09322.

References
1. M. Gazdzicki and S. Mrowczynski, Z. Phys. C54, 127 (1992).
2. L. Stodolsky, Phys. Rev. Lett. 75, 1044 (1995); M. A. Stephanov, K. Rajagopal and E. V. Shuryak, Phys. Rev. D60, 114028 (1999).
3. S. Jeon and V. Koch, Phys. Rev. Lett. 85, 2076 (2000); M. Bleicher, S. Jeon and V. Koch, Phys. Rev. C62, 061902 (2000).
4. M. Asakawa, U. Heinz and B. Muller, Phys. Rev. Lett. 85, 2072 (2000); Nucl. Phys. A698, 519 (2002).
5. NA49 coll., H. Appelshauser et al., Phys. Lett. B459, 679 (1999); S. V. Afanasiev et al., Phys. Rev. Lett. 86, 1965 (2001).
6. A. Bialas and V. Koch, Phys. Lett. B456, 1 (1999).
7. S. A. Bass, P. Danielewicz and S. Pratt, Phys. Rev. Lett. 85, 2689 (2000); S. Jeon and S. Pratt, Phys. Rev. C65, 044902 (2002).
8. M. Doring and V. Koch, Acta Phys. Pol. B33, 1495 (2002).
9. F. W. Bopp and J. Ranft, Acta Phys. Pol. B33, 1505 (2002); Eur. Phys. J. C22, 171 (2001).
10. C. Blume, NA49 coll., presented at Quark Matter 2001; J. G. Reid et al., STAR coll., Nucl. Phys. A698, 611c (2002).
11. A. Bialas, Phys. Lett. B532, 249 (2002).
12. J. Zimanyi et al., Phys. Lett. B472, 243 (2000); J. Zimanyi, P. Levai and T. S. Biro, hep-ph/0205192 and references quoted there; T. Csorgo, Nucl. Phys. B92 (Proc. Suppl.), 62 (2001).
13. A. Bialas and W. Czyz, Phys. Rev. D61, 074021 (2000); Acta Phys. Pol. B31, 687 (2000); B31, 2803 (2000); A. Bialas, W. Czyz and J. Wosiek, Acta Phys. Pol. B30, 107 (1999).
14. See, e.g., S. Pratt, report at QM2002 and references therein.
15. M. G. Bowler, Z. Phys. C29, 517 (1985); Z. Phys. C41, 353 (1988); Phys. Lett. B185, 205 (1987).
16. See, e.g., A. Bialas and A. Krzywicki, Phys. Lett. B354, 134 (1995).
17. A. Bialas and K. Zalewski, Acta Phys. Pol. B30, 359 (1999); A. Bialas, M. Kucharczyk, H. Palka and K. Zalewski, Phys. Rev. D62, 114007 (2000); Acta Phys. Pol. B32, 2901 (2001).
LONG RANGE HADRON DENSITY FLUCTUATIONS AT SOFT p_T IN Au+Au COLLISIONS AT RHIC
MIKHAIL L. KOPYTINE Department of Physics, Kent State University, USA
FOR THE STAR COLLABORATION

Dynamic fluctuations in the local density of non-identified hadron tracks reconstructed in the STAR TPC are studied using the discrete wavelet transform power spectrum technique, which involves comparison with a mixed-event reference sample. The two-dimensional event-by-event analysis is performed in pseudo-rapidity η and azimuthal angle φ in bins of transverse momentum p_T. HIJING simulations indicate that jets and mini-jets result in characteristic signals, visible already at soft p_T, when the dynamic texture analysis is applied. In this analysis, the discrepancy between the experiment and the HIJING expectations for Au+Au at √s_NN = 200 GeV is most prominent in the central collisions, where we observe the long range fluctuations to be enhanced at low p_T and suppressed above p_T = 0.6 GeV.
1. Introduction

The on-going RHIC program, motivated by an interest in the bulk properties of strongly interacting matter under extreme conditions, has already yielded a number of tantalizing results. Deconfinement and chiral symmetry restoration 1 are expected to take place in collisions of ultra-relativistic nuclei. Because these phase transitions are multiparticle phenomena, a promising, albeit challenging, approach is the study of the dynamics of large groups of final-state particles. The dynamics shows itself in the correlations and fluctuations (texture) on a variety of distance scales in momentum space. The multi-resolution dynamic texture approach (applied for the first time 2 at the SPS) uses the discrete wavelet transform (DWT) 3 to extract such information. At the present stage, the information is extracted in a comprehensive way, without any built-in assumptions or filters. Mixed events are used as a reference for comparison in the search for dynamic effects. Event generators are used to "train intuition" in recognizing manifestations of familiar physics (such as elliptic flow or jets) in the analysis output, as
well as to quantify sensitivity to effects yet unidentified, such as critical fluctuations or clustering of a new phase at hadronization.

2. The STAR experiment

The STAR Time Projection Chamber 4 (TPC) is mounted inside a solenoidal magnet. It tracks charged particles within a large acceptance (|η| < 1.3, 0 < φ < 2π) and is well suited for event-by-event physics and in-depth studies of event structure. The data being reported were obtained during the second (√s_NN = 200 GeV) year of RHIC operation. The minimum bias trigger discriminates on a neutral spectator signal in the Zero Degree Calorimeters 5. By adding a requirement of high charged multiplicity within |η| < 1 from the scintillating Central Trigger Barrel, one obtains the central trigger. Vertex reconstruction is based on the TPC tracking. Only high-quality tracks found to pass within 3 cm of the event vertex are accepted for the texture analysis.
3. Dynamic texture analysis procedure

Discrete wavelets are a set of functions, each having a proper width, or scale, and a proper location, so that the function differs from 0 only within that width and around that location. The set of possible scales and locations is discrete. The DWT transforms the collision event in pseudo-rapidity η and azimuthal angle φ into a set of two-dimensional functions. The basis functions are defined in the (η, φ) space and are orthogonal with respect to scale and location. We accumulate texture information by averaging the power spectra of many events. The simplest DWT basis is the Haar wavelet, built upon the scaling function g(x) = 1 for 0 ≤ x < 1 and 0 otherwise. The function

f(x) = { +1 for 0 ≤ x < 1/2;  −1 for 1/2 ≤ x < 1;  0 otherwise }    (1)

is the wavelet function. The experimental acceptance in η, φ, and p_T (|η| < 1, 0 < φ < 2π) is partitioned into bins. The η-φ partitions are of equal size, whereas in p_T the binning is exponential when more than one p_T bin is used. In each bin, the number of reconstructed tracks satisfying the quality cuts is counted. The scaling function of the Haar basis in two dimensions (2D), G(φ, η) = g(φ)g(η), is just a bin's acceptance (modulo units). The wavelet functions
Figure 1. Haar wavelet basis in two dimensions. The three modes of directional sensitivity are: a) diagonal, b) azimuthal, c) pseudorapidity. For the finest scale used, the white rectangle drawn "on top" of the function in panel a) would correspond to the smallest acceptance bin (pixel). Every subsequent coarser scale is obtained by expanding the functions of the previous scale by a factor of 2 in both dimensions. (Reproduced from 2.)
F^λ (where the mode of directional sensitivity λ can be azimuthal φ, pseudorapidity η, or diagonal φη) are

F^{φη} = f(φ)f(η),   F^{φ} = f(φ)g(η),   F^{η} = g(φ)f(η).    (2)

We set up a two-dimensional (2D) wavelet basis:

F^λ_{m,i,j}(φ, η) = 2^m F^λ(2^m φ − i, 2^m η − j),    (3)

where m is the integer scale fineness index, and i and j index the positions of the bin centers in φ and η. Then the F^λ_{m,i,j} with integer m, i, and j are known to form a complete orthonormal basis in the space L²(R) of all measurable functions defined on the continuum of real numbers. We construct G_{m,i,j}(φ, η) analogously to Eq. (3). Fig. 1 shows the wavelet basis functions F in two dimensions. At first glance it might seem surprising that, unlike the 1D case, both f and g enter the wavelet basis in 2D. Fig. 1 clarifies this: in order to fully encode an arbitrary shape of a measurable 2D function, one considers it as a sum of a change along φ (f(φ)g(η), panel (b)), a change along η (g(φ)f(η), panel (c)), and a saddle-point pattern (f(φ)f(η), panel (a)), added with appropriate weights (positive, negative or zero), for a variety of scales. The finest scale available is limited by the two-track resolution and, due to the needs of event mixing, by the number of available events. The coarser scales correspond to successively re-binning the track distribution. The analysis is best visualized by considering the scaling function G_{m,i,j}(φ, η) as binning the track distribution ρ(φ, η) in bins i, j of fineness m, while the set of wavelet functions F^λ_{m,i,j}(φ, η) (or, to be exact, the wavelet expansion
coefficients (ρ, F^λ_{m,i,j})) gives the difference distribution between the data binned with a given coarseness and that with binning one step finer. We use the WAILI 6 software to obtain the wavelet expansions. In two dimensions, it is informative to present the three modes of the power spectrum with different directions of sensitivity, P^φ(m), P^η(m), P^{φη}(m), separately. We define the power spectrum as
where the denominator gives the meaning of spectral density to the observable. So defined, the P^λ(m) of a random white-noise field is independent of m. However, for physical events one finds P^λ(m) to be dependent on m due to the presence of static texture features such as acceptance asymmetries and imperfections (albeit minor in STAR), and the non-uniformity of the dN/dη shape. In order to extract the dynamic signal, we use P^λ(m)_true − P^λ(m)_mix, where the latter denotes the power spectrum obtained from mixed events. The mixed events are composed of the (η, φ) pixels of true events, so that a pixel is an acceptance element of the finest scale used in the analysis, and in no mixed event is there more than one pixel from any given true event. The minimum granularity used in the analysis is 16 × 16 pixels.^a Systematic errors can be induced on P^λ(m)_true − P^λ(m)_mix by the process of event mixing. For example, in events with different vertex positions along the beam axis, the same values of η may correspond to different parts of the TPC with different tracking efficiency. That would fake a dynamic texture effect in η. In order to minimize such errors, events are classified into event classes with similar multiplicity and vertex position. Event mixing is done, and P^λ(m)_true − P^λ(m)_mix is constructed, within such classes. Only events with the vertex lying on the beam axis within 25 cm of the center of the chamber are accepted for analysis. To form event classes, this interval is further subdivided into five bins. We also avoid mixing events with largely different multiplicities. Therefore, another dimension of the event class definition is the multiplicity of high-quality tracks in the TPC. For central trigger events, the multiplicity range of an event class is typically 50.

^a For a quick reference, here are the scales in η. Scale 1: Δη = 1; scale 2: Δη = 1/2; scale 3: Δη = 1/4, and so on.
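The per-scale, per-mode power computation described above can be sketched with a minimal 2D Haar transform. This is my own illustration, not the actual STAR code (which uses the WAILI library), and the normalization conventions may differ:

```python
import numpy as np

def haar_mode_powers(counts):
    """Sum of squared 2D Haar detail coefficients per scale, split into the
    three directional modes (azimuthal, pseudorapidity, diagonal).
    counts: 2^n x 2^n array of track counts in (eta, phi) bins.
    Returns dict mode -> list of powers, finest scale first.
    Sketch only: normalization differs from the published observable."""
    a = counts.astype(float)
    powers = {"eta": [], "phi": [], "diag": []}
    while a.shape[0] > 1:
        # unnormalized Haar step: pairwise means/half-differences along eta...
        s = (a[0::2, :] + a[1::2, :]) / 2.0
        d = (a[0::2, :] - a[1::2, :]) / 2.0
        # ...then along phi
        ss = (s[:, 0::2] + s[:, 1::2]) / 2.0   # smooth part, carried to next scale
        sd = (s[:, 0::2] - s[:, 1::2]) / 2.0   # detail along phi only
        ds = (d[:, 0::2] + d[:, 1::2]) / 2.0   # detail along eta only
        dd = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
        powers["phi"].append(float(np.sum(sd ** 2)))
        powers["eta"].append(float(np.sum(ds ** 2)))
        powers["diag"].append(float(np.sum(dd ** 2)))
        a = ss
    return powers
```

A flat field has zero power in every mode at every scale, while a checkerboard pattern shows up only in the diagonal mode at the finest scale, which is exactly the directional separation the three modes are designed to provide.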
4. “Coherent” interference of patterns and normalization of power spectra
Imagine a reconstructed event as a distribution of points in the space of variables (η, φ, p_T). We slice this space into p_T bins and analyze two-dimensional (η, φ) patterns. The patterns from different p_T slices of the same event will amplify the texture signal when those p_T bins are merged. Depending on how the amplification works, one will find different scaling laws relating the P^λ(m)_true − P^λ(m)_mix signal amplitude to the underlying number of particles. The DWT power spectrum at each scale is (using the Haar wavelet) a sum of squared pixel-to-pixel content differences for the given pixel fineness (scale). One can think of the pixel-to-pixel content difference the same way as one thinks of a random fluctuation in the pixel content. Imagine that the pattern being analyzed is a small sub-sample of the event, and its number of particles N can be increased at will, up to the point of making it an entire event, as is the case when the sub-sample is a p_T bin of the event. The pixel content will scale with N, and if the dynamic pattern preserves its shape from one p_T bin to another, the pixel-to-pixel difference on the characteristic scale of the pattern will also scale as N. Consequently, the dynamic component of the power spectrum for this scale will grow as N². We will call this behavior "coherent", in analogy with optics, where one needs coherence in order to see interference patterns. Normalization is needed in order to, first, express different measurements in the same units and, second, eliminate trends in the p_T dependence which are induced by the design of the measure and unrelated to the physics. For the "coherent" case, the normalized dynamic texture observable is

(P^λ(m)_true − P^λ(m)_mix) / P^λ(m)_mix / N.    (5)

One could also imagine "incoherent" p_T slices. In the "incoherent" case, the pixel content will grow proportionally to N, but the pixel-to-pixel difference will grow as the RMS fluctuation of the pixel content, i.e. as the Poissonian √N. The dynamic component of the power spectrum will then grow as N (i.e. ∝ P(m)), and

(P^λ(m)_true − P^λ(m)_mix) / P^λ(m)_mix    (6)

should be used in this case. In the DWT-based texture analysis, amplification of the signal is based not on adding the patterns themselves, but on adding the power spectra of local density fluctuations, that is (continuing the optics analogy) adding the intensities rather than the field amplitudes. For
Figure 2. a) φη, b) φ, and c) η directional components of the dynamic texture in HIJING (events with impact parameter between 0 and 3 fm), arising primarily due to jets. Data sets with different p_T bin widths, indicated by the open and solid symbols, are statistically consistent at both scales when the "coherent" normalization is included. Two scales are shown: scale 1 and scale 2. The enhanced fineness scale 2 of the φ texture plot (b) reflects back-to-back correlations.
this reason, in the DWT analysis one does not require "coherence" to amplify the signals from many p_T slices, just as in optics one does not need coherence to see the light intensity increase with an increase in the number of photons.
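The two scaling laws above can be demonstrated on toy data. A sketch of my own, using the finest-scale Haar detail power (a sum of squared pixel-to-pixel half-differences) as the observable:

```python
import numpy as np

rng = np.random.default_rng(1)

# Finest-scale Haar detail power along one axis: a sum of squared
# pixel-to-pixel half-differences (illustration only).
def finest_scale_power(counts):
    d = (counts[0::2, :] - counts[1::2, :]) / 2.0
    return float(np.sum(d ** 2))

# "Coherent": the same fixed pattern scaled up with multiplicity N ->
# the power grows as N^2 (doubling N quadruples it).
pattern = rng.random((16, 16))
ratio_coh = finest_scale_power(200 * pattern) / finest_scale_power(100 * pattern)
assert abs(ratio_coh - 4.0) < 1e-9

# "Incoherent": independent Poisson fluctuations around a flat mean ->
# the power grows only as N (doubling N roughly doubles it).
p100 = np.mean([finest_scale_power(rng.poisson(100, (16, 16)).astype(float))
                for _ in range(300)])
p200 = np.mean([finest_scale_power(rng.poisson(200, (16, 16)).astype(float))
                for _ in range(300)])
assert 1.8 < p200 / p100 < 2.2
```

The N² versus N growth is what motivates the two normalizations (division by N for the "coherent" case, none for the "incoherent" case) when merging p_T slices.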
5. Textures of jets and critical fluctuations in event generators

Dynamic texture is to be expected from HIJING 7, given its particle production mechanism at RHIC energy (jets, mini-jets and string fragmentation). HIJING combines a perturbative QCD description at high p_T with a model of low-p_T processes. Figure 2 demonstrates the observability of the HIJING dynamic effects in our analysis. We see that, first, the difference between the true and mixed events is noticeable and can be studied as a function of p_T with the present HIJING statistics of around 1.6 × 10^5 events. (For all MC generators, no GEANT and no detector response simulation is done. Instead, only stable charged particles (e, μ, π, K, p) and their antiparticles from the generator output are considered, provided that they fit into the STAR TPC fiducial acceptance |η| ≤ 1. Momentum resolution and p_T acceptance are not simulated.) Second,
Figure 3. (P^η_true − P^η_mix)/P^η_mix/N from the Critical Monte Carlo generator. Events with 20 to 30 charged tracks in the STAR acceptance are analyzed. Four scales (1 through 4) are shown.
the open and closed symbols, which correspond to different p_T bin sizes, appear to fall on the same curve after the 1/N normalization, where N is the p_T bin multiplicity, as would be the case for "coherent" (see Section 4) p_T bins. Third, the rise of the signal with p_T is due to the fact that high p_T is dominated by jet production. As far as the p_T "coherence" is concerned, one would expect that a high-p_T parton, creating hadrons via fragmentation, produces similar (η, φ) patterns at different p_T as the energy sharing among the secondaries proceeds, and thus the coherent interference of p_T patterns is natural for this mechanism of particle production. These signals in HIJING are gone when jet production is turned off in the generator. The ability to study jet textures at soft, as well as high, p_T means that the study promises to be very informative, because the majority of the reconstructed tracks will be utilized. CMC is the Critical Monte Carlo generator created by N. Antoniou and coworkers 8. In the framework of an effective action approach, these authors simulate a system undergoing a second-order QCD phase transition. The η signal at low p_T (Fig. 3) is much stronger than seen in HIJING and is dominated by the coarse scale.

6. STAR measurements of dynamic textures
Elliptic flow is a prominent large-scale dynamic texture effect already well measured at RHIC 9. The DWT approach localizes elliptic flow on scale 2 and, to some degree, scale 3 of the azimuthal observables. In this report, we ignore flow and concentrate on the η observables. Fig. 4 presents the STAR measurements of long range (scale 1) fluctuations in peripheral (0.014 < mult/n0 < 0.1) collisions and compares
Figure 4. (P_true − P_mix)/P_mix/N for scale 1, peripheral events (0.014 < mult/n0 < 0.1). Open stars: STAR data for √s = 200 GeV; also shown is HIJING at the same energy.
them with HIJING simulations. Qualitatively, both sets of points behave similarly: a region of nearly flat or falling behavior around mean p_T is replaced by a rising trend for p_T > 0.8 GeV/c. This trend has already been discussed in Section 5 and is due to jets. The HIJING signal is below the STAR data at low p_T, but reaches higher values at higher p_T; its rise with p_T is stronger. From this figure we conclude that the fluctuations in local hadron density due to jet production are observable at RHIC in the soft p_T range (p_T < 2 GeV), and that their qualitative features are reasonably well described by a superposition of independent nucleon-nucleon collisions, based on the physics learned from pp(pbar) and e+e− experiments at comparable energies. Quantitatively speaking, we keep in mind that due to the nuclear shadowing effect 11, peripheral Au+Au events are not supposed to be identical to elementary collisions. A comparison of pp, dAu and AuAu data from RHIC will shed more light on this effect. In the absence of experimental data on nuclear shadowing of gluons, HIJING assumes 7 equivalence of the effect for quarks and gluons. Looking next at a central sample (Fig. 5), there is a remarkable difference: we now see a change in the p_T trend above p_T = 0.6 GeV. Instead of rising with p_T (as in the peripheral events), the STAR data points become
Figure 5. (P_true − P_mix)/P_mix/N for scale 1, central events (0.65 < mult/n0 < 1). Open stars: STAR preliminary data for √s = 200 GeV. Also shown: regular HIJING and HIJING with jet quenching (open circles), both at √s = 130 GeV.
consistent with 0. The p_T trends in the data and HIJING look opposite: the model still predicts a monotonic rise with p_T. Can there be a single explanation for both the disappearance of texture at moderate p_T and its enhancement at low p_T? The hypothetical deconfined medium is expected to suppress jet production via dissipative processes (jet quenching) 10. The medium-induced energy loss per unit length is proportional to the size of the medium, and thus the effect grows non-linearly with system size. Suppression of hadron yields at high p_T in central AuAu events with respect to scaled pp and peripheral collisions has been reported 12 and interpreted as evidence of medium effects (possibly, nuclear shadowing 11). Jet quenching is modeled in HIJING, and is seen (compare the two sets of HIJING points in Fig. 5) to affect the texture observable somewhat. If the dissipation takes place, one may expect that as jets and mini-jets thermalize, the textures associated with them migrate towards mean p_T. A transport model would be needed in order to simulate such a process. However, the low-p_T fluctuations may have an independent origin, unrelated directly to the partonic energy loss in the medium.
7. Conclusions

A non-trivial picture of texture effects emerges when the DWT power spectrum technique is applied to AuAu data from RHIC. Long-range (Δη ≈ 1) pseudo-rapidity fluctuations at soft p_T are observed in peripheral events and identified with jets and mini-jets. In central events, these fluctuations are not seen, which indicates a change in the properties of the medium. The large scale of the effect points to its early origin. An excess of fluctuations at low p_T compared to HIJING is seen in peripheral and central events.
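The DWT power-spectrum technique named above can be illustrated with a toy multiresolution (plain Haar) analysis in numpy; the event data, the hot-spot "jet" and the scale bookkeeping below are invented for illustration and stand in for, rather than reproduce, the WAILI-based analysis of the text:

```python
# Toy sketch of a wavelet (Haar) power-spectrum texture analysis on a
# binned (eta, phi) multiplicity distribution.  Hypothetical data; a plain
# Haar multiresolution stands in for the WAILI integer-lifting library.
import numpy as np

def haar_power_spectrum(hist2d):
    """Power removed at each coarsening step (finest scale first)."""
    a = np.asarray(hist2d, dtype=float)
    powers = []
    while min(a.shape) >= 2:
        # 2x2 block averages give the next-coarser approximation.
        coarse = 0.25 * (a[0::2, 0::2] + a[0::2, 1::2]
                         + a[1::2, 0::2] + a[1::2, 1::2])
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        powers.append(float(np.sum((a - up) ** 2)))  # detail power at this scale
        a = coarse
    return powers

rng = np.random.default_rng(0)
mixed = rng.poisson(10.0, (32, 32)).astype(float)      # reference texture
sibling = rng.poisson(10.0, (32, 32)).astype(float)
sibling[8:12, 8:12] += 15.0                            # jet-like hot spot

p_sib = haar_power_spectrum(sibling)
p_mix = haar_power_spectrum(mixed)
excess = [(s - m) / m for s, m in zip(p_sib, p_mix)]   # cf. (P_true-P_mix)/P_mix
```

The normalized excess power localizes at the scale of the injected hot spot, which is the sense in which the texture observable is scale-differential.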
Acknowledgment
I am grateful to Nikos Antoniou and Fotis Diakonos for providing me with simulated phase transition events to establish the sensitivity of the technique to critical phenomena.

References
1. H. Meyer-Ortmanns, Rev. Mod. Phys. 68, 473 (1996)
2. I. Bearden et al. [NA44], Phys. Rev. C 65 (2002) 044903
3. I. Daubechies, Ten Lectures on Wavelets (SIAM, Philadelphia, 1992) and references therein.
4. K. H. Ackermann et al. [STAR], Nucl. Phys. A 661, 681 (1999); Nucl. Phys. A 698, 408 (2002)
5. C. Adler, A. Denisov, E. Garcia, M. Murray, H. Strobele and S. White, Nucl. Instrum. Meth. A 470, 488 (2001)
6. G. Uytterhoeven et al., WAILI: Wavelets with Integer Lifting. TW Report 262, Department of Computer Science, Katholieke Universiteit Leuven, Belgium, July 1997.
7. X. N. Wang and M. Gyulassy, Phys. Rev. D 44, 3501 (1991); M. Gyulassy and X. N. Wang, Comput. Phys. Commun. 83, 307 (1994)
8. N. G. Antoniou, Y. F. Contoyiannis, F. K. Diakonos, A. I. Karanikas and C. N. Ktorides, Nucl. Phys. A 693 (2001) 799
9. K. H. Ackermann et al. [STAR], Phys. Rev. Lett. 86, 402 (2001); C. Adler et al. [STAR], Phys. Rev. C 66, 034904 (2002)
10. R. Baier, D. Schiff and B. G. Zakharov, Ann. Rev. Nucl. Part. Sci. 50, 37 (2000)
11. J. Ashman et al. [EMC], Phys. Lett. B 202, 603 (1988); M. Arneodo et al. [EMC], Phys. Lett. B 211, 493 (1988)
12. K. Adcox et al. [PHENIX], Phys. Rev. Lett. 88, 022301 (2002); C. Adler et al. [STAR], Phys. Rev. Lett. 89, 202301 (2002)
THE CORRELATION STRUCTURE OF RHIC AU-AU EVENTS*
THOMAS A. TRAINOR
CENPA 354290, University of Washington, Seattle, WA 98195
E-mail: [email protected]
A survey of initial measurements of fluctuations and correlations in Au-Au events at √s_NN = 130 GeV is presented. Large ⟨p_t⟩ fluctuations (14% increase over a central-limit expectation) with nonmonotonic centrality dependence are observed. m_t ⊗ m_t correlations are observed which are compatible with the ⟨p_t⟩ fluctuations and provide further information on correlation mechanisms. Large-scale isoscalar and isovector two-particle correlations are observed on axial momentum variables (η, φ) which provide information on minijet structure, thermal fluctuations, elliptic flow, net-charge correlations and source opacity.
1. Introduction

Event-wise global-variables fluctuations were advocated to search for critical phenomena in heavy-ion collisions associated with the QCD phase boundary^1. More recent theoretical proposals have included enhanced fluctuations with non-monotonic systematics near a critical endpoint of the QCD phase boundary^2, and fluctuations in particle and p_t production resulting from decay of a semi-classical Polyakov-loop condensate^3. Results from the SPS indicate that phase-boundary critical fluctuations certainly do not dominate event structure at lower energies. At RHIC we have found new sources of final-state fluctuations and correlations - incompletely-equilibrated hierarchical structure in transverse momentum and particle production from initial-state multiple scattering (e.g., minijets^4 and other aspects of partonic and hadronic cascades) - which are the dominant sources of nonstatistical fluctuations at higher energy. Separation of phase-boundary correlations from hierarchical equilibration processes requires precision differential analysis. We have therefore improved our fluctuation measures, extended measurements to isospin dependence and elaborated the connection between fluctuation measures and two-particle correlations. The result has been a wealth of fluctuation and correlation event structure at RHIC whose physics implications we are just beginning to explore.

*This work is supported by the United States Department of Energy
2. General Analysis Method

Fluctuations and correlations address by different methods the same underlying event structure. 'Fluctuations' refers to 'non-statistical' structure in momentum-space distributions. If collision dynamics cause the effective 'parent' distribution for particle production to fluctuate event-wise, or produce multiparticle correlations within events, additional fluctuations appear which are measured by differential fluctuation measures. Event-wise and ensemble-averaged correlations are revealed in multiparticle correlation spaces, direct products of the primary-hadron momentum space, restricted here to two-particle correlations and variance measures. Because correlation structure in RHIC collisions is approximately momentum-space-invariant near midrapidity we can form projections of the full two-particle momentum space onto a difference subspace spanned by difference variables such as η_Δ = η_1 - η_2 with little loss of information.

Fluctuations and correlations are simply related. Distributions on difference variables are autocorrelations. The running integral of an autocorrelation on its difference variable is a correlation integral^5. The autocorrelation difference between object and reference distributions is the net autocorrelation - a correlation measure. The integral of a net autocorrelation defines the total variance - a fluctuation measure. The running integration limit is associated with the scale of a primary-space binning. Two-point correlations provide more differential access to physical phenomena at the expense of greater statistical noise for a given data volume as compared to scaled fluctuation analysis. We extract autocorrelation distributions as projections from two-particle momentum space, and differential fluctuation measures from scaled binnings of single-particle momentum space, for each of four charge-pair types.
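The autocorrelation/correlation-integral relation just described can be illustrated numerically; the following is a minimal numpy sketch with invented toy events (illustrative names only, not the STAR analysis code):

```python
# Numeric illustration of the autocorrelation / correlation-integral
# relation: within-event pair differences form an autocorrelation, and its
# running integral is a correlation integral.  Toy events only.
import numpy as np

rng = np.random.default_rng(1)

def pair_difference_hist(events, bins):
    """All within-event pair differences: an (unnormalized) autocorrelation
    on the difference variable eta_1 - eta_2."""
    h = np.zeros(len(bins) - 1)
    for etas in events:
        d = etas[:, None] - etas[None, :]
        h += np.histogram(d[~np.eye(len(etas), dtype=bool)], bins=bins)[0]
    return h

# Object events contain correlated pairs (small eta splitting); reference
# events have the same multiplicity but no correlations.
obj, ref = [], []
for _ in range(400):
    c = rng.uniform(-1, 1, 15)
    obj.append(np.concatenate([c, c + rng.normal(0, 0.1, 15)]))
    ref.append(rng.uniform(-1, 1, 30))

bins = np.linspace(-2, 2, 81)
net_autocorr = (pair_difference_hist(obj, bins)
                - pair_difference_hist(ref, bins))  # object minus reference
# Running integral of the net autocorrelation over the difference variable:
# a net correlation integral; its full-scale value is a total-variance-like
# fluctuation measure.
corr_integral = np.cumsum(net_autocorr)
```

The excess of the net autocorrelation near zero difference is exactly what the fluctuation measures integrate over.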
Certain combinations of charge-pair types decompose correlation structure into isoscalar and isovector components. The main objects of correlation and fluctuation analysis are p_t and multiplicity correlations. p_t for the two charge states of unidentified primary hadrons is an extensive measure distributed on axial momentum space (η, φ). We decompose the structure of the p_t distribution into that of the measure itself relative to its support (e.g., ⟨p_t⟩ on a hadron distribution), and the correlation structure of the support itself (e.g., the hadron number distribution). The measure pair (n, ⟨p_t⟩) is thus the primary object of correlation and fluctuation analysis.
3. Fluctuation Measures

Total variance is defined as the difference between correlation integrals for object and reference distributions^{5,6}

Σ²_{p_t}(Δx, δx) = C_{2,obj}(p_t; Δx, δx) - C_{2,ref}(p_t; Δx, δx)
                 = Σ²_{p_t:n} + 2 p̂_t Σ_{p_t:n,n} + p̂_t² Σ²_n    (1)

and is related to the conventional per-bin variance by Σ²_{p_t}(Δx, δx) ≃ M(Δx, δx) σ²_{p_t}(δx), where M(Δx, δx) is the bin number in the distribution support (the number of occupied bins at scale δx in a bounded region) and σ²_{p_t}(δx) is the per-bin variance.

The central limit theorem is equivalent to a hypothesis of scale (bin size δx) invariance of the total variance of a measure distribution^6. This scale invariance is then a test of CLT conditions in the form of the total variance difference ΔΣ²_{p_t}(Δx; δx_1, δx_2) = Σ²_{p_t}(Δx, δx_2) - Σ²_{p_t}(Δx, δx_1), a CLT-based double-differential fluctuation measure which compares object and reference distributions across a scale interval. The total-variance difference corresponds to integration of the net autocorrelation across the interval. Deviations from CLT scale invariance are identified with net two-point correlations (net autocorrelation) within the scale interval.

Total variance for measure p_t is decomposed in Eq. (1) into three terms, the first reflecting the structure of the p_t distribution relative to its support (the notation p_t:n suggests a conditional), the third reflecting the structure of the support itself, and the second reflecting a possible interaction (covariance) between these terms. The detailed forms of the total variance difference for the first and third terms in Eq. (1) are given by

ΔΣ²_{p_t:n}(Δx, δx) = N(Δx) · { ⟨(p_t(δx) - n(δx) p̂_t)² / n(δx)⟩ - σ²_{p̂_t} }
                    ≡ N(Δx) · Δσ²_{p_t:n}(δx)
ΔΣ²_n(Δx, δx) = N(Δx) · p̂_t² { ⟨(n(δx) - n̄(δx))²⟩ / n̄(δx) - 1 }
              ≡ N(Δx) · p̂_t² Δσ²_n(δx)    (2)
where p̂_t is the inclusive mean, σ²_{p̂_t} is the inclusive variance, n(δx) and p_t(δx) are bin contents and N(Δx) is the mean total multiplicity in the acceptance. These expressions factorize the dependence on acceptance Δx (distribution boundary or detector acceptance) and on scale δx. The variance differences Δσ²(δx) are independent of acceptance, and are zero across scale intervals satisfying CLT conditions and under linear superposition of independent elements, for example A-A collisions as linear superpositions of p-p collisions, an example of CLT scale invariance which motivated the definition of Φ_{p_t}^7. For the purpose of comparison with previous analyses we define the difference factor Δσ_{p_t:n} ≡ Δσ²_{p_t:n}/(2σ_{p̂_t}) ≃ Φ_{p_t}. Δσ_{p_t:n} is by construction minimally biased by multiplicity fluctuations.

4. ⟨p_t⟩ Fluctuations: Central Events
We first present a graphical analysis of ⟨p_t⟩ fluctuations in central events at √s_NN = 130 GeV. The analysis involved 183k central (top 15% of σ_tot) events with a centrality estimate based on total charged-particle multiplicity in the detector acceptance. Momentum acceptance was restricted to 0.1 < p_t < 2.0 GeV/c and |η| < 1 over the full azimuth. Mean event multiplicity for central events was about 730 after quality cuts.
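A minimal toy Monte Carlo can mimic this analysis: event-wise ⟨p_t⟩ samples drawn from a gamma-like inclusive spectrum, tested against the central-limit expectation through the variance difference of Eq. (2). All parameter values and names below are invented for illustration, not STAR values or code:

```python
# Toy sketch of the <p_t> fluctuation analysis: event-wise variable
# sqrt(n)(<p_t> - phat)/sigma and the variance difference of Eq. (2).
import numpy as np

rng = np.random.default_rng(2)
phat, sigma = 0.52, 0.35          # inclusive mean and rms (GeV/c, toy values)
k, theta = (phat / sigma) ** 2, sigma ** 2 / phat   # matching gamma parameters

def simulate(n_events, n_mult, temp_jitter=0.0):
    """Per-event (n, sum p_t).  temp_jitter broadens the event-wise parent
    distribution, mimicking non-statistical fluctuations."""
    out = []
    for _ in range(n_events):
        scale = theta * (1.0 + temp_jitter * rng.normal())
        out.append((n_mult, rng.gamma(k, scale, n_mult).sum()))
    return np.array(out)

def dsigma2(events):
    """Delta sigma^2_{p_t:n} = <(P_t - n phat)^2 / n> - sigma^2 (cf. Eq. (2))."""
    n, P = events[:, 0], events[:, 1]
    return np.mean((P - n * phat) ** 2 / n) - sigma ** 2

clt = simulate(20000, 730)            # central-limit reference: ~0
data = simulate(20000, 730, 0.02)     # broadened "data": positive excess
```

Because the n-fold convolution of a gamma distribution is again a gamma distribution, the same samples can be compared directly against a gamma reference with parameters fixed by the inclusive moments, as in the graphical analysis.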
Figure 1. Frequency distribution (left panel) on √n̄(⟨p_t⟩ - p̂_t)/σ_{p̂_t} for 70% of primary hadrons in |η| < 1 and 183k central (top 15%) events (histogram) compared to two gamma distributions: the CLT reference (dotted curve) and with rms width broadened according to the numerical analysis (solid curve). The difference δN of data - reference normalized by its Poisson error (histogram, right panel) is compared with a curve derived from the numerical analysis.
Fig. 1 shows a frequency histogram (left panel) on the random variable √n̄(⟨p_t⟩ - p̂_t)/σ_{p̂_t} for 70% of primary hadrons of both charges (charge-independent distribution), a central-limit reference gamma distribution (dotted curve) and a gamma distribution (solid curve) with width determined by the numerical analysis described below^{8,9}. Because the n-folding of a gamma distribution is a gamma distribution, a distribution of ⟨p_t⟩ values from an event ensemble satisfying central-limit conditions is described by a gamma reference distribution determined by the lowest moments of the inclusive distribution and the mean sample number n̄. Fig. 1 also shows (right panel) the difference between the data histogram and the gamma reference in units of Poisson standard deviations, demonstrating the very large bin-wise significance of the variance excess. The horizontal axes of both figures are normalized to the inclusive rms width σ_{p̂_t}. These universal plot formats facilitate intercomparisons among collision systems and experiments. Graphically we observe a 14% width excess of charge-independent ⟨p_t⟩ fluctuations relative to a central-limit gamma reference for 70% of primary hadrons in the acceptance. A gamma distribution broadened according to the numerical analysis below describes the data well. We observe no significant contribution from anomalous event classes.

5. ⟨p_t⟩ Fluctuations: Centrality Dependence
The basis for the numerical analysis of ⟨p_t⟩ fluctuations is the variance difference Δσ²_{p_t:n} = ⟨(p_t - n p̂_t)²/n⟩ - σ²_{p̂_t} from Eq. (2). This contains the variance of the random variable in the graphical analysis: (p_t - n p̂_t)/√n = √n(⟨p_t⟩ - p̂_t). For direct comparison with σ_{p̂_t} in preliminary studies the difference factor Δσ_{p_t} = Δσ²_{p_t:n}/(2σ_{p̂_t}) is reported below. A measure separable into charge species (m_Σ = m_+ + m_-, m_Δ = m_+ - m_-) has the relations among total variances Σ²_Σ = Σ²_+ + Σ²_- + 2Σ²_{+-} and Σ²_Δ = Σ²_+ + Σ²_- - 2Σ²_{+-}. Forming CLT total variances and factorizing yields a decomposition of variance differences into charge-independent (CI) or isoscalar (Σ) and charge-dependent (CD) or isovector (Δ) components
N Δσ²_Σ = N_+ Δσ²_+ + N_- Δσ²_- + 2√(N_+ N_-) Δσ²_{+-}
N Δσ²_Δ = N_+ Δσ²_+ + N_- Δσ²_- - 2√(N_+ N_-) Δσ²_{+-}    (3)
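The underlying sum/difference variance relations are simple algebraic identities for any event sample; a quick numpy check with synthetic correlated multiplicities (toy numbers only):

```python
# Check of the charge-sum/difference variance identities with toy data.
import numpy as np

rng = np.random.default_rng(3)
base = rng.poisson(200, 50000)            # common (correlated) component
m_plus = base + rng.poisson(20, 50000)    # positive-charge multiplicity
m_minus = base + rng.poisson(20, 50000)   # negative-charge multiplicity

cov = np.cov(m_plus, m_minus, ddof=0)[0, 1]   # same normalization as np.var

# Sigma^2_Sigma = Sigma^2_+ + Sigma^2_- + 2 Sigma^2_{+-}
assert np.isclose(np.var(m_plus + m_minus),
                  np.var(m_plus) + np.var(m_minus) + 2 * cov)
# Sigma^2_Delta = Sigma^2_+ + Sigma^2_- - 2 Sigma^2_{+-}
assert np.isclose(np.var(m_plus - m_minus),
                  np.var(m_plus) + np.var(m_minus) - 2 * cov)
```

With positively correlated charge multiplicities the sum variance exceeds the difference variance, which is the sense in which the Σ and Δ combinations separate isoscalar from isovector structure.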
For a consistent system we define a covariance difference for mixed charges as Δσ²_{p_t+ p_t-} ≡ ⟨(p_t+ - n_+ p̂_t+)(p_t- - n_- p̂_t-)⟩/√(n̄_+ n̄_-), since σ²_{p̂_t+ p̂_t-} = 0 is consistent with the CLT. The centrality dependence of Δσ_{p_t} is shown in Fig. 2 for 205k √s_NN = 130 GeV Au-Au minimum-bias events from STAR, using 70% of all charged primary particles, for charge-independent or isoscalar (closed triangles) and
Figure 2. ⟨p_t⟩ difference factors for a minimum-bias distribution of 205k events with centrality estimated by charged-particle multiplicity, for charge-independent fluctuations (solid points) and charge-dependent fluctuations (open points, multiplied by 3), with extrapolation to the true primary particle number for each centrality (bands).
charge-dependent or isovector (open triangles, including a factor 3 increase for clarity) fluctuations^{8,9}. The shaded bands represent extrapolations to 100% of the primary particles in the acceptance. Statistical errors are ±0.5 MeV/c, and systematic errors for the extrapolation are conservatively estimated to be ±15%. This analysis reveals an intriguing non-monotonic dependence on centrality. Detailed analysis of the CI centrality trend suggests that ⟨p_t⟩ fluctuations for more peripheral collisions are roughly proportional to the number of binary collisions, but fall below this binary-collision trend for more central collisions, possibly indicating the growth of a dissipative medium. The increase of event-wise ⟨p_t⟩ fluctuations with centrality is arguably a manifestation of increasing structure in the velocity field of an intermediate-state QCD medium.

6. m_t ⊗ m_t Two-Point Correlations
The same mechanisms which increase the width of the event-wise mean-p_t or ⟨p_t⟩ distribution also produce correlations in the two-point m_t1 ⊗ m_t2 distribution^{9,11}. The distribution in Fig. 3 represents a combination of precision analysis techniques which reveal significant correlation structure at the per-mil level in RHIC events^13. To achieve uniform statistics in each bin, measured p_t is mapped to variable X(m_t) so as to achieve a roughly uniform 1D frequency histogram. This transformation maps the m_t interval [m_0, ∞) onto the X interval [0, 1], with most of the visible structure falling in the m_t - m_0 interval [0.1, 1.0] GeV/c². Two-particle densities defined on X_1 ⊗ X_2 for sibling pairs (from the same event) and mixed pairs (from pairs of similar events) are combined to form sibling/mixed ratios for four charge combinations. The charge combination CI ≡ {[++] + [--]} + {[+-] + [-+]} shown in
Figure 3. Isoscalar m_t ⊗ m_t ratio distributions for data (left) and model fit (right) showing a large-scale saddle structure corresponding to ⟨p_t⟩ fluctuations.
Fig. 3 contains charge-independent or isoscalar correlations. The dominant features in the ratio distribution shown in the left panel include quantum-interference and Coulomb-interaction correlations, which contribute the diagonal ridge at lower m_t terminating in the peak at highest m_t due to hard-QCD processes, and a large-scale saddle shape, descending to low points at upper left and lower right, due to fluctuations in the effective temperature which dominates the distribution. These features represent non-statistical correlations (absence of correlation would be indicated by statistical fluctuations about unit ratio).

m_t ⊗ m_t two-point correlations are directly related to ⟨p_t⟩ variance differences. The covariance of the one distribution is equal to the variance excess of the other^6. Modelling of large-scale m_t ⊗ m_t correlations is based on the 2D Lévy distribution. The Lévy distribution reflects a dissipative system governed by the Langevin equation^12. Correlation information is extracted from this distribution by a model fit with 2D Lévy distributions (example in the right panel of Fig. 3). The saddle-shaped correlation structure is thereby related to two-point correlations of temperature fluctuations in configuration space.

7. Axial Momentum-Space Correlations

p_t and n fluctuation measures are scale integrals of net autocorrelation distributions on axial momentum space (η, φ)^6. Variance excesses thus derive from two-particle correlations on (η_1, η_2, φ_1, φ_2). Excess variance corresponds to transport of particle pairs to smaller values of difference variables, an increase of correlation (conversely, fluctuation suppression corresponds to pair transport to larger difference values). Two-particle momentum-space distributions are six-dimensional objects. We can project these distributions onto lower-dimensional difference subspaces. The symmetries of momentum space near midrapidity insure that these projections discard little or no correlation structure from the primary distribution.

Two-particle correlations are studied graphically by forming the ratio of the two-particle distribution of sibling pairs (from the same event) to the distribution of mixed pairs (from different but similar events) used as a reference. Relative correlation amplitudes in central A-A collisions are typically at the per-mil level, which requires precision analysis^13. Correlation mechanisms are isospin dependent. Particle pairs are separated as to type: like-sign (LS) and unlike-sign (US). Ratio distributions for different pair types are then combined algebraically to form charge-independent (CI = LS + US) and charge-dependent (CD = LS - US), respectively isoscalar and isovector, combinations.

Ratio distributions for LS and US pairs on (η_1, η_2) and (φ_1, φ_2) are shown in Fig. 4. The striking diagonal bands on (φ_1, φ_2) are mainly due to elliptic flow. Invariance of correlation structure on sum variables η_Σ = η_1 + η_2 and φ_Σ = φ_1 + φ_2 is evident. The two-point correlation structure is therefore completely contained in the ratio projections (r̂ ≃ 1 + ΔA_2/A_2, where ΔA_2 is a net autocorrelation) on the difference variables η_Δ = η_1 - η_2 and φ_Δ = φ_1 - φ_2. This means that the joint distributions on (η_Δ, φ_Δ) for CI and CD (isoscalar and isovector) charge combinations in Fig. 5 contain all number-density correlation structure in the two-particle axial momentum space, separated according to isospin.
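The sibling/mixed ratio construction described above can be sketched with toy events; names and data are invented, not the STAR analysis code:

```python
# Toy sketch of a sibling/mixed pair-ratio analysis on the eta difference
# variable.  Events and correlation strengths are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)

def make_event(n=40):
    """Uniform eta background plus a few correlated (minijet-like) pairs."""
    etas = list(rng.uniform(-1, 1, n))
    for _ in range(10):
        e = rng.uniform(-0.9, 0.9)
        etas += [e, e + rng.normal(0, 0.05)]
    return np.array(etas)

events = [make_event() for _ in range(300)]
bins = np.linspace(-2, 2, 41)

def pair_hist(a, b, same_event):
    d = (a[:, None] - b[None, :]).ravel()
    if same_event:
        d = d[d != 0.0]                    # drop self-pairs
    return np.histogram(d, bins=bins)[0].astype(float)

sib = sum(pair_hist(e, e, True) for e in events)
mix = sum(pair_hist(events[i], events[i + 1], False)   # event-mixed reference
          for i in range(len(events) - 1))

# Normalized sibling/mixed ratio: correlated pairs give ratio > 1 at small
# eta difference.
ratio = (sib / sib.sum()) / (mix / mix.sum())
```

The per-mil-level precision quoted in the text corresponds to controlling the statistical and acceptance systematics of exactly this kind of ratio.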
Figure 4. Top two panels are (η_1, η_2) spaces for like-sign and unlike-sign pairs (left and right respectively). Bottom two panels are (φ_1, φ_2) spaces for the same sign combinations. In either case one notes the invariance of correlation structure on the sum variable or absolute momentum-space position (main diagonal, from lower left to upper right) within the STAR acceptance.
We observe in Fig. 5 qualitatively different structures in the CI and CD joint autocorrelations, which represent an interplay between configuration-space structure and a complex velocity field. CD correlations (right panel) derive from a localized (in configuration space) statistical suppression of
net-charge fluctuations during hadron formation, conveyed to momentum space via large-scale velocity correlations (axial and transverse Hubble expansion). This feature was first observed in early correlation analysis of p-p collisions^14, a consequence of canonical suppression of isovector fluctuations or, equivalently, local charge conservation. CD correlations are further modified by the presence of a medium at hadronization. The observed CD structure in central A-A collisions at RHIC is substantially modified from that in p-p, both at RHIC energies and at lower energies.
Figure 5. Joint autocorrelations on axial momentum space for charge-independent (left panel) and charge-dependent (right panel) charge combinations, respectively isoscalar and isovector correlations.
CI correlations (left panel) represent elliptic flow, jets and jet partners, and suppression of local momentum fluctuations. These number-density correlations on two-point angle space represent the collineation of emitted-particle trajectories expected from any localized velocity structure on the prehadronic medium (not only jets and large-scale flow), and from certain configurations of the emitting surface independent of source velocity. For a complete characterization, two-point ⟨p_t⟩ axial correlations must also be measured. The combination of two-particle number-density correlations and ⟨p_t⟩ correlations should provide direct information about arbitrary structure in prehadronic velocity fields and the state of the QCD medium.

8. Conclusions
Details of the strong centrality dependence of ⟨p_t⟩ fluctuations suggest that stochastic multiple scattering is the primary mechanism. Initial-state scattering provides an early correlation signature whose evolution to the final state tells us about collision dynamics, the nature of equilibration and the properties of the QCD medium. The structure of isoscalar two-point m_t ⊗ m_t distributions suggests correlated temperature fluctuations
for isoscalar correlations, perhaps derived from dissipated minijets. This structure gives us a more dispersive look at ⟨p_t⟩ fluctuations. Axial number correlations manifest several isoscalar and isovector correlation mechanisms. The major themes are jet-like correlations even at low p_t, source opacity and in-medium dissipation. This material represents a partial summary of the correlation structures revealed by a preliminary survey analysis of year-one RHIC data. A wealth of structure has emerged. RHIC collisions are not simple equilibrated systems; they are highly structured. The collisions might in fact be described as fluctuation/correlation dominated. The large relative-momentum scales (correlation lengths) observed on pseudorapidity (2 units) and azimuthal angle (2 rad) in Au-Au collisions, together with the large range in transverse momentum (up to 1-2 GeV/c) required to span the dynamic range from soft-QCD physics to perturbative-QCD hard scattering, indicate that the large-acceptance STAR detector is uniquely configured to explore this physics at RHIC.
References
1. G. Baym, H. Heiselberg, Phys. Lett. B 469 (1999) 7-11 [nucl-th/9905022]; H. Heiselberg, Phys. Rep. 351, 161 (2001).
2. M. Stephanov, K. Rajagopal, E. Shuryak, Phys. Rev. D 60 (1999) 114028 [hep-ph/9903292].
3. A. Dumitru, R. D. Pisarski, Phys. Lett. B 504 (2001) 282-290 [hep-ph/0010083].
4. X. N. Wang, M. Gyulassy, Phys. Rev. D 44 (1991) 3501.
5. P. Lipa, P. Carruthers, H. C. Eggers and B. Buschbeck, Phys. Lett. B 285, 300 (1992); H. C. Eggers, P. Lipa, P. Carruthers and B. Buschbeck, Phys. Rev. D 48, 2040 (1993).
6. T. A. Trainor, hep-ph/0001148.
7. M. Gaździcki, St. Mrówczyński, Z. Phys. C 54 (1992) 127.
8. J. G. Reid (STAR Collaboration), Nucl. Phys. A 698, 611c-614c (2002) and private communication.
9. R. L. Ray (STAR Collaboration), "Correlations, Fluctuations and Flow Measurements from the STAR Experiment," in the proceedings of the 16th International Conference on Ultra-Relativistic Nucleus-Nucleus Collisions, Quark Matter 2002, to be published in Nucl. Phys. A (2003).
10. M. J. Tannenbaum, Phys. Lett. B 498 (2001) 29.
11. A. Ishihara, U. Texas at Austin (STAR Collaboration), private communication.
12. G. Wilk, Z. Wlodarczyk, Phys. Rev. Lett. 84 (2000) 2770 [hep-ph/0002145].
13. J. G. Reid, T. A. Trainor, Nucl. Instrum. Meth. A 457 (2001) 378-383.
14. J. Whitmore, Phys. Rep. 27, 187-273 (1976).
PARTICLE SPECTRA AND ELLIPTIC FLOW IN AU + AU COLLISIONS AT RHIC
S. MARGETIS
Kent State University, Physics Department, Kent, OH 44242, USA
E-mail: margetis@star.physics.kent.edu
AND THE STAR COLLABORATION

Identified particle ratios, p_T spectra and elliptic flow have been studied with the STAR apparatus at RHIC, Brookhaven, in Au+Au interactions at √s_NN = 130 GeV. The global features of the RHIC environment include an almost baryon-free mid-rapidity region, high Bjorken energy densities and a two-fold strangeness enhancement in central collisions. The study of the spectra's inverse-slope systematics supports an overall picture dominated by collective, hydro-like, thermal components with large radial flow. Large values of v_2 (elliptic flow) have been measured, suggesting rescattering in the early phases of the collision; for the first time hydro models almost quantitatively describe the v_2 behavior in the lower transverse-momentum (p_T) region. The higher p_T region exhibits an interesting behavior which is still not understood in the context of current models.
1. Introduction
The goal of the high-energy nuclear collision program is the creation and study of a system of deconfined quarks and gluons, also known as Quark Gluon Plasma (QGP) [1]. A typical nucleus-nucleus collision goes through a series of 'phases'. The first phase includes the initial parton scattering and all large momentum-transfer (hard scattering) processes. If the system reaches sufficiently high energy densities, QGP might be briefly created. As the system expands and starts cooling down, re-hadronization occurs and a hot and dense hadron gas is formed. During this phase flavor production and flavor exchange processes are possible (inelastic scattering). This phase terminates, or reaches 'chemical freeze-out', when the system is dilute and 'cold' enough that inelastic scattering stops. The system then enters its final phase of expansion where particles can still exchange momenta (elastic collisions) until it reaches the point of 'thermal' or 'kinetic freeze-out', the point where the system is so dilute that even elastic collisions cease, i.e. the mean free path of the particles is larger than the size of the system. This is the point where the system is 'photographed', measured, by our detectors.

Although the observed particle spectra come from the last phase of the evolution of the system, they still carry a lot of information about the earlier stages of the evolution. Elliptic as well as radial flow, for example, are sensitive to the early stages of the evolution, where the system is hot and dense [2]. Flavor production (especially heavy-flavor production) as well as high transverse-momentum (p_T) phenomena also occur early in the evolution of the system. Most of this information is accessible in the study of ratios (chemistry), yields, spectra and correlations (dynamics) of the measured particles.

2. Experiment

Most of the data reported here were recorded with the STAR detector at RHIC. The main tracking detector is a Time Projection Chamber (TPC) inside a 0.25 T magnetic field, which measures the yield and the momentum of charged particles with pseudo-rapidity |η| up to 1.8. The trigger detectors involved a scintillator array surrounding the TPC (essentially triggering on mid-rapidity particle multiplicity) and a set of two hadronic calorimeters placed on either side of the experiment at zero degrees relative to the beam axis (triggering mostly on spectator neutrons). All data presented here are corrected (if appropriate) for detector acceptance, tracking efficiency and background. Since STAR has full azimuth coverage, the typical acceptance and tracking efficiency is around 90% for particles with p_T > 200 MeV/c. The minimum-bias data sample is about one million processed events before any further cuts. Centrality selection is usually done by cutting on mid-rapidity charged-particle multiplicity.
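The multiplicity-based centrality selection just described amounts to percentile cuts on the minimum-bias multiplicity distribution; a schematic numpy sketch, with a toy distribution and illustrative class boundaries (not the STAR definitions):

```python
# Schematic centrality selection by multiplicity percentiles.
# The multiplicity distribution and class boundaries are toy values.
import numpy as np

rng = np.random.default_rng(5)
# Toy minimum-bias charged-multiplicity sample: steeply falling plus a
# central-collision shoulder.
nch = np.concatenate([rng.exponential(80, 800000),
                      rng.normal(600, 120, 200000)]).clip(0)

# Class boundaries as percentiles of the total event sample
# (0-5%, 5-10%, 10-20%, 20-30%, 30-50%, 50-100% from most central down).
edges = np.percentile(nch, [0, 50, 70, 80, 90, 95, 100])

def centrality_class(mult):
    """0 = most central (top 5% in multiplicity), 5 = most peripheral."""
    below = np.searchsorted(edges[1:-1], mult, side="right")
    return len(edges) - 2 - below
```

Comparison with a Glauber-type model would then map each multiplicity class onto an impact-parameter range and a mean number of participants, as noted in the text.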
Comparisons with model calculations can relate this quantity to both the impact-parameter range and the average number of participant nucleons. More details about the apparatus can be found in [3] and about the analysis in [4].

3. Particle spectra and ratios
3.1. Global features

Figure 1 shows the pseudo-rapidity distribution of negative hadrons for the 5% most central Au+Au collisions at 130 GeV. The observed density, if
[Figure 1 legend: STAR dN_h-/dη for p_T > 100 MeV/c, and for p_T > 0 (extrapolated).]
Figure 1. Pseudo-rapidity distribution of negative hadrons for the 5% most central Au+Au collisions at 130 GeV.
one includes the positive hadrons, reaches the value of 580 ± 18, which is the average of all RHIC experiments. It is interesting to note that this value is about 40% higher than in properly scaled p-p collisions at the same energy, a clear indication that hard processes (which scale with the number of binary collisions rather than the number of participants, or wounded nucleons) play a significant role in particle production at this energy. This is in contrast to SPS energies, where participant scaling is almost exact for every colliding system [4]. We also observe that the shape is rather flat over the two pseudo-rapidity units shown, especially if one considers the slight 'Jacobian' dip one gets if pseudo-rapidity instead of rapidity is plotted. This is compatible with a Bjorken-like, hydrodynamical, longitudinally boost-invariant picture, which again happens for the first time in the history of heavy-ion collisions. We are going to examine the validity of the hydro hypothesis in these collisions further below. Figure 2 shows the measured ⟨p_T⟩ as a function of centrality. A slight increase in ⟨p_T⟩ is observed in central collisions (about 15%) relative to peripheral collisions, and a significant increase relative to both Pb+Pb collisions at the SPS and p-p collisions at √s = 1.8 TeV measured by the NA49 and CDF collaborations respectively. We can use this information and attempt an energy density calculation in Bjorken's picture. The resulting energy density for central Au+Au collisions at 130 GeV is calculated to be
Figure 2. Mean p_T as a function of collision centrality for negative hadrons. The number ranges denote the fraction of total inelastic cross section.
about 4.5 GeV/fm³, an increase of about 50% over the reported NA49-SPS value ([4] and the references therein). It is also much higher than the lattice QCD calculations, which predict a threshold for QGP production of about 1 GeV/fm³.

3.2. Particle Ratios and Thermal Fits
Particle ratios are important as they record the chemical freeze-out conditions of the expanding system. They can be used in thermal model calculations to check the hypothesis of thermalization and possibly extract the chemical freeze-out parameters (e.g. temperature and baryo-chemical potential). Specific ratios like p̄/p can also characterize the overall environment at mid-rapidity: is it baryon-rich, as at the SPS, or baryon-free? Also, the kaon/pion ratio relates directly to the question of relative strangeness production, etc. We first examine the p̄/p ratio shown in Fig. 3 as a function of the collision energy. We observe a rapid increase of the ratio and an almost asymptotic behavior around the value of unity at RHIC energies. The reported values at RHIC are around 0.7 [6], which indicates a nearly (but not completely) baryon-free environment at RHIC. This observation is also corroborated by other measured antiparticle/particle ratios (e.g. both the Λ̄/Λ and Ξ̄/Ξ ratios have roughly the same value).
Figure 3. Anti-proton to proton ratio as a function of collision energy.
Figure 4. Comparison of predicted ratios (lines) from thermal models and experimental data (symbols).
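As a minimal illustration of how such thermal fits constrain ratios: in a grand-canonical picture an antiparticle/particle ratio follows a Boltzmann factor of the chemical potentials, e.g. p̄/p ≈ exp(-2 μ_B/T_ch) at leading order. Below, the fit values quoted in the text are plugged into that standard relation; this is an illustration only, not the full statistical model used for Fig. 4:

```python
# Leading-order thermal-model estimate of the pbar/p ratio from the
# chemical freeze-out parameters quoted in the text (illustrative only).
import math

T_ch = 0.175    # chemical freeze-out temperature, GeV (170-180 MeV fit)
mu_B = 0.050    # baryo-chemical potential, GeV (~50 MeV fit)

# pbar/p = exp(-2 mu_B / T_ch): each baryon carries a fugacity factor
# exp(mu_B/T), each antibaryon exp(-mu_B/T).
pbar_over_p = math.exp(-2.0 * mu_B / T_ch)
print(round(pbar_over_p, 2))   # prints 0.56
```

The result sits near the measured ~0.7, with the residual difference absorbed by feed-down and the details of the full fit.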
Figure 4 shows the comparison of the measured ratios to thermal model fits. The agreement is reasonable and the resulting fit parameters are: a chemical freeze-out temperature of 170-180 MeV, a baryo-chemical potential of about 50 MeV and an almost vanishing strange chemical potential. Details on the procedure and the particular model can be found in [5]. The K/π ratios are used to study strangeness production and strangeness enhancement. In order to evaluate this ratio we deduce the
Figure 5. The K/π ratio at mid-rapidity as a function of collision energy. The curves are parametrizations to p-p data. Errors are statistical only. The STAR data systematic errors are shown as caps. The two STAR data points are slightly displaced for clarity.
mid-rapidity pion density in central collisions from our measurements of negative hadrons [4], anti-protons [6] and the K⁻ spectra [7]. For the most central collisions we deduce a K⁺/π⁺ ratio of 0.16 ± 0.02 and a K⁻/π⁻ ratio of 0.15 ± 0.02. Figure 5 is a compilation of K/π results for central ion collisions. We observe that the K⁺/π⁺ ratio peaks at AGS energies while the K⁻/π⁻ ratio increases monotonically with energy. The peaking of the positive ratio can be understood as an interplay of a dropping baryon density at mid-rapidity and an increasing kaon-pair production rate [7]. The same figure also shows parametrized p-p data (curves) and p̄-p data (triangles) at certain energies. Our measurement indicates a 50% enhancement over these collisions at similar energies. The enhancement is similar at SPS and RHIC for the negative ratio, while the positive one is higher at SPS due to a larger net-baryon density at mid-rapidity. In order to complete the picture and disentangle the various mechanisms responsible for this enhancement one needs the analysis of heavy strange and multi-strange baryons, which is about to be completed.

3.3. Slope Systematics - Radial Flow
We now turn to the observed 'temperature' (inverse slope parameter) systematics. Figure 6 shows the p_T spectra of various particle
pairs (strange and non-strange baryons and mesons). It is apparent that their slopes are different; in particular, the heavier particle always has a less steep slope (higher apparent temperature). This difference makes the baryon and meson spectra cross at about 1.5 GeV/c, the baryon yields being higher at high momenta, i.e. it appears to be 'easier' to produce a high-p_T strange baryon than a strange meson, something which would require us to re-think and search for novel baryon production mechanisms! A simpler explanation would be that this effect is an indication of strong radial flow present in the system. Radial flow has the effect of boosting the apparent p_T of heavier particles in proportion to their mass. Figure 7 summarizes the fitted slope parameters as a function of particle mass for RHIC and SPS energies. One should keep in mind that the fitted temperature also depends on the fitted p_T range, especially in the case of heavy particles. Only a simultaneous, full fit of the entire sample is the appropriate way to extract quantitative numbers.
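The mass ordering of the inverse slopes can be illustrated with the common non-relativistic estimate T_eff ≈ T_kin + ½ m ⟨β_T⟩². A rough sketch using the blast-wave values quoted in the text (⟨β⟩ ≈ 0.55, T_kin ≈ 110 MeV); this approximation is only indicative, particularly for pions:

```python
def apparent_temperature(m_gev, t_kin_gev=0.110, beta_avg=0.55):
    """Non-relativistic estimate of the inverse slope ('apparent temperature')
    for a thermal source with transverse radial flow:
        T_eff ~ T_kin + (1/2) m <beta_T>^2
    Defaults are the blast-wave values quoted in the text."""
    return t_kin_gev + 0.5 * m_gev * beta_avg**2

# pion ~131 MeV, kaon ~185 MeV, proton ~252 MeV: T_eff rises linearly with mass
for name, m in [("pion", 0.140), ("kaon", 0.494), ("proton", 0.938)]:
    print(f"{name}: T_eff ~ {1000 * apparent_temperature(m):.0f} MeV")
```

The linear rise with mass is exactly the trend summarized in Fig. 7; deviations (e.g. for multi-strange baryons) signal earlier decoupling.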
Figure 6. Transverse momentum spectra for various strange and non-strange baryons and mesons in central Au+Au collisions at √s_NN = 130 GeV (STAR).
With the exception of multi-strange baryons, which appear to decouple earlier due to their lower inelastic cross sections, there is an almost linear dependence between the mass and the apparent temperature of the spectra, an indication of strong radial flow in the system. Comparing the RHIC
data with the reported SPS values we see that at RHIC the radial flow is even stronger than at the SPS, an indication of violent, explosive dynamics. Quantitative results were obtained through simultaneous fits to the spectra with a hydro-inspired 'blast-wave' model. The fits yield average flow velocities of ⟨β⟩ = 0.55 c and thermal or kinetic freeze-out temperatures of about 110 MeV, which is a typical and almost universal thermal freeze-out temperature for all high energy heavy ion collisions.
Figure 7. Fitted slope parameters as a function of particle mass for RHIC (STAR preliminary, √s_NN = 130 GeV) and SPS energies.
4. Elliptic Flow
The azimuthal anisotropy of the transverse momentum distribution for non-central collisions is thought to be sensitive to the early evolution of the system [8]. The second Fourier coefficient of this anisotropy, v₂, is called elliptic flow. It is an important observable since it is sensitive to the rescattering of the constituents in the hot and dense phase of the collision. This rescattering converts the initial spatial anisotropy of the overlapping nucleons into a momentum anisotropy. The spatial anisotropy decreases as the system expands and self-quenches, thus making elliptic flow particularly sensitive
Figure 8. Elliptic flow as a function of centrality. Open rectangles show a range of values for v2 in the hydro limit.
to the early stages of the system evolution. Being dependent on rescattering, elliptic flow is therefore sensitive to the degree of thermalization of the system at early times. Hydrodynamic models, which are based on the assumption of complete local thermalization, usually predict the strongest signals. Figure 8 shows the measured elliptic flow, v₂, as a function of centrality. A very strong signal is observed in the data (filled circles), reaching the value of 6% for peripheral collisions, a value which is more than 50% higher than the SPS one, indicating a stronger early-time thermalization at RHIC. In the same figure the data are compared to hydro predictions (open rectangles). The agreement is very good everywhere except for the very peripheral collisions where, in any case, the hydro model and the assumption of thermalization are thought to break down. We should note that RHIC is the first time that a hydro-model prediction describes the experimental measurements. Further studies showed that the hydro model can also describe the low-p_T behavior of identified particle flow [9]. The agreement breaks down at large transverse momenta (above about 1 GeV). Figure 9 shows v₂ as a function of p_T for minimum bias Au+Au collisions [10]. The data (filled circles) exhibit a flattening around p_T = 3 GeV. The pure hydro calculation starts deviating from the data at about 1 GeV. The various broken lines introduce high initial gluon densities in order to 'quench', via gluon (dE/dx-like) radiation, the high-p_T particles. This high-p_T behavior of the elliptic flow is still an unresolved puzzle in the RHIC data.
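Operationally, v₂ is the ⟨cos 2(φ − Ψ_RP)⟩ moment of the azimuthal distribution relative to the reaction plane. A toy sketch (not the STAR analysis code, which must also estimate the event plane and correct for its resolution):

```python
import math, random

def v2_observed(phis, psi_rp=0.0):
    """Second Fourier coefficient of the azimuthal distribution relative to the
    reaction plane: v2 = < cos 2(phi - Psi_RP) >."""
    return sum(math.cos(2.0 * (phi - psi_rp)) for phi in phis) / len(phis)

# Toy sample: draw angles from dN/dphi ~ 1 + 2 v2 cos(2 phi) by accept-reject,
# with an input v2 of 6% (the peripheral value quoted in the text).
random.seed(1)
v2_in, phis = 0.06, []
while len(phis) < 200000:
    phi = random.uniform(-math.pi, math.pi)
    if random.uniform(0.0, 1.0 + 2.0 * v2_in) <= 1.0 + 2.0 * v2_in * math.cos(2.0 * phi):
        phis.append(phi)
print(round(v2_observed(phis), 3))  # recovers ~0.06 within statistics
```

The same estimator applied to an isotropic sample would return zero within statistical errors, which is why v₂ is a clean measure of the momentum-space anisotropy.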
Figure 9. Elliptic flow as a function of p_T for minimum bias Au+Au collisions. The filled circles (data) are compared to pure hydro calculations (solid line) and hydro+pQCD calculations assuming various initial gluon densities.
Acknowledgments
I wish to thank the organizers for the warm reception and the impeccable organization of the conference. This work was supported by the Division of Nuclear Physics and the Division of High Energy Physics of the Office of Science of the U.S. Department of Energy and other funding agencies.

References
1. E. Laermann, Nucl. Phys. A610, 1c (1996).
2. H. Sorge, Phys. Lett. B402, 251 (1997).
3. K.H. Ackermann et al., Nucl. Phys. A661, 681c (1999).
4. C. Adler et al., Phys. Rev. Lett. 87, 112303 (2001).
5. P.B. Munzinger et al., Phys. Lett. B518, 241 (2001).
6. C. Adler et al., Phys. Rev. Lett. 86, 4778 (2001).
7. C. Adler et al., submitted to Phys. Lett. B, nucl-ex/0206008.
8. K.H. Ackermann et al., Phys. Rev. Lett. 86, 402 (2001).
9. C. Adler et al., Phys. Rev. Lett. 87, 182301 (2001).
10. C. Adler et al., submitted to Phys. Rev. Lett., nucl-ex/0206006.
A MODEL FOR THE COLOR GLASS CONDENSATE VERSUS JET QUENCHING

A. P. CONTOGOURIS
Department of Physics, McGill University, Montreal, Quebec, H3A 2T8, CANADA

F. K. DIAKONOS AND P. K. PAPACHRISTOU
Nuclear and Particle Physics, University of Athens, Panepistimiopolis, Athens 15771, GREECE

A model for the Color Glass Condensate as opposed to jet quenching is proposed for the explanation of the presently available RHIC data. Good fits to these data are presented. A clear way to distinguish between the two possible explanations is also given.
Recent RHIC data on hadron (π⁰) production at large transverse momentum p_T in central Au+Au collisions show a clear suppression of the rates [1]. The usual explanation is that the phenomenon is due to jet quenching, which thus makes it a probe of the gluon plasma [2,3]. In the present work we propose an explanation of the same data as due to the Color Glass Condensate [4]. Our account of the data also provides a way to distinguish between the two explanations. At very high energies the number of partons (mainly gluons) in a nucleus grows very rapidly and eventually leads to saturation [5,4]. We will attempt to express this saturation in the simplest way, by invoking expressions used at small x. With g(x, Q²) the gluon distribution, at small x (P_gg(x) → 2N_c/x) a simple evolution equation is [equation not reproduced in this copy]. Here R amounts to a free parameter, but will be taken as the radius of the quarks (≈ 0.1 fm). An approximate integration of the last term leads to the modified gluon distribution
[equation not reproduced in this copy].

The basic formula for p-p → π⁰ + X is [equation not reproduced], where K is a K-factor, here for simplicity taken K = 2, θ = π/2, and x_T and the subprocess color factors C(gg) etc. enter in the standard way. For N₁N₂ → π⁰ + X one has [equation not reproduced],
where T_N(b) is the Glauber thickness function (T_N(b) = ∫ dz ρ_N(r), with ρ_N the density of nucleus N = Au), normalized as ∫ d²b T_N(b) = 1. We use a Gaussian ρ_N(r) ∝ e^(−r²/a²) and b_m = 4.7 fm. The inclusive invariant cross section is supplemented by an intrinsic transverse momentum, Gaussian with ⟨k_T²⟩ = 1 GeV². For the parton distributions F_{a/p} we use the CTEQ5 set, leading order [7], and for the fragmentation functions we use those of Binnewies et al., again leading order [8]. Finally, we use Q² = p_T² in Eqs. (2) and (3). Our results at 130 A GeV for Au+Au → π⁰ production at θ = π/2 are shown in Fig. 1 (solid line). The dashed line shows the results without the effects of the Color Glass Condensate. On the same figure we plot the results for jet quenching corresponding to opacity = 3 (dotted line). Both the solid and the dotted lines account well for the data [1]. However, at large p_T (p_T ≳ 6 GeV) the effect of the Color Glass Condensate tends to disappear and the solid line approaches the dashed line; this is due to the factor appearing in the modified gluon distribution. On the other hand, jet quenching remains below, and this gives the possibility to distinguish between the two mechanisms. At very low p_T (< 2 GeV) all lines diverge: perturbative QCD is inapplicable and various effects, like recoil resummation, play a dominant role.
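For a Gaussian density the thickness function has a closed form, T_N(b) = e^(−b²/a²)/(π a²), once normalized as above. A small sketch checking the normalization ∫ d²b T_N(b) = 1 (the width parameter a used here is an illustrative placeholder, not the fitted Au value):

```python
import math

def gaussian_thickness(b, a):
    """Glauber thickness T_N(b) = integral dz rho_N(r) for a Gaussian density
    rho_N(r) ~ exp(-r^2/a^2). The z-integration is analytic, and normalizing
    so that integral d^2b T_N(b) = 1 gives T_N(b) = exp(-b^2/a^2) / (pi a^2)."""
    return math.exp(-b * b / (a * a)) / (math.pi * a * a)

# Numerical check of the normalization: integral d^2b T_N(b) = 2 pi ∫ b T_N(b) db
a, db = 2.4, 0.001
norm = sum(2 * math.pi * b * gaussian_thickness(b, a) * db
           for b in (i * db for i in range(1, 20000)))
print(round(norm, 3))  # -> 1.0
```

The same check applies to any density profile used in the Glauber overlap integral; only the closed form of T_N(b) changes.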
Acknowledgments
A number of helpful discussions with N. Antoniou, A. Bialas, S. Jeon and E. Mavrommatis, as well as an independent check of some of our results by Z. Merebashvili, are gratefully acknowledged. This work was also supported by the Natural Sciences and Engineering Research Council of Canada and by the Greek State Scholarships Foundation (IKY).
References
1. K. Adcox et al. (PHENIX Collaboration), Phys. Rev. Lett. 88, 022301 (2002) and nucl-ex/0109003.
2. R. Baier, Yu. Dokshitzer, A. Mueller and D. Schiff, Nucl. Phys. B 484, 291 (1997) and Nucl. Phys. B 531, 403 (1998).
3. M. Gyulassy, P. Levai and I. Vitev, Phys. Rev. Lett. 85, 5535 (2000).
4. L. McLerran, hep-th/0202025.
5. L. Gribov, M. Levin and M. Ryskin, Phys. Rep. 100, 1 (1983); A. Mueller and J. Qiu, Nucl. Phys. B 268, 427 (1986).
6. J. Kwiecinski, Nucl. Phys. B (Proc. Suppl.) 39 (Issues 2-3), 58 (1995).
7. H.L. Lai et al. (CTEQ Collaboration), Eur. Phys. J. C 12, 375 (2000).
8. J. Binnewies, B. Kniehl and G. Kramer, Phys. Rev. D 52, 4947 (1995).
9. G. Fai et al., hep-ph/0111211.
WAVELET ANALYSIS IN PB+PB COLLISIONS AT CERN-SPS

G. GEORGOPOULOS, P. CHRISTAKOGLOU, A. PETRIDIS, M. VASSILIOU
Physics Department, University of Athens, Greece
We apply a multiresolution analysis on the phase space (0.005 < p_T < 1.5 GeV/c, 2.6 < η < 4.8) of the charged primary produced hadrons. A sample of central events from Pb+Pb interactions at 158 A GeV, recorded by the NA49 (CERN-SPS) wide acceptance experiment, was analyzed. The purpose of the present event-by-event analysis, which is based on a two-dimensional Discrete Wavelet Transform (DAU20), is to measure the dynamical fluctuations according to the scale. We conclude the absence of events inheriting fluctuations in the scales probed, and for the finest resolution scale the strength of dynamical fluctuations is measured to be less than 2%.
1. Introduction
The ultimate goal in the study of relativistic heavy ion collisions is the production and characterization of an extended volume of deconfined quarks and gluons, the QGP [1]. Recent data suggest that conditions consistent with the creation of the QCD phase transition are indeed reached in Pb+Pb collisions at 158 A GeV at the CERN SPS [2]. On the other hand it is suggested that phase instabilities near the QCD phase transition can result in non-statistical fluctuations that are detectable in the final-state observables [3-6]. The NA49 experiment [7] has already measured event-by-event fluctuations of average event properties. In particular, NA49 studied the average transverse momentum ⟨p_T⟩ and the ratio K/π of the produced numbers of kaons and pions. The fluctuations of these quantities from event to event test kinetic and flavor equilibration. The experimental results [8,9] have shown that genuine dynamical fluctuations are small (1.2% in ⟨p_T⟩ and 2.8% in K/π at 90% C.L.). As a next step, it is constructive to develop an analysis method which can identify fluctuations on any scale. This kind of analysis is supported by the theoretical prediction [10] that small-scale fluctuations are more easily
washed out by diffusion, due to secondary scattering among the hadrons, while large-scale fluctuations formed early in the collision are more likely to survive diffusion and consequently to be detected. In this paper we concentrate on a multiresolution analysis based on the Discrete Wavelet Transform (DWT), in order to extract the typical features of an event in terms of location and scale parameters [11]. Wavelets are functions that satisfy certain mathematical requirements and are used in representing data or other functions [12]. The wavelet approach builds all functions from localized ones, which act as a microscope, using translations to localize and dilations to magnify the inspected function. A given distribution of an observable(s) can be represented by its wavelet transform. The method fits into the general topic of the representation of functions by orthogonal sets (non-orthogonal wavelets can introduce unphysical correlations). The wavelet coefficients measure fluctuations with respect to the local mean of the given distribution. The localization is such that each coefficient is orthogonal to the others at the same scale, and to those at other scales. This makes the distribution of the wavelet coefficients ideal for use in higher-order statistics on a scale-by-scale basis [14]. Due to the high multiplicities in central Pb+Pb collisions at 158 A GeV recorded in the NA49 large acceptance spectrometer, a statistically significant determination of momentum and pseudorapidity (rapidity) distributions can be performed for a single event, allowing for a multiresolution analysis of the above observables. Our analysis is based on the DWT and not on continuous wavelets, since the coefficients of the latter depend strongly on the geometrical acceptance of the detector. The choice of Daubechies' (DAU20) wavelets [13] was based on the properties they provide: multifractality and high resolution in the localization in space and scale.
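The hierarchical smooth/detail decomposition underlying the method can be sketched with the simplest orthogonal wavelet (Haar, chosen for brevity; the actual analysis uses the 20-coefficient Daubechies filters, DAU20):

```python
import math

def haar_dwt(data):
    """One-dimensional pyramidal DWT with the Haar wavelet. Input length must
    be a power of two, N = 2**j. Each pass splits the current 'smooth' vector
    into half-length 'smooth' and 'detail' vectors; the 'detail' components
    accumulated along the way, plus the final smooth pair, are the output."""
    smooth, details = list(data), []
    while len(smooth) > 2:
        s = [(smooth[2 * i] + smooth[2 * i + 1]) / math.sqrt(2) for i in range(len(smooth) // 2)]
        d = [(smooth[2 * i] - smooth[2 * i + 1]) / math.sqrt(2) for i in range(len(smooth) // 2)]
        details.append(d)   # detail coefficients recorded at this scale
        smooth = s          # recurse on the half-length smooth vector
    return smooth, details

smooth, details = haar_dwt([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])
print([len(d) for d in details])  # -> [4, 2]
```

Because the transform is orthogonal, the total "energy" (sum of squares) of the data is exactly preserved across the smooth and detail coefficients, which is what makes scale-by-scale fluctuation measures well defined.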
2. Experimental Setup and Data Selection
The NA49 experimental setup is described in [7]. For the purpose of this analysis we used a dataset of 100,000 central Pb+Pb collisions at 158 A GeV, selected by applying a trigger on the energy deposited in the NA49 forward calorimeter (VETO). The 5% most central events were selected by the trigger, which corresponds to an impact parameter range of b ≤ 3.5 fm. In the analysis we have accepted events that were uniquely reconstructed at the target position. More than 1000 charged particles for a single central
Pb+Pb collision at 158 A GeV are recorded by the NA49 detector system. Tracks were selected according to the "global tracking chain", taking into account the split-track corrections [15]. A cut on the extrapolated impact parameter of the particle track at the primary vertex was used to reduce the contribution of non-vertex particles originating from weak decays and secondary interactions. Particles are selected in the region of transverse momentum 0.005 < p_T < 1.5 GeV/c and pseudorapidity 2.6 < η < 4.8. To estimate the amount of dynamical fluctuations in the distribution of an event-by-event observable it is important to have reference events where only statistical fluctuations are present. Thus, 60,000 mixed events were analyzed. Mixed events were generated by randomly drawing numbers of particles from the track pool according to the multiplicity distribution of real events. Since the mixed events are random samples from the same inclusive track population, they also reproduce the phase space distributions of the real events.
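The mixed-event construction can be sketched as follows (illustrative names and toy numbers, not the NA49 implementation):

```python
import random

def make_mixed_event(track_pool, multiplicities):
    """Build one mixed event: draw a multiplicity from the measured multiplicity
    distribution, then sample that many tracks at random from the inclusive
    track pool. Correlations between tracks of a single real event are destroyed,
    leaving only statistical fluctuations."""
    n = random.choice(multiplicities)    # multiplicity drawn from real events
    return random.sample(track_pool, n)  # statistically independent tracks

random.seed(3)
# Toy inclusive pool of (pT, eta) pairs inside the analyzed acceptance
pool = [(random.uniform(0.005, 1.5), random.uniform(2.6, 4.8)) for _ in range(5000)]
mults = [900, 1000, 1100]
event = make_mixed_event(pool, mults)
print(len(event) in mults)  # -> True
```

By construction a mixed event has the inclusive single-particle distributions of the data but no genuine event-by-event correlations, which is what makes it the statistical reference.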
3. Analysis and Results
The one-dimensional discrete wavelet transform consists of applying a chosen orthogonal (N × N) wavelet coefficient matrix hierarchically: first to the full data vector of length N, then to the "smooth" vector of length N/2, then to the "smooth-...-smooth" vector of length N/4, and so on until only a trivial number of "smooth-...-smooth" components (usually two) remain. The procedure is sometimes called a "pyramidal algorithm" [16]. The value of the dimension N is related to the scale parameter j as N = 2^j. The output of the DWT consists of these remaining components and all the "detail" ones that were accumulated along the way. The information lost from the smooth components as the scale goes down by a power of two is recorded in the "detail" components at that scale, since roughly the "detail" components of each scale j are just the difference between this scale and the previous scale j−1. Therefore the detail coefficients include the whole information about the initial data distribution, hierarchically classified with respect to the scale and to the position, which is related to their index. A wavelet transform of a 2-dimensional (N × N) array (e.g. η, p_T) is most easily obtained by transforming the array sequentially on its first index (for all values of the other index), then on its second. Each transformation corresponds to multiplication by an orthogonal matrix. By matrix associativity, the result is independent of the order
in which the indices were transformed. In order to quantify our results we adopt the 1-D DWT power spectrum analysis technique [11] and extend it to the 2-D analysis. Thus, we define the 2-D Wavelet Partition Function (WPF) as

W_{q,j} = Σ_k Σ_l |d_{jkl}|^q,
where d_{jkl} are the detail coefficients for the j-th level of analysis, with k, l the index numbers indicating the position in the 2-D array, and q can take any integer value 2, 3, 4, 5, .... In order to study the response of W_{q,j} for the different scales j (power spectrum) and also its dependence on the parameter q, we first parametrized the (η, p_T) phase space of the charged particles produced in Pb+Pb collisions and then generated events which were analyzed according to the multiresolution analysis. The power spectrum of ln W_{q,j} is shown in Fig. 1a. The shape of the 2-dimensional (η, p_T) distribution of each event is reflected in the left part (j ≤ 4) of the figure, while the stochastic fluctuations of the distribution are reflected in the log-linear increase of the W_{q,j} values for j ≥ 5. Figure 1b shows the dependence of the relative deviation σ(W_{q,j})/⟨W_{q,j}⟩ on q for the different scales j. One can notice that for q ≤ 5 the relative deviation remains constant for all scales, while for q > 5 it deviates. In the present analysis we employ q = 5 to maximize the sensitivity of the method. The multiresolution analysis is then applied on the NA49 restricted (η, p_T) distribution (Fig. 2) of the charged particles produced in central Pb+Pb collisions at 158 A GeV. In the current analysis we applied the discrete Daubechies wavelet transformation to construct the (N × N) transformation matrix (N = 2^j, j = 8), which then acts on the 2-dimensional (η, p_T) array for each event, producing hierarchically the detail coefficients for every scale j. By plotting the values of ln W_{q,j} as a function of the event multiplicity for different scales, we noticed a linear dependence, as shown in Fig. 3a (for q = 5, j = 6). In order for our analysis to be independent of the event multiplicity we calculate the Q_{q,j} values, obtained by projecting ln W_{q,j} on the line fitted to the data. The projected Q_{q,j} values are plotted in Fig. 3b as a function of the multiplicity.
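The role of the moment order q is easy to see in a toy sketch: with q = 5 the WPF is dominated by the largest detail coefficients, so a localized "hot spot" stands out against Gaussian noise (illustrative numbers only):

```python
import random

def wavelet_partition_function(detail_coeffs, q=5):
    """2-D Wavelet Partition Function of the text: W_{q,j} = sum_{k,l} |d_{jkl}|^q,
    summed over the positions (k, l) of the level-j detail coefficients."""
    return sum(abs(d) ** q for row in detail_coeffs for d in row)

# Toy level-j detail array: purely statistical noise vs. one localized hot spot.
random.seed(0)
noise = [[random.gauss(0.0, 0.1) for _ in range(8)] for _ in range(8)]
spot = [row[:] for row in noise]
spot[3][4] += 1.0  # a localized dynamical fluctuation at position (k, l) = (3, 4)
print(wavelet_partition_function(spot) > wavelet_partition_function(noise))  # -> True
```

Raising q beyond the noise-stability region (q > 5 in Fig. 1b) would weight the tails even more strongly, at the price of larger statistical scatter, which motivates the choice q = 5.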
The analysis is based on the study of the spread of the Q_{q,j} distributions, which are shown in Fig. 4 together with Gaussian fits for j = 1, 3, 6 and 8.
To qualify our results we define a quantity η_j characterizing each event,

η_j = |R_{q,j} − ⟨R_{q,j}⟩| / σ_{q,j},

where R_{q,j} is the WPF corresponding to the (η, p_T) distribution of each individual event, and ⟨R_{q,j}⟩ is the mean value of the R_{q,j} distribution of the total event sample, of width σ_{q,j}. From the analysis of a sample of 60,000 mixed events we observed that there was no event with an η_j value greater than five in more than one level of analysis. In Figs. 5a and 5b we display the value of η_j as a function of the event number for different levels of analysis j (j = 2, ..., 8) for 2000 real and mixed events, respectively. At this point we want to mention that any fluctuation in the phase space under study will be present as a distribution and not as a δ-function. Therefore, one expects the wavelet analysis to identify the existence of any dynamical fluctuation in more than one scale. To avoid statistical fluctuations we require η_j > 5, and this to happen in more than one scale j [11]. The above condition is set on the NA49 experimental data sample, and it may play the role of a selection criterion for identifying "critical" events, if any. No event from the 60,000-event data sample fulfilled the above condition so as to be classified as a critical event. To estimate the amount of dynamical fluctuations present in the data we define σ_dyn as the quadratic difference between the widths of the real- and mixed-event distributions, σ_dyn = (σ²_real − σ²_mixed)^{1/2}.
Fig. 6 shows the contribution of dynamical fluctuations in the experimental data for each scale. The σ_dyn values in the left part of the figure (j ≤ 4) reflect the fluctuations in the shape of the two-dimensional distribution, which changes from event to event (see also the comments on Fig. 1a). The right part of the figure (4 ≤ j ≤ 8) gives the strength of dynamical fluctuations for the higher scales j, which is of the order of σ_dyn ≤ 5%. For j = 8, which corresponds to the experimental resolution of the analyzed phase space, the measured value is σ_dyn = 1.76% ± 0.68%. In order to test the sensitivity of the method to dynamical fluctuations present in the data for j = 5, 6, 7, 8, we used a simple fluctuation model to impose artificial fluctuations on mixed events and to study the response of the parameter estimation. Hence, after parametrizing the (η, p_T) phase
space, we produced a percentage of randomly distributed tracks which are doped in a random way into the mixed data, in a bin whose size is defined according to the resolution and whose position is defined according to the parametrization. This simulation, containing a known fluctuation, is used to check which percentage of doped tracks produces a σ_dyn value corresponding to the value observed in the data.
Figure 7 shows the linear increase of σ_dyn as a function of the percentage of the input fluctuations. Varying the frequency of occurrence of the input fluctuation, we can determine the exclusion region shown in Figure 8. The relative frequency F of events containing doped tracks is plotted versus the fraction of doped tracks included in the mixed event. We see that for j = 8 and for F = 0.6, fluctuations at the level of 0.7% are ruled out at 90% confidence level.

4. Summary
It is widely accepted that ultrarelativistic nucleus-nucleus collisions offer the conditions under which transitions from ordinary (confined) matter to a quark-gluon plasma (QGP) state can be transiently attained. In view of such a prospect it becomes imperative to identify experimental signals in the outcome of these collisions whose origin can be ascribed to the phase transition. One of the signatures suggested to identify a possible phase transition is to investigate the existence of local fluctuations in the multi-hadron states and to measure the content of such fluctuations at different scales. Thus, we developed a multiresolution analysis method based on the discrete wavelet transform (DAU20) in order to identify and classify events exhibiting dynamical fluctuations. A selection criterion, derived from the study of mixed events, was introduced in the event-by-event analysis for identifying such critical events in the real data sample. We concluded the absence of events inheriting fluctuations in the scales probed. This might be interpreted either as evidence of thermalization of the system or as a void of critical signatures. For the finest resolution scale we measured the strength of dynamical fluctuations to be σ_dyn = 1.76% ± 0.68%.
References
1. See for recent results: Proc. of Quark Matter 2001, Nucl. Phys. A698 (2002).
2. R. Stock, Nucl. Phys. A661, 282c (1999).
3. H. Heiselberg, Phys. Rep. 351, 161 (2001).
4. S. Mrowczynski, Phys. Lett. B314, 118 (1993).
5. M. Stephanov et al., Phys. Rev. Lett. 81, 4816 (1998).
6. A. Dumitru and R. Pisarski, Phys. Lett. B504, 282 (2001).
7. S. Afanasiev et al. (NA49 Collab.), Nucl. Inst. Meth. A430, 210 (1999).
8. H. Appelshauser et al., Phys. Lett. B459, 679 (1999).
9. S. Afanasiev et al. (NA49 Collab.), Phys. Rev. Lett. 86, 1965 (2001).
10. E.V. Shuryak and M.A. Stephanov, Phys. Rev. C63, 064903 (2001).
11. G. Georgopoulos, A. Petridis, M. Vassiliou, Mod. Phys. Lett. A15, 1051 (2000).
12. I. Bearden et al. (NA44 Collab.), nucl-ex/0107007.
13. I. Daubechies, "Ten Lectures on Wavelets", SIAM (1992).
14. "Wavelets in Physics", ed. by Li-Zhi Fang and R.L. Thews, World Scientific (1998).
15. C. Roland, Ph.D. Thesis, University of Frankfurt (1999).
16. W. Press et al., "Numerical Recipes in C", Cambridge University Press (1992).
Figure 1. The power spectrum of ln W_{q,j} (left plot). The dependence of the relative deviation σ(W_{q,j})/⟨W_{q,j}⟩ on the parameter q for different scales j (right plot).
Figure 2. The analyzed (η, p_T) phase space.
Figure 3. The dependence of ln W_{q,j} on the multiplicity before (left plot, q = 5, j = 6) and after the correction (right plot).
Figure 4. The Q_{q,j} distributions plotted with the Gaussian fits for j = 1, 3, 6 and 8.
Figure 5. The value of η_j as a function of the event number for different scales j for 2000 real (bottom) and 2000 mixed (top) events.
Figure 6. The contribution of dynamical fluctuations σ_dyn in the experimental data for each scale j: (σ_dyn)_{j=1} = 0.1028 ± 0.0140; (σ_dyn)_{j=2} = 0.2338 ± 0.0057; (σ_dyn)_{j=3} = 0.1563 ± 0.0079; (σ_dyn)_{j=4} = 0.0876 ± 0.0611; (σ_dyn)_{j=5} = 0.0345 ± 0.0118; (σ_dyn)_{j=6} = 0.0363 ± 0.0060; (σ_dyn)_{j=7} = 0.0206 ± 0.0078; (σ_dyn)_{j=8} = 0.0176 ± 0.0068.
Figure 7. σ_dyn as a function of the percentage of artificial fluctuations.
Figure 8. Exclusion regions at 90% C.L. for the frequency F of events containing fluctuations versus the percentage of artificial fluctuations.
HEAVY QUARK CHEMICAL POTENTIAL AS PROBE OF THE PHASE DIAGRAM OF NUCLEAR MATTER
P. G. KATSAS*, A. D. PANAGIOTOU† AND T. GOUNTRAS
University of Athens, Physics Department, Nuclear and Particle Physics Division, GR-15771 Athens, Hellas
*E-mail: [email protected]
†E-mail: [email protected]

We study the temperature dependence of the strange and charm quark chemical potentials in the phase diagram of nuclear matter, within a modified and generalized hadron gas model, in order to consider phase transitions and to describe phenomena taking place outside the hadronic phase. We employ, in a phenomenological way, the Polyakov loop and scalar quark condensate order parameters, mass/temperature-scaled partition functions, and enforce flavor conservation. We propose that the resulting variation of the heavy quark chemical potentials can be directly related to the quark deconfinement and chiral phase transitions. The chemical potential of the strange and charm quarks can then be considered as an experimentally accessible "order parameter", probing the phase diagram of QCD.
1. Introduction
One of the main problems in the study of the phase transitions occurring at the level of strong interactions is finding an unambiguous observable which would act as an experimentally accessible "order parameter" [1]. All proposed QGP signatures (strangeness enhancement, J/ψ suppression, dileptons, resonance shift and broadening, etc.) have already been observed in heavy ion collisions; however, we have seen that they also occur, to some extent, in p-p or p-A interactions where no QGP production is theoretically expected. The physical quantity needed should exhibit a uniform behavior within each phase, but should change when a critical point is reached in a phase transition. It has been suggested earlier [2-4] that the chemical potential of strange quarks may be the sought-for macroscopic, and therefore measurable, thermodynamic quantity. The case of [2+1] flavors was thoroughly studied and it was shown that the change in the sign of the strange quark chemical potential, from positive in the hadronic phase to
negative in the deconfined phase, may indeed be a unique indication of the deconfinement phase transition. Here we review the basic aspects of the model and present the [2+2] flavors version, which generalizes the model with the inclusion of the c-quark and charm hadrons.

2. Hadronic phase
Assuming that the system has attained thermal and chemical equilibration of four quark flavors (u, d, s, c), the partition function for the hadronic gas is written in the Boltzmann approximation:
ln Z_HG(T, V, λ_q, λ_s, λ_c) = ln Z_HG^{u,d} + ln Z_HG^{strange} + ln Z_HG^{charm}    (1)
where

ln Z_HG^{u,d} = Z_π + Z_N (λ_q³ + λ_q⁻³)    (2)

is the partition function for the non-strange sector; the analogous expressions, Eqs. (3) and (4), for the strange and charm sectors were not reproduced in this copy. The charm sector also includes strange/charm mesons and baryons that lead to a coupling of the fugacities λ_s, λ_c. For simplicity we have assumed isospin symmetry λ_u = λ_d ≡ λ_q, while the one-particle Boltzmann partition function is given by:

Z = Σ_j (g_j V / 2π²) T m_j² K₂(m_j/T)    (5)
The summation in Eq. (5) runs over the resonances of each hadron species with mass m_j, and the degeneracy factor g_j counts the spin and isospin degrees of freedom of the j-th resonance. For the strange hadron sector, kaons with masses up to 2045 MeV/c², hyperons up to 2350 MeV/c² and cascades up to 2025 MeV/c² are included, as well as the Ω⁻ states at 1672 MeV/c² and 2252 MeV/c². For the charm hadron sector, we include purely charm mesons D⁺, D⁻, D⁰ and baryons (Λ_c, Σ_c) as well as strange-charm mesons
(D_s) and baryons (Ξ_c, Ω_c) which contain both heavy quark flavors. All known charm resonances are taken into account, with masses up to 2.7 GeV/c². To derive the Equation of State (EOS) of the hadron gas phase we simultaneously impose flavor conservation,

⟨N_s − N_s̄⟩ = T (∂/∂μ_s) ln Z_HG(T, V, λ_q, λ_s, λ_c) = 0    (6)

⟨N_c − N_c̄⟩ = T (∂/∂μ_c) ln Z_HG(T, V, λ_q, λ_s, λ_c) = 0    (7)
which reduce to a set of coupled equations:
\[
Z_K(\lambda_q^{-1}\lambda_s - \lambda_q\lambda_s^{-1}) + Z_Y(\lambda_q^{2}\lambda_s - \lambda_q^{-2}\lambda_s^{-1}) + 2Z_{\Xi}(\lambda_q\lambda_s^{2} - \lambda_q^{-1}\lambda_s^{-2}) + 3Z_{\Omega}(\lambda_s^{3} - \lambda_s^{-3})
+ Z_{D_s}(\lambda_c^{-1}\lambda_s - \lambda_c\lambda_s^{-1}) + Z_{\Xi_c}(\lambda_q\lambda_s\lambda_c - \lambda_q^{-1}\lambda_s^{-1}\lambda_c^{-1}) + 2Z_{\Omega_c}(\lambda_s^{2}\lambda_c - \lambda_s^{-2}\lambda_c^{-1}) = 0 \qquad (8)
\]

\[
Z_D(\lambda_c\lambda_q^{-1} - \lambda_c^{-1}\lambda_q) + Z_{D_s}(\lambda_c\lambda_s^{-1} - \lambda_c^{-1}\lambda_s) + Z_{\Lambda_c}(\lambda_q^{2}\lambda_c - \lambda_q^{-2}\lambda_c^{-1})
+ Z_{\Xi_c}(\lambda_q\lambda_s\lambda_c - \lambda_q^{-1}\lambda_s^{-1}\lambda_c^{-1}) + Z_{\Omega_c}(\lambda_s^{2}\lambda_c - \lambda_s^{-2}\lambda_c^{-1}) = 0 \qquad (9)
\]
The above conditions define the relation between all quark fugacities and the temperature in the equilibrated primordial state. In the HG phase with finite net baryon number density, the chemical potentials μ_q, μ_s and μ_c are coupled through the production of strange and charm hadrons. Due to this coupling, μ_s, μ_c > 0 in the hadronic domain. A more elegant formalism describing the HG phase is the Strangeness-including Statistical Bootstrap Model (SSBM) [5,6]. It includes the hadronic interactions through the mass spectrum of all hadron species, in contrast to other, ideal hadron gas formalisms. The SSBM is applicable only within the hadronic phase, defining the limits of this phase. In the 3-flavor case, the hadronic boundary is given by the projection onto the 2-dimensional (T, μ_q) phase diagram of the intersection of the 3-dimensional bootstrap surface with the strangeness-neutrality surface (μ_s = 0). Note that the vanishing of μ_s on the HG borderline does not a priori suggest that μ_s = 0 everywhere beyond it; it only states that the condition μ_s = 0 characterizes the end of the hadronic phase. Figure 1 exhibits the hadronic boundary for two heavy quark flavors, obtained by imposing the conditions μ_s = 0 and μ_c = 0 on Eqs. (8), (9). Observe that there exists an intersection point, at T_int ≈ 130 MeV and μ_q ≈ 325 MeV. For an equilibrated primordial state (EPS) above this temperature, i.e. T > 130 MeV, and low μ_q values, we observe that as the temperature decreases, the condition μ_c = 0 is realized before the vanishing of μ_s (case I), whereas for T < 130 MeV and high μ_q, the opposite effect takes place (case II). This behavior may be of some importance towards
a possible experimental identification of a color superconducting phase, which is realized in the low temperature and high density region of the phase diagram (case II).
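The hadronic quantities Z_i entering the neutrality conditions are one-particle Boltzmann partition functions of the form of Eq. (5). A minimal numerical sketch (our own illustration, not the paper's code; natural units with ħ = c = 1, standard pion and nucleon masses inserted purely as example inputs) evaluates K₂ from its integral representation K₂(x) = ∫₀^∞ e^{−x cosh t} cosh(2t) dt:

```python
import math

def bessel_k2(x, n=2000, t_max=20.0):
    """Modified Bessel function K_2(x) from its integral representation,
    K_2(x) = integral_0^inf exp(-x cosh t) cosh(2t) dt, by the trapezoidal rule."""
    h = t_max / n
    total = 0.5 * (math.exp(-x)
                   + math.exp(-x * math.cosh(t_max)) * math.cosh(2.0 * t_max))
    for i in range(1, n):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(2.0 * t)
    return total * h

def z_one_particle(T, masses, degeneracies, V=1.0):
    """Eq. (5): Z = (V T / 2 pi^2) sum_i g_i m_i^2 K_2(m_i / T).
    T and masses in MeV; V is an arbitrary normalization volume here."""
    return V * T / (2.0 * math.pi ** 2) * sum(
        g * m ** 2 * bessel_k2(m / T)
        for m, g in zip(masses, degeneracies))
```

At T = 150 MeV the pion term dominates the nucleon term, as expected from the Boltzmann suppression factor contained in K₂(m/T).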
Figure 1. The critical curves μ_s = 0 and μ_c = 0. We distinguish two cases depending on the location of the equilibrated primordial state (EPS).
3. Chirally symmetric QGP phase
The partition function for a four-flavor quark-gluon plasma has the form

\[
\ln Z_{QGP}(T,V,\mu_q,\lambda_s,\lambda_c) = V\Big[\frac{37\pi^2}{90}T^3 + \mu_q^2 T + \frac{\mu_q^4}{2\pi^2 T}
+ \frac{g_s}{2\pi^2}(m_s^0)^2\, T\, K_2\!\big(\tfrac{m_s^0}{T}\big)(\lambda_s + \lambda_s^{-1})
+ \frac{g_c}{2\pi^2}(m_c^0)^2\, T\, K_2\!\big(\tfrac{m_c^0}{T}\big)(\lambda_c + \lambda_c^{-1})\Big] \qquad (10)
\]
where m_s⁰, m_c⁰ are the current strange and charm quark masses, respectively. Flavor conservation within the QGP phase yields λ_s = λ_c = 1, or

\[ \mu_s^{QGP}(T,\mu_q) = \mu_c^{QGP}(T,\mu_q) = 0 \qquad (11) \]

throughout this phase. Here the two order parameters, the Polyakov loop ⟨L⟩ and the scalar quark density ⟨ψ̄ψ⟩, have reached their asymptotic values. Note that the chirally symmetric quark-gluon plasma phase always corresponds to a vanishing heavy quark chemical potential.
4. Deconfined Quark Matter phase of [2+2] flavors
We argue that, beyond the hadronic phase, an intermediate domain of deconfined yet massive and correlated quarks arises, according to the following qualitative picture. The thermally and chemically equilibrated primordial state at finite baryon number density consists of the deconfined valence quarks of the participant nucleons, as well as of q-q̄ pairs created by quark and gluon interactions. Beyond but near the HG boundary, T ≳ T_d, the correlation-interaction between q and q̄ is near maximum, α_s(T) ≲ 1, a prelude to confinement into hadrons upon hadronization. With increasing temperature, the correlation of the deconfined quarks gradually weakens, α_s(T) → 0, as color mobility increases. The mass of all (anti)quarks depends on the temperature and scales in a prescribed way. The initially constituent mass decreases with increasing T > T_d, and as the DQM region goes asymptotically into the chirally symmetric QGP phase, T → T_c, the quarks attain their current masses. Thus, we expect the equation of state in the intermediate DQM region to approach the EoS of the hadronic phase, Eq. (1), at T ≈ T_d, and the EoS of the QGP, Eq. (10), as T → T_c. In order to construct an empirical partition function for the description of the DQM phase, we use (a) the Polyakov loop ⟨L⟩ ∼ e^{−F_q/T}, with R_d(T ≥ T_d) = 0 → 1 as T = T_d → T_c, and (b) the scalar density ⟨ψ̄ψ⟩, with R_χ(T ≥ T_d) = 1 → 0 as T = T_d → T_c. The first describes quark deconfinement, while the latter is associated with the quark mass scaling. We assume that above the deconfinement temperature quarks retain some degree of correlation and can be considered as hadron-like states; therefore, near T_d a hadronic formalism may still be applicable. This correlation/interaction gradually weakens as a result of the progressive increase of color mobility. Each quark mass scales, decreasing from the constituent value to the current one, as we reach the chiral symmetry restoration temperature (T → T_c).
Thus, we consider a temperature dependent mass for each quark flavor, approximated by

\[ m_f(T) = R_{\chi}(T)\,(m_f^C - m_f^0) + m_f^0 \qquad (12) \]

where m_f^C and m_f⁰ are the constituent and current quark masses, respectively (the values m_u⁰ = 5 MeV, m_d⁰ = 9 MeV, m_s⁰ = 170 MeV, m_c⁰ = 1.1 GeV have been used). In the same spirit, we approximate the effective hadron-like mass by

\[ m_i^*(T) = R_{\chi}(T)\,(m_i - m_i^0) + m_i^0 \qquad (13) \]

where m_i is the mass of each hadron in the hadronic phase and m_i⁰ is equal to the sum of the current masses of the hadron's quarks (for example
m_K⁰ = 175 MeV, m_Ξ⁰ = 350 MeV). In the partition function of the DQM phase, the former scaling is employed through the mass-scaled QGP partition function lnZ*_QGP, where all quark mass terms are given by Eq. (12), while the latter is used in the mass-scaled hadronic partition function lnZ*_HG, where all hadron mass terms are given by Eq. (13). Employing the described dynamics, we construct an empirical partition function for the DQM phase,

\[ \ln Z_{DQM}(V,T,\{\lambda_f\}) = [1 - R_d(T)]\,\ln Z^{*}_{HG}(V,T,\{\lambda_f\}) + R_d(T)\,\ln Z^{*}_{QGP}(V,T,\{\lambda_f\}), \qquad (f = q, s, c) \]
The factor [1 − R_d(T)] describes the weakening of the interaction of the deconfined quarks, while the factor R_d(T) can be associated with the increase of color mobility as we approach the chirally symmetric QGP phase. The DQM partition function is a linear combination of the HG and QGP mass-scaled partition functions, together with the general demand that it describe both confinement and chiral symmetry restoration asymptotically. Note that below the deconfinement critical point, T < T_d, R_d(T) = 0, leading to lnZ_DQM = lnZ_HG (with constituent quarks), whereas at the chiral symmetry restoration temperature, T ≈ T_c, R_d(T) = 1 and lnZ_DQM = lnZ_QGP (with current quark masses). In order to acquire the EoS of the DQM phase, we impose again the strangeness and charm neutrality conditions, leading respectively to the set of equations
\[
[1 - R_d(T)]\big[Z^{*}_K(\lambda_q^{-1}\lambda_s - \lambda_q\lambda_s^{-1}) + Z^{*}_Y(\lambda_q^{2}\lambda_s - \lambda_q^{-2}\lambda_s^{-1}) + 2Z^{*}_{\Xi}(\lambda_q\lambda_s^{2} - \lambda_q^{-1}\lambda_s^{-2}) + 3Z^{*}_{\Omega}(\lambda_s^{3} - \lambda_s^{-3})
+ Z^{*}_{D_s}(\lambda_c^{-1}\lambda_s - \lambda_c\lambda_s^{-1}) + Z^{*}_{\Xi_c}(\lambda_q\lambda_s\lambda_c - \lambda_q^{-1}\lambda_s^{-1}\lambda_c^{-1}) + 2Z^{*}_{\Omega_c}(\lambda_s^{2}\lambda_c - \lambda_s^{-2}\lambda_c^{-1})\big]
+ R_d(T)\, g_s\, m_s^{*2}\, K_2\!\big(\tfrac{m_s^*}{T}\big)(\lambda_s - \lambda_s^{-1}) = 0 \qquad (14)
\]

and

\[
[1 - R_d(T)]\big[Z^{*}_D(\lambda_c\lambda_q^{-1} - \lambda_c^{-1}\lambda_q) + Z^{*}_{D_s}(\lambda_c\lambda_s^{-1} - \lambda_c^{-1}\lambda_s) + Z^{*}_{\Lambda_c}(\lambda_q^{2}\lambda_c - \lambda_q^{-2}\lambda_c^{-1})
+ Z^{*}_{\Xi_c}(\lambda_q\lambda_s\lambda_c - \lambda_q^{-1}\lambda_s^{-1}\lambda_c^{-1}) + Z^{*}_{\Omega_c}(\lambda_s^{2}\lambda_c - \lambda_s^{-2}\lambda_c^{-1})\big]
+ R_d(T)\, g_c\, m_c^{*2}\, K_2\!\big(\tfrac{m_c^*}{T}\big)(\lambda_c - \lambda_c^{-1}) = 0 \qquad (15)
\]
which must be solved simultaneously. Note that because of the strange-charm hadrons D_s, Ξ_c, Ω_c there exists a coupling between the heavy quark fugacities λ_s, λ_c. By solving the above equations for a given chemical potential μ_q, we derive the variation of the strange and charm quark chemical potentials with temperature in the phase diagram.
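The mass scaling of Eqs. (12)-(13) is straightforward to sketch numerically. The linear form of R_χ(T) and the endpoint temperatures T_d = 175 MeV, T_χ = 350 MeV used below are illustrative assumptions of ours; the text fixes only the boundary behavior R_χ(T_d) = 1 and R_χ → 0 at chiral symmetry restoration:

```python
def r_chi(T, T_d=175.0, T_chi=350.0):
    """Scalar-density scaling factor: 1 at T_d (constituent masses),
    0 at T_chi (current masses); linear in between (an assumed form)."""
    if T <= T_d:
        return 1.0
    if T >= T_chi:
        return 0.0
    return (T_chi - T) / (T_chi - T_d)

def quark_mass(T, m_constituent, m_current):
    """Eq. (12): m_f(T) = R_chi(T) (m_f^C - m_f^0) + m_f^0, masses in MeV."""
    return r_chi(T) * (m_constituent - m_current) + m_current
```

The same interpolation applied to a hadron mass and the sum of its quarks' current masses reproduces Eq. (13).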
5. Results for finite chemical potential

In the case of [2+1] flavors and finite density, we had neglected all terms involving c-quarks (λ_c = 1). In this case only the variation of μ_s was considered, and Figure 2 was derived. We observe that the strange quark chemical potential attains positive values in the hadronic phase, becomes zero upon deconfinement, grows strongly negative in the DQM domain and finally returns to zero as the QGP phase is approached. It is important that μ_s behaves differently in each phase, as this is what we have been looking for from the beginning in the search for an experimentally accessible "order parameter". The change in the sign of μ_s, from positive in the hadronic phase to negative in the deconfined one, is an unambiguous indication of the quark deconfinement phase transition, as it is independent of assumptions regarding interaction mechanisms. In the case of [2+2] flavors the situation is slightly modified. Figure 3 exhibits the variation of the two correlated heavy quark chemical potentials with the temperature of the primordial state, as given by Eqs. (14), (15). We observe that both are initially positive and then grow negative, although the change of sign is realized at different temperatures: for example, μ_s = 0 at T_s⁰ ≈ 150 MeV, while μ_c = 0 at T_c⁰ ≈ 215 MeV, for a fixed value of the fugacity λ_q = 0.48. However, this difference can be easily understood if we consider Figure 3 in the framework of Figure 1. As already discussed in Sec. 2, for an equilibrated primordial state (EPS) with T > T_int and sufficiently low μ_q, μ_c becomes zero earlier than μ_s as the system approaches hadronization (see Figure 1). This is the reason why T_c⁰ > T_s⁰ in Figure 3. For sufficiently high μ_q values and low temperatures, the opposite effect is present, i.e. μ_c changes its sign at a lower temperature than the strange quark chemical potential. The magnitude of the difference |T_c⁰ − T_s⁰| will depend on the exact location of the state in the phase diagram.
The fact that μ_s and μ_c vanish at different temperatures, at the end of the respective hadronic domain, has further consequences, as it implies that there exists a quark "deconfinement region" rather than a sharp critical line.
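The positive sign of μ_s in the hadronic domain can be made concrete with a two-term toy version of the strangeness neutrality condition, keeping only the kaon and hyperon terms of Eq. (8). The numerical values of Z_K, Z_Y below are arbitrary illustrative inputs, not fitted ones; the condition is monotone in λ_s, so a bisection (here in log space) converges and can be checked against the closed form:

```python
import math

def strangeness_balance(lam_s, lam_q, Z_K, Z_Y):
    """Net strangeness of a kaon + hyperon gas (first two terms of Eq. (8))."""
    return (Z_K * (lam_s / lam_q - lam_q / lam_s)
            + Z_Y * (lam_q ** 2 * lam_s - 1.0 / (lam_q ** 2 * lam_s)))

def solve_lambda_s(lam_q, Z_K, Z_Y, lo=1e-6, hi=1e6, tol=1e-12):
    """Bisect (in log space) for the root of the monotone balance condition."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if strangeness_balance(mid, lam_q, Z_K, Z_Y) > 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol * lo:
            break
    return math.sqrt(lo * hi)

def lambda_s_closed_form(lam_q, Z_K, Z_Y):
    """Exact solution of the truncated condition for lambda_s."""
    return math.sqrt((Z_K * lam_q + Z_Y / lam_q ** 2)
                     / (Z_K / lam_q + Z_Y * lam_q ** 2))
```

For λ_q > 1 (finite baryon density) and a kaon-dominated gas the solution has λ_s > 1, i.e. μ_s > 0, reproducing the hadronic-phase behavior described above; at λ_q = 1 the condition gives λ_s = 1 (μ_s = 0) identically.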
6. Experimental data
Over the last years, data from several nucleus-nucleus collisions have been analyzed within thermal statistical models, employing the canonical and grand-canonical formalisms [7-11]. Table 1 summarizes some of the results for the quantities T, μ_q and μ_s, which have been deduced from fits to the experimental data. Figure 4 shows the phase diagram with the
Figure 2. Variation of μ_s with temperature in the case of [2+1] quark flavors, for different approximations or parameterizations of the order parameter R_d(T).
Figure 3. Plot of the strange and charm quark chemical potentials in the phase diagram for λ_q = 0.48. Notice that the change in their sign is realized at different temperatures.
Ideal Hadron Gas (IHG) and SSBM μ_s = 0 lines, as well as the location of the mean (T, μ_q) values obtained for every collision. We observe that all interactions studied are consistently situated inside the hadronic phase,
defined by the IHG model, and exhibit positive μ_s. The sulfur-induced interactions, however, are situated slightly beyond the hadronic phase defined by the SSBM. IHG calculations exhibit deviations from the SSBM as we approach the critical deconfinement point T = T_d ≈ 175 MeV, where the S-S and S-Ag interactions are roughly located. Within the IHG model the condition μ_s = 0 is satisfied at a higher temperature (T ≈ 200 MeV), extending the hadronic phase to a larger region, as can be seen in Figure 4. As a consequence, μ_s also changes sign at a higher temperature, and this is the reason why μ_s > 0 in the analysis of [11], although a temperature above deconfinement (according to the SSBM) is obtained. Therefore, an adjustment of the IHG curve to the SSBM boundary and a new fit to the data are necessary [12]. The data from RHIC at √s = 130, 200 AGeV are not included in our discussion, since at such high energies μ_q is very small and μ_s ≈ 0 throughout the phase diagram. The observation of a negative heavy quark chemical potential requires a finite baryon density system.
Figure 4. (T, μ_q) values of several interactions and their location in the phase diagram. The lines correspond to the hadronic boundary within the SSBM and IHG models.
7. Conclusions

On the basis of the present analysis, we conclude that the heavy quark chemical potentials behave differently in each region (HG-DQM-QGP) of the phase diagram and, therefore, they can serve as a probe of the phase
Table 1. Deduced values of T, μ_q and μ_s (in MeV) from several thermal models and fits to experimental data for several interactions.

Interaction/Experiment       Reference    T          μ_q        μ_s
Si+Au (14.6 AGeV)/E802       [4]          134±6      176±12     66±10
                             [9]          135±4      194±11
                             Mean         135±3      182±5      66±10
Pb+Pb (158 AGeV)/NA49        [8]          146±9      74±6       23±2
                             [9]          158±3      79±4       22±3
                             [7]          157±4      81±7       25±4
                             Mean         157±3      78±3
Pb+Pb (40 AGeV)/NA49         [1]          147±13     136±14
                             [*]          150±8      132±7
                             Mean         149±9      134±8      35±4
S+S (200 AGeV)/NA35          [10]         182±9      75±6       14±14
                             [11]         181±11     73±7       17±6
                             [8]          202±13     87±7       16±7
                             Mean         188±6      78±4
S+Ag (200 AGeV)/NA35         [10]         180±3      79±4       14±4
                             [11]         179±3      81±6       16±5
                             [8]          185±8      81±7       16±8
                             Mean         181±4      80±3
*NA49 private communication
transitions. This is the first proposal of such an experimentally accessible "order parameter" that holds for a finite baryon density state. The appearance of negative values of μ_s and μ_c is a well-defined indication of the quark deconfinement phase transition at T = T_d, which is free of ambiguities related to microscopic effects of the interactions. It is important to add that the observation of negative heavy quark chemical potentials would also be clear evidence for the existence of the proposed DQM phase, meaning that chiral symmetry restoration and deconfinement are separated at finite density. Until now, there is no known argument from QCD that the two transitions actually occur at the same temperature. Au+Au collisions at intermediate energies, for example in the range 30-90 AGeV, should be performed to test our proposals experimentally.
Acknowledgments
P. Katsas is grateful to the organizing committee for the opportunity to participate in the conference. This work was supported in part by the Research Secretariat of the University of Athens.

References
1. H. Satz, Nucl. Phys. Proc. Suppl. 94, 204 (2001).
2. A. D. Panagiotou, G. Mavromanolakis and G. Tzoulis, Heavy Ion Physics 4, 347 (1997).
3. A. D. Panagiotou and P. G. Katsas, to appear in J. Phys. G.
4. A. D. Panagiotou, P. G. Katsas and E. Gerodimou, J. Phys. G28, 2079 (2002).
5. A. S. Kapoyannis, C. N. Ktorides and A. D. Panagiotou, Phys. Rev. D58, 034009 (1998).
6. A. S. Kapoyannis, C. N. Ktorides and A. D. Panagiotou, Eur. Phys. J. C14, 299 (2000).
7. A. S. Kapoyannis, C. N. Ktorides and A. D. Panagiotou, to appear in J. Phys. G (2002).
8. J. Sollfrank, J. Phys. G23, 1903 (1997).
9. F. Becattini, J. Cleymans and K. Redlich, Phys. Rev. C64, 024901 (2001).
10. F. Becattini, J. Phys. G23, 1933 (1997).
11. F. Becattini, M. Gazdzicki and J. Sollfrank, Eur. Phys. J. C5, 143 (1998).
12. T. Gountras, N. Davis, P. Katsas and A. D. Panagiotou, work in progress.
GAP ANALYSIS FOR CRITICAL FLUCTUATIONS
RUDOLPH C. HWA

Institute of Theoretical Science and Department of Physics, University of Oregon, Eugene, OR 97403-4603, USA

If hadronization in heavy-ion collisions involves a quark-hadron phase transition of the second order, then one expects correlations at all length scales as a manifestation of the characteristic feature of critical phenomena. In two-dimensional η-φ space one should see clustering of hadrons of all sizes. When only a narrow strip is chosen in that space, the clustering creates gaps where no particles are emitted in the 1D window. We discuss a method of analysis to quantify the fluctuation of those gaps. Using the Ising model to simulate the critical fluctuation, a power law is found that characterizes the phase transition. How the method can be applied to the heavy-ion data is discussed.
The aim of heavy-ion collisions at high energies is to create quark-gluon plasma. To create such a quark-gluon system is not simply a matter of deconfinement, since hadrons are deconfined even in pp collisions and partons are momentarily liberated before hadronization. By quark-gluon plasma one means a thermalized system of quarks and gluons. Since thermalization takes some time to occur, however short, one cannot apply the conventional statistical notion of phase transition to the violent deconfinement process. One way to test the existence of the plasma is to study not its beginning, but its end. If the hadronization of a thermalized plasma is a second-order phase transition, or a smooth cross-over nearby, then there should be footprints of the phenomenon in the patterns of produced particles that fluctuate from event to event. I present here a simple method to detect such patterns and propose a measure that can quantify the critical fluctuation [1]. In previous meetings of this series of Workshops I have discussed various possible signatures of critical behavior in heavy-ion collisions. In Torino 2000 the use of void analysis was presented [2]. Since the physical basis of that analysis is the same as that of the gap analysis to be described here, let me review the scenario that is assumed for the collision process [3,4], so that the common background is clear. Consider a central A-A collision that
creates an expanding cylinder of dense matter, expanding mostly in the longitudinal direction, but also radially at a much lower rate. Assuming that the matter is a thermalized quark-gluon plasma, it is hot in the interior and cooler toward the surface due to the radial expansion. The surface of the plasma is defined by the point where the temperature has cooled down to the critical T_c, beyond which hadrons are formed. Thus the phase transition that we are looking for takes place on the surface. The question is: are hadrons formed uniformly on that surface? There are many examples of critical systems where fluctuations are large at T_c. At the critical point the system is in severe competition between the random and collective forces. One should expect the same phenomenon on the surface of the plasma cylinder, in the manifestation of hadronic clusters and voids. This can be simulated by the 2D Ising model, whose universality class is the one that the QCD system belongs to in a realistic range of quark masses. The issue that we want to address is the search for a measure that can quantify the fluctuations. There are two types of fluctuations involved here. One is the fluctuation from a uniform spatial distribution of hadrons in the η-φ plane, i.e., a pattern of clusters and voids, in each event. The other is the fluctuation of such patterns from event to event. The study of the fluctuations of spatial patterns has general utility, far beyond the study of critical behavior. By using the factorial moments as a measure of the spatial fluctuation in any given event, we have studied the fluctuations of those moments in an analysis called erraticity, to quantify chaoticity and criticality [5]. On a 2D space it is more convenient to study directly the void sizes [4].
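A minimal Metropolis simulation of the 2D Ising model of the kind invoked above might look as follows (a generic textbook sketch, not the author's code; temperature in units of the coupling J, with k_B = 1):

```python
import math
import random

def ising_sweep(spins, T, rng):
    """One Metropolis sweep of the 2D Ising model on an L x L periodic lattice."""
    L = len(spins)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nn          # energy cost of flipping spin (i, j)
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

def magnetization(spins):
    """Absolute magnetization per site."""
    L = len(spins)
    return abs(sum(sum(row) for row in spins)) / (L * L)
```

Near the exact critical temperature T_c = 2/ln(1+√2) ≈ 2.269 the spin configurations develop clusters on all scales; mapping lattice cells onto the η-φ strip and thresholding the local density then produces the gap patterns analyzed in the text.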
If a heavy-ion experiment has good coverage of the azimuthal angle φ, then it is even simpler to focus attention on a narrow strip in y (at midrapidity) and a very narrow window in p_T (for 200 < p_T < 210 MeV, say), so that one can study the limited number of particles in φ over a range of π/2, or π, or 2π. In this 1D space the exact positions of the particles can be precisely determined, and not more than 20 particles, say, enter into our consideration. In such a scenario one can apply the gap analysis to be described below, the details of which can be found in [1]. The idea is simple. First, transform the φ variable to the cumulative variable X, which varies from 0 to 1 and in which the average particle density is uniform. Next, define x_i to be the gap in X between the neighboring particles at X_i and X_{i+1}, i = 1, ..., N, with X_1 = 0 and X_{N+1} = 1 by
definition. Thus
\(\sum_{i=1}^{N} x_i = 1\). Then define the moments
for positive values (integers) of q. The set {G_q} with 2 ≤ q ≤ Q, Q being some number less than 10, say, is our quantification of the event structure. The set fluctuates from event to event, especially at the critical point. The gap analysis is to use {G_q} as a basis to construct a measure that can be examined for possibilities of critical behavior. To that end, define

\[ s_q = \langle G_q \ln G_q \rangle \qquad (2) \]

where the average is performed over all events. Let \(s_q^{st}\) be the statistical contribution to s_q, i.e., letting G_q be replaced by the moments \(G_q^{st}\) of the statistically simulated particle distribution in X. Then define

\[ S_q = s_q / s_q^{st}. \qquad (3) \]
This is our measure, whose nontrivial dependence on q reveals the nature of the dynamical fluctuations in the system. Before discussing the properties of critical fluctuation, it is pertinent to mention the properties of S_q that have already been found experimentally in hadronic collisions. In analyzing the NA22 data, the Wuhan group has a preliminary result on S_q that was shown to me at this Workshop and was presented in my talk [7]. It is a gap analysis of particles in rapidity space for pp collisions at √s = 22 GeV. The result shows a straight-line behavior in ln S_q vs ln q, thus indicating a power law

\[ S_q \propto q^{\alpha} \qquad (4) \]
where α = 0.319 ± 0.015. An analysis of simulated PYTHIA events has also been done, and the straight-line behavior yields a value of α = 0.118 ± 0.005. The NA22 result on α is higher than those of both PYTHIA and ECOMB, which has α = 0.156 [8]. Evidently, the models are inadequate in describing the dynamical fluctuations, and the analysis has proven to be discriminating. Returning to the problem of critical fluctuation, let me just summarize the result of using the 2D Ising model to simulate the clusters and voids in the η-φ space and then doing the gap analysis in a narrow strip along the φ variable. Fig. 1 shows the log-log plot of S_q vs q, revealing a power law as in Eq. (4) with α = 1.67. In the analysis a threshold density ρ_0 = 20
Figure 1. Power-law behavior of S_q at T_c.
is used to define a void cell, in the sense that a cell (in the Ising lattice, with 4×4 sites per cell) having a density ρ < ρ_0 is regarded as a void. One may regard ρ_0 as a control parameter related to the Δη Δp_T window in an experiment. In place of ρ_0 we can use the average number of gaps ⟨M⟩ as an alternative control parameter that depends on ρ_0. The advantage is that ⟨M⟩ is directly measurable. As ρ_0 is varied, both S_q and ⟨M⟩ change, but a power-law behavior such as that in Eq. (4) always exists, though with varying α. The value of α depends on ⟨M⟩ linearly, as shown in Fig. 2. We thus have the result

\[ \alpha = \alpha_0 + \xi \langle M \rangle \qquad (5) \]

where α_0 = −0.258 is not as important as the value of the slope

\[ \xi = 0.055. \qquad (6) \]
Figure 2. The dependence of the index α on the average number of gaps at T_c.
The index ξ is a numerical output that depends on no numerical input in this Ising problem. It is perhaps the most succinct characterization of the critical phenomenon, besides the critical exponents. The latter depend on the temperature of a critical system near T_c, but T is not directly measurable in heavy-ion collisions. Here we have an index ξ that is eminently measurable and is the only numerical constant that can be meaningfully associated with critical fluctuation. It is not difficult to make cuts in Δp_T and Δy in the heavy-ion experiments to limit the average number of particles in a narrow strip in φ. If those particles are found not to be uniformly distributed in φ for every event, then it is a sign that some dynamical fluctuations are at play, and the gap analysis proposed here is a way to quantify those fluctuations. If nontrivial values of the exponent α and the index ξ are found to exist, one can envision a rich variety of phenomenological studies of what other kinematically controllable parameters they may depend upon, providing valuable information about the quark-gluon system. The application of the
gap analysis to the heavy-ion data is therefore strongly urged. I am grateful to Q. H. Zhang for his collaboration in this work. This work was supported, in part, by the U. S. Department of Energy under Grant No. DE-FG03-96ER40972.
References
1. R. C. Hwa and Q. H. Zhang, Phys. Rev. C 66, 014904 (2002).
2. R. C. Hwa, Nucl. Phys. B (Proc. Suppl.) 92, 173 (2001).
3. R. C. Hwa and Y. F. Wu, Phys. Rev. C 60, 054904 (1999).
4. R. C. Hwa and Q. H. Zhang, Phys. Rev. C 62, 054902 (2000); 64, 054904 (2001).
5. R. C. Hwa, Acta Phys. Pol. B 27, 1789 (1996); Z. Cao and R. C. Hwa, Phys. Rev. D 61, 074011 (2000).
6. Z. Cao and R. C. Hwa, Phys. Rev. E 56, 326 (1997).
7. Y. F. Wu and Y. T. Bai (private communication).
8. R. C. Hwa and Q. H. Zhang, Phys. Rev. D 62, 014003 (2000).
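The gap measures of Eqs. (2)-(3) can be sketched compactly. The printed definition of the moments G_q did not survive reproduction above; the sketch below takes G_q as the sum of the q-th powers of the gaps, one plausible reading, and in practice the statistical reference sample would come from event mixing or uniform simulation:

```python
import math

def gap_moments(positions, q_max=6):
    """G_q for one event: gaps x_i of the ordered points in the cumulative
    variable X in [0, 1], with X_1 = 0 and X_{N+1} = 1 appended by definition.
    Here G_q = sum_i x_i**q (an assumed reading of the moment definition)."""
    xs = [0.0] + sorted(positions) + [1.0]
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return {q: sum(g ** q for g in gaps) for q in range(2, q_max + 1)}

def S_q(events, stat_events, q_max=6):
    """s_q = <G_q ln G_q> averaged over events (Eq. (2));
    S_q = s_q / s_q^stat (Eq. (3))."""
    def s(evts):
        acc = dict.fromkeys(range(2, q_max + 1), 0.0)
        for ev in evts:
            for q, G in gap_moments(ev, q_max).items():
                acc[q] += G * math.log(G)
        return {q: v / len(evts) for q, v in acc.items()}
    s_dyn, s_stat = s(events), s(stat_events)
    return {q: s_dyn[q] / s_stat[q] for q in s_dyn}
```

A straight-line fit of ln S_q against ln q then yields the exponent α of Eq. (4).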
Session on Complexity and Strong Interactions Chairperson: R. C. Hwa
TURBULENT FIELDS AND THEIR RECURRENCES

PREDRAG CVITANOVIĆ AND YUEHENG LAN

Center for Nonlinear Science, School of Physics, Georgia Institute of Technology, Atlanta 30332-0430, U.S.A.
E-mail: @physics.gatech.edu

We introduce a new variational method for finding periodic orbits of flows and spatio-temporally periodic solutions of classical field theories, a generalization of the Newton method to a flow in the space of loops. The feasibility of the method is demonstrated by its application to several dynamical systems, including the Kuramoto-Sivashinsky system.
1 Introduction
Chaos is the norm for generic Hamiltonian flows, and for path integrals that implies that instead of a few, or countably many, extremal configurations, classical solutions populate fractal sets of saddles. For the path-integral formulation of quantum mechanics such solutions were discovered by Gutzwiller, who derived a trace formula that relates a semi-classical approximation of the energy eigenspectrum to the classical periodic solutions. While the theory has worked very well in quantum mechanical applications, these ideas remain largely unexplored in quantum field theory. The classical solutions for most strongly nonlinear field theories are nothing like the harmonic oscillator degrees of freedom, the electrons and photons of QED; they are unstable and highly nontrivial, accessible only by numerical techniques. The new aspect, prerequisite to a semi-classical quantization of strongly nonlinear field theories, is the need to determine a large number of spatio-temporally periodic solutions for a given classical field theory. Why periodic? The dynamics of strongly nonlinear classical fields is turbulent, not "laminar", and how are we to think about turbulent dynamics? Hopf and Spiegel [4,5,6] have proposed that the turbulence in spatially extended systems be described in terms of recurrent spatiotemporal patterns. Pictorially, dynamics drives a given spatially extended system through a repertoire of unstable patterns; as we watch a turbulent system evolve, every so often we catch a glimpse of a familiar pattern. For any finite spatial resolution, the system follows approximately, for a finite time, a pattern belonging to a finite alphabet of admissible patterns, and the long-term dynamics can be thought of as a walk through the space of such patterns, just as chaotic dynamics with a low-dimensional attractor can be thought of as a succession of nearly periodic (but unstable) motions.
So periodic solutions are needed both to quantify "turbulence" in classical field theory, and as a starting point for the semi-classical quantization of a quantum field theory. There is a great deal of literature on numerical periodic orbit searches. Here we take as the starting point the Cvitanović et al. webbook, and in Sec. 2 briefly review the Newton-Raphson method for low-dimensional flows described by ordinary differential equations (ODEs), in order to motivate the Newton descent approach that we shall use here, and show that it is equivalent to a cost function minimization
method. The problem one faces with high-dimensional flows is that their topology is hard to visualize, and that even with a decent starting guess for a point on a periodic orbit, methods like the Newton-Raphson method are likely to fail. In Sec. 3 we describe a new method for finding spatio-temporally periodic solutions of extended, infinite-dimensional systems described by partial differential equations (PDEs), and in Sec. 4 we discuss a simplification of the method specific to Hamiltonian flows. The idea is to make an informed rough guess of what the desired periodic orbit looks like globally, and then use variational methods to drive the initial guess toward the exact solution. Sacrificing computer memory for robustness of the method, we replace a guess that a point is on the periodic orbit by a guess of the entire orbit. And, sacrificing speed for safety, we replace the Newton-Raphson iteration by the Newton descent, a differential flow that minimizes a cost function computed as the deviation of the approximate flow from the true flow along a smooth loop approximation to a periodic orbit. In Sec. 5 the method is tested on several systems, both infinite-dimensional and Hamiltonian, and its virtues, shortcomings and future prospects are discussed in Sec. 6.

2 Periodic orbit searches
A periodic orbit is a solution (x, T), x ∈ R^d, T ∈ R, of the periodic orbit condition

\[ f^{T}(x) = x, \qquad T > 0 \qquad (1) \]

for a given flow or mapping x → f^t(x). Our goal here is to determine periodic orbits of flows defined by first order ODEs

\[ \frac{dx}{dt} = v(x), \qquad x \in \mathcal{M} \subset \mathbb{R}^d, \quad (x, v) \in T\mathcal{M} \qquad (2) \]
in many (even infinitely many) dimensions d. Here M is the phase space (or state space) in which the evolution takes place, TM is the tangent bundle, and the vector field v(x) is assumed smooth (sufficiently differentiable). A prime cycle p of period T_p is a single traversal of the orbit. A cycle point of a flow which crosses a Poincaré section n_p times is a fixed point of the n_p-th iterate of the Poincaré section return map f, hence one often refers to a cycle as a "fixed point". By cyclic invariance, stability eigenvalues and the period of the cycle are independent of the choice of an initial point, so it suffices to solve Eq. (1) at a single cycle point. Our task is thus to find a cycle point x ∈ p and the shortest time T_p for which Eq. (1) has a solution. If the cycle is an attracting limit cycle with a sizable basin of attraction, it can be found by integrating the flow for a sufficiently long time. If the cycle is unstable, simple integration forward in time will not reveal it, and the methods to be described here need to be deployed. In essence, any method for numerically solving the periodic orbit condition F(x) = x − f^T(x) = 0 is based on devising a new dynamical system which possesses the same cycle, but for which this cycle is attractive. Beyond that, there is great freedom in constructing such systems, and many different methods are used in practice.
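For a map, the periodic orbit condition Eq. (1) reads f^n(x) = x, and the simplest such "new dynamical system" is the Newton iteration itself. As a minimal illustration (our own toy example, not from the text), one can locate the period-2 orbit of the logistic map f(x) = r x(1 − x) at r = 3.2, for which the cycle points are known in closed form, x± = (r + 1 ± √((r − 3)(r + 1)))/(2r):

```python
import math

def logistic(x, r=3.2):
    return r * x * (1.0 - x)

def cycle_condition(x, n, r=3.2):
    """F(x) = f^n(x) - x, the periodic orbit condition Eq. (1) for a map."""
    y = x
    for _ in range(n):
        y = logistic(y, r)
    return y - x

def newton_cycle(x0, n, r=3.2, steps=50, h=1e-7):
    """Newton-Raphson on F(x) = f^n(x) - x with a finite-difference F'."""
    x = x0
    for _ in range(steps):
        F = cycle_condition(x, n, r)
        dF = (cycle_condition(x + h, n, r) - F) / h
        x -= F / dF
    return x
```

Starting from x = 0.8 the iteration converges in a few steps to the cycle point x₊ ≈ 0.7995, which is not a fixed point of f itself.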
Figure 1. Newton method: a bad initial guess x^(b) leads to a next guess far away from the desired zero of F(x). A sequence x^(m), x^(m+1), ..., starting with a good guess, converges super-exponentially to x*.
2.1 Newton Descent in 1 Dimension Newton's method for determining a zero x* of a function F ( z ) of one variable is based on a linearization around a starting guess do):
F ( z ) M F ( z ( 0 ) )+ F'(x'O))(x - x ( 0 ) ) .
(3)
An improved approximate solution x^(1) of F(x) = 0 is then x^(1) = x^(0) − F(x^(0))/F'(x^(0)). Provided that the mth guess is sufficiently close to x* (see Fig. 1), the Newton iteration

x^(m+1) = x^(m) − F(x^(m))/F'(x^(m))    (4)
converges to x* super-exponentially fast. In order to avoid jumping too far from the desired x*, one often initiates the search by the damped Newton method,
Δx^(m) = x^(m+1) − x^(m) = − [F(x^(m))/F'(x^(m))] Δτ ,    0 < Δτ ≤ 1 ,

which takes small Δτ steps at the beginning, reinstating the full Δτ = 1 jumps only when sufficiently close to the desired x*. Let us now take the extremely cautious approach of keeping all steps infinitesimally small, and replace the discrete sequence x^(m), x^(m+1), ... by the fictitious time τ flow x = x(τ):

dx/dτ = − F(x)/F'(x) .    (5)
If a simple zero, F'(x*) ≠ 0, exists in any given monotone lap of F(x), it is the attractive fixed point of the flow Eq. (5).
While reminiscent of "gradient descent" methods, this is a flow rather than an iteration. For lack of established nomenclature we shall refer to this method of searching for zeros of F(x) as the Newton descent, and now motivate it by rederiving it from a minimization principle. Rewriting Eq. (5) in terms of a "cost function" F(x)² [7,10],

dτ = − [F'(x)/F(x)] dx = − (1/2) [d ln F(x)²/dx] dx ,

and integrating, we find that the deviation of F(x) from F(x*) = 0 decays exponentially with the fictitious time,

F(x(τ)) = F(x(0)) e^{−τ} ,    (6)

with the fixed point x* = lim_{τ→∞} x(τ) reached at an exponential rate. In other words, the Newton descent, derived here as an overcautious version of the damped Newton method, is a flow that minimizes the cost function F(x)².
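The one-dimensional Newton descent is easy to check numerically. The sketch below is ours, with illustrative choices not taken from the text: F(x) = x − cos x, and explicit Euler integration of Eq. (5); the flow converges to the simple zero x*, with F(x(τ)) decaying roughly as e^{−τ}, Eq. (6).

```python
import math

def newton_descent_1d(F, dF, x0, dtau=0.01, tau_max=10.0):
    """Euler integration of the Newton descent flow dx/dtau = -F(x)/F'(x),
    Eq. (5); F(x(tau)) then decays ~ e^{-tau} toward the zero x*, Eq. (6)."""
    x = x0
    for _ in range(int(tau_max / dtau)):
        x -= F(x) / dF(x) * dtau
    return x

# Illustrative choice of function (ours, not from the text):
F  = lambda x: x - math.cos(x)          # simple zero near x = 0.739
dF = lambda x: 1.0 + math.sin(x)

x_star = newton_descent_1d(F, dF, x0=1.0, tau_max=20.0)
print(x_star, F(x_star))
```

With the step increased to Δτ = 1 the loop above reduces to the full Newton iteration Eq. (4), recovering super-exponential convergence.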
2.2 Multi-dimensional Newton Descent

Due to the exponential divergence of nearby trajectories in chaotic dynamical systems, fixed point searches based on direct solution of the fixed-point condition Eq. (1) as an initial value problem can be numerically very unstable. Methods that start with initial guesses for a number of points along the cycle are considerably more robust and safer. Hence we consider next a set of periodic orbit conditions
F_i(x) = x_i − f_i(x) = 0 ,    x ∈ R^{dn} ,    (7)

where the periodic orbit traverses n ∈ {1, 2, ...} Poincaré sections (multipoint shooting method [7,8]), f(x) is the Poincaré return map from one section to the next, and the index i runs over dn values, that is, d dimensions for each Poincaré section crossing. In this case the expansion Eq. (3) yields the Newton-Raphson iteration

x^(m+1) = x^(m) − [1/(1 − J(x^(m)))] F(x^(m)) ,    (8)

where J(x) is the [dn × dn] Jacobian matrix of the map f(x). The Newton descent method Eq. (5) now takes the form

d F_i(x)/dτ = − F_i(x) .    (9)
Contracting both sides with F_i(x) and integrating, we find that

F²(x) = Σ_{i=1}^{dn} F_i(x)²    (10)

can be interpreted as the cost function Eq. (6), also decaying exponentially, F²(x(τ)) = F²(x(0)) e^{−2τ}, with the fictitious time gradient flow Eq. (9) now taking the multi-dimensional form

dx/dτ = − [1/(1 − J(x))] F(x) .    (11)
Here we have considered the case of x a vector in a finite-dimensional vector space, with F²(x) the penalty for the distance of F(x) from its zero value at a fixed point. Our next task is to generalize the cost function to a cost functional F²[x] which measures the distance of a loop x(s) ∈ L(τ) from a periodic orbit x(t) ∈ p.
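A minimal sketch (ours) of the fictitious-time idea in its simplest multi-dimensional case, a single fixed point of a 2-dimensional map: Euler integration of dx/dτ = −(1 − J)^{−1} F(x). The Hénon map and its parameter values a = 1.4, b = 0.3 are illustrative choices, not taken from the text.

```python
import numpy as np

def henon(z, a=1.4, b=0.3):
    """One iteration of the Henon map (an illustrative 2-d 'return map')."""
    x, y = z
    return np.array([1.0 - a * x**2 + b * y, x])

def henon_jac(z, a=1.4, b=0.3):
    x, y = z
    return np.array([[-2.0 * a * x, b],
                     [1.0,          0.0]])

def newton_descent(z0, dtau=0.05, tau_max=40.0):
    """Euler integration of dz/dtau = -(1 - J(z))^{-1} F(z), F(z) = z - f(z):
    the fixed point of f becomes an attractor of the fictitious time flow."""
    z = np.asarray(z0, dtype=float)
    one = np.eye(2)
    for _ in range(int(tau_max / dtau)):
        z -= dtau * np.linalg.solve(one - henon_jac(z), z - henon(z))
    return z

z_star = newton_descent([0.5, 0.5])
print(z_star)
```

Even though the fixed point of the Hénon map is unstable under forward iteration, it is attractive for this flow, which is the point of the construction.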
3 Newton Descent in Loop Space
For a flow described by a set of ODEs, the multipoint shooting method of Sec. 2.2 can be quite efficient. However, multipoint shooting requires a set of phase space Poincaré sections such that an orbit leaving one section reaches the next one in a qualitatively predictable manner, without traversing other sections along the way. In turbulent, high-dimensional flows such sequences of sections are hard to come by. One cure for this ill might be a large set of Poincaré sections, with the intervening flight segments short and controllable. Here we shall take another path, and discard fixed Poincaré sections altogether. Emboldened by the success of methods such as multipoint shooting (which eliminates the long-time exponential instability by splitting an orbit into a number of short segments, each with a controllable expansion rate) and the cyclist relaxation methods (which replace map iteration by a contracting flow whose attractor is the desired periodic orbit of the original iterative dynamics), we now propose a method in which the initial guess is not a finite set of points, but an entire smooth, differentiable closed loop. A general flow Eq. (2) has no extremal principle associated with it (we discuss the simplification of our method in the case of Hamiltonian mechanics in Sec. 4), so there is a great deal of arbitrariness in constructing a flow in a loop space. We shall introduce here the simplest cost function which penalizes mis-orientation of the local loop tangent vector ṽ(x) relative to the dynamical velocity field v(x) of Eq. (2), and construct a flow in the loop space which minimizes this function. This flow is corralled by general topological features of the dynamics, with rather distant initial guesses converging to the desired orbit. Once the loop is sufficiently close to the periodic orbit, faster numerical algorithms can be employed to pin it down. In order to set the notation, we shall distinguish between (see Fig. 2):

closed path: any closed (not necessarily differentiable) continuous curve in M.

loop: a smooth, differentiable closed curve x(s) ∈ L ⊂ M, parametrized by s ∈ [0, 2π] with x(s) = x(s + 2π), and with the magnitude of the loop tangent vector fixed by
Figure 2. (a) A continuous path; (b) a differentiable loop L with its tangent velocity vector ṽ; (c) a periodic orbit p defined by the vector field v(x).
the (so far arbitrary) parametrization of the loop,

ṽ(x) = dx/ds ,    x = x(s) ∈ L .
annulus: a smooth, differentiable surface x(s, τ) ∈ L(τ) swept out by a family of loops L(τ), generated by integration along a fictitious time flow (see Fig. 3(a)), ẋ = ∂x/∂τ.
periodic orbit: given a smooth vector field v = v(x), (x, v) ∈ TM, a periodic orbit x(t) ∈ p is a solution of

dx/dt = v(x) ,    such that x(t) = x(t + T_p) ,

where T_p is the shortest period of p.
3.1 Newton Descent in the Loop Space
In the spirit of Eq. (10), we now define a cost function for a loop and the associated fictitious time τ flow which sends an initial loop L(0) via a loop family L(τ) into the periodic orbit p = L(∞), see Fig. 3(a). The only thing that we are given is the velocity field v(x), and we want to "comb" the loop L(τ) in such a way that its tangent field ṽ aligns with v everywhere, see Fig. 3(b). The simplest cost functional for the job is

F²(τ) = ∮ ds (ṽ − λv)² ,    ṽ = ṽ(x(s)) ,  v = v(x(s)) .    (12)
As we have fixed the loop period to 2π, the parameter λ = λ(x(s), τ) is needed to match the magnitude of the tangent field ṽ (measured in the loop parametrization units s) to the velocity field v (measured in the dynamical time units t). The simplest choice of the s parametrization is obtained by requiring that the ratio of
Figure 3. (a) An annulus L(τ) with vector field connecting smoothly the initial loop L(0) to the periodic orbit p. (b) In general the loop field ṽ(x) does not coincide with v(x); for a periodic orbit p, it does so at every x ∈ p.
the magnitude of the tangent vector and the velocity vector be the same everywhere on the loop,

λ(τ) = |ṽ| / |v| .    (13)
λ so defined is a global variable of the loop L(τ), a function of τ only. In the limit where the loop is the desired periodic orbit p, λ is the ratio of the dynamical period T_p to the loop parametrization period 2π, λ = T_p/2π. More general choices of the parametrization s will be discussed elsewhere [11]. Proceeding as in the derivation of the multidimensional Newton descent Eq. (11), we now arrive at the PDE for the fictitious time τ flow which evolves the initial loop L(0) into the desired periodic orbit p,

∂²x/∂s∂τ − λ A ∂x/∂τ − v ∂λ/∂τ = λv − ṽ .    (14)

Here A is the matrix of variations of the flow, A_ij(x) = ∂v_i(x)/∂x_j (its integral around p yields the linearized stability matrix for the periodic orbit p). Integrating

d(ṽ − λv)/dτ = − (ṽ − λv) ,    (15)

we find again that the flow in the fictitious time τ drives the loop exponentially to L(∞) = p, see Fig. 3(a):
ṽ − λv = e^{−τ} (ṽ − λv)|_{τ=0} .    (16)
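The cost functional Eq. (12) is cheap to evaluate for a discretized loop. The sketch below is our illustration (all concrete choices ours): the vector field is the harmonic oscillator flow v(x, y) = (−y, x), for which the unit circle is a periodic orbit, so the cost vanishes for a circular loop and is of order one for a distorted one. The tangent ṽ is computed spectrally, and λ of Eq. (13) is approximated by a single loop-averaged ratio.

```python
import numpy as np

def loop_cost(loop, v_field):
    """Cost functional Eq. (12), F^2 = oint ds (v~ - lambda v)^2, for a loop
    sampled at n equidistant values of s in [0, 2pi).  The tangent v~ = dx/ds
    is computed spectrally; lambda, Eq. (13), is approximated by the ratio of
    the loop-averaged magnitudes |v~| and |v|."""
    n = loop.shape[0]
    ik = 1j * np.fft.fftfreq(n, d=1.0 / n)      # d/ds in Fourier space
    tangent = np.real(np.fft.ifft(ik[:, None] * np.fft.fft(loop, axis=0), axis=0))
    v = np.array([v_field(p) for p in loop])
    lam = np.mean(np.linalg.norm(tangent, axis=1)) / np.mean(np.linalg.norm(v, axis=1))
    return np.sum((tangent - lam * v) ** 2) * (2 * np.pi / n)

v_field = lambda p: np.array([-p[1], p[0]])     # harmonic oscillator flow

s = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
circle    = np.stack([np.cos(s), np.sin(s)], axis=1)        # a periodic orbit of v
distorted = np.stack([np.cos(s), 0.5 * np.sin(s)], axis=1)  # a mere closed loop
print(loop_cost(circle, v_field), loop_cost(distorted, v_field))
```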
3.2 Loop Initialization

Replacement of a finite number of points along a trajectory by a closed smooth loop, and of the Newton-Raphson iteration by the Newton descent flow, results in
a second order PDE for the loop evolution. The loop parameter s converges (up to a proportionality constant) to the dynamical time t as the loop converges to the desired periodic orbit. The flow parameter τ plays the role of a fictitious time. Our aim is to apply this method to high-dimensional flows, and thus we have replaced the initial ODE dynamics Eq. (2) by a very high-dimensional PDE. And here our troubles start: can this be implemented at all? How do we get started? A qualitative understanding of the dynamics is the essential prerequisite to successful periodic orbit searches. We start by long-time numerical runs of the dynamics, in order to get a feeling for frequently visited regions of the phase space (the "natural measure"), and to search for close recurrences. We construct the initial loop L(0) using the intuition so acquired. Taking a fast Fourier transform of the guess, keeping the lowest frequency components, and transforming back to the initial phase space helps smooth the initial loop L(0). A simple linear stability analysis shows that the smoothness of the loop is maintained by the flow in the fictitious time τ. This, as well as worries about the marginal stability eigenvalues and other details of the numerical integration of the loop flow Eq. (14), will be described in a forthcoming publication [11]. Suffice it to say that the numerical work is extensive, but one is rewarded by periodic orbits that have not been obtainable by the methods employed previously.
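The Fourier smoothing of the initial guess described above can be sketched as follows (an illustrative example of ours: a noisy circular guess in a 2-dimensional phase space, keeping only the lowest |k| ≤ 5 loop modes).

```python
import numpy as np

def smooth_loop(loop, n_keep=5):
    """Low-pass filter a closed-loop guess: Fourier transform each phase-space
    coordinate along the loop, keep only the lowest |k| <= n_keep modes,
    and transform back; the result is a smooth closed curve."""
    n = loop.shape[0]
    c = np.fft.fft(loop, axis=0)
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))
    c[k > n_keep, :] = 0.0
    return np.real(np.fft.ifft(c, axis=0))

rng = np.random.default_rng(0)
s = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
noisy = np.stack([np.cos(s), np.sin(s)], axis=1) \
        + 0.05 * rng.standard_normal((256, 2))    # rough initial guess L(0)
smooth = smooth_loop(noisy)
```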
4 Extensions of the Method
In classical mechanics, particle trajectories are also solutions of a variational principle, Hamilton's variational principle. For example, one can determine a periodic orbit of a billiard by wrapping a rubber band of roughly correct topology around it, and then moving the points along the billiard walls until the length (that is, the action) of the rubber band is extremal (maximal or minimal under infinitesimal changes of the boundary points). In other words, extremization of the action requires only D-dimensional (degrees of freedom) rather than 2D-dimensional (dimension of the phase space) variations. Can we exploit this fact to simplify our calculations in Newtonian mechanics? The answer is yes, and it is easiest to understand in terms of Hamilton's variational principle, which states that classical trajectories are extrema of Hamilton's principal function (or, for fixed energy, the action)

R(q_1, t_1; q_0, t_0) = ∫_{t_0}^{t_1} dt L(q(t), q̇(t), t) ,
where L(q, q̇, t) is the Lagrangian. Given a loop L(τ) we can compute not only the tangent "velocity" vector ṽ, but also the local loop "acceleration" vector

ã(x) = d²x/ds² ,

and indeed as many s derivatives as needed. Matching the dynamical acceleration a(x) with the loop "acceleration" ã(x) results in an equation for the evolution of the loop,

d(ã − λ²a)/dτ = − (ã − λ²a) ,
where λ² appears instead of λ for dimensional reasons. This equation can be reexpressed in terms of the loop variables x(s); the resulting equation is somewhat more complicated than Eq. (14), but the saving is significant: only half of the phase-space variables appear in the fictitious time flow. More generally, the method works for Lagrangians of the form L(q, q̇, q̈, ..., t), with considerable computational savings [11].
5 Applications
We now offer several examples of the application of the Newton descent in the loop space, Eq. (14).
5.1 Unstable Recurrent Patterns in a Classical Field Theory
One of the simplest and most extensively studied spatially extended dynamical systems is the Kuramoto-Sivashinsky system [14]

u_t = (u²)_x − u_xx − ν u_xxxx ,    (17)

which arises as an amplitude equation for interfacial instabilities in a variety of contexts. The "flame front" u(x, t) has compact support, with x ∈ [0, 2π] a periodic space coordinate. The u² term makes this a nonlinear system, t is the time, and ν is a fourth-order "viscosity" damping parameter that irons out any sharp features. Numerical simulations demonstrate that as the viscosity decreases (or the size of the system increases), the "flame front" becomes increasingly unstable and turbulent [15,16]. The task of the theory is to describe this spatio-temporal turbulence and yield quantitative predictions for its measurable consequences. As was argued in Ref. 17, the turbulent dynamics of such systems can be visualized as a walk through the space of spatio-temporally unstable but recurrent patterns. In the PDE case we can think of a spatio-temporally discretized guess solution as a surface covered with small but misaligned tiles. Decreasing Eq. (12) by Newton descent means smoothing these strutting fish scales into a smooth solution of the PDE in question. In the case at hand it is more convenient to transform the problem to Fourier space. If we impose the periodic boundary condition u(t, x + 2π) = u(t, x) and choose to study only the odd solutions u(−x, t) = −u(x, t) [17], the spatial Fourier series for the wavefront is
u(x, t) = i Σ_{k=−∞}^{+∞} a_k(t) e^{ikx} ,    (18)

with real Fourier coefficients a_{−k} = −a_k, and Eq. (17) takes the form
ȧ_k = (k² − νk⁴) a_k − k Σ_{m=−∞}^{+∞} a_m a_{k−m} .    (19)
After the initial transients die out, the magnitude of a_k decreases exponentially with k⁴, justifying the use of Galerkin truncations in numerical simulations. As in numerical work on any PDE, we thus replace Eq. (17) by a finite but high-dimensional system of ODEs. The initial searches for the unstable recurrent patterns for this spatially
Figure 4. (a) An initial guess L(0), and (b) the periodic orbit p reached by the Newton descent, for the Kuramoto-Sivashinsky system in a spatio-temporally turbulent regime (viscosity parameter ν = 0.015, d = 32 Fourier mode truncation). In the discretization of the initial loop L(0) each point has to be specified in all d dimensions; here the coordinates {a_6, a_7, a_8} are picked arbitrarily; other projections from d = 32 dimensions onto a subset of 3 coordinates are equally (un)informative.
extended system found several hundreds of periodic solutions close to the onset of spatiotemporal chaos, but a systematic exploration of more turbulent regimes was unattainable by the numerical techniques employed [18,17]. With decreasing viscosity ν the system becomes quite turbulent, with the spatiotemporal portraits of the flame front u(x, t) a complex labyrinth of eddies of different scales and orientations, and its Fourier space dynamics Eq. (19) a complicated high-dimensional trajectory. In Fig. 4 we give an example of a Newton descent calculation for this system, for a viscosity parameter significantly lower than in the earlier investigations. Although the initial guess L(0) is quite far from the final configuration p = L(∞), the method succeeds in molding the starting loop into a periodic solution of this high-dimensional flow. A systematic exploration of the possible shortest cycles and the hierarchy of longer cycles will be reported elsewhere [11].
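The Galerkin-truncated system Eq. (19) is straightforward to integrate directly. The sketch below is our illustration (choices ours: d = 16 modes, ν = 0.029910 as in Ref. 17, and a semi-implicit scheme that treats the stiff linear term (k² − νk⁴) exactly while stepping the quadratic term explicitly); it exhibits the steep fall-off of the tail magnitudes |a_k| mentioned above.

```python
import numpy as np

def ks_quadratic(a):
    """Quadratic term of Eq. (19), -k * sum_m a_m a_{k-m}, for the odd
    (a_{-m} = -a_m) Galerkin truncation with modes a_1..a_d."""
    d = a.size
    k = np.arange(1, d + 1)
    b = np.concatenate([-a[::-1], [0.0], a])   # b[m] for m = -d..d
    conv = np.convolve(b, b)                   # convolution index runs -2d..2d
    return -k * conv[2 * d + k]

def integrate_ks(a0, nu, dt=5e-4, t_max=10.0):
    """Integrate the truncated Eq. (19); the stiff linear part (k^2 - nu k^4)
    is integrated exactly, the quadratic part by an explicit Euler step."""
    a = np.array(a0, dtype=float)
    k = np.arange(1, a.size + 1)
    lin = np.exp((k**2 - nu * k**4) * dt)
    for _ in range(int(t_max / dt)):
        a = lin * a + dt * ks_quadratic(a)
    return a

# Illustrative run: after transients, the nu k^4 term suppresses the tail modes.
a0 = np.zeros(16)
a0[:3] = 0.3
a_final = integrate_ks(a0, nu=0.029910)
print(np.abs(a_final))
```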
5.2 Hénon-Heiles and Restricted 3-body Problems

Next we offer two examples of the applicability of the extension of the Newton descent of Sec. 4 to low-dimensional Hamiltonian flows. The Hénon-Heiles Hamiltonian [19]

H = (1/2)(ẋ² + ẏ² + x² + y²) + x²y − y³/3    (20)

is frequently used in astrophysics. Fig. 5 shows an application of the method of Sec. 4 to a periodic orbit search restricted to the configuration space. In the Hénon-Heiles case the acceleration (a_x, a_y) depends only on the configuration coordinates (x, y). More generally, the a's could also depend on (ẋ, ẏ). For example, the restricted three-body problem equations of motion [20]

ẍ = 2ẏ + x − (1 − μ)(x + μ)/r_1^3 − μ(x − 1 + μ)/r_2^3 ,
ÿ = −2ẋ + y − (1 − μ) y/r_1^3 − μ y/r_2^3 ,    (21)
Figure 5. (a) An initial loop L(0), and (b) the periodic orbit p reached by the Newton descent, for the Hénon-Heiles system in a chaotic region, E = 0.1794.
Figure 6. (a) An initial loop L(0), and (b) the periodic orbit p reached by the Newton descent, for the restricted three-body problem in the chaotic regime, μ = 0.04, T_p = 2.7365.
with

r_1 = √((x + μ)² + y²) ,    r_2 = √((x − 1 + μ)² + y²) ,
describe the motion of a "test particle" in a rotating frame under the influence of the gravitational force of two heavy bodies with masses 1 and μ ≪ 1, fixed at (−μ, 0) and (1 − μ, 0) in the (x, y) coordinate frame. The periodic solutions of Eq. (21) correspond to periodic or quasi-periodic motion of the test particle in the inertial frame. Fig. 6 shows an application of the Newton descent method to this problem.
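For the Hénon-Heiles Hamiltonian Eq. (20) the configuration-space accelerations entering the extension of Sec. 4 are a_x = −x − 2xy and a_y = −y − x² + y². A quick sketch of the direct dynamics (our choices: velocity-Verlet integrator, an arbitrary bounded initial condition), with energy conservation as the sanity check:

```python
import numpy as np

def accel(q):
    """Henon-Heiles accelerations a = -dV/dq, with the potential
    V = (x^2 + y^2)/2 + x^2 y - y^3/3 from Eq. (20)."""
    x, y = q
    return np.array([-x - 2.0 * x * y, -y - x**2 + y**2])

def energy(q, p):
    x, y = q
    return 0.5 * (p @ p) + 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3.0

def verlet(q, p, dt=1e-3, n_steps=20000):
    """Velocity-Verlet integration of the Henon-Heiles equations of motion."""
    a = accel(q)
    for _ in range(n_steps):
        p = p + 0.5 * dt * a
        q = q + dt * p
        a = accel(q)
        p = p + 0.5 * dt * a
    return q, p

q0 = np.array([0.0, 0.1])
p0 = np.array([0.55, 0.0])    # an illustrative bounded initial condition
E0 = energy(q0, p0)
qf, pf = verlet(q0.copy(), p0.copy())
print(E0, energy(qf, pf))
```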
6 Summary and Future Directions
The periodic orbit theory approach to classically turbulent field theory is to visualize turbulence as a sequence of near recurrences in a repertoire of unstable spatiotemporal patterns. So far, the existence of a hierarchy of spatio-temporally periodic solutions, and the applicability of the periodic orbit theory to the evaluation of global averages for a spatially extended nonlinear system, has been demonstrated in one example, the Kuramoto-Sivashinsky system [14]. The parameter ranges previously explored probe the weakest nontrivial "turbulence", and it is an open question to what extent the approach remains implementable as such classical fields go more turbulent. The bottleneck has been the lack of methods for finding even the simplest periodic orbits in high-dimensional flows, and the lack of intuition as to what such orbits would look like. Here we have proposed the Newton descent method, a very conservative method which emphasizes topological robustness at a considerable cost in numerical speed, in order to be able to find at least the shortest spatio-temporally unstable periodic solutions of a given (infinite dimensional) classical field theory. Because our method uses information from a large number of points in phase space, global topology is encoded in the correlations between these points. As we are still clueless as to what the solutions should look like, currently we have no way of telling to which periodic orbit the loop space flow Eq. (14) will converge, other than the "nearest" periodic orbit of topology "similar" to the initial guess. What we have described here is only a proof of principle. In devising the Newton descent method we have made a series of restrictive choices, many of which could be profitably relaxed. The choice of a Euclidean metric cost function F²(τ) has no compelling merit other than notational simplicity. For a flow like the Kuramoto-Sivashinsky one, the a_1, a_2, ... directions are clearly more important than a_k, a_{k+1}, ... for large k, and that is not encoded in the current form of the cost function. A more inspired choice would use intrinsic information about the dynamics, replacing δ_ij F_i F_j by A_ij F_i F_j or some more appropriate metric that penalizes straying away in the unstable directions more than deviations in the strongly contracting ones. Particular classes of systems might be better described by extremization of altogether different cost functions, such as the action of Hamiltonian mechanics.

Loop parametrization.
Once it is understood that, given a vector field v(x), the objective is to determine a loop x(s) whose tangent vectors point along v(x) everywhere along the loop, there is no reason to use the dynamical system time t to parametrize the loop; any length parameter s will do, and some other choice might be more effective in numerical discretizations.

Zero modes. We eliminate the marginal eigendirection along the loop by "gauge fixing", fixing one point on the loop by a Poincaré section. This seems superfluous, and should be eliminated by ensuring that the average displacement of loop points along the loop is zero, or by other criteria that guarantee exponential convergence to the desired periodic orbit.

The Newton descent method introduced here replaces the super-exponentially contracting Newton-Raphson iteration by an exponentially contracting flow. Keeping the fictitious time step dτ infinitesimal is both against the spirit of the Newton method and not what we do in practice; once the approximate loop is sufficiently close to the desired periodic orbit, dτ is replaced by discrete steps of size dτ → 1, in order that the super-exponential convergence of the Newton method be regained.
References
1. M.C. Gutzwiller, Chaos in Classical and Quantum Mechanics (Springer-Verlag, New York, 1990).
2. P. Cvitanović, Physica A 288, 61 (2000); nlin.CD/0001034.
3. E. Hopf, Bereich. Sachs. Acad. Wiss. Leipzig, Math. Phys. Kl. 94, 19 (1942).
4. D.W. Moore and E.A. Spiegel, Astrophys. J. 143, 871 (1966).
5. N.H. Baker, D.W. Moore and E.A. Spiegel, Quart. J. Mech. and Appl. Math. 24, 391 (1971).
6. E.A. Spiegel, Proc. Roy. Soc. A413, 87 (1987).
7. J. Stoer and R. Bulirsch, Introduction to Numerical Analysis (Springer-Verlag, New York, 1983).
8. P. Cvitanović et al., Classical and Quantum Chaos (Niels Bohr Institute, Copenhagen, 2003), www.nbi.dk/ChaosBook.
9. V.I. Arnol'd, Ordinary Differential Equations (Springer-Verlag, New York, 1992).
10. W.H. Press, S.A. Teukolsky, W.T. Vetterling and B.P. Flannery, Numerical Recipes in C (Cambridge University Press, 1992).
11. Y. Lan and P. Cvitanović, A variational method for finding periodic orbits (in preparation).
12. J.W. Thomas, Numerical Partial Differential Equations (Texts in Applied Mathematics, Springer-Verlag, New York, 1995).
13. D. Auerbach, P. Cvitanović, J.-P. Eckmann, G.H. Gunaratne and I. Procaccia, Phys. Rev. Lett. 58, 2387 (1987).
14. Y. Kuramoto and T. Tsuzuki, Progr. Theor. Phys. 55, 365 (1976); G.I. Sivashinsky, Acta Astr. 4, 1177 (1977).
15. I.G. Kevrekidis, B. Nicolaenko and J.C. Scovel, SIAM J. Applied Math. 50, 760 (1990).
16. C. Foias, B. Nicolaenko, G.R. Sell and R. Témam, J. Math. Pures et Appl. 67, 197 (1988).
17. F. Christiansen, P. Cvitanović and V. Putkaradze, Nonlinearity 10, 55 (1997); chao-dyn/9606016.
18. S.M. Zoldi and H.S. Greenside, Phys. Rev. E 57, R2511 (1998).
19. M. Hénon and C. Heiles, Astron. J. 69, 73 (1964).
20. V. Szebehely, Theory of Orbits (Academic Press, New York, 1967).
NONEXTENSIVE STATISTICAL MECHANICS APPLICATIONS TO NUCLEAR AND HIGH ENERGY PHYSICS*
C. TSALLIS
Centro Brasileiro de Pesquisas Fisicas, Rua Xavier Sigaud 150, 22290-180 Rio de Janeiro, RJ, Brazil

E.P. BORGES
Escola Politécnica, Universidade Federal da Bahia, Rua Aristides Novis 2, 40210-630 Salvador, BA, Brazil
A variety of phenomena in nuclear and high energy physics seemingly do not satisfy the basic hypothesis for possible stationary states to be of the type covered by Boltzmann-Gibbs (BG) statistical mechanics. More specifically, the system appears to relax, along time, onto macroscopic states which violate the ergodic assumption. Some of these phenomena appear to follow, instead, the prescriptions of nonextensive statistical mechanics. In the same manner that the BG formalism is based on the entropy S_BG = −k Σ_i p_i ln p_i, the nonextensive one is based on the form S_q = k (1 − Σ_i p_i^q)/(q − 1) (with S_1 = S_BG). Typically, the systems following the rules derived from the former exhibit an exponential relaxation with time toward a stationary state characterized by an exponential dependence on the energy (thermal equilibrium), whereas those following the rules derived from the latter are characterized by (asymptotic) power laws (in both the typical time dependences and the energy distribution at the stationary state). A brief review of this theory is given here, as well as of some of its applications, such as electron-positron annihilation producing hadronic jets, collisions involving heavy nuclei, the solar neutrino problem, anomalous diffusion of a quark in a quark-gluon plasma, and the flux of cosmic rays on Earth. In addition to these points, very recent developments generalizing nonextensive statistical mechanics itself are mentioned.
*To appear in the proceedings of the X International Workshop on Multiparticle Production - Correlations and Fluctuations in QCD (8-15 June 2002, Crete), ed. N. Antoniou (World Scientific, Singapore, 2003). [email protected], [email protected]
1. Introduction

The foundation of statistical mechanics comes from mechanics (classical, quantum, relativistic, or any other elementary dynamical theory). Consistently, in our opinion, the expression of the entropy to be adopted for formulating statistical mechanics (and ultimately thermodynamics) depends on the particular type of occupancy of phase space (or Hilbert space, or Fock space, or analogous space) that the microscopic dynamics of the system under study collectively favors. In other words, there appears nowadays to be strong evidence that statistical mechanics is larger than Boltzmann-Gibbs (BG) statistical mechanics, and that the concept of physical entropy can be larger than

S_BG = −k Σ_{i=1}^{W} p_i ln p_i    (1)

(hence S_BG = k ln W for equal probabilities), pretty much as geometry is known today to be larger than Euclidean geometry, since Euclid's celebrated parallel postulate can be properly generalized in mathematically and physically very interesting manners. Let us recall the words of A. Einstein expressing his understanding of Eq. (1): Usually W is put equal to the number of complexions [...]. In order to calculate W, one needs a complete (molecular-mechanical) theory of the system under consideration. Therefore it is dubious whether the Boltzmann principle has any meaning without a complete molecular-mechanical theory or some other theory which describes the elementary processes. S = log W + const. seems without content, from a phenomenological point of view, without giving in addition such an Elementartheorie. This standpoint is, in our opinion, quite similar to the position that Riemann adopted when he began his study of "the concepts which lie at the base of Geometry". Along this line, it is our understanding that the entropy expressed in Eq. (1) is, on physical grounds, no more irreducibly universal than Euclidean geometry is with regard to all possible geometries, which, by the way, also include the fractal one, which inspired the theory addressed in this paper. Nonextensive statistical mechanics [2], to which this brief review is dedicated, is based on the following expression:

S_q = k (1 − Σ_{i=1}^{W} p_i^q) / (q − 1)    (q ∈ R; S_1 = S_BG) .    (2)
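Eq. (2) and its q → 1 limit are easy to verify numerically; below is a direct implementation of ours (with the q = 1 case evaluated as the limit S_1 = −k Σ_i p_i ln p_i).

```python
import numpy as np

def S_q(p, q, k=1.0):
    """Nonextensive entropy, Eq. (2): S_q = k (1 - sum_i p_i^q)/(q - 1),
    with the BG limit S_1 = -k sum_i p_i ln p_i."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0.0]                      # 0 ln 0 = 0 convention
    if q == 1.0:
        return -k * np.sum(p * np.log(p))
    return k * (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.25, 0.125, 0.125])
print(S_q(p, 1.0), S_q(p, 1.0001))      # q -> 1 recovers S_BG
print(S_q(np.ones(8) / 8, 1.0))         # equal probabilities: k ln W = ln 8
```

For equal probabilities S_q = k (W^{1−q} − 1)/(1 − q), which reduces to k ln W at q = 1.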
It is well known that for microscopic dynamics which relax on an ergodic
occupation of phase space, the adequate entropic form to be used is that of Eq. (1). Such an assumption is ubiquitously satisfied, and constitutes the physical basis for the great success of BG thermostatistics for over a century. We believe that a variety of more complex occupations of phase space may be handled with more complex entropies. In particular, it seems that Eq. (2), associated with an index q which is dictated by the microscopic dynamics (and which generically differs from unity), is adequate for a vast class of stationary states ubiquitously found in Nature. In recent papers, E.G.D. Cohen and M. Baranger have also addressed this question. A significant number of systems, e.g., turbulent fluids ([5,6] and references therein), electron-positron annihilation [7,8], collisions of heavy nuclei [9,10,11], solar neutrinos [12,13], quark-gluon plasma [14], cosmic rays [15], self-gravitating systems [17], peculiar velocities of galaxy clusters [18], cosmology [19], chemical reactions [20], economics [21,22,23], motion of Hydra viridissima [24], theory of anomalous kinetics [25], classical chaos [26], quantum chaos [27], quantum entanglement [28], anomalous diffusion [29], long-range-interacting many-body classical Hamiltonian systems ([30] and references therein), internet dynamics [31], and others, are known nowadays which in no trivial way accommodate within the BG statistical mechanical concepts. Systems like these have been handled with the functions and concepts which naturally emerge within nonextensive statistical mechanics [2,32,33]. We may think of q as a biasing parameter: q < 1 privileges rare events, while q > 1 privileges common events. Indeed, p < 1 raised to a power q < 1 yields a value larger than p, and the relative increase p^q/p = p^{q−1} is a decreasing function of p, i.e., values of p closer to 0 (rare events) are benefited. Correspondingly, for q > 1, values of p closer to 1 (common events) are privileged. Therefore, the BG theory (i.e., q = 1) is the unbiased statistics. A concrete consequence of this is that the BG formalism yields exponential equilibrium distributions (and time behavior of typical relaxation functions), whereas nonextensive statistics yields (asymptotic) power-law distributions (and relaxation functions). Since the BG exponential is recovered as a limiting case, we are talking of a generalization, not an alternative. To obtain the probability distribution associated with the relevant stationary state (thermal equilibrium or metaequilibrium) of our system we must optimize the entropic form (2) under the following constraints [2,32]:
Σ_{i=1}^{W} p_i = 1 ,    (3)

and

U_q ≡ Σ_{i=1}^{W} P_i E_i ,    with the escort probabilities P_i ≡ p_i^q / Σ_{j=1}^{W} p_j^q ,    (4)
where {E_i} is the set of eigenvalues of the Hamiltonian (with specific boundary conditions), and U_q is a fixed and finite number. This optimization yields

p_i = e_q^{−β_q(E_i − U_q)} / Z̄_q ,    (5)

with

Z̄_q ≡ Σ_{j=1}^{W} e_q^{−β_q(E_j − U_q)} ,    (6)

and

β_q ≡ β / Σ_{j=1}^{W} p_j^q ,    (7)

β being the optimization Lagrange parameter associated with the generalized internal energy U_q. Equation (5) can be rewritten as
p_i ∝ [1 − (1 − q) β′ E_i]^{1/(1−q)} = e_q^{−β′E_i} ,    (8)
where β′ is a renormalized inverse "temperature", and the q-exponential function is defined as

e_q^x ≡ [1 + (1 − q) x]^{1/(1−q)} = 1/[1 − (q − 1) x]^{1/(q−1)}    (with e_1^x = e^x).

This function replaces, in a vast number of relations and phenomena, the usual BG factor. In particular, the ubiquitous Gaussian distribution ∝ e^{−ax²} becomes generalized into the distribution ∝ e_q^{−a_q x²} = 1/[1 + (q − 1) a_q x²]^{1/(q−1)} (fat-tailed if q > 1).

2. Generalizing nonextensive statistical mechanics

Nonextensive statistical mechanics generalizes the BG theory. It presumably addresses (multi)fractal-like occupancy of phase space at the associated stationary states (e.g., metaequilibrium), instead of the usual homogeneous occupancy which satisfies ergodicity. Is there any fundamental reason for stopping here? We do not think so. In fact, roads pointing towards generalizations of (or alternatives for) nonextensive statistical mechanics are already open in the literature. Let us mention here two of them (already exhibiting some successes), namely (i) crossovers between q-statistics and q′-statistics ([15] and references therein), and (ii) the recently introduced Beck-Cohen superstatistics [34]. Both of them address the energy
distributions corresponding to the stationary states, and are perfectly compatible, as we shall show. More precisely, the first type can be thought of as a particular case of the second type. However, statistical mechanics is much more than a distribution correctly corresponding to the stationary state. Indeed, if any given entropy S({p_i}) is optimized by a specific distribution p_i, all entropic forms which are monotonic functions of S will be optimized by the same distribution. Nevertheless, only a very restricted class of such entropic forms can be considered as serious candidates for constructing a full thermostatistical theory, eventually connected with thermodynamics. In particular, one expects the correct entropy to be concave and stable. Such is the case [35] of S_q, as well as of the generalized entropy recently proposed [36,37] for the just mentioned superstatistics [34]. We briefly address these questions in this Section. Let us first consider the following differential equation:

dy/dx = a y    (y(0) = 1) .    (9)
The solution is given by

y = e^{ax} .    (10)
We can use this result in at least three manners which are of interest in statistical mechanics: (i) We may refer to the sensitivity to the initial conditions, and consider x = t , where t is time, y = E = limAz(o),oAx(t)/Az(0), where Az(t) denotes the discrepancy of two initial conditions in a one-dimensional map (or, for higher-dimensional systems, the analogous situation for the axis along which the maximum dynamical instability occurs), and a = XI # 0, where XI is the Lyapunov exponent. In this case Eq. (10) reads in the familiar form:
ξ(t) = e^{λ_1 t}. (11)
(ii) We may refer to the relaxation towards the stationary state (thermal equilibrium), and consider x = t, y = [O(t) − O(∞)]/[O(0) − O(∞)], where O is the average value of a generic observable, and a = −1/τ < 0, where τ is a relaxation time. In this case Eq. (10) reads in the typical form:

[O(t) − O(∞)]/[O(0) − O(∞)] = e^{−t/τ}. (12)
(iii) We may refer to the energy distribution at thermal equilibrium of a Hamiltonian system, and consider x = Ei, where Ei is the energy of the
i-th microscopic state, y = Z p(E_i), where p is the energy probability and Z the partition function, and −a = β is the inverse temperature. In this case Eq. (10) reads in the familiar BG form:

p(E_i) = e^{−β E_i}/Z. (13)
This distribution is of course the one that optimizes the entropy S_BG under the standard constraints for the canonical ensemble. Let us next generalize Eq. (9) as follows:

dy/dx = a y^q    (y(0) = 1). (14)
The solution is given by

y = e_q^{ax} = [1 + (1 − q) a x]^{1/(1−q)}, (15)
e_q^{ax} being from now on referred to as the q-exponential function (e_1^x = e^x).
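As a quick numerical sanity check (a minimal sketch; the function name exp_q is ours, not the paper's), the q-exponential of Eq. (15) reduces to the ordinary exponential for q → 1 and decays as a power law, rather than exponentially, for q > 1:

```python
import math

def exp_q(x: float, q: float) -> float:
    """q-exponential of Eq. (15): [1 + (1-q)x]^(1/(1-q)), with e_1^x = e^x."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    # cutoff convention: the q-exponential vanishes where the base turns negative
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# q -> 1 recovers the ordinary exponential
assert abs(exp_q(0.5, 1.0 + 1e-9) - math.exp(0.5)) < 1e-6
# fat tail for q > 1: a power law (here 1/101) instead of e^{-100}
print(exp_q(-100.0, 2.0), math.exp(-100.0))
```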
The three above possible physical interpretations of such a solution now become: (i) For the sensitivity to the initial conditions,
ξ(t) = e_q^{λ_q t} = [1 + (1 − q) λ_q t]^{1/(1−q)}, (16)
where λ_q ≠ 0 is the generalized Lyapunov coefficient (see [26]), and, at the edge of chaos, λ_q > 0 and q < 1. (ii) For the relaxation,

[O(t) − O(∞)]/[O(0) − O(∞)] = e_q^{−t/τ_q} = [1 − (1 − q) t/τ_q]^{1/(1−q)}, (17)

where τ_q > 0 is a generalized relaxation time, and typically q ≥ 1 [38]. (iii) For the energy distribution, we get the form which emerges in nonextensive statistical mechanics, namely [2,32]

p(E_i) = e_q^{−β_q E_i}/Z_q, (18)
where usually, but not necessarily, β_q > 0 and q ≥ 1. This distribution is the one that optimizes the entropy S_q under appropriate constraints for the canonical ensemble. Let us next unify Eqs. (9) and (14) as follows:

dy/dx = a_1 y + a_q y^q    (y(0) = 1). (19)
This is a simple particular case of the Bernoulli equation (the substitution u = y^{1−q} linearizes it), and its solution is given by

y = [(1 + a_q/a_1) e^{(1−q) a_1 x} − a_q/a_1]^{1/(1−q)}. (20)

This solution reproduces Eq. (10) if a_q = 0, and Eq. (15) if a_1 = 0. It corresponds to a crossover from a q ≠ 1 behavior at small values of x to a q = 1 behavior at large values of x. The crossover occurs at x_c ≃ 1/[(q − 1) a_1] [38].

3. Applications
Let us now briefly review five recent applications of the ideas associated with nonextensive statistical mechanics to phenomena in nuclear and high energy physics, namely electron-positron annihilation [7,8], collisions of heavy nuclei [9,10,11], the solar neutrino deficit [12,13], quark anomalous diffusion [14], and the flux of cosmic rays [15].

Electron-positron annihilation: In high energy collisions of an electron with a positron, annihilation occurs and, immediately after, typically two or three hadronic jets are produced. The probability distribution of their transverse momenta is non-Boltzmannian, more strongly so with increasing collision energy. This phenomenon has defied theoreticians for several decades, particularly since Hagedorn [16] quantitatively analyzed such data in the frame of BG statistical mechanics. A phenomenological theory has recently been proposed by Bediaga et al [7], which beautifully fits the data. There are two fitting parameters, namely the temperature T and the entropic index q. For each collision energy E_c, a set (T, q) is determined. It numerically comes out that q depends on the energy (like q(∞) − q(E_c) ∝ E_c^{−1/2} for increasingly large E_c, and q(E_c → 0) ≃ 1), but T does not! This invariance of T with respect to the collision energy constitutes the central hypothesis of the physical scenario advanced long ago by Hagedorn. This scenario is now confirmed. The ingredients for a microscopic model within this approach have also been proposed [8].

Heavy nuclei collisions: A variety of high-energy collisions have been discussed in terms of the present nonextensive formalism. Examples include proton-proton, central Pb-Pb and other nuclear collisions [9,10]. Along related lines, entropic inequalities applied to pion-nucleon experimental phase shifts have provided strong evidence of nonextensive quantum statistics [11].
Solar neutrino problem: The solar plasma is believed to produce large amounts of neutrinos through a variety of mechanisms (e.g., the proton-proton chain). The calculation done using the so-called Solar Standard Model (SSM) results in a neutrino flux over the Earth which is roughly double what is measured. This is sometimes referred to as the neutrino problem or the neutrino enigma. There is by no means proof that this neutrino flux deficit is due to a single cause. It has recently been verified that neutrino oscillations do seem to exist ([12] and references therein), which would account for part of the deficit. But it is not at all clear that they would account for the entire discrepancy. Quarati and collaborators [13] argue that part of it, even, perhaps, an appreciable part of it, could be due to the fact that BG statistical mechanics is used within the SSM. The solar plasma involves turbulence, long-range interactions, possibly long-range memory processes, all of them phenomena that could easily defy the applicability of the BG formalism. Then they show [13] in great detail how the modification of the "tail" of the energy distribution could considerably modify the expected neutrino flux. Consequently, small departures from q = 1 (e.g., |q − 1| of the order of 0.1) would be enough to produce as much as a 50% difference in the flux. This is due to the fact that most of the neutrino flux occurs at what is called the Gamow peak. This peak occurs at energies well above the temperature, i.e., at energies in the tail of the distribution.

Quark diffusion: The anomalous diffusion of a charm quark in a quark-gluon plasma has been analyzed by Walton and Rafelski [14] through both nonextensive statistical mechanical arguments and quantum chromodynamics. The results coincide, as they should, only for q = 1.114.

Cosmic rays: The flux Φ of cosmic rays arriving on Earth is a quantity whose measured range is among the widest experimentally known (33 decades in fact).
This distribution refers to a range of energies E which also is impressive (13 decades). This distribution is very far from exponential: see Figs. 1 and 2. This basically indicates that no BG thermal equilibrium is achieved, but some other (either stationary, or relatively slowly varying) state, characterized in fact by a power law. If the distribution is analyzed in more detail, one verifies that two, and not one, power-law regimes are involved, separated by what is called the "knee" (slightly below 10^16 eV). At very high energies, the power law seems to be interrupted by what is called the "ankle" (close to 10^19 eV) and perhaps a cutoff. The moments
M_l ≡ [∫_0^{E_cutoff} dE Φ(E) (E − ⟨E⟩)^l]/[∫_0^{E_cutoff} dE Φ(E)] (l = 1, 2, 3) as functions of the cutoff energy E_cutoff (assumed to be abrupt, for simplicity) are calculated as well: see Figs. 3, 4 and 5. At high cutoff energies, ⟨E⟩ saturates at 2.48944... × 10^9 eV [15], a value which is over ten times larger than the Hagedorn temperature (close to 1.8 × 10^8 eV [8]). In the same limit, we obtain for the specific-heat-like quantity M_2 ≃ ⟨E²⟩ ≃ 6.29 × 10^21 (eV)². Finally, M_3 ≃ ⟨E³⟩ diverges with increasing E_cutoff. This is of course due to the fact that, in the high energy limit, Φ ∝ 1/E^{3.4}; consequently the third-moment integrand vanishes like 1/E^{0.4}, which is not integrable at infinity. One may guess that, along such wide ranges (of both fluxes and energies), a variety of complex intra- and inter-galactic phenomena are involved, related both to the sources of the cosmic rays and to the media they cross before arriving on Earth. However, from a phenomenological viewpoint, the overall results amount to something quite simple. Indeed, by solving a simple differential equation, a quite remarkable agreement is obtained [15]. This differential equation is the following one:

dy/dx = −b′ y^{q′} − b y^q    (y(0) = 1). (21)
This differential equation has remarkable particular cases. The most famous one is (q′, q) = (1, 2), since it precisely corresponds to the differential equation which enabled Planck, in his October 1900 paper, to (essentially) guess the black-body radiation distribution, thus opening (together with his December 1900 paper) the road to quantum mechanics. The more general case q′ = 1 and arbitrary q is a simple particular instance of the Bernoulli equation, and, as such, has a simple explicit solution (Eq. (20) with a_1 = −b′ and a_q = −b). This solution has proved its efficiency in a variety of problems, including in generalizing the Zipf-Mandelbrot law for quantitative linguistics (for a review, see Montemurro's article in the Gell-Mann-Tsallis volume [33]). Finally, the generic case q > q′ > 1 also has an explicit solution (though not particularly simple, being given in terms of two hypergeometric functions; see [38]) and produces, taking also into account the ultra-relativistic ideal gas density of states, the above-mentioned quite good agreement with the observed fluxes. Indeed, if we assume 0 < b′ << b and q′ < q, the distribution makes a neat crossover from a power law characterized by q at low energies to a power law characterized by q′ at high energies, which is exactly what the cosmic rays exhibit to a quite good approximation. Let us finally mention that the first possible microscopic
interpretation of our phenomenological theory has just been suggested [39]. For possible effects of a slightly nonextensive black-body radiation on cosmic rays see [40]. Finally, other aspects related to cosmic rays have also been shown to exhibit fingerprints of nonextensivity [41].
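The crossover behavior of Eq. (21) can be illustrated numerically. In the following sketch all parameter values (b, b′, q, q′) are invented for the demonstration and are not the fitted values of [15]; we integrate the equation with a fourth-order Runge-Kutta scheme in ln x, so that many decades in x can be covered, and measure the local log-log slope in the two regimes:

```python
import numpy as np

# illustrative (assumed) parameter values with b' << b and q > q' > 1,
# chosen only to make the crossover visible; they are not fitted values
b, bp, q, qp = 1.0, 1e-4, 2.5, 1.8

def f(y):
    """Right-hand side of Eq. (21): dy/dx = -b' y^q' - b y^q."""
    return -bp * y**qp - b * y**q

# RK4 in t = ln(x), i.e. dy/dt = x * f(y), covering 15 decades in x
t, y, h = float(np.log(1e-3)), 1.0, 0.01
samples = {}
while t < np.log(1e12):
    k1 = np.exp(t) * f(y)
    k2 = np.exp(t + h / 2) * f(y + h / 2 * k1)
    k3 = np.exp(t + h / 2) * f(y + h / 2 * k2)
    k4 = np.exp(t + h) * f(y + h * k3)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h
    samples[round(t, 2)] = y

def slope(x1, x2):
    """Local log-log slope of the numerical solution between x1 and x2."""
    y1 = samples[round(float(np.log(x1)), 2)]
    y2 = samples[round(float(np.log(x2)), 2)]
    return float(np.log(y2 / y1) / np.log(x2 / x1))

print(slope(1e2, 1e3), slope(1e11, 1e12))
# low-x slope ~ -1/(q-1) = -0.667; high-x slope approaches -1/(q'-1) = -1.25
```

The two measured slopes reproduce the two power-law regimes separated by a "knee", as described in the text.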
4. Conclusions
In nuclear and high energy physics, there is a considerable amount of anomalous phenomena that benefit from a thermostatistical treatment which exceeds the usual capabilities of Boltzmann-Gibbs statistical mechanics. This fact is due to the relevance of long-range forces, as well as to a variety of nonlinear dynamical aspects, possibly leading to nonmarkovian processes, i.e., long-term microscopic memory. Some of these phenomena appear to be tractable within nonextensive statistical mechanics, and we have illustrated this with a few typical examples. For the particular case of cosmic rays, we have indicated their average energy ⟨E⟩ ≃ 2.48944... × 10^9 eV, and the specific-heat-like quantity ⟨E²⟩ − ⟨E⟩² ≃ 6.29 × 10^21 (eV)², with the hope that they might be usefully compared to related astrophysical quantities, either already available in the literature, or to be studied. Along the same vein we have also presented the dependence of such moments on a possibly existing high-energy cutoff. In addition to this, we have sketched the possible generalization of nonextensive statistical mechanics on the basis of a recently introduced entropic form [36], whose stationary state is the Beck-Cohen superstatistics [34]. The metaequilibrium distribution associated with a crossover between q-statistics and q′-statistics can be seen as a particular case of this generalized nonextensive statistical mechanics. It is worth mentioning at this point that the present attempts at further generalization of BG statistical mechanics are to be understood on physical grounds, and by no means as informational quantities that can be freely introduced to deal with specific tasks, and which can in principle depend on as many free (or fitting) parameters as one wishes.
Examples of such informational quantities are the Renyi entropy (depending on one parameter and usefully applied in the multifractal characterization of chaos), the Sharma-Mittal entropy (which contains both the Renyi entropy and S_q as particular cases), and very many others available in the literature. The precise criteria for an entropic form to be considered a possible physical entropy are not yet fully understood, although it is already clear that it must have a microscopic dynamical foundation. It seems however
reasonable to exclude, at this stage, those forms which, in contrast with S_BG and S_q, (i) are not concave (or convex), since this would seriously damage the capability for thermodynamical stability and for satisfactory thermalization of different systems put in thermodynamical contact, or (ii) are not stable, since this would imply that the associated quantity would not be robust under experimental measurements. These crucial points, and several others (probably equally important, such as the finite entropy production per unit time), have been disregarded by Luzzi et al [42] in their recent criticism of nonextensive statistical mechanics. Indeed, the Renyi entropy and the Sharma-Mittal entropy (which Luzzi et al mention, without any justification, at the same epistemological level as S_q) are neither concave nor stable for arbitrary values of their parameters. These and other information measures (most of them not concave and/or not stable) have been freely introduced, over various decades, as optimizing tools for specific tasks. They can certainly be useful for various purposes, which do not necessarily include the specific one we are addressing here: a thermodynamically meaningful generalization of the Boltzmann-Gibbs physical entropy.
Acknowledgments

We are indebted to T. Kodama, G. Wilk, I. Bediaga, E.G.D. Cohen, M. Baranger and J. Anjos for useful remarks that we have received over the years. One of us (CT) is grateful to M. Gell-Mann for many invaluable discussions on this subject. This work has been partially supported by PRONEX/MCT, CNPq, and FAPERJ (Brazilian agencies).
References
1. A. Einstein, Annalen der Physik 33, 1275 (1910) [Translation: Abraham Pais, Subtle is the Lord..., Oxford University Press, 1982].
2. C. Tsallis, J. Stat. Phys. 52, 479 (1988).
3. E.G.D. Cohen, Physica A 305, 19 (2002).
4. M. Baranger, Physica A 305, 27 (2002).
5. C. Beck, G. S. Lewis and H. L. Swinney, Phys. Rev. E 63, 035303 (2001); C. Beck, Phys. Rev. Lett. 87, 180601 (2001); C. Beck, Europhys. Lett. 57, 329 (2002).
6. T. Arimitsu and N. Arimitsu, Physica A 305, 218 (2002).
7. I. Bediaga, E. M. F. Curado and J. Miranda, Physica A 286, 156 (2000).
8. C. Beck, Physica A 286, 164 (2000). See also C. Beck, Physica D 171, 72 (2002).
9. M. Rybczynski, Z. Wlodarczyk and G. Wilk, Rapidity spectra analysis in terms of non-extensive statistic approach, preprint (2002) [hep-ph/0206157];
O.V. Utyuzh, G. Wilk and Z. Wlodarczyk, J. Phys. G 26, L39 (2000); G. Wilk and Z. Wlodarczyk, Phys. Rev. Lett. 84, 2770 (2000); G. Wilk and Z. Wlodarczyk, in Non Extensive Statistical Mechanics and Physical Applications, eds. G. Kaniadakis, M. Lissia and A. Rapisarda, Physica A 305, 227 (2002); G. Wilk and Z. Wlodarczyk, in Classical and Quantum Complexity and Nonextensive Thermodynamics, eds. P. Grigolini, C. Tsallis and B.J. West, Chaos, Solitons and Fractals 13, Number 3, 547 (Pergamon-Elsevier, Amsterdam, 2002); F.S. Navarra, O.V. Utyuzh, G. Wilk and Z. Wlodarczyk, N. Cimento C 24, 725 (2001); G. Wilk and Z. Wlodarczyk, in Proc. 6th International Workshop on Relativistic Aspects of Nuclear Physics (RANP2000, Tabatinga, Sao Paulo, Brazil, 17-20 October 2000); R. Korus, St. Mrowczynski, M. Rybczynski and Z. Wlodarczyk, Phys. Rev. C 64, 054908 (2001); G. Wilk and Z. Wlodarczyk, Traces of nonextensivity in particle physics due to fluctuations, ed. N. Antoniou (World Scientific, 2003), to appear [hep-ph/0210175].
10. C.E. Aguiar and T. Kodama, Nonextensive statistics and multiplicity distribution in hadronic collisions, Physica A (2003), in press.
11. D.B. Ion and M.L.D. Ion, Phys. Rev. Lett. 81, 5714 (1998); M.L.D. Ion and D.B. Ion, Phys. Rev. Lett. 83, 463 (1999); D.B. Ion and M.L.D. Ion, Phys. Rev. E 60, 5261 (1999); D.B. Ion and M.L.D. Ion, in Classical and Quantum Complexity and Nonextensive Thermodynamics, eds. P. Grigolini, C. Tsallis and B.J. West, Chaos, Solitons and Fractals 13, Number 3, 547 (Pergamon-Elsevier, Amsterdam, 2002); M.L.D. Ion and D.B. Ion, Phys. Lett. B 474, 395 (2000); M.L.D. Ion and D.B. Ion, Phys. Lett. B 482, 57 (2000); D.B. Ion and M.L.D. Ion, Phys. Lett. B 503, 263 (2001).
12. M. Coraddu, M. Lissia, G. Mezzorani and P. Quarati, Super-Kamiokande hep neutrino best fit: A possible signal of nonmaxwellian solar plasma, Physica A (2003), in press [hep-ph/0212054].
13. G. Kaniadakis, A. Lavagno and P. Quarati, Phys. Lett. B 369, 308 (1996); P. Quarati, A. Carbone, G. Gervino, G. Kaniadakis, A. Lavagno and E. Miraldi, Nucl. Phys. A 621, 345c (1997); G. Kaniadakis, A. Lavagno and P. Quarati, Astrophysics and Space Science 258, 145 (1998); G. Kaniadakis, A. Lavagno, M. Lissia and P. Quarati, in Proc. 5th International Workshop on Relativistic Aspects of Nuclear Physics (Rio de Janeiro, Brazil, 1997), eds. T. Kodama, C.E. Aguiar, S.B. Duarte, Y. Hama, G. Odyniec and H. Strobele (World Scientific, Singapore, 1998), p. 193; M. Coraddu, G. Kaniadakis, A. Lavagno, M. Lissia, G. Mezzorani and P. Quarati, in Nonextensive Statistical Mechanics and Thermodynamics, eds. S.R.A. Salinas and C. Tsallis, Braz. J. Phys. 29, 153 (1999); A. Lavagno and P. Quarati, Nucl. Phys. B, Proc. Suppl. 87, 209 (2000); C.M. Cossu, Neutrini solari e statistica di Tsallis, Master Thesis, Universita degli Studi di Cagliari (2000).
14. D.B. Walton and J. Rafelski, Phys. Rev. Lett. 84, 31 (2000).
15. C. Tsallis, J.C. Anjos and E.P. Borges, Fluxes of cosmic rays: A delicately balanced anomalous stationary state, astro-ph/0203258 (2002).
16. R. Hagedorn, N. Cim. 3, 147 (1965).
17. A.R. Plastino and A. Plastino, Phys. Lett. A 174, 384 (1993); J.-J. Aly and J.
Perez, Phys. Rev. E 60, 5185 (1999); A. Taruya and M. Sakagami, Physica A 307, 185 (2002); A. Taruya and M. Sakagami, Gravothermal catastrophe and Tsallis' generalized entropy of self-gravitating systems II. Thermodynamic properties of stellar polytrope, Physica A (2003), in press [cond-mat/0204315]; P.-H. Chavanis, Gravitational instability of isothermal and polytropic spheres, Astronomy and Astrophysics (2003), in press [astro-ph/0207080]; P.-H. Chavanis, Astro. and Astrophys. 386, 732 (2002).
18. A. Lavagno, G. Kaniadakis, M. Rego-Monteiro, P. Quarati and C. Tsallis, Astrophysical Letters and Communications 35, 449 (1998).
19. V.H. Hamity and D.E. Barraco, Phys. Rev. Lett. 76, 4664 (1996); V.H. Hamity and D.E. Barraco, Physica A 282, 203 (2000); L.P. Chimento, J. Math. Phys. 38, 2565 (1997); D.F. Torres, H. Vucetich and A. Plastino, Phys. Rev. Lett. 79, 1588 (1997) [Erratum: 80, 3889 (1998)]; U. Tirnakli and D.F. Torres, Physica A 268, 225 (1999); L.P. Chimento, F. Pennini and A. Plastino, Physica A 256, 197 (1998); L.P. Chimento, F. Pennini and A. Plastino, Phys. Lett. A 257, 275 (1999); D.F. Torres and H. Vucetich, Physica A 259, 397 (1998); D.F. Torres, Physica A 261, 512 (1998); H.P. de Oliveira, S.L. Sautu, I.D. Soares and E.V. Tonini, Phys. Rev. D 60, 121301 (1999); H.P. de Oliveira, I.D. Soares and E.V. Tonini, Physica A 295, 348 (2001); M.E. Pessah, D.F. Torres and H. Vucetich, Physica A 297, 164 (2001); M.E. Pessah and D.F. Torres, Physica A 297, 201 (2001); C. Hanyu and A. Habe, Astrophys. J. 554, 1268 (2001); E.V. Tonini, Caos e universalidade em modelos cosmologicos com pontos criticos centro-sela, Doctor Thesis (Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro, March 2002).
20. G.A. Tsekouras, A. Provata and C. Tsallis, Non-extensive properties of the cyclic lattice Lotka-Volterra model, in preparation (2003).
21. C. Anteneodo, C. Tsallis and A.S. Martinez, Europhys. Lett. 59, 635 (2002).
22. L. Borland, Phys. Rev. Lett. 89, 098701 (2002); Quantitative Finance 2, 415 (2002).
23. R. Osorio, L. Borland and C. Tsallis, in Nonextensive Entropy - Interdisciplinary Applications, M. Gell-Mann and C. Tsallis, eds. (Oxford University Press, 2003), in preparation; see also F. Michael and M.D. Johnson, Financial market dynamics, Physica A (2003), in press.
24. A. Upadhyaya, J.-P. Rieu, J.A. Glazier and Y. Sawada, Physica A 293, 549 (2001).
25. J.A.S. de Lima, R. Silva and A.R. Plastino, Phys. Rev. Lett. 86, 2938 (2001).
26. C. Tsallis, A.R. Plastino and W.-M. Zheng, Chaos, Solitons & Fractals 8, 885 (1997); U.M.S. Costa, M.L. Lyra, A.R. Plastino and C. Tsallis, Phys. Rev. E 56, 245 (1997); M.L. Lyra and C. Tsallis, Phys. Rev. Lett. 80, 53 (1998); U. Tirnakli, C. Tsallis and M.L. Lyra, Eur. Phys. J. B 11, 309 (1999); V. Latora, M. Baranger, A. Rapisarda, C. Tsallis, Phys. Lett. A 273, 97 (2000); F.A.B.F. de Moura, U. Tirnakli, M.L. Lyra, Phys. Rev. E 62, 6361 (2000); U. Tirnakli, G.F.J. Ananos, C. Tsallis, Phys. Lett. A 289, 51 (2001); H.P. de Oliveira, I.D. Soares and E.V. Tonini, Physica A 295, 348 (2001); F. Baldovin and A. Robledo, Europhys. Lett. 60, 518 (2002); F. Baldovin
and A. Robledo, Phys. Rev. E 66, 045104(R) (2002); E.P. Borges, C. Tsallis, G.F.J. Ananos and P.M.C. Oliveira, Phys. Rev. Lett. 89, 254103 (2002); U. Tirnakli, Physica A 305, 119 (2002); U. Tirnakli, Phys. Rev. E 66, 066212 (2002).
27. Y. Weinstein, S. Lloyd and C. Tsallis, Phys. Rev. Lett. 89, 214101 (2002).
28. S. Abe and A.K. Rajagopal, Physica A 289, 157 (2001); C. Tsallis, S. Lloyd and M. Baranger, Phys. Rev. A 63, 042104 (2001); C. Tsallis, P.W. Lamberti and D. Prato, Physica A 295, 158 (2001); F.C. Alcaraz and C. Tsallis, Phys. Lett. A 301, 105 (2002); C. Tsallis, D. Prato and C. Anteneodo, Eur. Phys. J. B 29, 605 (2002); J. Batle, A.R. Plastino, M. Casas and A. Plastino, Conditional q-entropies and quantum separability: A numerical exploration, quant-ph/0207129 (2002).
29. A.R. Plastino and A. Plastino, Physica A 222, 347 (1995); C. Tsallis and D.J. Bukman, Phys. Rev. E 54, R2197 (1996); C. Giordano, A.R. Plastino, M. Casas and A. Plastino, Eur. Phys. J. B 22, 361 (2001); A. Compte and D. Jou, J. Phys. A 29, 4321 (1996); A.R. Plastino, M. Casas and A. Plastino, Physica A 280, 289 (2000); M. Bologna, C. Tsallis and P. Grigolini, Phys. Rev. E 62, 2213 (2000); C. Tsallis and E.K. Lenzi, in Strange Kinetics, eds. R. Hilfer et al, Chem. Phys. 284, 341 (2002) [Erratum (2002)]; E.K. Lenzi, L.C. Malacarne, R.S. Mendes and I.T. Pedron, Anomalous diffusion, nonlinear fractional Fokker-Planck equation and solutions, cond-mat/0208332 (2002); E.K. Lenzi, C. Anteneodo and L. Borland, Phys. Rev. E 63, 051109 (2001); E.M.F. Curado and F.D. Nobre, Derivation of nonlinear Fokker-Planck equations by means of approximations to the master equation, Phys. Rev. E 67, 0211XX (2003), in press; C. Anteneodo and C. Tsallis, Multiplicative noise: A mechanism leading to nonextensive statistical mechanics, cond-mat/0205314 (2002).
30. C. Anteneodo and C. Tsallis, Phys. Rev. Lett. 80, 5313 (1998); V. Latora, A. Rapisarda and C. Tsallis, Phys. Rev. E 64, 056134 (2001); A. Campa, A. Giansanti and D. Moroni, in Non Extensive Statistical Mechanics and Physical Applications, eds. G. Kaniadakis, M. Lissia and A. Rapisarda, Physica A 305, 137 (2002); B.J.C. Cabral and C. Tsallis, Phys. Rev. E 66, 065101(R) (2002); E.P. Borges and C. Tsallis, in Non Extensive Statistical Mechanics and Physical Applications, eds. G. Kaniadakis, M. Lissia and A. Rapisarda, Physica A 305, 148 (2002); A. Campa, A. Giansanti, D. Moroni and C. Tsallis, Phys. Lett. A 286, 251 (2001); M.-C. Firpo and S. Ruffo, J. Phys. A 34, L511 (2001); C. Anteneodo and R.O. Vallejos, Phys. Rev. E 65, 016210 (2002); R.O. Vallejos and C. Anteneodo, Phys. Rev. E 66, 021110 (2002); M.A. Montemurro, F. Tamarit and C. Anteneodo, Aging in an infinite-range Hamiltonian of coupled rotators, Phys. Rev. E (2003), in press [cond-mat/0205355].
31. S. Abe and N. Suzuki, Phys. Rev. E 67, 016106 (2003).
32. E.M.F. Curado and C. Tsallis, J. Phys. A: Math. Gen. 24, L69 (1991) [Corrigenda: 24, 3187 (1991) and 25, 1019 (1992)]; C. Tsallis, R.S. Mendes and A.R. Plastino, Physica A 261, 534 (1998).
33. S.R.A. Salinas and C. Tsallis, eds., Nonextensive Statistical Mechanics and
Thermodynamics, Braz. J. Phys. 29, No. 1 (1999); S. Abe and Y. Okamoto, eds., Nonextensive Statistical Mechanics and its Applications, Series Lecture Notes in Physics (Springer-Verlag, Berlin, 2001); G. Kaniadakis, M. Lissia and A. Rapisarda, eds., Non Extensive Statistical Mechanics and Physical Applications, Physica A 305, No 1/2 (Elsevier, Amsterdam, 2002); M. Gell-Mann and C. Tsallis, eds., Nonextensive Entropy - Interdisciplinary Applications (Oxford University Press, 2003), in preparation; H.L. Swinney and C. Tsallis, eds., Anomalous Distributions, Nonlinear Dynamics, and Nonextensivity, Physica D (2003), in preparation. An updated bibliography can be found at the web site http://tsallis.cat.cbpf.br/biblio.htm
34. C. Beck and E.G.D. Cohen, Superstatistics, Physica A (2003), in press [cond-mat/0205097].
35. S. Abe, Phys. Rev. E 66, 046134 (2002).
36. C. Tsallis and A.M.C. Souza, Constructing a statistical mechanics for Beck-Cohen superstatistics, Phys. Rev. E 67, 0261XX (1 Feb 2003), in press [cond-mat/0206044].
37. A.M.C. Souza and C. Tsallis, Stability of the entropy for superstatistics, preprint (2003) [cond-mat/0301304].
38. C. Tsallis, G. Bemski and R.S. Mendes, Phys. Lett. 257, 93 (1999).
39. C. Beck, Generalized statistical mechanics of cosmic rays, preprint (2003) [cond-mat/0301354].
40. L.A. Anchordoqui and D.F. Torres, Phys. Lett. A 283, 319 (2001).
41. G. Wilk and Z. Wlodarczyk, Nucl. Phys. B (Proc. Suppl.) 75A, 191 (1999); G. Wilk and Z. Wlodarczyk, Nonexponential decays and nonextensivity, Phys. Lett. A 290, 55 (2001).
42. R. Luzzi, A.R. Vasconcellos and J.G. Ramos, On the question of the so-called "Nonextensive thermo-statistics", preprint (2002, IFGW-Unicamp internal report); Science 298, 1171 (2002).
Figure 1. Flux of cosmic rays as a function of their energy. See [15] for details.
Figure 2. Same as in Fig. 1, but the ordinate is now multiplied by (Energy)^3 (abscissa: Energy [eV]; a curve labelled "Boltzmann-Gibbs" is shown for comparison).
Figure 3. ⟨E⟩ as a function of the cutoff energy E_cutoff.
Figure 4. ⟨E²⟩ (black dashed curves) and M_2 ≡ ⟨E²⟩ − ⟨E⟩² (red continuous curves) as functions of the cutoff energy.
Figure 5. ⟨E³⟩ (black dashed curves) and M_3 ≡ ⟨E³⟩ − 3⟨E⟩⟨E²⟩ + 2⟨E⟩³ (red continuous curves) as functions of the cutoff energy. At vanishing E_cutoff, M_3 vanishes from below, i.e., with slightly negative values.
TRACES OF NONEXTENSIVITY IN PARTICLE PHYSICS DUE TO FLUCTUATIONS

G. WILK
The Andrzej Sołtan Institute for Nuclear Studies, Hoża 69, 00-689 Warsaw, Poland
E-mail: [email protected]
Z. WLODARCZYK
Institute of Physics, Świętokrzyska Academy, Konopnickiej 15, 25-405 Kielce, Poland
E-mail: [email protected]

We present a short review of traces of nonextensivity in particle physics due to fluctuations.
1. Introduction: connection of fluctuations and nonextensivity
Both the notion of fluctuations and that of nonextensivity are nowadays widely known, albeit mostly in fields of research only indirectly connected with particle physics. Both turn out to be very fruitful and interesting, and this is clearly demonstrated by all the other lectures given at this workshop (see especially [1,2]). This lecture will be devoted to the case in which the evident nonextensivity of some expressions originates in intrinsic fluctuations in the system under consideration (the origin of which is usually not yet fully understood)^a. The best introduction to this problem is provided by the observation that in some cosmic ray data (like the depth distribution of starting points of cascades in the Pamir lead chamber [13]) one observes clear deviations from the naively expected exponential distributions of some variables, which are evidently better fitted by power-like formulas:
dN/dh = const · exp(−h/λ)  →  dN/dh = const · [1 − (1 − q) h/λ]^{1/(1−q)}. (1)
Here N denotes the number of counts at depth h (cf. [13] for details). Whereas in [13] we proposed as an explanation possible fluctuations of the mean free path λ in Eq. (1), characterised by the relative variance ω = (⟨λ²⟩ − ⟨λ⟩²)/⟨λ⟩² ≃ 0.2, the same data were fitted by the power-like (Lévy-type) formula above keeping λ fixed and setting q = 1.3. In this way we have learned

^a Our encounter with this problem is presented in works [3,4,5,6,7,8,9,10,11,12].
about Tsallis statistics and Tsallis nonextensive entropy and distributions^b. By closer inspection of the above example we have been able to propose a new physical meaning of the nonextensivity parameter q, as a measure of intrinsic fluctuations in the system [4,5]. Fluctuations are therefore proposed as a new source of nonextensivity, which should be added [14] to the previously known sources listed in the literature (like long-range correlations, memory effects or fractal-like structure of the corresponding phase space [2]). To demonstrate this conjecture let us notice that for the q > 1 case, where ε ∈ (0, ∞), one can write a kind of Mellin transform (here α = 1/(q − 1)) [5]:

exp_q(−ε/λ_0) ≡ [1 − (1 − q) ε/λ_0]^{1/(1−q)} = ∫_0^∞ d(1/λ) exp(−ε/λ) f_{q>1}(1/λ), (2)
where f_{q>1}(1/λ) is given by the following gamma distribution:

f_{q>1}(1/λ) = [μ/Γ(α)] (μ/λ)^{α−1} exp(−μ/λ), (3)

with μ = αλ_0 and with mean value and variance in the form:

⟨1/λ⟩ = 1/λ_0,   ⟨(1/λ)²⟩ − ⟨1/λ⟩² = 1/(αλ_0²). (4)

For the q < 1 case ε is limited to ε ∈ [0, λ_0/(1 − q)]. Proceeding in the same way as before (but with α′ = −α = 1/(1 − q)) one gets:

exp_q(−ε/λ_0) = ∫_0^∞ d(1/λ) exp(−ε/λ) f_{q<1}(1/λ), (5)
where f_{q<1}(1/λ) is given by the same gamma distribution as in (3), but this time with α → α′ and μ → μ(ε) = α′λ_0 − ε. Contrary to the q > 1 case, this time the fluctuations depend on the value of the variable in question, i.e., the mean value and variance are now both ε-dependent:

⟨1/λ⟩ = α′/(α′λ_0 − ε),   ⟨(1/λ)²⟩ − ⟨1/λ⟩² = α′/(α′λ_0 − ε)². (6)
^b See Tsallis' lecture and references therein (cf. also [5,11]) for the necessary detailed information concerning Tsallis statistics and nonextensivity.
However, in both cases the relative variances,

ω = (⟨(1/λ)²⟩ − ⟨1/λ⟩²)/⟨1/λ⟩² = 1/α = q − 1 (q > 1);  ω = 1/α′ = 1 − q (q < 1), (7)

remain ε-independent and depend only on the parameter q. It means therefore that [4,5] (at least for fluctuations distributed according to a gamma distribution)

L = exp(−ε/λ_0)  ⟹  L_q = exp_q(−ε/λ_0) = ⟨exp(−ε/λ)⟩, (8)
with q = 1 ± ω for q > 1 (+) and q < 1 (−), i.e., there is a connection between the measure of fluctuations ω and the measure of nonextensivity q (it has been confirmed recently in [14]).
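Equation (8) can be checked numerically. The following minimal Monte Carlo sketch (all parameter values are illustrative assumptions, not taken from the text) draws 1/λ from the gamma distribution (3) and compares the gamma-smeared exponential with the q-exponential:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative (assumed) values: q = 1.2, lambda_0 = 1, test point eps = 2
q, lam0, eps = 1.2, 1.0, 2.0
alpha = 1.0 / (q - 1.0)          # shape of the gamma distribution (3)
theta = 1.0 / (alpha * lam0)     # scale, so that <1/lambda> = 1/lambda_0

inv_lam = rng.gamma(alpha, theta, size=1_000_000)   # fluctuating 1/lambda

lhs = np.exp(-eps * inv_lam).mean()                          # <exp(-eps/lambda)>
rhs = (1.0 + (q - 1.0) * eps / lam0) ** (-1.0 / (q - 1.0))   # exp_q(-eps/lambda_0)

print(lhs, rhs)   # the two sides of Eq. (8) agree to Monte Carlo accuracy
```

The agreement is exact in the infinite-sample limit: the integral in Eq. (2) is the moment generating function of the gamma distribution.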
2. Where are the fluctuations coming from?

2.1 Generalities
The proposed interpretation of the parameter q leads immediately to the following question: why and under what circumstances is it the gamma distribution that describes fluctuations of the parameter λ? To address this question let us write the usual Langevin equation for the stochastic variable λ [4,5]:
dλ/dt + [1/τ + ξ(t)] λ = φ = const > 0, (9)

with damping constant τ and with source term φ, different for the two cases considered, namely:

φ_{q>1} = λ_0/τ  whereas  φ_{q<1} = (λ_0 − ε/α′)/τ. (10)
For the usual stochastic processes defined by the white gaussian noise form of ξ(t)^d one obtains the following Fokker-Planck equation for the distribution

^c Notice that, with increasing α or α′ (i.e., for q → 1), both variances (7) decrease and asymptotically the gamma distribution (3) becomes a delta function, f(1/λ) = δ(λ − λ_0).
^d It means that the ensemble mean ⟨ξ(t)⟩ = 0 and the correlator (for sufficiently fast changes) ⟨ξ(t)ξ(t + Δt)⟩ = 2Dδ(Δt). The constants τ and D define, respectively, the mean time for changes and their variance by means of the following conditions: ⟨λ(t)⟩ = λ_0 exp(−t/τ) and ⟨λ²(t = ∞)⟩ = Dτ. Thermodynamical equilibrium is assumed here (i.e., t >> τ, in which case the influence of the initial condition vanishes and the mean squared value of λ corresponds to the state of equilibrium).
function of the variable λ [4,5]:

∂f(λ, t)/∂t = −∂[K_1(λ) f]/∂λ + (1/2) ∂²[K_2(λ) f]/∂λ², (11)

where the intensity coefficients K_{1,2} are defined by Eq. (9) and are equal to

K_1(λ) = φ − λ/τ + Dλ  and  K_2(λ) = 2Dλ². (12)

From it we get the following expression for the stationary distribution function of the variable λ:

f(1/λ) = c (μ/λ)^{α_q − 1} exp(−μ/λ), (13)

which is, indeed, a gamma distribution (3) in the variable 1/λ, with the constant c defined by the normalization condition, ∫ d(1/λ) f(1/λ) = 1, and depending on two parameters: μ = φ_q/D and α_q = 1/(Dτ), with φ_q = φ_{q>1} or φ_{q<1} for q > 1 and q < 1, respectively. This means that we have obtained Eqs. (7) with ω = Dτ and, therefore, the parameter of nonextensivity q is given by the parameter D and by the damping constant τ describing the white noise.
2.2 Temperature fluctuations
The above discussion rests on the stochastic equation (9). Therefore the previously asked question is not yet fully answered, but it can be rephrased in the following way: can one point out a possible physical situation where such fluctuations could be present in the realm of particle physics? Our proposition is to identify λ = T, i.e., to concentrate on the possibility of fluctuations of temperature, widely discussed in the literature in different contexts 15. In all cases of interest to us the temperature T is the variable encountered in statistical descriptions of multiparticle collision processes 16. Our reasoning goes then as follows. Suppose that we have a thermodynamic system, in a small (mentally separated) part of which the temperature fluctuates with ΔT ∼ T. Let X(t) describe the stochastic changes of the temperature in time. If the mean temperature of the system is ⟨T⟩ = T₀ then, as a result of fluctuations in some small selected region, the actual temperature equals T′ = T₀ − τξ(t)T, and the inevitable exchange of heat between this selected region and the rest of the system leads to the process of equilibration of the temperature, which is described by the following equation of the type of eq. (9) 17:

dT/dt − (1/τ)(T′ − T) = 0

(here φ = φ_{q>1} = T₀/τ), i.e., it leads to q > 1 (cf. footnote e).
e) It should be noticed that in the case of q < 1 the temperature does not reach a stationary state because, cf. Eq. (6), ⟨1/T⟩ = 1/(T₀ − E/α′), whereas for q > 1 we had ⟨1/T⟩ = 1/T₀. As a consequence the corresponding Lévy distributions are defined only for E ∈ (0, T₀α′), because for E → T₀α′, ⟨T⟩ → 0. Such asymptotic (i.e., for t/τ → ∞) cooling of the system (T → 0) can also be deduced from Eq. (14) for E → T₀α′.
Figure 1. (a) Normal exponential p_T distributions (i.e., q = 1) for T = 200 MeV (black symbols) and T = 250 MeV (open symbols). (b) Typical event from central Pb+Pb collisions at E_beam = 3 A·TeV (cf. text for other details) for T = 200 MeV: q = 1 (black symbols, exponential dependence) and q = 1.05 (open symbols).
Here m is the mass of the produced particle and T is, for the q = 1 case, the temperature of the hadronic system produced. Although very small (|q − 1| ≃ 0.015), this deviation, if interpreted according to eq. (8), leads to quite large relative fluctuations of temperature existing in nuclear collisions, ΔT/T ≃ 0.12. It is important to stress that these are fluctuations existing in small parts of the hadronic system with respect to the whole system, rather than of the event-by-event type, for which ΔT/T = 0.06/√N → 0 for large N (cf. the relevant references). Such fluctuations are potentially very interesting because they provide a direct measure of the total heat capacity C of the system,

(ΔT/T)² = 1/C = q − 1   (16)

in terms of ω = q − 1. Therefore, measuring both the temperature of the reaction T and (via the nonextensivity q ≠ 1) its total heat capacity C, one can not only check whether an approximate thermodynamic state is formed in a single collision but also what its thermodynamical properties are (especially in what concerns the existence and type of the possible phase transitions 7). To observe such fluctuations an event-by-event analysis of data is needed 7. Two scenarios must be checked: (a) T is constant in each event but, because of different initial conditions, it fluctuates from event to event, and (b) T fluctuates in each event around some mean value T₀. Fig. 1 shows a typical
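The numbers quoted above follow directly from eq. (16); a minimal sketch of the arithmetic (q and the 0.06/√N scaling are taken from the text; N = 6000 is an illustrative central-region multiplicity):

```python
import math

# Relation (16): omega = q - 1 = (Delta T / T)^2 = 1/C.
q = 1.015                          # |q - 1| ~ 0.015 as quoted in the text
rel_T_fluct = math.sqrt(q - 1.0)   # Delta T / T ~ 0.12
heat_capacity = 1.0 / (q - 1.0)    # total heat capacity C of the small region

# Event-by-event fluctuations instead scale like 0.06/sqrt(N) -> 0 for large N.
N = 6000                           # illustrative central-region multiplicity
ebye_fluct = 0.06 / math.sqrt(N)
```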
event obtained in simulations performed for central Pb+Pb collisions taking place at beam energy E_beam = 3 A·TeV, in which the density of particles in the central region (defined by the rapidity window −1.5 < y < 1.5) equals 6000 (this is the usual value given by commonly used event generators 20). In case (a) in each event one expects an exponential dependence with T = T_event, and a possible departure from it would occur only after averaging over all events. It would reflect fluctuations originating from different initial conditions for each particular collision. This situation is illustrated in Fig. 1a, where p_T distributions for T = 200 MeV (black symbols) and T = 250 MeV (open symbols) are presented. Such values of T correspond to typical uncertainties in T expected at the LHC accelerator at CERN. Notice that both curves presented here are straight lines. In case (b) one should observe a departure from the exponential behaviour already on the single event level, and it should be fully given by q > 1. It reflects the situation when, due to some intrinsically dynamical reasons, different parts of a given event can have different temperatures 4,5. In Fig. 1b black symbols represent the exponential dependence obtained for T = 200 MeV (the same as in Fig. 1a); open symbols show the power-like dependence as given by (15) with the same T and with q = 1.05 (notice that the corresponding curve bends slightly upward here). In this typical event we have 18000 secondaries, i.e., practically the maximal possible number. Notice that points with the highest p_T correspond already to single particles. As one can see, experimental differentiation between these two scenarios will be very difficult, although not totally impossible. On the other hand, if successful it would be very rewarding, as we have stressed before.
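The upward bend of the q > 1 spectrum can be sketched in a few lines. The comparison below is our own illustration: it assumes for eq. (15) the usual Tsallis form f(m_T) ∝ [1 − (1 − q) m_T/T]^{1/(1−q)} and a pion mass m = 0.14 GeV, both of which are assumptions rather than quotations from the text:

```python
import math

# Illustrative comparison of an exponential m_T spectrum (q = 1) with the
# Tsallis (q-exponential) shape assumed for eq. (15); q = 1.05, T = 200 MeV.
m, T, q = 0.14, 0.200, 1.05            # GeV: pion mass, temperature

def exp_spectrum(pT):
    mT = math.sqrt(m * m + pT * pT)
    return math.exp(-mT / T)

def tsallis_spectrum(pT):
    mT = math.sqrt(m * m + pT * pT)
    return (1.0 - (1.0 - q) * mT / T) ** (1.0 / (1.0 - q))

# The ratio of the two shapes grows with p_T: the q > 1 curve bends upward.
ratios = [tsallis_spectrum(pT) / exp_spectrum(pT) for pT in (1.0, 2.0, 4.0)]
```

Even the small deviation q − 1 = 0.05 enhances the tail by more than two orders of magnitude at p_T = 4 GeV, which is exactly why only the highest-p_T points discriminate between the two scenarios.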
One should mention at this point that to the same category of fluctuating temperature also belongs the attempt 21 to fit the energy spectra in both the longitudinal and transverse momenta of particles produced in e+e− annihilation processes at high energies, the novel nonextensive formulation of the Hagedorn statistical model of the hadronization process 13,14 and the description of single particle spectra 8,12.
2.3 Nonexponential decays
Another hint of intrinsic fluctuations operating in a physical system could be the known phenomenon of nonexponential decays 9. Spontaneous decays of quantum-mechanical unstable systems cannot be described by the pure exponential law (neither for short nor for long times), and the survival time probability is P(t) ∝ t^{−δ} instead of an exponential one. It turns out that, by using the random matrix approach, such decays can emerge in a natural way from the possible fluctuations of the parameter γ = 1/τ in the exponential distribution
P(t) = exp(−γt). Namely, in the case of multichannel decays (with ν channels of equal widths involved) one gets fluctuating widths distributed according to the gamma function

f(γ) = [ν/(2⟨γ⟩)]^{ν/2} γ^{ν/2 − 1} exp[−νγ/(2⟨γ⟩)] / Γ(ν/2),   (17)

and the strength of their fluctuations is given by the relative variance ⟨(Δγ)²⟩/⟨γ⟩² = 2/ν, which decreases with increasing ν. According to 4, it means therefore that, indeed,

P(t) = [1 + (q − 1)⟨γ⟩t]^{−1/(q−1)},   (18)

with the nonextensivity parameter equal to q = 1 + 2/ν.
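That gamma-fluctuating widths produce a power-like survival law can be checked by direct averaging: ∫ dγ f(γ) exp(−γt) reproduces the q-exponential with q = 1 + 2/ν. A small numerical sketch (our own check; ν = 4 and ⟨γ⟩ = 1 are illustrative choices):

```python
import math

# Average exp(-gamma*t) over the gamma (chi^2_nu) width distribution (17)
# and compare with the q-exponential form (18), q = 1 + 2/nu.
nu, mean_gamma, t = 4, 1.0, 3.0
k = nu / 2.0                               # shape parameter of (17)

def width_pdf(g):                          # eq. (17) with <gamma> = mean_gamma
    r = k / mean_gamma                     # rate parameter
    return r ** k * g ** (k - 1.0) * math.exp(-r * g) / math.gamma(k)

# Numerical average of exp(-gamma*t) by the trapezoid rule.
n, g_max = 20000, 40.0
h = g_max / n
avg = sum(width_pdf(i * h) * math.exp(-i * h * t) * (h if 0 < i < n else h / 2)
          for i in range(n + 1))

q = 1.0 + 2.0 / nu                         # nonextensivity parameter of (18)
q_exp = (1.0 + (q - 1.0) * mean_gamma * t) ** (-1.0 / (q - 1.0))
```

For ν = 4 the closed form is (1 + t/2)^{−2}, and the numerical average agrees with it to the accuracy of the quadrature.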
3 Summary
There is steadily growing evidence that some peculiar features observed in particle and nuclear physics (including cosmic rays) can be most consistently explained in terms of suitable applications of the nonextensive statistics of Tsallis. Here we were able to show only some selected examples; more can be found in 5-11. However, there is also some resistance towards this idea, the best example of which is provided in 22. It is shown there that the mean multiplicity of neutral mesons produced in p-p collisions as a function of their mass (in the range from m_η = 0.55 GeV to m_Υ = 9.5 GeV) and the transverse mass m_T spectra of pions (in the range of m_T ≃ 1 - 15 GeV) both show a remarkable universal behaviour, following over 10 orders of magnitude the same power law function C x^{−P} (with x = m or x = m_T), with P ≃ 10.1 and P ≃ 9.6, respectively. In this work such a form was just postulated, whereas it emerges naturally in q-statistics with q = 1 + 1/P ≃ 1.1 (quite close to the results of 21). We regard it as a new, strong imprint of nonextensivity present in multiparticle production processes (the origin of which remains, however, yet to be discovered). This interpretation is additionally supported by the fact that in both cases considered in 22 the constant C is the same. Apparently there is no such phenomenon in A+A collisions, which has a simple and interesting explanation: in nuclear collisions the volume of the interaction is much bigger, which makes the heat capacity C also bigger. This in turn, cf. eq. (16), makes q smaller. One should then, indeed, expect that q_hadronic ≫ q_nuclear, as observed.
As a closing remark, let us point out an alternative way of getting nonextensive (i.e., with q ≠ 1) distributions for thermal models (cf. our remarks in 4 and the more recent ideas presented in 23). Notice that if we allow the temperature T to be energy dependent, i.e., T = T(E) = T₀ + a·(E − Ē) (with a = 1/C_V), then the usual equation for the probability P(E) that a system A (interacting with the heat bath A′ with temperature T) has energy E,

d ln[P(E)]/dE = −1/T,   i.e.,   P(E) ∝ exp(−E/T),   (19)

becomes

d ln P(E)/dE = −1/[T₀ + a(E − Ē)],   i.e.,   P(E) ∝ [1 + a(E − Ē)/T₀]^{−1/a},   (20)

with q = 1 + a. This approach could then find possible applications to studies of fluctuations on an event-by-event basis 24 (with all the reservations expressed in 25 accounted for).
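The step from (19) to (20) amounts to a single integration; the sketch below (our own check, with illustrative values T₀ = 1, a = 0.1 and Ē = 0) integrates d ln P/dE numerically and compares with the closed power-law form:

```python
import math

# Integrate d(lnP)/dE = -1/(T0 + a*(E - Ebar)) from 0 to E_final and compare
# with the closed form lnP = -(1/a)*ln(1 + a*(E - Ebar)/T0), i.e. eq. (20).
T0, a, Ebar = 1.0, 0.1, 0.0            # illustrative values
E_final, n = 5.0, 50000
h = E_final / n

lnP = 0.0
for i in range(n):                      # trapezoid rule for the integral
    E0, E1 = i * h, (i + 1) * h
    lnP -= 0.5 * h * (1.0 / (T0 + a * (E0 - Ebar)) + 1.0 / (T0 + a * (E1 - Ebar)))

q = 1.0 + a                             # nonextensivity parameter of (20)
lnP_closed = -(1.0 / a) * math.log(1.0 + a * (E_final - Ebar) / T0)
```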
Acknowledgments
GW would like to thank Prof. N.G. Antoniou and all the Organizers of the Xth International Workshop on Multiparticle Production "Correlations and Fluctuations in QCD" for financial support and kind hospitality.
References
1. In what concerns fluctuations see A. Bialas, these proceedings (and references therein).
2. In what concerns nonextensivity see C. Tsallis, these proceedings and references therein. In particular see Nonextensive Statistical Mechanics and its Applications, S. Abe and Y. Okamoto (Eds.), Lecture Notes in Physics LNP 560, Springer (2000).
3. G. Wilk and Z. Wlodarczyk, Nucl. Phys. B (Proc. Suppl.) A75 (1999) 191.
4. G. Wilk and Z. Wlodarczyk, Phys. Rev. Lett. 84 (2000) 2770.
5. G. Wilk and Z. Wlodarczyk, Chaos, Solitons and Fractals 13/3 (2001) 581.
6. O.V. Utyuzh, G. Wilk and Z. Wlodarczyk, J. Phys. G26 (2000) L39.
7. O.V. Utyuzh, G. Wilk and Z. Wlodarczyk, How to observe fluctuating temperature?, hep-ph/0103273.
8. F.S. Navarra, O.V. Utyuzh, G. Wilk and Z. Wlodarczyk, Nuovo Cim. 24C (2001) 725.
9. G. Wilk and Z. Wlodarczyk, Phys. Lett. A290 (2001) 55.
10. M. Rybczynski, Z. Wlodarczyk and G. Wilk, Nucl. Phys. (Proc. Suppl.) B97 (2001) 81.
11. G. Wilk and Z. Wlodarczyk, Physica A305 (2002) 227.
12. M. Rybczynski, Z. Wlodarczyk and G. Wilk, Rapidity spectra analysis in terms of non-extensive statistics approach, presented at the XII ISVHECRI, CERN, 15-20 July 2002; hep-ph/0206157.
13. G. Wilk and Z. Wlodarczyk, Phys. Rev. D50 (1994) 2318.
14. C. Beck, Physica A305 (2002) 209; Phys. Rev. Lett. 87 (2001) 180601 and Europhys. Lett. 57 (2002) 329.
15. L.D. Landau and I.M. Lifschitz, Course of Theoretical Physics: Statistical Physics, Pergamon Press, New York 1958. See also: L. Stodolsky, Phys. Rev. Lett. 75 (1995) 1044. For different aspects of T fluctuations see: T.C.P. Chui, D.R. Swanson, M.J. Adriaans, J.A. Nissen and J.A. Lipa, Phys. Rev. Lett. 69 (1992) 3005; C. Kittel, Physics Today 5 (1988) 93; B.B. Mandelbrot, Physics Today 1 (1989) 71; H.B. Prosper, Am. J. Phys. 61 (1993) 54; G.D.J. Phillies, Am. J. Phys. 52 (1984) 629. For particle physics aspects see: E.V. Shuryak, Phys. Lett. B423 (1998) 9 and S. Mrowczynski, Phys. Lett. B430 (1998) 9.
16. See, for example, proceedings of QM2001, eds. T.J. Hallman et al., Nucl. Phys. A698 (2002) and references therein.
17. L.D. Landau and I.M. Lifschitz, Course of Theoretical Physics: Hydrodynamics, Pergamon Press, New York 1958, or Course of Theoretical Physics: Mechanics of Continuous Media, Pergamon Press, Oxford 1981.
18. W.M. Alberico, A. Lavagno and P. Quarati, Eur. Phys. J. C12 (2000) 499 and Nucl. Phys. A680 (2001) 94c.
19. C. Beck, Physica A286 (2000) 164.
20. K.J. Eskola, see 16, p. 78.
21. I. Bediaga, E.M.F. Curado and J.M. de Miranda, Physica A286 (2000) 156.
22. M. Gazdzicki and M.I. Gorenstein, Phys. Lett. B517 (2001) 250.
23. M.P. Almeida, Physica A300 (2001) 424.
24. R. Korus, St. Mrowczynski, M. Rybczynski and Z. Wlodarczyk, Phys. Rev. C64 (2001) 054908.
25. M. Rybczynski, Z. Wlodarczyk and G. Wilk, Phys. Rev. C64 (2001) 027901.
CHAOS CRITERION AND INSTANTON TUNNELING IN QUANTUM FIELD THEORY
V.I. KUVSHINOV
Institute of Physics, Scorina 68, 220072 Minsk, Belarus.
Tel.: 375-172-84-16-28, fax: 375-172-84-08-79. E-mail: [email protected]
A.V. KUZMIN
Institute of Physics, Scorina 68, 220072 Minsk, Belarus.
E-mail: [email protected]
In the present work we discuss the possibility of introducing the notion of chaos for quantum fields and its possible manifestations. We show that classical chaos squeezes the dilute instanton gas. We propose a chaos criterion for quantum fields and sketch ways for its justification.
1. Introduction
The phenomenon of chaos attracts much attention in various fields of physics. Originally it was associated with problems of classical mechanics and statistical physics. The substantiation of statistical mechanics initiated intensive study of chaos and uncovered its basic properties, mainly in classical mechanics. One of the main results in this direction was the creation of KAM theory and the understanding of the phase space structure of Hamiltonian systems 1,2. It was clarified that the root of chaos is local instability of the dynamical system 3. Local instability leads to mixing of trajectories in phase space and thus to non-regular behavior of the system and chaos 4,5. A significant property of chaos is its prevalence in various natural phenomena, which explains the large number of works in this field. Large progress has been achieved in the understanding of chaos in the semi-classical regime of quantum mechanics via analysis of the spectral properties of the system, both numerically and theoretically 7,8. However, semi-classical restrictions are important, because a large number of energy levels in a small energy interval is needed to provide a certain statistics, or a small Planck's constant is needed to reduce path integrals to Gaussian form. Investigation of the stability of classical field solutions faces difficulties caused by the infinite number of degrees of freedom. That is why authors often restrict their consideration to the investigation of some model field configurations, mainly in gauge field theories (GFT) 10. A steady interest in chaos in gauge field theories is connected with the fact that chaotic solutions are inherent in them 11. There are also a lot of footprints of chaos in HEP 12 and in nuclear physics (energy spacing distributions) 13,14. There are papers devoted to chaos in quantum field theory 15. But there is no generally recognized definition of chaos for quantum fields 14. This fact restricts the use of chaos theory in the field of elementary particle physics. At the same time it is well known that the field equations of all four types of fundamental interactions have chaotic solutions, and high energy physics reveals the phenomenon of intermittency 17.
The aim of this work is to discuss the possibility of defining the notion of chaos for quantum field systems and to propose a chaos criterion for them. We sketch the way for its justification. On the example of a model quantum mechanical system, the enhancement of the instanton tunnelling rate in the presence of classical chaos is demonstrated. We discuss the possibility to observe an analogous phenomenon in GFTs as a possible manifestation of chaos.
This paper is structured as follows. Primarily we formulate the generalized Toda criterion of local instability (see also Refs. 18,19,20), which is needed for our further discussion and for the justification of the chaos criterion in quantum field theory. On particular examples of model systems of classical gauge fields we demonstrate its connection with well known chaos criteria based on KAM-theory and the notion of nonlinear resonance 21, and provide some numerical results. The correspondence with existing chaos criteria is not the single objective of this demonstration. Our aim is also to show that classical chaos is a phenomenon inherent in gauge field theories, which after quantization are the modern theories of fundamental particle interactions. On this way it is possible to take into account even some quantum properties of fields 22,23. Then we formulate the chaos criterion for quantum field systems and demonstrate its correspondence with the classical criterion of local instability (generalized Toda criterion) in the semiclassical limit of quantum mechanics (finite number of degrees of freedom) for the case of constant increments of local instability. At the end of the paper we discuss the problems arising on the way of its justification for quantum fields (infinite number of degrees of freedom). The influence of classical chaos on instanton tunneling is also discussed.
2. Generalized Toda criterion
The Toda criterion of local instability for classical mechanical systems was first formulated in Ref. 18. It was reformulated for Hamiltonian systems with two degrees of freedom by Salasnich 19. Local instability of a classical conservative Hamiltonian system with a finite number of degrees of freedom and finite (at finite energy) available phase space volume leads to mixing, destruction of the first integrals of motion and all that one calls chaos 4,8. Below we demonstrate the agreement between the Toda criterion and a criterion of classical chaos based on KAM theory and the conception of nonlinear resonance, and discuss the relations between these two methods. The general conclusion, in our opinion, is that the Toda criterion is rougher than a detailed consideration distinguishing individual resonances. It gives, in some sense, a coarse grained description of classical dynamics. However, in distinction from the detailed consideration, when the Toda criterion is applied the input needed for estimation of the dynamics from the viewpoint of chaos is essentially smaller, and systems with a large number of degrees of freedom can be considered. The Toda criterion for Hamiltonian systems with bounded motion and any finite number of degrees of freedom was derived in Ref. 20, where a system with a Hamiltonian of the following form was considered:

H = (1/2) p̄² + V(q̄),   p̄ = (p₁,...,p_N),   q̄ = (q₁,...,q_N),   N > 1.   (1)
Behavior of the classical system is locally unstable if the distance between two neighboring trajectories grows exponentially with time in some region of the phase space. The solution of the linearized Hamilton equations, valid in a small region near an arbitrary point of the phase space, has the form:

(δq̄(t), δp̄(t)) = Σ_{i=1}^{2N} C_i e^{λ_i t} (δq̄(0), δp̄(0)).   (2)

Here {C_i} is a full set of projectors and λ_i = λ_i(q̄) are eigenvalues of the stability matrix G:

G = ( 0   E )
    ( −C  0 ),   C_{ij} = ∂²V/∂q_i ∂q_j,   (3)

where E is the N×N unit matrix.
From (2) it is seen:
a) If there is i such that Re λ_i > 0, then the distance between neighboring trajectories grows exponentially with time and the motion is locally unstable. According to Liouville's theorem, stretching of the phase space flow in one direction (Re λ_i > 0) is accompanied by its compression in another direction (directions) in order to keep the phase space volume constant. That means the existence of Re λ_j < 0. Thus for local instability of motion we can demand the existence of Re λ_k ≠ 0.
b) If Re λ_i = 0 for all i, then there is no local instability and the motion is regular.
It is easy to see that G² = diag(−C, −C). Therefore if (−ξ_i), i = 1,...,N, are the eigenvalues of the matrix (−C), then

(−ξ_i) = λ_i²,   λ_i² = λ_{i+N}²,   i = 1,...,N.   (4)

Thus without loss of generality we can imply that Re λ_i ≥ 0. Notice that

ξ_i = −λ_i² = (Im λ_i)² − (Re λ_i)² − 2i Im λ_i Re λ_i,   i = 1,...,N.   (5)
Since the matrix C is real and symmetric, its eigenvalues {ξ_i}, i = 1,...,N, are real. Therefore any eigenvalue of the stability matrix G is real or pure imaginary or equals zero. Thus the generalized Toda criterion for classical Hamiltonian systems with any finite number of degrees of freedom can be formulated as follows:
a) If ξ_i ≥ 0 for all i = 1,...,N, then the behavior of the system is regular near the point q̄₀.
b) If there exists i = 1,...,N such that ξ_i < 0, then the behavior of the system is locally unstable near the point q̄₀.
If one of these conditions holds in some region of the configuration space, then the motion is stable or chaotic, respectively, in this region. These results for systems with two degrees of freedom coincide with the ones obtained in Ref. 19.
Now let us check the accordance between the Toda criterion of local instability and the methods based on KAM-theory 5. For this purpose we use a model system originating from classical gauge field theory, namely, the SU(2) spatially homogeneous model field system considered in Ref. 21. The derivation of its Hamiltonian and the motivations for its consideration can be found elsewhere 21,24 (see references therein). The Hamiltonian of the system has the form:

H = (1/2)(p₁² + p₂²) + (1/8) g²v² (q₁² + q₂²) + (1/2) g² q₁² q₂² sin²ξ.   (6)

Here g is the coupling constant of the gauge fields, v denotes the vacuum expectation value of the Higgs field, q_{1,2} are the modules of the gauge field vectors, p_{1,2} their time derivatives and ξ is some angle parameter (for more details see Refs. 21,24). This system has two degrees of freedom and the eigenvalues of the
matrix C, see (3), can be easily calculated in this case. They are:

ξ_{1,2} = (1/2)(V″₁₁ + V″₂₂) ± (1/2)√[(V″₁₁ + V″₂₂)² − 4(V″₁₁ V″₂₂ − (V″₁₂)²)],   (7)

with the following elements of the matrix C:

V″₁₁ = (1/4) g²v² + g² q₂² sin²ξ,   V″₂₂ = (1/4) g²v² + g² q₁² sin²ξ,   V″₁₂ = 2 g² q₁ q₂ sin²ξ.   (8)

If we use the following denotations borrowed from Ref. 19:

B = V″₁₁ + V″₂₂,   C = V″₁₁ V″₂₂ − (V″₁₂)²,   (9)
then the correspondence of the generalized Toda criterion in the case of two degrees of freedom with the results of Ref. 19 becomes obvious. Namely, if B > 0 (this is true for any parameter values of the system under consideration) and C > 0, then both eigenvalues (7) are larger than zero and we have regular behavior of the system. Otherwise, if B > 0 and C < 0, then one of the eigenvalues (7) is less than zero and, according to the Toda criterion, the motion is locally unstable (and chaotic). It is seen from the expressions (8)-(9) that at low energies the system moves near the minimum of the potential, where the parameter C is positive and thus the motion is stable. At large enough energies the parameter C becomes negative and the motion is unstable. While increasing the energy, the system may reach the region of unstable motion and, therefore, the order-to-chaos transition occurs 19. It can be shown (see Ref. 25) that for the system under consideration the critical energy of the order-chaos transition estimated using the Toda criterion equals:

E_c = 3g²v⁴ / (32 sin²ξ).   (10)

Now we derive the same quantity by means of another technique based on KAM-theory 5. It is convenient to rewrite the Hamiltonian (6) using action-angle variables:
Here ω = gv/2 and we get:

V = I₁ I₂ cos²θ₁ cos²θ₂.   (13)
Here H₀ describes the non-perturbed system and εV represents the perturbation. If we represent the potential of the perturbation as a sum of Fourier components and average them over time, then the non-resonance and non-constant components give a negligible contribution 8. Therefore the averaged potential of the perturbation has the form:

V ≈ (1/8) I₁ I₂ [2 + cos 2(θ₁ + θ₂)].   (14)
The Hamilton equations can be written in the following form:

ψ̇ = ∂H/∂I,   İ = −(εE²/(8ω²)) sin ψ.   (15)

Here E is the energy of the system (cf. footnote a). Equations (15) describe the behavior of the system determined by the resonance Fourier component of the perturbation. It is easy to see that equations (15) coincide with the equations describing the motion of a nonlinear oscillator. It is well known that the non-resonance Fourier components, which we have neglected, destroy the separatrix of the nonlinear oscillations and a stochastic layer appears 8. Thus we have to concentrate our attention on the dynamics near the separatrix of the nonlinear oscillations. In the vicinity of the separatrix, using equations (15), we build the map (16). Here we have changed the phase ψ by π; t is the discretization time interval and H_s = εE²/(8ω²). Local instability exists if the condition (17) holds 5. Using this condition it is easy to show that the critical value of the energy corresponding to the order-chaos transition E_c equals (18). For the estimation we accepted here t = 1/ω.

a) We assume that the perturbation is weak and does not change significantly the energy of the system.
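The Toda side of this comparison is easy to reproduce numerically. The following sketch (our own illustration, using the parameter values of Fig. 1: g = 1, v = 10, ξ = π/4) evaluates C of eq. (9) from the matrix elements (8), confirming that the neighbourhood of the potential minimum is a regular region while points far along the diagonal are locally unstable, and evaluates the critical energy estimate (10):

```python
import math

# Toda classification of the model (6): compute C of eq. (9) from the
# matrix elements (8), plus the critical energy estimate (10).
g, v, xi = 1.0, 10.0, math.pi / 4.0
s = math.sin(xi) ** 2

def toda_C(q1, q2):
    V11 = 0.25 * g**2 * v**2 + g**2 * q2**2 * s
    V22 = 0.25 * g**2 * v**2 + g**2 * q1**2 * s
    V12 = 2.0 * g**2 * q1 * q2 * s
    return V11 * V22 - V12**2          # C > 0: regular, C < 0: unstable

C_origin = toda_C(0.0, 0.0)            # near the potential minimum: regular
C_far = toda_C(10.0, 10.0)             # far out along the diagonal: unstable

E_c = 3.0 * g**2 * v**4 / (32.0 * s)   # eq. (10): Toda critical energy
```

For these parameter values E_c = 1875, the energy used for the Poincaré cross-section in Fig. 1.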
Figure 1. Poincaré cross-section made by the plane q₁ = 0. Parameter values are v = 10, ξ = π/4, g = 1. The energy of the system is fixed and equals the critical energy of the order-chaos transition estimated using the Toda criterion.
We have obtained the critical energy of the system corresponding to the order-chaos transition in two different ways and see that the expressions (10) and (18) are similar. They give the same dependence on the system parameters and a small difference in the numerical factor, originating from the estimative character of the previous calculations. It is seen that much less labor input is needed when the Toda criterion is used. For further clarification of the Toda criterion we built Poincaré cross-sections for particular values of the system parameters (see Fig. 1). The parameters are chosen to give a small value of ε (see (12)) in order for the methods of KAM-theory to be applicable, namely, ε = 0.04. In Fig. 1 the Poincaré cross-section for the energy given by (10) is shown. It is seen that the Toda criterion gives a good estimation of the critical energy of the order-chaos transition. We mean the following. The chaotic layer exists at smaller energies (rigorously speaking, at any non-zero energy), but it is narrower than the one shown in Fig. 1. By the order-chaos transition (with the rise of the energy of the system) we mean the process of melting of the regions of phase space with regular motion and growing of the regions with chaotic motion. The critical energy E_c tells us at what energy the chaotic regions begin to occupy a significant part of the available phase space. From this point of view the Toda criterion gives a good estimation of the characteristic energy scale at which chaotic behavior begins to dominate.
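A Poincaré section such as the one in Fig. 1 can be generated in a few dozen lines. The sketch below (our own illustration, not the generator used by the authors) integrates Hamilton's equations for the Hamiltonian (6) with a fixed-step RK4 scheme at the Toda critical energy (10) and records crossings of the plane q₁ = 0 with p₁ > 0; the relative energy drift serves as a sanity check on the integration:

```python
import math

# RK4 integration of Hamilton's equations for the Hamiltonian (6) and
# collection of Poincare points on the plane q1 = 0 (with p1 > 0).
g, v, s = 1.0, 10.0, math.sin(math.pi / 4.0) ** 2   # g, vev, sin^2(xi)

def H(st):
    q1, q2, p1, p2 = st
    return (0.5 * (p1**2 + p2**2) + g**2 * v**2 * (q1**2 + q2**2) / 8.0
            + 0.5 * g**2 * q1**2 * q2**2 * s)

def rhs(st):                                        # (dq1, dq2, dp1, dp2)/dt
    q1, q2, p1, p2 = st
    return (p1, p2,
            -(0.25 * g**2 * v**2 * q1 + g**2 * q1 * q2**2 * s),
            -(0.25 * g**2 * v**2 * q2 + g**2 * q2 * q1**2 * s))

def rk4_step(st, dt):
    k1 = rhs(st)
    k2 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(st, k1)))
    k3 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(st, k2)))
    k4 = rhs(tuple(x + dt * k for x, k in zip(st, k3)))
    return tuple(x + dt / 6.0 * (a + 2*b + 2*c + d)
                 for x, a, b, c, d in zip(st, k1, k2, k3, k4))

# Start on the section at the Toda critical energy E_c = 1875 (eq. (10)).
E_c, q2_0 = 1875.0, 5.0
p1_0 = math.sqrt(2.0 * (E_c - g**2 * v**2 * q2_0**2 / 8.0))
state = (0.0, q2_0, p1_0, 0.0)
E0, dt = H(state), 1e-3

section = []
for _ in range(120000):
    prev = state
    state = rk4_step(state, dt)
    if prev[0] < 0.0 <= state[0] and state[2] > 0.0:   # q1 crossed zero upward
        section.append((state[1], state[3]))           # record (q2, p2)

energy_drift = abs(H(state) - E0) / E0
```

Plotting the (q₂, p₂) pairs in `section` for an ensemble of initial conditions reproduces the mixed regular/chaotic structure seen in Fig. 1.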
Thus we conclude that the Toda criterion of local instability can serve for the investigation of the classical behavior of dynamical systems from the viewpoint of chaos. However, further investigations of its applicability bounds and accuracy are also needed.
3. Chaos criterion for quantum fields
In this section we discuss the possibility of introducing the notion of chaos for quantum fields and some of its consequences. Now we give some qualitative arguments which bring us to the formulation of a chaos criterion in quantum field theory (QFT). From statistical mechanics and ergodic theory it is known that chaos in classical systems is a consequence of the property of mixing 4,8. Mixing means rapid (exponential) decrease of the correlation function with time 8. In other words, if the correlation function exponentially decreases then the corresponding motion is chaotic; if it oscillates or is constant then the motion is regular 26. We extend a criterion of this type to quantum field systems. All stated below remains valid for quantum mechanics, since the mathematical description via path integrals is the same. For field systems the analogue of the classical correlation function is the two-point connected Green function:
G(x, y) = δ²W[J] / (δJ(x) δJ(y)) |_{J=0}.   (19)

Here W[J] is the generating functional of connected Green functions, J are the sources of the fields, and x, y are 4-vectors of space-time coordinates. We formulate the chaos criterion for quantum field theory in the following form:
a) If the two-point Green function (19) goes exponentially to zero when the distance between its arguments goes to infinity, then the system is chaotic.
b) If it oscillates or remains constant in this limit, then we have regular behavior of the quantum system (a decrease weaker than exponential is also allowed).
To check the agreement between the generalized Toda criterion and the formulated quantum chaos criterion in the framework of quantum mechanics (cf. footnote b), we shall calculate the two-point Green function in the semi-classical approximation for some quantum mechanical system. In the case of constant increments of local instability {λ_i} the two-point connected Green function (19) can be represented

b) Quantum mechanics is considered as QFT in 0 + 1 dimensions.
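The classical intuition behind this criterion is easy to demonstrate numerically. In the sketch below (our own illustration; the fully chaotic logistic map and a circle rotation serve as stand-ins for mixing and regular dynamics, respectively) the autocorrelation of an observable dies out for the chaotic map but keeps oscillating for the quasi-periodic one:

```python
import math

# Normalized autocorrelation of an observable along a trajectory.
def autocorr(series, lag):
    n = len(series) - lag
    mean = sum(series) / len(series)
    var = sum((x - mean) ** 2 for x in series) / len(series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean) for i in range(n)) / n
    return cov / var

N = 50000
# Chaotic (mixing): logistic map x -> 4x(1-x); correlations die out.
x, chaotic = 0.3141592, []
for _ in range(N):
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)

# Regular (no mixing): rigid rotation theta -> theta + omega mod 1.
omega, th, regular = 0.6180339887, 0.0, []
for _ in range(N):
    th = (th + omega) % 1.0
    regular.append(math.cos(2.0 * math.pi * th))

chaotic_acf = max(abs(autocorr(chaotic, k)) for k in range(1, 6))
regular_acf = max(abs(autocorr(regular, k)) for k in range(1, 51))
```

The chaotic autocorrelation is consistent with zero already at lag 1, while the quasi-periodic one returns close to 1 at suitable lags, mirroring cases a) and b) of the criterion.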
in the form:

G(t₁, t₂) = Σ_i D_i e^{−λ_i (t₂ − t₁)}.   (20)

From the expression (20) it is seen:
a) If the classical motion is locally unstable (chaotic), then according to the Toda criterion there is a real eigenvalue λ_i. Therefore the Green function (20) goes exponentially to zero for some i when (t₂ − t₁) → +∞. The opposite is also true: if the Green function (20) goes exponentially to zero under the condition (t₂ − t₁) → +∞ for some i, then there exists a real eigenvalue of the stability matrix and thus the classical motion is locally unstable.
b) If all eigenvalues of the stability matrix G are pure imaginary, which corresponds to classically stable motion, then in the limit (t₂ − t₁) → +∞ the Green function (20) oscillates as a sine. The opposite is also true: if for any i the Green functions oscillate in the limit (t₂ − t₁) → +∞, then {λ_i} are pure imaginary for any i and the classical motion is stable and regular.
Thus we have demonstrated, for any finite number of degrees of freedom, that the proposed quantum chaos criterion coincides with the Toda criterion in the semi-classical limit of the corresponding quantum mechanical system if the increments of local instability are constant (correspondence principle). In the case of non-constant λ's the calculation of the two-point connected Green function in the whole range of variation of its arguments is a problem several orders more complicated than in the case of constant ones. The condition for the Green function to be finite in the limit of infinite distance between its arguments forced us to eliminate the exponentially growing item from the expression (20). However, this is not so in the general case, when we can consider the increments of instability as constants just in a small region around the considered point of configuration space. Therefore we cannot demand the elimination of the exponentially growing item, and the expression for the two-point connected Green function valid in a sufficiently small region of configuration phase space is:

G(t₁, t₂) = Σ_i [D_i^{(1)} e^{λ_i (t₂ − t₁)} + D_i^{(2)} e^{−λ_i (t₂ − t₁)}],   (21)

where D_i^{(1)}, D_i^{(2)} are arbitrary constants and t₁ − t₂ is assumed to be sufficiently small. Thus for non-constant λ's we can describe the local behavior of the Green function, but we are not able to predict its global behavior, which is needed for the proposed chaos criterion to be applied. Another problem we have faced is that up to now we have considered quantum mechanical systems, which have a finite number of degrees of freedom. But
our final aim is to deal with chaos in quantum fields, which possess an infinite number of degrees of freedom. What happens if N goes to infinity when the classical increments of local instability are non-constant is not clear yet. It seems possible that taking pure quantum effects into account, beyond the semiclassical approximation, is needed.

4. Chaos and instanton tunneling
In Sec. 2 we used a model system of gauge fields to demonstrate the applicability of the Toda criterion for the description of the chaotic properties of the system. This system was not chosen accidentally. It is well known that chaotic behavior is an inherent property of classical gauge field theories (see the discussion in Sec. 1). On the other hand, it is also a well established fact that classical chaos strongly influences quantum processes such as quantum mechanical tunneling 28. The question is whether there is any influence of the chaotic behavior of classical gauge fields on quantum tunneling between degenerate vacua in quantum field theories such as QCD. Tunneling is described by instantons in this case 29. This language can also be used for the description of quantum mechanical tunneling (for a recent example see Ref. 30). We demonstrated, on the example of a model one-dimensional quantum mechanical system with a periodic potential perturbed by a periodic in time perturbation, that classical chaos squeezes the dilute instanton gas and chaotic instanton solutions appear 31. The question to be answered is whether this mechanism works in field theory (QCD), where the search for instanton induced events is one of the main problems 32.

5. Conclusion
In the present paper we provided further analytical and numerical justification of the Toda criterion of local instability. It was used to check the accordance between the proposed chaos criterion for quantum fields and the existing classical criteria when the increments of local instability are constant 20. The general conclusion is that the Toda criterion does not give a detailed description of phase space structures; however, it works well when estimations of the chaotic properties of the system are needed. The Toda criterion gives qualitatively good results with less labor input compared with the case when standard methods are used. The investigation of the accordance with existing classical chaos criteria made on a particular model system is not enough, and further investigations are needed.
We discussed problems that arose in the course of further justification of the proposed chaos criterion. In particular, the proposed chaos criterion is justified for quantum mechanical systems with an arbitrary (large) number of degrees of freedom when the classical increments of local instability are constant, whereas the case of non-constant increments has not been considered yet. The correspondence between localization in the phase space of a field system (on the lattice) and localization in space-time also has to be studied. Finally, the question of the influence of classical chaos on instanton tunneling in gauge theories was touched upon.
Acknowledgements

Discussions with R.G. Shulyakovsky are gratefully acknowledged.
References

1. A.N. Kolmogorov, Reports AS USSR 98, 527 (1954) (in Russian).
2. V.I. Arnold, Izvestiya Akad. Nauk SSSR 25, 21 (1961) (in Russian).
3. N.S. Krylov, Works on Substantiation of Statistical Physics (Izdatelstvo Akad. Nauk SSSR, Moscow-Leningrad, 1950) (in Russian).
4. A.J. Lichtenberg and M.A. Lieberman, Regular and Stochastic Motion (Springer-Verlag, New York, 1983).
5. G.M. Zaslavsky and R.Z. Sagdeev, Introduction to Nonlinear Physics (Nauka, Moscow, 1988) (in Russian).
6. T. Prosen and M. Robnik, J. Phys. A 27, 8059 (1994).
7. M.C. Gutzwiller, Chaos in Classical and Quantum Mechanics (Springer, New York, 1990).
8. G.M. Zaslavsky, Stochasticity of Dynamical Systems (Nauka, Moscow, 1984) (in Russian).
9. B. Li and M. Robnik, J. Phys. A 28, 4843 (1995).
10. T. Kawabe and S. Ohta, Phys. Rev. D 44, 1274 (1991).
11. T.S. Biro, S.G. Matinyan and B. Muller, Chaos and Gauge Field Theory (World Scientific, 1994).
12. H.B. Nielsen, H.H. Rugh and S.E. Rugh, E-print: chao-dyn/9605013.
13. A. Bohr and B. Mottelson, Nuclear Structure (Benjamin, New York, 1967).
14. V.E. Bunakov, in Proceedings of the XXXII Winter School of PNPI (PNPI Press, St. Petersburg, 1998), p. 5.
15. T.S. Biro, B. Muller and S.G. Matinyan, E-print: hep-th/0010134.
16. Deterministic Chaos in General Relativity, edited by Hobill et al., NATO ASI Series B: Physics, Vol. 332 (Plenum Press, New York, 1994).
17. E.A. De Wolf, I.M. Dremin and W. Kittel, Phys. Rep. 270, 1 (1996).
18. M. Toda, Phys. Lett. A 48, 335 (1974).
19. L. Salasnich, E-print: nucl-th/9707035.
20. V.I. Kuvshinov and A.V. Kuzmin, Phys. Lett. A 296, 82 (2002).
21. V.I. Kuvshinov and A.V. Kuzmin, Nonl. Phenom. in Complex Syst. 4, 64 (2001).
22. S.G. Matinyan and B. Muller, Phys. Rev. Lett. 78, 2515 (1997).
23. V.I. Kuvshinov and A.V. Kuzmin, J. Nonl. Math. Phys. 9, N4 (2002), to be published.
24. V.I. Kuvshinov and A.V. Kuzmin, Nonl. Phenom. in Complex Syst. 5, 204 (2002).
25. V.I. Kuvshinov and A.V. Kuzmin, Nonl. Phenom. in Complex Syst. 3, 299 (2000).
26. H.G. Schuster, Deterministic Chaos: An Introduction (Physik-Verlag, Weinheim, 1984).
27. M. Robnik, Open Sys. & Information Dyn. 4, 211 (1997).
28. M. Latka, P. Grigolini and B.J. West, Phys. Rev. A 50, 1071 (1994); O. Bohigas, S. Tomsovic and D. Ullmo, Phys. Rept. 223, 43 (1993).
29. A. Belavin, A. Polyakov, A. Schwartz and Yu. Tyupkin, Phys. Lett. B 59, 85 (1975); G. 't Hooft, Phys. Rev. Lett. 37, 8 (1976); Phys. Rev. D 14, 3432 (1976).
30. K.-I. Aoki, A. Horikoshi, M. Taniguchi and H. Terao, E-print: hep-th/9812050.
31. V.I. Kuvshinov, A.V. Kuzmin and R.G. Shulyakovsky, Acta Phys. Pol. B 33, 1721 (2002).
32. S. Moch, A. Ringwald and F. Schrempp, Nucl. Phys. B 507, 134 (1997); A. Ringwald and F. Schrempp, Phys. Lett. B 438, 217 (1998); J. Phys. G 25, 1297 (1999); Phys. Lett. B 495, 249 (1999); Comput. Phys. Commun. 132, 267 (2000); Phys. Lett. B 503, 331 (2001).
Session on Correlations and Fluctuations (Methods and Applications) Chairperson: M. Spyropoulou-Stassinaki
BRIEF INTRODUCTION TO WAVELETS
I.M. DREMIN

Lebedev Physical Institute, Leninsky pr. 53, Moscow 119991, Russia
E-mail: dremin@lpi.ru

Wavelets are widely used now for the analysis of the local scales (or frequencies) important in physical events, biological objects, natural phenomena etc. They provide unique information about time or scale evolution. In this review paper we intend to describe briefly what wavelets are, how to use them, when we need them, why they are preferred and where they have been applied. Therefore, after defining wavelets we proceed to the multiresolution analysis and the fast wavelet transform as a standard procedure for dealing with discrete wavelets, show which specific features of signals (functions) can be revealed by such an analysis but cannot be found by other methods (e.g., by Fourier expansion), and, finally, give some examples of practical applications.
1. Introduction

Wavelets have become a necessary mathematical tool in many investigations. They are used in those cases when the result of the analysis of a particular signal a should contain not only the list of its typical frequencies (scales) but also knowledge of the definite local coordinates where these properties are important, i.e., the size and location of its fluctuations. The wavelet basis is formed by using dilations and translations of a particular function defined on a finite interval. Its finiteness is crucial for the locality property of the wavelet analysis. Commonly used wavelets generate a complete orthonormal system of functions with a finite support. That is why, by changing the scale (dilations), they can distinguish the local characteristics of a signal at various scales, and by translations they cover the whole region in which it is studied. Due to the completeness of the system, they also allow for the inverse transformation to be properly done.

a The notion of a signal is used here for any ordered set of numerically recorded information about some processes, objects, functions etc. The signal can be a function of some coordinates, would it be the time, the space or any other (in general, n-dimensional) scale.
In the analysis of nonstationary signals, the locality property of wavelets gives a substantial advantage over the Fourier transform, which provides us only with knowledge of the global frequencies (scales) of the object under investigation, because the system of basic functions used (sine, cosine or imaginary exponential functions) is defined over an infinite interval. The literature devoted to wavelets is voluminous, and one can easily get a lot of references by sending the corresponding request to Internet web sites. Mathematical problems are treated in detail in many monographs (e.g., see 1, 2, 3, 4, 5). Introductory courses on wavelets can be found in the books 6, 7, 8, 9. Review papers adapted for physicists and practical users were published in the Physics-Uspekhi journal 10, 11. To make this review shorter, we omit all figures, referring the reader to the above papers (see the website www.ufn.ru). It has been proven that any function can be written as a superposition of wavelets, and there exists a numerically stable algorithm to compute the coefficients of such an expansion. Moreover, these coefficients completely characterize the function, and it is possible to reconstruct it in a numerically stable way from these coefficients. Because of their unique properties, wavelets have been used in functional analysis in mathematics, in studies of (multi)fractal properties, singularities and local oscillations of functions, for solving some differential equations, for the investigation of inhomogeneous processes involving widely different scales of interacting perturbations, for pattern recognition, for image and sound compression, for digital geometry processing, and for solving many problems of physics, biology, medicine, technology etc (see the recently published books 12, 13, 14, 15). This list is by no means exhaustive. Programs exploiting the wavelet transform are now widely used not only for scientific research but for commercial projects as well.
Some of them have even been described in books (e.g., see 16). At the same time, the direct transition from pure mathematics to computer programming and applications is non-trivial and often calls for an individual approach to the problem under investigation and for a specific choice of the wavelets used. Our main objective here is to describe in a suitable way the bridge that relates mathematical wavelet constructions to practical signal processing. It was the practical applications considered by A. Grossman and J. Morlet 17, 18 that led to the fast progress of wavelet theory related to the work of Y. Meyer, I. Daubechies et al. The main bulk of papers dealing with practical applications of wavelet analysis uses the so-called discrete wavelets, which will be our main concern
here. The discrete wavelets cannot be represented by analytical expressions (except for the simplest one) or by solutions of some differential equations; instead, they are given numerically as solutions of definite functional equations containing rescalings and translations. Moreover, in practical calculations their direct form is not even required, and only the numerical values of the coefficients of the functional equation are used. This very important procedure, called multiresolution analysis, gives rise to the multiscale local analysis of the signal and to fast numerical algorithms. Each scale contains an independent, non-overlapping set of information about the signal in the form of wavelet coefficients, which are determined from an iterative procedure called the fast wavelet transform. In combination, they provide its complete analysis and simplify the diagnosis of the underlying processes. After such an analysis has been done, one can compress (if necessary) the resulting data by omitting some inessential part of the encoded information. This is done with the help of the so-called quantization procedure, which commonly allocates different weights to the various wavelet coefficients obtained. In particular, it helps erase some statistical fluctuations and, therefore, increase the role of the dynamical features of a signal. This can, however, falsify the diagnosis if the compression is done inappropriately. Usually, accurate compression gives rise to a substantial reduction of the required computer storage memory and transmission facilities and, consequently, to a lower expenditure. The number of vanishing moments of the wavelets is important at this stage. Unfortunately, compression introduces unavoidable systematic errors. The mistakes one makes consist of multiples of the deleted wavelet coefficients, and, therefore, the regularity properties of a signal play an essential role. Reconstruction after such compression schemes is then no longer perfect.
These two objectives are clearly antagonistic. Nevertheless, when one tries to reconstruct the initial signal, the inverse transformation (synthesis) happens to be rather stable and reproduces its most important characteristics if proper methods are applied. The regularity properties of the wavelets used also become crucial at the reconstruction stage. The distortions of the reconstructed signal due to quantization can be kept small, even though significant compression ratios are attained. Since the part of the signal which is not reconstructed is often called noise, in essence what we are doing is denoising the signals. It is at this stage that the superiority of the discrete wavelets becomes especially clear. Thus, the objectives of signal processing consist in accurate analysis
with the help of the transform, effective coding, fast transmission and, finally, careful reconstruction (at the transmission destination point) of the initial signal. Sometimes the first stage of signal analysis and diagnosis is enough for the problem to be solved and the anticipated goals to be achieved. One should however stress that, even though this method is very powerful, the goals of wavelet analysis are rather modest. It helps us describe and reveal some features otherwise hidden in a signal, but it does not pretend to explain the underlying dynamics and physical origin, although it may give some crucial hints to them. Wavelets present a new stage in the optimization of this description, providing in many cases the best known representation of a signal. With the help of wavelets, we merely see things a little more clearly. To understand the dynamics, standard approaches introduce models assumed to be driving the mechanisms generating the observations. To define the optimality of the algorithms of the wavelet transform, some (still debatable!) energy and entropy criteria have been developed. They are internal to the algorithm itself. However, the choice of the best algorithm is also tied to the objective of its practical use, i.e., to some external criteria. That is why in practical applications one should submit the performance of a "theoretically optimal algorithm" to the judgement of experts and users to estimate its advantage over previously developed ones. Despite very active research and impressive results, the versatility of wavelet analysis implies that these studies are presumably not in their final form yet. We shall try to describe the situation in statu nascendi.
2. Wavelets for beginners

Each signal can be characterized by its averaged (over some intervals) values (trend) and by its variations around this trend. Let us call these variations fluctuations, independently of their nature, be it dynamic, stochastic, psychological, physiological or any other origin. When processing a signal, one is interested in its fluctuations at various scales, because from these one can learn about their origin. The goal of wavelet analysis is to provide tools for such processing. Actually, physicists dealing with experimental histograms analyze their data at different scales when averaging over intervals of different size. This is a particular example of the simplified wavelet analysis treated in this Section. To be more definite, let us consider the situation when an experimentalist measures some function f(x) within the interval 0 ≤ x ≤ 1, and the best
resolution obtained with the measuring device is limited by 1/16th of the whole interval. Thus the result consists of 16 numbers representing the mean values of f(x) in each of these bins and can be plotted as a 16-bin histogram. It can be represented by the following formula:

$$f(x) = \sum_{k=0}^{15} s_{4,k}\,\varphi_{4,k}(x), \qquad (1)$$
where $s_{4,k} = f(k/16)/4$, and $\varphi_{4,k}$ is defined as a steplike block of unit norm (i.e. of height 4 and width 1/16) different from zero only within the k-th bin. For an arbitrary j, one imposes the condition $\int dx\,|\varphi_{j,k}|^2 = 1$, where the integral is taken over the intervals of length $\Delta x_j = 1/2^j$, and, therefore, the $\varphi_{j,k}$ have the form $\varphi_{j,k}(x) = 2^{j/2}\varphi(2^j x - k)$, with $\varphi$ denoting a steplike function of unit height over such an interval. The label 4 is related to the total number ($2^4 = 16$) of such intervals in our example. At the next, coarser level the average over two neighboring bins is taken. Up to the normalization factor, we will denote it by $s_{3,k}$, and the difference between the two levels by $d_{3,k}$. To be more explicit, let us write down the normalized sums and differences for an arbitrary level j as

$$s_{j-1,k} = \frac{1}{\sqrt{2}}\,(s_{j,2k} + s_{j,2k+1}), \qquad d_{j-1,k} = \frac{1}{\sqrt{2}}\,(s_{j,2k} - s_{j,2k+1}), \qquad (2)$$
or, for the backward transform (synthesis),

$$s_{j,2k} = \frac{1}{\sqrt{2}}\,(s_{j-1,k} + d_{j-1,k}), \qquad s_{j,2k+1} = \frac{1}{\sqrt{2}}\,(s_{j-1,k} - d_{j-1,k}). \qquad (3)$$
Since, for the dyadic partition considered, this difference has opposite signs in the neighboring bins of the previous, finer level, we introduce the function $\psi$, which is equal to 1 and $-1$, correspondingly, in these bins, and the normalized functions $\psi_{j,k}(x) = 2^{j/2}\psi(2^j x - k)$. This allows us to represent the same function f(x) as

$$f(x) = \sum_{k=0}^{7} s_{3,k}\,\varphi_{3,k}(x) + \sum_{k=0}^{7} d_{3,k}\,\psi_{3,k}(x). \qquad (4)$$
One proceeds further in the same manner to the sparser levels 2, 1 and 0, with the averaging done over intervals of length 1/4, 1/2 and 1, correspondingly. The sparsest level, with the mean value of f over the whole interval denoted by $s_{0,0}$, provides

$$f(x) = s_{0,0}\,\varphi_{0,0}(x) + \sum_{j=0}^{3}\sum_{k=0}^{2^j-1} d_{j,k}\,\psi_{j,k}(x). \qquad (5)$$
The functions $\varphi_{j,k}(x)$ and $\psi_{j,k}(x)$ are norm-conserving, dilated and translated versions of the original $\varphi$ and $\psi$. In the next Section we will give explicit formulae for them in the particular case of Haar scaling functions and wavelets. In practical signal processing, these functions (and more sophisticated versions of them) are often called low-pass and high-pass filters, correspondingly, because they filter the large- and small-scale components of a signal. The subsequent terms in Eq. (5) show the fluctuations (differences $d_{j,k}$) at finer and finer levels with larger j. In all the cases (1)-(5) one needs exactly 16 coefficients to represent the function. In general, there are $2^j$ coefficients $s_{j,k}$ and $2^{j_n} - 2^j$ coefficients $d_{j,k}$, where $j_n$ denotes the finest resolution level (in the above example, $j_n = 4$). All the above representations of the function f(x) (Eqs. (1)-(5)) are mathematically equivalent. However, the last one, representing the wavelet-analyzed function, directly reveals the fluctuation structure of the signal at different scales j and various locations k present in the set of coefficients $d_{j,k}$, whereas the original form (1) hides the fluctuation patterns in the background of a general trend. In practical applications the wavelet representation is preferred because, for rather smooth functions strongly varying only at some discrete values of their arguments, many of the high-resolution d-coefficients in relations similar to Eq. (5) are close to zero (compared to the "informative" d-coefficients) and can be discarded. Bands of zeros (or values close to zero) indicate those regions where the function is fairly smooth. At first sight, this simplified example looks somewhat trivial. However, for more complicated functions and more data points, with some elaborate forms of wavelets, it leads to a detailed analysis of a signal and to possible strong compression with subsequent good-quality restoration.
This example also provides an illustration of the very important feature of the whole approach, with successive coarser and coarser approximations to f, called the multiresolution analysis and discussed in more detail below.
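The averaging/differencing scheme of Eqs. (2)-(5) is easy to try out in code. The following is a minimal illustrative sketch (our own, not taken from the text); the 16 sample values standing in for the bin means of the histogram are arbitrary test data:

```python
import math

def haar_analysis(s):
    """One step of Eq. (2): normalized sums and differences, level j -> j-1."""
    n = len(s) // 2
    sums = [(s[2 * k] + s[2 * k + 1]) / math.sqrt(2) for k in range(n)]
    diffs = [(s[2 * k] - s[2 * k + 1]) / math.sqrt(2) for k in range(n)]
    return sums, diffs

def haar_synthesis(sums, diffs):
    """Backward transform, Eq. (3): rebuild the finer level j from level j-1."""
    s = []
    for sk, dk in zip(sums, diffs):
        s.append((sk + dk) / math.sqrt(2))
        s.append((sk - dk) / math.sqrt(2))
    return s

# A 16-bin histogram: the finest level, j_n = 4 (arbitrary test values).
f = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0,
     5.0, 3.0, 5.0, 8.0, 9.0, 7.0, 9.0, 3.0]

# Decompose down to the single overall trend coefficient s_{0,0}.
s, details = f, []
while len(s) > 1:
    s, d = haar_analysis(s)
    details.append(d)

# 1 trend coefficient + (8 + 4 + 2 + 1) detail coefficients = 16 numbers,
# exactly as many as in the original histogram, as stated in the text.
print(len(s) + sum(len(d) for d in details))  # 16

# The synthesis steps restore the histogram (up to rounding).
r = s
for d in reversed(details):
    r = haar_synthesis(r, d)
print(max(abs(a - b) for a, b in zip(r, f)) < 1e-12)  # True
```

The decomposition merely reshuffles the same 16 numbers into one trend value plus fluctuations at four scales, which is the point of Eq. (5).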
3. Basic notions and Haar wavelets
To analyze any signal, one should, first of all, choose the corresponding basis, i.e., the set of functions to be considered as "functional coordinates". In most cases we will deal with signals represented by square integrable functions defined on the real axis. For nonstationary signals, e.g., the location of the moment when the frequency characteristics change abruptly is crucial. Therefore the basis should have a compact support, i.e., it should be defined on a finite region. The wavelets have this property. Nevertheless, with them it is possible to span the whole space by translations of the dilated versions of a definite function. That is why every signal can be decomposed into a wavelet series (or integral). Each frequency component is studied with a resolution matched to its scale. Let us try to construct functions satisfying the above criteria. An educated guess would be to relate the function $\varphi(x)$ to its dilated and translated version. The simplest linear relation with 2M coefficients is

$$\varphi(x) = \sqrt{2}\sum_{k=0}^{2M-1} h_k\,\varphi(2x - k), \qquad (6)$$
with the dyadic dilation 2 and integer translation k. At first sight, the chosen normalization of the coefficients $h_k$ with the "extracted" factor $\sqrt{2}$ looks somewhat arbitrary. Actually, it is defined a posteriori by the traditional form of the fast algorithms for their calculation (see Eqs. (20) and (21) below) and by the normalization of the functions $\varphi_{j,k}(x)$, $\psi_{j,k}(x)$. It is used in all the books cited above. For discrete values of the dilation and translation parameters one gets discrete wavelets. The value of the dilation factor determines the size of the cells in the lattice chosen. The integer M defines the number of coefficients and the length of the wavelet support. They are interrelated, because from the definition of $h_k$ for orthonormal bases,

$$h_k = \sqrt{2}\int dx\,\varphi(x)\,\varphi(2x - k), \qquad (7)$$

it follows that only finitely many $h_k$ are nonzero if $\varphi$ has a finite support. The normalization condition is chosen as

$$\int dx\,\varphi(x) = 1. \qquad (8)$$

The function $\varphi(x)$ obtained from the solution of this equation is called a scaling function b. If the scaling function is known, one can form a "mother wavelet" (or a basic wavelet) $\psi(x)$ according to

$$\psi(x) = \sqrt{2}\sum_{k=0}^{2M-1} g_k\,\varphi(2x - k), \qquad (9)$$

where

$$g_k = (-1)^k h_{2M-k-1}. \qquad (10)$$
The simplest example is M = 1 with the two non-zero coefficients $h_k$ equal to $1/\sqrt{2}$, i.e., the equation leading to the Haar scaling function $\varphi_H(x)$:

$$\varphi_H(x) = \varphi_H(2x) + \varphi_H(2x - 1). \qquad (11)$$

One easily gets the solution of this functional equation,

$$\varphi_H(x) = \theta(x)\,\theta(1 - x), \qquad (12)$$

where $\theta(x)$ is the Heaviside step function, equal to 1 at positive arguments and 0 at negative ones. The additional boundary condition is $\varphi_H(0) = 1$, $\varphi_H(1) = 0$. This condition is important for the simplicity of the whole procedure of computing the wavelet coefficients when two neighboring intervals are considered. The "mother wavelet" is

$$\psi_H(x) = \theta(x)\,\theta(1 - 2x) - \theta(2x - 1)\,\theta(1 - x), \qquad (13)$$
with boundary values defined as $\psi_H(0) = 1$, $\psi_H(1/2) = -1$, $\psi_H(1) = 0$. This is the Haar wavelet 19, known since 1910 and used in functional analysis. It is this example that was considered in the previous Section for the histogram decomposition. It is the first one of the family of compactly supported orthonormal wavelets $_M\psi$: $\psi_H = {}_1\psi$. It possesses the locality property since its support $2M - 1 = 1$ is compact. The dilated and translated versions of the scaling function $\varphi$ and the "mother wavelet" $\psi$,

$$\varphi_{j,k} = 2^{j/2}\varphi(2^j x - k), \qquad (14)$$

$$\psi_{j,k} = 2^{j/2}\psi(2^j x - k), \qquad (15)$$

b The scaling function is often also called a "father wavelet", but we will not use this term.
form an orthonormal basis, as can be checked (easily for the Haar wavelets) c. The Haar wavelet oscillates, so that

$$\int_{-\infty}^{\infty} dx\,\psi(x) = 0. \qquad (16)$$
This condition is common to all wavelets. It is called the oscillation or cancellation condition, and from it the origin of the name wavelet becomes clear. One can describe a "wavelet" as a function that oscillates within some interval like a wave but is then localized by damping outside this interval. This is a necessary condition for wavelets to form an unconditional (stable) basis. We conclude that for special choices of the coefficients $h_k$ one gets specific forms of "mother" wavelets, which give rise to orthonormal bases. The wavelet coefficients $s_{j,k}$ and $d_{j,k}$ can be calculated as

$$s_{j,k} = \int dx\,f(x)\,\varphi_{j,k}(x), \qquad (17)$$

$$d_{j,k} = \int dx\,f(x)\,\psi_{j,k}(x). \qquad (18)$$
However, in practice their values are determined from the fast wavelet transform described below. These coefficients are referred to as sums (s) and differences (d), thus related to mean values and fluctuations. If only the terms with d-coefficients in (5) are considered, the result is called the wavelet expansion. In the histogram interpretation, this procedure would imply that one is not interested in average values but only in the histogram shape, determined by the fluctuations at different scales.
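The Haar relations above lend themselves to a direct numerical check. The sketch below (our illustration, not part of the original text) implements $\theta$, $\varphi_H$ and $\psi_H$ and verifies the functional equation (11), the construction (9) with the Haar filter, and the cancellation condition (16), sampling at grid midpoints to stay away from the boundary points:

```python
def theta(x):
    """Heaviside step function: 1 for x >= 0, else 0 (boundary choice for this sketch)."""
    return 1.0 if x >= 0 else 0.0

def phi_H(x):
    """Haar scaling function, Eq. (12)."""
    return theta(x) * theta(1.0 - x)

def psi_H(x):
    """Haar mother wavelet, Eq. (13): +1 on the left half-bin, -1 on the right."""
    return theta(x) * theta(1.0 - 2.0 * x) - theta(2.0 * x - 1.0) * theta(1.0 - x)

N = 1024
xs = [(i + 0.5) / N for i in range(N)]  # midpoints of a fine grid on [0, 1]

# Functional equation (11): phi_H(x) = phi_H(2x) + phi_H(2x - 1)
print(all(phi_H(x) == phi_H(2 * x) + phi_H(2 * x - 1) for x in xs))  # True

# Eq. (9) with the Haar filter reduces to psi(x) = phi(2x) - phi(2x - 1)
print(all(psi_H(x) == phi_H(2 * x) - phi_H(2 * x - 1) for x in xs))  # True

# Cancellation condition (16) and orthogonality of phi_H and psi_H (midpoint rule)
print(sum(psi_H(x) for x in xs) / N)             # 0.0
print(sum(phi_H(x) * psi_H(x) for x in xs) / N)  # 0.0
```

The midpoint sampling is only a convenience of this sketch: it avoids the isolated boundary points where the step functions jump, so the checks hold exactly.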
4. Multiresolution analysis and Daubechies wavelets

Though the Haar wavelets provide a good tutorial example of an orthonormal basis, they suffer from several deficiencies. One of them is their bad analytic behavior, with an abrupt change at the interval bounds, i.e., their bad regularity properties. By this we mean that all finite-rank moments of the Haar wavelet are different from zero; only its zeroth moment, i.e., the integral (16) of the function itself, is zero. This shows that this wavelet is not orthogonal to any polynomial apart from a trivial constant.

c We return to the general case and therefore omit the index H, because the same formula will be used for other wavelets.

The Haar wavelet does not have good time-frequency localization: its Fourier transform decays like $|\omega|^{-1}$ for $\omega \to \infty$. The goal is to find a general class of functions which would satisfy the requirements of locality, regularity and oscillatory behavior. Note that in some particular cases the orthonormality property can be relaxed. The functions should be simple enough, in the sense of being sufficiently explicit and regular to be completely determined by their samples on the lattice defined by the factors $2^j$. The general approach which respects these properties is known as the multiresolution approximation. In practice it is applied to the problem of finding the coefficients of the filters $h_k$ and $g_k$, which can be obtained directly from the definition and properties of the discrete wavelets. These coefficients are defined by relations (6) and (9). The orthogonality of the scaling functions, of the wavelets to the scaling functions, of the wavelets to all polynomials up to the power (M - 1), and the normalization condition can be written as equations for $h_k$ which define them uniquely (see 11). In the case M = 2 they lead to the following values of the coefficients:
$$h_0 = \frac{1+\sqrt{3}}{4\sqrt{2}}, \quad h_1 = \frac{3+\sqrt{3}}{4\sqrt{2}}, \quad h_2 = \frac{3-\sqrt{3}}{4\sqrt{2}}, \quad h_3 = \frac{1-\sqrt{3}}{4\sqrt{2}}. \qquad (19)$$

These coefficients define the simplest wavelet, D4 (or $_2\psi$), from the famous family of orthonormal Daubechies wavelets ($D_{2M}$) with finite support. For filters of higher order in M, i.e., for higher-rank Daubechies wavelets, the coefficients can be obtained in the same manner. The wavelet support is equal to $2M - 1$. It is wider than for the Haar wavelets, but the regularity properties are better: the higher-order wavelets are smoother compared to D4.
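The standard published D4 filter values can be verified numerically. The following sketch (our illustration) checks the normalization, the orthonormality conditions, and the two vanishing moments of the high-pass filter obtained from Eq. (10):

```python
import math

s3 = math.sqrt(3.0)
# Daubechies D4 low-pass filter coefficients (standard values)
h = [(1 + s3) / (4 * math.sqrt(2)), (3 + s3) / (4 * math.sqrt(2)),
     (3 - s3) / (4 * math.sqrt(2)), (1 - s3) / (4 * math.sqrt(2))]

# Normalization: sum h_k = sqrt(2), which follows from Eq. (8) and Eq. (6)
print(abs(sum(h) - math.sqrt(2)) < 1e-12)        # True

# Orthonormality: sum h_k^2 = 1 and h_0 h_2 + h_1 h_3 = 0
print(abs(sum(c * c for c in h) - 1.0) < 1e-12)  # True
print(abs(h[0] * h[2] + h[1] * h[3]) < 1e-12)    # True

# High-pass coefficients, Eq. (10): g_k = (-1)^k h_{2M-k-1}, here M = 2
g = [(-1) ** k * h[3 - k] for k in range(4)]

# The wavelet is orthogonal to constants and to x: M = 2 vanishing moments
print(abs(sum(g)) < 1e-12)                                 # True
print(abs(sum(k * gk for k, gk in enumerate(g))) < 1e-12)  # True
```

The last two checks are exactly the "orthogonality to all polynomials up to the power (M - 1)" condition of the text, stated at the level of the filter coefficients.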
5. Fast wavelet transform
The fast wavelet transform allows one to carry out all the computations within a short time because it uses a simple iterative procedure. It is therefore crucial for all work with wavelets. The coefficients $s_{j,k}$ and $d_{j,k}$ carry information about the content of the signal at various scales and can be calculated directly using the formulas (17), (18). However, this algorithm is inconvenient for numerical computations, because it requires many ($\sim N^2$) operations, where N denotes the number of sampled values of the function. We will describe a faster algorithm. In practical calculations only the coefficients $h_k$ are used, without referring to the shapes of the wavelets.
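A minimal sketch of one such filtering step follows (our illustration, not the author's implementation; the periodic boundary handling is an assumption of this sketch, and real codes differ in how they treat the signal ends):

```python
import math

def fwt_step(s, h):
    """One level of the fast (pyramid) wavelet transform.
    Only the filter coefficients h_k enter; the wavelet shape itself is never used.
    Periodic (circular) boundary conditions are assumed in this sketch."""
    L = len(h)
    g = [(-1) ** k * h[L - 1 - k] for k in range(L)]  # Eq. (10)
    n = len(s) // 2
    sums = [sum(h[m] * s[(2 * k + m) % len(s)] for m in range(L)) for k in range(n)]
    diffs = [sum(g[m] * s[(2 * k + m) % len(s)] for m in range(L)) for k in range(n)]
    return sums, diffs  # each level costs O(len(s)) operations

# With the Haar filter this reproduces the normalized sums/differences of Sec. 2.
haar = [1.0 / math.sqrt(2), 1.0 / math.sqrt(2)]
sums, diffs = fwt_step([4.0, 2.0, 1.0, 3.0], haar)
```

Iterating the step over successively halved arrays visits about N + N/2 + N/4 + ... = 2N coefficients in total, which is the source of the O(N) operation count claimed below for the pyramid algorithm.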
In real situations with digitized signals, we have to deal with finite sets of points. Thus, there always exists a finest level of resolution where each interval contains only a single number. Correspondingly, the sums over k acquire finite limits. It is convenient to reverse the level indexation, assuming that the label of this finest scale is j = 0. It is then easy to compute the wavelet coefficients for the sparser resolutions j ≥ 1. Multiresolution analysis naturally leads to a hierarchical and fast scheme for the computation of the wavelet coefficients of a given function. In general, one gets the iterative formulas of the fast wavelet transform

$$s_{j+1,k} = \sum_m h_{m-2k}\,s_{j,m}, \qquad (20)$$

$$d_{j+1,k} = \sum_m g_{m-2k}\,s_{j,m}, \qquad (21)$$

where

$$s_{0,k} = \int dx\,f(x)\,\varphi(x - k). \qquad (22)$$

These equations yield fast algorithms (the so-called pyramid algorithms) for computing the wavelet coefficients, asking now just for O(N) operations to be done. Starting from $s_{0,k}$, one computes by iteration all the other coefficients, provided the coefficients $h_m$, $g_m$ are known. The explicit shape of the wavelet is not used in this case any more. The remaining problem lies in the initial data. If an explicit expression for f(x) is available, the coefficients $s_{0,k}$ may be evaluated directly according to (22). But this is not so in the situation when only discrete values are available. In the simplest approach they are then chosen as $s_{0,k} = f(k)$.

6. The Fourier and wavelet transforms
As has been stressed already, the wavelet transform is superior to the Fourier transform, first of all, due to the locality property of wavelets. The Fourier transform uses sine, cosine or imaginary exponential functions as the main basis; it is spread over the entire real axis, whereas the wavelet basis is localized. An attempt to overcome these difficulties and improve the time localization while still using the same basis functions is made by the so-called windowed Fourier transform, in which the signal f(t) is considered within some time interval (window) only. However, all the windows have the same width.
In contrast, the wavelets ψ automatically provide the time (or spatial location) resolution window adapted to the problem studied, i.e., to its essential frequencies (scales). Namely, let $t_0$, $\delta$ and $\omega_0$, $\delta_\omega$ be the centers and the effective widths of the wavelet basic function ψ(t) and of its Fourier transform, respectively. Then for the wavelet family $\psi_{j,k}(t)$ of Eq. (15) and, correspondingly, for the wavelet coefficients, the center and the width of the window along the t-axis are given by $2^j(t_0 + k)$ and $2^j\delta$. Along the ω-axis they are equal to $2^{-j}\omega_0$ and $2^{-j}\delta_\omega$. Thus the ratios of the widths to the center positions along each axis do not depend on the scale. This means that the wavelet window resolves both the location and the frequency in fixed proportions to their central values. For the high-frequency components of the signal this leads to a quite large frequency extension of the window, whereas the time-location interval is squeezed, so that the Heisenberg uncertainty relation is not violated. That is why wavelet windows can be called Heisenberg windows. Correspondingly, the low-frequency signals do not require small time intervals and admit a wide window extension along the time axis. Thus wavelets localize well the low-frequency "details" on the frequency axis and the high-frequency ones on the time axis. This ability of wavelets to find a perfect compromise between time localization and frequency localization, by automatically choosing the widths of the windows along the time and frequency axes well adjusted to their center locations, is crucial for their success in signal analysis. The wavelet transform cuts up the signal (functions, operators etc) into different frequency components and then studies each component with a resolution matched to its scale, providing a good tool for time-frequency (position-scale) localization. That is why wavelets can zoom in on singularities or transients (an extreme version of very short-lived high-frequency features!)
in signals, whereas the windowed Fourier functions cannot. In terms of traditional signal analysis, the filters associated with the windowed Fourier transform are constant-bandwidth filters, whereas the wavelets may be seen as constant relative-bandwidth filters whose widths in both variables depend linearly on their positions. The wavelet coefficients are negligible in the regions where the function is smooth. That is why wavelet series with plenty of non-zero coefficients represent really pathological functions, whereas "normal" functions have "sparse" or "lacunary" wavelet series and are easy to compress. On the other hand, the Fourier series of the usual functions have a lot of non-zero coefficients, whereas "lacunary" Fourier series represent pathological functions. Thus these two types of analysis can be considered as complementary rather than overlapping.
7. Technicalities
One can already start the signal analysis with above procedures. However, there are several technical problems which should be mentioned. At some length they are described in the cited monographs and review papers. 0
0
The number of possible wavelets at our disposal is much larger than the above examples show. Let us mention coiflets, splines, frames, wavelet packets etc. Usually, one chooses for the analysis the particular basis that yields the minimum entropy. Multiresolution analysis can be performed in more than one dimensions. In two dimensions, dilations of the resulting orthonormal wavelet basis control both variables simultaneously, and the two-dimensional wavelets are given by the following expression:
2%(2jz - k,2jy - Z),
0
j , k,Z E 2,
(23)
where @ is no longer a single function: on the contrary, it consists of three elementary wavelets. To get an orthonormal basis of Wo one has to use in this case three families cp(z - k)$(y - Z), $(z k)cp(y-Z), $ ( z - k ) $ ( y - Z ) . Then the two-dimensional wavelets are 2jcp( 2j2 -k)$(2jy -1) , 2j$(2jz -k)cp(2jy - 1) , 2j$( 2jz -k)+( 2jy 1). In the two-dimensional plane, the analysis is done along the horizontal, vertical and diagonal strips with the same resolution in accordance with these three wavelets. A set of geometrical objects is decomposed into two layers. The study of many operators acting on a space of functions or distributions becomes simple when suitable wavelets are used because these operators can be approximately diagonalized with respect to this basis. Orthonormal wavelet bases provide a unique example of a basis with non-trivial diagonal, or almost-diagonal, operators. The operator action on the wavelet series representing some function does not have uncontrollable sequences, i.e., wavelet decompositions are robust. One can describe precisely what happens to the initial series under the operator action and how it is transformed. In a certain sense, wavelets are stable under the operations of integration and differentiation. That is why wavelets, used as a basis set, allow us to solve differential equations characterized by widely different length scales found in many areas of physics and chemistry. Moreover, wavelets reappear as eigenfunctions of certain operators. The so-called non-standard matrix multiplication
is a useful procedure for dealing with operators. The analysis of any signal includes finding the regions of its regular and singular behavior. One of the main features of wavelet analysis is its capacity for doing a very precise local analysis of the regularity properties of functions. This allows us to investigate, characterize and easily distinguish some specific local behaviors such as approximate self-similarities and very strong oscillatory features. The two-microlocal analysis is used²⁰ to reveal the pointwise behavior of any function from the properties of its wavelet coefficients. The two-microlocal space $C^{s,s'}_{x_0}$ of the real-valued $n$-dimensional functions $f$ (distributions) is defined by the following simple decay condition on their wavelet coefficients $d_{j,k}$:

$|d_{j,k}(x_0)| \le C\, 2^{-(s+n/2)j} \left(1 + |2^j x_0 - k|\right)^{-s'}, \qquad (24)$
where s and s’ are two real numbers. This is a very important extension of the Holder conditions. The two-microlocal condition is a local counterpart of the usual uniform condition. It expresses the singular behavior of the function itself at the point 20 in terms of the k-dependence of its wavelet coefficients at the same point. In signal analysis , real-life applications produce only sequences of numbers due to the discretization of continuous time signals. This procedure is called the sampling of analog signals. The behavior of wavelet coefficients across scales provides a good way of describing the regularity of functions whose samples coincide with the observations at a given resolution. Moreover, to save the computing time, one can use not a complete set of wavelet coefficients dj,k but only a part of them omitting small coefficients not exceeding some threshold value E . This standard estimation method is called estimation by coefficient thresholding.
8. Scaling
By scaling one usually implies the self-similarity of the analyzed object or event. In turbulence, e.g., this is revealed as "whorls inside whorls inside whorls...", which leads to the power-like behavior of some distributions. In particle physics we speak about "jets inside jets inside jets...". More generally, we say that some signals (objects) possess self-similar (fractal) properties. This means that by changing the scale one observes at a new scale features similar to those previously noticed at other scales. Since wavelet analysis just consists in studying the signal at various scales
by calculating the scalar product of the analyzing wavelet and the signal explored, it is well suited to revealing the fractal peculiarities. In terms of wavelet coefficients it implies that their higher moments behave in a power-like manner as the scale changes. Namely, let us consider the sum $Z_q$ of the q-th moments of the coefficients of the wavelet transform at various scales j,

$Z_q(j) = \sum_k |d_{j,k}|^q,$

where the sum is over the maxima of $|d_{j,k}|$. Then it was shown²¹,²² that for a fractal signal this sum should behave as

$Z_q(j) \propto 2^{j\tau(q)},$

i.e., $\log Z_q(j)$ depends linearly on j with slope $\tau(q)$. Thus the necessary condition for a signal to possess fractal properties is the linear dependence of $\log Z_q(j)$ on the level number j. If this requirement is fulfilled, the dependence of τ on q shows whether the signal is monofractal or multifractal. Monofractal signals are characterized by a single dimension and, therefore, by a linear dependence of τ on q, whereas multifractal ones are described by a set of such dimensions, i.e., by non-linear functions τ(q). Monofractal signals are homogeneous, in the sense that they have the same scaling properties throughout the entire signal. Multifractal signals, on the other hand, can be decomposed into many subsets characterized by different local dimensions, quantified by a weight function. The wavelet transform, if done with wavelets possessing the appropriate number of vanishing moments, removes the lowest polynomial trends that could cause the traditional box-counting techniques to fail in quantifying the local scaling of the signal. The function τ(q) can be considered as a scale-independent measure of the fractal signal. It can be further related to the Rényi dimensions, Hurst and Hölder exponents. The range of validity of the multifractal formalism for functions can be elucidated with the help of the two-microlocal methods generalized to the higher moments of wavelet coefficients. Thus, wavelet analysis goes far beyond the limits of the traditional analysis which uses the language of correlation functions (see, e.g., ²³) in approaching much deeper correlation levels.
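As a hedged illustration (not from the original text), the moment sums $Z_q(j)$ can be computed from Haar detail coefficients; for simplicity this sketch sums over all coefficients rather than only the modulus maxima used in the WTMM method, and the Brownian-walk test signal, seed and function names are illustrative assumptions:

```python
import numpy as np

def haar_detail_levels(x):
    """Orthonormal Haar detail coefficients d_{j,k}, finest level first."""
    x = np.asarray(x, dtype=float)
    levels = []
    while len(x) > 1:
        levels.append((x[0::2] - x[1::2]) / np.sqrt(2))
        x = (x[0::2] + x[1::2]) / np.sqrt(2)
    return levels

def log_moments(x, q):
    """log2 Z_q(j) = log2 sum_k |d_{j,k}|^q, one value per level j.
    (All coefficients are summed here, not only the modulus maxima.)"""
    return [np.log2(np.sum(np.abs(d) ** q)) for d in haar_detail_levels(x)]

# Brownian-like monofractal test signal (cumulative sum of white noise).
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(4096))
lz = log_moments(walk, q=2.0)
# For a fractal signal, log2 Z_q(j) should fall approximately on a
# straight line in j; its slope estimates tau(q).
j = np.arange(len(lz), dtype=float)
print(np.polyfit(j, lz, 1)[0])
```

For a monofractal signal the fitted slopes, taken for several values of q, would themselves depend linearly on q; curvature of τ(q) would indicate multifractality.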
9. Applications
Wavelets have become widely used in pure and applied science. Here we describe just two examples of wavelet application to the analysis of one- and two-dimensional objects (see ¹¹). The single-variable example is provided by the time variation of the pressure in an aircraft compressor. The analysis of this signal is motivated by the desire to find precursors of a very dangerous effect (stall + surge) in engines leading to their destruction. It turned out that the dispersion of the wavelet coefficients can serve as a precursor of this effect. Let us mention that a similar procedure has been quite successful in the analysis of other engines, of heartbeat intervals and in the diagnosis of disease. The two-dimensional wavelet analysis can be used for the recognition of object shapes. It has been applied, e.g., for pattern recognition of fingerprints (this helped save a lot of computer memory, in particular), and of erythrocytes and their classification. It was also used for the analysis of patterns in very high multiplicity events. Lead-lead collisions at 158 GeV/c with multiplicities exceeding 1000 charged particles were analyzed in the two-dimensional phase space, and wavelet coefficients for low scales j < 6 were omitted. Then the long-range images of events were obtained by the inverse transform. They showed some quite peculiar features of long-range correlations, in particular, a ring-like structure reminiscent of Cherenkov rings. Many other examples can be found in the cited literature and in Web sites.
10. Conclusions

The beauty of the mathematical construction of the wavelet transformation and its utility in practical applications attract researchers from both pure and applied science. Moreover, the commercial outcome of this research has become quite important. We have outlined a minor part of the activity in this field.
References
1. Meyer Y, Wavelets and Operators (Cambridge: Cambridge University Press, 1992)
2. Daubechies I, Ten Lectures on Wavelets (Philadelphia: SIAM, 1991)
3. Meyer Y, Coifman R, Wavelets, Calderón-Zygmund and Multilinear Operators (Cambridge: Cambridge University Press, 1997)
4. Meyer Y, Wavelets: Algorithms and Applications (Philadelphia: SIAM, 1993)
5. Progress in Wavelet Analysis and Applications (Eds Y Meyer, S Roques) (Gif-sur-Yvette: Editions Frontieres, 1993)
6. Chui C K, An Introduction to Wavelets (San Diego: Academic Press, 1992)
7. Hernandez E, Weiss G, A First Course on Wavelets (Boca Raton: CRC Press, 1997)
8. Kaiser G, A Friendly Guide to Wavelets (Boston: Birkhauser, 1994)
9. Wavelets: An Elementary Treatment of Theory and Applications, Ed Koornwinder T (Singapore: World Scientific, 1993)
10. Astafyeva N M, Physics-Uspekhi 39 1085 (1996)
11. Dremin I M, Ivanov O V, Nechitailo V A, Physics-Uspekhi 44 447 (2001)
12. Wavelets in Physics, Ed Van den Berg J C (Cambridge: Cambridge University Press, 1998)
13. Mallat S, A Wavelet Tour of Signal Processing (New York: Academic Press, 1998)
14. Erlebacher G, Hussaini M Y, Jameson L M, Wavelets: Theory and Applications (Oxford: Oxford University Press, 1996)
15. Wavelets in Medicine and Biology, Eds Aldroubi A, Unser M (Boca Raton, FL: CRC Press, 1994)
16. Carmona R, Hwang W-L, Torresani B, Practical Time-Frequency Analysis (San Diego: Academic Press, 1998)
17. Grossmann A, Morlet J, in Mathematics + Physics, Lectures on Recent Results Vol. 1 (Ed. L. Streit) (Singapore: World Scientific, 1985)
18. Morlet J, Arens G, Fourgeau E, Giard D, Geophysics 47 203, 222 (1982)
19. Haar A, Math Ann 69 331 (1910)
20. Jaffard S, Meyer Y, Memoirs of the American Mathematical Society 123 n587 (1996)
21. Muzy J F, Bacry E, Arneodo A, Phys Rev Lett 67 3515 (1991); Int J Bifurc Chaos 4 245 (1994)
22. Arneodo A, d'Aubenton-Carafa Y, Thermes C, Physica (Amsterdam) 96D 291 (1996)
23. De Wolf E A, Dremin I M, Kittel W, Phys Rep 270 1 (1996)
MULTIPARTICLE CORRELATIONS IN Q-SPACE
HANS C. EGGERS
Department of Physics, University of Stellenbosch, 7600 Stellenbosch, South Africa
THOMAS A. TRAINOR
CENPA 354290, University of Washington, Seattle, WA 98195, USA

We introduce Q-space, the tensor product of an index space with a primary space, to achieve a more general mathematical description of correlations in terms of q-tuples. Topics discussed include the decomposition of Q-space into a sum-variable (location) subspace 𝕊 plus an orthogonal difference-variable subspace 𝔻, and a systematisation of q-tuple size estimation in terms of p-norms. The "GHP sum" prescription for q-tuple size emerges naturally as the 2-norm of difference-space vectors. Maximum- and minimum-size prescriptions are found to be special cases of a continuum of p-sizes.
1. Correlations in P-space
Traditionally, particles emitted by high-energy hadronic or heavy ion collisions have been visualised in terms of a collection of points populating what we call primary space ℙ or P-space, with each particle i represented by a d-dimensional vector $\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{id})$. Examples of P-spaces are three-momentum, with $\mathbf{x}_i = (p_{ix}, p_{iy}, p_{iz})$, and rapidity-azimuth, with $\mathbf{x}_i = (y_i, \phi_i)$. Particle correlations can be studied either by binning ℙ with a suitable partition ("coarse-graining"), or by analysing distributions of relative distances between primary vectors $\mathbf{x}_i$ directly. In this contribution, we focus exclusively on the latter approach. Correlations are a matter of definition; typically they are deviations of joint q-particle distributions from a pre-defined null hypothesis or reference process¹ such as a uniform distribution or a q-fold convolution of the differential one-particle distribution. Under a "dilute-fluid" assumption that correlation strength decreases with particle cluster (q-tuple) size, low-order
q-tuples (particle pairs, triplets, quartets etc.) are usually studied as a first approximation, with higher-order q-tuples as perturbations. In general, one selects out of N particles all possible combinationsᵃ of q-tuples (q = 1, 2, 3, ...) for statistical analysis, using for example cumulants² as differential correlation measures. The characterisation of q-tuples is therefore fundamental to correlation analysis. The simplest properties of a given q-tuple are its location and size. Location can be defined, for example, as the centre of mass of the q particles,
$\bar{\mathbf{X}} = \frac{1}{q} \sum_{a=1}^{q} \mathbf{x}_a \,, \qquad (1)$
or as the location of any one of the particles. In its simplest incarnation, the mathematical realisation of size should be a nonnegative real number which reduces to zero whenever all q particles occupy the same position in ℙ: in other words, size should be a norm based on relative coordinates. Contrary to naive expectation, the best prescription for this is not immediately obvious. While a 2-tuple's size is clearly described in terms of the distance $|\mathbf{x}_i - \mathbf{x}_j|$, q-tuples of higher order permit a range of choices. In Ref.³ this problem was addressed in terms of different "topologies", summarised pictorially in Fig. 1 for some representative 4-tuples.
Figure 1. An event, shown as a collection of N points in the P-space (y, φ). Also shown are examples of three topologies used to quantify 4-tuple size.
For every topology, interpair distances can be combined in several ways to form different size estimators. The q(q-1)/2 interpair distances making

ᵃFor analytical manipulations, it is more convenient to consider all N!/(N-q)! ordered q-tuples rather than the N!/(N-q)!q! unordered ones.
up the GHP topology can, for example, be summed to yield the GHP sum size; or size can be based on the largest of these distances, resulting in the GHP max size estimate. Particle number within d-dimensional spheres centered on individual particles defines the Star max size, while the Snake integral seeks to quantify size in terms of a linear succession of distances between ordered points. Historically, the GHP max and Star max prescriptions were heuristic inventions within the correlation integral literature: the simplest case⁴ q=2 was extended in Refs.⁵,⁶ to the GHP max prescription,

$C_q^{(\mathrm{GHP})}(\ell) = \frac{1}{N^q}\, \{\text{No. of } q\text{-tuples } (i_1, \ldots, i_q) \text{ with all } |\mathbf{x}_{i_a} - \mathbf{x}_{i_b}| < \ell\} \,, \qquad (2)$

and in Refs.⁷,⁸,⁹ to the Star max,

$C_q^{(\mathrm{Star})}(\ell) = \frac{1}{N^q} \sum_{i=1}^{N} \left[ \sum_{j=1}^{N} \theta(\ell - |\mathbf{x}_i - \mathbf{x}_j|) \right]^{q-1} \qquad (3)$

(with θ(x) the Heaviside function), which can, on multiplying out the (q-1) inner sums and using $\prod_a \theta(\ell - |\mathbf{x}_i - \mathbf{x}_{j_a}|) = \theta(\ell - \max_a |\mathbf{x}_i - \mathbf{x}_{j_a}|)$, be made to exhibit the Star max prescription on the q-tuple $(i, j_1, \ldots, j_{q-1})$ explicitly,

$C_q^{(\mathrm{Star})}(\ell) = \frac{1}{N^q} \sum_{i=1}^{N} \sum_{j_1, \ldots, j_{q-1}} \theta\!\left(\ell - \max_a |\mathbf{x}_i - \mathbf{x}_{j_a}|\right) \,. \qquad (4)$
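As an illustration (not from the original text), the GHP correlation integral of Eq. (2) can be evaluated by brute force for small N; the toy point configuration, normalisation by $N^q$ over ordered tuples of distinct particles, and function names are illustrative assumptions:

```python
import itertools
import numpy as np

def ghp_count(points, q, ell):
    """C_q^(GHP)(ell), brute force: count ordered q-tuples of distinct
    particles whose q(q-1)/2 pairwise distances all stay below ell,
    normalised by N^q as in Eq. (2). O(N^q) -- a sketch only."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    hits = sum(
        1 for tup in itertools.permutations(range(n), q)
        if all(np.linalg.norm(pts[a] - pts[b]) < ell
               for a, b in itertools.combinations(tup, 2))
    )
    return hits / n ** q

# Toy configuration: three points on a line at 0, 1 and 10.
pts = np.array([[0.0], [1.0], [10.0]])
print(ghp_count(pts, q=2, ell=2.0))   # 2/9: only (0,1) and (1,0) qualify
```

Replacing the `all(...)` condition by a test on the maximum distance from a fixed centre particle would give the Star max variant of Eqs. (3)-(4).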
Factorial extensions were published in Ref.³. Note that, in summing indiscriminately over all q-tuples, the above correlation integrals implicitly assume that correlation structure is independent of location.

2. Q-space by example
The concept of Q-spaces is not new: in high-energy hadronic collisions, two-particle correlations have long been visualised in two-particle spaces¹⁰. Figure 2 illustrates by means of a simple example how a Q-space is constructed from P-space. A typical event in a one-dimensional (d=1) primary space is represented by the dots on the lines below the x₁ axis and to the left of the x₂ axis. Particle pairs are then represented by all possible dots in the q=2 space as shown by representative dashed lines.ᵇ

ᵇMaking up a "pair" from a particle with itself would result in a dot lying on the diagonal line. Such usage corresponds to the transition from factorial to ordinary statistics. We ignore associated issues in this contribution.
Each pair in this example is represented by a vector $\mathbf{X}_Q$ as shown. The location corresponds to the component vector $\mathbf{X}_S$ along the diagonal, and the 2-tuple size to the magnitude of $\mathbf{X}_D$, since $|\mathbf{X}_S| = |x_1 + x_2|/\sqrt{2} = \sqrt{2}\,|\bar{X}|$ and $|\mathbf{X}_D| = |x_1 - x_2|/\sqrt{2}$. This can be derived algebraically by using the Q-space representation $\mathbf{X}_Q = x_1 \mathbf{u}_1 + x_2 \mathbf{u}_2$ in terms of the index unit vectors $\mathbf{u}_{1,2}$, which are rotated to the basis vectors $\mathbf{h}_{1,2}$ shown in Fig. 2 to find $\mathbf{X}_S = \mathbf{h}_1 (\mathbf{h}_1 \cdot \mathbf{X}_Q)$ and $\mathbf{X}_D = \mathbf{h}_2 (\mathbf{h}_2 \cdot \mathbf{X}_Q)$.
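The q=2 sum/difference decomposition just described can be checked numerically; this is a sketch with arbitrarily chosen coordinates x₁ = 1, x₂ = 4 (illustrative assumptions):

```python
import numpy as np

# q=2, d=1: a pair with coordinates x1, x2 in the (u1, u2) index basis.
x1, x2 = 1.0, 4.0
XQ = np.array([x1, x2])
h1 = np.array([1.0, 1.0]) / np.sqrt(2)   # sum-space (diagonal) direction
h2 = np.array([1.0, -1.0]) / np.sqrt(2)  # difference-space direction
XS = h1 * (h1 @ XQ)                      # location component
XD = h2 * (h2 @ XQ)                      # size component
print(np.linalg.norm(XS))  # equals |x1 + x2|/sqrt(2)
print(np.linalg.norm(XD))  # equals |x1 - x2|/sqrt(2)
print(XS + XD)             # recovers XQ: the decomposition is unique
```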
Figure 2. Example of Q-space for q=2, d=1. The primary space event, represented on the margins, maps onto particle pairs in the Q-space square. Vector $\mathbf{X}_Q$, representing a particular pair, decomposes into a location vector $\mathbf{X}_S$ and a size vector $\mathbf{X}_D$. Unit vectors $\mathbf{u}_{1,2}$ are rotated to $\mathbf{h}_1$ and $\mathbf{h}_2$, which span sum space and difference space respectively.
Correspondingly, 3-tuples can be visualised for one-dimensional P-spaces as shown in Fig. 3. Here, one particular ordered triplet is shown in all its exchange-permutation incarnations, all of which represent the same unordered triplet; the associated symmetry is clearly visible when viewing the Q-space down the main diagonal as in Fig. 3(b). The set of unit vectors $(\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3)$ is transformed to

$\mathbf{h}_1 = (\mathbf{u}_1 + \mathbf{u}_2 + \mathbf{u}_3)/\sqrt{3} \,, \qquad (5)$
$\mathbf{h}_2 = (\mathbf{u}_1 - \mathbf{u}_2)/\sqrt{2} \,, \qquad (6)$
$\mathbf{h}_3 = (\mathbf{u}_1 + \mathbf{u}_2 - 2\mathbf{u}_3)/\sqrt{6} \,, \qquad (7)$

where $\mathbf{h}_1$ points along the main diagonal (shown as the dashed line), while $\mathbf{h}_2$ and $\mathbf{h}_3$ span the plane, indicated by the dotted lines in Fig. 3, which is normal to the main diagonal. This normal plane exemplifies difference space $\mathbb{D}_3$, within which only relative coordinates appear, while the main diagonal represents the sum space $\mathbb{S}_3$ measuring, once again, q-tuple location.
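Orthonormality of the basis (5)-(7) is easily verified numerically; a minimal sketch:

```python
import numpy as np

# Basis vectors (5)-(7) for q=3: h1 along the main diagonal, h2 and h3
# spanning the plane normal to it (the difference space D_3).
h1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
h2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
h3 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)
H = np.vstack([h1, h2, h3])
print(H @ H.T)  # the identity matrix: the set is orthonormal
```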
Figure 3. A 3-tuple (with d=1) showing (a) a 3-tuple point plus its five permutation-symmetric counterparts and the main diagonal (dashed line), and (b) the same system when viewed down the main diagonal. Also shown are the corresponding difference vectors $\mathbf{X}_D$. The dotted lines are coplanar with the difference space $\mathbb{D}_3$.
3. Formalism for Q-space

The general formalism for vectors in ℚ is now easily understood. The particle vectors $\mathbf{x}_i$, i = 1, ..., q of a q-tuple in d-dimensional primary space ℙ are combined into the Q-space vector

$\mathbf{X}_Q = \sum_{i=1}^{q} \mathbf{x}_i \mathbf{u}_i \,. \qquad (8)$

ℚ is spanned by unit vectors which are the product of unit vectors $\mathbf{u}_i$, i = 1, ..., q living in index-space 𝕀, and the basis vectors of ℙ (implicit in $\mathbf{x}_i$). The d-dimensional sum space $\mathbb{S}_q$ is spanned by

$\mathbf{h}_1 = \frac{1}{\sqrt{q}} \sum_{i=1}^{q} \mathbf{u}_i \qquad (9)$
and the basis vectors of ℙ. The corresponding index-space projection of $\mathbf{X}_Q$ onto sum space, $\mathbf{X}_S \equiv \mathbf{h}_1 (\mathbf{h}_1 \cdot \mathbf{X}_Q)$, is given by

$\mathbf{X}_S = \sqrt{q}\, \bar{\mathbf{X}}\, \mathbf{h}_1 \,, \qquad (10)$

where $\bar{\mathbf{X}}$ is the q-tuple CMS as before. Furthermore, the algebraic complement of $\mathbb{S}_q$ is spanned by the set of (q-1) orthonormal vectorsᶜ in 𝕀,

$\mathbf{h}_2 = (\mathbf{u}_1 - \mathbf{u}_2)/\sqrt{2} \,, \qquad (11)$
$\mathbf{h}_3 = (\mathbf{u}_1 + \mathbf{u}_2 - 2\mathbf{u}_3)/\sqrt{6} \,, \qquad (12)$
$\mathbf{h}_q = (\mathbf{u}_1 + \cdots + \mathbf{u}_{q-1} - (q-1)\mathbf{u}_q)/\sqrt{q(q-1)} \,, \qquad (13)$

ᶜAny basis set connected to this one by an orthogonal transformation will also do.
which, together with the 𝕊 basis, are used to define a basis for difference space $\mathbb{D}_q$. The index-space projection of $\mathbf{X}_Q$ onto $\mathbb{D}_q$ is given by

$\mathbf{X}_D \equiv \sum_{i=2}^{q} \mathbf{h}_i (\mathbf{h}_i \cdot \mathbf{X}_Q) \,. \qquad (14)$

Since every $\mathbf{X}_Q \in \mathbb{Q}$ has a unique decomposition into $\mathbf{X}_S$ and $\mathbf{X}_D$, ℚ is the direct sum of $\mathbb{S}_q$ and $\mathbb{D}_q$. The relationship between the different spaces can hence be summarised as

$\mathbb{I}_q \otimes \mathbb{P} = \mathbb{Q} = \mathbb{S}_q \oplus \mathbb{D}_q \,, \qquad (15)$

corresponding to the dimensionality relation $\dim(\mathbb{I}_q) \times \dim(\mathbb{P}) = \dim(\mathbb{Q}) = \dim(\mathbb{S}_q) + \dim(\mathbb{D}_q)$ or, explicitly, $q \times d = qd = d + (q-1)d$. Within this framework, the first and most obvious measure of size is the 2-norm of $\mathbf{X}_D$. Starting either with Eq. (14) or by subtraction, $\mathbf{X}_D = \mathbf{X}_Q - \mathbf{X}_S$, we find the suggestive form
$\mathbf{X}_D = \sum_{i=1}^{q} (\mathbf{x}_i - \bar{\mathbf{X}})\, \mathbf{u}_i \,, \qquad (16)$

which yields

$\|\mathbf{X}_D\|_2 = \left[ \sum_{i=1}^{q} |\mathbf{x}_i - \bar{\mathbf{X}}|^2 \right]^{1/2} = \left[ \frac{1}{q} \sum_{i<j} |\mathbf{x}_i - \mathbf{x}_j|^2 \right]^{1/2} \,, \qquad (17)$

which, apart from the prefactor, is exactly the GHP 2-sum or rms size measure. The GHP 2-sum thus appears to be the natural measure of size in Q-space.

4. Generalised p-sizes
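The identity underlying Eq. (17), namely that the squared 2-norm of the difference-space vector equals $\frac{1}{q}$ times the sum of squared interpair distances, can be checked numerically; the random q-tuple, seed and variable names below are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

# Numerical check: sum_i |x_i - Xbar|^2 == (1/q) * sum_{i<j} |x_i - x_j|^2.
rng = np.random.default_rng(1)
q, d = 4, 2
x = rng.standard_normal((q, d))          # a q-tuple in d-dimensional P-space
xbar = x.mean(axis=0)                    # q-tuple centre of mass, Eq. (1)
norm_XD_sq = np.sum((x - xbar) ** 2)     # |X_D|^2 via Eq. (16)
ghp_2sum_sq = sum(np.sum((x[i] - x[j]) ** 2)
                  for i, j in combinations(range(q), 2)) / q
print(norm_XD_sq, ghp_2sum_sq)           # equal up to rounding
```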
While the above result represents a strong endorsement of the Q-space approach to multiparticle correlations, there is more. Generalised p-norms are the extension of Eq. (17) to arbitrary p ≥ 1: given an m-dimensional vector $\mathbf{A} = (a_1, \ldots, a_m)$, the p-norm of $\mathbf{A}$ is defined by

$\|\mathbf{A}\|_p = \left[ \sum_{i=1}^{m} |a_i|^p \right]^{1/p} \,. \qquad (18)$
Special cases are the p=2 norm appearing in Eq. (17) and the "max" norm,

$\|\mathbf{A}\|_\infty = \lim_{p \to \infty} \|\mathbf{A}\|_p = \max_i |a_i| \,. \qquad (19)$

The condition p ≥ 1, required for $\|\mathbf{A}\|_p$ to satisfy the Minkowski inequality $\|\mathbf{A} + \mathbf{B}\|_p \le \|\mathbf{A}\|_p + \|\mathbf{B}\|_p$, can be relaxed to p ∈ ℝ if we do not insist on $\|\mathbf{A}\|_p$ being a norm and allow it to be merely a "size measure" or "p-size". We then also have negative-p sizes and in particular

$\lim_{p \to -\infty} \|\mathbf{A}\|_p = \min_i (|a_i|) \,, \qquad |a_i| \ne 0 \;\; \forall i \,. \qquad (20)$
Fig. 4 shows, as a simple example for a two-component vector $\mathbf{A} = (x_1, x_2)$, the set of isonorms satisfying $\|\mathbf{A}\|_p = 1 = [|x_1|^p + |x_2|^p]^{1/p}$, i.e. a set of curves showing which vectors $\mathbf{A}$ have the same "p-size". Only positive $x_1$ and $x_2$ are shown as $\|\mathbf{A}\|_p$ is reflection-symmetric about $x_1 = 0$ and $x_2 = 0$. The usual circle for constant 2-norm is complemented by the straight diagonal line (or, in the full plane, diamond shape) of p=1 and various shapes in between. Of particular interest are the max and min size measures shown as the solid line and dashed line respectively. Based on the above, we can define a "GHP sum p-size" as follows:

$S_{q,p}^{(\mathrm{GHP})} = \left[ \sum_{i<j} |\mathbf{x}_i - \mathbf{x}_j|^p \right]^{1/p} \,, \qquad (21)$

which for p ≥ 1 is a norm. Eq. (17) is seen to be the special case $S_{q,2}^{(\mathrm{GHP})} = \sqrt{q}\, \|\mathbf{X}_D\|_2$, while the GHP max definition is the special case p → ∞,

$S_{q,\infty}^{(\mathrm{GHP})} = \max_{i<j} |\mathbf{x}_i - \mathbf{x}_j| \,, \qquad (22)$

representing the size prescription used in the correlation integral (2). A given Q-space vector $\mathbf{A}$ will therefore yield a set of size measures $\{\|\mathbf{A}\|_p \mid p \in \mathbb{R},\; p \ne 0\}$, which includes the "min", "max" as well as the usual 2-norms. An infinite set of p-sizes of a given q-tuple is, however, mostly redundant; for practical purposes, a subset such as p ∈ {−∞, 0.5, 2, +∞} is probably sufficient. The sizes $S_{q,p}$ can be understood in terms of projections in Q-space as follows. Define the set of pair plane normal vectors $\mathbf{h}_{ij} \equiv (\mathbf{u}_i - \mathbf{u}_j)/\sqrt{2}$; these can be considered to span the respective pair (i,j)'s difference space $\mathbb{D}_q^{(ij)}$. Eq. (21) can then be written as

$S_{q,p}^{(\mathrm{GHP})} = \left[ \sum_{i<j} \left( \sqrt{2}\, |\mathbf{h}_{ij} \cdot \mathbf{X}_Q| \right)^p \right]^{1/p} \,. \qquad (23)$
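The continuum of p-sizes, including the p → ±∞ limits, can be sketched as a single function; the test vector (3, 4) and the function name are illustrative assumptions:

```python
import numpy as np

def psize(a, p):
    """'p-size' of a vector: the p-norm formula extended to any real
    p != 0. p=2 gives the Pythagorean norm, p -> +inf the max, and
    p -> -inf the min (for nonzero components)."""
    a = np.abs(np.asarray(a, dtype=float))
    if np.isposinf(p):
        return a.max()
    if np.isneginf(p):
        return a.min()
    return np.sum(a ** p) ** (1.0 / p)

a = [3.0, 4.0]
print(psize(a, 2))        # 5.0: the usual 2-norm (circle isonorm)
print(psize(a, 1))        # 7.0: the diamond-shaped p=1 isonorm
print(psize(a, np.inf))   # 4.0: the "max" size
print(psize(a, -np.inf))  # 3.0: the "min" size
```

For large negative finite p the value already approaches the smallest component, illustrating the limit in Eq. (20).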
Figure 4. Example of p-sizes: A vector $\mathbf{A} = (x_1, x_2)$ has many different norms or, more generally, p-sizes. Conversely, different sets of vectors $\mathbf{A}$ have the same p-sizes, as shown in this picture. The usual quarter-circle Pythagorean norm (p=2) is complemented by various convex and concave "isonorm" curves. Of particular interest are the p = +∞ (maximum) and p = −∞ (minimum) sizes.
The set of $\mathbf{h}_{ij}$'s are clearly not mutually orthogonal, given that q(q-1)/2 vectors $\mathbf{h}_{ij}$ all live in the (q-1)-dimensional difference space $\mathbb{D}_q$: indeed, we have $\mathbf{h}_{ij} \cdot \mathbf{h}_{kl} = (\delta_{ik} + \delta_{jl} - \delta_{il} - \delta_{jk})/2$ and, conversely, Eqs. (11)-(13) show that the $\mathbf{h}_{i \ge 2}$ are simple sums of the $\mathbf{h}_{ij}$'s. Fig. 5 shows, for the simple case q=3 and d=1, the normal vectors in the difference space $\mathbb{D}_3$ (equivalent to viewing the ℚ cube of Fig. 3(a) down the main diagonal) as well as the distances from the respective pair planes $d_{ij} = |\mathbf{x}_i - \mathbf{x}_j|/\sqrt{2}$.
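The dot-product formula for the pair-plane normals can be verified numerically; this sketch uses q=4 and illustrative variable names:

```python
import numpy as np
from itertools import combinations

# Pair-plane normals h_ij = (u_i - u_j)/sqrt(2) for q=4, and the claim
# h_ij . h_kl = (delta_ik + delta_jl - delta_il - delta_jk)/2.
q = 4
u = np.eye(q)
pairs = list(combinations(range(q), 2))
h = {(i, j): (u[i] - u[j]) / np.sqrt(2) for i, j in pairs}

def predicted(i, j, k, l):
    dlt = lambda a, b: 1.0 if a == b else 0.0
    return (dlt(i, k) + dlt(j, l) - dlt(i, l) - dlt(j, k)) / 2

ok = all(np.isclose(h[(i, j)] @ h[(k, l)], predicted(i, j, k, l))
         for (i, j) in pairs for (k, l) in pairs)
print(ok)  # True: the q(q-1)/2 = 6 normals are correlated, not orthogonal
```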
5. Q-space and other size measures
The symmetry of Q-space clearly favours the GHP topology over the corresponding Star and Snake topologies. It is possible, nevertheless, to accommodate the latter into Q-space. A simple ad hoc definition of the Star p-size of the q-tuple centered on $\mathbf{x}_i$ would be

$S_{q,p}^{(\mathrm{Star})}(\mathbf{x}_i) = \left[ \sum_{j \ne i} |\mathbf{x}_i - \mathbf{x}_j|^p \right]^{1/p} \,. \qquad (24)$
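A Star p-size of the form described above (combining the q-1 distances from the centre particle; this specific implementation and the toy 3-tuple are illustrative assumptions) can be sketched as:

```python
import numpy as np

def star_psize(x, centre, p):
    """p-generalised Star size of a q-tuple centred on x[centre]:
    combines the (q-1) distances from the centre particle;
    p -> inf recovers the Star max distance."""
    x = np.asarray(x, dtype=float)
    d = np.array([np.linalg.norm(x[j] - x[centre])
                  for j in range(len(x)) if j != centre])
    if np.isposinf(p):
        return d.max()
    return np.sum(d ** p) ** (1.0 / p)

x = np.array([[0.0, 0.0], [3.0, 4.0], [0.0, 1.0]])   # q=3, d=2
print(star_psize(x, 0, np.inf))  # 5.0: the largest distance from the centre
```

Unlike the GHP sizes, this value depends on which particle is taken as the centre, consistent with the asymmetry of the Star topology.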
Figure 5. Example (d=1, q=3) of the nonorthogonal projection (not decomposition!) of Q-space vectors onto "pair plane normals" $\mathbf{h}_{ij}$. The difference space vector $\mathbf{X}_D$, shown as a point in (b), is projected onto the pair plane normal vectors $\mathbf{h}_{12}$, $\mathbf{h}_{23}$ and $\mathbf{h}_{31}$ to yield the distances $d_{ij}$ between $\mathbf{X}_D$ and the respective pair planes $|\mathbf{x}_i - \mathbf{x}_j| = 0$. The views are once again down the main diagonal.
This is the p-generalised size measure corresponding to the Star max correlation integrals of Eqs. (3)-(4). Alternatively, we can, for the Star q-tuple centered on i=1, define a vector $\mathbf{X}_{\mathrm{Star}}^S(\mathbf{x}_1) \equiv \sqrt{q}\, \mathbf{x}_1 \mathbf{h}_1$ and from this find the vector

$\mathbf{X}_{\mathrm{Star}}^D(\mathbf{x}_1) \equiv \mathbf{X}_Q - \mathbf{X}_{\mathrm{Star}}^S(\mathbf{x}_1) = (\mathbf{x}_2 - \mathbf{x}_1)\mathbf{u}_2 + \cdots + (\mathbf{x}_q - \mathbf{x}_1)\mathbf{u}_q \,; \qquad (25)$

then the size measure (24) can be applied directly in terms of its $\mathbf{u}_j$ components. Clearly, the pair $(\mathbf{X}_{\mathrm{Star}}^S(\mathbf{x}_1), \mathbf{X}_{\mathrm{Star}}^D(\mathbf{x}_1))$ is not orthogonal to $\mathbf{h}_1$ and hence lives neither in $\mathbb{S}_q$ nor $\mathbb{D}_q$; also, this pair will be different for every q-tuple centre $\mathbf{x}_i$. This is consistent with the fact that sizes of Star q-tuples are, by definition, different for the different centre particles. Star and Snake topologies can be accommodated in the Q-space presented here, but, due to their obvious asymmetry, do not fit comfortably into the explicitly symmetric Q-space framework. Other approaches such as conditional spaces (for the Star) and spaces based on strict ordering such as in time series (for the Snake) will probably yield more natural interpretations for these cases.
6. Summary
It is becoming clear that the concept of Q-spaces is yielding insight into the fundamental structure of correlations of point sets. As subspaces of the final-state phase space (q = N), Q-spaces have a solid theoretical and historical foundation, while offering a structured set of numbers characterising
q-tuples, starting with location and size. Generalising simple notions of norms to an infinite set of p-sizes, we find that the "max" and 2-sum sizes are just special cases within this wider set. Nonorthogonal "pair plane" projections are found to be important, in keeping with the obvious fact that relative distances are mutually constrained and hence dependent. Most importantly, the usual 2-norm of the difference space vector is found to be the GHP sum size prescription, which therefore should be afforded more attention in higher order correlation analyses. Extensions such as shape characterisers and higher-order measures of size are easily conjured up.
Acknowledgements This work was supported in part by the National Research Foundation of South Africa and by the United States Department of Energy.
References
1. H.C. Eggers, in: 30th Int. Symp. on Multiparticle Dynamics, World Scientific (2001) pp. 291-302; hep-ex/0102005
2. A. Stuart and J.K. Ord, Kendall's Advanced Theory of Statistics, Vol. 1, fifth edition, Oxford University Press, New York (1987).
3. P. Lipa, P. Carruthers, H. C. Eggers and B. Buschbeck, Phys. Lett. B285, 300 (1992); H. C. Eggers, P. Lipa, P. Carruthers and B. Buschbeck, Phys. Rev. D48, 2040 (1993).
4. P. Grassberger and I. Procaccia, Phys. Rev. Lett. 50, 346 (1983).
5. H.G.E. Hentschel and I. Procaccia, Physica 8D, 435 (1983).
6. P. Grassberger, Phys. Lett. A97, 227 (1983).
7. G. Paladin and A. Vulpiani, Lett. Nuovo Cimento 41, 82 (1984).
8. K. Pawelzik and H. G. Schuster, Phys. Rev. A35, 481 (1987).
9. H. Atmanspacher, H. Scheingraber and G. Wiedenmann, Phys. Rev. A40, 3954 (1989).
10. L. Foà, Phys. Rep. 22, 1 (1975).
FLUCTUATIONS IN HUMAN ELECTROENCEPHALOGRAM
RUDOLPH C. HWA¹ AND THOMAS C. FERREE²

¹Institute of Theoretical Science and Department of Physics, University of Oregon, Eugene, OR 97403-5203, USA
²Dynamic Neuroimaging Laboratory, Department of Radiology, University of California at San Francisco, San Francisco, CA 94143-0628, USA

The human electroencephalogram (EEG) that records the brain electrical activities shows a high degree of fluctuations both spatially on the scalp and temporally over various time scales. Since the human brain dynamics is that of a highly nonlinear system, we shall examine the nature of the fluctuations in the EEG time series in the framework of nonlinear analysis. By using detrended fluctuation analysis we find scaling behaviors that provide very useful and hitherto unrealized information about the characteristics of the brain function.
What is a particle physicist doing with EEG? What I am going to tell you is something not out of reach of most of us gathered here, since we are all familiar with the study of correlations and fluctuations. The brain may be far more complicated than the physics of particle production, but the observables are not more complicated, only different. After a little fluctuation analysis, the EEG signals can be reduced to a set of numbers in just the same way that an event of multiparticle production is described by a set of numbers specifying the momenta of the detected particles. From that point onward the technique of treating those numbers in the extraction of useful information is almost the same. It is quite rewarding to come to the realization that an interdisciplinary area can be developed to bridge the gulf separating particle physics and neuroscience, and I am here to share that excitement with you. Human neuroscience at the global scale is at this stage mainly an inductive science based primarily on the phenomenology of noninvasive probing of the human brain. There is essentially no in-principle deduction except at the cellular and molecular levels. EEG is just one among several methods of getting information about the brain activities. It is cheaper and more convenient than, for example, MEG (magnetoencephalogram) and
MRI (magnetic resonance imaging). Recent development of the EEG device can provide a 128-probe net over the scalp with voltage readings in all channels at the rate of 250 points/sec. In a few minutes a huge amount of data can be generated in the form of 128 time series, each of which shows irregular oscillations with what appear to be random fluctuations. To an untrained eye there seem to be no discernible differences among the time series of the various channels, in much the same way that the many events of high-energy collisions look alike. In particle physics we have some basic principles to help us organize the data, such as relativity, conservation of momentum, particle identity, etc. In EEG there seem to be no rock-solid rules guiding the interpretation. One can make some inferences from the known properties of the neuronal behavior but not enough to construct a reliable framework to interpret the data collected at the scalp level. The conventional approach is to do Fourier analysis. Since it is based on linear superposition, we feel that a more appropriate method of treating the signals from the highly nonlinear system should be a type of nonlinear analysis. Thus from the very beginning our path diverges from most of the rest of the EEG community. (To be fair it should be mentioned that there exists a subcommunity of researchers who study the chaotic behavior of the brain based on known nonlinear analysis.) In a sense that is helpful to me, since I am new to neuroscience (but fortunately with the help of an experienced collaborator), and convenient for you, since you don't have to know anything about the brain to appreciate what I am going to tell you. Our aim is to study fluctuations of the time series. Fluctuations from what? For a pure sine wave of many cycles, the only sensible fluctuation would be from the horizontal axis.
But for a staircase wave form climbing up over a short duration, it is more sensible to study the fluctuation measured from the straight line that goes up. Unifying the two extreme cases, the meaningful fluctuations are measured from the semilocal trends, i.e., the linear fit of the wave in the interval being considered. This is called detrended fluctuation analysis (DFA), which has been applied in recent years to such problems as DNA nucleotides and heartbeat time series². We shall use it in a slightly different way in that the time series are not integrated over, due to the continuous nature of our input data. The idea is to calculate the variance of the wave from the semilocal trend for a given window size, average over many such windows of the same size, and then to study the rms of the deviation as a function of the window size. Clearly, small windows allow only the study of short-range correlations, and large windows can include long-range correlations.
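The procedure just described (semilocal linear detrending without the cumulative-sum step of standard DFA) can be sketched as follows; the random-walk test signal, window sizes, seed and function names are illustrative assumptions, not the authors' actual data or code:

```python
import numpy as np

def dfa(x, window):
    """rms deviation F(k) of a series from its semilocal linear trends,
    computed in non-overlapping windows of size k. Following the text,
    the series is used directly, without the integration step of
    standard DFA."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // window) * window
    t = np.arange(window)
    var = []
    for start in range(0, n, window):
        seg = x[start:start + window]
        coeffs = np.polyfit(t, seg, 1)                   # semilocal trend
        var.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
    return np.sqrt(np.mean(var))

# Power-law check F(k) ~ k^alpha on a Brownian-motion-like series,
# for which the detrended fluctuation grows roughly like sqrt(k).
rng = np.random.default_rng(2)
series = np.cumsum(rng.standard_normal(8192))
ks = [8, 16, 32, 64, 128]
fs = [dfa(series, k) for k in ks]
alpha = np.polyfit(np.log(ks), np.log(fs), 1)[0]
print(alpha)  # close to 0.5 for this Brownian-like input
```

A straight line of log F(k) vs log k over a range of window sizes signals scaling; a bend, as in the EEG data discussed below, signals an intrinsic time scale.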
Figure 1. Examples of EEG time series in three channels for a single subject. The vertical scales of Ch. 1 and Ch. 2 are shifted upward by 60 and 30 μV, respectively. Dashed lines are linear fits in the windows of 0.1 sec width.
We show in Fig. 1 a sample of three EEG time series in different channels for a single subject. The dotted lines indicate the linear fits of the semilocal trends in windows 0.1 sec wide. One can see an oscillatory mode in Channel 3 with a basic period of 0.1 sec. But no such oscillation can readily be identified in Channel 1. The fluctuations that we shall quantify are the deviations of the solid lines from the dotted lines. We shall then study the dependence of such deviations on the window size. I shall omit the mathematical details and refer the interested reader to the published papers³,⁴. Let me just dwell on the essentials here. Let the average rms deviation from the semilocal trend in a window of size k be denoted by F(k). What we have found is that in nearly all 128 channels of every subject that we have examined there is a power-law behavior
$F(k) \propto k^{\alpha} \qquad (1)$
in each of two regions. Figure 2 shows such behavior for the three channels shown in Fig. 1. The fact that the scaling behavior is not valid throughout the whole range of k implies that there is an intrinsic scale in the problem.
It is not difficult to relate the position of the bend to a frequency of oscillation that is dominant in each time series. That frequency is about 10 Hz, which corresponds to the well-known α resonance in the Fourier spectrum of EEG. On either side of the bend we can use the power law (1) to describe the scaling behavior, and assign the two scaling exponents α₁ and α₂ to characterize the nature of the fluctuations in the two regions of time scale.
Figure 2. ln F(k) vs ln k for the three channels of Fig. 1. The vertical scales of Ch. 1 and Ch. 2 are shifted upwards by 1.0 and 0.5 units, respectively.
We now have 128 pairs of αᵢ values for each subject. That is a huge reduction of the information contained in the EEG recordings of 10 sec length. Nevertheless, 256 numbers still constitute a large set from which we must do further analysis to extract pertinent information. The situation is similar to an event in a high-energy collision where 128 particles are produced, whose momentum vectors can be plotted in the 2D (p_T, p_L) space with the azimuthal angle ignored. In that plot (called the Peyrou plot more than thirty years ago) we can enter each vector as a point and get 128 points, indicating where all the particles are produced in an event.
So also here we can make a scatter plot of α2 vs α1, in which the 128 points characterize the brain state of a subject. The words 'brain state' are used here only in parallel to the final state of a collision, without a careful definition of what a brain state is, an issue that is of great interest to neuroscientists. In Fig. 3 we show the scatter plot of a typical healthy subject. A scatter plot for a stroke subject would look similar. Generally, α2 is less than α1 at each channel. Obviously, one can consider the mean and the deviation from the mean in such plots. That would be the simplest analysis that can be brought to such distributions.
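That mean-and-deviation analysis reduces the scatter plot to a few summary numbers per subject; a minimal sketch, where alpha1 and alpha2 are hypothetical arrays of the per-channel exponents:

```python
import numpy as np

def scatter_summary(alpha1, alpha2):
    """Mean point, rms deviations, and channel-by-channel correlation
    of the (alpha1, alpha2) scatter."""
    a1, a2 = np.asarray(alpha1, float), np.asarray(alpha2, float)
    corr = np.corrcoef(a1, a2)[0, 1]
    return (a1.mean(), a2.mean()), (a1.std(), a2.std()), corr
```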
Figure 3. Scatter plot of α2 vs α1 for one subject. Open circles indicate the three channels in Figs. 1 and 2.
In particle physics we make projections of the 2D plot onto the two axes and generate the pL and pT distributions for each event, and then perform averaging over all events. Usually, such a procedure is based on the supposition (with some phenomenological support) that the pL and pT distributions are independent. If an event corresponds to a subject in the EEG case, then we do not want to perform averaging over all subjects, since we want to detect the differences between the normal and abnormal subjects.
On the other hand, if we have a very long time series lasting several minutes, say, for one person, then an event may correspond to a 10 sec segment and we may want to study the stability of the αi parameters from segment to segment, as time progresses in the long recording session. In that case we also do not want to average over the events. The correlation between α1 and α2 is of interest and can be studied. Without assuming that α1 and α2 are independent, we can nevertheless perform projections onto the two axes and consider the P(α1) and P(α2) distributions separately. Normalized moments of these distributions can be calculated, as we do routinely for multiplicity distributions in multiparticle production. Let us define the normalized αi variables
α̂i(j) = αi(j) / ⟨αi⟩    (2)
where ⟨αi⟩ is the average of αi over all channels, j being the channel index, and i = 1 or 2. The normalized moments are then

m_q^(i) = (1/N) Σ_j [α̂i(j)]^q    (3)

where N is the number of channels. We have found that, except for low values of q, all subjects exhibit the exponential behavior
m_q^(i) ∝ exp(μi q)    (4)
for q 2 5. Since that behavior is true for both i = 1 and 2, we find further that a power-law behavior
exists for nearly all q and for all subjects examined. Thus q that is also p 2 / p 1 is a global measure that characterizes all ai values of a subject. The correlation between the two scaling exponents can also be studied by considering the ratio
ρ = α2/α1    (6)
for each channel. With the normalized moments of ρ defined as

N_q = ⟨(ρj / ⟨ρj⟩)^q⟩    (7)
we find that the exponential behavior

N_q ∝ exp(νq)    (8)

is again valid for all subjects, this time for nearly all q. Thus ν is another measure of the time series that is distinct from η.
Figure 4. A plot of ν vs η for all 28 subjects: normal (open) and stroke (filled).
In Fig. 4 we show the scatter plot of (η, ν) for the 28 subjects, among whom 18 are normal subjects labeled by open circles, and 10 are stroke subjects labeled by filled circles. One sees that the points for the stroke subjects generally lie lower than those for the normal subjects in that plot. Unfortunately, the separation between the two groups is not distinct enough to render the plot useful as a measure for stroke detection. This conclusion differs from that reported earlier [4], where we saw clearer separability, because we have subsequently excluded some of the channels examined earlier due to poor contacts and other defective aspects of the EEG recordings. However, we have found that another measure related to the moments can serve effectively to distinguish the stroke from the normal subjects. We shall report on that finding elsewhere. It has been a great pleasure to present this subject to the participants of this Workshop. As a physicist, I find it much easier to convey the ideas and some technical details to other physicists, even in this unfamiliar territory of EEG signals. And to a large extent I have also been able to sense the responsiveness of the audience. It is far more difficult to do the same with neuroscientists who are unfamiliar with the techniques used in our work.
That is the burden that anyone working in an interdisciplinary area must bear in the beginning and hopefully reduce in time. But what is most rewarding is the realization that what we have been doing with the particles created in our laboratories can give rise to a new way of looking at the electrical signals coming from our brain. In both areas phenomenological analysis can lead to insights into the workings of the underlying dynamics, or at least in this case provide something that may become beneficial for a tangible purpose. We are grateful to Prof. Don Tucker and Dr. Phan Luu for supplying the EEG data for our analysis. We have also benefited from the computational assistance of Wei He. This work was supported, in part, by the U.S. Department of Energy under Grant No. DE-FG03-96ER40972, and the National Institutes of Health under Grant No. R44-NS-38829.
References
1. C.-K. Peng, S. Havlin, H. E. Stanley, and A. L. Goldberger, Chaos 5, 82 (1995).
2. C.-K. Peng, S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger, Phys. Rev. E 49, 1685 (1994).
3. R. C. Hwa and T. C. Ferree, Nonlinear Phenomena in Complex Systems 5, 302 (2002).
4. R. C. Hwa and T. C. Ferree, Phys. Rev. E 66, 021901 (2002).
List of Participants Antoniou Nikos Department of Physics University of Athens 15771 Athens, Greece [email protected]
Contogouris Andreas Department of Physics University of Athens 15771 Athens, Greece [email protected]
Avramis Spyros Department of Physics University of Athens 15771 Athens, Greece [email protected]
Contoyiannis Yiannis Department of Physics University of Athens 15771 Athens, Greece [email protected]
Bai Yuting Institute of Particle Physics Huazhong Normal University Wuhan 430079 China [email protected]
Cvitanovic Predrag School of Physics, Georgia Tech Atlanta, GA 30332-0430 USA [email protected]
Bialas Andrzej Institute of Physics Jagellonian University Reymonta 4 Krakow 30-059 Poland [email protected] Brouzakis Nikos Department of Physics University of Athens 15771 Athens, Greece [email protected] Buschbeck Brigitte Institut fuer Hochenergiephysik Nikolsdorfergasse 18 Wien A-1050 Austria [email protected]
Del Fabbro Alessio Department of Theoretical Physics University of Trieste Strada Costiera 11 Miramare-Grignano Trieste 34014 Italy [email protected] Diakonos Fotis Department of Physics University of Athens 15771 Athens, Greece [email protected] Dremin Igor Lebedev Physical Institute Leninsky pr. 53 Moscow 119991 Russia [email protected]
Eggers Hans University of Stellenbosch P/Bag X1 Matieland 7602 South Africa [email protected] Ganoti Paraskevi Department of Physics University of Athens 15771 Athens, Greece [email protected] Georgopoulos George Department of Physics University of Athens 15771 Athens, Greece [email protected] Giovannini Alberto Universita' di Torino via Giuria 1 Torino 10125 Italy [email protected] Hwa Rudolph Institute of Theoretical Science University of Oregon Eugene OR 97403 USA [email protected] Kapoyannis Athanasios Department of Physics University of Athens 15771 Athens, Greece [email protected] Katsas Panayotis Department of Physics University of Athens 15771 Athens, Greece [email protected]
Kittel Wolfram University of Nijmegen Toernooiveld 1 Nijmegen 6525 ED The Netherlands [email protected] Kopytine Mikhail Kent State University Bldg 902B/STAR Brookhaven National Lab Upton NY 11973 USA [email protected] Koussouris Konstantinos Department of Physics University of Athens 15771 Athens, Greece [email protected] Ktorides Christos Department of Physics University of Athens 15771 Athens, Greece [email protected] Kuvshinov Viatcheslav Institute of Physics NAS Scorina av. 68 Minsk 220072 Belarus [email protected] Liu Lianshou Institute of Particle Physics Huazhong Normal University Wuhan 430070 China [email protected] Manjavidze Ioseb JINR (Russia) & Inst. Physics (Georgia) Lab. of Nucl. Problems Dubna Moscow Reg. 141980 Russia [email protected]
Margetis Spyridon Physics Department, Kent State University Kent Ohio 44262 USA [email protected] Mohanty Sandipan Department of Theoretical Physics Solvegatan 14 A Lund 223 62 Sweden [email protected] Papachristou Pandelis Department of Physics University of Athens 15771 Athens, Greece [email protected] Pisarski Rob Brookhaven Natl. Lab. Bldg. 510A Upton NY 11973 USA [email protected] Saridakis Manos Department of Physics University of Athens 15771 Athens, Greece [email protected] Sarkisyan-Grinbaum Edward CERN and University of Antwerpen EP DIV./G24410 GENEVE 23 CH-1211 Switzerland [email protected] Savvidis Georgios INP, NRC Demokritos Ag. Paraskevi Athens Greece-15341 [email protected]
Schmitz Norbert Max-Planck-Institut fuer Physik Foehringer Ring 6 Munich 80805 Germany [email protected] Seyboth Peter Max-Planck-Institut fuer Physik Foehringer Ring 6 Munich 80805 Germany [email protected] Spyropoulou-Stassinaki Martha Department of Physics University of Athens 15771 Athens, Greece [email protected] Tetradis Nikolaos Department of Physics University of Athens 15771 Athens, Greece [email protected] Todorova-Nova Sharka NIKHEF and CERN (EP) Geneva CH-1211 Switzerland [email protected] Trainor Thomas A. University of Washington CENPA 354290 Seattle Washington 98195 USA [email protected] Ugoccioni Roberto Universita' di Torino via Giuria 1 Torino 10125 Italy [email protected]
Wetterich Christof
ITP, Univ. Heidelberg Philosophenweg 16 Heidelberg 69120 Germany [email protected]
Wilk Grzegorz Soltan Institute for Nuclear Studies ul. Hoza 69 Warsaw 00-681 Poland [email protected]