Modelling microorganisms in food
Related titles:

Foodborne pathogens: hazards, risk analysis and control (ISBN-13: 978-1-85573-454-8; ISBN-10: 1-85573-454-0)
As incidence of foodborne disease continues to rise, the effective identification and control of pathogens becomes ever more important for the food industry. With its distinguished international team of contributors, Foodborne pathogens provides an authoritative and practical guide to effective control measures and how they can be applied to individual pathogens.

Understanding pathogen behaviour (ISBN-13: 978-1-85573-953-6; ISBN-10: 1-85573-953-4)
Pathogens respond dynamically to their environment. An understanding of pathogen behaviour is critical both because of evidence of increased pathogen resistance to established sanitation and preservation techniques, and because of the increased use of minimal processing technologies which are potentially more vulnerable to the development of resistance. This collection summarises the wealth of recent research in this area and its implications for microbiologists and QA staff in the food industry.

Emerging foodborne pathogens (ISBN-13: 978-1-85573-963-5; ISBN-10: 1-85573-963-1)
Developments such as the increasing globalisation of the food industry, constant innovations in technologies and products, and changes in the susceptibility of populations to disease have all highlighted the problem of emerging pathogens. Pathogens may be defined as emerging in a number of ways: they can be newly discovered (e.g. through more sensitive analytical methods), linked for the first time to disease in humans, or first associated with a particular food. A pathogen may also be defined as 'emerging' when significant new strains emerge from an existing pathogen, or if the incidence of a pathogen increases. Designed for microbiologists and QA staff in the food industry, and food safety scientists working in governments and academia, this collection discusses ways of identifying emerging pathogens and includes chapters on individual pathogens, their epidemiology, methods of detection and means of control.

Details of these books and a complete list of Woodhead's titles can be obtained by:
• visiting our web site at www.woodheadpublishing.com
• contacting Customer Services (e-mail: [email protected]; fax: +44 (0) 1223 893694; tel.: +44 (0) 1223 891358 ext. 130; address: Woodhead Publishing Ltd, Abington Hall, Abington, Cambridge CB21 6AH, England)
Modelling microorganisms in food

Edited by Stanley Brul, Suzanne van Gerwen and Marcel Zwietering

CRC Press
Boca Raton  Boston  New York  Washington, DC

Woodhead Publishing Limited
Cambridge, England
Published by Woodhead Publishing Limited, Abington Hall, Abington, Cambridge CB21 6AH, England
www.woodheadpublishing.com

Published in North America by CRC Press LLC, 6000 Broken Sound Parkway, NW, Suite 300, Boca Raton, FL 33487, USA

First published 2007, Woodhead Publishing Limited and CRC Press LLC
© 2007, Woodhead Publishing Limited
The authors have asserted their moral rights.

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. Reasonable efforts have been made to publish reliable data and information, but the authors and the publishers cannot assume responsibility for the validity of all materials. Neither the authors nor the publishers, nor anyone else associated with this publication, shall be liable for any loss, damage or liability directly or indirectly caused or alleged to be caused by this book.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming and recording, or by any information storage or retrieval system, without permission in writing from Woodhead Publishing Limited. The consent of Woodhead Publishing Limited does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from Woodhead Publishing Limited for such copying.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.

British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.
Library of Congress Cataloging in Publication Data: A catalog record for this book is available from the Library of Congress.

Woodhead Publishing ISBN-13: 978-1-84569-006-9 (book)
Woodhead Publishing ISBN-10: 1-84569-006-0 (book)
Woodhead Publishing ISBN-13: 978-1-84569-294-0 (e-book)
Woodhead Publishing ISBN-10: 1-84569-294-2 (e-book)
CRC Press ISBN-13: 978-0-8493-9149-1
CRC Press ISBN-10: 0-8493-9149-0
CRC Press order number: WP9149

The publishers' policy is to use permanent paper from mills that operate a sustainable forestry policy, and which has been manufactured from pulp which is processed using acid-free and elementary chlorine-free practices. Furthermore, the publishers ensure that the text paper and cover board used have met acceptable environmental accreditation standards.

Typeset by Ann Buchan (Typesetters), Middlesex, England
Printed by T J International Limited, Padstow, Cornwall
Contents

Contributor contact details . . . xi

1 Introduction . . . 1
S. Brul, University of Amsterdam and Unilever Food and Health Research Institute, S. J. C. van Gerwen, Unilever Foods, The Netherlands, and M. H. Zwietering, Wageningen University, The Netherlands
1.1 References . . . 4

Part I Building models for predictive microbiology

2 Predictive microbiology: past, present and future . . . 7
T. A. McMeekin, L. A. Mellefont and T. Ross, Australian Food Safety Centre of Excellence, University of Tasmania, Australia
2.1 Introduction . . . 7
2.2 Turning data into knowledge . . . 11
2.3 Case studies of critical analysis . . . 12
2.4 Recent systematic analysis of literature and the advent of quantitative microbial risk assessment (QMRA) . . . 15
2.5 QMRA and predictive microbiology . . . 15
2.6 Advances in technology for the application of predictive models . . . 17
2.7 Conclusions . . . 18
2.8 References . . . 19

3 Experimental design, data processing and model fitting in predictive microbiology . . . 22
M. A. J. S. van Boekel and M. H. Zwietering, Wageningen University, The Netherlands
3.1 Introduction . . . 22
3.2 Experimental design . . . 24
3.3 Data processing . . . 34
3.4 Model fitting . . . 36
3.5 Future trends . . . 38
3.6 Sources of further information and advice . . . 39
3.7 References . . . 39
3.8 Appendix . . . 42

4 Uncertainty and variability in predictive models of microorganisms in food . . . 44
M. J. Nauta, National Institute for Public Health and the Environment, The Netherlands
4.1 Introduction . . . 44
4.2 Case study – part 1 . . . 45
4.3 Imprecise predictive models . . . 46
4.4 Case study – part 2 . . . 48
4.5 A closer look at variability . . . 51
4.6 A closer look at uncertainty . . . 52
4.7 Separation of uncertainty and variability . . . 54
4.8 Case study – epilogue . . . 57
4.9 Categorising questions of food professionals . . . 59
4.10 GMPs for unpredictable microbes . . . 60
4.11 Current developments and future trends: towards novel predictive models . . . 63
4.12 Conclusions . . . 65
4.13 References . . . 65

5 Modelling lag-time in predictive microbiology with special reference to lag phase of bacterial spores . . . 67
J. P. P. M. Smelt and S. Brul, University of Amsterdam, The Netherlands
5.1 Introduction: general aspects of lag-time . . . 67
5.2 Lag-time of bacterial spores, transformation of the spore to vegetative cells . . . 69
5.3 Quantitative aspects, mathematical modelling . . . 73
5.4 Lag-times in real foods . . . 78
5.5 Conclusions . . . 79
5.6 References . . . 79

6 Application of models and other quantitative microbiology tools in predictive microbiology . . . 82
J. D. Legan, Kraft Foods, USA
6.1 Introduction . . . 82
6.2 Definitions . . . 82
6.3 Applications of models and databases . . . 83
6.4 Access to models . . . 93
6.5 Other quantitative microbiology tools . . . 98
6.6 Future trends . . . 103
6.7 Sources of further information and advice . . . 104
6.8 Acknowledgements . . . 105
6.9 References . . . 106

7 Predictive models in microbiological risk assessment . . . 110
M. H. Zwietering, Wageningen University, The Netherlands, and M. J. Nauta, National Institute for Public Health and the Environment, The Netherlands
7.1 Quantitative microbiological risk assessment . . . 110
7.2 Quantitative microbiology . . . 111
7.3 Recontamination . . . 112
7.4 Linking models . . . 113
7.5 Information sources . . . 119
7.6 Representativeness of models . . . 120
7.7 Food Safety Objectives and risk assessment . . . 121
7.8 Examples of structured approaches of risk assessment . . . 122
7.9 Conclusions . . . 124
7.10 References . . . 125

Part II New approaches to microbial modelling in specific areas of predictive microbiology

8 The non-linear kinetics of microbial inactivation and growth in foods . . . 129
M. G. Corradini and M. Peleg, University of Massachusetts, USA
8.1 Introduction . . . 129
8.2 The traditional primary models of inactivation and growth . . . 130
8.3 Traditional secondary models . . . 132
8.4 Sigmoid isothermal survival curves . . . 143
8.5 Non-isothermal inactivation . . . 145
8.6 Empirical growth models . . . 149
8.7 Simulation of non-isothermal growth curves . . . 154
8.8 Conclusions . . . 156
8.9 Acknowledgements . . . 158
8.10 References . . . 158

9 Modelling of high-pressure inactivation of microorganisms in foods . . . 161
A. Diels, I. Van Opstal, B. Masschalck and C. W. Michiels, Katholieke Universiteit Leuven, Belgium
9.1 Introduction . . . 161
9.2 Factors affecting microbial inactivation by high-pressure processing . . . 162
9.3 Current models: strengths and weaknesses . . . 169
9.4 Further developments in the modelling of pressure–temperature processes . . . 183
9.5 Conclusions . . . 190
9.6 References . . . 191

10 Mechanistic models of microbial inactivation behaviour in foods . . . 198
A. A. Teixeira, University of Florida, USA
10.1 Introduction . . . 198
10.2 Case for mechanistic models . . . 199
10.3 Development of mechanistic models for microbial inactivation . . . 203
10.4 Model validation and comparison with others . . . 205
10.5 Applications of microbial inactivation mechanistic models . . . 208
10.6 Strengths, weaknesses, and limitations of mechanistic models . . . 210
10.7 Future trends with mechanistic models . . . 211
10.8 Sources of further information and advice . . . 212
10.9 References . . . 212

11 Modelling microbial interactions in foods . . . 214
F. Leroy and L. De Vuyst, Vrije Universiteit Brussel, Belgium
11.1 Introduction . . . 214
11.2 Measuring growth and interactions of bacteria in foods . . . 216
11.3 Developing models of microbial interactions . . . 217
11.4 Applications and implications for food processors . . . 221
11.5 Future trends . . . 222
11.6 References . . . 224

12 A kinetic model as a tool to understand the response of Saccharomyces cerevisiae to heat exposure . . . 228
F. Mensonides, University of Amsterdam (current address: EML Research gGmbH, Germany), B. Bakker, Vrije Universiteit Amsterdam, The Netherlands, and S. Brul, K. Hellingwerf and J. Teixeira de Mattos, University of Amsterdam, The Netherlands
12.1 Introduction . . . 228
12.2 Experimental data . . . 230
12.3 The model . . . 237
12.4 Validation . . . 239
12.5 Conclusions . . . 240
12.6 References . . . 242
12.7 Addendum . . . 246

13 Systems biology and food microbiology . . . 250
S. Brul, University of Amsterdam, The Netherlands, and H. V. Westerhoff, Free University of Amsterdam, The Netherlands
13.1 Introduction . . . 250
13.2 Systems biology: biology at last . . . 250
13.3 Systems biology and food microbiology . . . 259
13.4 Food production: metabolic engineering . . . 261
13.5 Food safety . . . 270
13.6 Areas for systems food microbiology in microbial food spoilage research . . . 273
13.7 Models of microbial ecology and food consumption . . . 274
13.8 Sources of further information and advice . . . 276
13.9 Acknowledgements . . . 277
13.10 References . . . 277
13.11 Addendum 1 . . . 282
13.12 Addendum 2 . . . 286

Index . . . 289
Contributor contact details (* = main contact)
Editors and Chapter 1

Prof. dr Stanley Brul
Dept Molecular Biology and Microbial Food Safety, Swammerdam Institute for Life Sciences, University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands
and Senior Scientist, Dept Advanced Food Microbiology, Unilever Food & Health Research Institute, Olivier van Noortlaan 120, 3133 AT Vlaardingen, The Netherlands
email: [email protected]

Dr Suzanne J. C. van Gerwen
Unilever Foods, Nassaukade 3, 3071 JL Rotterdam, The Netherlands
email: [email protected]

Professor Marcel H. Zwietering
Laboratory of Food Microbiology, Wageningen University and Research Centre, Bomenweg 2, 6703 HD Wageningen, The Netherlands
email: [email protected]

Chapter 2

Tom A. McMeekin,* Lyndal A. Mellefont and Tom Ross
Australian Food Safety Centre of Excellence, University of Tasmania, Private Bag 54, Hobart, TAS 7001, Australia
email: [email protected]; [email protected]; [email protected]

Chapter 3

Tiny van Boekel* and Marcel H. Zwietering
Department of Agrotechnology & Food Sciences, Wageningen University and Research Centre, P.O. Box 8129, 6700 EV Wageningen, The Netherlands
email: [email protected]; [email protected]

Chapter 4

Maarten J. Nauta
National Institute for Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA Bilthoven, The Netherlands
email: [email protected]

Chapter 5

Jan P. P. M. Smelt*
Laboratory for Molecular Biology and Microbial Food Safety, Swammerdam Institute for Life Sciences, University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands
email: [email protected]

Prof. dr Stanley Brul
Dept Molecular Biology and Microbial Food Safety, Swammerdam Institute for Life Sciences, University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands
and Senior Scientist, Dept Advanced Food Microbiology, Unilever Food & Health Research Institute, Olivier van Noortlaan 120, 3133 AT Vlaardingen, The Netherlands
email: [email protected]

Chapter 6

Dr David Legan
Kraft Foods Technology Center, 801 Waukegan Road, Glenview, IL 60025, USA
email: [email protected]

Chapter 7

Professor Marcel H. Zwietering*
Laboratory of Food Microbiology, Wageningen University and Research Centre, Bomenweg 2, 6703 HD Wageningen, The Netherlands
email: [email protected]

Maarten J. Nauta
National Institute for Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA Bilthoven, The Netherlands
email: [email protected]

Chapter 8

Maria G. Corradini* and Micha Peleg
Department of Food Science, University of Massachusetts, Amherst, MA 01003, USA
email: [email protected]; [email protected]

Chapter 9

Ann Diels, Isabelle Van Opstal, Barbara Masschalck and Prof. Chris W. Michiels*
Laboratory of Food Microbiology, Katholieke Universiteit Leuven, Kasteelpark Arenberg 22, B-3001 Leuven, Belgium
email: [email protected]

Chapter 10

Professor Arthur A. Teixeira
Agricultural and Biological Engineering Department, 207 Frazier Rogers Hall, P.O. Box 110570, University of Florida, Gainesville, FL 32611-0570, USA
email: [email protected]

Chapter 11

Frédéric Leroy* and Luc De Vuyst
Research Group of Industrial Microbiology and Food Biotechnology (IMDO), Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, Belgium
email: [email protected]; [email protected]

Chapter 12

Femke Mensonides*
Bioinformatics and Computational Biochemistry Group, EML Research gGmbH, Schloss-Wolfsbrunnenweg 33, 69118 Heidelberg, Germany
email: [email protected]

Barbara Bakker
Department of Molecular Cell Physiology, Faculty of Earth and Life Sciences, Vrije Universiteit Amsterdam, De Boelelaan 1085, 1081 HV Amsterdam, The Netherlands

Prof. dr Stanley Brul
Dept Molecular Biology and Microbial Food Safety, Swammerdam Institute for Life Sciences, University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands
and Senior Scientist, Dept Advanced Food Microbiology, Unilever Food & Health Research Institute, Olivier van Noortlaan 120, 3133 AT Vlaardingen, The Netherlands
email: [email protected]

Klaas Hellingwerf and Joost Teixeira de Mattos
Swammerdam Institute for Life Sciences – Mol Microbial Physiology, University of Amsterdam, Nieuwe Achtergracht 166, 1066 WV Amsterdam, The Netherlands
email: [email protected]

Chapter 13

Prof. dr Stanley Brul*
Dept Molecular Biology and Microbial Food Safety, Swammerdam Institute for Life Sciences, University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands
and Senior Scientist, Dept Advanced Food Microbiology, Unilever Food & Health Research Institute, Olivier van Noortlaan 120, 3133 AT Vlaardingen, The Netherlands
email: [email protected]

Hans V. Westerhoff
Laboratory for Molecular Cell Physiology, Faculty of Earth and Life Science, Free University Amsterdam, de Boelelaan 1085, NL-1081 HV Amsterdam, The Netherlands
email: [email protected]
1 Introduction

S. Brul, University of Amsterdam and Unilever Food and Health Research Institute, S. J. C. van Gerwen, Unilever Foods, The Netherlands, and M. H. Zwietering, Wageningen University, The Netherlands
Food microbiology has always been governed in many ways by the desire to be able to predict microbial growth and survival, the main idea being that if microbial behaviour can be captured in models it is much more straightforward to arrive at risk assessment procedures and quality assurance models (discussed extensively in e.g. McMeekin and Ross, 2002). Quantitative microbial ecology, the discipline that emerged from this wish, has become increasingly important in food safety management activities (see for instance the FAO/WHO guidelines for microbiological risk management, e.g. Codex, 2004). However, many uncertainties remain. In fact, as the level of microbial knowledge increases, so also does the clarity of the issues in the field. The parsimonious models that have been derived do not always fit the actual growth or inactivation data of the microbes. In addition, it is now quite clear that, although their cells are largely genetically identical, bacterial populations do not behave uniformly and, in fact, are generally not homogeneous mixtures of identical cells (see the recent excellent discussion by Smits et al., 2006; for experimental data see e.g. Balaban et al., 2004 or Veening et al., 2006). Such problems are becoming more and more important since most industries are exploring the use of minimal processing techniques that allow them to make better products but which also operate closer to the death, survival and growth boundaries and thus require even greater precision from the models. Other books have already been written on predictive microbiology, but it remains one of the most important areas in food microbiology. Most books have focused on growth models and, to a lesser extent, on inactivation models, which are at least as important for efficient food safety management. Moreover, with the rapid developments in the field of molecular microbial physiology, powered by the
'omics' revolutions, we can now start to understand the mechanisms behind the phenomena that are unveiled by microbial modelling approaches. Thus we can now extensively validate the robustness of such models from a biological perspective and correct them where appropriate [as proposed by Brul et al. (2002), amongst others]. What is more, these new insights have the potential to move quantitative microbiology from an empirical basis to a mechanistic one. Thus, we have included in the book some of the latest developments in microbial systems biology in the form of an outlook into the (near) future. Nevertheless, such promise can be fulfilled only if a thorough understanding of specific issues in food microbiology continues to prevail. Typical examples of these include the importance of understanding strain phenotypic diversity within a species, the occurrence of heterogeneity with respect to preservation process stress resistance of individual cells in genetically homogeneous cultures and the study of microbial interactions in food. In the book we have gathered leading experts in the field, each one of whom has made a significant contribution to model building. Part I of our book, 'Building models for predictive microbiology', discusses best practice in constructing quantitative models in food microbiology. In Part II of the book, 'New approaches to microbial modelling in specific areas of predictive microbiology', we discuss specific areas and some of the new approaches based on molecular, 'genomics' insight into microbial behaviour in foods. Part I opens with a chapter on 'Predictive microbiology: past, present and future' by leading experts from the University of Tasmania. McMeekin et al. begin with a historical account of the field as a whole. This is followed by an up-to-date account of new developments and areas to explore. The chapter establishes a baseline in linking microbial modelling to the technological developments needed to put the most recent results speedily into practice and link them to quantitative microbial risk assessment. In Chapter 3 van Boekel and Zwietering discuss the key aspects of experimental design, data processing and model fitting which need to be considered. They provide a framework setting out the basic requirements for value-added quantitative models. Chapter 4 covers uncertainty and variability in models of microorganisms in food. Nauta gives an account of the variability and uncertainty related to food constitution, environmental conditions and microbial responses in predictive food microbiology. He illustrates this and provides ways to deal with these phenomena using a simplified case study. Smelt and Brul deal in Chapter 5 with the highly relevant area of modelling lag-times in predictive microbiology with special reference to the lag phase of bacterial spores. Bacterial spores are probably the toughest living structures on earth and cause the food microbiologist great concern due to their ability to survive thermal preservation treatments. The issue of heterogeneity in behaviour is a key one in spore research, as is highlighted by the authors. Valuable contributions now being made by mechanistic studies are indicated. In Chapter 6 Legan discusses the key points of interest to the end users of models, together with other quantitative microbiology tools in predictive microbiology. The practicalities of modelling for industrial users are highlighted, thus
providing a framework for transforming modelling into useful concepts in food manufacturing practice. Finally, in Chapter 7 quantitative microbial models are really put to work for food risk assessment by Zwietering and Nauta. Their role in indicating the relative effect of interventions in a quantitative manner, while pointing the way towards the optimum decisions based on current understanding, is the leading theme of this chapter. The authors point out that, while the models are highly valuable, it has to be realised that model predictions cannot be considered absolute. Therefore decisions should always be based on both modelling and biological expert knowledge. Part II of the book starts off with Chapter 8, in which Corradini and Peleg highlight once more the importance of microbial variability in food microbiology. The prevalence of significant non-linear kinetics of microbial inactivation and growth in foods corroborates this point. The authors highlight that, although inactivation and growth are usually treated as totally separate topics, their modelling has some common mathematical features. Chapter 9 then goes on to consider how predictive models can be applied to an area of food preservation where innovative strategies incorporating the use of high-pressure (HP) processing are the central theme. Michiels and co-workers point out how remarkable and consistent deviations from first-order behaviour become apparent as data on the HP inactivation kinetics of microorganisms accumulate in the literature. The various options to choose from when establishing the best model to describe HP-induced inactivation are discussed in detail. Along with microbial inactivation models, the authors also discuss the key importance of making the right choice of test strain and of being relentless in ensuring reproducibility of strain conservation procedures. Chapter 10 by Teixeira touches on the issue of ensuring a mechanistic meaning and interpretation of models. This chapter focuses on the development and application of mechanistic models capable of more accurately predicting thermal inactivation of bacterial spores, important in sterilisation and pasteurisation treatments in the food, pharmaceutical and bioprocess industries. The chapter highlights how crucial it is that mechanistic models are able to predict accurate responses to process conditions outside the range of conditions in which valid rate parameters can be estimated. The hurdles encountered in developing mechanistic models are also covered. Chapter 11 by Leroy and de Vuyst addresses the issue that microbes generally do not occur in foods in isolation. Thus the chapter shows that, while the complexity of microbial interactions and the implications of competitive growth in foods are frequently overlooked in predictive microbiology and other modelling studies, such microbial interactions may play pivotal roles in the development of foodborne pathogens or spoilage. Chapter 12 by Teixeira de Mattos et al. offers a case study showing how biochemical-level mechanistic modelling concepts, a form of systems biology discussed in Chapter 13, can be used for the study of microbial physiology. The authors use the example of the stress response of bakers' yeast, Saccharomyces cerevisiae, subjected to a continuous temperature challenge.
Chapter 13 by Brul and Westerhoff discusses how such systems biology approaches can be incorporated into food microbiology in a broader context. The chapter opens with an introduction to the basis of systems biology and where its concepts emerged from. It then highlights how some of the issues brought up in Chapter 10 concerning the mechanistic understanding of biological events and their mathematical characterisation can be dealt with. In summary, we feel that, with the spectrum of current, innovative and thought-provoking issues covered by its various chapters, the book will make a useful contribution to the area, both as a reference and as a source of new approaches in the field of modelling microorganisms in food.
1.1 References
Balaban NQ, Merrin J, Chait R, Kowalik L and Leibler S (2004) Bacterial persistence as a phenotypic switch, Science, 305, 1622–1625.
Brul S, Montijn R, Schuren F, Oomes S, Klis FM, Coote P and Hellingwerf KJ (2002) Detailed process design based on genomics of survivors of food preservation processes, Trends Food Sci Technol, 13, 325–333.
Codex Alimentarius Commission (2004) Report of the 20th session of the Codex Committee on General Principles, Paris, France, 3–7 May 2004, ALINORM 04/27/33A, Appendix II.
McMeekin TA and Ross T (2002) Predictive microbiology: providing a knowledge-based framework for change management, Int J Food Microbiol, 78, 133–153.
Smits WK, Kuipers OP and Veening JW (2006) Phenotypic variation in bacteria: the role of feedback regulation, Nat Rev Microbiol, 4, 259–271.
Veening JW, Kuipers OP, Brul S, Hellingwerf KJ and Kort R (2006) Effects of phosphorelay perturbations on architecture, sporulation and spore resistance in biofilms of Bacillus subtilis, J Bacteriol, 188, 3099–3109.
Part I Building models for predictive microbiology
2 Predictive microbiology: past, present and future

T. A. McMeekin, L. A. Mellefont and T. Ross, Australian Food Safety Centre of Excellence, University of Tasmania, Australia
2.1 Introduction
Perhaps it is as a consequence of more than 30 years’ research on predictive microbiology and the numerous reviews and book chapters on the subject that the prospect of preparing an introductory chapter for this text was somewhat daunting – would there be anything left to write about predictive microbiology that we (the University of Tasmania group), or other long-term residents of the predictive modelling fraternity, had not written about before and, of even greater concern, written about recently? However, consideration of the subsequent chapters in this book and an analysis of recent publications indicates clearly that the field continues to develop apace. Furthermore, because predictive microbiology is dependent on the integration of traditional microbiology skills with those found in the disciplines of mathematics, statistics and information systems and technology, advances in any of these areas provide potential opportunities to increase the utility, and applicability, of predictive models. Within the discipline of predictive microbiology, it is becoming increasingly obvious that, whilst predictive modelling continues to rely heavily on studies of microbial population behaviour in foods (quantitative microbial ecology), integration of this knowledge with observations of attendant physiological events and with molecular approaches provides a basis for a holistic understanding at the cellular and subcellular level. Thus, predictive microbiology continues to be underpinned by state-of-the-art science without losing sight of practical considerations, including validation in foods and practical application for specific purposes
supporting food safety management approaches, such as microbial risk assessment, or evaluating traditional and innovative food processing techniques. To provide context, we first consider the historical development of the concept of predictive microbiology and the importance of advances in the electronics industry which produced the enabling technology required to turn the concept, and reams of data on microbial behaviour in foods, into knowledge which could then be applied to assess both the stability and safety of foods. Contemporary research in predictive microbiology is then considered through analysis of papers in 2005 in the International Journal of Food Microbiology. For the future, we discern a trend towards critical analysis of the literature that may be linked to increasing familiarity with rigorous quantitative risk assessments, and we predict further dependence of the latter on predictive models and increasing use of stochastic simulation models. Finally, we draw attention to existing and developing technologies that will assist in the successful application of predictive models.
2.1.1 Predictive microbiology: historical development

Many authors cite the development of log-linear microbial death kinetics by Bigelow et al. (1920), Bigelow (1921) and Esty and Meyer (1922) as the first example of a predictive model to find widespread application in the food industry. However, it is now generally accepted that the linear relationship is, in many instances, a simplification of more complex patterns of microbial death involving 'shoulders' and 'tails' (Peleg and Cole, 1998). Nevertheless, as an approximation, it has had remarkable success in underpinning the thermally processed foods industry by predicting time/temperature combinations sufficient to reduce spores of Clostridium botulinum type A by 12-log-cycles in non-acid foods. An interesting contemporary analysis of thermal inactivation patterns (van Asselt and Zwietering, 2006) provides further insight into the success of the log-linear model, perhaps suggesting that specific deviations are not of global significance and identifying situations where these need to be taken into account. Severe heat treatments cause significant changes to the sensory and nutritional attributes of foods and, in today's quality-conscious world, much attention is given to producing foods that retain superior eating qualities but, of course, must remain safe. An example is the production of refrigerated foods of extended durability (REPFEDs) (Mossel and Struijk, 1991), for which models for shelf life and safety have been proposed (Membré et al., 2006). McMeekin and Ross (2002) commented that, whilst the botulinum cook may have been the first predictive model to find widespread utility, it was developed for a single purpose and, thus, was unlikely to have been conceived with a more general purpose in mind. The wider concept of predictive microbiology appears to have first been enunciated by Scott (1937), who foreshadowed the potential to predict microbial spoilage of meat and the potential for the growth of pathogens on carcasses during their transformation from muscle to meat when cooled from 30 °C or more to < 7 °C. For the next 25–30 years, the field of predictive microbiology remained
quiescent, probably due to the lack of technology to allow practical application of the concept. The impasse was overcome in the 1960s and 1970s with the advent of practical electronics technologies that enabled continual monitoring of time and temperature combinations during processing, distribution and storage. Electronic temperature loggers allowed many food processes to advance to the state of knowledge available to the canning industry through thermocouples and chart recorders to validate thermal processes. Advances in electronics also led to the information technology revolution which has affected all facets of modern life and which provided impetus for the rapid development of predictive modelling research on both spoilage and pathogenic microorganisms. The appearance of review articles provides an indicator that a field of scientific endeavour is developing a degree of maturity. In the case of predictive microbiology, the first 'review' was that of Roberts and Jarvis (1983), who also first coined the term 'predictive microbiology'. Since then, reviews and general articles have appeared at regular intervals (Farber, 1986; McMeekin and Olley, 1986; Baird-Parker and Kilsby, 1987; Ross and McMeekin, 1994; McDonald and Sun, 1999), and four international conferences have been held, three with published proceedings [Journal of Industrial Microbiology, 12 (3–5), 1993; International Journal of Food Microbiology, 73 (3–4), 2002; International Journal of Food Microbiology, 100 (1–3), 2005]. In addition, two books [Predictive Microbiology: Theory and Application, McMeekin et al., 1993, and Modelling Microbial Responses in Food, McKellar and Lu (editors), 2003] and the current text have been published, together with a major report on predictive microbiology for the meat industry (Ross, 1999). An interesting dichotomy of approach was the dominance of kinetic models for spoilage prediction (Spencer and Baines, 1964; Nixon, 1971; Olley and Ratkowsky, 1973a,b) and probability models to address risks posed by C. botulinum and other toxigenic microorganisms (Genigeorgis, 1981; Roberts et al., 1981). Both kinetic and probability modelling approaches continue to be used, with the latter increasingly applied to deal with microbial responses under harsh conditions where the response of individual cells is uncertain and not amenable to description by a deterministic kinetic model. Deterministic kinetic models, by contrast, work well only where population growth or death is an inevitable consequence of the conditions imposed, and they were classified by Bridson and Gould (2000) as the domain of 'classical microbiology' as opposed to 'quantal microbiology', where uncertainty dominates. The differentiation between kinetic and probability models and eventual integration by Ross and McMeekin (1994) was commented upon by Devlieghere et al. (2006) who referred to the 'twilight zone', where the modelling needs to shift from a kinetic model, when the environmental conditions still support growth, to a probability model when the growth/no growth interface is reached, demonstrating the close link between the two modelling types. Devlieghere et al. (2006) also gave a succinct account of model types: empirical versus mechanistic models, primary, secondary and tertiary models, thermal inactivation and non-thermal inactivation models. More detailed commentary on model types can be found in McKellar and Lu (2003).
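As a concrete illustration of the log-linear kinetics and the 12D 'botulinum cook' concept discussed above, the minimal sketch below computes survivors from a decimal reduction time (D-value), rescales D with temperature via a z-value, and derives the process time for a 12-log reduction. The D- and z-values are the figures commonly quoted for proteolytic C. botulinum spores and are used here purely for illustration, not as process specifications.

```python
def survivors(n0, d_value, minutes):
    """Log-linear model: log10 N(t) = log10 N0 - t/D."""
    return n0 * 10 ** (-minutes / d_value)

def d_at_temperature(d_ref, t_ref, z_value, temp):
    """Rescale a reference D-value to another temperature using the z-value."""
    return d_ref * 10 ** ((t_ref - temp) / z_value)

# Commonly quoted reference figures for proteolytic C. botulinum spores
# (illustrative only): D at 121.1 degC ~ 0.21 min, z ~ 10 degC.
d_ref, t_ref, z = 0.21, 121.1, 10.0

print(f"12D process time at 121.1 degC: {12 * d_ref:.2f} min")
d_115 = d_at_temperature(d_ref, t_ref, z, 115.0)
print(f"D-value at 115 degC: {d_115:.2f} min (12D time: {12 * d_115:.1f} min)")
print(f"Survivors from 1e12 spores after 12D at 121.1 degC: {survivors(1e12, d_ref, 12 * d_ref):.0f}")
```

Run as written, the sketch reproduces the familiar result that a 12D treatment reduces 10^12 spores to about one survivor, and shows how quickly the required process time grows at lower temperatures.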
Table 2.1 Categorisation of 'predictive microbiology' papers in the Special Issue of Int J Food Microbiol, 100, 2005

Subject area (no. of papers):
• Growth/no growth boundaries: 3
• Microbial lag: 6
• Microbial growth: 5
• Secondary modelling: 3
• Mycology: 1
• Inactivation kinetics: 3
• Database development: 1
• Process design and optimisation: 3
• Quantitative risk assessment: 6

Table 2.2 Applications of predictive microbiology (area of application: examples)

• Hazard Analysis Critical Control Point (HACCP): preliminary hazard analysis; identification and establishment of critical control point(s); corrective actions; assessment of importance of interaction between variables
• Risk assessment: estimation of changes in microbial numbers in a production chain; assessment of exposure to a particular pathogen
• Microbial shelf life studies: prediction of the growth of specific food spoilers; prediction of growth of specific food pathogens
• Product research and development: effect of altering product composition on food safety and spoilage; effect of processing on food safety and spoilage; evaluation of effect of out-of-specification circumstances
• Temperature function integration and hygiene regulatory activity: consequences of temperature in the cold chain for safety and spoilage
• Education: education of technical and especially non-technical people
• Design of experiments: number of samples to be prepared; defining the interval between sampling
2.1.2 Predictive microbiology: current activity

Taking the 2005 contents of the International Journal of Food Microbiology as a barometer of the diversity of current research in food microbiology reveals a Special Issue (Volume 100; see Table 2.1) comprising 31 papers presented at the 4th International Conference on Predictive Modelling of Microbial Behaviour in
Foods held in Quimper, France, in 2003. In other issues of the International Journal of Food Microbiology in 2005 (Volumes 98–105), a further 44 modelling-related papers were published. Excluding the Special Issue, this represents ~ 19% of the 236 papers published in that journal in 2005. If we assume that papers published in regular issues in 2005 resulted from work in the period 2003–2005, several of the categories above continue to be well represented, viz: microbial lag (six papers), microbial growth and secondary models (17 papers), inactivation kinetics (four papers) and quantitative risk assessment (six papers). However, modelling of fungal growth, for many years under-represented in comparison with bacterial studies, was described in five papers (versus one in the Special Issue), perhaps indicating a developing trend. In addition to the number of papers published in a field, their impact is also an important consideration in judging the significance of the work. Thus, it is interesting to note that, in the last quarter of 2005 and the first quarter of 2006, three of the top 20 downloaded non-review articles from the International Journal of Food Microbiology considered aspects of predictive modelling. We therefore have an extensive scientific basis from which to launch the myriad of applications suggested over many years for predictive models, from Scott (1937) to Devlieghere et al. (2006) in the very comprehensive multi-authored volume Safety in the Agri-food Chain, edited by Luning, Devlieghere and Verhé (2006). These are shown in Table 2.2.
2.2 Turning data into knowledge
Predictive microbiology is based on the premise that responses of microbial populations to environmental conditions are reproducible. Consequently, by characterising environments in terms of those factors affecting growth, survival and inactivation, it is possible, from past observations, to predict responses under similar conditions by monitoring the environment rather than resorting to further microbiological analysis (Ross et al., 2000). The processes involved in developing and evaluating predictive models are now well established (McMeekin et al., 1993; McKellar and Lu, 2003) and essentially comprise construction of a database (the domain of bioinformatics) which, when combined with biomathematics to develop mathematical models, provides the basis for predictive microbiology software (Baranyi and Tamplin, 2004) through which application is made possible. A prime example of the database–model–software triangle is ComBase (www.combase.cc). Originally, this was a combination of the US Pathogen Modeling Program database and the UK Growth Predictor (formerly FoodMicroModel) database, and it has been strongly supplemented by 'mining' the literature. In October 2005, ComBase contained > 35 000 response records for pathogens and 4000 for spoilage organisms (M. Tamplin and J. Baranyi, pers. comm.). To these were added, in 2006, 8000 datasets held by the Australian Food Safety Centre of Excellence (AFSCoE), comprising ~ 5500 kinetic datasets and ~ 2500 probability of growth (boundary model) datasets.
A similar system used as a food safety decision support tool is Sym'Previus, developed in France and described by Leporq et al. (2005). For an interesting account by Jozsef Baranyi of the philosophy underlying databases, readers are referred to McMeekin et al. (2006). It has been observed (A. Astin, Dairy Food Safe Victoria, Australia, pers. comm., 2005; H. Dornom, Dairy Australia, pers. comm., 2005) that the Australian dairy industry is data rich, but knowledge poor. Undoubtedly, this is true of dairy industries in other countries and of other food industry sectors, highlighting the need for databases and predictive models. The hallmark of predictive microbiology is to turn data into knowledge.

2.2.1 Critical analysis of the literature

The options to provide data for predictive model development are:
• to conduct de novo experiments, with the advantage (or, at least, the potential) of a well-constructed experimental design covering the growth or death domain of interest and data capture at sufficiently close intervals to produce accurate and robust models to describe the effect of environmental variables on microbial population dynamics; or
• to 'mine' the literature for data describing the behaviour of an organism of concern, which, potentially, has the disadvantage that the primary purpose of the experiment was not to produce a predictive model or that experimental details are not adequately described, e.g. the physiological state of the inoculum. Nevertheless, there are some remarkable examples in the early literature of properly conducted and documented experiments from which models can be developed with confidence, e.g. Barber (1908) and Monod (1949).
As all graduate students will be aware, a survey of pertinent, published literature is a starting point for any research project. However, it is a desideratum that the review provides a critical analysis and is not simply a chronological account or catalogue of published research, which limits its value greatly. Many datasets, by themselves, will not provide a sufficient basis for model development; when many are combined and analysed, patterns of microbial behaviour under defined conditions are likely to emerge. Even without proceeding to the next step of a mathematical description in the form of a model, the pattern, in its own right, may well provide useful information. Such information may indicate areas where more data are required before developing a model, where a region suggests that interesting physiological studies should be undertaken or even provide clues leading to improvements in process parameters, product formulation or storage and distribution conditions.
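One pattern frequently sought when pooling literature data in this way is a growth/no-growth boundary. The sketch below is a hypothetical illustration of the probability-model idea (it is not a published model and the observations are invented): a logistic regression fitted to pooled binary growth records gives the probability of growth as a function of temperature and water activity.

```python
import numpy as np

# Invented pooled observations for illustration: temperature (degC), water
# activity, and whether growth was observed (1) or not (0).
data = np.array([
    [4, 0.997, 0], [8, 0.997, 1], [12, 0.997, 1], [4, 0.960, 0],
    [8, 0.960, 0], [12, 0.960, 1], [4, 0.940, 0], [8, 0.940, 0],
    [12, 0.940, 0], [16, 0.960, 1], [16, 0.940, 1], [20, 0.940, 1],
])
raw, grew = data[:, :2], data[:, 2]
mean, std = raw.mean(axis=0), raw.std(axis=0)
X = np.column_stack([np.ones(len(raw)), (raw - mean) / std])  # intercept + scaled T, aw

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit P(growth) = sigmoid(b0 + b1*T' + b2*aw') by gradient ascent on the log-likelihood.
beta = np.zeros(3)
for _ in range(5000):
    beta += 0.1 * X.T @ (grew - sigmoid(X @ beta)) / len(grew)

# Predicted probability of growth at a new condition (10 degC, aw 0.95).
x_new = np.concatenate(([1.0], (np.array([10.0, 0.95]) - mean) / std))
print(f"P(growth) at 10 degC, aw 0.95: {sigmoid(x_new @ beta):.2f}")
```

The growth/no-growth interface can then be read off as the set of conditions where the predicted probability crosses a chosen threshold (for example 0.5 or 0.1), which is the kind of boundary information held in the probability-of-growth datasets mentioned above.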
2.3 Case studies of critical analysis
2.3.1 Growth of Escherichia coli on meat

Earlier, we attributed enunciation of the concept of predictive microbiology to Scott (1937), despite the fact that models were not developed at that time from his
database of the effects of temperature, water activity and carbon dioxide concentration on the development of microbial populations on meat. Nevertheless, on the basis of the semi-quantitative patterns observed, Scott's work was sufficiently informative to specify the combination of factors (hurdles) to enable fresh meat shipped from Australia to arrive in the UK in good condition. Scott (1937) also wrote of predicting the growth of microorganisms during the chilling of carcasses in a meat works. When predictive modelling took off in the 1970s/1980s, a significant contributor to the literature was C.O. Gill, then with the Meat Industry Research Institute, New Zealand (MIRINZ), who, with his colleagues, produced models describing the growth of E. coli during meat chilling processes. Gill's work was quickly incorporated into a predictive software package, the Process Hygiene Index (PHI) (Gill et al., 1991a,b). Despite being based on a limited number of data points, the PHI appeared to provide a reasonable estimate of the potential for growth of E. coli during chilling. Similar work in Australia, at about the same time, by Smith (1987) led to an equivalent E. coli temperature model, and this type of study has continued in Australia to the present, resulting in the Australian Quarantine Inspection Service (AQIS) revising the Export Control (Meat and Meat Products) Orders (AQIS, 2005). Thus, chilling of meat carcasses in Australian export abattoirs is regulated on the basis of an E. coli growth model combined with data from electronic temperature loggers and software developed by Meat and Livestock Australia (MLA). These are used in a strictly controlled manner to provide an estimate for the potential growth of E. coli, expressed as the Refrigeration Index (http://www.mla.com.au). The underpinning model, based on > 1000 observations of the effects of temperature, water activity, pH and lactate concentration on E. coli growth, is detailed in Ross et al. (2003). The model was extensively evaluated by Mellefont et al. (2003) and compared very favourably with other published models for describing E. coli growth on meat carcasses.
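The Refrigeration Index itself is calculated with the validated model of Ross et al. (2003) and MLA's software; the hypothetical sketch below illustrates only the general principle of temperature function integration that such tools rely on: a temperature-dependent growth model is integrated over a logged time/temperature history to give an estimate of potential log10 growth. The square-root-type coefficient and notional minimum growth temperature used here are placeholders, not the parameters of the regulatory model.

```python
# Hypothetical illustration of temperature function integration. Parameter
# values are placeholders and are NOT those of the regulatory Refrigeration
# Index model of Ross et al. (2003).
B, T_MIN = 0.023, 7.0  # placeholder square-root coefficient and notional T_min (degC)

def growth_rate(temp_c):
    """Growth rate (log10 units per hour) from a square-root-type model; zero at or below T_min."""
    if temp_c <= T_MIN:
        return 0.0
    return (B * (temp_c - T_MIN)) ** 2

def potential_growth(time_temperature_log):
    """Sum rate x time over a list of (hours, temperature degC) records from a data logger."""
    return sum(hours * growth_rate(temp) for hours, temp in time_temperature_log)

# Example carcass chilling profile: (duration in hours, surface temperature in degC).
profile = [(2, 35.0), (4, 20.0), (6, 12.0), (12, 7.0)]
print(f"Estimated potential growth: {potential_growth(profile):.2f} log10 units")
```

In practice the regulatory calculation involves additional rules (for example on how and where temperature records are collected); the point of the sketch is simply that a growth model plus an electronic temperature record yields a single, auditable number.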
2.3.2 Inactivation of E. coli during production of uncooked, comminuted, fermented meats (UCFM)

Ross (1999), in a report prepared for the Australian meat industry, presented a critical review of predictive models and their application in that industry. A similar critical review of knowledge of E. coli inactivation in fermented meat processing (Ross and Shadbolt, 2001) was commissioned following a serious Enterohaemorrhagic E. coli (EHEC) outbreak in Australia in 1995 from a fermented meat product. This was a precursor to de novo experimentation to develop and validate a model to describe inactivation of E. coli during the fermentation and maturation of fermented meats. Subsequently, the relevant legislation was amended by the Australia New Zealand Food Authority [now Food Standards Australia New Zealand (FSANZ)] to include a requirement that:
the process of fermentation and any other subsequent processes must reduce prior to sale from the processing factory by 99.9% or greater the number of Escherichia coli organisms potentially present in an uncooked comminuted meat product.
The models developed provided strong support for assessing the ability of fermentation and maturation conditions to satisfy this criterion. Of the many studies reviewed, which were undertaken to measure the amount of E. coli inactivation during UCFM production, few attempted to provide an interpretation of the microbial ecology of the process. Instead, specific processes, or specific changes to processes, were evaluated. Those results offered little ability to predict the safety of other processes. However, by combining and reanalysing the available data, it was concluded that:
• the amount of inactivation that occurs during a specific UCFM process is governed largely by the times and temperatures used in the process;
• the extent of inactivation can be predicted using the predictive model, but accuracy of predictions is limited to +0.5 to +1.0 log kill;
• while lower pH and lower water activities are associated with greater inactivation, these factors explain much less of the observed inactivation than does temperature;
• an interpretation of the microbial ecology of UCFM processes, consistent with the available data, provides producers with flexibility to design processes that achieve the required level of E. coli inactivation and to maximise the diversity of product styles.
2.3.3 Thermal death kinetics and Campylobacter in broilers

Examples of systematic analysis of the literature to develop food safety management tools include van Asselt and Zwietering (2006), who analysed 4066 thermal inactivation curves for foodborne pathogens, and Adkin et al. (2006), who used a systematic review and analysis to assist the development of Campylobacter control strategies in the UK broiler industry. The approach adopted by van Asselt and Zwietering (2006) provides a global assessment, taking into account only the largest effects before proceeding to deal with more precise effects encountered in specific situations. Analysis at the 'macro-level' revealed that the effects of many factors reported in the literature to influence the D-value were smaller than the variability of all published D-values. A specific example was the disappearance of a shoulder reported for inactivation of Salmonella in beef (Juneja and Eblen, 2000) when the dataset was considered as part of a much larger dataset derived from several sources [for references see van Asselt and Zwietering (2006), Table 1.1]. Earlier work from the same laboratory reported a critical analysis of the irradiation parameter D10 for bacteria and spores under various conditions (van Gerwen et al., 1999) and of recontamination in factory environments (den Aantrekker et al., 2003). The objective of Adkin et al. (2006) was to produce an evidence-based ranking of the major contributing factors and sources of Campylobacter in broilers
produced in the UK. This was achieved by extraction of data from 159 research papers and allocation of the findings into 14 sources of on-farm contamination and 37 contributing factors. A ‘relevancy score’ was developed to weight and rank the findings for applicability to the Great Britain broiler industry based on country of origin of the data, sample size, strength of association and document type. On this basis, staggered depopulation of flocks and multiple bird ‘houses’ on-farm were identified as factors contributing to increased risk. Conversely, the introduction of a hygiene barrier, parent company and certain rearing seasons were assessed as decreasing risk. Despite the resource-intensive nature of the systematic review compared with narrative studies, the authors concluded that significant advantages were obtained, including increased transparency and the ability to investigate patterns and trends of value in the development of practical control measures for Campylobacter in UK broiler flocks.
2.4 Recent systematic analysis of literature and the advent of quantitative microbial risk assessment (QMRA)
QMRA reached food safety paradigm status in a much shorter time frame than HACCP (5–10 years, compared with 25–30 years). McMeekin and Ross (2002) attributed this to initiation of the concept of QMRA for food safety at the highest international level through the Codex Alimentarius Commission (CAC) in response to the General Agreement on Tariffs and Trade, Uruguay Round Agreements on Sanitary and Phytosanitary Measures and Technical Barriers to Trade (Hathaway, 1993; Hathaway and Cook, 1997). Arising from the CAC initiative, the Food and Agriculture Organisation and the World Health Organisation jointly commissioned studies to investigate selected pathogen/product combinations and to develop guidelines for best practice in QMRA. The reports arising from those studies are listed in Table 2.3. Each report was written by an international drafting group, usually comprising at least 6–10 experts. Subsequently, each was reviewed on several occasions through expert consultations, selected peer reviews and by members of the public in response to a call for comment. In both the Salmonella and Listeria monocytogenes risk assessments the expert consultations comprised a total of 36 experts (including the drafting group) plus, respectively, 12 and 20 peer reviewers. This level of expert input, review and comment suggests that the risk assessments of selected pathogen/product combinations have been thoroughly and systematically constructed.
2.5 QMRA and predictive microbiology
The relationship between HACCP, QMRA and predictive microbiology was described by McMeekin and Ross (2002) as the food safety triangle in which
Table 2.3 Reports published or ‘in press’ in the FAO/WHO Microbiological Risk Assessment Series (all available from: http://www.who.int/foodsafety/publications/micro/en/index.html)

Year  Topic and report type
2002  Risk assessments of Salmonella in eggs and broiler chickens: Interpretative Summary.
2002  Risk assessment of Salmonella in eggs and broiler chickens: Technical Report.
2003  Hazard characterisation for pathogens in food and water: Guidelines.
2004  Risk assessment of Listeria monocytogenes in ready-to-eat foods: Interpretative Summary.
2004  Risk assessment of Listeria monocytogenes in ready-to-eat foods: Technical Report.
2004  Enterobacter sakazakii and other microorganisms in powdered infant formula: Meeting Report.
2005  Exposure assessment of microbiological hazards in foods: Guidelines.
2005  Risk assessment of Vibrio vulnificus in raw oysters: Interpretative Summary and Technical Report (in press – electronic version available online).
2005  Risk assessment of choleragenic Vibrio cholerae O1 and O139 in warm water shrimp in international trade: Interpretative Summary and Technical Report (in press).
2006  Enterobacter sakazakii and Salmonella in powdered infant formula: Meeting Report.
2006  Risk assessment of Campylobacter spp. in broiler chickens: Interpretative Summary.
2006  Risk assessment of Campylobacter spp. in broiler chickens: Technical Report.
2006  Risk assessment of Vibrio parahaemolyticus in seafood: Interpretative Summary and Technical Report.
2006  Risk characterization of microbiological hazards in foods: Guidelines.
predictive models underpin the needs of food safety, both through the use of risk assessment tools to indicate strategies for food safety management and through the design of specific HACCP plans to enact those strategies. In our opinion, predictive microbiology will continue to support the full potential of HACCP and QMRA, and it clearly provides an important method to estimate the sum of microbial increases and the sum of microbial inactivation through the various stages of the farm-to-fork continuum, which underpins the Food Safety Objectives approach being advocated by the International Commission for Microbiological Specifications for Foods (ICMSF, 2002). Predictive microbiology models embedded in stochastic simulation models, relying on software packages such as @Risk (Palisade Corporation, Ithaca, NY, USA), Analytica® (Lumina Decision Systems Inc., Los Gatos, CA, USA) and Crystal Ball® (Decisioneering Inc., Denver, CO, USA), have developed into the preferred approach for QMRA (Cassin et al., 1998; Fazil et al., 2002; den Aantrekker et al., 2003; Ross and McMeekin, 2003; Oscar, 2004). Stochastic modelling techniques continue to be developed to address food safety issues, for example:
• single cell variability of Listeria monocytogenes grown on liver paté and cooked ham to compare challenge test data to predictive simulation (Francois et al., 2006);
• post-thermal treatment prevalence and concentration of Bacillus cereus spores in REPFEDS (Membré et al., 2006); and
• the very important debate of the impact of cattle testing strategies on human exposure to BSE agents in Japan (Tsutsui and Kasuga, 2006).
In the latter study, the influence of removing specified risk materials, and of age limits for tested cattle, on human exposure to the BSE agent in Japan was addressed by constructing a stochastic model using Monte Carlo simulation. Sensitivity analysis, conducted for key input variables, identified removal of specified risk materials as the predominant factor in reducing risk, resulting in an estimated decrease of 95% in the infectivity of meat for human consumption, whereas the impact of changing the age limit for testing was small. Stochastic modelling approaches will continue to be developed and refined, for example within the ambitious PathogenCombat project (www.pathogencombat.com), an EU FP 6 project comprising more than 40 European laboratories and the Australian Food Safety Centre of Excellence. PathogenCombat has eight specific objectives, including ‘the development of new mathematical models for pathogen control throughout the food chain and at time of consumption’. Specifically, it is proposed to achieve this objective by: 1. identification of CCPs by use of stochastic simulation models based on quantification of numbers of pathogens, including mycotoxin-producing fungi, and their virulence/toxicity expression along the entire food chain, and 2. estimation of microbial food safety risk at time of consumption based on stochastic simulation modelling of numbers of pathogens and their virulence/toxicity gene expression.
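As an illustration of the kind of stochastic simulation that packages such as @Risk or Analytica® support, the following minimal Python sketch propagates distributions for initial contamination, storage temperature and storage time through a simple growth model to an exposure estimate. It is not taken from any of the studies cited above; all distributions and parameter values are hypothetical placeholders.

```python
# Minimal Monte Carlo sketch of a stochastic exposure estimate: initial
# contamination, storage conditions and growth are treated as distributions
# rather than point values. All numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)
n_iter = 100_000

# Initial contamination (log10 CFU/g), assumed normally distributed.
log_n0 = rng.normal(loc=-1.0, scale=0.5, size=n_iter)

# Storage conditions: temperature (deg C) and time (h) vary between iterations.
temp = rng.triangular(left=4.0, mode=7.0, right=12.0, size=n_iter)
time_h = rng.uniform(24.0, 120.0, size=n_iter)

# Square-root-type growth rate model (illustrative coefficients only).
b, t_min = 0.03, 2.0
sqrt_mu = np.maximum(b * (temp - t_min), 0.0)       # sqrt(specific rate), 1/h
log_growth = (sqrt_mu ** 2) * time_h / np.log(10)   # log10 increase during storage

# Exposure at consumption (log10 CFU/g), capped at an assumed maximum density.
log_exposure = np.minimum(log_n0 + log_growth, 8.0)

print("median exposure: %.2f log10 CFU/g" % np.median(log_exposure))
print("95th percentile: %.2f log10 CFU/g" % np.percentile(log_exposure, 95))
```

In a full QMRA the output distribution would feed a dose–response model, and sensitivity analysis would identify which inputs dominate the estimated risk.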
2.6 Advances in technology for the application of predictive models
Chapters 6 and 7 in McMeekin et al. (1993) dealt, respectively, with ‘Predicting shelf life of chill stored food by temperature function integration’ and ‘Evaluation of hygiene and microbiological safety of products and processes’ also using temperature function integration (TFI). Chapters 8 and 9 considered the ‘hardware’ available at that time: time–temperature indicators based on chemical or physical phenomena and electronic data loggers. In the interim, both approaches continue to be used, with the balance now greatly in favour of electronic devices. The ‘trick’ was to link the recorded temperature history to the model describing the accumulated effect on microbial population behaviour via the relative rate concept, an early example of which was described by Nixon (1971) and refined soon after by Olley and Ratkowsky (1973a,b) in their quest for a universal spoilage model. Nixon (1977) also devised and patented an early electronic temperature logger. The basic principle was that, for each temperature able to be discriminated,
there was a unique rate of spoilage value held in a PROM (programmable read only memory), which was used to control the rate of an electronic clock. Temperature signals were thus converted to corresponding rates, which were integrated over time. Similar devices were commercially available in the early 1990s, including the Don Whitley Time Temperature Function Integrator, the Remonsys Smartlog and the Delphi temperature logger, details of which are given in Chapter 7 of McMeekin et al. (1993). Electronic temperature loggers available today are based on the same TFI principle, but convert environmental measurements (data) mathematically into predictions of microbial behaviour (knowledge) and are widely used for both shelf life and safety applications. However, there are potential limitations with data recovery, e.g. loss of information with non-return of loggers, retrospective analysis of the microbiological consequences of a temperature profile and manual examination for ‘progress reports’ as the product moves through the supply chain (McMeekin et al., 2005). To overcome these limitations, technologies for near-time reporting are being introduced. Some of these, such as radio frequency identification, have, in fact, been available for several decades. An alternative is meshed wireless network technology, an example of which is the Smart-Trace™ system developed by Ceebron Pty Ltd, Sydney, Australia. This uses an ad hoc wireless mesh network platform in which disposable sensor tags, located on individual pallet loads, relay information from one to another and on to an external reader. Systems such as Smart-Trace™, when overlaid with predictive models, connect the quest for traceability and authentication of product origin with the requirements of food safety and stability management. Potentially, this allows the latter goals to be reported in near real-time, and new levels of precision and flexibility to be achieved in managing the safety and stability of food during storage and distribution (McMeekin et al., 2005).
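The relative rate concept behind TFI can be sketched in a few lines of code: each logged temperature is mapped to a rate relative to a reference temperature and the relative rates are summed over time to give an ‘equivalent storage time’. This is an illustration only; the square-root-type model, its coefficients and the temperature profile are assumptions, not values from the devices described above.

```python
# Sketch of temperature function integration (TFI): a logged temperature
# history is converted, reading by reading, into a relative rate via a
# square-root-type dependence and accumulated over time. All coefficients
# and the temperature profile are illustrative assumptions.
import numpy as np

def relative_rate(temp_c, t_min=-2.0, t_ref=0.0):
    """Rate at temp_c relative to the rate at the reference temperature,
    assuming sqrt(rate) is proportional to (T - Tmin)."""
    return ((temp_c - t_min) / (t_ref - t_min)) ** 2

# Hourly temperature log for a product moving through a chill chain (deg C).
temps = np.concatenate([np.full(12, 0.0),   # cold store
                        np.full(6, 6.0),    # transport abuse
                        np.full(30, 2.0)])  # retail display

rates = relative_rate(temps)            # dimensionless, 1.0 at the reference
equivalent_hours = float(rates.sum())   # each reading represents one hour

print("Clock time: %d h" % temps.size)
print("Equivalent storage time at 0 deg C: %.0f h" % equivalent_hours)
```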
2.7 Conclusions
In this introductory chapter, our themes have ranged far and wide, charting the historical development of predictive microbiology, current activity in the field and likely developments. The latter appear to be inevitable consequences of the quantitative approach to the microbial ecology of foods embodied in predictive microbiology, viz: systematic and critical analysis of the literature, elucidation of knowledge and its summary in mathematical models, better decision support through use of stochastic simulation modelling techniques and application of that knowledge through integration with enabling technologies. Several of the latter, particularly those connecting food safety and quality management with traceability and authenticity outcomes, present exciting opportunities. Are we on the verge of being able to predict the next food safety hazard before it emerges? If so, then we should recall the dictum enunciated by Lord Kelvin (William Thomson, 1824– 1907):
When you can measure what you are speaking about, and express it in numbers you know something about it; but, when you cannot … your knowledge is of a meagre and unsatisfactory kind – it may be the beginning of knowledge, but you have scarcely advanced to the state of Science.
Lord Kelvin, in his quest to characterise an absolute temperature scale, designed and used appropriate technology and understood the value of accurate and precise measurements at close intervals. Likewise, connecting good science and rigorous analysis with appropriate technology continues to point the way forward for ‘Predictive Microbiology’.
2.8 References
Adkin A, Hartnett E, Jordan L, Newell D and Davison H (2006) Use of a systematic review to assist the development of Campylobacter control strategies in broilers, J Appl Microbiol, 100, 306–315. AQIS (Australian Quarantine Inspection Service) (2005) Export Control (Meat and Meat Products) Orders 2005. http://www.comlaw.gov.au/ComLaw/Legislation/Legislative InstrumentCompilation1.nsf/ Baird-Parker AC and Kilsby DC (1987) Principles of predictive food microbiology, J Appl Bacteriol, Symposium Supplement, 43S–49S. Baranyi J and Tamplin ML (2004) ComBase: a common database on microbial responses to food environments, J Food Prot, 67, 1967–1971. Barber MA (1908) The rate of multiplication of Bacillus coli at different temperatures, J Infect Dis, 5, 379–400. Bigelow WD (1921) The logarithmic nature of thermal death time curves, J Infect Dis, 29, 528–536. Bigelow WD, Bohart GS, Richardson AC and Ball CO (1920) Heat penetration in processing canned foods, National Canners Association Bulletin, 16–L. Bridson EY and Gould GW (2000) Quantal microbiology, Lett Appl Microbiol, 30, 95–98. Cassin MH, Lammerding AM, Todd ECD, Ross W and McColl RS (1998) Quantitative risk assessment for Escherichia coli O157:H7 in ground beef hamburgers, Int J Food Microbiol, 41, 21–44. Den Aantrekker ED, Beumer RR, Van Gerwen SJC, Zwietering MH, Van Schothorst M and Boom RM (2003) Estimating the probability of recontamination via the air using Monte Carlo simulations, Int J Food Microbiol, 87, 1–15. Devlieghere F, Francois K, De Meulenaer B and Baert K (2006) Modelling food safety, in PA Luning, F Devlieghere and R Verhé (Eds) Safety in the Agri-food Chain, Wageningen, Wageningen Academic Publishers, 397–438. Esty JR and Meyer KF (1922) The heat resistance of the spores of B. botulinus and related anaerobes, J Infect Dis, 31, 650–663. Farber JM (1986) Predictive modelling of food deterioration and safety, in MD Pierson and NJ Stern (Eds) Foodborne Microorganisms and Their Toxins, New York, Marcel Dekker, 57–90. Fazil AM, Ross T, Paoli G, Vanderlinde P, Desmarchelier P and Lammerding AM (2002) A probabilistic analysis of Clostridium perfringens growth during food service operations, Int J Food Microbiol, 73, 315–329. Francois K, Devlieghere F, Uyttendaele M, Standaert AR, Geeraerd AH, Nadal P, Van Impe JF and Debevere J (2006) Single cell variability of L. monocytogenes grown on liver paté and cooked ham at 7 °C: comparing challenge test data to predictive simulations, J Appl Microbiol, 100, 800–812.
Genigeorgis CA (1981) Factors affecting the probability of growth of pathogenic microoganisms in foods, J Amer Vet Med Assoc, 179, 1410–1417. Gill CO, Harrison JCL and Phillips DM (1991a) Use of a temperature function integration technique to assess the hygienic adequacy of beef carcase cooling process, Food Microbiol, 8, 83–94. Gill CO, Jones SDM and Tong AKW (1991b) Application of a temperature function integration technique to assess the hygienic adequacy of a process for spray chilling beef carcasses, J Food Prot, 54, 731–736. Hathaway SC and Cook RL (1997) A regulatory perspective on the potential uses of microbial risk assessment in international trade, Int J Food Microbiol, 36, 127– 133. Hathaway SC (1993) Risk assessment procedures used by the Codex Alimentarius Commission and its subsidiary and advisory bodies, Joint FAO/WHO Foods Standards Programme, 20th Session of the Codex Alimentarius Commission, Geneva. International Commission for the Microbiological Specifications for Foods (ICMSF) (2002) Microorganisms in Foods 7e, Microbiological testing in food safety management, New York, Kluwer Academic Plenum Publishers. Juneja VK and Eblen BS (2000) Heat inactivation of Salmonella typhimurium DT104 in beef as affected by fat content, Lett Appl Microbiol, 30, 461–467. Leporq B, Membré J-M, Dervin C, Buche P and Guyonnet JP (2005) ‘The Sym'Previus’ software, a tool to support decisions to the foodstuff safety, Int J Food Microbiol, 100 231– 237. Luning PA, Devlieghere F and Verhe R (2006) Safety in the Agri-food Chain, Wageningen, Wageningen Academic. McDonald K and Sun DW (1999) Predictive food microbiology for the meat industry: a review, Int J Food Microbiol, 52, 1–27. McKellar RC and Lu X (Eds) (2003) Modeling Microbial Responses in Food, Boca Raton, FL, CRC Press. McMeekin TA and Olley JN (1986) Predictive microbiology, Food Technol Aust, 38, 331– 334. McMeekin TA and Ross T (2002) Predictive microbiology: providing a knowledge-based framework for change management, Int J Food Microbiol, 78, 133–153. McMeekin TA, Olley JN, Ross T and Ratkowsky DA (1993) Predictive Microbiology: Theory and Application, Taunton, Research Studies Press. McMeekin TA, Baranyi J, Bowman JP, Dalgaard P, Kirk M, Ross T and Zwietering MH (2006) Information systems in food safety management, Int J Food Microbiol, 112, 181– 194. McMeekin TA, Szabo L and Ross T (2005) Connecting science with technology to improve microbial food safety management, Int Rev Food Sci Technol, Winter 2005/2006, 125– 130. Mellefont LA, McMeekin TA and Ross T (2003) Performance evaluation of a model describing the effects of temperature, water activity, pH and lactic acid concentration on the growth of Escherichia coli, Int J Food Microbiol, 82, 45–58. Membré J-M, Amézquita A, Bassett J, Giavedoni P, Blackburn C De W and Gorris LGM (2006) A probabilistic modelling approach in thermal inactivation: estimation of postprocess Bacillus cereus spore prevalence and concentration, J Food Prot, 69, 118–129. Monod J (1949) The growth of bacterial cultures, Annu Rev Microbiol, 3, 371–394. Mossel DAA and Struijk CB (1991) Public health implications of refrigerated pasteurised “sous vide” food, Int J Food Microbiol, 13, 187–206. Nixon PA (1971) Temperature integration as a means of assessing storage conditions, in Report on Quality in Fish Products, Seminar No. 3, Wellington, Fishing Industry Board, pp. 34–44. Nixon PA (1977) Temperature function integrator, United States Patent 4,061,033. 
Olley J and Ratkowsky DA (1973a) Temperature function integration and its importance in
the storage and distribution of flesh foods above the freezing point, Food Technol Aust, 25, 66–73. Olley J and Ratkowsky DA (1973b) The role of temperature function integration in monitoring fish spoilage, Food Technol NZ, 8, 13, 15, 17. Oscar TP (2004) Simulation model for enumeration of Salmonella on chicken as a function of PCR detection time score and sample size: Implications for risk assessment, J Food Prot, 67, 1201–1208. Peleg M and Cole MB (1998) Reinterpretation of microbial survival curves, Crit Rev Food Sci Nutr, 38, 353–380. Roberts TA and Jarvis B (1983) Predictive modelling of food safety with particular reference to Clostridium botulinum in model cured meat systems, in TA Roberts and FA Skinner (Eds) Food Microbiology: Advances and Prospects, Orlando, FL, Academic Press, pp. 85–95. Roberts TA, Gibson AM and Robinson A (1981) Prediction of toxin production by Clostridium botulinum in pasteurised pork slurry, J Food Technol, 16, 337–355. Ross T (1999) Predictive Food Microbiology Models in the Meat Industry (MSRC.003), Sydney, Meat and Livestock Australia. Ross T and McMeekin TA (1994) Predictive microbiology, Int J Food Microbiol, 23, 241– 264. Ross T and McMeekin TA (2003) Modeling microbial growth within food safety risk assessments, Risk Analysis, 23, 179–197. Ross T and Shadbolt C (2001) Predicting Escherichia coli inactivation in uncooked comminuted fermented meat products, Sydney, Meat and Livestock Australia. Ross T, Dalgaard P and Tienungoon S (2000) Predictive modelling of the growth and survival of Listeria in fishery products, Int J Food Microbiol, 62, 231–245. Ross T, Ratkowsky DA, Mellefont LA and McMeekin TA (2003) Modelling the effects of temperature, water activity, pH and lactic acid concentration on the growth rate of Escherichia coli, Int J Food Microbiol, 82, 33–43. Scott WJ (1937) The growth of micro-organisms on ox muscle. II. The influence of temperature, Journal of the Council of Scientific and Industrial Research, Australia, 10, 338–350. Smith MG (1987) Calculation of the expected increases of coliform organisms, Escherichia coli and Salmonella typhimurium, in raw blended mutton tissue, Epidemiol Infect, 99, 323–331. Spencer R and Baines CR (1964) The effect of temperature on the spoilage of wet fish. I. Storage at constant temperature between –1°C and 25°C, Food Technol, Champaign, 18, 769–772. Tsutsui T and Kasuga F (2006) Assessment of the impact of cattle testing strategies on human exposure to BSE agents in Japan, Int J Food Microbiol, 107, 256–264. Van Asselt ED and Zwietering MH (2006) A systematic approach to determine global thermal inactivation parameters for various food pathogens, Int J Food Microbiol, 107, 73–82. Van Gerwen SJC, Rombouts FM, Van’t Riet K and Zwietering MH (1999) A data analysis of the irradiation parameter D10 for bacteria and spores under various conditions, J Food Prot, 62, 1024–1032.
3 Experimental design, data processing and model fitting in predictive microbiology
M. A. J. S. van Boekel and M. H. Zwietering, Wageningen University, The Netherlands
3.1 Introduction
Models are simplifications of the real world and they are tools to handle complex matters in research, in product and process design, in risk analysis and in management. Furthermore, they can be used to test, combine and represent hypotheses. Models exist in many forms, but here we refer to mathematical models. This implies that functional relationships between variables are expressed in algebraic and/or differential equations. A very simple relationship is, for example, the algebraic equation y = a + bx. In this equation y is the dependent variable (the response), x the independent, controllable variable, while a and b are the parameters. Parameters can be given numerical values that describe some characteristic, for instance a growth rate. When it comes to modelling the behaviour of microorganisms in foods, there are some general good modelling practices to take into account, as well as some specific aspects relating to microorganisms. One goal of modelling is to understand a system, but another one is to predict future behaviour. Modelling is partly a theoretical exercise but, in order to validate models, we need to confront them with the real world, i.e. with data. Data are collected from observational studies (not relevant for microbial modelling) or from experimental studies by manipulating the system under study. The data that we collect are error-ridden, and therefore the models that we fit to data will be error-ridden as well, that is to say, the parameters in the model that we estimate by fitting the model to the data will be
uncertain. A major problem with modelling behaviour of microorganisms is variability and uncertainty. As living systems, microorganisms behave in an inherently variable way. Usually, we are studying large numbers of microorganisms, but the basis for population behaviour is of course the behaviour of individual cells. Next to this biological variation, our methods of detection are not so accurate and this causes uncertainty. So, we are really in need of statistics to be able to deal with this uncertainty and variation. Modelling can be seen as an iterative process (Fig. 3.1). Usually, the cycle starts with a hypothesis or a conjecture, and this hypothesis needs to be tested. In the case of microbial modelling, testing is done by studying the behaviour of microorganisms in experiments, so design of experiments is needed. Experiments can be carried out in well-controlled systems (precisely controlled media, for instance) or with foods (that are heterogeneous and have a variable composition; they are not well defined in most cases). The design should be tailored to the research question, and it is therefore essential to state the research question explicitly. The way we collect our data is crucial. It is not possible to correct for a bad experimental design in a later stage of data processing, other than by starting all over again, but that is obviously a waste of time and resources. After collecting data we need to process them to extract the relevant information or, to put it another way, to be able to separate noise from information that is contained within the data. To be able to separate noise from real information we need information about experimental errors, and this should also be part of the experimental design. After analysis, the results need to be critically evaluated. The hypothesis is thus confronted with the data and, in many cases, the hypothesis will need to be adjusted. This requires new experiments, and so we enter the modelling cycle once again. This continues until the researcher is confident that the hypothesis and the data are consistent, if possible supported by some measure.
Fig. 3.1 The iterative process of modelling: conjecture/hypothesis → design of experiments → experiments → analysis of experimental results → back to conjecture/hypothesis.
An important aspect in ‘good modelling practice’ is the principle of parsimony, sometimes also referred to as Ockham’s razor. It implies that models should contain just enough parameters to describe the phenomenon under study, but no more. The point is that models will fit better to data when they contain more parameters. The price to be paid for this phenomenon is that the uncertainty in the parameters dramatically increases with the number of parameters, and so also does the complexity. Hence, the balance should be sought in such a way that there are enough parameters to describe the data satisfactorily but no more than that. For that reason, it is important always to check the precision of parameter estimates obtained. If a parameter is obtained with low precision, this can mean two things: i) it is redundant, ii) the data do not contain enough information to estimate the parameter well. It is the modeller’s responsibility to take action when this happens. When two models are able to describe data more or less equally well, preference should be given to the model with the lowest number of parameters.
3.2 Experimental design
As stated in the introduction, experimental design is crucial for the next stages in the modelling process. Admittedly, it is probably impossible to come up with a perfect design for microbial models because there are too many factors involved. There is variability between strains and even within a pure culture, phenomena that we are only now starting to describe in molecular mechanistic terms. The history of test microorganisms may have been different, there is variation in the inoculum’s size, the composition of media used for growing is variable, and the available analysis methods lead to variable results. Even though a perfect design is not possible, a good design is worth thinking about. Careful thought needs to be given to the research question: what exactly do we want to study? What sort of models are we interested in? Is the modelling exercise meant to explore factors that are relevant for the case under study (model building) or do we have a fair idea of the model? Do we need experiments to estimate parameters as precisely as possible (model parameter estimation); do we want to validate our model? Depending on the answers to these questions, we can start to design experiments. Many statistical textbooks and papers are available on experimental design in general (e.g. Box et al., 1978; Atkinson and Donev, 1992; Hu, 1999; Rasch and Gowers, 1999; Versyck, 2000; Rasch, 2004), but not so many are specifically tuned to microbiological experiments (Davies, 1993; Versyck, 2000; Rasch, 2004). Here we will limit ourselves to a general discussion of experimental design.
3.2.1 Response surface modelling If the research question is about studying multiple factors at a time (such as temperature, pH, water activity, salt level, etc.) that may have an impact on a specific response (such as lag-time, growth rate, etc.), the statistical technique
categorized as DOE (design of experiments) may be appropriate. It is then a better approach than the commonly used one-factor-at-a-time approach, which implies that one factor is varied (e.g. pH) while all other factors (e.g. temperature, water activity, etc.) are kept constant (see Fig. 3.2(a) for a two-factor design). This can be used if one is specifically interested in the curvature of the effect of various factors. The problem with the one-factor-at-a-time approach is that possible interaction between factors will go unnoticed. Other designs are needed to investigate interaction terms. The most complete design is then a factorial design, meaning that all factors will be studied in combination. Figure 3.2(b) gives an example of a 2² factorial design, i.e. two factors are tested at two levels.

Fig. 3.2 Schematic comparison of a one-factor-at-a-time design (A) and a 2² factorial design (B). X1 and X2 are the factors to be varied.

A 2ⁿ factorial design supports the building of a linear model with or without interaction terms between the factors. A general equation for this is:

y = \beta_0 + \sum_{j=1}^{n} \beta_j X_j + \sum_{j=1}^{n} \sum_{k=j+1}^{n} \beta_{jk} X_j X_k   [3.1]
The third term on the right-hand side of the equation accounts for interaction between the factors. The two levels are usually boundary values of the research range. However, a 2ⁿ factorial design is not suited for building a model to estimate non-linear relationships between factors. A 3ⁿ factorial design is more suitable for this purpose, provided there are fewer than four factors. Three test levels are used (high, middle, low) and a suitable polynomial model to account for non-linear (quadratic) effects and interaction is:

y = \beta_0 + \sum_{j=1}^{n} \beta_j X_j + \sum_{j=1}^{n} \sum_{k=j+1}^{n} \beta_{jk} X_j X_k + \sum_{j=1}^{n} \beta_{jj} X_j^2   [3.2]
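As a concrete illustration of equations [3.1] and [3.2], the following sketch builds a 3² full factorial design in coded levels and fits the quadratic polynomial by ordinary least squares. The response values are synthetic and the coefficients arbitrary; the snippet only shows the mechanics of constructing the model matrix.

```python
# Sketch: a 3^2 full factorial design (two factors, three coded levels) and a
# least-squares fit of the quadratic polynomial of equation [3.2]. The
# response is synthetic, for illustration only.
import itertools
import numpy as np

levels = [-1.0, 0.0, 1.0]                                    # coded factor levels
design = np.array(list(itertools.product(levels, levels)))   # 9 runs
x1, x2 = design[:, 0], design[:, 1]

# Synthetic response: linear effects, an interaction, curvature and noise.
rng = np.random.default_rng(0)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + 0.6 * x1 * x2 - 0.5 * x1**2 \
    + rng.normal(0.0, 0.05, len(x1))

# Model matrix for y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["b0", "b1", "b2", "b12", "b11", "b22"], beta):
    print(f"{name} = {b: .3f}")
```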
A major drawback of full factorial design is that the number of experiments needed quickly gets out of hand. If, for instance, four factors are tested at three levels, the number of experiments needed is already 3⁴ = 81. In order to reduce the necessary number of experiments, fractional factorial designs have been developed. This means that not all factor combinations are tested, but rather only a fraction of them. By choosing the experimental settings in a clever way, maximum information is obtained at minimum cost. In that class of designs, several options exist, for instance the Box–Behnken design, the Plackett–Burman design and the Doehlert matrix, to name a few. Some applications of these designs have been reviewed (Rasch, 2004). Fractional factorial designs are especially suitable for screening purposes in order to roughly estimate the effect of each factor (Davies, 1993). If fractional rather than full factorial designs are used, the price to be paid is that estimates of interaction terms will be less accurate. Again, the choice between them depends on the question to be answered. If an optimum is to be found over a number of factors, for instance optimal growth, a central composite design is appropriate. The design is then centred around the main region of interest, with fewer measuring points at the extremes of the region; Fig. 3.3 shows a schematic representation of a central composite design for two variables. In a central composite design each independent variable is tested at five levels; it is a reduced version of a full 5ⁿ factorial design. If one searches for optima, one should realise that polynomial equations are often parabolic and that, if the real biological or physical effect is very asymmetric, the optimum may be estimated wrongly. Rasch (2004) has reviewed several applications of DOE in the domain of microorganism modelling. The outcomes of these designs can be modelled via polynomials, which can be represented as three-dimensional surfaces, hence the name response surface modelling (RSM). Central composite designs in particular form the basis for such response surfaces. RSM is helpful in learning about the effects of factors and, most importantly, about their interaction. However, it is doubtful whether polynomials are useful for microbial modelling. Polynomials are empirical in the sense that they are not based on any presumed mechanism.

Fig. 3.3 Schematic representation of a central composite design for two factors.

The major drawback is that the
parameters of such models do not have a physical meaning, while extrapolation outside the experimental region for which they were derived is not allowed. Also, polynomial models tend to be over-parameterized (Ratkowsky, 2004), and so the principle of parsimony is not obeyed, as discussed above. The question of whether mechanistic models, as opposed to empirical ones, enable the behaviour of microorganisms to be characterized is an interesting one. It is fair to say that growth and inactivation of microorganisms are such complicated processes that the mechanisms governing them are not well understood. At best, the current microbial models are semi-mechanistic, that is to say, the mathematical relationship is empirical, but it may be cast in such a form that its parameters can be given a physical interpretation. For example, in the Gompertz growth model, parameters relate to lag-time, growth rate and asymptotic value. The term ‘interaction’ is used above. It should be noted, however, that an interaction term in the polynomial equation does not automatically imply real physical or biological interaction between two factors. Take, for example, the case where both pH and temperature (T) act linearly on a variable, and both act in a multiplicative and independent manner:

y = a(T + b)(pH + c)   [3.3]

This equation expands into the polynomial equation

y = acT + abpH + aTpH + abc = fT + gpH + hTpH + i   [3.4]

which contains an interaction term, even though in the original equation no real interaction exists between the effects of T and pH.
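The expansion from equation (3.3) to (3.4) can be checked symbolically; the short sketch below (using the sympy library) simply confirms that the product of two independent linear effects generates a T·pH cross-term.

```python
# Expanding equation (3.3) symbolically reproduces the interaction term of
# equation (3.4) even though T and pH act independently and multiplicatively.
import sympy as sp

a, b, c, T, pH = sp.symbols("a b c T pH")
y = a * (T + b) * (pH + c)
# The expansion contains the cross-term a*T*pH plus a*c*T, a*b*pH and a*b*c.
print(sp.expand(y))
```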
3.2.2 Optimal design While (fractional) factorial design works with polynomials, leading to response surface models, a different approach is needed for parameter estimation when the model structure is assumed known. For instance, if the researcher has decided that a modified Gompertz model is suitable for the problem at hand (i.e. the model building phase is finished), he can next decide to design experiments in such a way that the resulting parameters will have the highest precision possible. Thus, the variances of the estimated parameters should be as small as possible. As it happens, these variances depend on the experimental design as well as on the precision with which measurements can be made. In order to appreciate how experimental design can help in finding parameters with low variances, it is instructive to study parameter sensitivities. A parameter sensitivity function expresses how sensitive a model is with respect to the parameter under study, and it is found by taking the partial derivative of the model with respect to that parameter. For example, for a first-order exponential decay model, N = N₀ exp(−kt), the sensitivity s with respect to parameter k (rate constant) is:

s_k = \frac{\partial \left( N_0 \exp(-kt) \right)}{\partial k} = -N_0 t \exp(-kt)   [3.5]
Fig. 3.4 Example of the parametric sensitivity of parameter k (s_k) in the model N = N₀ exp(−kt) for N₀ = 1 and k = 1. The plot shows y = exp(−kt) and s_k against time (arbitrary units), with concentration in arbitrary units.
where N is concentration and t is time. Figure 3.4 shows this sensitivity graphically. It illustrates that the region where the model is most sensitive to the parameter is near t = 1 in this example, and it implies that the most informative results will be obtained if measurements are taken around this time value. In contrast, measurements taken at, say, t > 3 will not yield much information about the model. This changes drastically, however, if N is measured over several decades and the error in log(N), rather than the error in N, has a constant standard deviation. In that case, the model to fit becomes ln(N) = ln(N₀) − kt, which is a linear model, and the parameter sensitivity is:

s_k = \frac{\partial \left( \ln(N_0) - kt \right)}{\partial k} = -t   [3.6]
This means that the sensitivity is greatest at large times, so one should measure over long periods. A first-order model is a rather simple one and, in practice, models will be more complicated. A model can be represented very generally by:

y = f(θ, x)   [3.7]
y is the dependent variable (response, e.g. number of microorganisms), x the independent variables (e.g. time, pH) while θ represents the parameters. If we have u settings of x (1..u) and p parameters θ (1..p), we can compose a matrix of sensitivity functions, called the design matrix V:
V = \begin{pmatrix}
\dfrac{\partial f(\theta, x_1)}{\partial \theta_1} & \dfrac{\partial f(\theta, x_1)}{\partial \theta_2} & \cdots & \dfrac{\partial f(\theta, x_1)}{\partial \theta_p} \\
\dfrac{\partial f(\theta, x_2)}{\partial \theta_1} & \dfrac{\partial f(\theta, x_2)}{\partial \theta_2} & \cdots & \dfrac{\partial f(\theta, x_2)}{\partial \theta_p} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial f(\theta, x_u)}{\partial \theta_1} & \dfrac{\partial f(\theta, x_u)}{\partial \theta_2} & \cdots & \dfrac{\partial f(\theta, x_u)}{\partial \theta_p}
\end{pmatrix}   [3.8]
From this design matrix, a so-called Fisher information matrix F = VᵀV can be formed (Vᵀ is the transpose of matrix V). Finally, the variance–covariance matrix of the estimated parameters, M, is defined as:

M = (VᵀV)⁻¹ s²   [3.9]
in which s² is the residual variance, i.e. the residual sum of squares divided by the degrees of freedom (number of observations minus the number of parameters). This equation shows nicely that the variance of the estimated parameters depends on two factors:

1. the Fisher information matrix, built upon the experimental design matrix V; it is important to realize that the Fisher matrix is not determined by the experimental observations but by the experimental settings of the x-values, in other words, by the experimental design;
2. the residual variance s², which obviously depends on the experimental observations.

Consequently, if we want to minimize the elements of the variance–covariance matrix M, and thereby the variances of the parameters, the following considerations are in order.

• The residual variance should be small. This is achieved by employing a correct model (so that the residuals will be small) and making precise measurements (so that the spread of experimental measurements is small).
• The elements of the inverse of the design matrix, (VᵀV)⁻¹, should be small. This can be achieved by careful spreading out of the experiments and/or by increasing the number of experiments.

The important message of the above discussion is that the information or design matrix F is important in experimental design. If the determinant of VᵀV, symbolized by |VᵀV|, is maximized, this minimizes the volume of the confidence region, and this is what we should aim for because it gives the most precise estimates. This criterion is therefore called ‘D-optimal design’, where the ‘D’ stands for determinant. (The determinant is a scalar computed from the elements of a matrix; a determinant exists only for square matrices.) Microbial models are usually non-linear models. D-optimal designs for non-linear models depend on the values of the parameters (in contrast to linear models), and the problem is that we
usually do not know the parameters, at least not accurately. The designs are, therefore, called locally optimum, using the best guess for the parameters. Basically, it follows from optimum design theory (Atkinson and Donev, 1992) that a function is sought that is minimized in searching over the design region; this function often depends on the variances of the parameter estimates. If a model contains p parameters, at least N = p trials will be required but, if the variance needs to be estimated from the data, more than N = p trials are necessary. The design for a non-linear model depends on the values of the parameters. Because of this dependence, sequential designs are more important for non-linear than for linear models. This means that we start with a guess, carry out the experiment and update the design with this new information. An informative treatment of optimal designs is given by Atkinson and Donev (1992). D-optimal designs have been used in quantitative microbiology (e.g. Grijspeerdt, 1999; Versyck et al., 1999; Bernaerts et al., 2000; Versyck, 2000; Bernaerts et al., 2005). Besides D-optimal design, other designs are possible, such as A-optimal, E-optimal and T-optimal designs (Atkinson and Donev, 1992). The design to choose depends on the goal of the experiment. An E-optimal design in relation to quantitative microbiology has been used by Bernaerts et al. (2005). To indicate briefly the differences between the various criteria, the following points can be made. The A-optimal design leads to a minimization of the mean estimation error variance; the D-criterion minimizes the overall (generalized) variance of the parameter estimates without considering the accuracies of the individual parameters; the E-criterion minimizes the largest parameter estimation uncertainty (Versyck, 2000). T-optimal design is for testing various models, i.e. to optimize model discrimination. We will not discuss these designs any further here but refer to the literature (Atkinson and Donev, 1992; Versyck, 2000).
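The alphabetic criteria can be made concrete with a small numerical sketch: given a sensitivity matrix V as in equation [3.8], the D-, A- and E-criteria are simple functions of VᵀV and its inverse. The matrix below is an arbitrary illustrative example, not taken from the cited studies.

```python
# Sketch: given a sensitivity (design) matrix V as in equation [3.8], compute
# the quantities behind the D-, A- and E-optimality criteria discussed above.
import numpy as np

V = np.array([[0.39, 15.8],
              [0.78, 22.3],
              [0.95, 10.0]])           # u = 3 settings, p = 2 parameters

F = V.T @ V                            # Fisher information matrix (V^T V)
M = np.linalg.inv(F)                   # proportional to the parameter covariance

print("D-criterion: det(V^T V) =", np.linalg.det(F))                      # maximise
print("A-criterion: trace(M)   =", np.trace(M))                           # minimise
print("E-criterion: max eigenvalue of M =", np.linalg.eigvalsh(M).max())  # minimise
```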
3.2.3 Design in practice It should be noted that the optimal designs referred to in the previous paragraph are optimal only in the estimation of the parameters, and a researcher often wants two different things: 1. test curvature, i.e. look for a suitable model; 2. estimate the parameters accurately. Optimal designs are focused only on this second objective, while an equidistant spread of points often characterises a curvature best. Therefore, intermediate approaches might be followed with some points being selected mainly for the accurate prediction of the parameters (often largely spread out) and some to determine or test the supposed curvature. To explain this approach a relatively simple example is worked out. Suppose we have obtained the data points shown in Table 3.1. The first thing to do is to plot the data; this helps in searching for the right model: see Fig. 3.5. As shown in this figure, it might be a linear relationship, but there is also a hint of non-linearity. This is a typical example where the experimental design is not good enough to discover
Table 3.1 Hypothetical data showing concentration as a function of time up until t = 4

Time t (arbitrary units)    Concentration N (arbitrary units)
0                           0
1                           1.3
2                           2.6
3                           3.7
4                           4.4
Fig. 3.5 Plot of data shown in Table 3.1. The line shown is the result of linear regression.
possible curvature. Whether or not this is important depends on the goal. If the interest is also in times beyond t = 4, this design cannot answer the question as to whether the relationship remains linear. The remedy here is to investigate also beyond t = 4. Suppose we do that and we find results as displayed in Table 3.2 (note that the first five data points are the same as in Table 3.1). The plot of these data is given in Fig. 3.6.
Table 3.2 Hypothetical data showing concentration as a function of time up until t = 20

Time t (arbitrary units)    Concentration N (arbitrary units)
0                           0
1                           1.3
2                           2.6
3                           3.7
4                           4.4
5                           5.3
7                           6.5
10                          7.7
15                          9.0
20                          9.6
Fig. 3.6 Plot of data shown in Table 3.2. The line is the fit of a first-order equation.
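The first-order fit shown in Fig. 3.6 can be reproduced by non-linear regression; the sketch below uses scipy's curve_fit on the Table 3.2 data and returns estimates close to the values N₀ = 10 and k = 0.15 used later in this section. The starting values supplied to the optimiser are arbitrary choices.

```python
# Sketch: fitting the first-order model N = N0*(1 - exp(-k*t)) to the data of
# Table 3.2 by non-linear regression (the fit shown in Fig. 3.6).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 1, 2, 3, 4, 5, 7, 10, 15, 20], dtype=float)
N = np.array([0, 1.3, 2.6, 3.7, 4.4, 5.3, 6.5, 7.7, 9.0, 9.6])

def first_order(t, n0, k):
    return n0 * (1.0 - np.exp(-k * t))

popt, pcov = curve_fit(first_order, t, N, p0=[10.0, 0.1])  # p0 is an arbitrary start
perr = np.sqrt(np.diag(pcov))
print("N0 = %.2f +/- %.2f" % (popt[0], perr[0]))
print("k  = %.3f +/- %.3f" % (popt[1], perr[1]))
```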
It is clear that the relationship is definitely not linear; the fit shown is for a first-order reaction, N = N₀(1 − exp(−kt)). (In fact, the data were simulated using such a first-order equation.) It is generally so that, in order to detect deviations from linearity, at least some 30–40% change in concentration needs to be observed before the data allow discrimination between linear and non-linear models. Once
an acceptable fit is obtained, the next step is to find experimental settings such that reliable parameters follow. This is where, for instance, D-optimal designs are useful. So, based on the result in Fig. 3.6 we may decide that a first-order model is suitable. We have two parameters, N0 and the rate constant k. We can calculate the required matrices, as discussed above, and find the settings for which the determinant is maximized. However, in this case, the parameter N0 enters the model linearly (see Section 3.3) and optimal designs will, in general, not depend on the value of linear parameters (Atkinson and Donev, 1992). That leaves us with only one parameter to estimate and the resulting matrix reduces to one equation, which is similar to the one displayed in the sensitivity equation (3.5):
s_k = \frac{\partial \left( N_0 (1 - \exp(-kt)) \right)}{\partial k} = N_0 t \exp(-kt)
We need an estimate of k to evaluate the function. A guess for this estimate can come from the data shown in Table 3.2, used to determine possible curvature. The resulting value is k = 0.15, and the plot of the sensitivity function is shown in Fig. 3.7 [this analysis is equivalent to that of equation (3.5) and Fig. 3.4]. It follows that the function is maximized at t ≈ 7 in this case. This means that if we take a few measurements around this time we will get the most accurate parameter estimate for the rate constant k. If we have models with two or more parameters that enter the model non-linearly, the determinant of the resulting matrix needs to be maximized. The
Fig. 3.7 Model fit and parametric sensitivity of parameter k (s_k) in the model N = N₀(1 − exp(−kt)) for N₀ = 10 and k = 0.15.
worked example with matrices for the above model is given in the appendix. An intuitive design with three time points would be (0, 10, 20) in this case, but measuring at (5.62, 5.62, 20) gives a determinant of the information (design) matrix F that is a factor of 3.3 higher [and hence, via equation (3.9), a correspondingly smaller determinant of the variance–covariance matrix M] and therefore smaller confidence regions (D-optimal design). Optimal design calculations can thus guide the selection of the points to measure, but should not be followed blindly since, in the above example, it might still be decided to measure at time zero, although for this model this experimental point (mathematically) does not give any information for parameter estimation. Other examples of optimal design are available (Atkinson and Donev, 1992; Versyck, 2000; Bernaerts et al., 2005).
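The design comparison above can be verified numerically. The sketch below builds the sensitivity matrix V for the first-order model at the two candidate designs, using the best guesses N₀ = 10 and k = 0.15, and compares det(VᵀV); the D-optimal design gives a determinant roughly 3.3 times larger.

```python
# Sketch: comparing the 'intuitive' design (0, 10, 20) with the design
# (5.62, 5.62, 20) for the model N = N0*(1 - exp(-k*t)), using the best
# guesses N0 = 10 and k = 0.15 from the worked example above.
import numpy as np

N0, k = 10.0, 0.15

def design_matrix(times):
    """Sensitivities of the model with respect to N0 and k (columns of V)."""
    t = np.asarray(times, dtype=float)
    d_dN0 = 1.0 - np.exp(-k * t)
    d_dk = N0 * t * np.exp(-k * t)
    return np.column_stack([d_dN0, d_dk])

for times in [(0.0, 10.0, 20.0), (5.62, 5.62, 20.0)]:
    V = design_matrix(times)
    det_F = np.linalg.det(V.T @ V)     # D-criterion: larger is better
    print(times, "det(V^T V) = %.0f" % det_F)
# The second design gives a determinant roughly 3.3 times larger, i.e. a
# smaller joint confidence region for (N0, k).
```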
3.3 Data processing
The collection of data is a subject in its own right. It concerns questions as to which strain should be selected, or which mixtures should be studied, which growth media are to be used, etc. Also the question of which detection method should be used is important. Rasch (2004) has recently addressed these questions and we refer to her paper for more details. Once we have proposed a model and collected our data, the next step is to confront the model with the data, i.e. data processing. As a first remark, it should be noted that when zero rates are measured they should not be used for modelling (Ratkowsky, 2004), or they should be included as censored data (µ < a). Confronting models with data means that we should have some measure to see whether our proposed model makes any sense at all in view of the data obtained. So, the first thing to do after the first fitting exercise is to state something about the goodness-of-fit (or lack of fit, for that matter). Once we can be confident that our model is not too far off, we can proceed with studying the parameter estimates. This is usually done by regression techniques. Basically, either linear or non-linear regression can be used. The term ‘linear’ in linear regression refers to the parameters; an operational definition of a model that is linear in the parameters is that the first derivative of the model with respect to a parameter should not contain that parameter. Two linear models (linear in their parameters) are:

y = a + bx
y = a + bx + cx²   [3.10]
When a partial derivative is taken with respect to each of the parameters a, b, c in these two equations, it will not contain that parameter. Also, the second equation is called a linear model, even though it is not linear with respect to x. If the first derivative with respect to a parameter does contain that parameter, we have a nonlinear model. For instance, equation (3.5) is such a derivative and it can be seen that it still contains the parameter k, so that the first-order model is indeed a non-linear model. Linear regression is the easiest method because an analytical solution exists for calculating the estimate. Non-linear regression does not have an analytical solution
for estimation of parameters, and it requires iteration after giving an initial estimate of the parameter. There is a danger with non-linear regression that a local solution is found rather than a global one. So, this would lead to the conclusion that linear regression is to be preferred. There is, however, a potential danger. The assumptions for carrying out linear regression are quite strict, and one of the most important of these concerns the error structure of data. They should be normally distributed with equal variance (homoscedasticity). If that is the case, regression is possible, be it linear or non-linear. However, sometimes it is possible to transform non-linear models into linear ones, for instance by taking logarithms or by taking inverses. For example, a non-linear Arrhenius type model has been proposed to model the dependence of lag-time λ on temperature T:
\lambda = \exp\left[ -\left( A + \frac{B}{T} + \frac{C}{T^2} \right) \right]   [3.11]
where A, B, C are the parameters. This is clearly a non-linear model. By taking the logarithm, the model changes into:
\ln \lambda = -\left( A + \frac{B}{T} + \frac{C}{T^2} \right)   [3.12]
and we now have a linear model analogous to the second one in equation (3.10). The potential problem is now that, if homoscedastic data are normally distributed, they will no longer have these properties after transformation, and then regression will lead to biased results. However, if data are not normally and/or homoscedastically distributed, transformation may actually result in better statistical properties. According to Schaffner (1998), most microbial growth rate data are heteroscedastic, i.e. they do not have a constant variance, but a high variance at high growth rates and a low variance at low growth rates. This implies that something needs to be done before applying regression. One possibility is to apply weighted regression by giving more weight to more precise data, and less weight to less precise data. Another possibility is to transform the data to stabilize the variance. For instance, a log-transformation of data having a constant coefficient of variation (rather than a constant variance) results in homoscedastic data. Usually, the logarithm of numbers of microorganisms is modelled, since errors in numbers are relative rather than absolute. Concerning for example specific growth rate data, there is debate in the literature about which transformation is best. According to Schaffner (1998) it should be a logtransformation, according to Ratkowsky (2004) a square root transformation has a stabilizing effect. Zwietering et al. (1994) state that asymptote data do not need a transformation, while a square root transformation is needed for growth rate and a logarithmic transformation for the lag-time. Ratkowsky (2004) performed an analysis of various transformations and concluded that the assumption about the error structure and the subsequent data transformation is not very critical as long as the model fit is good. In any case, it pays to study the error structure of the data for subsequent modelling.
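A minimal illustration of variance stabilization: when counts carry a roughly constant relative error, regressing log₁₀(N) against time gives homoscedastic residuals, whereas regressing N directly would not. The growth rate, initial level and error size below are invented for the example.

```python
# Sketch: when errors are relative (roughly constant coefficient of variation),
# fitting log10(N) rather than N stabilizes the variance, as discussed above.
# The data are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 11)
mu = 0.6                                   # specific growth rate (1/h), assumed
N_true = 1e3 * np.exp(mu * t)              # exponential growth
N_obs = N_true * rng.lognormal(mean=0.0, sigma=0.15, size=t.size)  # relative error

# Linear regression on log10(N): slope = mu / ln(10), intercept = log10(N0).
slope, intercept = np.polyfit(t, np.log10(N_obs), 1)
print("estimated mu = %.2f 1/h, log10(N0) = %.2f" % (slope * np.log(10), intercept))
```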
3.4 Model fitting
3.4.1 Good fit A very simple test to check whether the model is a good fit is by studying residuals. Residuals reflect the difference between the model and the actual data. If a model is correct (i.e. reflecting the behaviour of the data), residuals should no longer contain information; in other words, they should be distributed randomly. If a model is not correct, this becomes immediately obvious because then the residuals will show a trend. Figure 3.8 shows an example. Another factor to evaluate is the homoscedasticity of the residuals; homoscedasticity means that the magnitude of residuals does not vary systematically with the independent variable. Checking residuals graphically is easy and very informative. Also, a quantitative test for residuals is available, called a runs test (Motulsky and Ransnas, 1987; Ratkowsky, 2004). What is not recommended with non-linear regression is the use of R2 and adjusted R2 to judge model performance; unfortunately this seems to be common practice. The reason why these parameters should not be used has been made very clear by Ratkowsky (2004). It is nothing more than the ratio of the sum of squares due to regression to that of the total sum of squares of the response variable around its mean. This does not say much about the fit of a model, that is to say that a low R2 probably indicates a problem, but a high R2 does not necessarily indicate a good fit. Incidentally, these reasons are not only valid for microbial models but for models in general (e.g. Johnson, 1992). R2 and adjusted R2 are in principle suited for linear models, but even then there are pitfalls (Ratkowsky, 2004). The residual mean square (RMS) is a measure for the discrepancy between observed and predicted values. If the RMS is more or less the same as the experimental variance, the model apparently fits the data well. If the RMS is much higher than the experimental variance, this signals a problem with the model. In order to know the experimental variance, replicates are needed. Whether the RMS differs significantly from the experimental error can be tested via the F-test (Ratkowsky, 2004). Formally, F-tests are not valid for non-linear models but, in practice, they can be used. Another indication to check whether a model fits reasonably well is to look at the precision of parameter estimates. If this precision is very low, that may be an indication that a model is not performing well, although another reason may be that the data are not suitable for testing the model under study. Once we have some confidence in a model, the next step is to see how it performs in comparison with other models. This is the phase of model criticism or model discrimination.
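Two of the checks mentioned above can be coded in a few lines: counting sign runs in the residuals (a systematic trend produces few runs) and comparing the residual mean square with an independently determined experimental variance. The residuals, the number of parameters and the replicate variance below are illustrative assumptions.

```python
# Sketch of two quick goodness-of-fit checks on residuals: a count of sign
# runs (a trend gives few runs) and a comparison of the residual mean square
# with a known experimental variance. All values are illustrative.
import numpy as np

residuals = np.array([0.12, -0.05, 0.08, -0.11, 0.03, -0.07, 0.09, -0.02])
p = 2                                     # number of fitted parameters (assumed)

signs = np.sign(residuals)
n_runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
print("number of sign runs:", n_runs, "out of", residuals.size, "residuals")

rms = np.sum(residuals**2) / (residuals.size - p)    # residual mean square
exp_var = 0.008                                      # replicate-based variance (assumed)
print("RMS / experimental variance =", round(rms / exp_var, 2))
```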
Fig. 3.8 Example of residual plots, with trends (A) and without trends (B).

3.4.2 Model discrimination It may be that more than one model is tested, and it is not uncommon to find that more than one model is able to explain the data. In such a case, it may be useful to apply model discrimination. For nested models, the F-test can be used (Motulsky and Ransnas, 1987). A model is nested if addition or removal of one parameter leads to the same model structure; so, a first-order polynomial is a nested model of a second-order polynomial. If two competing models have the same number of parameters, the F-test can be calculated from the residual sums of squares (RSS):
F = \frac{RSS_1}{RSS_2}   [3.13]
(This is the formal way; in this case one could also just compare the RSS and the smallest one will point to the better model.) If two competing models have a different number of parameters, the equation becomes:
F = \frac{(RSS_1 - RSS_2)/(df_1 - df_2)}{RSS_2/df_2}   [3.14]
In this equation, df stands for degrees of freedom (number of measurements – number of parameters). This means of discriminating models has been applied by, for instance, Zwietering et al. (1990, 1993) and López et al. (2004). Another tool for model discrimination is the so-called Akaike criterion (AIC), which stems from information theory (Burnham and Anderson, 1998). Models do not need to be nested for this criterion. The interesting aspect of this criterion is that it introduces a penalty when a model has more parameters. It implies that a model with more parameters will have to perform substantially better than a model with fewer parameters in order to survive. The equation for the Akaike criterion is:
AIC = n \ln\left( \frac{RSS}{n} \right) + 2(p+1) + \frac{2(p+1)(p+2)}{n-p-2}   [3.15]
(This is actually the version of AIC corrected for small sample sizes, see Burnham and Anderson, 1998.) In this equation, n stands for the number of measurements and p for the number of parameters. The model that has the lowest AIC value performs the best, at least from a statistical point of view. The AIC has also been applied by López et al. (2004) in their evaluation of various microbial models. The Bayesian information criterion (BIC), sometimes called the Schwarz criterion after its developer, acts in a similar way:
BIC = n ln(RSS/n) + p ln(n)      [3.16]
It is also possible to apply some form of model averaging: if there is no single model that performs much better than the others, common parameters in the competing models can be averaged, with or without some weighting (Burnham and Anderson, 1998).
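As an illustration of equations (3.14)–(3.16), the sketch below compares two hypothetical nested fits; the RSS values, sample size and parameter counts are made up, and the p-value from the F-distribution is our addition rather than part of the text.

```python
import math
from scipy import stats

def f_test_nested(rss_simple, df_simple, rss_full, df_full):
    """Extra sum-of-squares F-test of equation (3.14); the simple model is nested in the full one."""
    f = ((rss_simple - rss_full) / (df_simple - df_full)) / (rss_full / df_full)
    p = stats.f.sf(f, df_simple - df_full, df_full)
    return f, p

def aicc(rss, n, p):
    """Corrected Akaike criterion, equation (3.15)."""
    return n * math.log(rss / n) + 2 * (p + 1) + 2 * (p + 1) * (p + 2) / (n - p - 2)

def bic(rss, n, p):
    """Bayesian (Schwarz) information criterion, equation (3.16)."""
    return n * math.log(rss / n) + p * math.log(n)

# Hypothetical fits of two growth models to the same n = 20 data points
n = 20
rss_simple, p_simple = 3.8, 3   # e.g. a three-parameter primary model
rss_full, p_full = 2.9, 4       # a four-parameter model that contains the simple one

f, p_value = f_test_nested(rss_simple, n - p_simple, rss_full, n - p_full)
print(f"F = {f:.2f}, p = {p_value:.3f}")
for name, rss, p in [("simple", rss_simple, p_simple), ("full", rss_full, p_full)]:
    print(f"{name}: AICc = {aicc(rss, n, p):.2f}, BIC = {bic(rss, n, p):.2f}")
```

The model with the lowest AICc or BIC value would be preferred, while the F-test indicates whether the extra parameter of the larger model is statistically justified.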
3.5 Future trends
There are currently many developments taking place in the area of modelling. We name just a few without going into detail; the interested reader is referred to the literature. Developments come especially from the area of artificial intelligence. Applications in relation to microbiological modelling are reported, for instance, with neural networks (Schepers et al., 2000; Jeyamkondan et al., 2001; Lou and Nakai, 2001). Artificial neural networks need to be trained by feeding data to the network, which means that their power increases with increased learning. Such networks could be useful for in-line control. A major drawback is that they are actually black box models; it is less clear what happens inside the models, and therefore they are less attractive for scientific learning. Another application stemming from artificial intelligence is the Bayesian belief network (BBN); BBNs have their roots in Bayesian statistics, which implies a different philosophy from the commonly accepted classical, frequentist method. The difference is mainly that Bayesian statistics allows prior knowledge to be incorporated into estimation: evidence from data is combined with this prior knowledge in the form of probability statements. The result is expressed in what is called a posterior distribution, which summarizes the updated knowledge; a recent review can be found in van Boekel et al. (2004). Interestingly, this approach is now increasingly used in microbiological modelling. BBNs are not black box models because they require mechanistic insight to be put into the model. Some application examples can be found in Barker et al. (2002, 2005a,b) and Pouillot et al. (2003). Another development is the use of Monte Carlo simulation, mainly focused on risk assessment in relation to microbiological models. Some examples are given in Bemrah et al. (1998), Cassin et al. (1998), Lindqvist et al. (2002), den Aantrekker et al. (2003) and Kusumaningrum et al. (2004).
3.6 Sources of further information and advice
The references below will give much more detail. In general, we would like to stress that a critical attitude is required. Modelling can be a very powerful tool to study microbial phenomena, to gain better mechanistic insight and, last but not least, to make predictions of the microbial state of food and the risks and benefits associated with its consumption. It is the view of the present authors that researchers using this powerful tool should be aware of how to use it: if it is used without careful consideration of the goal for which it is being used, this can lead to dangerous situations. It is hoped that this chapter, and indeed this book, will contribute to this awareness, so that quantitative microbiology can become even more powerful than it already is.
3.7 References
Atkinson AC and Donev AN (1992) Optimum Experimental Designs, Oxford, Clarendon Press. Barker GC, Talbot NLC and Peck MW (2002) Risk assessment for Clostridium botulinum: a network approach, Int Biodeterior Biodegradation, 50, 167–175. Barker GC, Malakar PK, Del Torre M, Stecchini ML and Peck MW (2005a) Probabilistic representation of the exposure of consumers to Clostridium botulinum neurotoxin in a minimally processed potato product, Int J Food Microbiol, 100, 345–357. Barker GC, Malakar PK and Peck MW (2005b) Germination and growth from spores: variability and uncertainty in the assessment of food borne hazards, Int J Food Microbiol, 100, 67–76.
Bemrah, N, Sanaa, M, Cassin, MH, Griffiths, MW and Cerf, O (1998) Quantitative risk assessment of human listeriosis from consumption of soft cheese made from raw milk, Prev Vet Med, 37, 129–145. Bernaerts K, Versyck KJ and Van Impe JF (2000) On the design of optimal dynamic experiments for parameter estimation of a Ratkowsky-type growth kinetics at suboptimal temperatures, Int J Food Microbiol, 54, 27–38. Bernaerts K, Gysemans KPM, Nhan Minh T and Van Impe JF (2005) Optimal experiment design for cardinal values estimation: guidelines for data collection, Int J Food Microbiol, 100, 153–165. Box GEP, Hunter WG and Hunter JS (1978) Statistics for Experimenters, New York, John Wiley and Sons. Burnham KP, Anderson DR (1998) Model Selection and Inference. A Practical Information and Theoretic Approach, New York, Springer. Cassin M, Lammerding AM, Todd ECD, Ross W and McColl RS (1998) Quantitative risk assessment for Escherichia coli O157:H7 in ground beef hamburgers, Int J Food Microbiol, 41, 21–44. Davies KW (1993) Design of experiments for predictive microbial modeling, J Ind Microbiol, 12, 295–300. Den Aantrekker ED, Beumer RR, van Gerwen SJC, Zwietering MH, van Schothorst M and Boom RM (2003) Estimating the probability of recontamination via the air using Monte Carlo simulations, Int J Food Microbiol, 87,1–15. Grijspeerdt K and Vanrolleghem P (1999) Estimating the parameters of the Baranyi model for bacterial growth, Food Microbiol, 16, 593–605. Hu R (1999) Food product design. A Computer-aided Statistical Approach, Lancaster, Technomic. Jeyamkondan S, Jayas DS and Holley RA (2001) Microbial modelling with artificial neural networks, Int J Food Microbiol, 64, 343–354. Johnson ML (1992) Review – Why, When, and How Biochemists Should Use Least Squares, Anal Biochem, 206, 215–225. Kusumaningrum HD, Van Asselt ED, Beumer RR, Zwietering MH (2004) A quantitative analysis of cross-contamination of Salmonella and Campylobacter spp. via domestic kitchen surfaces, J Food Prot, 67, 1892–1903. Lindqvist R, Sylven S, Vagsholm I and Axelsson L (2002) Quantitative microbial risk assessment exemplified by Staphylococcus aureus in unripened cheese made from raw milk, Int J Food Microbiol, 78, 155–170. Lopez S, Prieto M, Dijkstra J, Dhanoa MS and France J (2004) Statistical evaluation of mathematical models for microbial growth, Int J Food Microbiol, 96, 289–300. Lou W and Nakai S (2001) Artificial neural network-based predictive model for bacterial growth in a simulated medium of modified-atmosphere-packed cooked meat products, J Agric Food Chem, 49, 1799–1804. Motulsky HJ and Ransnas LA (1987) Fitting curves to data using nonlinear regression: a practical and nonmathematical review, FASEB J, 1, 365–374. Pouillot R, Albert I, Cornu M and Denis J-B (2003) Estimation of uncertainty and variability in bacterial growth using Bayesian inference. Application to Listeria monocytogenes, Int J Food Microbiol, 81, 87–104. Rasch M (2004) Experimental design and data collection, in RC McKellar and X Lu (Eds) Modeling Microbial Responses in Food, Boca Raton FL, CRC Press, pp. 1–20. Rasch DVLR and Gowers JI (1999) Fundamentals in the Design and Analysis of Experiments and Surveys, Munchen/Wien, R Oldenbourg Verlag. Ratkowsky DA (2004) Model fitting and Uncertainty, in RC McKellar and X Lu (Eds) Modeling Microbial Responses in Food, Boca Raton FL, CRC Press, pp. 151–196. Schaffner DW (1998) Predictive food microbiology Gedanken experiment: why do microbial growth data require a transformation? Food Microbiol, 15, 185–189. 
Schepers AW, Thibault J and Lacroix C (2000) Comparison of simple neural networks and nonlinear regression models for descriptive modeling of Lactobacillus helveticus growth in pH controlled batch cultures, Enzyme Microb Technol, 26, 431–445. van Boekel MAJS, Stein A and Van Bruggen A (2004) Bayesian Statistics and Quality Modelling in the Agro Food Production Chain, Dordrecht/Boston/London, Kluwer Academic Press. Versyck K (2000) Dynamic input design for optimal estimation of kinetic parameters in bioprocess models. Leuven, Belgium, Catholic University of Leuven. Versyck KJ, Bernaerts K, Geeraerd AH and Van Impe JF (1999) Introducing optimal experimental design in predictive modelling: a motivating example, Int J Food Microbiol, 51, 39–51. Zwietering MH, Jongenburger I, Rombouts FM and Van ’t Riet K (1990) Modeling of the bacterial growth curve, Appl Environ Microbiol, 56, 1875–1881. Zwietering, MH, Rombouts, FM and Van ’t Riet K (1993) Some aspects of modeling microbial quality of food, Food Control, 4, 89–96. Zwietering MH, Cuppers, HGAM, De Wit JC and Van ’t Riet, K (1994) Evaluation of data transformations and validation of a model for the effect of temperature on bacterial growth, Appl Environ Microbiol, 61, 195–203.
3.8 Appendix
The following worked example shows the approach taken to determine the D-optimal design for the first-order reaction model:

N = N0(1 − exp(−kt))

We can now calculate the derivatives:

∂f/∂N0 = 1 − exp(−kt)
∂f/∂k = N0 t exp(−kt)
If we decide to take three measurements, the matrix V becomes:

V = [ 1 − exp(−k t1)    N0 t1 exp(−k t1) ]
    [ 1 − exp(−k t2)    N0 t2 exp(−k t2) ]
    [ 1 − exp(−k t3)    N0 t3 exp(−k t3) ]
If we distribute the points evenly at t = 0, 10 and 20, we get for the matrix V:

V = [ 0                0              ]
    [ 1 − exp(−10k)    10 N0 exp(−10k) ]
    [ 1 − exp(−20k)    20 N0 exp(−20k) ]
With estimates from the initial dataset as N0 = 10 and k = 0.15 (from the data of Table 3.2), this becomes:

V = [ 0       0     ]
    [ 0.777   22.31 ]
    [ 0.950   9.96  ]
Now we can calculate the Fisher information matrix:

VᵀV = [ 1.51   26.8 ]
      [ 26.8   597  ]

The determinant of this matrix is D = 1.51 × 597 − 26.8 × 26.8 = 181.3 and the inverse matrix is

(VᵀV)⁻¹ = [ 3.29     −0.148 ]
          [ −0.148   0.0083 ]
It can be seen that the Fisher information matrix depends on the values of the parameters and therefore cannot be determined (exactly) before the experiment, as it can be with linear models. As in linear regression, a different design (other t-values) will give other values of the Fisher matrix and its determinant. Furthermore, note that this entire procedure does not depend on the actual y-values, so these calculations can be done before the experiment is actually carried out (but for non-linear models initial estimates of the parameters are necessary). We can now, for example, see whether we get better results by moving the first two points of the design. For example, the design with two measurements at t = 5.62 and one at t = 20 gives a determinant of 600, a factor of 3.3 higher. The variance–covariance matrix for this design becomes:

(VᵀV)⁻¹ = [ 2.12     −0.062 ]
          [ −0.062   0.0026 ]
So it is clear that all factors in the matrix have become smaller, resulting in smaller confidence intervals of the parameters. It should be noted that in this example the measuring point at t = 0 does not give information, since it is already included in the model that the response is 0 at this point. For the investigator, this point gives a useful verification, but mathematically speaking it does not give any information to estimate the two parameters.
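The appendix calculations are easy to reproduce numerically. The sketch below (our illustration) builds the matrix V of partial derivatives for a given set of sampling times, forms the Fisher information matrix VᵀV and compares its determinant and inverse for the two designs discussed above; it should reproduce the values 181 and 600 up to rounding.

```python
import numpy as np

def design_matrix(times, n0, k):
    """Rows of partial derivatives of N = N0(1 - exp(-k t)) with respect to N0 and k."""
    t = np.asarray(times, dtype=float)
    return np.column_stack([1.0 - np.exp(-k * t), n0 * t * np.exp(-k * t)])

def d_optimality(times, n0=10.0, k=0.15):
    """Determinant of the Fisher information matrix V'V (larger is better) and its inverse."""
    v = design_matrix(times, n0, k)
    fisher = v.T @ v
    return np.linalg.det(fisher), np.linalg.inv(fisher)

for design in ([0.0, 10.0, 20.0], [5.62, 5.62, 20.0]):
    det, cov = d_optimality(design)
    print(f"design {design}: det = {det:.1f}")
    print(np.round(cov, 4))
```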
4 Uncertainty and variability in predictive models of microorganisms in food
M. J. Nauta, National Institute for Public Health and the Environment, The Netherlands
4.1 Introduction
Predictive models in food microbiology are developed for quantitative prediction of the increase and decrease in concentrations of microorganisms in food products. This is useful for food manufacturers and food safety authorities in developing and analysing production processes, determining shelf life and setting food safety standards. Its use is based on the assumption that similar microorganisms behave similarly under (nearly) identical conditions: given time, temperature, acidity and other characteristics of a food product, the increase or decrease in the concentration of a microorganism in the food is predictable, based on previous experience and microbiological knowledge. This chapter deals with the problem that, although in our experience this principle works quite well, predictive models cannot provide exact predictions of what will happen. There is variability and uncertainty related to food constitution, environmental conditions and microbial responses. The problem is illustrated by a simplified case study. The discussion focuses on growth models, but for inactivation models the arguments given are alike. First, the reasons for imprecision in food microbiological predictions are explained, then variability and uncertainty as different phenomena are described in more detail. Next, the implications of variability and uncertainty in microbial responses for application of predictive models in food management and risk assessment are discussed. This chapter ends with a commentary on future trends and research needs.
4.2 Case study – part 1
Consider a consumer who finds out that there is something wrong with her refrigerator: the thermometer inside indicates a temperature of 10 °C. She knows this is too high for the minimally processed vegetable product inside, but is unsure whether it is still safe to eat it. The food has been stored there for three days and there are reasons to assume the temperature problem has persisted this whole period. The consumer once followed a short course in food microbiology and knows there is a potential problem with Bacillus cereus, and in this example we assume that this is the only microorganism of interest. She now turns to an expert for advice. This expert applies predictive modelling to help her. The first thing the expert does is to apply a user-friendly software package like the Pathogen Modeling Program (PMP) of the US Department of Agriculture–Agricultural Research Service (USDA–ARS). With this model, given aerobic conditions at 10 °C, pH 6.5, water activity aw = 0.994 and nitrite concentration 150 mg/l (i.e. the optimal aerobic growth conditions at 10 °C), it is predicted that no growth will occur. However, the expert is aware that this is not the complete answer. As shown by the growth curve produced by the PMP (see Fig. 4.1), a confidence interval surrounds the prediction, with the upper confidence limit after three days at a 3 log increase in concentration. This suggests that considerable growth may have occurred after three days of storage. As an alternative, the expert applies some other models available from the literature.
Fig. 4.1 The output graph of the Pathogen Modeling Program (PMP) software (USDA ARS-ERRC, version 6.1) for aerobic B. cereus at 10 °C and conditions as indicated in the main text. Dashed lines are the upper and lower confidence limits.
Fig. 4.2 Alternative model predictions for B. cereus at 10 °C (log increase versus days of storage). The growth curve described by the ‘worst case’ model of Zwietering et al. (1996) is compared to the prediction of the Pathogen Modeling Program (PMP) and the upper confidence limit (UCL) of the PMP. In the ‘worst case’ model there is no lag phase. In a short time interval (between 3 and 4 days) the UCL of the PMP lies above the worst case prediction.
First, he uses a simple ‘worst case’ model for B. cereus as described by Zwietering et al. (1996), a secondary growth model neglecting the lag phase (see also Nauta, 2000). This model assumes a square-root model for temperature with minimum growth temperature Tmin = 0 °C, so

∆ log Nt = a T² t      [4.1]

with 10-based logarithms, model parameter a = 0.013 °C⁻² day⁻¹, temperature T (°C) and time t (days). This model predicts an increase in the concentration of B. cereus of 3.9 logs. The results obtained by the different models are shown in Fig. 4.2. Clearly, different models do not predict the same thing. Not only is the predicted increase quantitatively different, but the different models also do not provide a definitive answer to the question of whether growth will occur at all. Based on these findings, the expert can tell the consumer that something may have gone wrong with the food in her refrigerator, but that he is not sure. Probably this is not the answer the consumer was waiting for.
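Equation (4.1) is simple enough to evaluate directly. The short sketch below (our illustration, using the parameter value quoted in the text) reproduces the worst-case curve of Fig. 4.2, including the 3.9 log increase after three days at 10 °C.

```python
# 'Worst case' secondary model of Zwietering et al. (1996): delta log10(N) = a * T^2 * t
A = 0.013            # model parameter, per degree C squared per day (value quoted in the text)
TEMPERATURE = 10.0   # storage temperature (degrees C)

for day in range(6):
    increase = A * TEMPERATURE ** 2 * day
    print(f"day {day}: predicted worst-case increase = {increase:.1f} log10 units")
# After three days this gives 0.013 * 100 * 3 = 3.9 log units, as stated in the text.
```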
4.3 Imprecise predictive models
The case study so far illustrates some interesting aspects of predictive models. As
is the case for the PMP, the prediction may be attended by a confidence interval. The PMP predictions for B. cereus are based on data of Benedict et al. (1993). These authors obtained counts of colony forming units (cfu) in brain heart infusion medium for a mixed culture of three strains. The growth curves generated by PMP are the result of a statistical analysis of these data, and consequently the lower and upper confidence limits (LCL, UCL) represent the 95% confidence intervals for the estimated means. As noted in the PMP documentation, these confidence limits are provided to indicate the precision of the estimates. This lack of precision in the estimates of predictive models is usually not a surprise for food microbiologists. It is a consequence of variability and uncertainty, two concepts that will be explained further in this chapter. Variability and uncertainty always attend microorganisms, certainly in a natural environment like food. Of course, predictive modellers are also aware of this fact. The first issue to address is whether this imprecision is important. This relates to the objective of modelling when a predictive model is applied. If, for example, a food manufacturer wants to exclude the possibility of bacterial growth in a food product, a worst case model like the one applied in the case study is a good choice. If the worst case model predicts that no growth will occur in a given production process, there is little reason for concern and no additional modelling is required. The only problem that may emerge is that the ‘worst case scenario’ is not really the worst case, as can be seen for example in Fig. 4.2, where the upper confidence limit of the PMP is larger than the worst case around the fourth day. In many instances a worst case model prediction for growth or inactivation is not sufficient. The food production process may inevitably include steps that do allow growth or minimise inactivation. The challenge is then to reduce the initial contamination or the growth potential to an acceptable level, and models may provide a helpful tool to do this. Also, one may be mainly interested in the probability of growth or inactivation or the probability of a certain (critical) level occurring. Hence, the challenge of predictive modelling is often to apply the data and knowledge about microbiology as efficiently as possible, for a quantitative prediction which is as precise and relevant as possible. Part of this challenge is to describe the uncertainty and variability in such a way that the imprecision can be quantified. This allows the assessment of the probability of growth, the probability of growth to critical concentrations and, ultimately, the assessment of risks. The ‘uncertainty’ and ‘variability’ that lead to imprecise model predictions are not the same thing. ‘Variability’ represents a true heterogeneity of the population of subjects considered, it is a consequence of the physical system and irreducible by additional measurements. It can be observed and quantified. One type of variability is stochasticity, where heterogeneity is a consequence of randomness, like the result of throwing dice. This stochasticity can be considered as essentially different from inter-individual variability, which is a description of differences between members of a population, like the variability in height of children in a school class. The latter is mainly a consequence of genetics, differences in nutrition and other environmental factors, but to some extent also an effect of
randomness. ‘Uncertainty’ represents a lack of perfect knowledge and may be reduced by gathering additional knowledge, for example by further measurements. In principle, this lack of knowledge can be quantified based on some assumptions and beliefs. The confidence interval that results from a statistical analysis, for example, usually serves as such a quantification of uncertainty. Other uncertainties, like a lack of representativeness, may be very hard to quantify, especially if data on a process step or a model parameter are missing. As variability and uncertainty can both be represented by probability distributions, and variability in one context may have to be interpreted as uncertainty in another, the two are easily mixed up. Consider, for example, the tossing of a coin. Before tossing, the outcome (head or tail) is uncertain. Doing the tossing can reduce this uncertainty. This tossing itself is a stochastic process: it is either head or tail with probability 50%. Next, if we toss the coin several times we (most likely) get variable results. The number of heads and tail after tossing ten times will be variable if we repeat that experiment, but it will be known if it is recorded. So after the experiment there is variability but no uncertainty. Mathematically, this is all the same thing, as the description of uncertainty and variability can be done by (or derived from) the same probability distribution: two events with probability 0.5. However, as will be explained later in this chapter, identification of the difference between variability and uncertainty may be essential for a correct interpretation of predictive models.
4.4 Case study – part 2
Part 1 of the case study ended without a clear answer to the consumer, so the expert decides to work further on the problem. In the worst case model, the assumption of a negligible lag phase can be considered as rather extreme, given the PMP prediction that the lag-time is not yet passed after three days. A problem here is that it is not clear in which state the B. cereus cells are at the start of the storage in the refrigerator, that is whether they are dormant spores, activated spores, vegetative cells after germination, in a stationary phase or in the growth phase. This depends on the history of the food product and the cells, which is not known in this example and is therefore a source of uncertainty that is not discussed further here. Therefore the consulted expert uses an alternative model similar to the worst case model, as applied by Nauta et al. (2003), which does include a lag phase λ. With

µ = aµ (T − Tmin,µ)²      [4.2]

and

λ = aλ / (T − Tmin,λ)²      [4.3]

the increase in concentration at time t becomes:

∆ log Nt = MAX[0, µ (t − λ)]      [4.4]
Table 4.1 Results obtained by Valero et al. (2000) in growth experiments with B. cereus in nutrient broth over the temperature range 5–30 °C. Count data are applied to calculate values of the growth rate (µ) and lag phase duration (λ), by fitting the data to growth curves applying the model of Baranyi et al. (1993). Results for a mesophilic (MESO) and a psychrotrophic (PSYC) strain are the average of two very similar replicate growth curves. Note that in this table the time unit is hour (h), and growth rate is based on increase in the natural logarithm (ln) of the population size. (Reprinted from Food Microbiol, 17, Valero M et al., Growth of Bacillus cereus in natural and acidified carrot substrates over the temperature range 5–30 °C, 605–612, Copyright 2000, with permission from Elsevier.)

Temperature (°C)   MESO growth rate (h⁻¹)   MESO lag (h)   PSYC growth rate (h⁻¹)   PSYC lag (h)
5                  –                        no growth      0.027                    148.77
8                  –                        no growth      0.055                    43.6
10                 –                        no growth      0.073                    38.96
12                 0.017                    88.01          0.107                    29.38
16                 0.13                     13.64          0.133                    14.34
25                 0.64                     3.38           0.529                    3.44
30                 1.089                    1.93           0.687                    1.96
Table 4.2 Estimated values of the growth model parameters, using data presented in Table 4.1 (Valero et al., 2000). Mean and standard deviation (between brackets) as given are used in a Normal distribution, which is assumed to describe within-strain variability and is implemented as variability per industrial batch. Note that, in contrast to Table 4.1, rate is per day and based on 10-based logarithms here.

Strain   aµ ((day °C²)⁻¹)   Tmin,µ (°C)   aλ (day °C²)   Tmin,λ (°C)
MESO     0.026 (0.001)      9.15 (0.50)   37.7 (2.71)    8.44 (0.83)
PSYC     0.0079 (0.001)     0.04 (1.14)   66.7 (10.0)    3.30 (1.32)
The two different minimum growth temperatures Tmin,µ and Tmin,λ are conceptual, one indicating the lowest temperature for which, in the model, the growth rate µ is larger than zero, and the other indicating the lowest temperature for which the lag phase λ is smaller than infinitely long. Of these the largest one indicates the ‘true’ minimum growth temperature, below which no growth can be observed. Based on count data of Valero et al. (2000), obtained in nutrient broth, these minimum growth temperatures and the two additional model parameters are estimated for two different strains (see Tables 4.1 and 4.2). One of these strains is characterised as ‘mesophilic’, implying that it grows best at moderate temperatures, and the other as ‘psychrotrophic’, implying that it grows best at low temperatures. Applying the mean estimates for the model parameters, the predicted increase in concentration is 0 log units for the mesophilic strain and 1.19 for the psychrotrophic strain after three days at 10 ºC. So again the expert gets two different results. To improve his analysis the expert therefore decides to include the published
standard deviations (Table 4.2) of the parameter estimates in his calculations. He turns his model into a Monte Carlo simulation model, as explained below. In this model, equation (4.4) is now applied with independent values of µ and λ obtained by randomly sampling values for the four model parameters of equations (4.2) and (4.3) for each time series. The advantage of this method is that, like the model input, the model output now has the form of a probability distribution, indicating the probabilities of a range of model outcomes. Several steps are to be taken to allow this sampling. First, at seven different temperatures per strain a growth curve is fitted through the count data. For this purpose Valero et al. (2000) used the growth model of Baranyi et al. (1993). The mean values of two replicated growth curves per temperature are published as reproduced in Table 4.1. For the mesophilic and the psychrotrophic strain this yields respectively four and seven parameter estimates for both growth rate µ and lag-time λ, that can be used to estimate the values of the four parameters in equations (4.2) and (4.3). As a second step, transforming equations (4.2) and (4.3) to the linear relations

√µ = √aµ (T − Tmin,µ)      [4.5]

and

1/√λ = (T − Tmin,λ)/√aλ      [4.6]
the mean and standard deviation of the estimates of the four model parameters can be obtained by linear regression. Note that for this not all sample data, but the published average values of µ and λ given in Table 4.1 are applied. [Given the high values of R2 as published by Valero et al. (2000), this was considered reasonable by Nauta et al. (2003).] As a third step, these means and standard deviations as given in Table 4.2 are taken as parameters of a Normal distribution describing the variability between the units of food product considered, that is packages of vegetable puree. In his Monte Carlo analysis, for each simulation run, the expert takes independent samples from these Normal distributions to calculate a random growth curve. A set of runs then represents a set of independent vegetable puree packages. Now, after 5000 simulation runs, the mean increase for the mesophilic strain is 0.00008 log units and for the psychrotrophic strain it is 1.10 log units. A sample of different growth curves for the latter strain is shown in Fig. 4.3. Although the mean results with the Monte Carlo analysis are almost identical to the deterministic approach, the Monte Carlo has the advantage that it gives variable results. For example, it shows that in about 7% of the simulations the increase in concentration for the psychrotrophic strain is predicted to be larger than 2 log units, and in 0.3% of the simulations it is larger than 3 log units. Also, it can be found that in about 1% of the simulation runs the lag-time of the mesophilic strain is passed, and a little growth (< 0.1 log unit) has occurred. With these results in mind the expert struggles with the question of how to report this to the consumer, and decides to give that some extra thought.
Fig. 4.3 Results of a Monte Carlo simulation of the variability in growth curves of a psychrotrophic B. cereus strain, compared to the ‘worst case’ model prediction of Zwietering et al. (1996) (line with black squares). Circles indicate the deterministic prediction of the growth curve based on mean values of µ and λ.
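A minimal version of the expert's Monte Carlo analysis might look like the sketch below (our illustration). It assumes, as described in the text, that the Normal distributions of Table 4.2 describe variability between packages, samples the four parameters of equations (4.2) and (4.3) independently for each package, and applies equation (4.4) for three days at 10 °C for the psychrotrophic strain; the exact percentages will vary somewhat between runs and need not match the chapter's figures exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PACKAGES, T, DAYS = 5000, 10.0, 3.0

# Mean and standard deviation of the model parameters for the psychrotrophic strain (Table 4.2)
PARAMS = {
    "a_mu":     (0.0079, 0.001),   # (day degC^2)^-1
    "tmin_mu":  (0.04,   1.14),    # degC
    "a_lam":    (66.7,   10.0),    # day degC^2
    "tmin_lam": (3.30,   1.32),    # degC
}

# One independent parameter set per simulated package
s = {name: rng.normal(mean, sd, N_PACKAGES) for name, (mean, sd) in PARAMS.items()}
mu = s["a_mu"] * (T - s["tmin_mu"]) ** 2          # growth rate, equation (4.2), log10 units/day
lam = s["a_lam"] / (T - s["tmin_lam"]) ** 2       # lag time, equation (4.3), days
increase = np.maximum(0.0, mu * (DAYS - lam))     # equation (4.4)

print("mean increase (log10 units):", round(float(increase.mean()), 2))
print("fraction of packages with > 2 log increase:", round(float((increase > 2).mean()), 3))
print("fraction of packages with > 3 log increase:", round(float((increase > 3).mean()), 4))
```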
4.5 A closer look at variability
The case study sheds some more light on sources of variability. First, strain variability is introduced through the distinction between psychrotrophic and mesophilic B. cereus strains. This difference in ability to grow at low temperatures is probably a rather extreme example of variability between bacterial strains, typical for B. cereus. However, the phenomenon that different strains have different growth characteristics is common. It implies that the parameter values of predictive models for a microbial species should always be regarded with some reservation in a single case of one isolate in one food product. Predictive models are usually based on data from one or a few strains only and, as a good scientific practice, microbiologists prefer to use the same strain for different experiments and across different research laboratories to guarantee repeatable experiments and to obtain comparable results. A consequence of this approach is that variability between natural strains (and thus in food products) may be underestimated. To meet the objective of predicting the probability of growth to a certain level of a random isolate of a bacterial species, a different approach may be required. In that case, the variability between strains should be studied and characterised (Nauta and Dufrenne, 1999). More sources of variability can be identified in food microbiology. Next to
between-strain variability there is also biological variability within strains. Another obvious example is the variability in composition of the food, certainly in compound food products. Depending on the variables included in the predictive models, also variability in growth conditions like pH, temperature and even storage time can be considered as sources of variability in predictive models. These can be incorporated in (secondary) models, but otherwise they can be considered as ‘normal’ sources of variability. The next step of the expert in the case study is the application of Monte Carlo simulation to describe the variability. This Monte Carlo method is widely applied in microbial risk assessment, and is relatively easy to use [see for example Vose (2000)]. This method allows the incorporation of variability and uncertainty in model parameters by replacing fixed parameter values by random values sampled from probability distributions as defined by the modeller. However, the modeller should be well aware of what he or she is actually doing. Mixing up uncertainty and variability, or improperly mixing up different sources of variability, may lead to a wrong interpretation of the results: if the probability distributions of the input parameter do not describe the same thing, it is not clear what the output distribution describes. In the case study the output of the Monte Carlo model describes the variability between vegetable puree packages. So the assumption is that the uncertainty in the parameter estimates sampled from the Normal distribution is the same as the variability between packages. The fact that the estimated values given in Table 4.1 do not fit exactly with the secondary growth model given in equations (4.2)–(4.4) is assumed to be caused only by a random variability between food products. This assumption may sound rather implausible, because it states that the predictive model is an exact description of the process and that the imprecision in parameter estimates presented in Table 4.1 is not a consequence of uncertainty due to imperfect measurement, but solely a consequence of variability. Next, this variability between experiments is assumed to translate exactly to the variability between packages of food product. With the observation of this problem, the discussion is gradually shifting from variability to uncertainty, which will be considered further below. This section has clarified that variability is inextricably bound with the microbiology of foods, and that several sources of variability can be identified. In models we can ignore this variability, and this may be appropriate in some instances, but may not be so in others. Alternatively, it can be assumed that this variability is known, but it is difficult to extract the description of this variability from the available data.
4.6 A closer look at uncertainty
As explained above, it is assumed in the Monte Carlo simulation that the variability in the model results is to be interpreted as variability between food packages. The alternatives are to regard the input and output distribution either as another type of variability, or as uncertainty.
In the example, the uncertainty in the parameter estimates is a consequence of the fact that the experimental results deviate from the model [equations (4.2)– (4.4)], so in a way the variability between experimental results is translated to variability between food packages. It could just as well be translated, for example, to variability between industrial batches of food product, between grams of food product, between days of processing, between consumed meals, etc. – all rather artificial assumptions. Unfortunately the ‘best choice’ cannot be chosen based upon the experimental evidence, and therefore a simple assumption, convenient for the analysis, may be the best option in such case. Clearly, communication of this assumption is essential for a correct interpretation of the modelling results. If the input distributions represent uncertainty, the variable measurements are a consequence of measurement errors. These may be due to experimental mistakes or variability in the performance of the growth medium, but will predominantly be caused by stochasticity related to the counting method and simplifying assumptions in the interpretation of the data obtained by applying this method. For example, plate count data from a series of (well shaken) dilutions are commonly assumed to be homogeneously distributed, as in a Poisson process. This is a stochastic process which, by nature, will always give variable results. If the Poisson process assumption is violated by the effects of cell clustering, this variability in experimental results is even larger. Assuming that the input distributions only represent uncertainty implies that there is an invariable ‘true’ value for each of the four model parameters, and therefore that there is exactly one true growth curve for each of the strains. This true curve is unknown, but identical each time the strain experiences the same time and temperature combination. This approach, which is the basic principle behind many commonly applied statistical analyses of data, assumes that all variability in the system can either be neglected, or interpreted as uncertainty. Therefore, it cannot be applied to describe variability. The arguments listed here aim to clarify that in many instances it is not realistic to assume that the uncertainty in parameter estimates is to be interpreted as either some kind of variability or some kind of uncertainty alone. It is most likely that parameter uncertainty reflects a combination of variability and uncertainty, and it is not known how much either of them contributes to the total parameter uncertainty. As a final remark on uncertainty, some common additional sources of uncertainty should be identified. These uncertainties about the accuracy of the prediction deal with common basic assumptions. First, as data cannot be obtained for every food product, it is assumed that the data applied are representative for the problem at hand. In the case study, for example, the data of Valero et al. (2000) are obtained in nutrient broth, which is considered representative for naturally contaminated vegetable puree. In general, this is a relevant problem with the usage of predictive models and predictive modelling software, as the user may not always be aware of the representativity of the experimental media applied for the food product considered. Also, it is assumed that the model described in equations (4.2)–(4.4) is correct. These assumptions, and others, can lead to biased estimates, but are
usually not incorporated in a statistical analysis of uncertainty. This is particularly relevant if the uncertainty is quantified. The quantified uncertainty may then only be a small part of the total uncertainty about the conclusions drawn, and thus falsely imply that a conclusion is more certain than it actually is.
4.7 Separation of uncertainty and variability
Before addressing possibilities to deal with the problem of separating uncertainty and variability, the relevance of the problem should be considered. In the case study, for example, the problem is not particularly relevant because we are dealing with a single consumer and a single food package. In the modelling applied, this single package can be considered as a random sample from a large set of (potential) food packages. It does not matter whether this is a sample from a variability distribution (describing the true different concentrations in the food packages) or an uncertainty distribution (describing our confidence in the uncertain true concentration, which is the same in all food packages). Mathematically it is the same thing as in both cases the single package is a random sample from the same distribution, based on the analysis of the data. For the consumer the concentration in the specific single package is uncertain. However, the observed problem is relevant if a population of consumers of a set of food packages is considered. This has previously been illustrated with a simplified version of the model behind the case study (Nauta, 2000), where the lag-time was neglected (λ = 0) and Tmin,µ was assumed to be zero, as in the worst case model (Zwietering et al., 1996) described in the case study. Box 1 summarises some of the essential features of the example analysed in Nauta (2000). Here, it is explained that the uncertainty about ‘variability and/or uncertainty’ can be regarded as an additional source of uncertainty.

Box 1 Example with different scenarios for either separating or not separating uncertainty and variability (Nauta, 2000).
Consider a vat with 100 l pasteurised milk stored at 0–4 °C. From this vat one of ten samples of 10 ml is tested positive for a pathogen. One hundred 0.25 l cups are taken from the vat and stored for t = 3 days at T = 10 °C (with standard deviation 1 °C). One hundred people drink the cups (one each) (see Fig. 4.4). We want to evaluate the exposure of the consumers to this pathogen and estimate the fraction of cups containing more than 10⁵ cfu and the probability that more than 50 people ingest more than 10⁵ cfu. We use a simple exponential model for growth (Zwietering et al., 1996): log(Nt) = log(N0) + c, with c = a T² t; the mean estimate of the parameter a = 0.013 with standard deviation 0.001. The concentration in the vat Cvat (cfu/ml) can be estimated from the finding that one of ten 10 ml samples is positive. The probability of a negative 10 ml sample is p = exp(−10 Cvat). With a Uniform prior we can characterise the probability distribution of this p as Beta(10,2) [see for example Vose (2000)]. Hence Cvat is distributed as −ln(Beta(10,2))/10. Next, the initial number of cfu in a cup i, N0,i, is a sample from a Poisson distribution Poisson(250 Cvat). The final number in cup i, Nt,i, is calculated from log(Nt,i) = log(N0,i) + a T² t, where T follows a Normal(10,1) °C distribution, t = three days and a follows a Normal(0.013, 0.001) °C⁻² day⁻¹ distribution.
Table 4.3 Simulation results for four scenarios. Nt,i > 10⁵ is the mean ± standard deviation of the percentage of cups (per 100) containing more than 10⁵ cfu over 1000 simulation runs. Without separation of uncertainty and variability (No sep) all distributions are assumed to represent variability. ‘> 50 cups’ is the percentage of runs with > 50 cups with Nt,i > 10⁵. Note that this percentage increases as a consequence of the increased uncertainty, expressed as the standard deviation in Nt,i > 10⁵.

Scenario      No sep       c var        c var and unc   c unc
Nt,i > 10⁵    26.7 ± 5%    25.9 ± 12%   25.6 ± 25%      26.6 ± 39%
> 50 cups     0%           3.4%         18.3%           27.4%
Ignoring the difference between uncertainty and variability (no separation), the distribution of Nt for 100 cups can easily be calculated with Monte Carlo simulation. However, the distribution of Cvat expresses uncertainty, and the distribution of N0,i (given Cvat) expresses variability per cup. To what extent the distribution of c = a T² t represents variability and uncertainty is less clear. Therefore, we evaluate three scenarios, one where the distribution of c expresses variability (per cup), one where it expresses uncertainty (per Monte Carlo run) and one where it expresses both [see Nauta (2000) for details]. The results, expressed as the percentage of cups containing more than 10⁵ cfu/cup and the percentage of simulation runs where more than 50 out of 100 cups contain more than 10⁵ cfu/cup, are given in Table 4.3. The mean value of the percentage of cups with more than 10⁵ cfu/cup lies around 26% in each scenario, which indicates a potential food safety problem with any type of analysis. However, a thorough analysis of uncertainty and variability offers an important additional insight: if ‘> 50 cups’ stands for a major outbreak, the predicted probability of this outbreak largely depends on the assumptions about uncertainty and variability. If all distributions are falsely assumed to represent variability only, not separating uncertainty and variability may falsely lead to an assessment of zero risk for an outbreak. For an additional presentation of the results, see Fig. 4.5.

Fig. 4.4 Schematic representation of the example: the vat concentration Cvat (uncertain) is partitioned over the cups to give the initial numbers N0,i (variable), which grow to the final numbers Nt,i (uncertain and variable).
Fig. 4.5 The effect of different assumptions on the nature of probability distributions applied, on the probability of an outbreak [adapted from Nauta (2000)]. The outbreak size is equivalent to the number of meals containing more than a critical concentration of a microorganism, in a set of 100 meals. The given probabilities are the estimated probabilities of an outbreak size larger than or equal to the number given on the horizontal axis. The dashed lines indicate the probabilities of an outbreak size of at least ten or 50 people. Without separation of variability and uncertainty, all input distributions are assumed to represent variability (closed diamonds), and the predicted outbreak size is always larger than ten and smaller than 50. If all input distributions represent uncertainty (crosses), the outbreak size is either 0 or 100, where the probability of the latter is 27%. When variability and uncertainty are separated in the appropriate input distributions (with uncertainty attributable fraction α = 0, α = 0.5 and α = 1), the probability of a small outbreak (< 30) decreases, but the probability of a large outbreak (> 40) increases. (Reprinted from Int J Food Microbiol 57, Nauta M J, Separation of uncertainty and variability in quantitative microbial risk assessment models, pages 205–218, Copyright 2000, with permission from Elsevier.)
If the standard deviation of the experimental results reflects both uncertainty and variability, but it is not known how much each of these contributes, this can be translated into an additional, uncertain, parameter. Basically, this is done by defining an (uncertain) ‘uncertainty attributable fraction’ α for the variance. So, if statistical analysis yields a mean estimate m for parameter x, with standard deviation s, s√α represents the standard deviation of the uncertainty distribution and s√(1 − α) the standard deviation of the variability distribution. Figure 4.5 illustrates the results of the analysis of Nauta (2000) with different values for α. As is also explained in Box 1, the estimated probability of an ‘outbreak’ largely depends on the assumptions about uncertainty and variability as expressed by the value of α. For both the case study and the example illustrated in Fig. 4.4, this is best explained by the extremes:
1. If all parameter uncertainty is to be interpreted as true uncertainty, only stochastic variability is maintained. Consequently, the variability between concentrations in a large set of packages is small, but the concentration is largely unknown. This implies that (almost) all or (almost) none of the packages will contain more than a critical concentration, but it is uncertain which of the two is the case. Considering 100 food packages instead of one in part 2 of the case study (with results as illustrated in Fig. 4.3), this translates to a probability of 7% that in all food packages an increase of 2 log units has occurred. In Fig. 4.5 this extreme is represented by the line with triangles (α = 1), where the stochastic variability is interpreted as it should be. Here, the probability of an outbreak of 50 people or more is 27%, the probability of no cases is 45% and the probability of everyone getting ill is 8%. In the extreme where even obvious stochastic variability is interpreted to represent uncertainty (line with crosses), the outbreak size is 0 or 100, with 27% probability of the latter.
2. If all parameter uncertainty is to be interpreted as variability between packages, there is no uncertainty. Consequently, the large variability between packages is known, and so too is the percentage of packages that will contain more than a critical concentration. Considering 100 food packages instead of one in part 2 of the case study, this translates to a probability of 7% for each single package that an increase of 2 log units has occurred. In Fig. 4.5 this extreme is represented by the diamonds. For the closed diamonds the uncertainty in the estimate of the concentration in the vat (see Box 1) is wrongly interpreted as variability; for the open diamonds (α = 0) it is correctly interpreted as uncertainty. In this extreme, a small outbreak is predicted with certainty.

The truth will probably lie in between, and our uncertainty about that can be expressed by an uncertainty parameter α. As stated, this has an impact on the assessment of the probability of an outbreak (i.e. in a population), not on the probability of a single case. It is important to stress here that research can help to reduce the uncertainty and thus enlarge our knowledge and understanding about the levels of uncertainty and variability too. For example, an experiment with 100 packages can give information about the plausibility of each of the extremes mentioned above. Such an experiment may suffer from experimental errors as well but, without any doubt, clever new experiments and new data will be helpful. Note that, as a golden rule, such experiments should aim to describe the variability and decrease the uncertainty. As noted above and discussed in Section 4.10 on GMPs below, this is the opposite of what is generally aimed for in setting up food microbiology experiments, where the researcher standardises the analysis by using one known strain and well-defined media, to get as little variability as possible.
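The effect of separating uncertainty and variability can be made explicit with a two-dimensional Monte Carlo simulation: an outer loop samples the uncertain quantities once per run, and an inner loop samples the variable quantities per cup. The sketch below (our illustration of the idea, not the implementation used by Nauta) follows the distributions of Box 1, always treats the vat concentration as uncertain and the initial counts per cup as variable, and switches the growth term c between the two interpretations; intermediate cases would split the variance of c with the fraction α as described above. The resulting percentages will only approximate those in Table 4.3.

```python
import numpy as np

rng = np.random.default_rng(2)
N_RUNS, N_CUPS, DAYS, CRITICAL = 1000, 100, 3.0, 1e5

def simulate(c_per_run):
    """Two-dimensional Monte Carlo for the Box 1 example.

    The vat concentration is always treated as uncertain (sampled once per outer run)
    and the initial number per cup as variable (sampled per cup).  The growth term
    c = a * T^2 * t is sampled once per run when it is treated as uncertainty
    (c_per_run=True) or once per cup when it is treated as variability.
    Returns (mean percentage of cups above the critical count,
             percentage of runs in which more than 50 cups exceed it).
    """
    frac_high, outbreaks = [], 0
    for _ in range(N_RUNS):
        c_vat = -np.log(rng.beta(10, 2)) / 10.0          # uncertain vat concentration (cfu/ml)
        n0 = rng.poisson(250.0 * c_vat, N_CUPS)          # variable initial counts per cup
        size = 1 if c_per_run else N_CUPS
        a = rng.normal(0.013, 0.001, size)
        temp = rng.normal(10.0, 1.0, size)
        c = a * temp ** 2 * DAYS
        log_n0 = np.where(n0 > 0, np.log10(np.maximum(n0, 1)), -np.inf)  # empty cups stay empty
        n_high = int(np.sum(10.0 ** (log_n0 + c) > CRITICAL))
        frac_high.append(n_high / N_CUPS)
        outbreaks += n_high > 50
    return 100.0 * float(np.mean(frac_high)), 100.0 * outbreaks / N_RUNS

for label, per_run in [("c as variability (per cup)", False), ("c as uncertainty (per run)", True)]:
    mean_pct, p_outbreak = simulate(per_run)
    print(f"{label}: mean % cups > 1e5 cfu = {mean_pct:.1f}, % runs with > 50 such cups = {p_outbreak:.1f}")
```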
4.8 Case study – epilogue
Despite a thorough analysis, the expert still has no simple advice for the consumer:
her food may or may not be contaminated with a high concentration of B. cereus. It is, however, obvious that the (un)safety of the food cannot be predicted with absolute certainty. It is the task of the expert to communicate this chance; the consumer then has to decide what to do with it. The expert decides to list the arguments, so that he can explain his uncertainty to the consumer.
• The problem has to be formulated as ‘what is the probability that the food contains an unacceptable concentration of B. cereus?’ This probability is not zero and definitely not one.
• Different predictive models give different predictions. Only the Monte Carlo analysis can be used to calculate probabilities, but it is not obvious how the calculated probability should be interpreted.
• The predictive models predict the increase in concentration, but this is not enough to decide whether a critical concentration is reached. Additional information is required on the initial concentration in the food and the critical concentration that has to be considered ‘unacceptable’. In the literature there is reference to a critical minimum level of concern of 10⁵ cfu/g (Notermans et al., 1997), based on epidemiological data. If this value is adopted, the initial concentration in the product should be about 10³ cfu/g for an increase of 2 logs to be regarded as ‘unacceptable’. That 10³ cfu/g seems a rather high initial level.
• In the refrigerator only psychrotrophic strains are expected to increase in concentration by 2 logs or more. According to the Monte Carlo simulation this happens in about 7% of the food products (see Fig. 4.3 and part 2 of the case study). Growth of mesophilic strains can be neglected. On the other hand, there are indications that only mesophilic strains are hazardous for humans, given their larger growth capacities in the human intestinal environment (Wijnands et al., 2006). Surprisingly, this would indicate that growth at 10 °C is not likely to be relevant for human health risks at all.
With this information at hand, the expert returns to the consumer, who has put the food in her freezer during the expert’s research. He explains his findings, concluding that the probability that the concentration of B. cereus exceeds the critical limit in her food product is small. Considering that she is not young, old, pregnant or immunocompromised (YOPI), she may decide to eat it, but she should be aware that there is almost always uncertainty in predictions in food microbiology. Note that this conclusion is only partially based on predictive models; additional microbiological knowledge is indispensable in interpreting the results. Also, an advanced analysis including the variability and uncertainty attending the model prediction is useful. Although still not yielding a precise answer, it gives insight into the probability of growth to high concentrations that deterministic models alone cannot provide.
4.9 Categorising questions of food professionals
As illustrated in the case study, communication of modelling results is an essential aspect in the application of predictive models. The food professional, like a food safety risk manager or a food processor, who applies predictive models should always be aware of the uncertainty and variability that is inevitably associated with food microbiology. Uncertainty is always part of the prediction obtained from a food microbiology model. If food professionals or the general public are not explicitly informed about this, the credibility of predictive modelling in the longer term may be affected. A crucial aspect in the communication between a predictive modeller and a food professional is the precise question to be answered. This question has strong implications for the model that has to be applied and for the importance of explicitly incorporating uncertainty and variability in these models. Communication about the research question is extremely important. Basically such research questions can be categorised as follows:

1. The expected concentration at time t = t*. A question that can typically be answered with deterministic predictive models relates to the expected concentration of a given microorganism at a point in time t*, given an initial concentration and some growth conditions. Note that the answer will yield an indicative value only, which may, however, offer sufficient information to know whether additional action is required.

2. When is the expected concentration N > N*? This is a similar question to the previous one, which can also be answered with a deterministic predictive model. This is typically of interest in exploring shelf life and acceptable time–temperature profiles for food products. If we know what the model predicts about the time it takes before the concentration passes a critical concentration N*, a critical storage time can be derived. Again, this is an indicative value. By using a ‘worst case’ scenario, safety limits can be derived.

3. What is the expected relative frequency p of N > N* at t = t*? With this question, the concept of variability is introduced. Apparently it is assumed that the expected relative frequency (or probability) p asked for is not likely to be zero or one. Here we need a predictive model that includes important sources of variability, such as between food products and microbial strains, and possibly also variability in growth conditions, for instance the temperature. For some strains, food products and/or temperatures, the concentration will be larger than N* at point in time t*, for some it will not. In this type of question a value of p > 0 is acceptable: the critical concentration N* may be undesirable but, to some extent, it is allowed. If not, the actual question will probably be one of the previous category (2), because we aim at a probability of zero. Again, the answer is indicative in being an estimated probability, because uncertainty is not incorporated. The approach applied in the case study assumes that the question of the consumer was one of this category.
4. How sure am I about N > N* with probability p = 5% at t = t*? This type of question is typically applied in quantitative risk assessment. Both variability and uncertainty are included in the question and therefore both need to be included in the predictive model. Here, one wants to know, for example, how likely it is that only one in 20 food packages contains more than N* cfu after a storage time t*. The variability between packages has to be included in the model, to check the 5% probability. By quantifying the uncertainty about the model parameters, the credibility of the answer can be assessed. This question may be difficult to answer, as in the example given in the case study. Still, it is a valid question, which requires transparency in the communication about uncertainty.

Two final remarks are in place to conclude these four categories of questions. First, note that the question ‘How sure am I that N > N* at t = t*?’ is missing. It differs from the category (3) questions in that it would relate to uncertainty only, assuming that variability can be neglected. This is probably a rather unlikely situation in food microbiology and therefore it is not considered here. Second, one should be aware that no predictive model is ever sufficient to answer these questions unless at least some basic additional information is available. If, for example, the initial concentration and the time–temperature profile are unknown or uncertain, a major source of uncertainty may be introduced that can upset even the most advanced model. In the examples given above this information was assumed to be known, but in practice this has quite frequently proven to be a considerable concern.
4.10 GMPs for unpredictable microbes
In the food safety domain GMP is a well-known abbreviation of Good Manufacturing Practices which are general, common-sense practices to prevent (cross-) contamination and bacterial growth. They need no further specification in, for example, Hazard Analysis and Critical Control Point plans. Here it is proposed to apply GMP as Good Modelling Practices and Good Microbiology Practices, some common-sense good practices when dealing with variability and uncertainty in predictive modelling and food microbiology. In food safety one generally has to deal with non-standardised conditions and wild-type microorganisms. As a consequence, variability is an intrinsic characteristic of microbial food safety. Models that aim to describe what happens with microorganisms in food which is on the market for the general public should therefore be able to include this variability. As indicated in the section on variability, this seems to contrast with the common scientific approach to food microbiology. Scientific experiments are designed to be repeatable, so experiments are done with the same (laboratory) strains at many places in the world, with identical growth media and standardised methods. Good research is generally considered to yield significant results, which demands small confidence intervals. This is a good approach if these confidence intervals represent uncertainty (as they should). However, if uncertainty and
variability are mixed up here, this may imply that a description of variability is considered as bad experimental results or bad science. Microbiological researchers should be well aware of the difference between variability and uncertainty: the former may be highly valuable information, while the latter is what one should try to minimise. To deal with this type of problem, some GMPs are proposed here for both microbiology and modelling. This list does not pretend to be complete, but merely aims to guide further discussion on the incorporation of uncertainty and variability in food microbiology and predictive modelling.

4.10.1 Good Microbiology Practices
• Always present information on variability in experimental results. A published confidence interval, such as those given in the PMP, is not easily interpreted as representing either uncertainty or variability or both. So if a confidence interval is calculated, try to describe what this interval represents. Is it only the uncertainty about a fixed parameter value, or does it also include the variability of this value? If this is not clear from the experiments, this should be stated. The alternative is that readers of a research paper make their own interpretation of the confidence interval and decide for themselves whether it is appropriate, whatever that interpretation is.
• Clearly state sources of variability, such as the number of strains considered. For example, the application of a mixed inoculum may not be very helpful in experiments designed to develop predictive models that include variability. In that case, growth data always represent the fastest growing strain, and inactivation data represent the least sensitive strain. The data obtained can be used for ‘worst case’ models, but not for models that aim to assess probabilities of growth or survival of a bacterial species in a food product.
• As a golden rule, do not be afraid of stating uncertainty. Uncertainty is an intrinsic aspect of microbiology, which should be acknowledged. This includes the representativeness of the data obtained: for the broader interpretation of the data it may be helpful to state for which conditions, strains, food types, etc. they are considered to be representative, in order to prevent improper use of the data. Another interesting aspect of uncertainty is that if a microbiologist cannot manage to obtain properly repeatable experimental results, they may be on the track of a highly interesting phenomenon and it may be worthwhile publishing it. Identifying uncertainty need not be bad science, as it may lead the way to scientific progress.
• In microbiological practice, a general problem is that the uncertainty in results may be too large for the variability to be detected and quantified. If, for example, growth experiments are done with a set of different strains, it is not clear whether the differences in the data are to be interpreted as variability between strains or as uncertainty (due to the experimental methodology, stochasticity or, in this context, variability within strains). This problem is complex, but can partly be solved by characterising the variability in results of repeated experiments
with the same strains (Nauta and Dufrenne, 1999). Note that in such an experiment, a broad confidence interval is not a nuisance, but offers valuable information.
• For the purposes of modelling, raw data are valuable. Experience has shown that published data are often insufficient for the purpose of modelling variability in predictive modelling and risk assessment. Means and standard deviations alone often do not yield sufficient information to describe uncertainty and variability. Raw data, that is all the basic data obtained in the experiment without any interpretation, may be required if the analysis is to fit the purpose of the modeller. For example, raw enumeration data of a dilution series may yield information on the homogeneity of the distribution of cells, and thus on the uncertainty attending the method used. Fortunately more and more raw data have become available through the internet since the 1990s. This has now, for example, been standardised for micro-array data. It may offer good opportunities for new development in this research area in the near future.
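To illustrate the point about raw enumeration data, the short sketch below (using invented plate counts, for illustration only) applies a simple index-of-dispersion check: under a homogeneous (Poisson) distribution of cells over replicate plates, the variance of the counts should be close to their mean, and a chi-square statistic indicates whether the observed scatter is larger than counting error alone can explain.

```python
import numpy as np
from scipy import stats

# Illustrative replicate plate counts from one dilution (colonies per plate)
counts = np.array([23, 31, 19, 27, 40, 22, 25, 33])

mean, var = counts.mean(), counts.var(ddof=1)
d2 = (len(counts) - 1) * var / mean      # index of dispersion, chi-square under Poisson
p_value = stats.chi2.sf(d2, df=len(counts) - 1)

print(f"mean = {mean:.1f}, variance = {var:.1f}, dispersion = {d2:.1f}")
print(f"P(homogeneous Poisson gives this much scatter) = {p_value:.3f}")
# A small p-value suggests over-dispersion: extra variability (clumping,
# pipetting, strain differences) on top of the Poisson counting uncertainty.
```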
4.10.2 Good Modelling Practices
An important difference between models and data analysis is that a model is completely defined by the modeller. In contrast to the interpretation of data, in models it is up to the modeller to decide whether the probability distributions used represent variability or uncertainty or a combination of these. Whether this decision fits with the nature of the data is important, but it is a matter of discussion that need not interfere with the construction of the model itself.
• If models include probability distributions, it is essential to explicitly state what these distributions represent. If it is variability, it should be made clear whether this is variability between strains, variability in time, variability between food items or something else. If these things are mixed up, it is very likely that what the variability in the model output represents will be obscure, and this may lead to a wrong interpretation of results.
• If a model simulates a food production process or the transmission of a microorganism through a food chain, uncertainty can be identified in a thought experiment. Pretend that everything that can be known is known. For example, assume that a predictive model is perfect, and that parameters like a and Tmin as applied in the case study are known. There is no uncertainty. Do you then assume that they are fixed for all strains or not? If so, there is no variability between strains in your model; if not, there is variability between strains. Pretending perfect knowledge in this way can neatly identify the assumptions of the modeller on uncertainty and variability in the distributions that are applied in the models. Uncertainty is cancelled out by pretending perfect knowledge. By coming back to earth and considering the information actually available, uncertainty can be brought back into the model again.
• A probability distribution describing variability can itself be uncertain if the variability is not known exactly. The best way to describe this is to first describe the variability (e.g. using the GMP in the previous bullet) and then use new probability distributions to describe the uncertainty of the parameters describing the variability. So if, for example, the variability within a population is given by a Normal distribution with parameters µ and σ, the uncertainty can be described by other probability distributions of these parameters µ and σ.
• Uncertainty can be very difficult to quantify. In that case, the options are to neglect uncertainty and make qualitative statements about the uncertainty in the conclusions, or to assess the uncertainty throughout the model for all uncertain parameters, by expert opinion or otherwise. Assessing the uncertainty in the model output while incorporating uncertainty for some uncertain parameters but neglecting it for others is not a meaningful strategy, because it is unclear how this uncertainty in the model output should then be interpreted.
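The last two bullets can be made concrete with a small second-order (nested) Monte Carlo sketch in Python: variability within a population is described by a Normal distribution with parameters µ and σ, and the uncertainty about µ and σ is described by probability distributions of their own. All numerical values below are invented and serve only to show the separation of the two loops.

```python
import numpy as np

rng = np.random.default_rng(42)

n_uncertainty = 500      # outer loop: samples of the uncertain parameters
n_variability = 2000     # inner loop: samples of the variable quantity

p_estimates = []
for _ in range(n_uncertainty):
    # Uncertainty: mu and sigma are not known exactly (hypothetical distributions)
    mu = rng.normal(loc=2.0, scale=0.2)        # uncertain mean log10 increase
    sigma = rng.uniform(0.3, 0.6)              # uncertain standard deviation

    # Variability: between packages or strains, given these parameter values
    log_increase = rng.normal(loc=mu, scale=sigma, size=n_variability)
    p_estimates.append(np.mean(log_increase > 3.0))   # fraction exceeding 3 log10

p_estimates = np.array(p_estimates)
print(f"median probability of exceeding 3 log10: {np.median(p_estimates):.3f}")
print(f"95% uncertainty interval: {np.percentile(p_estimates, [2.5, 97.5])}")
```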
4.11 Current developments and future trends: towards novel predictive models

Traditionally, aspects of uncertainty and variability have received only limited attention in predictive modelling. In particular, the increasing application of quantitative risk assessment in food microbiology has strengthened the demand for the incorporation of uncertainty and variability in predictive models (Nauta, 2002). This demand may give rise to the development of a novel type of predictive model, to better answer the type of questions listed as categories (3) and (4) in Section 4.9. Specific research in both food microbiology (to obtain the appropriate data) and predictive model development is required.
In predictive modelling, there is an increasing awareness of the need to include variability. Here, the description of inter-individual variability is predominantly the task of experimental microbiologists who study growth and inactivation characteristics of large sets of strains (Nauta and Dufrenne, 1999; Pielaat et al., 2005). In addition, a rapidly increasing amount of data is becoming available from molecular genetic research, where DNA sequencing and studies on gene expression provide a promising tool to study variability within and between strains.
The study of stochastic variability has so far primarily been a task for (predictive) modellers, with a recent increase in the availability of supporting experimental research. Stochastic variability plays a role during growth and inactivation, but is particularly relevant for the description of the lag phase. Its impact is largest in the context of small populations, where the unpredictable behaviour of single cells has a profound effect on the total population size and thus on the variability between populations and the uncertainty of single events. The development of stochastic modelling of the bacterial lag phase was initiated by Baranyi (1998), who illustrated the difference between the individual lag and the population lag due to the stochasticity of individual cells. The lag phase duration of a single cell can be considered as a random stochastic event. The time to the first doubling follows an exponential distribution if the cell has a constant
probability p per time unit to start doubling, but other distributions are possible as well (Francois et al., 2005). As soon as the first cell starts doubling, the growth of the population has started, which suggests that the population lag will on average be smaller than the individual lag. As a consequence, the lag phase of a smaller population will on average last longer. Kutalik et al. (2005) elaborated on Baranyi’s (1998) proof of this finding. These authors show that, although the mean population lag-time is indeed larger for smaller populations, the increased variance in lag-times dominates this effect. As a consequence, it is difficult to experimentally validate the expected increase in lag phase for small populations (Augustin et al., 2000; Smelt et al., 2002). For predictive modelling this increased variance, which represents a large uncertainty in a single event and a large variability in lag-times between (small) populations, is at least as relevant as the increased mean lag-time. It may have considerable impact on the precision of model predictions for small population sizes. Model predictions obtained from data on large population sizes cannot be directly applied to small population sizes, for instance those of pathogens that remain undetected in a food product.
Stochasticity in growth and inactivation has been further investigated by, for example, Juneja et al. (2003), Marks and Coleman (2005), Soboleva et al. (2000) and Shorten et al. (2004). These authors apply advanced mathematical techniques in their analyses, for which the reader is referred to their papers. Their studies yield improved scientific insight into the nature of variability caused by stochastic events in microorganisms and the impact of differences in population sizes on the growth dynamics of bacteria. However, the application of these studies and the techniques developed in food microbiology and risk assessment is currently not straightforward, for example because the data demands are large. Also, only a part of the total variability can be analysed, which need not be representative for the research questions at hand. So although important progress is being made in this area of research, and more is expected as experimental evidence accumulates, future work should focus on a broader field of applications.
Research that specifically aims to identify uncertainty and variability as separate sources of variation in experimental work is also important for future developments in predictive modelling (Ritz-Bricaud et al., 2003). One approach to doing this may be to unravel the stochasticity attending the counting method [either plate counts or Most Probable Number (MPN)] by building a simulation model of the experiment at hand. This may enhance insight into the expected uncertainty of experimental results, and it could offer a tool for comparison with the experimental results obtained. If the experimental uncertainty is larger than predicted by the simulation model, specific experiments may be developed to further explore sources of uncertainty and variability.
Further development of predictive models may concentrate on a description of the stochasticity that is naturally associated with bacterial growth and inactivation, as discussed above. Models may also be further developed to include a description of other sources of variability that frequently recur. A good example of this is the variability in time–temperature profiles and the development
of time–temperature integrators (TTIs) (Taoukis et al., 1999; Nauta et al., 2003). A technique that is very suitable for the incorporation of uncertainty even in complex food chain models is offered by Bayesian belief networks, which are now also increasingly applied in predictive modelling (Barker et al., 2002; Malakar et al., 2004).
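The consequence of individual-cell stochasticity for the population lag can be illustrated with a simple simulation. The sketch below is indicative only: it assumes exponentially distributed individual lag-times and a fixed post-lag growth rate (both values invented) and computes the ‘effective’ population lag obtained by back-extrapolation for different inoculum sizes, showing both the smaller mean and the much smaller variance for large populations.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_individual_lag = 5.0   # h, mean of the exponential individual lag (assumed)
mu = 0.7                    # specific growth rate after the lag (ln units/h, assumed)

def population_lag(n_cells, n_runs=5000):
    """Effective population lag (back-extrapolation definition) for n_cells cells."""
    lags = rng.exponential(mean_individual_lag, size=(n_runs, n_cells))
    # At a late time t the population is sum_i exp(mu*(t - lag_i)); the
    # equivalent lag L solves n*exp(mu*(t - L)) = that sum.
    return -np.log(np.mean(np.exp(-mu * lags), axis=1)) / mu

for n in (1, 10, 100, 1000):
    L = population_lag(n)
    print(f"N0 = {n:5d}: mean population lag = {L.mean():.2f} h, sd = {L.std():.2f} h")
```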
4.12 Conclusions

The aim of this chapter has been to explain and discuss the problem that predictive models cannot provide exact predictions of what will happen in food microbiology. This is not surprising because, by nature, uncertainty and variability play an important role in both foods and microbiology. A first important issue is that this role is recognised by those who apply predictive models. Second, it still is a major challenge for both predictive modellers and food microbiologists to develop methods to deal with variability and uncertainty. Since the rise of microbiological risk assessment, which explicitly aims to quantify the uncertainty and variability, the challenge has become of greater importance. Next to explaining the problems at hand, this chapter aimed to initiate discussions and supply some ideas that may be helpful for future development of this intriguing topic in predictive modelling in food microbiology.
4.13 References

Augustin JC, Brouillaud-Delattre A, Rosso L and Carlier V (2000) Significance of inoculum size in the lag time of Listeria monocytogenes, Appl Environ Microbiol, 66, 1706–10. Baranyi J (1998) Comparison of stochastic and deterministic concepts of bacterial lag, J Theor Biol, 192, 403–408. Baranyi J, Roberts TA and McClure P (1993) A non-autonomous differential equation to model bacterial growth, Food Microbiol, 10, 43–59. Barker GC, Talbot NLC and Peck MW (2002) Risk assessment for Clostridium botulinum: a network approach, Int Biodeterior Biodegradation, 50, 167–175. Benedict RC, Partridge T, Wells D and Buchanan RL (1993) Bacillus cereus: Aerobic growth kinetics, J Food Prot, 56, 211–214. Francois K, Devlieghere F, Smet K, Standaert AR, Geeraerd AH, Van Impe JF and Debevere J (2005) Modelling the individual cell lag phase: effect of temperature and pH on the individual cell lag distribution of Listeria monocytogenes, Int J Food Microbiol, 100, 41–53. Juneja VK, Marks HM and Huang L (2003) Growth and heat resistance kinetic variation among various isolates of Salmonella and its application to risk assessment, Risk Anal, 23, 199–213. Kutalik Z, Razaz M and Baranyi J (2005) Connection between stochastic and deterministic modelling of microbial growth, J Theor Biol, 232, 285–99. Malakar PK, Barker GC and Peck MW (2004) Modeling the prevalence of Bacillus cereus spores during the production of a cooked chilled vegetable product, J Food Prot, 67, 939–46. Marks HM and Coleman ME (2005) Accounting for inherent variability of growth in microbial risk assessment, Int J Food Microbiol, 100, 275–87.
Nauta MJ (2000) Separation of uncertainty and variability in quantitative microbial risk assessment models, Int J Food Microbiol, 57, 9–18. Nauta MJ (2002) Modelling bacterial growth in quantitative microbiological risk assessment: Is it possible? Int J Food Microbiol, 73, 297–304. Nauta MJ and Dufrenne JB (1999) Variability in growth characteristics of different E. coli O157:H7 isolates and its implications for predictive microbiology, Quant Microbiol, 1, 137–155. Nauta MJ, Litman S, Barker GC and Carlin F (2003) A retail and consumer phase model for exposure assessment of Bacillus cereus, Int J Food Microbiol, 83, 205–18. Notermans S, Dufrenne J, Teunis P, Beumer R, Te Giffel M and Peeters Weem P (1997) A risk assessment study of Bacillus cereus present in pasteurized milk, Food Microbiol, 14, 143–151. Pielaat A, Fricker M, Nauta MJ and Van Leusden FM (2005) Biodiversity in Bacillus cereus, Report 250912004, Bilthoven, Netherlands, RIVM. Ritz-Bricaud M, Nauta M, Federighi M and Havelaar A (2003) What part of uncertainty and variability in the modeling of Campylobacter survival in frozen chicken meat? in JFM van Impe, AH Geeraerd, I Leguérinel and P Mafart (Eds), Predictive Modelling in Foods Conference Proceedings, Belgium, Katholieke Universiteit Leuven/BioTeC, pp. 332– 334. Shorten PR, Membre JM, Pleasants AB, Kubaczka M and Soboleva TK (2004) Partitioning of the variance in the growth parameters of Erwinia carotovora on vegetable products, Int J Food Microbiol, 93, 195–208. Smelt J, Otten GD and Bos AP (2002) Modelling the effect of sublethal injury on the distribution of the lag times of individual cells of Lactobacillus plantarum, Int J Food Microbiol, 73, 207–12. Soboleva TK, Pleasants AB and le Roux G (2000) Predictive microbiology and food safety, Int J Food Microbiol, 57, 183–92. Taoukis PS, Koutsoumanis K and Nychas GJ (1999) Use of time-temperature integrators and predictive modelling for shelf life control of chilled fish under dynamic storage conditions, Int J Food Microbiol, 53, 21–31. Valero M, Leontidis S, Fernandez PS, Martinez A and Salmeron MC (2000) Growth of Bacillus cereus in natural and acidified carrot substrates over the temperature range 5– 30 ºC, Food Microbiol, 17, 605–612. Vose D (2000) Risk Analysis: A Quantitative Guide (2nd edn), Chichester, John Wiley and Sons. Wijnands LM, Dufrenne, JB, Zwietering MH and Van Leusden FM (2006) Spores from mesophilic Bacillus cereus strains germinate better and grow faster in simulated gastrointestinal conditions than spores from psychrotrophic strains, Int J Food Microbiol, 112, 120–128. Zwietering MH, de Wit JC and Notermans S (1996) Application of predictive microbiology to estimate the number of Bacillus cereus in pasteurised milk at the point of consumption, Int J Food Microbiol, 30, 55–70.
5 Modelling lag-time in predictive microbiology with special reference to lag phase of bacterial spores

J. P. P. M. Smelt and S. Brul, University of Amsterdam, The Netherlands
5.1 Introduction: general aspects of lag-time
5.1.1 Introduction: relevance of lag-time in food microbiology
Many traditional food products are stabilised against microbial food poisoning or food spoilage for an unlimited period of time. As a consequence, no growth of pathogenic or spoilage microorganisms can be tolerated, and in these cases inactivation models or growth/no growth models can be used to predict stability. However, there is a tendency for consumers to ask for food that is healthier, closer to fresh food and without preservatives. Hence, there is a need for milder treatments. In many cases ‘absolute’ microbiological stability is no longer required, but the food should be safe and stable during the required shelf life. This applies particularly to chilled foods. Infectious microorganisms such as Salmonella and Campylobacter should always be reduced to extremely low levels, but toxigenic organisms such as Staphylococcus aureus and Clostridium botulinum may be present in low numbers as long as they are not able to grow. Some growth of non-pathogenic spoilage organisms such as yeasts and lactobacilli or non-pathogenic bacilli can be tolerated as long as no detectable signs of spoilage can be observed during the entire shelf life. Particularly in the latter two cases, knowledge of the growth kinetics of the relevant organism is necessary in order to predict shelf life. Whereas much
attention has been paid to modelling of growth rate, modelling of lag-time has received less specific attention until the last few years. In many cases, however, lag-time is the single most important factor in determining shelf life, particularly when toxigenic microorganisms are concerned. One of the reasons for this lack of attention is that growth rate is relatively simple to study as it is solely determined by intrinsic factors such as the composition of the medium and extrinsic factors such as temperature and atmosphere. Lag-time, however, is dependent not only on these factors, but also on others such as previous history of the cell, composition of the medium in which the inoculum was cultivated or stress conditions before inoculation into the new medium. These factors make lag-time more prone to variability.
5.1.2 Definition of lag-time
Confusion exists about the definition of lag-time. In the review by Swinnen et al. (2004) the various definitions of lag are covered. For instance, Pirt (1975) refers to the lag-time of individual cells as true lag, defined as the time from inoculation of one cell until the time of division. Apparent lag is defined as the time needed for a whole population to multiply by a factor of 2. Buchanan and Cygnarowicz (1990) determined the duration of the lag phase by calculating the second derivative of the Gompertz curve: the time at which the maximum growth acceleration occurred was considered to be the lag-time. Taking into account the small variations caused by fitting with the Gompertz equation and the relatively large errors brought about by microbiological factors, a pragmatic definition of the duration of the lag phase λ, advocated by Zwietering (2005), can be used in many cases as a simple alternative. It is most commonly used in food microbiology, and it is the time obtained by extrapolating the tangent of the growth curve at the time of fastest growth back to the inoculation level. This definition can be applied to every inoculum size, including a single cell. Throughout this chapter this definition will be used unless stated otherwise. Lag-time according to this definition is dependent on a variety of factors such as the percentage of viable cells, injured cells and the growth conditions of the inoculum.
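The tangent definition can be written out numerically for any fitted growth curve. The sketch below uses a Gompertz-type curve as commonly used in predictive microbiology, with invented parameter values, purely to show the mechanics: it locates the point of fastest growth and extrapolates the tangent back to the inoculation level.

```python
import numpy as np

# Illustrative Gompertz-type growth curve in log10 counts (hypothetical parameters)
y0, A, mu_max, lam = 2.0, 6.0, 0.5, 4.0   # log10 N0, increase, max rate (log10/h), 'true' lag (h)

def gompertz(t):
    return y0 + A * np.exp(-np.exp(mu_max * np.e / A * (lam - t) + 1))

t = np.linspace(0, 40, 4001)
y = gompertz(t)
slope = np.gradient(y, t)

i = np.argmax(slope)                           # time of fastest growth
lag_tangent = t[i] - (y[i] - y0) / slope[i]    # tangent extrapolated back to the inoculation level
print(f"maximum slope = {slope[i]:.3f} log10/h at t = {t[i]:.2f} h")
print(f"lag-time from the tangent definition = {lag_tangent:.2f} h")
```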
5.1.3 Relation between lag-time and microbial growth rate
In predictive microbiology, the relation between lag-time (λ) and generation time (tg) is commonly assumed to be proportional, as long as no sublethal injury has occurred before inoculation. This relation was statistically examined in nine published datasets, and for every dataset it was roughly proportional (Delignette-Muller, 1998). The proportionality seems to hold at suboptimal water activity or suboptimal pH. However, a more advanced study showed that the ratio (λ/tg) was not totally independent of the environmental conditions. In particular, a significant negative effect of the pH on this ratio was observed in five of the nine datasets. For modelling the environmental dependence of microbial growth parameters, some authors deal with λ and tg independently. Other authors only model the
environmental dependence of tg, assuming (λ/tg) to be constant. These two modelling methods were statistically compared for the nine datasets under study. Results differed from one dataset to another. For some, the model developed with a constant ratio (λ/tg) sufficed to describe the data, whereas for the others, independent modelling of λ and tg was more satisfactory (Delignette-Muller, 1998; Valero et al., 2003).
5.2 Lag-time of bacterial spores, transformation of the spore to vegetative cells
5.2.1 Germination and outgrowth stage
Although the definition of lag as mentioned above can be applied to cultures originating from spores, the lag of spores is different from that of vegetative cells. As expected, the lag of exponentially growing vegetative cells is negligible when they are transferred to an identical environment. Stationary phase cells always show a lag before they resume cell division because they have to adapt themselves to the new environment and leave the resting state. The physiological and molecular characteristics of bacterial spores are certainly dependent on the sporulation conditions (Cazemier et al., 2001; Oomes and Brul, 2004). Spores grown at suboptimal temperatures can germinate more easily and they are generally less resistant than those grown at optimum temperatures (Evans et al., 1997). However, it could be inferred from the results of Valero et al. (2003) that the ratio between the lag of spores of Bacillus cereus and the doubling time (or growth rate) remained constant irrespective of incubation temperature.
The transition of the spore to exponentially growing vegetative cells proceeds via various stages, as shown in Fig. 5.1. The spore in the dormant stage consists of a dry core surrounded by an inner membrane, the primordial cell wall and the cortex, consisting of two different types of peptidoglycan respectively (Atrih et al., 1998). The cortex is surrounded by the outer membrane, presumably the remnant of the membrane of the mother cell, and by a coat consisting of keratin-like proteins. The dormant spore is highly refractile and hence phase bright because of dehydration of the core and the very high levels of dipicolinic acid, a substance not found in vegetative cells. As shown in Fig. 5.1 a distinction is made between germination stage I, germination stage II and outgrowth. The spore is still phase bright at stage I, and it can stay in this stage for a long period of time. According to some authors (Keynan and Evenchick, 1969) it can even revert from this state to its original dormant stage. The underlying mechanism responsible for progress to stage I is still not known, but the results of the experiments of Craven (1988) suggest that a conformational change of spore protein(s) through weakening of hydrophobic molecular forces is involved. As the core must still be dehydrated in this stage, it is highly unlikely that the spore in stage I is metabolically active. In accordance with this, spores do not seem to need any nutrient to proceed to stage I.
Fig. 5.1 Schematic representation of the sequence of events during germination: germination stage 1 (cation release; some dipicolinic acid release; partial core hydration; can be facilitated by mild heat; no loss of dormancy, no loss of refractility), possible ‘activation’, germination stage 2 (triggered by germinants; once triggered the spore is committed to germinate; previous mild heating stimulates the transition; further dipicolinic acid release; core hydration and core expansion; loss of resistance and of dormancy; no protein synthesis needed for the start) and outgrowth (metabolism; RNA, protein and DNA synthesis; escape from the spore coats; cell division).
Spores in germination stage I can lose some dipicolinic acid and cations while the heat resistance is probably marginally lost by some hydration. Spores in stage I can easily proceed to stage II when they are brought into contact with germination factors such as L-alanine, inosine or the glucose-fructose-arginine-potassium system. When fully dormant spores come into immediate contact with nutrients, rapid progression to further stages of germination takes place. Stage I, as an intermediate stage between the fully dormant stage and the germinating spores that have lost refractility, is then difficult to distinguish. Relatively mild heating conditions can enhance germination of the spore to stage I, presumably due to some hydration. Germination to stage I can also occur at room temperature or lower and is commonly referred to as germination by ageing. When spores are severely injured by heat they lose much dipicolinic acid proportional to the severity of heating, but no refractility (Kort et al., 2005; Smelt et al., in preparation). Spores injured by heat, irradiation or chemical treatment lose their refractility more slowly. This is also reflected by longer lag-times and a larger proportion that do not germinate at all, finally resulting in a complete inactivation of the spores. As it is unlikely that the spore in germination stage I is metabolically active, it is difficult to imagine that repair of injury occurs at this germination state. The delay in germination to stage II is presumably also caused by partial injury of the germination system, e.g. enzymes involved in degradation of the cortex. The receptor site of L-alanine, located in the inner membrane, is another candidate for the delay. Foster and Johnstone (1987) isolated a germination-specific lytic enzyme (GSLE) that was
capable of cortex hydrolysis of Bacillus megaterium KM. Later studies with Bacillus subtilis have shown that hydrolysis of the cortex is complex and involves several hydrolases with different specificity. Sebald and Ionesco (1972) showed that heat injured spores of Clostridium perfringens and Clostridium botulinum type E needed lysozyme to recover. As will be shown below, ‘mildly injured’ spores can grow to a full population almost quantitatively, albeit after a longer time. It can be expected that many other biomolecules in the spore which play a role in the outgrowth stage are damaged. However, it is not known whether this outgrowth stage is also delayed when spores are injured before germination. When the spore is transferred to a suitable medium it is triggered to proceed to germination stage II characterised by swelling and loss of refractivity. Once spores are triggered to proceed to stage II they are committed to germinating. Short exposure to L-alanine followed by quenching with excess of the inhibitor Dalanine does not prevent the spore from entering stage II, i.e. losing refractility. Thus it has been concluded that the spore at the point of exposure to L-alanine is committed to germinating. There is a time lapse of some minutes between commitment and release of calcium, which in turn precedes loss of refractivity. Previous heat activation enhances commitment of spores to germinate (Stewart et al., 1981). Transition to stage II can proceed without protein synthesis (Chea et al., 2000), and the presence of atmospheric conditions does prevent clostridia from germinating (Plowman and Peck, 2002). The metabolically inactive state is also illustrated by the observation of Moir (2003) who observed that proteins were immobile in the earliest stages of spore germination. Transition from the fully dormant spore to stage II would sometimes occur below minimum growth temperature as was observed for C. botulinum (Grecz and Arvay, 1982). In a spore population there is always some variation in time for the dormant spore to proceed to stage II. The extent of variation depends on the strain. This time to completion of transition to germination stage II is certainly dependent on the degradation rate of the cortex which is generally in the time range of several minutes. After completion of germination stage II, outgrowth of the spore occurs and it is known that at that stage the spore is clearly more vulnerable to heat than in the previous stage (G. Franco-Salinas, 2005, pers. comm.). The cell is now performing functions similar to those of vegetative cells. It is not known whether cells originating from the spore gradually become more stress resistant after some divisions compared to their physiological state during or immediately after outgrowth. Whilst previous heating of dormant spores in distilled water or other media enhances spore germination and outgrowth when transferred to a suitable medium, more severe heating conditions result in slower germination and outgrowth. Finally, upon a truly harsh thermal treatment a decrease of the number of spores that are able to germinate and grow out to a whole cell population is observed. Recently we have noted that the extent of spore rRNA degradation is a useful predictor of the fate of spores after a thermal treatment (Keijser, 2007). From our own observations on spores of B. subtilis 168 and A163 we inferred that spores that were heated in distilled water are always phase bright immediately
after heating, irrespective of the heating temperature. When spores are inoculated into a medium that is made suboptimal by adding salt or lowering the pH, both the occurrence of phase darkness and the subsequent outgrowth are delayed. The mechanism responsible for this behaviour is not known. Spores heated at sublethal temperatures in distilled water stay phase bright, not only directly after heating in distilled water but also when suspended in conditions suboptimal for normal growth, e.g. suboptimal pH or suboptimal water activity (Stewart et al., 1981).
5.2.2 Environmental effects on germination and outgrowth
The effect of incubation temperature on the transition to the various stages has not received much attention, apart from some older reports and, more recently, the studies described by Chea et al. (2000). Transition from dormant (phase bright) spores to germinated, phase dark spores, followed microscopically or by optical density measurements, can proceed fast at room temperature, and it is to be expected that this process will proceed more slowly at lower temperatures. Consequently, it is even more difficult to measure the transition times from phase darkness to outgrowth and the first divisions. Modelling lag-time of sporeformers is further complicated by the effect that cell density has, presumably through quorum sensing, on the duration of the lag phase (Lankford et al., 1966; Caipo et al., 2002; Zhao et al., 2003). Dipicolinic acid is one of the candidates for quorum sensing as it is released easily and it can act as a germinant, presumably by activating lytic enzymes. The inactivation targets in the spore of a number of well-known environmental stress conditions differ. For instance, the target of wet heat is completely different from that of dry heat (Setlow and Setlow, 1996). The same holds for ultraviolet light (Setlow, 2001). High-pressure treatment leads to a loss of spore refractility as well as of small acid-soluble proteins (SASPs) at pressures of about 100 MPa. Upon applying higher pressure levels, the spore remains refractile and the SASPs remain intact (Wuytack et al., 1998). In contrast to sublethal heat, the effect of other sublethal stress conditions on the duration of the lag-time is not known.
5.2.3 Experimental approaches
As mentioned above, the previous history of vegetative cells plays an important role in the duration of the lag, and this also seems to be the case for bacterial spores. For instance, Levinson and Hyatt (1964) demonstrated that the presence of calcium and manganese during sporulation increased the germination efficacy and that, by itself, the presence of calcium during sporulation resulted in enhanced heat resistance. Subsequently this subject received virtually no further attention. Recent studies cast new light on it and, in fact, in retrospect the reported observation is unexpected, as heat-resistant spores are generally more difficult to germinate than heat-sensitive spores (Oomes et al., unpublished observations, also discussed in Oomes et al., 2007). Whatever the case, it is clear that sporulation conditions play a prime role in determining the length of the lag phase. As indicated earlier, it is generally crucial to determine in a population the
germination lag phase of individual spores in order to be able to arrive at predictions which are of use in food microbiology, where more often than not one deals with low numbers. A reasonable estimate of the distribution of lag-times of individual cells can be obtained by making serial dilutions in, for example, microtitre plates in many replicates, followed by recording the time to turbidity of individual wells. When only a small proportion of the wells inoculated with the most dilute suspension shows growth, these wells can be considered on statistical grounds as suspensions originating from individual cells. If one is able to use a flow cytometer with a sorting option, this allows for the deposition of individual spores in single wells. Such an approach is in principle similar to the one outlined above, but clearly even more efficient and reliable. As mentioned before, germination can also be followed by measuring the fall in optical density or by microscopic observation. The measurement of membrane properties with specific fluorescent dyes may offer further potential to study the physiology of single germinating spores and the resulting outgrowing cells (Ferenko et al., 2004). As will be outlined in the next section, these techniques can in principle be applied to build a predictive model that describes the various stages during lag. The time lapse between inoculation into a medium and the change in refractility is sometimes referred to as microlag. Whereas microlag refers to the time between inoculation and the loss of refractility of the spore, true lag for one cell will be defined here as the time from inoculation of the refractile spore into a suitable medium until the first cell division, in other words according to the definition mentioned in the first paragraph. The apparent lag as defined above is dependent on the size of the inoculum and the distribution of the true lag-times.
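The statistical argument for treating growth at the limiting dilution as originating from single spores can be made explicit with the Poisson distribution, as in the sketch below (the fraction of positive wells is an invented example value): when only a small fraction of the wells is positive, the chance that a positive well received more than one spore is small.

```python
from math import exp, log

# Suppose a fraction f of the wells at the lowest dilution shows growth (illustrative)
f_positive = 0.1

# Under a Poisson distribution of spores over wells, the mean number per well is
m = -log(1 - f_positive)
p_one = m * exp(-m)                        # P(exactly 1 spore in a well)
p_single_given_growth = p_one / f_positive

print(f"mean spores per well           : {m:.3f}")
print(f"P(single spore | well positive): {p_single_given_growth:.3f}")
# With 10% positive wells, about 95% of the positive wells contain a single spore.
```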
5.3 Quantitative aspects, mathematical modelling
5.3.1 General aspects
As will be dealt with below, the mechanism of germination and outgrowth can sometimes be an aid in developing models that allow a quantitative estimate of the length of the lag-time. On the other hand, in real food situations quantitative knowledge of the effect of various environmental conditions such as temperature, pH and water activity on lag-time in general is often needed more for the development of models predicting the duration of the lag-time. There seems to be a gap between the knowledge of the physiology and molecular biology of the development of the spore to vegetative cell on the one hand and the purely descriptive growth models in which the lag-time is an intrinsic part on the other hand. For classical ‘descriptive predictive’ modelling, the general recommendations for modelling growth of microorganisms in foods given by McClure et al. (1994) are still valid and also relevant to lag-time of bacterial spores. According to these recommendations, modelling proceeds in the following steps: selection of relevant organisms; the use of a pure culture or a mixed culture; experimental design (temperature, pH, water activity and possibly other factors); preparation of the
inoculum; choice of the medium; and choice of the primary mathematical model. The dependence of the parameters of the primary model is mostly modelled as a polynomial equation. The final stage is the validation in real foods. Contrary to maximum growth rate, which is almost entirely dependent on the conditions during growth, special attention has to be paid to the preparation of the inoculum when lag-time is modelled. To make a reliable estimate of the lag-time, the following data should be known: initial inoculum and knowledge of at least one time-point where growth rate is maximum and the corresponding number of cells at that time-point. A complication in lag-time models might be that even in a genetically homogeneous population there is always a certain distribution of lag-times of individual cells that is even larger for lag-times originating from spores than for those originating from vegetative cells (Smelt et al., 2006). It is clear that the variability in spore germination response accounts for a great deal of the variability of lag-times (Chea et al., 2000; Alberto et al., 2003). Knowledge of the distribution of lag-times is particularly relevant when there is not only a worst case scenario to be worked out, but also a detailed risk assessment. The size of the inoculum to be chosen will depend on the expected initial contamination. As the fastest growing cell almost entirely determines the growth curve it can be predicted from single-cell experiments that the variance can be considerably reduced. This is illustrated in Fig. 5.2a and b where observed data are given on the development of B. subtilis growth curves originating from one spore and ten spores per well respectively. As mentioned above, the length of the apparent lag-time can be explained by statistical distributions. As mentioned in the introduction, there is also a need for more detailed knowledge of the behaviour of germinating spores until the moment of maximum growth of the originating vegetative cell population. For instance, there are only a few observations of the effect of incubation temperature on the duration of the various stages of spore germination and outgrowth ranging from transition from the refractile stage to the phase dark stage, outgrowth stage and first cell divisions. Knowledge of these facts will be an aid in preservation as the sensitivity of these stages to adverse environmental conditions is different. Data on combined effects of temperature, salt and pH on these stages are completely lacking. The time/ temperature combinations that can shorten the lag – ‘activation’ temperature – are generally positively correlated to the heat resistance of the spore. Individual cells of spore populations always show a relatively large variability (Fig. 5.2a and b). This variability seems to decrease when spores are heat activated under the appropriate conditions. Due to statistical effects, an increase in the size of the inoculum will reduce the variability of the lag and also its duration. As growth is exponential after a certain period of time, the population originating from the cell with the shortest lag will determine almost entirely the number of cells of the whole population as shown in Fig. 5.2. If spores do not interfere with each other the effect of inoculum size can be estimated when the distribution of lag-times is known. The fastest growing cell population originating from a single cell usually determines the growth of a population originating from more cells. The differences
of the growth curves are almost exclusively determined by the lag, as the maximum growth rate of genetically identical cells in identical conditions is expected to be the same. Monte Carlo studies show that an inoculum of 100–1000 cells is sufficient to compensate for much variation in lag-time. As mentioned above, there are some reports that, apart from the statistical effects, germination of spores can be enhanced by quorum sensing. The minimum level required for demonstrable effects of quorum sensing is not known, but it seems likely that it is well above 10⁴–10⁵ cells/ml.

Fig. 5.2 Effect of inoculum size on lag-time of Bacillus subtilis A163. Figure 5.2a gives the results for a situation with one cell per well. Figure 5.2b gives the results for a situation with ten cells per well.
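The statistical effect of inoculum size seen in Fig. 5.2 can be reproduced with a small Monte Carlo sketch. The simulation below is illustrative only: it assumes exponentially distributed individual lag-times and a fixed post-lag growth rate (invented values) and reports the time for each simulated well to reach a detectable population, showing how the spread of detection times shrinks as the number of spores per well increases.

```python
import numpy as np

rng = np.random.default_rng(7)

mu = 0.8                  # specific growth rate after the lag (ln units/h), assumed
n_detect = 1e6            # detectable population size (cells/well), assumed
mean_lag, n_wells = 10.0, 2000

def detection_times(spores_per_well):
    """Time for each simulated well to reach n_detect cells."""
    lags = rng.exponential(mean_lag, size=(n_wells, spores_per_well))
    # cells(t) = sum_i exp(mu*(t - lag_i)) once the lags have passed; solve for t:
    return (np.log(n_detect) - np.log(np.sum(np.exp(-mu * lags), axis=1))) / mu

for n in (1, 10, 100):
    t = detection_times(n)
    print(f"{n:4d} spores/well: mean detection time {t.mean():5.1f} h, sd {t.std():4.1f} h")
```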
5.3.2 Choice of mathematical models
Most mathematical models in food microbiology are purely descriptive and, in general, there is much freedom in choosing the appropriate model. It is advisable
that descriptive models in particular are kept as simple as possible and contain as few parameters as possible, because the latter often have no biological meaning in a descriptive model. It is only recently that models able to predict microbial growth/no growth boundaries based on a mechanistic approach have become available for other microorganisms (Klipp et al., 2005; F. Mensonides, Chapter 12). It goes without saying that the models should meet the usual statistical criteria. In many cases, the differences caused by the choice of models are small compared to other, microbiological sources of variance. If possible, a linear model is to be preferred, as the parameters can then be calculated rather than estimated. Non-linear models are often difficult to handle, particularly when censored data are incorporated. In most cases, however, the development of a model will end with a non-linear model. Although mathematical tools combined with software have made much progress in estimating the parameters of non-linear models, there are many pitfalls, such as local minima (a parameter estimate that only holds within a certain range) or correlated parameters, by which the building of secondary models is hardly possible. For more details on non-linear modelling the reader is referred to Ratkowsky (1983).
In predictive modelling of the behaviour of microorganisms in food, sources of variance are under-reported. To overcome this problem, worst case scenarios are mostly assumed, e.g. in Food MicroModel (McClure et al., 1994). This is probably one of the reasons why predictive modelling is still not generally used in practical situations. As pointed out previously, this is particularly relevant for lag-times. Many lag-time models for spores describe general lag-times of whole populations as defined previously. However, the dynamics of the various stages of germination may sometimes give a better prediction of growth of microorganisms in foods compared to general lag-times alone. Ideally, a full descriptive model of the lag-time from spores should include: (i) the effect of the inoculum size on the lag phase; (ii) the effect of the distribution of lag-times; (iii) the effect of previous treatment of the spore; (iv) the effect of temperature, pH and aw on the kinetics of the microlag and the subsequent transition to phase darkness; (v) the effect of all above-mentioned factors on outgrowth; and (vi) the effect of these factors on the first stages of subsequent cell division.
There are only a limited number of models describing germination up until phase darkness. Older reports of models of the first stages of germination of spores do exist (McCormick, 1965; Vary and Halvorson, 1965; Woese et al., 1968). In these reports, detailed studies on spore germination, followed via the loss of phase brightness, have been described, and these kinetics could be described by a Weibull-like equation of the form

Y = exp(–k·t^a)

where Y = fraction of phase bright spores, k = reaction constant and a = shape parameter. If a = 1 the Weibull function becomes an exponential function. Both McCormick (1965) and Vary and Halvorson (1965) studied the effect of temperature during germination and determined the values of the parameters at different temperatures, but they did not develop a general model describing the effect of several parameters. Chea et al. (2000) developed a model
describing the effect of temperature, pH and sodium chloride on germination kinetics of spores of C. botulinum, following the germination of C. botulinum spores by means of phase contrast microscopy. They tried to fit the germination kinetics with several models including the Weibull distribution, but finally they chose the exponential distribution for reasons of simplicity. In these models, parameters are often highly correlated. We developed a model fitted according to the Weibull distribution. The results of the fit are shown in Fig. 5.3. Care should be taken that in these models the parameters are not highly correlated. Otherwise a secondary model is hardly possible.

Fig. 5.3 Effect of various thermal pretreatments – untreated (t0) or 10 min at 70, 80, 90 or 95 °C (referred to in the figure as 1070, 1080, 1090 and 1095 respectively) – on distribution of lag-times of Bacillus subtilis, fitted according to Weibull distributions. The graph shows the observed versus predicted data (cumulative number of positives versus lag time in hours).
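A Weibull-type germination curve of the form given above can be fitted to time–germination data by non-linear least squares, and the parameter correlation mentioned above can be inspected from the covariance matrix. The sketch below uses scipy's curve_fit on invented data and is meant only to show the mechanics.

```python
import numpy as np
from scipy.optimize import curve_fit

def phase_bright_fraction(t, k, a):
    """Weibull-type kinetics: fraction of spores still phase bright at time t."""
    return np.exp(-k * t**a)

# Illustrative data: time (min) and fraction of spores still phase bright
t_obs = np.array([2, 5, 10, 15, 20, 30, 45, 60], dtype=float)
y_obs = np.array([0.90, 0.72, 0.45, 0.30, 0.20, 0.10, 0.04, 0.02])

popt, pcov = curve_fit(phase_bright_fraction, t_obs, y_obs, p0=[0.1, 1.0])
k, a = popt
perr = np.sqrt(np.diag(pcov))
corr = pcov[0, 1] / (perr[0] * perr[1])

print(f"k = {k:.3f} +/- {perr[0]:.3f}, a = {a:.2f} +/- {perr[1]:.2f}")
print(f"correlation between k and a: {corr:.2f}  (high values hamper secondary modelling)")
```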
5.3.3 Experimental procedures for developing a mathematical model
For the estimation of lag-time, the initial number of viable cells or viable spores, the maximum growth rate and the cell number at at least one time-point when the population is at maximum growth rate should be known. An estimate of the number of viable vegetative cells can be made relatively easily by comparing most probable number (MPN) counts or plate counts with microscopic estimates in a counting chamber. Estimating the number of viable spores, however, is often more complicated, especially for thermophilic spores, as viable spores sometimes need heat activation to grow out. A good estimate can be obtained by giving untreated spores an appropriate heat shock and comparing the viable counts with microscopic estimates. It can be assumed that in most cases there is always a temperature range where spores can be activated without killing them. Sorting single spores into microtitre plates will give a precise and accurate estimate of the number of spores that can grow out to a whole population of over 1 000 000 cells. Maximum growth rate can be obtained by generating full growth curves. In most cases the curves are fitted with a Gompertz equation or the equation proposed by Baranyi and Roberts
(1993), and the maximum growth rate is calculated from the tangent at the point of inflection. This method is very accurate, but very labour-intensive when the curves are based on plate counts. Growth curves from optical density (OD) measurements are easy to generate but should be handled with care: whereas a good correlation often exists between plate counts or MPN counts and OD measurements, considerable deviations from the maximum growth rate often go unnoticed in OD measurements. A more accurate method is to measure optical growth curves from serial dilutions and then determine the relation between detection time and initial inoculum (Cuppers and Smelt, 1993; McKellar and Knight, 2000). The growth rate (b) can be calculated from the formula:

tdetect = [ln(Ndetect) – ln(Ninoc)]/b

where ln is the natural logarithm; Ndetect (the number of cells at which a significant change in turbidity of the medium, caused by growth, is detected) is a constant; Ninoc (the inoculum size) is the independent variable; and tdetect (the time at which the change in turbidity, or any other measurable change caused by growth of the microorganism, takes place) is the dependent variable. Note that if two inocula differ by a factor of 2, the difference in detection time is identical to the generation time, as shown by the next formula:

tdetect,2 – tdetect,1 = ln(2)/b = generation time

The method has to be applied with care, though. One must be sure that the apparent lag-times of the successive inocula are the same; this can be guaranteed by using inocula from the late exponential phase and only using inocula larger than 100. Furthermore, the time differences should be measured as early as possible and certainly not at the point when the stationary phase is reached. Although calibration of cell number against optical density is easy to perform, a slower growth rate can easily be missed in OD measurements as such. In static situations, combining growth curves and distribution curves is straightforward: a fixed number obtained from the growth curve is added to the distribution curve, as only the mean time but not the variance will change. In dynamic situations Monte Carlo simulations can be applied (Albert et al., 2005).
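The detection-time relation translates directly into a simple regression: plotting detection time against the natural logarithm of the inoculum gives a straight line with slope –1/b. The sketch below uses invented detection times to show the calculation.

```python
import numpy as np

# Illustrative serial-dilution data: inoculum size and time to detection (h)
n_inoc = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
t_detect = np.array([4.1, 7.5, 10.8, 14.3, 17.6])

# t_detect = ln(N_detect)/b - ln(N_inoc)/b, i.e. a straight line in ln(N_inoc)
slope, intercept = np.polyfit(np.log(n_inoc), t_detect, 1)
b = -1.0 / slope                       # specific growth rate (ln units/h)
t_gen = np.log(2) / b                  # generation time (h)

print(f"growth rate b = {b:.3f} 1/h, generation time = {t_gen:.2f} h")
# Check with the two-fold rule: inocula differing by a factor of 2 should
# differ in detection time by exactly one generation time.
```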
5.4 Lag-times in real foods
The final step is validation of predictive models in real foods. To this end, the models in laboratory media should be widely applicable and at the same time accurate. For instance, Stringer et al. (1999) have shown that addition of heat-treated vegetable juices to culture media can increase the number of colony forming units of heat-damaged spores of non-proteolytic C. botulinum. Many models lack data for reproducibility and repeatability. In France a project, the Sym’Previus programme, is running (Pinon et al., 2004) with the aim of constructing a database that now covers 50 bacterial strains including the sporeforming pathogens B. cereus and C. perfringens. Pinon et al. (2004) showed an example
describing the growth behaviour of Listeria monocytogenes and Escherichia coli O26, where variation of parameters such as lag-time was shown together with validation in chocolate cream, smoked salmon, crab sticks, poultry meat and potted meat. They apparently had a similar approach for C. perfringens and B. cereus. Dynamic models are relatively easy to build when secondary models are available. Integration over a time–temperature profile is particularly relevant for chilled foods. There is some evidence that bacterial spores can germinate below the minimum growth temperature, and in some instances a separate germination model has to be incorporated in lag-time models, as it might be expected that germinated spores can attain the maximum growth rate faster. However, it is not yet known how relevant this is for predictions of the safety and stability of foods. As with growth, germination of spores is delayed at suboptimal pH and water activity values.
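For such dynamic predictions, a secondary model for the growth rate can be integrated over the time–temperature profile. The sketch below does this with a square-root (Ratkowsky-type) secondary model and a stepwise temperature history; all parameter values are invented and the lag phase is ignored, so it shows only the mechanics of the integration.

```python
from math import log

# Illustrative square-root secondary model: sqrt(mu) = c * (T - Tmin) for T > Tmin
c, Tmin = 0.025, -2.0   # hypothetical values; c in sqrt(ln/h)/degC, Tmin in degC

def mu(T):
    """Specific growth rate (ln units/h) from the square-root secondary model."""
    return (c * max(T - Tmin, 0.0)) ** 2

# Stepwise time-temperature profile: (duration in h, temperature in degC)
profile = [(24, 4.0), (4, 10.0), (48, 7.0), (2, 20.0), (24, 5.0)]

ln_increase = sum(dt * mu(T) for dt, T in profile)
print(f"predicted increase over the profile: {ln_increase / log(10):.2f} log10 units")
```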
5.5 Conclusions
The relevance of lag-time for spores is dependent on the type of food. When the initial number of spores in foods is high, it is unlikely that lag-time of spores is particularly relevant for the prediction of microbiological safety and stability as it can be expected that lag-time will be relatively short. However, when a risk assessment is conducted for mildly heated foods with a limited shelf life, lag-time of spores can be relevant, particularly when spores can be expected in low numbers. The lag-time of heat-damaged spores is sometimes extremely long. Accurate prediction is important, as this will tell the food producer whether or not these spores can germinate within the foreseen shelf life of the food product.
5.6 References
Albert I, Pouillot R and Denis JB (2005) Stochastically modelling Listeria monocytogenes growth in farm tank milk, Risk Anal, 25, 1171–1185. Alberto F, Broussolle V, Mason DR, Carlin F and Peck MW (2003) Variability in spore germination response by strains of proteolytic Clostridium botulinum types A, B and F, Lett Appl Microbiol, 36, 41–45. Atrih A, Zollner P, Allmaier G, Williamson MP and Foster SJ (1998) Peptidoglycan structural dynamics during germination of Bacillus subtilis 168 endospores, J Bacteriol, 180, 4603–4612. Baranyi J and Roberts TA (1993) A non-autonomous differential equation to model bacterial growth, Food Microbiol, 10, 293–299. Buchanan RL and Cygnarowicz ML (1990) A mathematical approach towards defining and calculating the duration of the lag phase, Food Microbiol, 7, 237–240. Caipo ML, Duffy S, Zhao L and Schaffner DW (2002) Bacillus megaterium spore germination is influenced by inoculum size, J Appl Microbiol, 92, 879–824. Cazemier AEJ, Wagenaars SF and ter Steeg PF (2001) Effect of sporulation and recovery medium on the heat resistance and amount of injury of spores from spoilage bacilli, Appl Microbiol, 90, 761–70. Chea FP, Chen Y, Montville TJ and Schaffner, DW (2000) Modeling the germination kinetics of Clostridium botulinum 6A spores as affected by temperature, pH and sodium chloride, J Food Prot, 63, 1071–1079.
Craven SE (1988) Activation of Clostridium perfringens spores under conditions that disrupt hydrophobic interactions of biological macromolecules, Appl Environ Microbiol, 54, 2042–2048. Cuppers HGAM and Smelt JPPM (1993) Time to turbidity measurement as a tool for modeling spoilage by Lactobacillus, J Ind Microbiol, 12, 168–171. Delignette-Muller JR (1998) Relation between the generation time and the lag time of bacterial growth kinetic, Int J Food Microbiol, 43, 97–104 Evans RJ, Russell NJ, Gould GW and McClure PJ (1997) The germinability of a psychrotolerant non-proteolytic strain of Clostridium botulinum is influenced by their formation and storage temperature, J Appl Microbiol, 83, 273–280. Ferenko L, Cote MA and Rotman B (2004) Esterase activity as a novel parameter of spore germination in Bacillus anthracis, Biochem Biophys Res Com, 319, 845–854. Foster SJ and Johnstone K (1987) Purification and properties of a germination-specific cortex-lytic enzyme from spores of Bacillus megaterium KM, Biochem J, 242, 573– 579. Grecz N and Arvay LH (1982) Effect of temperature on spore germination and vegetative cell growth of Clostridium botulinum, Appl Environ Microbiol, 43, 331–337. Keijser BJF (2007) Control of preservatives by biomarkers, European patent application EP 05077246.6. Keynan A and Evenchick Z (1969) Activation, in GW Gould and A Hurst (Eds), The Bacterial Spore, London, New York, Academic Press, pp. 359–396. Klipp E, Nordlander B, Kruger R, Gennemark P and Hohmann S (2005) Integrative model of the response of yeast to osmotic shock, Nature Biotechnol, 23, 975–982. Kort R, O’Brien AC, van Stokkum IH, Oomes SJ, Crielaard W, Hellingwerf K and Brul S (2005) Assessment of heat resistance of bacterial spores from food product isolates by fluorescence monitoring of dipicolinic acid release, Appl Environ Microbiol, 71, 3556– 3564. Lankford CE, Walker JR, Reeves JB, Nabbut NH, Byers BR and Jones RJ (1966) Inoculumdependent division lag of Bacillus cultures and its relation to an endogenous factor(s) (schizokinen), J Bacteriol, 91, 1070–1079. Levinson HS and Hyatt MT (1964) Effect of sporulation medium on heat resistance, chemical composition and germination of Bacillus megaterium spores, J Bacteriol, 87, 876–886. McClure PJ, Blackburn CW, Cole M, Curtis PS, Jones JF, Legan DJ, Ogden ID, Peck MW, Roberts TA and Sutherland JP (1994) Modelling the growth, survival and death of microorganisms in foods: the UK food micromodel approach, Int J Food Microbiol, 23, 265–275. McCormick N (1965) Kinetics of spore germination, J Bacteriol, 89, 1180–1185. McKellar RC and Knight KA (2000) A combined discrete-continuous model describing the lag phase of Listeria monocytogenes, Int J Food Microbiol, 54,171–180. Moir A (2003) Bacterial spores germination and protein mobility, Trends in Microbiol, 11, 452–454. Oomes SJCM and Brul S (2004) Correlation of genome wide expression profiles of Bacillus subtilis cultured under different sporulation conditions to spore wet heat resistance, Innov Food Sci Emerg Technol, 5, 307–316. Oomes SJCM, Van Zuijlen ACM, Hehenkamp JO, Witsenboer H, van der Voessen JMBM and Brul S (2007) The characterisation of Bacillus subtilis occurring in the manufacturing of (low acid) canned products, Int J Food Microbiol, in press. 
Pinon A, Zwietering M, Perrier M, Membre JM, Leporq B, Mettler E, Thuault D, Coroller L, Stahl V and Vialette M (2004) Development and validation of experimental protocols for use of cardinal models for prediction of microorganism growth in food products, Appl Environ Microbiol, 70, 1081–1087. Pirt SJ (1975) Principles of Microbe and Cell Cultivation, London, Blackwell Scientific. Plowman J and Peck MW (2002) Use of a novel method to characterize the response of spores
of non-proteolytic Clostridium botulinum types B, E and F to a wide range of germinants and conditions, J Appl Microbiol, 92, 681–94. Ratkowsky DA (1983) Nonlinear Regression Modeling: a Unified Practical Approach, New York, Marcel Dekker. Sebald M and Ionesco H (1972) Lysozyme-proteolytic enzyme dependent germination of type E Clostridium botulinum spores, C R Acad Sci Hebd Seances Acad Sci D, 275, 2175– 2177. Setlow B and Setlow P (1996) Role of DNA repair in Bacillus subtilis spore resistance, J Bacteriol, 178, 3486–3495. Setlow P (2001) Resistance of spores of Bacillus species to ultraviolet light, Environ Mol Mutagen, 38, 97–104. Smelt JPPM, Bos AP, Kort R and Brul S (2006) Effect of sublethal heat treatments at various stages of subsequent Bacillus subtilis as reflected by dipicolinic acid releases and by the variability of the various stages of the subsequent lag time of spores, in preparation. Stewart GSAB, Johnstone K, Hagelberg E and Ellar DJ (1981) Commitment of bacterial spores to germinate. A measure of the trigger reaction, Biochem J, 198, 101–106. Stringer SC, Haque N and Peck MW (1999) Growth from spores of nonproteolytic Clostridium botulinum in heat-treated vegetable juice, Appl Env Microbiol, 65, 2136– 2142. Swinnen IAM, Bernaerts K, Dens EJJ, Geeraerd AH and Van Impe JF (2004) Predictive modeling of the microbial lag phase: a review, Int J Microbiol, 94, 137–159. Valero M, Fernandez PS and Salmeron MC (2003) Influence of pH and temperature of Bacillus cereus in vegetable substrates, Int J Food Microbiol, 82, 71–79. Vary JC and Halvorson HO (1965) Kinetics of germination of Bacillus spores, J Bacteriol, 89, 1340–1347. Woese CR, Vary JC and Halvorson HO (1968) A kinetic model for bacterial spore germination, Proc Natl Acad Sci USA, 59, 69–75. Wuytack EY, Boven S and Michiels CW (1998) Comparative study of pressure-induced germination of Bacillus subtilis spores at low and high pressures, Appl Environ Microbiol, 64, 3220–3224. Zhao L, Montville TJ and Schaffner DW (2003) Computer simulation of Clostridium botulinum strain 56 A behavior at low spore concentrations, Appl Environ Microbiol, 69, 845–851. Zwietering M (2005) Temperature effect on bacterial growth rate: quantitative microbiology approach including cardinal values and variability estimates to perform growth simulations on/in food. Int J Food Microbiol, 100(1–3), 179–186.
6 Application of models and other quantitative microbiology tools in predictive microbiology
J.D. Legan, Kraft Foods, USA
6.1 Introduction
Chapters up to this point have given detailed descriptions of how to design, build and validate good models, taking into account matrix structure and variability in experimental results. This chapter will focus on applications of the models thus constructed. Good models can be useful in many settings, for example: guiding research, establishing safe and stable food product formulations, setting safe processes, designing HACCP plans and guiding microbiological risk assessments. It is important to remember, however, that models are merely powerful tools. Like other powerful tools, they can help us to achieve more and better results when used appropriately, but they can give unexpected results in the hands of the unwary! Hence we need to be well aware of the capabilities and limitations of these tools before we start to use them. By the end of this chapter, readers will have an appreciation for the kinds of models that are available, for circumstances where models can be helpful and for appropriate precautions in their use. Readers will also be introduced to circumstances where other quantitative microbiological tools such as sampling plans or statistical process control approaches would be more suitable.
6.2 Definitions
• Database: a structured collection of information, typically stored in electronic form, for ease and speed of searching and retrieving information, and for security of archiving.
• Model: for the purpose of this chapter we will consider a model to be an equation that relates a response (e.g. amount or rate of microbial growth or death) to a level or change in some controlling factor (e.g. temperature, pH, water activity (aw) or preservative concentration).
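As a concrete illustration of what is meant here by a model, the widely used square-root type of secondary model expresses growth rate as an explicit function of temperature. The sketch below (in Python) is purely illustrative: the coefficient b and the notional minimum growth temperature are placeholder values, not fitted parameters for any organism or product discussed in this chapter.

    def sqrt_model_growth_rate(temp_c, b=0.023, t_min_c=-2.0):
        # Square-root-type secondary model: sqrt(mu_max) = b * (T - Tmin).
        # b and t_min_c are illustrative placeholders only.
        if temp_c <= t_min_c:
            return 0.0  # no growth predicted at or below the notional minimum
        return (b * (temp_c - t_min_c)) ** 2  # maximum specific growth rate, per hour

    for t in (4, 8, 12, 25):
        print(f"{t} deg C: mu_max approximately {sqrt_model_growth_rate(t):.3f} per h")

The point is simply that a model, in the sense used throughout this chapter, is an explicit function that can be interrogated for any in-range value of the controlling factor, rather than a table of isolated observations.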
6.3 Applications of models and databases
In engineering, sophisticated models of the static and dynamic behavior of structures, aircraft, cars, etc. can be built on the laws of physics. In microbiology there are not yet such widely applicable models that stem from a full theoretical understanding of the system. Current efforts in the area of ‘Microbial Systems Biology’ represent an initial attempt in this direction. In later chapters of this book, this is described in detail and an example of such an approach, studying the thermal stress response in yeast, is introduced. Microbiological models are all empirical on at least one level, being based on a statistical analysis of experimental data, and should be validated with reference to experimental data not used in constructing the model (sometimes known as a holdout sample). Thus, models and data are interdependent. Databases may be used as repositories of the data that support modeling activities, to make data readily accessible to the researcher or, more broadly, within the research community. Other searchable databases can give access to related information, e.g. epidemiology data, risk assessment information, etc. Databases of this type are considered outside the scope of this chapter. Nevertheless, some Internet addresses are included in Section 6.7 on Sources of further information and advice. Database construction is a discipline in its own right, and this chapter will not attempt to discuss it, focusing instead on applications. Appropriate applications for models and databases may be considered in terms of questions. Models are particularly suited to exploratory types of questions, e.g.:
• What will happen to the growth rate of Listeria monocytogenes if I reduce the pH of this product?
• How will the shelf life of this product change if I increase the water activity?
• What concentration of potassium lactate should I use in order to prevent growth of L. monocytogenes in this product for 12 weeks?
• Will my process kill 5 logs of Escherichia coli?
These questions may address circumstances where there are no experimental data available, provided that the conditions are within the experimental range underpinning the model being used. Models are designed to permit interpolation, but extrapolation outside the experimental range can be misleading. Typical applications for models include:
• establishing safe shelf life for a food product;
• establishing shelf life before spoilage;
• exploring the effects of formulation changes on shelf life;
• exploring the risk of product failure (spoilage or loss of safety) within a defined shelf life;
• establishing process run times before cleaning;
• establishing process lethality;
• as research tools to understand the behavior of a system of interest.
Questions that can be answered by databases necessarily relate to what is already known. Answers may be useful to confirm the output from a model, to address specific circumstances in the absence of a model or to provide an overview of available data, e.g.:
• Does a model for growth of Salmonella created from measurements of growth in broth give good predictions of growth of Salmonella in egg products at pH, aw and temperature within the range of the model?
• Has growth of Staphylococcus aureus been observed in pancake batter at pH 6.8, aw 0.97 and 8 °C?
• What data are available for growth of Bacillus cereus in cereal products?
A good example of a database is ComBase (Baranyi and Tamplin, 2004).
6.3.1 ComBase
ComBase is a large, publicly available, web-accessible repository of microbiological data. In April 2005, searches of ComBase returned nearly 32 000 files representing growth or death curves of 27 different organisms or groups of organisms in foods, beverages or culture media. The core of the database comprises the data arising from development of the USDA Pathogen Modeling Program (PMP) and the UK Food MicroModel program [now replaced by ‘Growth Predictor’ (Baranyi and Tamplin, 2004)]. These core holdings are supplemented by data provided by others or gleaned from the literature. The database can be searched for specific organisms and conditions, or for all conditions for particular organisms. ComBase was developed, and is updated, in Microsoft Excel, with the database tables converted into Microsoft Access tables for greater ease and speed of searching. The structure and format of the database have been standardized to reduce incompatibility between data from different sources. Details of ComBase, including its formatting protocol and structure, are described by Baranyi and Tamplin (2004) and more technical details are available directly from the ComBase web site (see Sources of further information and advice). A powerful use of ComBase is as a source of independent data to validate a newly created predictive model. An example of this use was given by Pin et al. (2004). These authors created a model for growth of Aeromonas hydrophila from measurements of growth in tryptic soy broth under modified atmospheres. To validate the model, they used data from ComBase on growth of A. hydrophila in meat in addition to the results of purpose-designed experiments in seafood. ComBase might also be used to add further support to predictions from an established model where it can be used to validate, or extend validation of,
predictions from the model. Consider, for example, the USDA PMP growth model for B. cereus. Like all the PMP models this was created in broth. Now assume that we want to apply the model to consider how B. cereus might grow in a meal component such as cooked rice or pasta. We can search ComBase for datasets relating to growth of B. cereus in ‘pasta or other starch products’ under ‘all’ conditions. This yielded 192 records (November 2005) covering a range of foods, temperatures, and other factors. Predictions from the model can now be compared against a selection of the ComBase records to determine how well they match. ComBase might also be used to answer specific questions, for example, ‘has L. monocytogenes been observed to grow in refrigerated (2–8 °C) pasta salad with pH of 5.8 and aw between 0.92 and 0.95?’ For this question there are no exact matches. Three data sets for L. monocytogenes/innocua in home-style macaroni salad with pH 6.0 stored at 4 °C were identified. Even without the aw recorded, these conditions may be close enough for the data to be useful. But since the question is not answered completely, it may be more appropriate to see if there is a predictive model that could be asked whether L. monocytogenes would be expected to grow under the conditions of interest, particularly if predictions from the model are validated against ComBase or other independent data. A very broad question that might be asked of ComBase is ‘what data are available for growth, survival and death of Salmonella in egg products?’ Searching ComBase for Salmonella in egg or egg products yielded 332 records, some relating to survival in low aw products such as meringue or egg powder, some to death in liquid egg subjected to heat treatment and some to growth in sensitive products such as custard. Inspection of individual data sets will reveal more detail about the conditions and the observed response. It is important to remember that database records used in the absence of a model can only confirm potential for growth; reported occurrence of growth under specific conditions indeed shows growth potential, but reported absence of growth does not give any security.
6.3.2 Types of models
Models may be most simply classified by the response that they are intended to describe; thus, we can think of models for microbial growth, survival or death, further subdivided by the manner in which the response is described (Legan et al., 2000).
• Kinetic growth models are built using growth curves measured over a range of conditions. These models are able to predict growth rates, time to a critical amount of growth or even a whole growth curve. Thus they can be used to address most questions regarding growth within their effective working range. However, they are commonly constructed to predict over relatively short periods of hours to days (Palumbo et al., 1991; Bhaduri et al., 1995; Pin and Baranyi, 1998, 2000). This short timescale is not an inherent limitation of kinetic modelling, but arises because experiments to determine growth rate are often done in optimal broth systems over short periods of time. There are examples of kinetic models created from data gathered in foods over periods of weeks to months (Seman, 2001; Seman et al., 2002).
• Growth boundary models are built from qualitative or quantitative measurements of ‘growth’ and ‘no-growth’ over time and can predict the limits of conditions permitting ‘growth’ as defined by the model builder. These models can often predict over longer time periods of weeks to months and can be used to identify new conditions that may prevent growth. However, they cannot predict growth rates, time to an amount of growth in excess of the threshold used to define ‘growth’ or a whole growth curve (Stewart et al., 2001; Evans et al., 2004).
• Probabilistic growth models are built from the proportion of ‘growth’ and ‘no-growth’ responses throughout the experimental design space at a defined point in time. What constitutes ‘growth’ is defined by the model builder based on reaching some threshold, e.g. defined increase in count, defined change in turbidity, conductivity or other response depending on the analytical method used. Probabilistic models can then predict the probability of growth occurring at the defined point in time for other conditions within the experimental design matrix (Lopez-Malo and Palou, 2000).
• Kinetic death models are built from death curves under conditions of interest. Typically 5 or 6 logs of death are followed over the course of a deliberately applied lethal treatment. Times involved are usually short; seconds to a few hours at most. These models can be used to predict the amount of death occurring during a lethal treatment or even to predict the entire death curve. Kinetic modeling is the most common approach to microbial death but relies on measuring viable microbial numbers over time (Musa et al., 1999; Mazzotta, 2001).
• Time to inactivation models are built on qualitative ‘dead’ or ‘not dead’ responses from different starting levels. Like growth boundary models, they are able to predict the time to the desired end point (in this case no survivors from the initial load of interest) independent of the underlying death kinetics. However, they are unable to predict the whole death curve. Time to inactivation models have recently been applied to materials treated by high-pressure processing (Bull et al., 2005).
• Survival models relate to transitional conditions where either growth or death may occur. They are built on measured viable counts over time under the conditions of interest. Such conditions are close to the boundary for growth and the times involved may be quite long. There are relatively few models of this type (Whiting and Golden, 2003; Curtis et al., 2005).
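The kinetic growth models described above are usually built around a primary model fitted to each growth curve, such as the Gompertz equation or the model of Baranyi and Roberts (1994), both of which appear later in this chapter. The sketch below shows one common written form of the Baranyi model in Python; the parameter values are purely illustrative and this is not the implementation used inside Growth Predictor or the PMP.

    import math

    def baranyi_log10_count(t_h, mu_max, lag_h, log10_n0, log10_nmax):
        # Baranyi and Roberts (1994) primary growth model, reported in log10 counts.
        # mu_max: maximum specific growth rate (per hour, natural-log basis);
        # lag_h: lag time in hours. All values used below are illustrative only.
        h0 = mu_max * lag_h  # 'work to be done' during the lag phase
        a = t_h + (1.0 / mu_max) * math.log(
            math.exp(-mu_max * t_h) + math.exp(-h0) - math.exp(-mu_max * t_h - h0))
        y0 = log10_n0 * math.log(10)      # convert log10 counts to natural-log counts
        ymax = log10_nmax * math.log(10)
        y = y0 + mu_max * a - math.log(
            1.0 + (math.exp(mu_max * a) - 1.0) / math.exp(ymax - y0))
        return y / math.log(10)

    # Illustrative slow growth at chill temperature: 40 h lag, then about 0.01 log10/h.
    for t in (0, 48, 96, 144, 192, 240):
        print(t, round(baranyi_log10_count(t, mu_max=0.02, lag_h=40.0,
                                           log10_n0=3.0, log10_nmax=9.0), 2))

Fitted to growth curves measured over a range of conditions, and combined with a secondary model for the environmental factors, equations of this kind are what allow a kinetic growth model to return a rate, a time to a critical amount of growth or a whole predicted curve.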
A growth question
Before consulting one or more models, it is important to clearly define the question(s) that are being asked. The question(s) need to be specific enough to remain within the scope of the selected model(s) and detailed enough to define the model inputs and allow interpretation of the output. ‘What is the safe shelf life of quiche?’ is not a question that can be answered by any models currently available.
It does not define the organism(s) of concern, making selection of appropriate models impossible. It does not give any information that can be used as an input to a model such as the pH, aw or salt concentration, storage temperature, concentration of antimicrobials or composition of the atmosphere surrounding the product. Nor does it give any indication of how the safe shelf life is to be defined, so, even if several models were interrogated using best guesses for input values, it would be impossible to interpret the output in terms of the ‘safe shelf life’ requested in the question. Let us consider the background to this question. All vegetative organisms are killed when the quiche is baked and we have good sanitation and an effective HACCP program in place. We recognize that there is a risk of recontamination during cooling and packaging but we have been making the product for a year, giving it a three-day shelf life, and weekly product testing has never detected the presence of any pathogens. Hence, we are confident that the incidence of recontamination and the concentration of pathogens if and when it occurs is extremely low. The quiche will be distributed chilled at 0–4 °C so we are initially concerned with the potential for growth of L. monocytogenes, though we recognize that we may also need to consider the potential for growth of psychrotrophic strains of Clostridium botulinum in anaerobic micro-environments within the quiche, particularly if the safe shelf life with respect to L. monocytogenes exceeds ten days (Food Standards Agency, 2004). Protection of public health with respect to Listeria can be achieved by ensuring that the concentration of L. monocytogenes remains below 100 cfu/g at the point of consumption (ICMSF, 2002, p. 33). This level is considered acceptable internationally, although it is not currently accepted in the USA. Since our routine testing has never found L. monocytogenes in the product and we would expect the concentration of any contamination to be less than 1 cfu/g, we believe that we could protect public health by limiting growth to not more than 2 logs over shelf life. Nevertheless, we prefer to operate more stringently and accept no more than a 1 log increase during distribution. Analytical measurements of representative samples have shown that the quiche has a high aw (0.995) and a pH of 5.8 with little indication of variability between major components. It is wrapped in a high moisture barrier clear film and packed in a box with a cut-out in the lid to show the product. The wrapping film will prevent the product from drying out during distribution, so the aw will change very little. Now we can rephrase the question as ‘What is the minimum time needed at 4 °C for a 1-log increase in the population of L. monocytogenes in an aerobically packaged product with aw 0.995 and pH 5.8?’ In this question we have defined the organism of concern (L. monocytogenes), the controlling factors to use as inputs to the model and their levels (aw = 0.995, pH = 5.8 and temperature = 4 °C) and the limit of the acceptable output (1-log increase in numbers).
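One way to keep a properly framed question and the chosen model aligned is to write the inputs down explicitly and check them against the model's design range before asking for a prediction. In the minimal sketch below the design-range limits are placeholders only, and must be replaced by the range documented for whichever model is actually used.

    # Inputs taken from the rephrased quiche question above.
    question = {"aw": 0.995, "pH": 5.8, "temp_C": 4.0}
    acceptable_increase_log10 = 1.0

    # Placeholder design range; substitute the documented range of the chosen model.
    design_range = {"aw": (0.92, 1.00), "pH": (4.5, 7.5), "temp_C": (1.0, 40.0)}

    def out_of_range(inputs, limits):
        # Return a list of warnings for any input outside the design range.
        warnings = []
        for factor, (low, high) in limits.items():
            value = inputs[factor]
            if not low <= value <= high:
                warnings.append(f"{factor} = {value} is outside {low}-{high}")
        return warnings

    problems = out_of_range(question, design_range)
    print("OK to interpolate" if not problems else "; ".join(problems))

The same check is what the warning cells in Table 6.1, later in this chapter, perform within a spreadsheet.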
model-building experiments (Baranyi, 1999). When the domain of validity has not been determined, predictions should be restricted to conditions within the model design range for most of the applications above. The one exception is when the model is being used as a research tool to estimate possible responses outside its range, perhaps whilst designing experiments to extend its range. To find an answer to the properly framed question we can go to Growth Predictor. If we select the L. monocytogenes/innocua with CO2 (%) model we find that the controlling factors from our rephrased question (aw = 0.995, pH = 5.8 and temperature = 4 ºC) are within the design range of the model. Using the default value for physiological state and initial log count we can now predict the growth curve for selected observation times within the design range of the model. Entering ‘CO2 = 0 %’ and ‘observed hours = 240’ then selecting ‘predict’ we discover that the count increases by 0.94 logs in 208 h and 1.13 logs in 224 h. We could reasonably take 208 h as the minimum time to a 1 log increase in this system. The population increase is 0.06 log short of the specified 1 log maximum increase, but within another 16 h the maximum has been overshot by 0.13 logs. Interestingly, using a model allows a more stringent threshold to be used than a simple challenge test would. Detecting a 1 log increase in numbers by analytical measurements on a small number of samples is difficult because of the inherent variability of microbiological methods. A model has the ability to smooth out much of the variability because it is based on the results of a large number of analytical samples across a range of growth conditions. Of course, we would feel more comfortable if we had a way to confirm the predictions, or at least to confirm that the model behaves reasonably well for products of this type. So we can explore ComBase looking for relevant data that might confirm our prediction. We find several sets of data for L. monocytogenes/ innocua in quiche by searching under ‘egg or egg products’. By a happy coincidence, one of them closely matches the levels of our controlling factors but with temperature = 5 ºC and with an initial Listeria log count of 3.53, somewhat higher than the default value of 3.0 used for our prediction. The data set in ComBase extends to a maximum of 768 h so we re-run Growth Predictor for an observed 768 h with aw = 0.995, pH = 5.8 and temperature = 5 ºC. Figure 6.1 shows the results for the two predictions and the observed growth. The prediction at 5 ºC is in almost perfect agreement with the observed growth at that temperature. The prediction at 4 ºC indicates slightly slower growth, as would be expected. Hence we can be confident that the 208 h (8.7 days) to a 1 log increase represents a reasonable estimate. This prediction can be used to establish a safe shelf life of up to eight days for this specific product under these storage conditions. In practice, it is more likely that the model and available data do not match so closely as in the example just given. More exploration may allow us to build a more complete picture of the model’s performance in a similar food. Figure 6.2 shows that for three out of four predicted curves, the predictions from Growth Predictor are in good agreement with the data from quiche in ComBase, although the model generally predicts rather faster growth than observed. For the fourth curve (left, below), the data in ComBase showed much slower growth than predicted. The
Fig. 6.1 Growth of Listeria monocytogenes in quiche predicted at 4 ºC (solid line) and 5 ºC (broken line), at pH 5.8 and aw 0.995 using Growth Predictor, and observed at 5 ºC, pH 5.8 and aw 0.995 (data from ComBase).
Fig. 6.2 Comparison of observed and predicted growth of Listeria monocytogenes in quiche under several different conditions. Solid lines, predictions from Growth Predictor; broken lines, predictions from USDA PMP 6.1; plotted symbols, observations from ComBase.
picture is certainly one in which the model appears to make conservative predictions, so we can be cautiously optimistic that predictions would offer good guidance on safe formulations. We do need to consider whether the physiological state of the inocula used for modeling and for validation compares with what we would expect for a natural contaminant. It is conceivable that both in Growth Predictor and in the ComBase experiments, the medium/product was inoculated using a broth culture grown under optimum conditions, with the lag phase reflecting the physiological adjustment needed to cope with the new conditions. A naturally occurring contaminant might be expected to have grown in conditions very similar to those in the product, leading to a short or zero lag. If we are unable to determine the physiological state of the inoculum used for modeling and validation from the references associated with the models and database records, we have the option, in Growth Predictor, to assign a physiological state of 1, forcing zero lag for a growth condition. Alternatively, we must consider how much of any predicted lag to factor into consideration of the safe shelf life. We may choose to generate our own challenge data to confirm the predictions for our selected formulations and to increase our confidence in the model as we gain experience with it. We may also be able to compare models from different sources to help us to understand more about model performance. Predictions from the USDA Pathogen Modeling Program version 6.1 are compared with predictions from Growth Predictor and observations in quiche from ComBase in Fig. 6.2. In PMP 6.1 the Gompertz equation is the primary model used for fitting growth curves of individual experimental treatments, unlike Growth Predictor, which uses the equation of Baranyi and Roberts (1994). The agreement between models created from different experimental data using different equations is quite remarkable. The predictions from PMP 6.1 were more conservative than those from Growth Predictor for all of the example conditions used here, but the difference between predictions from the two models was less than the difference between predictions from either model and the observations in quiche. This degree of agreement increases our confidence in the usefulness of both models; both are somewhat conservative in relation to our example product. Several authors have made comparisons between models in the PMP and other models or their own observations (McElroy et al., 2000; Olmez and Aran, 2005; Tamplin, 2002). These authors generally agree on the usefulness of the models compared, provided that some care is taken to understand their limitations.
A death question
Just as we must frame a suitable question in order to use a model to explore a growth question, so we must do the same when we are interested in death of a pathogen or spoilage organism. For the purpose of exploring an approach, let us assume that we wish to use a thermal process for apple juice to meet the process standard of the US ‘Juice HACCP’ regulation (Food and Drug Administration, 2001). This requires US juice processors to apply a process capable of a 5 log reduction of the ‘pertinent pathogen’. The ‘pertinent pathogen’ is the most resistant microorganism of public health concern likely to be present in the juice.
It may vary depending on the process and type of juice but is most likely to be Salmonella or E. coli O157:H7. Basic questions might be ‘what temperature and time do I need to reach to comply with the 5 log process requirement?’ or ‘does my current process comply with the 5 log process requirement?’ Let us consider first E. coli O157:H7. The first question that we might want to ask is ‘what combinations of time and temperature can kill 5 logs of E. coli O157:H7 at pH 3.8 and aw of 0.97?’. For now, the pH and aw values are estimates, but both would be confirmed by measurement before making any decisions about process effectiveness. We might also be constrained by operational factors that dictate a maximum hold time of, say, ten seconds in order to achieve some required level of throughput from the available equipment. We could then ask ‘what temperature do I need to achieve in order to obtain a 5 log reduction in ten seconds or less at pH 3.8 and aw of 0.97?’ When we look for models we find that our options are limited. The USDA PMP has a model for inactivation of E. coli O157:H7 in ‘meat gravy’. This is not obviously similar to apple juice but may offer some guidance if its pH and aw ranges are appropriate. The model has a pH range of 4.0–7.0 and includes the concentration of sodium chloride and sodium pyrophosphate but unfortunately does not include aw. We find models in the literature for destruction of E. coli O157:H7 in apple juice by high pressure (Ramaswamy et al., 2003; van Opstal et al., 2005) and ultraviolet light (Ngadi et al., 2003) and for destruction of E. coli O157:H7, Salmonella, and L. monocytogenes by heat (Mazzotta, 2001). It seems that there is no single model designed to answer our question and we may need to resort to experimentation. However, we decide to compare the model of Mazzotta (2001) with the USDA PMP as a guide to experimental conditions. Mazzotta (2001) measured thermal death time of E. coli O157:H7 at 56, 58 and 60 °C, and Salmonella and L. monocytogenes at 56, 60 and 62 °C in apple juice at pH 3.9. A line drawn above all the individual regression lines of log D-value against temperature over an extrapolated range of 56–72 °C had a slope described by log D = 11.5 – 0.19T, where D is the time in minutes for a 1 log reduction in numbers and T is the temperature in °C. From this relationship, the time in minutes for a 5 log reduction (5D) is given by 5D = 5 × 10^(11.5 – 0.19T). Using this relationship in a spreadsheet, we create the regression line for time to a 5 log reduction at different temperatures predicted by this combined Salmonella and L. monocytogenes model (Fig. 6.3) and can use that to estimate effective combinations of time and temperature to pasteurize our juice. Next we look at the USDA PMP model for inactivation of E. coli O157:H7 in gravy. We set pH equal to 4.0, sodium chloride equal to 0.0 % (equivalent to aw = 1.0), and sodium pyrophosphate equal to 0.0 %. By making predictions at different temperatures within the range of the model and plotting the points in Fig. 6.3 we are able to compare the predictions with those from the model of Mazzotta (2001). Within the temperature range covered, the PMP model predicts shorter times for 5 logs reduction than Mazzotta’s model but with a less steep slope. Remembering that the PMP model at 0.0 % sodium chloride gives aw equal to 1.0, we adjust the aw to 0.97 by changing the salt level to 5.0 % and repeat the
Fig. 6.3 Comparison of models for inactivation of pathogens in apple cider: the model of Mazzotta (2001) for inactivation of Salmonella, Listeria monocytogenes and Escherichia coli O157:H7 at pH 3.9; the USDA Pathogen Modeling Program 6.1 model for inactivation of E. coli O157:H7 in gravy at pH 7.0, NaCl 0 %, at pH 4.0, NaCl 5 % and at pH 4.0, NaCl 0 %; and the estimated time to a 5 log decline from the data of Folsom and Frank (2000) in ComBase (data sets distinguished by plotting symbol).
predictions across the temperature range of the model. Plotting these results gives a line parallel to the first PMP predictions but indicating longer times to 5 logs reduction than either the PMP predictions with 0 % salt or Mazzotta’s model. The agreement between models is not as good as we had hoped for. This is not completely surprising considering the differences between the two modeling approaches, but it does not give great confidence in the predicted times to 5 logs reduction, especially in the critical range less than 10 s, above about 68 °C, where we can meet our processing constraints. Next we look at ComBase and search for E. coli in ‘fruit or fruit products’ with temperature from 60–120 °C and the default values for pH (0.1–14) and aw (0.01–1), and find no records. When we lower the search temperature to 50–120 °C we find 14 records, all obtained in apple cider, but only two contain original data points. These data sets were submitted by Folsom and Frank (2000) for E. coli O157:H7 injured by chlorine pretreatment then heated at 58 °C in apple cider at pH 3.6. Neither showed a 5 log decline. Extrapolating linearly allows us to estimate the time to a 5 log decline, and this is comparable to predictions from the PMP at 0 % NaCl and pH 4.0. Unfortunately the time and temperature are not very relevant to our current concern and we cannot see how predictions compare at different
temperatures. In this case, it seems, ComBase is not going to help us to choose between models, but we must come to some decision on how to proceed. It seems reasonable to favor Mazzotta’s model over the PMP since it was developed in the material of interest. This model predicts that a 5 log reduction of the three pathogens E. coli O157:H7, Salmonella and L. monocytogenes would be achieved in 10.0 s at 68.3 °C, falling to 1.3 s at 73 °C. McCandless (1997) reported that a pasteurization process of 71.1 °C for 6 s was recommended at a workshop for New York State apple cider makers, and this is comparable with the predictions of Mazzotta’s model. In fact, at 71.1 °C, Mazzotta predicts a time of 2.9 s to a 5 log reduction. Taking into account all of the predictions and other considerations that we have reviewed we conclude that we will plan for a process of 71.1 °C for 6 s as recommended to the New York State cider makers, but we will commission validation studies of our own to confirm the effectiveness of this regime.
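The spreadsheet calculation described above for the relationship of Mazzotta (2001), log D = 11.5 – 0.19T with D in minutes, is equally easy to reproduce in a few lines of Python; the temperatures below are those discussed in the text.

    def five_d_seconds(temp_c):
        # Time for a 5 log reduction from log10 D = 11.5 - 0.19 T (D in minutes),
        # the line drawn above the individual regression lines of Mazzotta (2001).
        d_minutes = 10 ** (11.5 - 0.19 * temp_c)
        return 5.0 * d_minutes * 60.0

    for t in (68.3, 71.1, 73.0):
        print(f"{t} deg C: {five_d_seconds(t):.1f} s to a 5 log reduction")

The output reproduces the values quoted above: about 10.0 s at 68.3 °C, 2.9 s at 71.1 °C and 1.3 s at 73 °C.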
6.4 Access to models
Very many models have been published in peer-reviewed literature. Of these, only a relatively small number are available in electronic form. Electronic access to models, whether by direct web access, web download or on disc, has the great advantage of convenience. The work of writing the model into a useable form has already been done and limits or warnings are usually in place to prevent the user from accidentally exceeding the domain of validity or the design range. The potential disadvantage is that the electronically available models may not address the question under consideration.
6.4.1 Electronic access
Internet addresses to access, download or request all models discussed in this section are given in Section 6.7 on Sources of further information.
Pathogen Modeling Program
The United States Department of Agriculture–Agricultural Research Service (USDA–ARS) has been developing the Pathogen Modeling Program (PMP) since the early 1990s and has made the program freely available in various different forms over the years. Since the late 1990s the PMP has been freely available to download. From time to time the program is updated to include newly developed models or make other changes such as improvements to the user interface. To bridge the gap between releases, newly developed models are initially made available in Microsoft Excel workbooks. The models primarily predict the growth of pathogens under defined environmental conditions, although there are also some inactivation models. Version 7.0 was the current version in December 2005. The Microsoft .NET Framework 1.1 component of the Microsoft Windows® operating system must be installed on the PC before PMP version 7.0 can be
installed. Users of PCs on an organizational network are likely to be barred from installing operating system files. These users may be able to access PMP Version 6.1 which does not require Framework 1.1 and contains the same models as version 7.0 without the improvements to the user interface. The PMP contains many models including:
• growth models for nine different pathogens: Aeromonas hydrophila, Bacillus cereus, Clostridium perfringens, Escherichia coli O157:H7, Listeria monocytogenes, Salmonella spp., Shigella flexneri, Staphylococcus aureus and Yersinia enterocolitica;
• thermal inactivation models for three pathogens: Clostridium botulinum, E. coli O157:H7 and L. monocytogenes;
• gamma irradiation models for Salmonella typhimurium, E. coli O157:H7 (in beef tartar) and spoilage flora of chicken;
• growth during cooling models for C. botulinum and C. perfringens;
• survival models for E. coli O157:H7, L. monocytogenes, and S. aureus using lactic acid as an acidulant and for Salmonella spp. without acidulant.
Most of these models were developed in broth media and were not later validated in foods, so users must establish for themselves that the models are applicable to their circumstances. However, work is progressing towards combining the PMP with ComBase to provide both predictions and supporting data in a single package (Tamplin et al., 2004).
Growth Predictor and Perfringens Predictor
Growth Predictor is the successor to the UK Food MicroModel program described by McClure et al. (1994). Growth Predictor has been freely available from the UK Institute for Food Research (IFR) since 2003, and validation data and the data used for model building are available through ComBase. All models within Growth Predictor accept pH, aw and temperature as inputs. Additionally, a number of models can accept a fourth factor such as concentration of nitrite, CO2, lactic acid or acetic acid. All models calculate and plot growth curves using the model of Baranyi and Roberts (1994). Growth Predictor has 18 models for 13 organisms: A. hydrophila, B. cereus (with CO2), B. licheniformis, B. subtilis, C. botulinum (proteolytic and non-proteolytic), C. perfringens, E. coli (with CO2), L. monocytogenes/innocua (with CO2, nitrite, lactic acid or acetic acid), S. aureus, salmonellae (with CO2 or nitrite), Y. enterocolitica (with CO2 or lactic acid) and Brochothrix thermosphacta. Perfringens Predictor, developed by the UK IFR, is an add-in for Microsoft Excel. It calculates the amount of growth of C. perfringens that can occur during an entered cooling curve for cooked meat and indicates whether that amount of growth is considered safe by the UK Food Standards Agency.
Seafood Spoilage and Safety Predictor software
The Seafood Spoilage and Safety Predictor (SSSP) software was developed by the Danish Institute for Fisheries Research and has been freely available since
November 2004. The SSSP builds upon the Seafood Spoilage Predictor (SSP) that was launched in 1999. Development and initial uptake of SSP were described by Dalgaard et al. (2002). The models in SSSP include:
• relative rate of spoilage of fresh seafood from temperate and tropical waters, cold smoked salmon and cooked and brined shrimps, developed using sensory assessments;
• microbiological spoilage associated with Photobacterium phosphoreum and Shewanella putrefaciens, developed from measurements of the concentrations of the organisms over time;
• growth of L. monocytogenes in cold smoked salmon.
The SSSP can be downloaded and is available in several different languages. Alternatively, a simplified version in English only is accessible online. The online version is particularly helpful for those unable or unwilling to install the downloadable version.
Opti.Form™ Listeria Suppression Model™, Listeria Control Model™ and Listeria Control Model™ 2005
The first two of the models in the above heading, available on a free CD from Purac (see Section 6.7 on Sources of additional information and advice), can be used to develop formulas to protect cooked, cured meat and poultry products against growth of L. monocytogenes, during storage at 4.4 °C, by inclusion of potassium lactate and sodium diacetate. The Listeria Control Model™ is a kinetic model that can predict the total amount of growth over time based on product salt, moisture, lactate and diacetate concentrations. Development of this model was described by Seman et al. (2002). The Listeria Suppression Model™ is a boundary model developed by Legan et al. (2004) through re-analysis of the data of Seman et al. (2002) and results for a parallel set of data for uncured products (Seman, 2001). It is able to calculate the level of lactate and diacetate needed to prevent growth over product shelf life (where growth is defined as a 1 log increase in count). This model, only available for US addresses, is in line with the US Food Safety Inspection Service (FSIS) interim final rule that came into force on October 6th, 2003. The interim final rule affords different regulatory treatment under three alternatives depending on the level of protection provided to the product (FSIS, 2003). The Listeria Control Model™ 2005 is a new kinetic model that includes pH and temperature as independent variables. It is available as a download after first contacting a Purac sales office for a password to the secured website.
ERH Calc™
This software, commercially available (for a fee) from Campden and Chorleywood Food Research Association, Chipping Campden, UK, is primarily an aid to formulation of bakery products. However, it incorporates predictions of the mold-free shelf life of the finished products at two different storage temperatures, 21 °C and 27 °C, using a model based on the work of Seiler and his collaborators (Cooper et al., 1968; Cauvain and Seiler, 1992).
6.4.2 Published models (not electronically accessible)
Many research papers describe models that are not electronically accessible (e.g. Cole and Keenan, 1987; Deak and Beuchat, 1993; Praphailong and Fleet, 1997; Jenkins et al., 2000; Mattick et al., 2001; Stewart et al., 2001; Battey et al., 2002). The results and discussions in these papers may be sufficient to answer a particular question, but they do not have the flexibility of a model to explore the effects of changing conditions. Although generating the models often requires access to some specialized statistical software, using them is a different matter. It is usually relatively easy to incorporate published models within a simple spreadsheet, but it is very important to compare predictions from the spreadsheet to results reported by the original authors before using the spreadsheet for new predictions. There are a number of errors, including copying and typing errors and failure to reverse any transformations used prior to modeling, that are very easy to make but also easily detected by this comparison. Further embellishments to constrain the input range or warn when inputs are out of range are also readily accomplished. One specific example is described below.
Jenkins et al. (2000) published three models describing the boundary for growth of Zygosaccharomyces bailii in acid foods. Two predict time to growth and the third predicts probability of growth at a specified end time. Of the models predicting time to growth, model 1 is based on undissociated acetic acid and total molar concentration of fructose and sodium chloride. The boundary thus described is very comparable with that presented by Tuynenburg Muys (1971), suggesting that the modeling approach is sound. Model 2 describes the boundary in terms of all four independent variables, acetic acid concentration, pH, salt and fructose, thus:
ln(T) = 4.76 + 3.79f + 1.47n – 1.62p + 3.21a + 0.59fn – 0.44fp + 0.82fa – 0.24np + 0.46na – 0.74pa + 0.39a²
where T = days to growth, f = (% w/w fructose – 19.5)/12.5, n = (% w/w NaCl – 3.4)/0.8, p = (pH – 3.75)/0.25, a = (% v/v acetic acid – 2.3)/0.5. Note that n, f, p and a are all normalized by subtracting the means of their modeling design levels and dividing by the standard deviations of the respective design levels. There are several positive interactions (giving longer time to growth as concentration of either factor increases) and a number of negative ones (where time to growth decreases as one of the factors increases). All of the negative interactions involve pH, where an inverse relationship between pH and time to growth is both expected and seen in the pH main effect term p. The presence of interactions raises the possibility that some ratios of the factors may be more advantageous than others. But how can the product developer find out?
With the model and normalization information, it is quite easy to set up a simple spreadsheet to calculate days to growth. Table 6.1 shows the elements needed to enter the values for the independent variables (salt, fructose, pH and acetic acid), calculate the normalized values and calculate days to growth.

Table 6.1 Elements of a simple spreadsheet to implement Model 2 of Jenkins et al. (2000) to calculate time to growth of Zygosaccharomyces bailii in acid foods

Row 1: Enter input values | Calculate normalized values | Predict days to growth
Row 2: Salt % [define as named range: salt] | = (salt-3.4)/0.8 [define as named range: n] | = exp(4.76 + (3.79*f) + (1.47*n) - (1.62*p) + (3.21*a) + (0.59*f*n) - (0.44*f*p) + (0.82*f*a) - (0.24*n*p) + (0.46*n*a) - (0.74*p*a) + (0.39*a^2))
Row 3: Fructose % [define as named range: fructose] | = (fructose-19.5)/12.5 [define as named range: f]
Row 4: pH [define as named range: pH] | = (pH-3.75)/0.25 [define as named range: p]
Row 5: Acid % [define as named range: acid] | = (acid-2.3)/0.5 [define as named range: a]
Row 6: =IF(salt>2.599,IF(salt<4.201," ","WARNING, salt out of range"),"WARNING, salt out of range")
Row 7: =IF(fructose>6.999,IF(fructose<32.001," ","WARNING, fructose out of range"),"WARNING, fructose out of range")
Row 8: =IF(pH>3.4999,IF(pH<4.001," ","WARNING, pH out of range"),"WARNING, pH out of range")
Row 9: =IF(acid>1.799,IF(acid<2.801," ","WARNING, acid out of range"),"WARNING, acid out of range")

The example as shown makes use of the Excel facility to define named ranges using Name > Define from the Insert menu. It could as easily be implemented in Excel
or any other spreadsheet by directly referencing the location of the cell(s) containing the input for each formula without naming ranges. In this approach, the range names given in Table 6.1 will serve as a guide to the cells to reference. After following the example, it is important to compare predictions back to the results provided by the original authors to detect any errors that may have crept into the spreadsheet. This example also shows how cells can be set up to warn if entered values are outside the range of the model. It would be as easy to restrict the spreadsheet’s ability to calculate outside the range of the model by using ‘IF’ statements to filter the input values. An example that should work with any spreadsheet would be something of the form: =IF(salt < min_salt,min_salt,IF(salt > max_salt,max_salt,salt)) where salt is the named range or the cell reference for the cell where the salt concentration is entered, min_salt is the value of the lowest concentration of salt in the design range and max_salt is the value of the highest concentration of salt in the design range. Using this approach, the user can enter any value, but the cell containing the IF statement will only take a value within the limits defined by min_salt and max_salt. The value in this cell is then used to calculate the normalized salt concentration to present to the model. When taking this approach to constrain predictions to the design range, it is important to create some warning for the user that the prediction is based on the restricted value. Again, ‘IF’ statements can be used, similarly to those used to give an ‘out of range’ warning in Table 6.1. If working within Excel there are other options for restricting the values that can be entered into a cell. Probably the simplest is to select ‘validation’ from the ‘data’ menu then select the ‘settings’ tab. In the ‘allow’ box select ‘decimal’ from the drop down menu then specify the limits in the ‘minimum’ and ‘maximum’ boxes. By default, a warning will pop up giving the option to ‘retry’ or ‘cancel’. The disadvantage of this approach is that the result of the out of range prediction may be visible until an option is selected. More sophisticated approaches using spinners or scrollbars are also available. More information on all these approaches is available from Excel help. A functioning spreadsheet model can be made quite quickly; an experienced user may take as little as half an hour to create a simple spreadsheet. Even an inexperienced user might only take one hour to include the model. One should, however, be very precise and build in checks as described previously.
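For readers who prefer to work outside a spreadsheet, Model 2 of Jenkins et al. (2000) can be sketched in a few lines of Python, with the normalization and the input-range limits taken directly from Table 6.1. This is simply a translation of the spreadsheet for illustration and, exactly as recommended above, its output should be compared with the results reported by the original authors before being used for new predictions.

    import math

    # Design ranges taken from the warning formulas in Table 6.1.
    RANGES = {"salt": (2.6, 4.2), "fructose": (7.0, 32.0),
              "pH": (3.5, 4.0), "acid": (1.8, 2.8)}

    def days_to_growth(salt, fructose, pH, acid):
        # Model 2 of Jenkins et al. (2000): predicted days to growth of
        # Zygosaccharomyces bailii. Inputs: % w/w NaCl, % w/w fructose, pH,
        # % v/v acetic acid.
        for name, value in (("salt", salt), ("fructose", fructose),
                            ("pH", pH), ("acid", acid)):
            low, high = RANGES[name]
            if not low <= value <= high:
                raise ValueError(f"{name} = {value} is outside the model range {low}-{high}")
        n = (salt - 3.4) / 0.8            # normalized exactly as in Table 6.1
        f = (fructose - 19.5) / 12.5
        p = (pH - 3.75) / 0.25
        a = (acid - 2.3) / 0.5
        ln_t = (4.76 + 3.79 * f + 1.47 * n - 1.62 * p + 3.21 * a
                + 0.59 * f * n - 0.44 * f * p + 0.82 * f * a
                - 0.24 * n * p + 0.46 * n * a - 0.74 * p * a + 0.39 * a ** 2)
        return math.exp(ln_t)

    # At the centre of the design (salt 3.4 %, fructose 19.5 %, pH 3.75, acid 2.3 %)
    # the prediction is exp(4.76), roughly 117 days to growth.
    print(round(days_to_growth(salt=3.4, fructose=19.5, pH=3.75, acid=2.3), 1))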
6.5 Other quantitative microbiology tools
This section will briefly examine the use of other quantitative microbiology tools such as sampling plans and trend analysis. Data from these approaches may be used directly for decision-making or combined with the output from models to build a more complete picture of a food product or process.
6.5.1 Statistical sampling plans
As we have seen, models are useful for exploring a range of ‘what if?’ questions and databases can answer ‘what’s known?’ questions. Neither is designed to tell us ‘what’s there?’ When we want to know whether a microorganism is present in a food, or at what concentration, we must resort to laboratory testing. We might want this information as a starting point for using a model, but often we want to make a decision about the suitability of the food for a particular purpose. There are risks involved in making decisions based on test data because the test results tell us directly only about the small portion of the food analyzed in the laboratory. We use the laboratory results as an estimate of the condition of the far larger portion of the food that was not analyzed and make the decision based on that estimate. Since the laboratory result may be the same as, higher than or lower than the true value, there are two principal types of decision risk:
• wrongly accepting a quantity of food that does not meet the specification (sometimes known as the consumer’s risk);
• wrongly rejecting a quantity of food that does meet the specification (sometimes known as the producer’s risk).
Statistical sampling plans are designed to quantify the risk of making the wrong decision and show how the risk changes with sample size (number of samples taken for testing) and other decision criteria. They have been described at length by ICMSF and others (ICMSF, 1974, 1986, 2002; Kilsby et al., 1979; Jarvis, 1989, 2000; Hildebrandt et al., 1996; Goddard et al., 2001; Legan et al., 2001; Legan and Vandeven, 2003). Statistical sampling plans are not designed for environmental sampling, but principles of environmental sampling were discussed by ICMSF (2002).
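To illustrate the kind of calculation that underlies such plans, the sketch below computes the probability of accepting a lot under a generic two-class attributes plan (n sample units tested, lot accepted if no more than c are positive), assuming each unit is independently positive with probability p. The plan shown is an arbitrary illustration, not a plan recommended by ICMSF or any of the other authors cited.

    from math import comb

    def prob_accept(n, c, p):
        # Probability of acceptance: no more than c of n tested units positive,
        # each unit independently positive with probability p (binomial model).
        return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(c + 1))

    # Illustrative plan: n = 10 units, c = 0 positives allowed.
    for p in (0.01, 0.05, 0.10, 0.20):
        print(f"fraction of units positive = {p:.2f}: P(accept lot) = {prob_accept(10, 0, p):.2f}")

With n = 10 and c = 0, a lot in which 5 % of units would test positive is still accepted about 60 % of the time; quantifying that consumer's risk, and the corresponding producer's risk, is exactly what statistical sampling plans are designed to do.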
6.5.2 Trend analysis
Many quality assurance laboratories are involved in repetitive testing of similar samples from the same source over extended periods of time. These laboratories gradually amass enormous quantities of data that have been generated at great expense and used to support some decision, e.g. whether to accept delivery of an ingredient or release a finished product, and are then filed for posterity (or, at least, for the duration required by the laboratory quality policy or local regulations). Microbiology laboratories often overlook the historical value of these data. Some simple techniques for plotting and analyzing them, taken from the field of statistical process control (Montgomery, 1996), can greatly increase their value.
Control charts
The simplest kind of control chart is just a plot of results (e.g. log counts of a spoilage organism) against the order of sampling or testing. It is known as an ‘X’ or individual measurement chart. This chart can show trends that might otherwise be missed, allowing these to be the source of some action. For example, an upward trend in log counts over time might allow an intervention before the counts become
Fig. 6.4 Example of a control chart reprinted from Legan and Vandeven (2003). The upward trend in counts in January and February eventually triggers an investigation leading to an intervention that returns counts to wholly acceptable levels.
unacceptably high. This is illustrated in Fig. 6.4, where a process made product with bacterial counts that varied from batch to batch but showed a clear upward trend in counts between early January and late February. The upward trend triggered an investigation when the counts were marginally acceptable. The investigation identified an intervention that was applied before the counts became high enough to cause the product to be defective. The intervention caused the counts to return to fully acceptable levels. Equally, a downward trend may suggest an investigation to discover the cause and capture an improvement in future quality. Control limits can be determined on the basis of past experience with the sample type and marked on the chart to give immediate feedback that an ‘out of control’ event has taken place. More sophisticated control charts can derive control limits and generate several different kinds of warning flags using formal statistical rules (Jarvis, 2000; ICMSF, 2002; Jewell et al., 2002). These charts can help us to:
• understand the capability of a particular process and whether it can consistently achieve the desired performance;
• take action before a control limit is breached when an adverse trend is identified;
• look for the source of improvement when a positive trend is identified;
• estimate the probability that a severely out of control condition will result simply from the inherent process variability (Peleg, 2002).
Most statistical analysis programs have some ability to create control charts and some are very sophisticated, but simple plots can easily be created using a spreadsheet. Jewell et al. (2002) created a tool for plotting simple mean and range charts in Microsoft Excel.
Process variability analysis
Knowing the variability in a sequence of data is useful in itself. The more variable
the data, the more likely it is that extreme values arise simply as a result of the natural variability. Statistical analysis can be used to determine the distribution that most closely describes the variability in a sequence of historical data. That distribution can be used to estimate the frequency with which extreme values are likely to occur in the future, but not to predict when those events might occur (Nussinovitch and Peleg, 2000; Peleg et al., 2000; Corradini et al., 2001; Engel et al., 2001; Peleg, 2002). It follows that the frequency of extreme events can be reduced by interventions to decrease the variability of future data. Figure 6.5 illustrates the tendency of a more variable distribution to lead to more extreme values in two ways. Figure 6.5a compares two normal distributions of log counts with the same mean but different standard deviations showing that the distribution with greater standard deviation has an increased probability of high count results. Figure 6.5b illustrates the same point with reference to sequences of counts following the same normal distributions of log counts as Fig. 6.5a. The sequences are simulated using the method described by Nussinovitch and Peleg (2000). When an arbitrary limit is used to illustrate the maximum tolerable count, it is clear that the more variable distribution is more likely to breach the limit.
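The point made by Fig. 6.5 can be sketched numerically as below, drawing independent normal deviates rather than reproducing the exact simulation method of Nussinovitch and Peleg (2000); the mean, the two standard deviations and the arbitrary limit are those used in the figure.

    import random
    from statistics import NormalDist

    mean, limit = 2.0, 4.0   # log10 count/g and an arbitrary maximum tolerable count
    for sd in (0.4, 1.0):
        # Theoretical chance that a single result exceeds the limit...
        p_exceed = 1.0 - NormalDist(mean, sd).cdf(limit)
        # ...and a simulated 50-point sequence, as in Fig. 6.5b.
        rng = random.Random(1)
        breaches = sum(rng.gauss(mean, sd) > limit for _ in range(50))
        print(f"sd = {sd}: P(single result > limit) = {p_exceed:.1e}, "
              f"breaches in 50 simulated results = {breaches}")

The distribution with the larger standard deviation is several orders of magnitude more likely to throw up a result above the limit, which is the argument for interventions that reduce process variability rather than simply reacting to individual extreme results.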
6.5.3 Quantitative risk assessment
Quantitative risk assessment is a tool to quantify the effect on public health of different food safety interventions, although in scope it is truly a discipline in its own right. It is considered here very briefly because it fits within a continuum of quantitative microbiology tools, but the interested reader will find a more detailed discussion on the topic later in this book. Formal quantitative risk assessment has become feasible for foodborne pathogens only since the mid 1990s. It involves four elements (Lammerding and Paoli, 1997; Lammerding and McKellar, 2004):
• hazard identification: in which the connection between disease and the presence of a pathogen in food is documented;
• hazard characterization: in which the characteristics of the disease, the pathogen–host relationship and, ideally, the dose–response relationship are documented;
• exposure assessment: which tracks the pathways by which the pathogen enters the food supply and multiplies, survives or dies until the food is consumed in order to estimate the likely consumption of the pathogen; depending on the purpose for the risk assessment this step may focus on a particular pathway (perhaps a food category or specific product);
pathogen–host relationship and, ideally, the dose–response relationship are documented; exposure assessment: which tracks the pathways by which the pathogen enters the food supply and multiplies, survives or dies until the food is consumed in order to estimate the likely consumption of the pathogen; depending on the purpose for the risk assessment this step may focus on a particular pathway (perhaps a food category or specific product); risk characterization: which uses the data from the previous steps to estimate the risk in terms of the likelihood and severity of illness. This step should also consider any uncertainties associated with the data.
Clearly, models have a role to play in exposure assessment and risk characterization. The use of models and modeling approaches in quantitative risk assessment has been well documented (Brown et al., 1998; van Gerwen and Gorris, 2004; van Gerwen et al., 2000; van Gerwen and Zwietering, 1998).
Fig. 6.5 The effect of process variability on the likelihood (P) of extreme events. (a) Normal distribution of log counts for processes with mean log count of 2.0 but standard deviation of 0.4 (solid line) or 1.0 (broken line). The shaded area shows the increased probability of high count results in the distribution with greater standard deviation. (b) Simulated sequences of counts following normal distributions of log counts with means = 2.0 and standard deviation = 0.4 or 1.0. When an arbitrary limit is used to illustrate the maximum tolerable count it is clear that the more variable distribution is more likely to breach the limit.
Sampling supports hazard identification and exposure assessment. Quantitative risk assessment would not be possible without the models that have been developed to date but, as quantitative risk assessment itself develops, it will demand new and improved models.
6.6 Future trends
McMeekin (2004) wrote ‘…one continues to have the sense that the predictive models and the databases upon which they are based have not nearly reached their potential’. Despite research since the early 1980s, only a small number of useable models have become available in electronic form, a situation which probably contributes to this perception. McMeekin (2004) sees the future as moving beyond building models to developing and verifying applications. Since this is where models can add considerable value to product and process evaluation and essential decision-making in industry and government, this also seems likely to be true. However, this process will, itself, take a long time (one hopes less than another 20 years!). Traditional scientific publishing provides a considerable barrier to the wider application of many perfectly useful models. The potential user must create their own spreadsheet or program guided by the printed description. Easier access to models in useable electronic formats will encourage exploration of model applications and build a wider pool of potential users. Already, institutions and individual researchers are showing a willingness to make models available through the Internet (see Section 6.7 on Sources of further information and advice) and, with an increasing number of scientific journals available electronically, the opportunities to link to electronic sources of models will grow. Another limitation of traditional scientific papers is that often only derivatives (summary statistics) of raw data are provided in the interests of saving space. For example a single ‘D’-value may be used to represent a 6–10 point lethality curve. Readers have no way to assess for themselves the underlying shape of the curve or the variability in the data, to try modeling the data using different approaches or to compare raw data from different studies. The development of online databases such as ComBase is a very positive step forward. As these become better known and grow through submissions of new data, they will become increasingly valuable both to researchers and decision makers. It is conceivable that submission of raw data to such a database will eventually become a requirement for publication of papers addressing microbial growth, death or survival. Another barrier to greater uptake of modeling applications is the perception that models are so conservative as to be impractical. There is clearly a fine line between ‘fail-safe’ and ‘too conservative’, but it is true that broth media are often more supportive of growth than foods. Hence the experimental approach to model building is extremely important. It is likely that applications development will proceed with models developed in foods or more ‘food-like’ materials. When particular models are seen as useful, effective, available and easy to use they are readily adopted and applied within the context of a broader food safety
program. Increasing adoption of modeling applications will happen quietly, but with real benefits for food safety. It is also clear that food microbiology is a much more quantitative discipline now than it was in the 1970s and 1980s. It seems inevitable that this trend will continue as the food industry continues to be highly competitive within an increasingly global business environment and as regulatory agencies worldwide move towards risk-based policies for food safety oversight.
6.7 Sources of further information and advice
The list below is intended as a guide to some useful sources of models and other quantitative tools but is not meant to be a comprehensive list of all such sites and tools.
6.7.1 Models and modeling tools
• http://www.purac.com/purac_com/a5348511153c582f5bd69fd6bd64bb49.php#suppression. Brief information on models available from Purac to calculate inhibition of growth of Listeria monocytogenes using potassium lactate and sodium diacetate, including a link to request the model(s) on compact disc. Accessed 14 December, 2005.
• http://www.purac.com/purac_com/86e55dbd9264d98f3407e7ab6589478f.php. Brief information on Purac Listeria Control Model 2005™ with link to contact details for Purac sales offices. Accessed 14 December, 2005.
• http://www-unix.oit.umass.edu/~aew2000/GrowthAndSurvival.html. Spreadsheets for generating non-isothermal survival and growth curves provided by Dr Micha Peleg. Accessed 14 December, 2005.
• http://www.dfu.min.dk/micro/sssp/Home/Home.aspx. Seafood spoilage software from the Danish Institute for Fisheries Research. Accessed 14 December, 2005.
• http://www.ifr.ac.uk/safety/growthpredictor. Source for UK Growth Predictor and Perfringens Predictor software. Accessed 14 December, 2005.
• http://www.ifr.ac.uk/safety/DMfit/default.html. DMFit: an add-in for Microsoft Excel 5 or above. It fits microbial growth curves using the model of Baranyi and Roberts (1994). Accessed 14 December, 2005.
• http://www.ifr.ac.uk/microfit/. MicroFit: a tool developed by the UK Institute for Food Research to allow product developers to extract microbial growth parameters from challenge test data. Accessed 14 December, 2005.
• http://ars.usda.gov/Services/docs.htm?docid=6786. Source for USDA pathogen modeling program and associated notes. Accessed 14 December, 2005.
• http://www.symprevius.net/. Website of French national predictive microbiology program. Accessible in French and English (click on the rotating flag). Registered users can access a database of over 3000 growth curves and a simulation module. Accessed 14 December, 2005.
6.7.2 Databases
• http://wyndmoor.arserrc.gov/combase/. Browser for ComBase: Combined Database for Predictive Microbiology, a database provided jointly by the United Kingdom Institute for Food Research and the USDA Eastern Regional Research Center. It contains data on growth, survival and death of many foodborne pathogens. Accessed 14 December, 2005.
• http://www.foodrisk.org/database_pathogens.cfm. List of links to databases holding pathogen information including epidemiological data, disease incidence, etc., provided by JIFSAN (Joint Institute for Food Safety and Nutrition). Accessed 14 December, 2005.
6.7.3 Risk assessment information
• http://www.foodrisk.org/non_commercial_tools.cfm. Links to a number of different risk-assessment tools provided by JIFSAN. Accessed 14 December, 2005.
• http://smas.chemeng.ntua.gr/miram/?module=calculator. A simple risk-assessment calculator in Microsoft Excel to estimate probability of illness and microbial concentration. Produced by the European Union Safety Monitoring and Assurance System project. Accessed 14 December, 2005.
• http://www.eu-rain.com/. European Union food safety risk assessment database for sharing risk assessment data. Data are available to scientists who have themselves submitted data; summaries of research papers associated with data are freely available. Accessed 14 December, 2005.
6.7.4 Other information
• http://www.i2workout.com/mcuriale/mpn/index.html. Several versions of a most probable number (MPN) calculator, including one in Excel provided by Mike Curiale. Accessed 14 December, 2005.
• http://www.icmsf.iit.edu. Source for download of a spreadsheet to calculate the concentration of microorganisms controlled by ICMSF sampling plans as described by Legan et al. (2001). A short guide to the spreadsheet is available at www.icmsf.iit.edu. Accessed 14 December, 2005.
6.8 Acknowledgements
Thanks are due to Dr Mark Vandeven (of Colgate Palmolive, formerly of Kraft Foods) who provided helpful comments on the manuscript.
6.9 References
Baranyi J (1999) ‘Creating predictive models’ in Roberts TA (ed.), COST 914 Predictive modelling of microbial growth and survival in foods, Luxembourg, Office for Official Publications of the European Communities, 65–67. Baranyi J and Roberts TA (1994) A dynamic approach to predicting bacterial growth in food, Int J Food Microbiol, 23, 277–294. Baranyi J and Tamplin M (2004) ‘ComBase: a common database on microbial response to food environments’, J Food Prot, 67(9), 1967–1971. Battey AS, Duffy S and Schaffner DW (2002) Modeling yeast spoilage in cold-filled readyto-drink beverages with Saccharomyces cerevisiae, Zygosaccharomyces bailii, and Candida lipolytica, Appl Environ Microbiol, 68(4), 1901–1906. Bhaduri S, Buchanan RL and Phillips JG (1995) Expanded response surface model for predicting the effects of temperatures, pH, sodium chloride contents and sodium nitrite concentrations on the growth rate of Yersinia enterocolitica, J Appl Bacteriol, 79, 163– 170. Brown MH, Davies KW, Billon CMP, Adair C and McClure PJ (1998) Quantitative microbiological risk assessment: principles applied to determining the comparative risk of salmonellosis from chicken products, J Food Prot, 61, 1446–1453. Bull MK, Szabo EA, Cole MB and Stewart, CM (2005) Toward validation of process criteria for high-pressure processing of orange juice with predictive models, J Food Prot, 68, 949– 954. Cauvain SP and Seiler DAL (1992) Equilibrium relative humidity and the shelf life of cakes, Report No. 150, Chorleywood, Flour Milling and Baking Research Association. Cole MB and Keenan MHJ (1987) A quantitative method for predicting shelf life of soft drinks using a model system, J Ind Microbiol, 2, 59–62. Cooper RM, Knight RA, Robb J and Seiler DAL (1968) The equilibrium relative humidity of baked products with particular reference to the shelf life of cakes, Report No. 19, Chorleywood, Flour Milling and Baking Research Association. Corradini MG, Horowitz J, Normand MD and Peleg M (2001) Analysis of the fluctuating pattern of E. coli counts in the rinse water of an industrial poultry plant, Food Res Int, 34, 565–572. Curtis LM, Patrick M and Blackburn C de W (1995) Survival of Campylobacter jejuni in foods and comparison with a predictive model, Lett Appl Microbiol, 21, 194–197. Dalgaard P, Buch P and Silberg S (2002) Seafood Spoilage Predictor – development and distribution of a product specific application software, Int J Food Microbiol, 73, 343– 349. Deak T and Beuchat LR (1993) Use of indirect conductimetry for predicting growth of food spoilage yeasts under various environmental conditions, J Ind Microbiol, 12, 301–308. Engel R, Normand MD, Horowitz J and Peleg M (2001) A qualitative probabilistic model of microbial outbursts in foods, J Sci Food Agric, 81, 1250–1262. Evans DG, Everis LK and Betts GD (2004) Use of survival analysis and decision trees to model the growth/no growth boundary of spoilage yeasts as affected by alcohol, pH, sucrose, sorbate and temperature, Int J Food Microbiol, 92, 55–67. Folsom JP and Frank JF (2000) Heat inactivation of Escherichia coli O157:H7 in apple juice exposed to chlorine, J Food Prot, 63, 1021–1025. Food and Drug Administration (2001) Hazard Analysis and Critical Control Point (HACCP); Procedures for the Safe and Sanitary Processing and Importing of Juice; Final Rule, Federal Register, 66, 6137–6202. 
Food Standards Agency (2004) Guidance on the safety and shelf life of vacuum and modified atmosphere packed chilled foods (draft), http://www.food.gov.uk/multimedia/pdfs/ vpindustrycode2004.pdf (Accessed 14 December, 2005). FSIS (2003) Control of Listeria monocytogenes in ready-to-eat meat and poultry products, Federal Register, 68, 34208–34254.
Goddard M, Jewell K, Morton RS, Paynter O, Ruegg JR and Voysey PA (eds) (2001) Review no. 27, Designing and improving acceptance sampling plans – a tool, Chipping Campden, Campden and Chorleywood Food Research Association. Hildebrandt G, Böhmer L and Dahms S (1996) Three-class attributes plans in microbiological quality control: a contribution to the discussion, J Food Prot, 58(7), 784–790. ICMSF (International Commission on Microbiological Specifications for Foods) (1974) Microorganisms in Foods 2, Sampling for Microbiological Analysis: Principles and Specific Applications, Toronto, University of Toronto Press. ICMSF (International Commission on Microbiological Specifications for Foods) (1986) Microorganisms in Foods 2, Sampling for Microbiological Analysis: Principles and Specific Applications (2nd edn), Toronto, University of Toronto Press. ICMSF (International Commission on Microbiological Specifications for Foods) (2002) Microorganisms in Foods 7, Microbiological Testing in Food Safety Management, New York, Kluwer Academic/Plenum Publishers. Jarvis B (1989) Statistical aspects of the microbiological analysis of foods, Progress in Industrial Microbiology, vol. 21, Amsterdam, Elsevier. Jarvis B (2000) Sampling for microbiological analysis, in BM Lund, TC Baird-Parker and GW Gould (eds), The Microbiological Safety and Quality of Foods, vol. 2, Gaithersburg, MD, Aspen Publishers, 1691–1733. Jenkins P, Poulos PG, Cole MB, Vandeven MH and Legan JD (2000) The boundary of growth of Zygosaccharomyces bailii in acidified products described by models for time to growth and probability of growth, J Food Prot, 63, 222–230. Jewell K, Voysey P and the Sampling and Statistics Working Party (2002) Review No. 26, Statistical Quality Assurance: how to use your microbiological data more than once, Chipping Campden, Campden and Chorleywood Food Research Association. Kilsby D, Aspinall LJ and Baird-Parker AC (1979) A system for setting numerical microbiological specifications for foods, J Appl Bacteriol, 46, 591–599. Lammerding AM and McKellar RC (2004) Predictive microbiology in quantitative risk assessment, in RC McKellar and X Lu (eds) Modeling Microbial Responses in Food, Boca Raton, FL, CRC Press, 263–284. Lammerding AM and Paoli GM (1997) Quantitative risk assessment: an emerging tool for emerging foodborne pathogens, Emerg Infect Dis, 3, 483–487. Legan JD and Vandeven MH (2003) Sampling techniques, in TA McMeekin (ed.) Detecting Pathogens in Food, Cambridge, Woodhead Publishing, 20–51. Legan D, Vandeven M, Stewart C and Cole M (2000) Modeling the growth, survival and death of bacterial pathogens in foods, in C de W Blackburn and PJ McClure (eds), Foodborne Pathogens: Hazards, Risk Analysis and Control, Cambridge, Woodhead Publishing, 53–95. Legan JD, Vandeven MH, Dahms S and Cole MB (2001) Determining the concentration of microorganisms controlled by attributes sampling plans, Food Control, 12, 137–147. Legan JD, Seman DL, Milkowski AL, Hirschey JA and Vandeven MH (2004) Modeling the growth boundary of Listeria monocytogenes in ready-to-eat cooked meat products as a function of the product salt, moisture, potassium lactate and sodium diacetate concentrations, J Food Prot, 67(10), 2195–2204. Lopez-Malo A and Palou E (2000) Modeling the growth/no-growth interface of Zygosaccharomyces bailii in mango puree, J Food Sci, 65, 516–520. 
Mattick KL, Jørgensen F, Wang P, Pound J, Vandeven, MH, Ward LR, Legan JD, LappinScott HM and Humphrey TJ (2001) Effect of challenge temperature and solute type on heat tolerance of Salmonella serovars at low water activity, Appl Environ Microbiol, 67(9), 4128–4136. Mazzotta AS (2001) Thermal inactivation of stationary-phase and acid-adapted Escherichia coli 0157:H7, Salmonella, and Listeria monocytogenes in fruit juices, J Food Prot, 64(3), 315–320. McCandless L (1997) New York cider industry learns to make cider safer, Great Lakes Fruit
Growers News, 1997 (March), 11. http://virtualorchard.net/glfgn/march97/ ciderworkshop.html (Accessed 14 December, 2005). McClure PJ, Blackburn C de W, Cole MB, Curtis PS, Jones JE, Legan JD, Ogden ID, Peck MW, Roberts TA, Sutherland JP and Walker SJ (1994) Modelling the growth, survival and death of microoorganisms in foods: the UK Food Micromodel approach, Int J Food Microbiol, 23, 265–275. McElroy DM, Jaykus LA and Foegeding PM (2000) Validation and analysis of modeled predictions of growth of Bacillus cereus spores in boiled rice, J Food Prot, 63, 268–272. McMeekin T (2004) An essay on the unrealized potential of predictive microbiology, in RC McKellar and X Lu (eds), Modeling Microbial Responses in Food, Boca Raton, FL, CRC Press, 321–335. Montgomery D (1996) Introduction to Statistical Quality Control (3rd edn), New York, John Wiley and Sons. Musa DM, Ramaswamy HS and Smith JP (1999) High-pressure destruction kinetics of Listeria monocytogenes on pork, J Food Prot, 62, 40–45. Ngadi M, Smith JP and Cayouette B (2003) Kinetics of ultraviolet light inactivation of Escherichia coli O157:H7 in liquid foods, J Sci Food Agric, 83, 1551–1555. Nussinovitch A and Peleg M (2000) Analysis of the fluctuating patterns of microbial counts in frozen industrial food products, Food Res Int, 33, 53–62. Olmez HK and Aran N (2005) Modeling the growth kinetics of Bacillus cereus as a function of temperature, pH sodium lactate, pH and sodium chloride concentrations, Int J Food Microbiol, 98, 135–143. Palumbo SA, Williams AC, Buchanan RL and Phillips JG (1991) Model for the aerobic growth of Aeromonas hydrophila K144, J Food Prot, 54, 429–435. Peleg M (2002) Interpretation of the irregularly fluctuating microbial counts in commercial dairy products, Int Dairy J, 12, 255–262. Peleg M, Nussinovitch A and Horowitz J (2000) Interpretation of and extraction of useful information from irregular fluctuating industrial microbial counts, J Food Sci, 65, 740– 747. Pin C and Baranyi J (1998) Predictive models as means to quantify the interactions of spoilage organisms, Int J Food Microbiol, 41, 59–72. Pin C and Baranyi J (2000) Predictive model for the growth of Yersinia enterocolitica under modified atmospheres, J Appl Microbiol, 88, 521–530. Pin C, Velasco de Diego R, George S, Garcia de Fernando GD and Baranyi J (2004) Analysis and validation of a predictive model for growth and death of Aeromonas hydrophila under modified atmospheres at refrigeration temperatures, Appl Environ Microbiol, 70, 3925– 3932. Praphailong W and Fleet GH (1997) The effect of pH, sodium chloride, sucrose, sorbate and benzoate on the growth of food spoilage yeasts, Food Microbiol, 14, 459–468. Ramaswamy HS, Riahi E and Idziak E (2003) High-pressure destruction kinetics of E. coli (29055) in apple juice, J Food Sci, 68, 1750–1756. Seman DL (2001) Safety and quality concerns – ingredients, in Proceedings of the 54th Reciprocal Meats Conference, American Meat Science Association, Savoy, Ill, 68–72. Seman DL, Borger AC, Meyer JD, Hall PA and Milkowski AL (2002) Modeling the growth of Listeria monocytogenes in cured ready-to-eat processed meat products by manipulation of sodium chloride, sodium diacetate, potassium lactate and product moisture content, J Food Prot, 65, 651–658. Stewart CM, Cole MB, Legan JD, Slade L, Vandeven MH and Schaffner DW (2001) Modeling the growth boundary of Staphylococcus aureus for risk assessment purposes, J Food Prot, 64, 51–57. 
Tamplin M (2002) Growth of Escherichia coli O157:H7 in raw ground beef stored at 10 ºC and the influence of competitive bacterial flora, strain variation and fat level, J Food Prot, 65, 1535–1540. Tamplin M, Baranyi J and Paoli G (2004) Software programs to increase the utility of
predictive microbiology information, in RC McKellar and X Lu (eds), Modeling Microbial Responses in Food, Boca Raton, FL, CRC Press, 223–242. Tuynenburg Muys G (1971) Microbial safety in emulsions, Process Biochem, 6, 25–28. van Gerwen SJC and Gorris LGM (2004) Application of elements of microbiological risk assessment in the food industry via a tiered approach, J Food Prot, 67, 2033–2040. van Gerwen SJC and Zwietering MH (1998) Growth and inactivation models to be used in quantitative risk assessments, J Food Prot, 61, 1541–1549. van Gerwen SJC, te Giffel MC, van ’t Riet K, Beumer RR and Zwietering MH (2000) Stepwise quantitative risk assessment as a tool for characterization of microbiological food safety, J Appl Microbiol, 88, 938–951. van Opstal I, Vanmuysen SCM, Wuytack EY, Masschalk B and Michiels CW (2005) Inactivation of Escherichia coli by high hydrostatic pressure at different temperatures in buffer and carrot juice, Int J Food Microbiol, 98, 179–191. Whiting RC and Golden MH (2003) Modeling temperature, pH, NaCl nitrite and lactate on survival of Escherichia coli O157:H7 in broth, J Food Safety, 23, 61–74.
7 Predictive models in microbiological risk assessment
M. H. Zwietering, Wageningen University, The Netherlands, and M. J. Nauta, National Institute for Public Health and the Environment, The Netherlands
7.1 Quantitative microbiological risk assessment
Quantitative microbiological risk assessments (QMRA) for foodborne disease are indispensable in order to design, develop and evaluate control measures to protect public health. They can be used in setting Food Safety Objectives (FSO) and to prove equivalence of risks in international trade. Furthermore, QMRAs can be used to detect critical points in food chains, to estimate the effects of interventions to reduce foodborne diseases or – when combined with economic analysis – to evaluate the cost-effectiveness of various interventions. For this quantification of risk it is necessary to relate the potential exposures during consumption (i.e. the dose) to the health impact (i.e. the response). In this whole process models are indispensable, since many parts cannot be measured directly. Models for dose–response can be linked to growth and inactivation models, models for temperature distribution during cooling in a product, recontamination models during preparation, etc. The objective of QMRAs is to quantify risk and, often more importantly, to determine the effect of interventions on that risk. Depending on the problem, the starting point of the assessment can be primary production, but it can in other cases start at production, at retail or even at the start of preparation. The end point, however, is always risk, with risk defined as a health effect caused by a hazard in a food and the likelihood of its occurrence. This risk is quantified in a number, hence the procedure is called ‘quantitative microbiological risk assessment’. In specific cases one can focus only on exposure assessment, setting as the
target the reduction of exposure and designing mitigation strategies for the exposure only. Generally these assessments are extensive and data-demanding; a combination of sources is usually a necessity, for instance data from experiments, literature, databases, models and experts. Quantitative models can contain combinations of the aforementioned data, and they are therefore strong supportive tools for quantitative risk assessments. Linking various models for the various phenomena (growth, inactivation, (re)contamination) gives insight into the kinetics along chains, and can be used within risk assessment, HACCP, and to show that one complies with an FSO.
7.2 Quantitative microbiology
Quantitative microbiology started in the 1920s with models for bacterial inactivation during thermal processing (Bigelow, 1921), and was extended in the 1980s with models for microbial growth (McMeekin et al., 1993). Models for the change of numbers in time, like growth models or inactivation models, are called primary models. Examples are the Gompertz, Baranyi and logistic models for growth. For inactivation, examples are the log-linear or Weibull model (for a review concerning primary or secondary models see Van Gerwen and Zwietering, 1998). These models result in an estimation of the growth or inactivation for the specific conditions studied. Models to estimate the growth or inactivation rate as a response to environmental variables [like temperature (T), pH] are secondary models. Polynomial models, gamma type, or cardinal models are often used (see also Van Gerwen and Zwietering, 1998). Since the 1980s the number of models and the amount of data have increased substantially. These models can be useful tools as a quantitative support in HACCP and can be used within quantitative risk assessment. Often the order of magnitude of the predictions does not vary largely between models, and in order to use them to the full, it is best to use multiple models in parallel, but also in combination with expert knowledge, literature data and challenge tests. In addition to the phenomena of growth and inactivation, the initial contamination level and recontamination rate are important factors to include. These last two factors have received relatively less attention than the first two, although they are often equally important. As an example, Section 7.3 describes a model for recontamination by air, which shows in a very illustrative way the approach of modelling. The following aspects of the modelling process will be concentrated on to begin with:
• the logic of the model;
• the testing of the model;
• the collection of parameters and the structured storage of data in databases;
• the use of the model for quantification;
• critical use;
• simulation tools and decision support tools.
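As a minimal illustration of the primary/secondary distinction described above, the following Python sketch combines the classical log-linear primary inactivation model with a Bigelow-type secondary model for the temperature dependence of the D-value. The parameter values (Dref = 2.0 min at 70 °C, z = 9 °C) are arbitrary illustrative numbers, not data for any particular organism.

```python
def d_value_min(temp_c, d_ref_min, t_ref_c, z_c):
    """Secondary model (Bigelow type): D-value as a function of temperature."""
    return d_ref_min * 10.0 ** ((t_ref_c - temp_c) / z_c)

def log_reduction(temp_c, time_min, d_ref_min, t_ref_c, z_c):
    """Primary model (log-linear inactivation): log10 reductions after time_min."""
    return time_min / d_value_min(temp_c, d_ref_min, t_ref_c, z_c)

# Arbitrary illustrative parameters: D = 2.0 min at 70 °C, z = 9 °C
for temp in (75.0, 85.0, 95.0):
    reductions = log_reduction(temp, time_min=2.0,
                               d_ref_min=2.0, t_ref_c=70.0, z_c=9.0)
    print(f"{temp:.0f} °C for 2 min: {reductions:.1f} log10 reductions")
```

Growth models follow the same two-layer structure: a primary model (such as the logistic or Baranyi model) describes the change of numbers in time, and a secondary model (polynomial, gamma-type or cardinal) describes how its parameters depend on the environment.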
7.3 Recontamination
7.3.1 The logic of the model
A relatively simple model to quantify recontamination via the air has been developed by Whyte (1986):

Rc = Cair νs A t / W

where Rc is the contamination rate (cfu·g–1), Cair is the concentration of microorganisms in the air (cfu·m–3), νs is the settling velocity (m·s–1), A is the exposed product area (m2), t is the exposure time (s) and W is the weight of the product (g). The first aspect of modelling, the logic of the model, is relatively clear in this example: the number of organisms that will contaminate a product due to sedimentation from the air follows from a balance. It is always relevant to check the units in the equation as a first step in the verification.

7.3.2 Testing of the model
The second aspect is the testing of the model. The model has been tested by Whyte (1986) by determining the sedimentation from air into bottles. If a model is tested for specific conditions and shown to work, it can be further challenged. For example, the above model predicts that the contamination rate is proportional to the air concentration, to the open surface area and to the time of exposure. These properties of the model can easily be challenged by experiments.

7.3.3 Collection of parameters and the structured storage of data in databases
The third aspect is the collection of parameters. These data often come from different domains as data can be gathered from literature, experiments, factory data, etc. In this case area, exposure time and product weight are application-specific data. Comparison of a variety of settling velocity data shows that it is relatively constant [the average log settling velocity (m/s) was –2.59 with a standard deviation of 0.45] for very diverse conditions, even for bacteria, yeasts and moulds (den Aantrekker et al., 2003). This result is logical if it is considered that it is often not the organisms themselves that settle, but aerosols and dust particles, to which the organisms are attached. For the last parameter in this example, the concentration of microorganisms in the air, a structured collection of experimental data in databases can facilitate the use of the model. As shown by den Aantrekker et al. (2003), data typical for various types of production environments can be deduced by data analysis and integrated in decision support systems. This facilitates structured data storage and the possibility of quantitative estimation.

7.3.4 Use of the model for quantification
During the production process, a sliced meat product, with A = 140 cm2 and
W = 17 g, is exposed to the air for 45 s. In this example Cair is 3.39 log cfu·m–3 and νs is 2.57 mm·s–1. The model thus results in a contamination level of 4 cfu·product–1 or 0.2 cfu·g–1 (den Aantrekker et al., 2003). This means that when the product is sterile, contact with contaminated air causes an increase of 4 cfu·product–1. It is obvious that this may be a major risk-determining factor in the case of very infectious bacteria or when subsequent growth occurs. On the other hand, if the product is already contaminated with 100 cfu·g–1, an addition of 0.2 cfu·g–1 is not very relevant. However, this 0.2 cfu·g–1 can still be of considerable relevance if a particularly dangerous organism recontaminates the product. This shows clearly that such an equation can be a supportive tool in the decision-making process, but that it should be used in a critical way. Although these simple models do not incorporate all factors that may be of relevance, they can be used to get an indication of the importance of air contamination compared with the initial contamination of the product and possible growth and inactivation during the production process.
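The calculation above is easy to reproduce. The short Python sketch below implements the Whyte (1986) balance directly; the function and variable names are chosen here for illustration, and the only inputs are the worked-example values quoted in the text.

```python
def airborne_contamination(c_air_cfu_m3, v_s_m_s, area_m2, time_s, weight_g):
    """Whyte (1986) balance: Rc = Cair * vs * A * t / W."""
    cfu_per_product = c_air_cfu_m3 * v_s_m_s * area_m2 * time_s
    return cfu_per_product, cfu_per_product / weight_g

# Worked example from the text: sliced meat product, A = 140 cm2, W = 17 g,
# t = 45 s, Cair = 3.39 log cfu/m3, vs = 2.57 mm/s
c_air = 10 ** 3.39          # cfu per m3
per_product, per_gram = airborne_contamination(
    c_air_cfu_m3=c_air,
    v_s_m_s=2.57e-3,        # 2.57 mm/s expressed in m/s
    area_m2=140e-4,         # 140 cm2 expressed in m2
    time_s=45.0,
    weight_g=17.0,
)
print(f"{per_product:.1f} cfu per product, {per_gram:.2f} cfu per gram")
# Gives about 4 cfu per product and about 0.2 cfu per gram, as in the text
```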
7.3.5 Critical use
It should always be realised that while a model may work very well in certain conditions, in other conditions it may be totally wrong. For instance, the above model is not applicable under conditions of forced air flow and when electrostatic forces can play a role.
7.3.6 Simulation tools and decision support tools
The use of the model can be facilitated when it is connected to data in databases. This model can be integrated in decision support systems as explained below.
7.4 Linking models
Linking of models for growth, inactivation and (re)contamination can give insight into production processes (Figs 7.1–7.4).
7.4.1 Example 1: sandwich spread
In a production process for a sandwich spread (Fig. 7.1), the kinetics of Staphylococcus aureus have been determined. These kinetics depend on the cardinal parameters of the organism (Tmin, Topt, etc.). For the various stages (mixing, homogenising, etc.) the conditions (T, pH, aw) in the product determine the specific rate (ν) of the organism (specific growth or inactivation rate). Together with the residence time in the stage, the change in log numbers is estimated. This change in log numbers is represented by the growth characteristic (GC), reduction characteristic (RC), and contamination characteristic (CC). It can be seen that the time scales of the various stages are largely different.
Fig. 7.1 Evaluation of the fate of Staphylococcus aureus in a sandwich spread. (The figure tabulates, for each process step (initial, mixing 1, homogenising, mixing 2, deaeration, heating 1, packaging, heating 2, cooling 1, cooling 2 and storage at 21 °C for 480 days), the conditions T, pH, aw, time and product factor PF, the specific rate ν (1/h) and the resulting log N, GC, RC and CC values, based on the cardinal parameters Tmin 6.7, Topt 36.7, Tmax 50.0 (°C), pHmin 4.0, pHopt 6.5, pHmax 10.0, awmin 0.8 and μopt 2.3 h–1, and the inactivation parameters Tref 70 °C, log Dref 0.33 and z 8.8 °C.)
Fig. 7.2 Evaluation of the fate of Staphylococcus aureus with refrigerated storage after opening. (As Fig. 7.1, but with the storage stage at 8 °C; the tabulated values show an increase of about 2.5 log during the 480-day storage period.)
For instance, the mixing (stage 1) takes ten minutes, while the storage (last stage) takes 480 days. The parameters GC, RC and CC can be used to compare the relative effect of the various stages. The kinetics are represented graphically in Fig. 7.1. It is clear that there is no relevant growth during the process, substantial inactivation during heating and potential large growth during consumer storage. Growth can, however, only occur if recontamination takes place, since inactivation is so large that no survivors will occur (the probability of survival is virtually zero since RC > 100, meaning more than 100 logs of reduction). After opening, recontamination is more likely to occur. But then the product should be stored refrigerated, and the temperature in the last stage will be lower. Furthermore the specific composition of the product (specific interaction of pH and acids) might reduce growth more than predicted. So a challenge test might show that growth during storage is lower than predicted. It is clear that the predictions as given in Fig. 7.1 and quantification of various scenarios (like recontamination with refrigeration, effects of process failures) give insight into the kinetics and can support reasoning and decisions. As an example, in Fig. 7.2 the effect of refrigerated storage (at 8 °C) is given, simulating refrigerated storage after opening and potential recontamination. It is clear that some growth (2.5 logs) is potentially possible during the 480-day shelf life. At 7 °C negligible growth (0.13 logs) is predicted and at 10 °C very large growth (> 8 logs).
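The gamma-style calculation behind such evaluations can be sketched in a few lines of Python. The exact secondary model used to generate Figs 7.1 and 7.2 is not specified in the text, so the sketch below assumes a common cardinal-parameter (Rosso-type) form for temperature and pH and a linear term for aw; predictions near the growth boundary (such as storage at 8 °C) are very sensitive to this choice, so the output should not be expected to match the values shown in the figures exactly, and the cap imposed by the maximum population density is ignored.

```python
import math

def gamma_temperature(t, t_min, t_opt, t_max):
    """Cardinal temperature model with inflection (one common choice)."""
    if not (t_min < t < t_max):
        return 0.0
    num = (t - t_max) * (t - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (t - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2 * t))
    return num / den

def gamma_ph(ph, ph_min, ph_opt, ph_max):
    if not (ph_min < ph < ph_max):
        return 0.0
    return ((ph - ph_min) * (ph - ph_max)
            / ((ph - ph_min) * (ph - ph_max) - (ph - ph_opt) ** 2))

def gamma_aw(aw, aw_min):
    return max(0.0, (aw - aw_min) / (1.0 - aw_min))

def growth_characteristic(mu_opt_per_h, t, ph, aw, hours, cardinal):
    """GC: predicted log10 increase for one process stage."""
    mu = (mu_opt_per_h
          * gamma_temperature(t, cardinal["t_min"], cardinal["t_opt"], cardinal["t_max"])
          * gamma_ph(ph, cardinal["ph_min"], cardinal["ph_opt"], cardinal["ph_max"])
          * gamma_aw(aw, cardinal["aw_min"]))
    return mu * hours / math.log(10)

# Cardinal parameters as listed in Fig. 7.1 for S. aureus
staph = {"t_min": 6.7, "t_opt": 36.7, "t_max": 50.0,
         "ph_min": 4.0, "ph_opt": 6.5, "ph_max": 10.0, "aw_min": 0.8}

# Two illustrative stages: a warm mixing step (40 °C, pH 4.25, aw 0.95, 0.5 h)
# and the refrigerated storage scenario of Fig. 7.2 (8 °C, 480 days)
gc_mix = growth_characteristic(2.3, t=40.0, ph=4.25, aw=0.95, hours=0.5, cardinal=staph)
gc_storage = growth_characteristic(2.3, t=8.0, ph=4.25, aw=0.95, hours=480 * 24, cardinal=staph)
print(f"GC, warm mixing stage : {gc_mix:.2f} log10")
print(f"GC, storage at 8 °C   : {gc_storage:.1f} log10")
```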
7.4.2 Example 2: smoked salmon
For the processing of smoked salmon (Fig. 7.3), growth of Listeria monocytogenes is estimated. This organism has other cardinal parameters and the process stages have different characteristics. Fig. 7.3 shows that the smoking stage and consumer storage are the most relevant stages as large growth occurs. If a specific effect of smoke (phenols) is quantified, this can be included (this is the product factor, PF). However, before such an additional effect can be included one needs to quantify the effect of these compounds on the growth of the organisms. Effects of shorter shelf life of the product can be quantified, and also effects of higher storage temperatures. As an example, in Fig. 7.4 the effect of a shelf life of ten days is represented. A scenario with smoking temperature of 10 °C and shelf life of ten days gives a level of < 100 cfu·g–1. Again such a simulation should be considered critically since the initial contamination and consumer storage temperature are not exact values. Various scenarios should be run or the effects of the statistical variation of these values could be simulated.
Fig. 7.3 Evaluation of the fate of Listeria monocytogenes in smoked salmon. (The figure tabulates, for each process step (filleting, salting, smoking, cooling, freezing, packaging and storage at 5 °C for 35 days), the conditions T, pH, aw, time and PF, the specific rate ν (1/h) and the resulting log N, GC, RC and CC values, based on the cardinal parameters Tmin –0.4, Topt 33.8, Tmax 47.0 (°C), pHmin 4.4, pHopt 7.1, pHmax 9.6, awmin 0.9 and μopt 1.1 h–1.)

Fig. 7.4 Evaluation of the fate of Listeria monocytogenes in smoked salmon with a shorter shelf life. (As Fig. 7.3, but with storage for 10 days at 5 °C; the tabulated final level is about 3.4 log cfu·g–1.)

7.4.3 Inclusion of the lag phase
It should be noted that lag-time is assumed to be zero in these predictions. This assumption might be too fail-safe. On the other hand, however, sometimes the organism is already adapted to the product, resulting in no lag. In other cases it is not known where the contamination comes from, so that one cannot define the
physiological state of the contaminating organisms. It is easy to criticise such an assumption by arguing that these predictions are too fail-safe, but it is very difficult and often even dangerous to assume that lag effects determined for a specific condition are representative for other conditions. These predictions should not be considered as absolute, but it is clear that they can be useful in supporting HACCP teams, product developers, quality assurance, etc. (see for example the scenarios in Figs 7.2 and 7.4). Their use should be like the use of a spell-checker. Careful use is necessary, and they clearly can help to focus attention on the most important parts (but an expert and a dictionary might still sometimes be useful!).
7.5 Information sources
Well-known sources of information are ComBase and Sym’Previus. The advantages of ComBase (http://www.combase.cc/) are that it is freely accessible, it is very extensive and it expands in time since it is ‘cooperative’. However, one needs a well-qualified person to select the right information and interpret this. Sym’Previus (http://www.symprevius.net/) includes experiments (strain variability in laboratory media), literature data, models and product-oriented challenge tests. It is focused on ‘products’ and there are some initial contamination data (N0). In this case too the interpretation is relatively complex. However, especially in combination with experimental data and expertise, these bases are in many cases very effective in supplying a quantitative foundation for decisions. In addition, the literature, which one can search more and more easily using the Internet, with direct access, can be seen as a fast information source, just like other Internet resources. Extended risk assessments are also rich information sources, in which a lot of information is already collected and interpreted. Often the models used in extensive quantitative risk assessments are not the most complex ones but the very simple basic approaches. This is due to their transparency, ease of use, general applicability, the fact that the accuracy is good enough in comparison to the other sources of information to which they are coupled and the large variabilities and uncertainties. An example is the initial risk assessment of Enterobacter sakazakii (FAO/WHO, 2004). Models for growth and inactivation combined with probabilities of intrinsic and environmental contamination can give indications of the effects of various interventions. Although this risk assessment is rather rudimentary it might be considered commensurate with the level of knowledge of this organism. However, for example in the extended FDA/FSIS Listeria risk assessment (HHS/USDA 2003), also, the models used are not overly complex. The risk assessment itself is very extensive and therefore difficult to interpret, but in it the lag-time is assumed to be zero (since organisms are assumed to be already adapted to the product). Furthermore, the model used for the effect of temperature on growth is the Ratkowsky equation:
µ = µref [(T − Tmin)/(Tref − Tmin)]²
For various products within a product group and to include strain variability, the distribution of µref is estimated. This gives then an estimate of this value with an uncertainty which is rather large due to this product and strain variability. Therefore, it is not of great relevance to introduce more complicated models. The assumption of a lag of zero is totally acceptable in this risk assessment, since the assessment starts at retail level, so the organisms present can be assumed to have already passed their lag phase. This risk assessment is also a very good example of the strength of the approach of risk assessment, since the assessment contains a wealth of information about L. monocytogenes in ready to eat foods. Aspects of the organism, its disease characteristics, its epidemiology, its ecology, its growth and inactivation characteristics are extensively reported. Although it is difficult to digest all of the large quantity of information which is contained within the risk assessment, it is nevertheless entirely possible to extract a great deal of data to support managerial decisions.
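A sketch of this kind of secondary model, including the variability in µref, is shown below in Python. The Tmin value, the reference temperature and the lognormal distribution assumed for µref are purely illustrative choices made here; they are not the values or distributions used in the FDA/FSIS assessment.

```python
import numpy as np

def ratkowsky_mu(temp_c, mu_ref, t_min, t_ref):
    """Square-root-type secondary model: mu = mu_ref * ((T - Tmin)/(Tref - Tmin))**2."""
    if temp_c <= t_min:
        return 0.0
    return mu_ref * ((temp_c - t_min) / (t_ref - t_min)) ** 2

rng = np.random.default_rng(0)

# Hypothetical distribution of mu_ref (per hour at Tref = 5 °C) reflecting
# product-to-product and strain-to-strain variability; values are illustrative only
mu_ref_samples = rng.lognormal(mean=np.log(0.03), sigma=0.5, size=10_000)

mu_at_7c = np.array([ratkowsky_mu(7.0, m, t_min=-1.2, t_ref=5.0)
                     for m in mu_ref_samples])
print(f"Median mu at 7 °C : {np.median(mu_at_7c):.3f} per h")
print(f"95th percentile   : {np.percentile(mu_at_7c, 95):.3f} per h")
```

Propagating a distribution of µref in this way, rather than a single fitted value, is what carries the product and strain variability through to the growth predictions.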
7.6 Representativeness of models
Models will never be 100% representative for practical cases, since such a large variety of food products, food chains and organisms exists. Therefore for a specific application, one should always consider the representativeness of the model and the data it is based on. The representativeness of the culture medium, the strain variability and the method of preculturing together with the experimental error must be considered. It should be seen in particular that in (micro)biological experiments the reproduction error is generally large and there is nothing to be gained by detailing effects whose impact is small compared to this reproduction error magnitude. Furthermore, it should be noted that it is easy to state that a preculture procedure should be representative, but since it is often the case that the contamination route is not known, it is very difficult to define these conditions. Apart from this, the variability of the process and other relevant aspects should also be integrated. Therefore, it is not always necessary to go into the growth and inactivation processes in great detail, since factors other than the specific growth or inactivation process are more often the ones which determine the precision of the outcome. Moreover, it is very easy to question the representativeness of a dataset or model, but very difficult or even impossible to define conditions that are perfectly representative. It should be noted that there is a principle difference between variability and uncertainty. For example, the above mentioned lack of knowledge of the physiological state of the contaminants is uncertainty, and it prevents accurate prediction of the lag-time. On the other hand, if we were able to determine the physiological state of the contaminant or its lag phase (uncertainty reduced by increased
knowledge) it would certainly still remain variable, due to biological and process variability, for example. Therefore confidence intervals should always be carefully interpreted, since they can be based on variability, uncertainty or some mixture of the two. In general, uncertainty can be reduced by more research and variability can be reduced by better control. Variability has to be included in the risk assessment and in the expression of the risk. Large uncertainty is often the problem. (The fact that we do not know the variability is part of the uncertainty.) Variability and uncertainty are further commented on in Chapter 4.
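One common way of keeping the two concepts apart in a calculation is second-order (nested) Monte Carlo simulation: an outer loop samples the uncertain parameters and an inner loop samples the variability between units. The Python sketch below uses entirely invented distributions and is meant only to show the mechanics; it is not tied to any particular organism or product.

```python
import numpy as np

rng = np.random.default_rng(42)

n_uncertainty, n_variability = 200, 1000
limit = 2.0  # log10 cfu/g, an arbitrary threshold

fractions = []
for _ in range(n_uncertainty):
    # Uncertainty: we do not know the true mean and sd of the log-count distribution
    mu = rng.normal(0.5, 0.3)
    sigma = rng.uniform(0.3, 0.8)
    # Variability: true unit-to-unit differences given those parameters
    counts = rng.normal(mu, sigma, size=n_variability)
    fractions.append(np.mean(counts > limit))

fractions = np.array(fractions)
print(f"Fraction above limit: median {np.median(fractions):.3f}, "
      f"95% uncertainty interval ({np.percentile(fractions, 2.5):.3f}, "
      f"{np.percentile(fractions, 97.5):.3f})")
```

The inner spread reflects variability (reducible only by better control), while the spread of the outer-loop results reflects uncertainty (reducible by more research), which is exactly the distinction made above.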
7.7 Food Safety Objectives and risk assessment
Models can also be used (often in combination with other information) to confirm that a Food Safety Objective (FSO) is adhered to. The equation proposed by the International Commission on Microbiological Specifications for Foods (ICMSF, 2002) and the Codex Committee on Food Hygiene (CCFH) (Codex Alimentarius Commission, 2004):

H0 – ΣR + ΣG + ΣC < FSO

relates initial levels (H0), reduction (ΣR), growth (ΣG) and contamination (ΣC) in comparison to the FSO. Estimates of RC, GC and CC as in Figs 7.1–7.4 can be used to determine the sum over all stages (ΣR, ΣG, ΣC). The first example (S. aureus in a sandwich spread, Fig. 7.1) also shows that this equation should be used with caution because by simply adding all RC and GC values one might come to unrealistic predictions, since in this example the reduction is so large that no survivors will be found, so, even if large growth is potentially possible, the product is still safe. The prediction of large growth in the last stage in Fig. 7.1 shows, however, that if recontamination occurs, large growth can make the product unsafe. Another concern about the ICMSF/CCFH equation is that it relates variable quantities (the left side of the equation) with a threshold value (the FSO). H0, ΣR, ΣG and ΣC will all be variable and therefore, in principle, they should be represented by probability distributions. Using their mean values in a deterministic approach may be appropriate, but it may also lead to incorrect interpretations. If it is used to find a critical value of H0, the equation compares two thresholds, and the variability of this parameter is not problematic for the interpretation of the equation. For the other parameters, the variability may be very small, as in a well-controlled process, and in that case the equation is applicable to test adherence to the FSO. However, if growth, reduction and contamination are variable, the interpretation of the equation is more demanding, because it relates a sum of probability distributions with a critical value. Using means or most likely values of the different model parameters will not be meaningful because it ignores the fact that larger risks are commonly associated with the tails of the probability distributions. In theory, an FSO is set in order to reach a certain appropriate level of protection (ALOP). In Fig. 7.5 it is clearly seen that models for growth (G) and reduction (R)
do play a role; however, they should be combined with initial level (N0) and recontamination (C). For the final risk estimate the amount of food product consumed per serving, the dose–response parameters and the number of consumed product units (n) also need to be known. Overall these analyses can only be made with large uncertainty and so the absolute value of risk is generally imprecise; however, relative effects can be estimated more precisely.

Fig. 7.5 Overall analysis of the connection between exposure assessment, dose–response, risk characterisation and risk management. Nt is the final concentration of microorganisms originating from the initial number (N0) and/or recontamination (C), followed by potential growth (G) or reduction (R). For a given serving size, this results in a dose (D), which, along with the dose–response relation, gives a risk per serving (RpS). With the total number of food units consumed (n) this gives the final probability (P), which can be compared to an appropriate level of protection (ALOP).
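The point about distributions versus means can be made concrete with a small Monte Carlo sketch of the ICMSF/CCFH equation. The Python example below uses invented normal distributions (in log10 units) for H0, ΣR, ΣG and ΣC and an arbitrary FSO of 2 log10 cfu/g; adding the terms on a log scale simply follows the structure of the equation itself and is, of course, a simplification.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000          # simulated product units
fso = 2.0            # arbitrary illustrative FSO, log10 cfu/g

# Invented distributions (log10 units) for the terms of the equation
h0 = rng.normal(1.0, 0.5, n)          # initial level H0
reduction = rng.normal(5.0, 0.5, n)   # sum of reductions
growth = rng.normal(2.0, 1.5, n)      # sum of growth
recontam = rng.normal(0.0, 0.3, n)    # sum of (re)contamination

final_level = h0 - reduction + growth + recontam

print(f"Mean final level: {final_level.mean():.2f} log10 cfu/g")
print(f"Fraction of units exceeding the FSO: {np.mean(final_level > fso):.4f}")
```

With these invented inputs the mean final level is about –2 log10 cfu/g, comfortably below the FSO, yet roughly 1% of the simulated units still exceed it, which is exactly the tail effect that a purely deterministic use of the equation hides.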
7.8 Examples of structured approaches of risk assessment
7.8.1 SIEFE
One relevant part of both HACCP and QRA is the identification of hazards, and this is often based on experience and qualitative reasoning. This experience can be very well structured in expert systems and can even be improved by combining it with quantitative measures (e.g. selecting potential growth of organisms based on their growth range, including the minimum and maximum growth temperatures, etc.). The advantages of structured and transparent systems are that the same response results if the same question is asked (not depending on place, time,
person), in combination with the speed of answer and continuous availability. Furthermore, the system is not static but can be updated if new information becomes available. An example of a system for the identification of hazards was described by van Gerwen et al. (1997). This system is based on a combination of quantitative databases containing parameters of organisms and food products (pattern matching), in combination with well-defined qualitative knowledge rules. The hazard identification was set up in a step-wise manner so that the most relevant hazards were first identified, after which other hazards were presented, and finally potential hazards. This hazard identification was implemented in a risk assessment system with the same step-wise approach (the SIEFE model, by van Gerwen et al., 2000). The approach begins with rough risk estimations using simple models and data in order to identify the most important phenomena. The latter are subsequently described by more sophisticated models and data to improve the predictions. Finally in a last stage stochastic aspects can be included. The advantages of the tiered approach are that one does not get lost in detail in all aspects and that one can better target the efforts of both models and data collection.
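The pattern-matching idea can be illustrated with a toy rule base in Python. The growth limits below are rough, commonly quoted ranges included purely for illustration; they are not the knowledge rules of the van Gerwen et al. (1997) system, and a real system would hold far more organisms, factors and nuance.

```python
# Hypothetical mini knowledge base of growth limits (illustrative, not authoritative)
organisms = {
    "Listeria monocytogenes": {"t": (-0.4, 45.0), "ph": (4.4, 9.4), "aw_min": 0.92},
    "Staphylococcus aureus": {"t": (7.0, 48.0), "ph": (4.0, 10.0), "aw_min": 0.83},
    "Clostridium botulinum (proteolytic)": {"t": (10.0, 48.0), "ph": (4.6, 9.0), "aw_min": 0.94},
}

def potential_hazards(product):
    """Flag organisms whose growth range matches the product conditions."""
    hits = []
    for name, limits in organisms.items():
        t_ok = limits["t"][0] <= product["storage_temp"] <= limits["t"][1]
        ph_ok = limits["ph"][0] <= product["ph"] <= limits["ph"][1]
        aw_ok = product["aw"] >= limits["aw_min"]
        if t_ok and ph_ok and aw_ok:
            hits.append(name)
    return hits

chilled_product = {"storage_temp": 7.0, "ph": 6.2, "aw": 0.97}
print(potential_hazards(chilled_product))
# Prints ['Listeria monocytogenes', 'Staphylococcus aureus'] with these illustrative limits
```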
7.8.2 Modular process risk model
In a QMRA model the transmission of a pathogen through the food chain is modelled and simulated to explore the dynamics behind the present exposure. The complexity of a quantitative food chain model often increases with the length of the food chain. The modular process risk model (MPRM) methodology (Nauta, 2002; Lindqvist et al., 2002) offers a structural approach to deal with this complexity. Basically, MPRM links different models as discussed in Section 7.3. Structuring occurs by setting up a description of the food chain and splitting it up into a series of modules. Each of the processes represented in one module can be identified as one of six basic processes. The difference in input and output of these modules describes the change in unit or unit size of the food product, the change in prevalence (percentage of contaminated units) and the change in the distribution of concentrations of the pathogen in the contaminated units. The qualitative effects of the six basic processes on these variables are given in Table 7.1. Two of the basic processes, growth and inactivation, are typical microbial processes, which lead to an increase or decrease in the concentration. Traditionally, there is a variety of predictive models available for these processes. The other four basic processes, partitioning, mixing, removal and cross-contamination, can be categorized as food handling processes. Here, the changes in unit size, prevalence and concentration are typically the consequence of modifications of, or interactions between, the food units. Models for these processes are less well developed, although the availability of such models is increasing (e.g. Nauta, 2005). Note that these basic processes cover growth (G), reduction (R) and recontamination (C) as discussed in Section 7.7.
Table 7.1 Basic processes of the MPRM and their qualitative effect on the prevalence (P, the fraction of contaminated units), the total number of organisms in the system (Ntot, the cells over all units evaluated in one simulation run of the model) and the unit size. ‘=’: no effect; ‘+’: an increase; ‘–’: a decrease. Depending on the precise definitions of the processes, removal may also give a decrease in unit size and cross-contamination may also give an increase in Ntot

Process                Effect on P    Effect on Ntot    Effect on unit size
Growth                 =              +                 =
Inactivation           –              –                 =
Mixing                 +              =                 +
Partitioning           –              =                 –
Removal                –              –                 =
Cross-contamination    +              =                 =
An important aspect of the MPRM methodology is that the QMRA does not start with a collection of the available data, because this may be time-consuming and not very efficient. Instead, it is suggested that this step should be preceded by a proper definition of the statement of purpose (including the food product and the hazard to be considered), a description of the food pathway and the construction of the MPRM structure as a series of modules. With this approach, data collection will be focused on the needs of the food chain risk assessment. Based on the available data, one can elaborate on the construction of basic process models, including the details that are essential for the analysis and for which the required data are available. If such data are missing alternative modelling methods may be explored. In that way data collection, basic process modelling and data implementation become an iterative process, which ends with the actual exposure assessment. In this approach, basic process modelling is the stage where microbiological modelling comes in. Predictive models, as available from the literature or other information sources (Section 7.5), may be applied. However, the purpose of modelling should be carefully considered here. Many predictive models are deterministic, aiming to give best estimates or worst case predictions of the concentration of microorganisms as a function of time. In risk assessment, the purpose of modelling is often to estimate the probability distribution describing the variability in the concentration after a fixed amount of time, or the probability of exceeding a threshold value of the concentration (Nauta, 2002). Therefore, QMRA may demand a class of predictive models different from those most widely available.
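A minimal sketch of the modular idea is given below in Python: two basic-process modules (partitioning and growth) are chained, and their effects on prevalence and counts can be compared with the qualitative entries of Table 7.1. The module functions, prevalence and cell numbers are all invented for illustration and are far simpler than the models discussed by Nauta (2005).

```python
import numpy as np

rng = np.random.default_rng(3)
n_units = 10_000

# Incoming bulk units: 20% prevalence; contaminated units carry on average
# 10 cfu each. All numbers are illustrative, not taken from a real assessment.
contaminated = rng.random(n_units) < 0.20
cells = np.where(contaminated, rng.poisson(10, n_units), 0)

def partitioning_module(cells, portions):
    """Partitioning: keep one portion per unit; cells are divided binomially,
    so prevalence can decrease and unit size decreases (cf. Table 7.1)."""
    return rng.binomial(cells, 1.0 / portions)

def growth_module(cells, log_increase):
    """Growth: total numbers increase, prevalence and unit size unchanged
    (cf. Table 7.1)."""
    return np.round(cells * 10.0 ** log_increase).astype(int)

cells = partitioning_module(cells, portions=10)  # e.g. a bulk unit cut into 10 portions
cells = growth_module(cells, log_increase=2.0)   # e.g. 2 log10 growth during storage

print(f"Prevalence after the two modules: {np.mean(cells > 0):.3f}")
print(f"Mean count in contaminated portions: {cells[cells > 0].mean():.0f} cfu")
```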
7.9 Conclusions
The particular strength of quantitative microbiological tools lies in their potential to determine the relative effect of interventions and to show the best decisions based on current understanding. This can be achieved by either relatively simple risk assessments or very extensive ones, depending on the questions that need to be answered and commensurate with the amount of information present on the
problem under study. In both cases results should be scrutinized, investigating the models, assumptions and data used. So, although model predictions should not be considered absolute and decisions should not be based solely on modelling, these tools can fulfil an important role in QRA, QA and HACCP. Although some people say that ‘all models are wrong but some are useful’, we are more optimistic and would propose ‘all models are correct, but none are perfect’. Due to iterative improvements, understanding increases and predictions get better over time.
7.10 References Bigelow W (1921) The logarithmic nature of thermal death curves, J Infect Dis, 29, 528–538. Codex Alimentarius Commission (2004) ALINORM 04/27/13, Rome, CAC. Den Aantrekker ED, Beumer RR, van Gerwen SJC, Zwietering MH, van Schothorst M and Boom RM (2003) Estimating the probability of recontamination via the air using Monte Carlo simulations, Int J Food Microbiol, 87, 1–15. FAO/WHO (2004) Enterobacter sakazakii and other microorganisms in powdered infant formula. Microbiological risk assessment series no. 6, Rome, FAO/WHO. HHS/USDA (2003) Quantitative assessment of relative risk to public health from foodborne Listeria monocytogenes among selected categories of ready-to-eat foods. http:// www.foodsafety.gov/~dms/lmr2-toc.html. ICMSF (2002) International Commission on Microbiological Specifications for Foods. Microorganisms in Foods 7. Microbiological Testing in Food Safety Management, New York, Kluwer Academic/Plenum Publishers. Lindqvist R, Sylven S, and Vagsholm I (2002) Quantitative microbial risk assessment exemplified by Staphylococcus aureus in unripened cheese made from raw milk, Int J Food Microbiol, 78, 155–170. McMeekin TA, Olley JN, Ross T and Ratkowsky DA (1993) Predictive Microbiology: Theory and Application, New York, John Wiley and Sons. Nauta MJ (2002) Modelling bacterial growth in quantitative microbiological risk assessment: is it possible? Int J Food Microbiol, 73, 297–304. Nauta MJ (2005) Microbiological risk assessment models for partitioning and mixing during food handling, Int J Food Microbiol, 100, 311–322. van Gerwen S, de Wit JC, Notermans S, Zwietering MH (1997) An identification procedure for foodborne microbial hazards, Int J Food Microbiol, 38, 1–15. van Gerwen SJC, te Giffel MC, van’t Riet K, Beumer RR, Zwietering MH (2000) Stepwise quantitative risk assessments as a tool for characterisation of microbiological food safety, J Appl Microbiol, 88, 938–951. Van Gerwen SJC and Zwietering MH (1998) Growth and inactivation models to be used in quantitative risk assessments, J Food Prot, 61, 1541–1549. Whyte W (1986) Sterility assurance and models for assessing airborne bacterial contamination, J Parenter Sci Technol, 40, 188–197.
Part II New approaches to microbial modelling in specific areas of predictive microbiology
8 The non-linear kinetics of microbial inactivation and growth in foods
M. G. Corradini and M. Peleg, University of Massachusetts, USA
8.1
Introduction
Microbial inactivation by heat and other means has been a key operation in the food and pharmaceutical industries. Therefore, the kinetics of microbial survival in a hostile environment has received extensive attention in industrial and general microbiology. The foundations of the field were laid at the beginning of the twentieth century. This was a time when there was a premium on linear models and linearization methods because of the limited means to handle elaborate algebraic models and solve differential equations. However, with the dramatic advances in computer power and the growing availability of user-friendly mathematical and statistical software, problems that were insoluble only a few decades ago can now be tackled with relative ease using these modern tools. The mathematical methods now available also allow us to view the old problems in a new light and to develop a non-traditional approach to their solution. This chapter will present an example of how microbial inactivation can be modeled without relying on the validity of the conventional assumption of first-order mortality kinetics. Growth is another important aspect of microbial safety, and it too has received considerable attention in the field of food research and its literature. Although inactivation and growth are usually treated as totally separate topics, their modeling has some common mathematical features. What we will attempt to show is how the methods that have been originally developed to predict microbial inactivation patterns can be implemented with only minor modifications to predict microbial growth, at least in a certain class of systems. Here, however, and in contrast with
microbial inactivation, the concept that growth curves should be described by rate models that incorporate the population’s momentary state is fully developed. Therefore we will demonstrate only an alternative way to choose the primary and secondary models, based on the notion that the parts of the isothermal growth curve are not necessarily interrelated in a manner commonly assumed. In a sense, this chapter is a summary of concepts developed at the University of Massachusetts at Amherst since the late 1990s, and no attempt has been made to discuss others’ works in the field at any depth. These are cited in the original publications on which this chapter is based and thoroughly discussed in other chapters of this book.
8.2
The traditional primary models of inactivation and growth
8.2.1 First-order inactivation kinetics
To date, the basis of sterility and microbial safety calculation in the food and pharmaceutical industries has been the assumption that microbial mortality and sporal inactivation follow a first-order kinetics. This entails that the isothermal survival curves of the targeted cells or spores all obey the relationship:

dN(t)/dt = –k(T) N(t)                                                [8.1]

or, for isothermal inactivation:

N(t) = N0 exp[–k(T)t]                                                [8.2]

where N(t) and N0 are the momentary and initial number of organisms or spores, respectively, and k(T) is the temperature-dependent inactivation's exponential 'rate constant'. Or, if the survival ratio S(t) is defined as N(t)/N0, then:

loge S(t) = –k(T)t                                                   [8.3]
According to this model, a semi-logarithmic plot of the isothermal survival curve will always be a straight line with a slope of – k(T). The time to reduce the population by 1 log cycle (base ten), known as the ‘D-value’, is loge10/k(T). Given that an exponential decay has no zero, ‘commercial sterility’ was once defined as the state where a hypothetical sporal population of Clostridium botulinum is reduced by 12 orders of magnitude or ‘12 D’. However, since for technical reasons one can rarely determine such a high reduction level experimentally – in most cases eight orders of magnitude is the limit set by the detection level – the ‘thermal death time’ has been calculated by extrapolation, using the log-linear kinetics [equation (8.3)] as a model. This procedure carries the risk that, even if the isothermal survival curve were linear in the experimental range, its continuation might still be curved. Obviously, the first-order kinetics model would be inappropriate whenever the isothermal semi-logarithmic survival curves of the organism in question are clearly non-linear.
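As a minimal numerical illustration of these relationships (the rate constant below is an invented value, not data from this chapter), equations (8.2) and (8.3) and the resulting 'D-value' can be evaluated directly:

```python
import numpy as np

# First-order (log-linear) isothermal survival, equations (8.2) and (8.3).
k = 0.8                            # min^-1, hypothetical rate constant k(T)
D = np.log(10.0) / k               # time for a 1-log (base ten) reduction, the 'D-value'

t = np.linspace(0.0, 12.0 * D, 7)  # times up to a '12 D' treatment
log10_S = -k * t / np.log(10.0)    # equation (8.3) expressed in log10 units

for ti, s in zip(t, log10_S):
    print(f"t = {ti:6.2f} min   log10 S = {s:7.2f}")
# The plot of log10 S vs t is a straight line of slope -1/D; much of this
# chapter questions how generally this holds.
```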
Such non-linearity is now being observed in a growing number of microorganisms and spores (e.g. Anderson et al., 1996; Augustin et al., 1998; Peleg and Cole, 1998; Fernández et al., 1999; van Boekel, 2002), and it has been described by a variety of alternative models (see below). This observation should not come as a surprise, because there is no reason for microbial inactivation to follow a first-order kinetics. The mortality pattern expressed in equation (8.1) is analogous to that of radioactive decay (Peleg et al., 2004), that is, the probability of a lethal event (at a given temperature) is time-independent. If true, then factors like damage accumulation in the survivors or the pre-existence of variations within the heat-treated population (see below) play no role in the outcome of the inactivation process. No wonder, therefore, that in an extensive survey covering over 120 reported survival curves, van Boekel (2002) found that fewer than 5% of those examined could be considered log-linear. This led him to the conclusion that log-linear survival curves are the exception rather than the rule. The findings of van Boekel and others clearly suggest that a revision of the currently held assumptions regarding microbial inactivation kinetics is both needed and timely.
8.2.2 Logistic growth
Isothermal growth in a closed system, limited by the availability of material resources only, should be described by the classic continuous logistic equation, based on the notion that the momentary growth rate is proportional to the momentary population size and the portion of the unexploited resources in the habitat, i.e.:

dN(t)/dt = r(T) N(t) [1 – N(t)/Nasymp(T)]                            [8.4]
where r(T) is a temperature-dependent rate constant and Nasymp(T), frequently dubbed Nmax, is the asymptotic number or density of the organism that the habitat can support. The model's equation implies that this asymptotic growth level will remain indefinitely, which is of course very unlikely. In fact, in the long run the population's size must decline, not only due to material shortage but also because of the habitat's pollution by released metabolites. However, since the inevitable decline might occur only after the experiment had ended, using the logistic model for describing the sigmoid part of the growth curve can still be permissible. The same can be said about the more elaborate models, which are derived from the logistic equation (see below). Certain versions of the logistic model [some based on the logistic function rather than on equation (8.4)] entail that the isothermal growth curve has a typical symmetric sigmoid shape with three distinct regions: the 'initial lag phase', followed by an 'exponential growth stage' (which has an inflection point at its center) and a 'stationary phase', where the population size remains practically unchanged at about the Nasymp level. When Nasymp and the initial number N0 are determined experimentally, equation (8.4) has only one adjustable parameter, namely r(T).
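A short sketch of how equation (8.4) can be integrated numerically; the rate constant, asymptote and inoculum are hypothetical values chosen only to display the sigmoid shape:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Continuous logistic growth, equation (8.4), at a fixed temperature.
r = 0.05         # h^-1, hypothetical rate constant r(T)
N_asymp = 1e9    # CFU/ml, hypothetical carrying capacity Nasymp(T)
N0 = 1e3         # CFU/ml, hypothetical inoculum

def logistic_rate(t, N):
    return r * N * (1.0 - N / N_asymp)

sol = solve_ivp(logistic_rate, (0.0, 400.0), [N0], dense_output=True)
for t in (0, 100, 200, 300, 400):
    print(f"t = {t:3d} h   N = {sol.sol(t)[0]:12.4e} CFU/ml")
# With N0 and Nasymp fixed from the data, r(T) is indeed the model's only
# adjustable parameter, as noted above.
```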
Obviously, one cannot describe all conceivable isothermal sigmoid growth patterns by a single rate constant. Hence, more elaborate models have been proposed over the years to describe experimental growth curves. Many of these can be viewed as modifications of the logistic equation through added terms whose purpose has been not only to improve the model’s fit (e.g. Fujikawa et al., 2004) but also to account for the physiological state of the inoculum. This was done by the inclusion of parameters such as the ‘lag-time’, ‘maximum specific growth rate’ in addition to the already mentioned ‘Nmax’. The most prominent model among these is the one proposed by Baranyi–Roberts (1994), known as the Baranyi and Roberts model (McKellar and Lu, 2004):
dN(t)/dt = µmax [q(t)/(q(t) + 1)] [1 – (N(t)/Nmax)^m] N(t)           [8.5]
where m is a constant, q(t)/[q(t) + 1], defined by dq(t)/dt = µmax q(t), is a term intended to account for the physiological state of the introduced inoculum and µmax the ‘maximum specific growth rate’. A simplified version of the model has been proposed by McKellar (McKellar and Lu, 2004). What is common to both versions is the implication that there exists a universal relation between the ‘lag-time’, λ, the ‘maximum growth rate’, µmax, and the asymptotic growth level, ‘Nmax’. The existence of such a universal relationship, however, is yet to be confirmed by independent experimental evidence. A simplified model of sigmoid growth curves has been proposed by Buchanan et al. (1997), composed of a horizontal ‘lag phase’, followed by a log-linear growth segment and a horizontal ‘stationary phase’. The model implies that the isothermal growth curve must be symmetric, i.e. that the growth acceleration and deceleration mirror one another. A version of the Gompertz model, in which a ‘lag-time’ had been incorporated, has also been used to describe sigmoid curves (McMeekin et al., 1993; McKellar and Lu, 2004). However, while all the above and a few other models provide a similar fit to experimental growth data, they do not necessarily yield the same growth parameters. Thus, when the ‘lag-time’ of the same Listeria, derived from the same experimental dataset, was calculated with different growth models, the results varied between 45 and 68 days (McKellar and Lu, 2004). [This problem could be eliminated if the ‘lag-time’ were defined as the time to double, triple or multiply by any other factor, an old idea that has been rarely if ever implemented. If defined in such a manner, then every model with a good fit will yield a similar estimate of the ‘lag-time’ (Peleg, 2006)]. In light of the above, none of the currently used growth models can be considered as unique, and alternative models can be just as effective.
8.3
Traditional secondary models
Models that describe isothermal growth and inactivation patterns are considered ‘primary’ because they are derived directly from the experimental data. The
models that describe their parameters’ temperature-dependence are known as ‘secondary models’ because they deal with entities that have been themselves derived from mathematical models rather than directly from the original data. The same applies to the concentration or pressure-dependence of the survival parameters, but these should not be of concern here. Since isothermal survival and growth parameters can also depend on factors such as pH, water activity, osmotic pressure, salts concentration and the like, they too should be considered in the model’s equation to account for their influence on the organism’s inactivation or growth. However, we’ll limit our discussion to the role of temperature only, assuming that all other factors that can affect the observed survival or growth pattern remain practically unchanged on the pertinent time scale.
8.3.1 The log-linear model and the 'z-value'
It has been traditionally assumed that the temperature-dependence of the 'D-value' is log-linear. Hence, a plot of the 'D-value' vs temperature on semi-logarithmic coordinates will be a straight line. The temperature span, in degrees C or F, which reduces (or extends) the 'D-value' by a factor of ten is called the 'z-value'. According to this concept, knowing the 'z-value' and the 'D-value' at any given temperature would be sufficient to construct every isothermal survival curve of the organism or spore in question. Obviously, any inactivation model based on these 'values' will be valid if, and only if, all the isothermal survival curves and the temperature-dependence of the resulting 'D-values' are both log-linear. Since even the first condition is seldom satisfied (see Section 8.2.1), the general applicability of this model should be reconsidered. However, let us assume that the inactivation patterns of selected bacterial organisms or spores can be approximated by the first-order kinetics. The question that arises is whether the log-linear temperature-dependence of D can be a valid secondary model from a theoretical viewpoint. One of the implications of such a model is that there is no qualitative difference between the temperature effect at high and low temperatures. One can claim, of course, that according to this model the inactivation rate at low temperatures is so small that it would be negligible for all practical purposes, which is true. Nevertheless, if the model faithfully captures the inactivation kinetics, it should include a marker of the temperature range where the inactivation mode changes.
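For concreteness, the traditional secondary model described above can be written as D(T) = D_ref·10^((T_ref – T)/z). A minimal sketch, with invented reference values, of what it implies:

```python
# Traditional log-linear temperature-dependence of the 'D-value'
# (all numbers are hypothetical reference values, not data from this chapter).
D_ref = 0.2      # min, 'D-value' at the reference temperature
T_ref = 121.1    # deg C, reference temperature
z = 10.0         # deg C, the 'z-value'

def D_value(T):
    return D_ref * 10.0 ** ((T_ref - T) / z)

for T in (101.1, 111.1, 121.1, 131.1):
    print(f"T = {T:6.1f} C   D = {D_value(T):10.4f} min")
# Every drop of one 'z-value' multiplies D by ten; the model assigns a finite,
# non-zero lethal rate to every temperature, which is the conceptual point
# questioned in the paragraph above.
```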
8.3.2 The Arrhenius and Williams–Landel–Ferry (WLF) equations
The Arrhenius equation has frequently been used as an alternative to the log-linear model (see Section 8.3.1) whenever the latter clearly does not fit the temperature-dependence of the 'D-values'. The Arrhenius model was originally derived for simple chemical reactions and it can be written in this form:

loge(k1/k2) = (E/R)(1/T2 – 1/T1)                                     [8.6]
where k1 and k2 are the rate constants at absolute temperatures T1 and T2, E is the ‘energy of activation’ and R the universal gas constant. (The units of E are energy per mole and of R energy per mole per degrees K.) Equation (8.6) entails that a plot of loge k(T) vs 1/T is a straight line, from the slope of which the ‘energy of activation’ can be calculated. Such plots have indeed been used to calculate the ‘energy of activation’ of microbial inactivation, and the results are reported in numerous publications and listed on the web. However, the application of the Arrhenius equation to either microbial growth or inactivation has serious conceptual problems (Peleg et al., 2004; Peleg, 2006). As the ‘D-value’ temperature-dependence model that it is intended to replace, the Arrhenius model too requires that the organism’s or spore’s isothermal survival curves should be all log-linear, a condition that is rarely satisfied as has already been stated. And, in addition, the transformation of the temperature in degrees C to the absolute temperature reciprocal in K–1 unnecessarily ‘compresses’ the temperature scale. Moreover, even if the rate constant, k, were well-defined, its magnitude would rarely cover a span of several orders of magnitude to justify the logarithmic transformation. More seriously, the Arrhenius model, like the log-linear model that has produced the ‘z-value’, entails that the inactivation or growth rate is only temperature-dependent. Hence, if the model is correct, then the rate must be totally unaffected by the population’s thermal history, which rarely if ever can be the case (Peleg et al., 2004; Peleg, 2006). A critical assessment of the Arrhenius model from another viewpoint can also be found in McMeekin et al. (1993). Much of the above also pertains to the WLF equation, which has been proposed to replace the Arrhenius equation wherever the logek vs 1/T plot has been clearly curvilinear – see Peleg (1996a). The most popular form of the WLF equation adapted for microbial inactivation is:
log10[k(T)/kg] = c1(T – Tg)/(c2 + T – Tg)                            [8.7]
where kg is the inactivation rate constant at the ‘glass transition temperature’, ‘ Tg’ and c1 and c2 are constants. The implied assumption here is that the temperature effect on spores’ inactivation and on the viscosity of synthetic polymers is governed by the same rules, again an analogy that is very difficult to explain. Moreover, since ‘Tg’ depends on the method of its determination and the difference can be tens of degrees C (e.g. Seyler, 1994; Donth, 2001), the theoretical foundations of the model itself rest on very shaky grounds. Of course, ‘Tg’ in the WLF model can be replaced by any other convenient reference temperature. However, this, if done, will render the equation an ordinary two-parameter empirical model, which like the Arrhenius and log-linear model entails that the temperature effect on the momentary inactivation rate is unaffected by the populations’ thermal history and hence its momentary state.
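As an illustration of equation (8.6), the 'energy of activation' implied by two rate constants measured at two temperatures can be back-calculated; the numbers below are invented for the sketch:

```python
import numpy as np

# Back-calculating the Arrhenius 'energy of activation' from equation (8.6).
R = 8.314                 # J mol^-1 K^-1, universal gas constant
k1, T1 = 0.10, 333.15     # hypothetical rate constant (min^-1) at 60 C
k2, T2 = 0.80, 343.15     # hypothetical rate constant (min^-1) at 70 C

# loge(k1/k2) = (E/R) * (1/T2 - 1/T1), solved for E:
E = R * np.log(k1 / k2) / (1.0 / T2 - 1.0 / T1)
print(f"E = {E / 1000.0:.0f} kJ/mol")
# Note how converting the 10 C interval to reciprocal absolute temperatures
# compresses it to about 8.7e-5 K^-1, one of the objections raised in the text.
```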
8.3.3 Non-linear isothermal inactivation models Curvilinear semi-logarithmic isothermal survival curves have been reported by
numerous authors. They have been described by a variety of mathematical models and explained in different ways. Most prominent among these is the notion that a non-linear semi-logarithmic survival curve is evidence of the existence of a mixed population (the 'biphasic model') with two characteristic first-order kinetics rate constants. However, other models, from purely empirical to those that invoke the concept that there is a 'lag-time of inactivation', have also been proposed. A survey and discussion of the non-linear inactivation models can be found in Geeraerd et al. (2004). An altogether different approach is to consider the survival curve as a manifestation of an underlying distribution of inactivation times. This has been proposed several times (e.g. Casolari, 1988; Anderson et al., 1996; Augustin et al., 1998) but has only recently received more widespread attention in the food microbiology community, with the Weibull distribution leading the way (Peleg and Cole, 1998; Mafart et al., 2002; van Boekel, 2002).

8.3.4 The Weibullian model
A survival curve, like a dose–response curve, is by definition the cumulative form of the resistances distribution within the population, measured in terms of the time or dose at which an individual cell or spore is inactivated (Peleg et al., 1997, 2001; Peleg and Cole, 1998; Peleg, 2002, 2006). However, since the slope of a survival curve has rate units, kinetics is the other side of the same coin. Excluding injury and recovery, the survival curve of an individual cell or spore is a step function (Fig. 8.1). However, since in a microbial population the inactivation times (the tc's shown in Fig. 8.1) have a distribution, the survival curve usually has a considerable span (Peleg et al., 2001) – see Fig. 8.1. Since death or inactivation can be considered a failure phenomenon (van Boekel, 2002; Corradini and Peleg, 2004a), a Weibullian distribution of the inactivation time is expected to be quite common, as it has been observed in many unrelated processes that involve breakup and disintegration. Moreover, the 'shape factor' of the distribution (see below) is expected to be hardly influenced by temperature (van Boekel, 2002), and this can indeed be observed in the isothermal inactivation patterns of a variety of microbial cells and spores (Fernández et al., 1999; van Boekel, 2002; Corradini and Peleg, 2004b; Periago et al., 2004). In handbooks of statistics and mathematical software, the common presentation of the cumulative Weibull distribution of a random variable x is in the form:

F(x) = exp[–(x/α)^β]                                                 [8.8]

where α and β are the 'location' and 'shape' factors, respectively. If they are to be truly representative of the actual distribution, these factors should be determined experimentally from the original survival ratios and not from their plots after a logarithmic transformation, as is sometimes reported. However, since most microbial survival curves cover several orders of magnitude of reduction, and since the difference between a survival ratio of, say, 10^–3 and 10^–6 can have serious safety implications, the logarithmic transformation of the survival curve is both convenient and necessary.
Fig. 8.1 Schematic view of the survival curve of an individual microbial cell or spore and that of a population, whose members’ heat resistances expressed as the time to their inactivation has a Weibull distribution with a shape factor, bigger or smaller than one. (Notice that the traditional ‘first-order kinetics’ model is just a special case of the Weibull distribution with n = 1.)
Consequently, our preference is to present the Weibullian model in the form:

log10 S(t) = –b(T) t^n(T)                                            [8.9]

where b(T) is a temperature-dependent 'rate parameter' whose reciprocal is related to the distribution's location factor and n(T) is an empirical power related to the distribution's shape factor. When n(T) ≈ constant = n, the model becomes:

log10 S(t) = –b(T) t^n                                               [8.10]

(The form of equation (8.9) or (8.10) is similar to that used by Rosin and Rammler, who proposed this distribution function for particulate disintegration in 1933, three years before Weibull's celebrated 1936 paper, which gave the distribution its more widely known name.) All the above pertains only to populations and timescales where defense mechanisms cannot come into play and where the log10 S(t) vs t relationship is independent of the initial number of cells or spores treated.

8.3.5 Interpretation of the semi-logarithmic survival curve's concavity
A schematic view of the typical shapes of Weibullian survival curves is shown in Fig. 8.1.
Fig. 8.2 Schematic view of the temperature dependence of the Weibullian model’s rate parameter, b(T), described by the log logistic equation. Notice that increased heat resistance is manifested in a higher Tc and/or a smaller k.
They were generated with n < 1, for survival curves having upper concavity or showing 'tailing', and n > 1, for curves having downward concavity. The log-linear relationship is just a special case of the Weibullian model with n = 1. Upper concavity (n < 1) can be interpreted as a manifestation of the rapid elimination of the weak members of the population, leaving survivors with increased heat resistance. Downward concavity (n > 1) is most probably evidence that accumulated damage sensitizes the survivors. Thus, after a continued exposure, it takes a progressively shorter time to eliminate the same portion of cells or spores.
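A sketch of how the Weibullian primary model, equation (8.10), can be fitted to an isothermal survival data set with a generic non-linear least-squares routine (SciPy here; this is not the software used by the authors, and the 'data' below are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibullian primary model, equation (8.10): log10 S(t) = -b * t**n
def weibullian(t, b, n):
    return -b * t ** n

# Synthetic isothermal 'data' with upward concavity (n < 1) plus a little noise.
rng = np.random.default_rng(1)
t_data = np.linspace(0.5, 10.0, 12)
log10_S_data = weibullian(t_data, 0.9, 0.6) + rng.normal(0.0, 0.05, t_data.size)

(b_hat, n_hat), _ = curve_fit(weibullian, t_data, log10_S_data, p0=(1.0, 1.0))
print(f"fitted b = {b_hat:.3f}   fitted n = {n_hat:.3f}")
# n < 1 corresponds to upward concavity ('tailing'), n > 1 to downward
# concavity, and n = 1 recovers the traditional log-linear special case.
```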
8.3.6 The temperature-dependence of b(T)
Regardless of whether the power n(T) is treated as a constant or a function of temperature, the b(T) vs T relationship can be frequently described by the linear or non-linear log logistic function, see Fig. 8.2, that is:

b(T) = loge{1 + exp[k(T – Tc)]}                                      [8.11]

or

b(T) = loge{1 + exp[k(T – Tc)]}^m                                    [8.12]
Table 8.1 Weibullian–log logistic survival inactivation parameters of four microorganisms

Organism                        Medium   n (–)      k (C^–1)    Tc (C)    Reference                      Experimental data source
E. coli (cells)                 Broth    1.5        0.88        60.5      Corradini and Peleg (2004b)    Valdramidis et al. (2004)
Salmonella (cells)              Broth    0.7        0.17        63.0      Peleg and Normand (2004)       Mattick et al. (2001)
C. botulinum (spores)           Broth    0.35       0.31        102.5     Corradini et al. (2005)        Anderson et al. (1996)
B. sporothermodurans (spores)   Soups    0.7–0.8    0.24–0.44   121–123   Periago et al. (2004)          Periago et al. (2004)
respectively, where Tc marks the temperature range at which lethality intensifies, k is a constant representing the steepness of the b(T)’s rise at temperatures well above Tc, and m is a constant (m > 1). In most of the cases examined to date, equation (8.11) provides an excellent description of the temperature-dependence of b(T) and hence the additional constant, m, has been unnecessary (Campanella and Peleg, 2001; Corradini and Peleg, 2004b; Periago et al., 2004). The reader will notice that this secondary model implies that at T << Tc, b(T) ≈ 0, but at T >> Tc , b(T) = k(T – Tc). It is therefore consistent with the notion that there is a qualitative difference between the temperature effect on microbial inactivation at high and low temperatures. The almost perfect fit of equation ( 8.11) has been demonstrated in several publications. Examples of the parameters are presented in Table 8.1. Notice that, in contrast with the traditional log-linear and Arrhenius (or WLF) models, the isothermal inactivation patterns (at different temperatures) are described by three instead of two survival parameters. Also notice the difference in Tc values between the extremely resistant spores of Bacillus sporothermodurans and C. botulinum and between the latter’s Tc and that of the vegetative cells. Because equation (8.9) [or equation (8.10)] accounts for the fact that the logarithmic inactivation rate is time-dependent, the use of the combined Weibullian– log logistic model, wherever it fits the data, eliminates the inconsistencies of the traditional models that stem from the requirement that the inactivation rate is determined by the temperature only and of course other conditions like pH, water activity, salt concentration and the like. Moreover, since the combined Weibullian– log logistic model is derived directly from the experimentally observed isothermal inactivation patterns of the targeted organism or spores, the need for a preconceived ‘kinetic reaction order’ is eliminated too and with it the hard to explain analogies between microbial inactivation and simple chemical reactions (Peleg et al., 2004; Peleg, 2006).
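A brief sketch that evaluates the secondary model, equation (8.11), with the E. coli entries of Table 8.1 (k and Tc); the test temperatures are arbitrary:

```python
import numpy as np

# Log logistic secondary model, equation (8.11): b(T) = loge{1 + exp[k(T - Tc)]},
# evaluated with the E. coli parameters of Table 8.1.
k, Tc = 0.88, 60.5        # C^-1 and C, respectively

def b_of_T(T):
    return np.log1p(np.exp(k * (T - Tc)))

for T in (50.0, 58.0, 60.5, 65.0, 70.0):
    print(f"T = {T:5.1f} C   b(T) = {b_of_T(T):7.3f}")
# Well below Tc the rate parameter is practically zero; well above Tc it
# approaches the straight line k*(T - Tc), i.e. lethality 'kicks in' around Tc.
```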
8.3.7 The 'shoulder' and the logistic distribution of inactivation times
In certain survival curves, the decline in the treated population's number becomes observable only after a certain time. This 'lag-time' will always appear whenever the temporal distribution of the lethal or inactivation events has a variance much smaller than its mode (Peleg, 2003a). This is demonstrated in Fig. 8.3, left-hand side, which depicts, schematically, the symmetric logistic distribution. However, a similar shoulder will also be observed with any other unimodal distribution, notably the Weibull distribution (with n > 1), except that the region where the survival ratio drops will not appear log-linear (Fig. 8.3, right-hand side). This mathematical characteristic of unimodal distribution functions is sufficient to explain the appearance of a lag in a survival curve, and the same continuous model can describe the entire curve despite the appearance of the existence of two distinct inactivation regimes. Certain bacterial spores, upon being subjected to a lethal temperature, exhibit what appears to be an initial increase in their number (Fig. 8.4). This phenomenon is expressed in what is known as an 'activation shoulder' (Lewis and Heppel, 2000). It has been explained as being a manifestation of dormant spores' activation prior to their heat destruction (Shull et al., 1963; Sapru et al., 1992). Nevertheless, the observed activation shoulder might also be due, partly, to the breakup of spore aggregates and their improved dispersion. Since the 'activation shoulder' is rarely a major issue in thermal preservation of foods, its safety implications will not be discussed. Still, it has been shown that the general methodology to calculate the efficiency of thermal processes to be described below also applies to spores that are activated by heat (Peleg, 2002). It has been shown too that, at least theoretically, there is a way to estimate the original number of 'recoverable spores' in such cases, but the method has yet to be confirmed experimentally (Corradini and Peleg, 2003). Mathematically, curves like those shown in Fig. 8.4 can be fitted by several 'primary models' having four adjustable parameters. Two examples, whose fit is shown in the figure, are:

log10 S(t) = t^n {1 – loge[1 + exp(bt)]}/(k1 + k2 t)                 [8.13]

where b, n, k1 and k2 are the adjustable parameters (Peleg, 2002), and the 'double Weibullian' model (van Boekel, 2003):

log10 S(t) = b1 t^n1 – b2 t^n2                                       [8.14]
where b1, b2, n1 and n2 are the adjustable parameters. The reason for including them here is to demonstrate that the inactivation of neither the activated dormant spores nor those that are already active must follow the first-order kinetics as has been assumed by several authors (Peleg, 2002; van Boekel, 2003).
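A small sketch of the kind of curve the 'double Weibullian' model, equation (8.14), can produce; the four parameter values are invented solely to display an activation shoulder followed by a decline:

```python
import numpy as np

# 'Double Weibullian' model, equation (8.14): log10 S(t) = b1*t**n1 - b2*t**n2.
# Hypothetical parameters: the positive term mimics the apparent gain from
# activation of dormant spores, the negative term their subsequent inactivation.
b1, n1 = 0.5, 0.5
b2, n2 = 0.1, 2.0

def log10_S(t):
    t = np.asarray(t, dtype=float)
    return b1 * t ** n1 - b2 * t ** n2

for t in (0.5, 1.0, 2.0, 3.0, 5.0, 8.0):
    print(f"t = {t:4.1f} min   log10 S = {log10_S(t):7.2f}")
# The apparent count first rises above its initial level (the 'activation
# shoulder') and only later falls, with neither stage being first-order.
```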
8.3.8 Residual survival
It is not inconceivable that even a long heat treatment, especially if mild, might leave some viable survivors. This phenomenon, akin to the already mentioned
Fig. 8.3 Survival curves of two hypothetical microbial populations, whose members’ heat resistances have a logistic and Weibull distribution. Notice that whenever the distribution’s variance is considerably smaller than the mode (or mean), the survival curve will have a flat ‘shoulder’.
Fig. 8.4 Example of sporal survival curves at two lethal temperatures exhibiting an ‘activation shoulder’, fitted by equations (8.13) and (8.14) as models, solid and dashed line, respectively. The experimental data were reported by Sapru et al. (1992).
‘tailing’, may require special mathematical models. An example is the model
log10 S(t) = –at/(b + t)                                             [8.15]

or

log10 S(t) = –Σi ai t/(bi + t)                                       [8.16]
where a and b are constants. Equation (8.15) implies that at t → ∞, log S(t) → –a and equation (8.16) that log S(t) → Σ(–ai). The rate of the population drop is primarily controlled by b or bi respectively (see Fig. 8.5). However, while such models can be used to fit survival data they, like all the other models cited in this chapter, cannot be used for extrapolation (Peleg and Penchina, 2000). Also, there is always the possibility that accumulated damage will eventually bring down the population (see Fig. 8.5), albeit only if one would prolong the heat treatment.
8.3.9 Does a thermal death time exist?
A general model that can describe survival curves with a 'strong' downward concavity is:

log10 S(t) = –at/(b – t)                                             [8.17]

or

log10 S(t) = –Σi ai t/(bi – t)                                       [8.18]
Fig. 8.5 Schematic view of a survival curve with a residual survival ratio. Such a curve can be fitted by equation (8.15) or (8.16) as a model (see text).
Fig. 8.6 Schematic view of a survival curve exhibiting an absolute thermal death time. Such curves can be described by equation (8.17) or (8.18) as a model.
where, again, a and b are constants. In the case of equation (8.17) however, as t → b, log10S(t) → – ∞ or S(t) → 0, i.e. at any time beyond t = b there will not be any survivors. In the case of equation (8.18), this will happen whenever the time will equal the smallest bi as shown in Fig. 8.6. Either way, models of this kind allow for the existence of a ‘true thermal death time’, that is for an isothermal treatment of a finite duration that will result in absolute destruction of the microbial population at hand (Peleg, 2003a), which can be expected in light of physiological considerations. This hypothesis is a testable one, because one can predict that if the treatment exceeds the given time, no survivors will ever be found, even with the most rigorous isolation and counting methods. Unfortunately, there is an inherent asymmetry between a positive and negative proof. (‘Absence of evidence is not evidence of absence’.) Finding survivors where the prediction is that they could not exist will clearly refute the model, but not finding survivors as predicted could always be explained as being due to the failure of the recovery procedure and the method to detect them (Peleg, 2003a, Peleg, 2006). Still, the concept that there can be absolute sterility should not be dismissed out of hand. If it could be demonstrated that absolute mortality or inactivation actually exists, then sterility criteria could be established in absolute terms rather than on the basis of an arbitrarily chosen survival ratio, another offshoot of the first-order kinetics model.
8.4
Sigmoid isothermal survival curves
Sigmoid survival curves are fairly rare (Peleg, 2003b) but, whenever they are found, they can have one of two basic forms, shown schematically in Fig. 8.7. The first kind of curve has a downward concavity until the inflection point and an upward concavity afterwards, while the second curve has an initial upward concavity, which is inversed after the inflection point. The first shape (Fig. 8.7, Type A) is probably indicative that accumulated damage sensitizes the weak members of the population. After these are eliminated, survivors with higher heat resistance remain, which is expressed by the observed tailing. [Still, at least in principle, a prolonged heat treatment can eventually sensitize the sturdy survivors as well and they too might vanish, albeit at a time after the experiment’s termination (see dashed line).] When the initial part of the survival curve has an upward concavity (Fig. 8.7, Type B), most probably it means that the weak members of the population are decimated rather rapidly, but it takes a considerable amount of accumulated damage for the sturdy survivors to succumb to the treatment (Peleg, 2003b). At this stage, a high rate of inactivation will be resumed, which will be manifested in an accelerated dive of the survival curve – see Fig. 8.7. A sigmoid survival curve can therefore be viewed as the manifestation of the existence of a mixed population (Peleg, 1999). However, whether this is really true, that is, the differences in heat resistance can be attributed to genetic factors, for example, can only be established categorically by isolating the subpopulations and determining their identities by independent assays.
Fig. 8.7 Schematic view of the two types of sigmoid survival curves. Notice that in Type A the downward concavity is changing to upper concavity while in Type B it is the other way around. For interpretation see text.
Mathematically, sigmoid survival curves can be described by a variety of three- or four-parameter empirical models, none of them unique (Peleg, 2003b). Examples are:
• Type A (Fig. 8.7, top):

log10 S(t) = –c1 t^n1/(1 + c2 t^n2),   n1 ≥ n2                       [8.19]

or

log10 S(t) = –[c1 t/(c2 + t)]^n,   n > 1                             [8.20]

• Type B (Fig. 8.7, bottom):

log10 S(t) = –c3 t^n3 – c4 t^n4,   n3 > 1, n4 < 1                    [8.21]

or

log10 S(t) = –c1 t/[(1 + c2 t)(c3 – t)]                              [8.22]
where the c and n are constants. Again, no universal meaning should be assigned to any of these adjustable parameters nor should they be considered as having any special interrelationships. There is simply no evidence that there exists a meaningful or well-defined finite ‘lag-time’ of inactivation, a related ‘maximum inactivation rate’ and a finite (‘maximum’) residual survival level that is associated with these two. Moreover, a short flat shoulder, or ‘lag-time’, may be simply an artifact of the come-up time, evidence that inactivation became measurable only after a certain temperature had been reached (Peleg, 2003b), a situation that should be avoided in any study of microbial survival.
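To make the two shapes concrete, the following sketch evaluates equation (8.20) (Type A) and equation (8.21) (Type B) with invented constants; only the qualitative concavity pattern is of interest:

```python
import numpy as np

# Type A sigmoid survival curve, equation (8.20): downward, then upward, concavity.
def type_A(t, c1=4.0, c2=3.0, n=2.0):              # invented constants, n > 1
    return -(c1 * t / (c2 + t)) ** n

# Type B sigmoid survival curve, equation (8.21): upward, then downward, concavity.
def type_B(t, c3=0.02, n3=3.0, c4=1.0, n4=0.5):    # invented constants
    return -c3 * t ** n3 - c4 * t ** n4

for t in np.linspace(0.5, 6.0, 12):
    print(f"t = {t:4.1f}   Type A: {type_A(t):7.2f}   Type B: {type_B(t):7.2f}")
# Type A levels off ('tailing') after an accelerating start; Type B starts with
# rapid elimination of the weak members and only later shows an accelerated dive.
```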
8.5
Non-isothermal inactivation
Traditionally, the efficacy of thermal preservation has been assessed in terms of an ‘F0-value’. However, as demonstrated in this chapter and elsewhere (e.g. Peleg, 2003a; Corradini et al., 2005), the assumptions on which the ‘F0-value’ calculation is based are too frequently violated, raising serious doubts about its validity as a universal sterility criterion. Although many in the field think otherwise, we feel that it is about time that the ‘F0-value’ is abandoned and replaced by inactivation efficacy criteria that are not based on idealized and over-simplified kinetics and on analogies that cannot be sustained. What follows is an alternative approach to modeling microbial inactivation that we have repeatedly proposed in recent years. Although it has not been derived from any mechanistic considerations and hence is completely ‘empirical’, its applicability has been demonstrated in the correct prediction of non-isothermal survival patterns from the isothermal inactivation data on Listeria (Peleg et al., 2001), Salmonella (Mattick et al., 2001), E. coli (Corradini and Peleg, 2004b) and B. sporothermodurans spores (Periago et al., 2004).
8.5.1 Construction of the non-isothermal survival curve
For the sake of simplicity, consider an organism or spores whose isothermal survival curves all obey the Weibullian model [equation (8.10)] with a constant power (n = const). Let us also assume that the temperature-dependence of the rate parameter b(T) obeys the linear log logistic model [equation (8.11)]. None of the above is a strict requirement; as has been shown elsewhere (e.g. Peleg et al., 2003; Corradini and Peleg, 2005, 2006a), the same methodology also applies to a variety of different primary and secondary models having a totally dissimilar mathematical structure. Since even under isothermal conditions the survival rate is a function of time, and hence of the survival ratio, as shown at the top of Fig. 8.8, it cannot be a function of temperature alone under non-isothermal conditions either. Thus, when the organism or spores are subjected to a thermal process with a temperature profile T(t), we assume that the momentary logarithmic inactivation rate, d log10 S(t)/dt, is the isothermal logarithmic inactivation rate at the momentary temperature, d log10 S(t)/dt @ T = T(t), at the time, t*, that corresponds to the momentary logarithmic survival ratio, log10 S(t). The assumption is illustrated graphically in Fig. 8.8. The figure shows that during the non-isothermal inactivation the momentary rate depends on both the momentary temperature and the momentary survival ratio. (Another assumption is that the timescale of the treatment is short enough so that adaptation and damage repair do not occur, a condition that is satisfied in most, but not necessarily all, industrial heat processes.) For the Weibullian model with n(T) = n, the momentary isothermal rate is given by:
[d log10 S(t)/dt]T=const = –b(T) n t^(n–1)                           [8.23]

and the corresponding time of the logarithmic survival ratio by:

t* = {–log10 S(t)/b(T)}^(1/n)                                        [8.24]

Realizing that b(T) becomes b[T(t)] as the temperature is continuously changing, combining equations (8.23) and (8.24) yields the rate equation:

d log10 S(t)/dt = –b[T(t)] n t*^(n–1)                                [8.25]

or

d log10 S(t)/dt = –b[T(t)] n {–log10 S(t)/b[T(t)]}^((n–1)/n)         [8.26]
With b(T) being defined by the log logistic equation [equation (8.11)] the model becomes:
Fig. 8.8 Schematic view of the construction of a non-isothermal survival curve. Notice that the isothermal rate is time-dependent (top) and that the momentary slope of the non-isothermal survival curve is the slope of the isothermal curve at the momentary temperature at a time that corresponds to the momentary survival ratio (bottom); also notice that when the temperature increases (middle), the non-isothermal survival curve (bottom) can have a downward concavity despite the fact that all the isothermal curves have an upward concavity.
d log10 S(t)/dt = –loge[1 + exp{k[T(t) – Tc]}] · n · {–log10 S(t)/loge[1 + exp{k[T(t) – Tc]}]}^((n–1)/n)          [8.27]
This rate equation is based on the organisms' or spores' three survival parameters, namely n, k and Tc. It can be easily solved numerically by a program like Mathematica® (Wolfram Research, Champaign, IL), the one used to generate all the figures and simulations shown in this chapter. The solution, in the form of a log10 S(t) vs time relationship, can be produced for almost any conceivable temperature profile, including discontinuous temperature histories, e.g. Peleg (2003a) and Corradini et al. (2005). The reader will notice that for the first-order kinetics model, where n = 1 and (n – 1)/n = 0, equation (8.25) is reduced to:

d log10 S(t)/dt = –b[T(t)]                                           [8.28]

which can be integrated analytically whenever T(t) is a linear relationship (Peleg et al., 2003). The continuous rate model [equation (8.27)] can be approximated by the difference equation (Peleg et al., 2005):
[log10 S(ti) – log10 S(ti–1)]/(ti – ti–1) = –n {(b[T(ti)] + b[T(ti–1)])/2} · {–[log10 S(ti) + log10 S(ti–1)]/(b[T(ti)] + b[T(ti–1)])}^((n–1)/n)          [8.29]

in which case it can be solved incrementally in real time. The iterations start with t = 0, T = T0, log10 S(0) = 0, b[T(0)] = b(T0) and therefore:

log10 S(t1)/t1 = –n {(b[T(t1)] + b(T0))/2} · {–log10 S(t1)/(b[T(t1)] + b(T0))}^((n–1)/n)          [8.30]
The only unknown here is log10 S(t1), which can be calculated in Microsoft Excel® using the GoalSeek function. Once log10 S(t1) is calculated in this way, it is inserted into equation (8.29) with the corresponding t2, b[T(t1)] and b[T(t2)], which is similarly solved for log10 S(t2). The process is repeated for t3 and log10 S(t3) and so forth, until the whole temperature profile is covered. Programs to generate survival curves with this model can be found as freeware on the web for pasteurization and sterilization processes (http://www-unix.oit.umass.edu/~aew2000/SalmSurvival.html and http://www-unix.oit.umass.edu/~aew2000/CBotSurvival.html, respectively). (The user can insert any chosen survival parameters and select parameters that control the heating, holding and cooling stages of the examined process or paste an already recorded time–temperature data file.) In principle at least, the same program can be
implemented in industrial processes, where T(ti) comes directly from a thermocouple embedded in a can, for example. This will enable monitoring the progress of thermal processes in real time in terms of the momentary theoretical survival ratio that they accomplish, instead of the 'F0-value' now in use. [For more on the method, including a demonstration of its performance with a 'non-Weibullian' model, see Peleg et al. (2005) and Peleg (2006).]
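The construction can also be sketched in a few lines of general-purpose code. The following is not the authors' program: it is a simple explicit time-stepping approximation of equations (8.24) and (8.25) under the Weibullian–log logistic assumptions, with invented survival parameters and an invented heating profile, shown only to make the algorithm concrete:

```python
import numpy as np

# Explicit time-stepping sketch of the non-isothermal construction, equations
# (8.24)-(8.25), under the Weibullian-log logistic assumptions. The survival
# parameters and the temperature profile are hypothetical, chosen only to
# exercise the algorithm; the printed log10 S values have no practical meaning.
n, k, Tc = 1.5, 0.3, 60.0                 # Weibullian power and equation (8.11) parameters

def b_of_T(T):
    return np.log1p(np.exp(k * (T - Tc)))            # equation (8.11)

def temperature(t):
    # hypothetical process: linear come-up from 25 C to 65 C over 10 min, then holding
    return min(25.0 + 4.0 * t, 65.0)

dt, t_end = 0.01, 15.0                    # min
steps = int(round(t_end / dt))
times = np.arange(steps + 1) * dt
log_S = np.empty(steps + 1)
log_S[0] = -1e-9                          # tiny seed; avoids the degenerate zero rate
                                          # of the power-law model exactly at log10 S = 0
for i in range(steps):
    b_now = b_of_T(temperature(times[i]))
    t_star = (-log_S[i] / b_now) ** (1.0 / n)        # equation (8.24)
    rate = -b_now * n * t_star ** (n - 1.0)          # equation (8.25)
    log_S[i + 1] = log_S[i] + rate * dt              # Euler step

for t_mark in (0, 3, 6, 9, 12, 15):
    j = int(round(t_mark / dt))
    print(f"t = {t_mark:4.1f} min   T = {temperature(float(t_mark)):5.1f} C   "
          f"log10 S = {log_S[j]:8.2f}")
```

The published difference form, equation (8.29), is implicit in log10 S(ti) and is therefore solved per step with a root finder (the GoalSeek analogue); the explicit step above is a simpler approximation of the same idea.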
8.5.2 Simulated examples
Figure 8.9 shows simulated non-isothermal survival curves of two hypothetical organisms A and B, whose survival parameters' temperature-dependence is also shown in the figure, generated for two temperature profiles using equation (8.27) as a model. The simulations demonstrate that the temperature profile's complexity is no hindrance to solving the differential equations. They also show that one can generate, simultaneously, the survival curves of more than a single targeted organism or spore and those of nutrients and/or pigments as well, provided that their degradation kinetics follow a pattern similar to that of microbial inactivation (Corradini and Peleg, 2004a; Peleg, 2006). With today's mathematical tools and the calculation power of even a laptop computer, the complexity of the primary and secondary models should not deter one from their use. As long as these models capture the organism's survival characteristics, their mathematical format needs to be neither unique nor aesthetically appealing, and as long as the time–temperature conditions in the treatment do not require extrapolation, every suitable primary and corresponding secondary model will produce a similar non-isothermal survival curve (Peleg et al., 2004). It can also be added that the validity of the models can easily be tested by comparing their predicted non-isothermal survival curves with those experimentally recorded. As already stated, the models developed for three different organisms and for bacterial spores in three different media have all passed this test. A safety factor can be added to ensure that at least a certain number of decades of reduction will be accomplished, based on the above considerations (Corradini and Peleg, 2006a) rather than on the basis of an 'F0-value' whose limitations have already been stated. In other words, the safety factor will be in the form of a number of minutes (or seconds) added to that needed to reach a survival ratio deemed practically safe.
8.6
Empirical growth models
The most commonly used models, notably the modified Gompertz and the Baranyi– Roberts, have four adjustable parameters and they can be used interchangeably (McKellar and Lu, 2004) – see Section 8.2.2. In the absence of a decisively superior theoretical model, there is no reason to avoid ad hoc empirical models at least for certain applications. Such empirical models, chosen in light of Ockham’s razor as a guideline (Corradini and Peleg, 2005), have the advantage of being
Fig. 8.9 Simulated non-isothermal survival curves of two microorganisms A and B having the survival parameters, shown in the box at the upper right generated for two temperature profiles using equation (8.27) as a model.
Fig. 8.10 Schematic view of three possible microbial growth patterns generated with equation (8.32) as a model.
simpler mathematically and free of assumptions that require independent verification. To demonstrate this approach, consider the following: suppose a dimensionless growth ratio, y(t), is defined as either:

y(t) = [N(t) – N0]/N0   or   y(t) = log10[N(t)/N0]                   [8.31]

where N(t) and N0 are the momentary and initial number of the organism in question in a given unit volume or mass. The first will apply to moderate increases in the population size and the second to growth where the number of cells increases by several orders of magnitude. According to either definition, at t = 0, when N(0) = N0, y(0) = 0. Thus any change in y(t) can be considered as a normalized net growth or logarithmic growth ratio. Three typical growth scenarios, depicted schematically as y(t) vs time relationships, are shown in Fig. 8.10. All three were created with the single empirical model (Corradini and Peleg, 2005):

y(t) = a(T) t^n(T)/[b(T) + t^m(T)]                                   [8.32]

where a(T), b(T), n(T) and m(T) are temperature-dependent coefficients. This model is particularly convenient for growth patterns with a short 'lag-time'. It is easy to show that when n = m, as t → ∞, y(t) → a(T). When n(T) < m(T), there will be a true peak growth at t = [(m – n)/(b(T) n)]^(–1/m) and when n(T) > m(T), y(t) will grow indefinitely. The latter is of course an impossible scenario. Hence the
model with n > m can apply only to situations where the growth curve has an inflection point but neither a peak nor an apparent asymptotic level within the experiment’s duration. In the ‘asymptotic case’, resembling the scenario for which most of the current growth models have been developed, i.e. where the growth curve has a sigmoid shape, n(T) = m(T), the model becomes:
y(t) = a(T) t^n(T)/[b(T) + t^n(T)]                                   [8.33]
According to this equation, the isothermal momentary normalized growth rate dy(t)/dt is given by:
[dy(t)/dt]T=const = a(T) b(T) n(T) t^(n(T)–1)/[b(T) + t^n(T)]^2      [8.34]

and the time that corresponds to any isothermal growth ratio, t*, by:

t* = {b(T) y(t)/[a(T) – y(t)]}^(1/n(T))                              [8.35]
Combining the two and noting that a(T) = a[T(t)], b(T) = b[T(t)] and n(T) = n[T(t)] produces the non-isothermal growth rate equation:
dy(t)/dt = a[T(t)] b[T(t)] n[T(t)] t*^(n[T(t)]–1)/{b[T(t)] + t*^n[T(t)]}^2          [8.36]
where t* is defined by equation (8.35), see Fig. 8.11. As has been recently shown, this model could be used to predict successfully the non-isothermal growth patterns of two organisms under a variety of changing temperature regimes (Corradini and Peleg, 2005). (Similar or slightly better predictions were obtained with a more elaborate but still three-parameter growth model, based on a modified version of the original logistic function. Such a model might have an advantage where long ‘lag-times’ are involved – see article.) Where n ≠ m the situation is more complicated because equation (8.32) has no analytical inverse, i.e. t* cannot be calculated algebraically. Yet in Mathematica®, t* can be expressed as the numerical solution of equation (8.32) in such a way that it will be recalculated at each iteration during the rate equation solution (see Peleg, 2002, 2003a). Thus, one can still use the model to generate non-isothermal growth curves that have a true peak (i.e. where n < m), as will be shown below. This is provided that the secondary models, that is a(T), b(T), n(T) and m(T) vs T, can be derived from the experimental isothermal data and that the temperature history, T(t), can be expressed algebraically, too. All the above can be said about any isothermal growth model that has no analytical inverse, as long as their parameters’ temperature-dependence can be expressed algebraically. The difference between the above and most previous methods of predicting
Fig. 8.11 Schematic view of the construction of a non-isothermal sigmoid growth curve. Notice that the momentary isothermal growth rate depends on the momentary growth level.
Fig. 8.12 Simulated isothermal ‘sigmoid’ growth curves of a hypothetical microorganism, using Eq. (8.33) as a model and the temperature-dependence of its parameters. The ordinate is the logarithmic growth ratio [Eq. (8.31)], i.e. y(t) = log10[N(t)/N0].
non-isothermal growth patterns from isothermal data (e.g. Bernaerts et al., 2004) is that the model's rate equation is derived directly from the isothermal experimental data and therefore need not be assumed a priori. Also, the primary model used to fit the isothermal growth data can be chosen by mathematical convenience considerations, and its exact form is unimportant (Corradini and Peleg, 2005; Peleg, 2006).
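A sketch of the same idea for growth, stepping equations (8.34) and (8.35) under a changing temperature; the secondary models a(T), b(T), n(T) and the temperature history below are hypothetical, invented only to show how the momentary growth level feeds back into the momentary rate:

```python
import numpy as np

# Sketch of the non-isothermal growth construction for the 'asymptotic' case
# n(T) = m(T), equations (8.33)-(8.35). The secondary models a(T), b(T), n(T)
# and the storage-temperature history below are hypothetical.
def a_of_T(T):                 # momentary asymptotic growth level (log10 units)
    return 2.0 + 0.15 * (T - 10.0)

def b_of_T(T):                 # momentary 'time scale' parameter
    return 500.0 * np.exp(-0.12 * (T - 10.0))

def n_of_T(T):                 # momentary shape power (kept constant here)
    return 1.8

def temperature(t):            # slow warming from 10 C to 30 C over 12 h, then constant
    return 10.0 + 20.0 * min(t / 12.0, 1.0)

dt, t_end = 0.01, 24.0         # h
steps = int(round(t_end / dt))
times = np.arange(steps + 1) * dt
y = np.empty(steps + 1)
y[0] = 1e-4                    # negligibly small seed, so the power-law rate can start

for i in range(steps):
    T = temperature(times[i])
    a, b, n = a_of_T(T), b_of_T(T), n_of_T(T)
    y_now = min(y[i], 0.999 * a)                       # guard if a(T) ever drops below y
    t_star = (b * y_now / (a - y_now)) ** (1.0 / n)    # equation (8.35)
    rate = a * b * n * t_star ** (n - 1.0) / (b + t_star ** n) ** 2   # equation (8.34)
    y[i + 1] = y[i] + rate * dt                        # Euler step

for t_mark in (0, 6, 12, 18, 24):
    j = int(round(t_mark / dt))
    print(f"t = {t_mark:4.1f} h   T = {temperature(float(t_mark)):5.1f} C   y = {y[j]:5.2f} log10")
```

When n(T) ≠ m(T), t* has no closed form and would instead be obtained numerically at each step, as discussed above.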
8.7
Simulation of non-isothermal growth curves
8.7.1 Sigmoid growth curves
Examples of simulated sigmoid isothermal growth curves of a hypothetical microorganism, based on equation (8.33) as the primary model, are given in Fig. 8.12.
Fig. 8.13 Simulated non-isothermal growth curves of the hypothetical microorganism of Fig. 8.12 under four different temperature profiles marked I, II, III and IV.
Their realistic appearance demonstrates that there is no need to assume that the growth curve has an ‘autonomous’, i.e. a distinct, lag phase and, hence, that the growth process can be modeled in its entirety by means of a continuous function. [In fact these curves are very similar to the growth curves of E. coli in a culture broth reported by Fujikawa et al. (2004).] The figure also shows the temperature-dependence of the growth parameters a(T), b(T) and n(T). Simulated non-isothermal growth curves of this hypothetical microorganism under various temperature profiles (marked I–IV) are depicted in Fig. 8.13. These growth curves demonstrate that the mathematical complexity of the temperature profiles and the resulting model has practically no effect on the program’s ability to solve its rate equation.
8.7.2 Curves with a true peak growth
An example of an isothermal growth curve generated with equation (8.32) with n < m as a primary model is given in Fig. 8.14, bottom left. It simulates a commonly encountered growth pattern where the population size declines after reaching a peak value – most probably as a result of resource depletion, competition, habitat pollution or a combination of these (Dougherty et al., 2002; Neysens et al., 2003; Chorianopoulos et al., 2005; Tadesse et al., 2005). Also shown in the figure is the temperature-dependence of the growth parameters a(T), b(T), m(T) and n(T). This kind of growth pattern can be described by a more elaborate model, constructed from the superimposition of logistic growth and Fermian inactivation components (Peleg, 1996b; Peleg et al., 2004). Such simulations can be used to assess and perhaps predict shifts in the growth peak and changes in its height as well as in the other growth characteristics. The main problem here is not that t* is defined as an iterative procedure rather than as an algebraic term. In fact, the calculations to generate the non-isothermal growth curve shown in the figure did not take much longer to produce than the curves shown in Fig. 8.12, where t* had been calculated analytically. The main drawback of equation (8.32) as a primary model, as well as that of any other four-parameter model, is the requirement that the experimental growth data must be dense and their scatter be very small. Otherwise, although the model will fit the data well as far as statistical criteria are concerned, the resulting regression parameters, namely a(T), b(T), m(T) and n(T), might vary to such an extent that it will be impossible to derive meaningful secondary models from them. Prediction of non-isothermal growth patterns with four-parameter primary models is still a little-researched area. Therefore, it is too early to discard the possibility that such models will be found useful in describing real microbial growth patterns after all. Until then, though, simulations of the kind shown in Fig. 8.14 can still be used to assess, qualitatively, how changes in temperature might affect microbial peak growth in processed, stored or transported foods.
8.8
Conclusions
At least in principle, non-isothermal inactivation and growth patterns can be simulated and predicted using rate equations based on the same general assumption that the momentary rate of change is the isothermal rate at the momentary temperature, at a time that corresponds to the momentary state of the population. This is a testable hypothesis, and the already-mentioned early results, reported elsewhere for inactivation (Mattick et al., 2001; Peleg et al., 2001; Corradini and Peleg, 2004b, 2005; Periago et al., 2004) and for growth (Corradini and Peleg, 2005; Corradini et al., 2006b), indicate that it is a very reasonable one. In all the cases where the hypothesis was proven correct, the models were truly predictive, that is, they could be used to estimate correctly the outcome of experiments not used in their parameters' determination. It is hoped the concept will be tested and confirmed in additional systems and will be further developed for the prediction of non-isothermal inactivation and growth patterns where the intensities of synergistic and antagonistic factors vary simultaneously.
Fig. 8.14 A simulated non-isothermal growth curve of an organism whose isothermal growth curves have a true peak [Eq. (8.32) with m(T) > n(T)] under an irregularly oscillating temperature profile. The temperature-dependence of the primary model's four parameters is shown at the top. Notice that despite the fact that t* cannot be calculated analytically, only numerically at each iteration, the rate equation can still be solved by a program like Mathematica® even for complicated temperature profiles.
8.9
Acknowledgements
Contribution of the University of Massachusetts at Amherst. The authors express their thanks to Mark D. Normand for his help in programming.
8.10 References Augustin JC, Carlier V and Rozier J (1998) Mathematical modeling of the heat resistance of L. monocytogenes, J Appl Microbiol, 84, 185–191. Anderson WA, McClure PJ, Baird-Parker AC and Cole MB (1996) The application of a loglogistic model to describe the thermal inactivation of Clostridium botulinum 213B at temperatures below 121.1 degrees C, J Appl Bacteriol, 80, 283–290. Baranyi J and Roberts TA (1994) A dynamic approach to predicting bacterial growth in food, Int J Food Microbiol, 23, 277–294. Bernaerts K, Dens E, Vereecken K, Geeraerd A, Devlieghere F, Debevere J, Van Impe JF (2004) Modeling microbial dynamics under time-varying conditions, in McKellar R and Lu X (eds), Modeling Microbial Responses on Foods, Boca Raton, FL, CRC Press, 243–261. Buchanan RL, Whiting RC and Damert WC (1997) When is simple good enough: a comparison of the Gompertz, Baranyi and tree-phase linear models for fitting bacterial growth curves, Food Microbiol, 14, 313–326. Campanella OH and Peleg M (2001) Theoretical comparison of a new and the traditional method to calculate Clostridium botulinum survival during thermal inactivation, J Sci Food Agr, 81, 1069–1076. Casolari A (1988) Microbial death, in Bazin MJ and Prosser JI (eds), Physiological Models in Microbiology, Boca Raton, FL, CRC Press, 1–44. Chorianopoulos NG, Boziaris IS, Stamatiou A and Nychas GJE (2005) Microbial association and acidity development of unheated and pasteurized green-table olives fermented using glucose or sucrose supplements at various levels, Food Microbiol, 22, 117–124. Corradini MG and Peleg M (2003) A theoretical note on estimating the number of recoverable spores from survival curves having an ‘activation shoulder’, Food Res Int, 36, 1007–1013. Corradini MG and Peleg M (2004a) A model of non-isothermal degradation of nutrients, pigments and enzymes, J Sci Food Agr, 84, 217–226. Corradini MG and Peleg M (2004b) Demonstration of the Weibull–Log logistic survival model’s applicability to non isothermal inactivation of E. coli K12 MG1655, J Food Prot, 67, 2617–2621. Corradini MG, Normand MD and Peleg M (2005) Calculating the efficacy of heat sterilization processes, J Food Eng, 67, 59–69. Corradini MG and Peleg M (2005) Estimating non-isothermal bacterial growth in foods from isothermal experimental data, J Appl Microbiol, 99, 187–200. Corradini MG, Normand MD and Peleg M (2006a) On expressing the equivalence of nonisothermal and isothermal heat sterilization processes, J Sci Food Agr, 86, 785–792. Corradini MG, Amézquita A and Peleg M (2006b) Modeling and predicting non-isothermal microbial growth using general purpose software. Int J Food Microbiol, 106, 223–228. Donth EJ (2001) The Glass Transition: Relaxation Dynamics in Liquids and Disordered Materials, Berlin, New York, Springer. Dougherty DP, Breidt F, McFeeters RF and Lubkin SR (2002) Energy-based dynamic model for variable temperature batch fermentation by Lactococcus lactis, Appl Environ Microb, 68, 2468–2478. Fernández A, Salmeron C, Fernández PS and Martínez A (1999) Application of the frequency distribution model to describe the thermal inactivation of two strains of Bacillus cereus, Trends Food Sci Tech, 10,158–162.
Fujikawa H, Kai A, Morozumi S (2004) A new logistic model for Escherichia coli growth at constant and dynamic temperatures, Food Microbiol, 21, 501–509. Geeraerd AH, Valdramidis VP, Bernaerts K, Debevere J, Van Impe JF (2004) Evaluating microbial inactivation models for thermal processing, in Richardson P (ed.), Improving the Thermal Processing of Foods, Cambridge, Woodhead Publishing, 427–453. Lewis M and Heppel N (2000) Continuous Thermal Processing of Foods: Pasteurization and UHT Sterilization, Gaithersburg, MD, Aspen Publishers. Mafart P, Couvert O, Gaillard S and Leguerinel I (2002) On calculating sterility in thermal preservation methods: application of the Weibull frequency distribution model, Int J Food Microbiol, 72, 107–113. Mattick KL, Legan JD, Humphrey TJ and Peleg M (2001) Calculating Salmonella inactivation in non-isothermal heat treatments from isothermal non linear survival curves, J Food Prot, 64, 606–613. McKellar R and Lu X (eds) (2004) Modeling Microbial Responses on Foods, Boca Raton, FL, CRC Press. McMeekin TA, Olley JN, Ross T and Ratkowsky DA (1993) Predictive Microbiology: Theory and Application, New York, NY, John Wiley and Sons. Neysens P, Messens W and De Vuyst L (2003) Effect of sodium chloride on growth and bacteriocin production by Lactobacillus amylovorus DCE 471, Int J Food Microbiol, 88, 29–39. Peleg, M (1996a) On modeling changes in food and biosolids at and around their glass transition temperature range, Crit Rev Food Sci, 36, 49–67. Peleg, M (1996b) A model of microbial growth and decay in a closed habitat based on combined Fermi’s and the logistic equations, J Sci Food Agr, 71, 225–230. Peleg M (1999) On calculating sterility in thermal and non-thermal preservation methods, Food Res Int, 32, 271–278. Peleg M (2002) A model of survival curves having an ‘activation shoulder’, J Food Sci, 67, 2438–2443. Peleg M (2003a) Microbial survival curves: Interpretation, mathematical modeling and utilization, Comments Theor Biol, 8, 357–387. Peleg M (2003b) Calculation of the non-isothermal inactivation patterns of microbes having sigmoidal isothermal semi-logarithmic survival curves, Crit Rev Food Sci, 43, 645–658. Peleg M (2006) Advanced Quantitative Microbiology for Foods and Biosystems: Models for Predicting Growth and Inactivation, Boca Raton, FL, CRC Press. Peleg M and Cole MB (1998) Reinterpretation of microbial survival curves, Crit Rev Food Sci, 38, 353–380. Peleg M, Normand MD (2004) Calculating microbial survival parameters and predicting survival curves from non-isothermal inactivation data, Crit Rev Food Sci Nutr, 44, 409– 418. Peleg M and Penchina CM (2000) Modeling microbial survival during exposure to a lethal agent with varying intensity, Crit Rev Food Sci, 40, 159–172. Peleg M, Normand MD, Damrau E (1997) Mathematical interpretation of dose–response curves, Math Biol, 59(4), 747–761. Peleg M, Penchina CM and Cole MB (2001) Estimation of the survival curve of Listeria monocytogenes during non-isothermal heat treatments, Food Res Int, 34, 383–388. Peleg M, Normand MD and Campanella OH (2003) Estimating microbial inactivation parameters from survival curves obtained under varying conditions – The linear case, B Math Biol, 65, 219–234. Peleg M, Corradini MG and Normand MD (2004) Kinetic models of complex biochemical reactions and biological processes, Chemie Ingenieur Technik, 76, 413–423. Peleg M, Normand MD and Corradini MG (2005) Generating microbial survival curves during thermal processing in real time, J Appl Microbiol, 98, 406–417. 
Periago PM, van Zuijlen A, Fernandez PS, Klapwijk PM, ter Steeg PF, Corradini MG and Peleg M (2004) Estimation of the non-isothermal inactivation patterns of Bacillus
sporothermodurans IC4 spores in soups from their isothermal survival data, Int J Food Microbiol, 95, 205–218. Sapru V, Teixeira AA, Smerage GH and Lindsay JA (1992) Predicting thermophilic spore population dynamics for UHT sterilization processes, J Food Sci, 57, 1248–1257. Seyler RJ (ed.) (1994) Assignment of the Glass Transition, Philadelphia, PA, ASTM. Shull JJ, Cargo GT and Ernst RR (1963) Kinetics of heat activation and thermal death of bacterial spores, Appl Microbiol, 11, 485–487. Tadesse G, Ashenafi M and Ephraim E (2005) Survival of E.coli O157:H7, Staphylococcus aureus, Shigella flexneri and Salmonella spp. in fermenting ‘Borde’, a traditional Ethiopian beverage, Food Control, 16, 189–196. Valdramidis VP, Geeraerd AH, Bernaerts K and Van Impe JK (2004) Dynamic vs. static thermal inactivation: the necessity of validation some modeling and microbial hypotheses, Paper No. 434, Proceedings of the 9th International Conference of Engineering and Food (ICEF 9), March–11, Montpellier, France. van Boekel MAJS (2002) On the use of the Weibull model to describe thermal inactivation of microbial vegetative cells, Int J Food Microbiol, 74, 139–159. van Boekel MAJS (2003) Alternate kinetics models for microbial survivor curves, and statistical interpretations (Summary of a presentation at IFT Research Summit on Kinetic Models for Microbial Survival), Food Technol, 57, 41–42.
9 Modelling of high-pressure inactivation of microorganisms in foods
A. Diels, I. Van Opstal, B. Masschalck and C. W. Michiels, Katholieke Universiteit Leuven, Belgium
9.1 Introduction
For more than a century, thermal processes have played a very important role in industrial food preservation. While the application of a suitable dose of heat allows efficient destruction of pathogens and spoilage organisms, it also inevitably causes a certain degradation of the sensorial and nutritional attributes of the food product. In the progress towards more fresh-looking, high-quality convenience foods, the benefits of minimal processing for the organoleptic and nutritional properties of foods have been amply demonstrated and constitute a driving force behind, among others, non-thermal preservation technologies. While most of these are still in various stages of development, high hydrostatic pressure (HP) treatment is already being exploited commercially for a number of niche applications, for instance fruit juices, shellfish, avocado products and hams (Gould, 2000; Linton et al., 2001). HP processing has the potential to inactivate vegetative microorganisms and inhibit undesired activity of various food-related enzymes with minimal changes in sensorial and nutritional properties (San Martin et al., 2002; Trujillo et al., 2002). In the biosphere, hydrostatic pressure increases from about 0.03 MPa at the top of Mount Everest to 0.1 MPa at sea level and further to over 100 MPa at the deepest point in the ocean (approx. 11 000 m). Pressures used for food applications typically range between 100 and 600 MPa. An important principle that underlies the effects of pressure on reaction equilibria in both biological and abiotic systems is known as the principle of ‘Le Chatelier’.
This principle states that pressure promotes reactions accompanied by a decrease in volume and inhibits those involving a volume increase (Leadley and Williams, 1997). However, because reactions may also involve solvent molecules (e.g. changes to the hydration shell upon denaturation of a protein in an aqueous environment), the net sign of the volume change of a reaction is not always easy to predict. In general, however, one can say that in the considered pressure range HP treatment mainly affects non-covalent bonds such as hydrogen bonds, ionic bonds and hydrophobic interactions in, for example, proteins, lipids and polysaccharides, while covalent bonds are generally not affected (Cheftel, 1995). Further, in contrast to heat, pressure does not increase the rate of all chemical reactions. As a consequence, pressure is more selective and its effect on the nutritional and sensorial properties of food is less dramatic than that of a conventional heat treatment. A second principle of practical importance in food processing is the so-called ‘isostatic principle’, which states that pressure is uniformly and instantaneously transmitted throughout products with a continuous liquid phase, independent of their size or geometry, provided that they are not packed in a container that resists pressure transmission (Tewari et al., 1999). Application of heat, in contrast, usually involves temperature gradients in the product which change over time and which complicate process design and validation.
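The Le Chatelier principle can also be put in quantitative form. The relation below is the standard thermodynamic expression (not taken from this chapter) linking the pressure-dependence of an equilibrium constant K to the reaction volume change ΔV:

\left( \frac{\partial \ln K}{\partial p} \right)_{T} = -\frac{\Delta V}{RT}

A reaction with ΔV < 0 is therefore favoured by pressure, which is the formal statement of the qualitative rule given above.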
9.2 Factors affecting microbial inactivation by high-pressure processing
Although the potential of HP processing has been widely recognized and some successful commercial applications have been launched, important unsolved scientific and technological problems restrain wide-scale application of high hydrostatic pressure in the food industry. With respect to microbiological safety, not only is there a need for reliable quantitative data on high-pressure inactivation of important food pathogens, but there is also a need for insight into the intrinsic and extrinsic factors that influence this inactivation. Ideally, kinetic studies and modelling of microbial inactivation by HP should be conducted on the most resistant of the microorganisms of concern in a particular product with a given shelf life. Such a model could then serve as a basis for developing processes that effectively inactivate all other microorganisms of concern, because they are less resistant. However, as described in more detail in Section 9.4.2, there are some fundamental problems with respect to the choice of such a reference microorganism to conduct these studies. Below, we will describe the major factors affecting microbial pressure resistance that are related to the microorganism itself (Section 9.2.1), the product properties (Section 9.2.2) and the process parameters (Section 9.2.3). A summarizing overview of these influencing factors is given in Table 9.1.
9.2.1 Microbe-related factors
In the early 1990s, most researchers were convinced that all vegetative microorganisms could be efficiently inactivated, i.e. reduced by six decimal reductions or more, by HP treatment at up to 800 MPa during a few minutes at 20–30 °C (Hoover, 2001). However, this idea has been gradually abandoned as more recent studies revealed the existence of barotolerant vegetative bacteria. Hauben et al. (1997), for example, isolated some extremely barotolerant E. coli strains after multiple rounds of exposure to HP and selection of survivors of an initially pressure-sensitive strain. For comparison, the survival of these barotolerant mutants after 15 min pressure treatment at ambient temperature was 40–85% at 220 MPa/20 °C/15 min and 0.5–1.5% at 800 MPa/20 °C/15 min, compared to 15% at 220 MPa/20 °C/15 min and only 2 × 10⁻⁸% at 700 MPa/20 °C/15 min for the pressure-sensitive parental strain. Similarly, Gao et al. (2001) obtained barotolerant Escherichia coli strains from the laboratory strains TG1, DH5α and HB101. Although these studies involved non-pathogenic strains, it is very likely that such an extreme barotolerance may also be selected in pathogenic E. coli strains or in other foodborne pathogens. A few years later, Karatzas and Bennik (2002) succeeded in isolating a pressure-resistant Listeria monocytogenes mutant after only a single pressurization round at 400 MPa/20 min/20 °C. The generation of barotolerance seems to be particularly problematic for HP processes at ambient temperature and below, precisely those treatment conditions that are an attractive alternative to thermal food preservation because of a maximal retention of food quality. Furthermore, detailed HP inactivation studies have since revealed a large variability in pressure resistance among vegetative bacterial species and even between strains within a single species. This is illustrated in Fig. 9.1, which shows a compilation of published data on HP inactivation of E. coli and L. monocytogenes strains. It can be seen that the strain variation in HP inactivation extends from the low (200–300 MPa) to the high pressure range (700–800 MPa). A similar picture emerges for other bacteria. For example, Alpas et al. (1999) reported a viability loss, expressed as decimal reductions, ranging from 0.7 to 7.8 among seven Staphylococcus aureus strains and from 5.5 to 8.3 among six Salmonella strains after pressure treatment in 1% peptone solution at 345 MPa/25 °C/5 min. The physiological state of a cell or a bacterial population also appears to be a significant factor in pressure resistance. On entry to the stationary phase of growth, bacteria undergo a number of genetically programmed changes that lead to an increased resistance against adverse conditions including high temperature, oxidative stress, low pH and high salt concentration (Kolter et al., 1993). Not unexpectedly, it has been found that stationary phase cells also display increased resistance to HP treatment when compared to exponentially growing cells (Linton et al., 2001). Finally, it has been repeatedly suggested that the growth temperature affects the pressure resistance of bacteria (McClements et al., 2001; Casadei et al., 2002). Lower growth temperatures generally correlate with an enhanced pressure resistance, and this has been ascribed to the higher membrane fluidity due to the increased incorporation of unsaturated fatty acids during growth at low temperature.
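As a side note, survival percentages such as those quoted above translate directly into decimal reductions. The short Python sketch below (illustrative only, not part of the cited studies) performs the conversion:

import math

def log10_reductions(percent_survival: float) -> float:
    # Convert a percentage of surviving cells into decimal (log10) reductions.
    return -math.log10(percent_survival / 100.0)

# Values of the kind quoted in the text, used here only as an illustration:
for pct in (40.0, 1.5, 15.0, 2e-8):
    print(f"{pct:g}% survival corresponds to {log10_reductions(pct):.1f} decimal reductions")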
Table 9.1 Overview of the effect of different factors on pressure resistance of vegetative bacteria. The arrows indicate whether a factor increases (↑), decreases (↓) or has no effect on pressure resistance when compared to the reference method*

Factors | Effect of the influencing factor on bacterial survival after HP* | Target | Reference

Microbial related
Extreme barotolerance, acquired | ↑ | E. coli, L. monocytogenes | Hauben et al., 1997; Gao et al., 2001; Karatzas and Bennik, 2002
Extreme barotolerance, naturally existing | ↑ | E. coli | Benito et al., 1999; Alpas et al., 1999
Strain/species variability | various | E. coli, L. monocytogenes, S. aureus, Salmonella | Alpas et al., 1999; Figure 9.1
Physiological state: exponential growth phase | ↓ | E. coli | Linton et al., 2001
Growth temperature (low) | conflicting data | psychrotrophic bacteria | McClements et al., 2001; Casadei et al., 2002

Product related
Low pH | no effect | yeast and mould inactivation | Ogawa et al., 1990; Reyns et al., 2000
Low pH | ↓ | vegetative bacteria inactivation | Garcia-Graells et al., 1998; Linton et al., 2000
Low aw | ↑, solute dependent | Zygosaccharomyces bailii, E. coli; Rhodotorula rubra; Lactococcus lactis | Palou et al., 1997; Van Opstal et al., 2003; Oxen and Knorr, 1993; Molina-Hoppner et al., 2004
Composition food matrix | ↑ | various microorganisms | Gervilla et al., 2000
Fat content | ↑ or no effect | E. coli, L. monocytogenes; various microorganisms | Styles et al., 1991; Garcia-Graells et al., 1999; Gervilla et al., 2000
Minerals Ca2+, Fe2+, Mg2+, Mn2+ | ↑ | E. coli | Hauben et al., 1998
Minerals Zn2+, Ni2+, Cu2+, Co2+ | ↓ | E. coli | Hauben et al., 1998
Biopreservatives: lactoperoxidase | ↓ | various microorganisms | Garcia-Graells et al., 2003
Biopreservatives: lysozyme | ↓ or no effect | various microorganisms | Masschalck et al., 2001

Process related
Pressure intensity (100–600 MPa) | ↓ | various microorganisms | Dogan and Erkmen, 2004
Treatment time (0–60 min) | ↓ (< 15 min) | E. coli | Cheftel et al., 1995
Treatment time (0–60 min) | tailing (> 15 min) | various microorganisms | Dogan and Erkmen, 2004
Treatment temperature (< 50 °C) | ↓ (30–50 °C) | various microorganisms | Patterson and Kilpatrick, 1998
Treatment temperature (< 50 °C) | ↑ (5–20 °C) | P. fluorescens, Lb. helveticus, L. innocua | Trujillo et al., 2002
Treatment temperature (< 50 °C) | ↓ (5–20 °C) | E. coli, S. aureus | Trujillo et al., 2002
Treatment temperature (< 50 °C) | ↓ (10–20 °C) | E. coli | Gervilla et al., 1997
Multiple pressure treatment cycles | ↓ | E. coli | Masschalck et al., 2000; Ponce et al., 1998b

* Compared to the reference method: pressure-sensitive strain, stationary phase cells, growth temperature 37 °C, pH 7, high aw (0.99), buffer, pressurization temperature 20 °C, pressurization time 15 min, 1 pressure cycle.
[Fig. 9.1: scatter plots of log N0/N versus pressure (MPa) over 100–800 MPa, with fitted regression lines (a) y = 0.0062x + 0.3132, R² = 0.1499 and (b) y = 0.0184x – 3.6343, R² = 0.5143.]
Fig. 9.1 Compilation of published data on pressure resistance of (a) Escherichia coli (77 data points, 19 strains) and (b) Listeria monocytogenes (63 data points, 17 strains). E. coli strains: 931, 932, 933, C9490, 30–2C4, NCTC12079, W2–2, H1071, NCTC8003, MG1655, LMM1010, LMM1020, LMM1030, 11229, NCTC9001, C7927, SLR503, 35748-88 and ATCC25922 (Ludwig et al., 1994; Patterson et al., 1995; Hauben et al., 1997; Kalchayanand et al., 1998; Benito et al., 1999; Ter Steeg et al., 1999; Alpas, 1999; Pagan et al., 2001; Van Opstal et al., 2005; Yamamoto et al., 2005). L. monocytogenes strains: Scott A, CA, 11994, 2433, OH2, NCTC7973, LMG11387, NCTC11994, CECT414, CECT910, LMG11387, LMG13568, CIP78.44, CIP79.45, V7, 35091 and SLR1 (Mackey et al., 1995; Patterson et al., 1995; Simpson and Gilmour, 1997; Stewart et al., 1997; Kalchayanand et al., 1998; Garcia-Graells et al., 2003; Alpas, 1999; Tholozan et al., 2000; Karatzas and Bennik, 2002). Conditions of pressurization are similar (buffer of neutral pH, 25–30 °C, 10 min treatment).
The higher membrane fluidity is then supposed to naturally counteract the membrane gelling effect of high pressure. However, while an increased unsaturated fatty acid content may be a logical and successful adaptation in piezophilic and psychrophilic bacteria, it can be questioned whether this would indeed also increase the resistance of bacteria to HP inactivation, because the effects of pressure on membrane phase transition are reversible.
9.2.2 Product-related factors
It is widely recognized that the ability of bacteria to survive HP treatment is enhanced in nutritionally rich media. This protective effect is believed to have a dual origin: minerals, carbohydrates, proteins or fat are believed to reduce the damage inflicted on the bacteria, but also to serve as a supply of metabolizable components supporting damage repair (Hoover et al., 1989). This situation prevails in most low-acid foods. Low pH, on the other hand, enhances the inactivation of vegetative bacteria by high pressure, prevents repair and outgrowth and can even cause further inactivation of vegetative cells that have been sublethally injured by the pressure treatment (Linton et al., 1999). Interestingly, even the most HP-resistant vegetative bacteria known to date will become sensitized to low pH after HP treatment and eventually undergo complete inactivation in the first 2–3 days after HP treatment (Garcia-Graells et al., 1998). The HP inactivation of yeasts and moulds, in contrast, seems to be at most marginally pH-dependent (Ogawa et al., 1990; Reyns et al., 2000). However, since these organisms are intrinsically more pressure-sensitive than most bacteria, they do not constitute a major problem for the HP preservation of acidic products. Several studies have shown that HP treatment can be used to ensure safety and extend the shelf life of acidic products (Garcia-Graells et al., 1998; Linton et al., 2000). For orange juice, for example, the shelf life under chilled storage could be extended from about ten days to more than five weeks (Tonello et al., 1997). Another important product-related factor affecting the pressure sensitivity of bacteria is water activity (aw). In line with what has been observed for HP-induced denaturation of proteins and, in particular, inactivation of enzymes, and with the well-documented enhanced microbial heat tolerance at reduced aw, aw depression as a result of high salt or sugar concentration has been postulated to exert a baroprotective effect on microorganisms. This has been confirmed with several microorganisms including the yeasts Zygosaccharomyces bailii (Palou et al., 1997) and Rhodotorula rubra (Oxen and Knorr, 1993), and E. coli (Van Opstal et al., 2003). The protective effect of low aw was further shown to vary with the solute added (Oxen and Knorr, 1993; Molina-Hoppner et al., 2004). As already mentioned above, the pressure sensitivity of vegetative bacteria is also influenced by the composition of the food matrix. Bacteria are generally more resilient in a complex matrix like milk or meat than in a buffer at the same pH and aw. Chen and Hoover (2003b) for instance demonstrated a strong baroprotective effect on Yersinia enterocolitica in whole ultra-high temperature treated (UHT)
milk, with inactivation levels 3.5–4.5 logs lower than in phosphate buffer when treated at 350–450 MPa/22 °C for 10 min. Hugas et al. (2002) reported 1.12–3.46 log units less inactivation of Carnobacterium piscicola LMG2739, Enterococcus faecium CTC492, Lactobacillus sakei CTC494 and CTC746, Leuconostoc carnosum CTC747, L. innocua CTC1014, Pediococcus acidilactici F, S. carnosus LTH2102 and E. coli CTC1007 and CTC1023 in cooked ham homogenized with water (3:1) than in phosphate buffer after 500 MPa/40 °C/10 min treatment. On the other hand, for E. coli CTC1018 no significant difference in inactivation was detected in the same study. In addition, with regard to the effect of fat on the inactivation of microorganisms, contradictory information is found in the literature. Some authors found a baroprotective effect of fat on inactivation of vegetative bacteria (Styles et al., 1991; Garcia-Graells et al., 1999), while others did not observe this (Garcia-Risco et al., 1998; Gervilla et al., 2000). Gervilla et al. (2000), for example, studied the baroprotective effect of fat (0, 6 and 50%) on inactivation of E. coli, Pseudomonas fluorescens, L. innocua, S. aureus and Lb. helveticus in ovine milk and found that all microorganisms were more resistant in ovine milk per se (0% fat) than in buffer, but that the fat content (6 and 50%) did not affect barotolerance. These contradictory findings obviously complicate the development of safe HP preservation processes. A noteworthy factor that affects the pressure resistance of microorganisms and that is related to product composition is the presence of certain minerals. Hauben et al. (1998) reported a significant protective effect of Ca2+ on HP inactivation of E. coli MG1655 in a buffer system. After pressure treatment at 300 MPa/20 °C/15 min, 2–5 log more survivors were detected in the presence of 0.5–80 mM Ca2+. Furthermore, 0.5–10 mM Fe2+, Mg2+ and Mn2+ also significantly enhanced the survival of E. coli. At a concentration of 0.5 mM, none of these cations had any effect on the heat resistance of E. coli, indicating a specific baroprotective effect. In contrast, Zn2+, Ni2+, Cu2+ and Co2+ increased pressure inactivation of the same bacteria (Hauben et al., 1998). Finally, antimicrobials naturally present in a food matrix, such as lysozyme, lactoferrin or the lactoperoxidase system in milk, have also been reported to drastically decrease the pressure resistance of vegetative bacteria (Masschalck et al., 2000; Garcia-Graells et al., 2003).
9.2.3 Process parameters
An HP treatment is defined by three process parameters: pressure, temperature and time of processing. It is in this respect more complicated than a heat treatment, where the only process parameters are time and temperature. Apart from a few rare exceptions (Van Opstal et al., 2005), pressure inactivation of vegetative bacteria generally increases with increasing pressure at constant temperature and treatment time. Moreover, at constant pressure and temperature, inactivation also increases with increasing treatment time, although the rate of inactivation may become very low for extended treatment times, a phenomenon known as ‘tailing’ of the survival/inactivation curve.
The effect of temperature on HP inactivation of vegetative bacteria is more complex. While it is well documented that elevated temperature (30–50 °C) promotes HP inactivation of microorganisms (Patterson and Kilpatrick, 1998), the effect of low temperature (< 20 °C) on inactivation is less clear. In ewe’s milk, for example, P. fluorescens, Lb. helveticus and L. innocua showed higher resistance at 25 °C than at 4 °C, while the opposite was detected for E. coli and S. aureus (Trujillo et al., 2002). These results confirmed an earlier report stating that E. coli was most pressure-resistant at 10 °C, and P. fluorescens at 25 °C, in ewe’s milk (Gervilla et al., 1997). This non-monotonic effect of temperature has been incorporated in a number of models (see below). Finally, interrupted pressure treatments have been reported to be more efficient than an equivalent continuous treatment of the same total duration, both in laboratory culture media (Masschalck et al., 2000) and in food matrices (Ponce et al., 1998b).
9.3 Current models: strengths and weaknesses
9.3.1 Kinetic behaviour
An important issue in the quantification of inactivation of important food-related pathogens by HP is the type of kinetic behaviour that best describes bacterial inactivation at constant pressure and temperature. The kinetics of microbial inactivation by heat is traditionally described by a first-order decay, meaning that the number of survivors decreases by a constant factor per unit of heating time throughout the heating process. This first-order behaviour can be well explained by considering bacterial inactivation by heat as a stochastic process resulting from a distribution of heat energy according to a Maxwell–Boltzmann function. Even if all cells within the population are considered equally resistant to heat, they will not die at the same time because the heat energy is not equally distributed over the cells (or over target molecules such as vital enzymes within the cells). At a certain temperature, there is an average probability for each cell to become lethally damaged by heat. In reality, as will be pointed out below, not all cells in a population have equal heat resistance, resulting in deviations from a perfect first-order behaviour. In several cases, HP bacterial inactivation also proceeds via a first-order reaction and can be explained by a similar theoretical concept. However, as data on the HP inactivation kinetics of microorganisms accumulate in the literature, remarkable and consistent deviations from this first-order behaviour become apparent. Sometimes an entirely different kinetic behaviour can even be observed for a single organism in different matrices (O’Reilly et al., 2000; Van Opstal et al., 2005). Inactivation kinetics can appear as sigmoid-shaped curves with both a shoulder and a tail, as curves with a shoulder only (Ulmer et al., 2000; Castillo et al., 2004) and especially as curves with a tail only (Lee et al., 2001; Chen and Hoover, 2003b). For L. monocytogenes, for example, a non-first-order inactivation behaviour due to tailing was proposed at 350, 375 and 450 MPa in
phosphate-buffered saline pH 7.0 at ambient temperature (Simpson and Gilmour, 1997), at 350 MPa/30 °C in buffer (Tay et al., 2003) and at 400 and 500 MPa at 22, 40, 45 and 50 °C in UHT whole milk (Chen and Hoover, 2003a). Chen and Hoover (2003b) also observed tailing for Y. enterocolitica at 300, 350, 400 and 450 MPa in 0.1 M Na-phosphate buffer pH 7.0 and at 350, 400, 450 and 500 MPa in UHT whole milk. Van Opstal et al. (2005) even reported both convex and concave inactivation curves for a single organism, E. coli, in a given medium, depending on the pressure–temperature process conditions. Another remarkable observation is that the rate of microbial inactivation does not always increase with increasing pressure. This has been observed both for the yeast R. rubra (Aertsen et al., 2003) and the bacterium E. coli (Pagan and Mackey, 2000; Van Opstal et al., 2005). In R. rubra, there is some experimental evidence to support the following explanation: at pressures of 150–190 MPa, the selective knockout of metabolic functions results in a metabolic imbalance and cytosolic acidification, culminating in cell death; at 190–230 MPa, the metabolic functions causing acidification are also knocked out, but truly vital cell functions remain intact and the cells thus survive better than at 180–190 MPa; pressures above 230 MPa, finally, affect vital targets, resulting in increased inactivation. In E. coli, the latter phenomenon was not observed in all food matrices (Van Opstal et al., 2005), confirming the pivotal role of the medium on inactivation kinetics (Chen and Hoover, 2003a; Claeys et al., 2003). Inactivation of bacterial spores also does not necessarily increase with increasing pressure, but the underlying reason is different. Under the conditions used in most studies (< 800 MPa, < 60 °C), spore inactivation occurs only after the induction of spore germination by high pressure, and the latter is suppressed or incomplete at pressures > 500–600 MPa (Wuytack et al., 1998) (see Section 9.4.5 for further details). Generally, the underlying causes of deviations from log-linear inactivation curves are poorly understood, and there may be different reasons for this in different cases (Smelt et al., 2002). Sometimes incorrect interpretations due to survivor detection limits may form a possible explanation but, mostly, tailing phenomena cannot be dismissed as experimental artefacts. A commonly used explanation for tailing phenomena is that the microbial population consists of several subpopulations, each with its own inactivation kinetics. The survival curve is then the result of several inactivation patterns, giving rise to non-linear survival curves (Ulmer et al., 2000). Alternatively, it has been proposed that very mild pressures and very long processing times (several hours) may induce a stress response, involving bacterial adaptation mechanisms. As a consequence, the bacteria may become more pressure-resistant during the process, giving rise to tailing. This is, however, unlikely at pressures > 70 MPa, because at these pressures de novo protein synthesis is not possible. Shoulders in inactivation curves may be an indication that cellular damage has to accumulate to a certain threshold before the viability of the cell is lost. It cannot be excluded, however, that cell clumping may have caused some artefactual inactivation curve shoulders in some studies.
Although heat and/or HP preservation processes applied in practice generally incorporate a generous safety margin, tailing and shoulder phenomena certainly
deserve attention in view of the increasing trend towards minimal processing technologies. Different models have been proposed in the literature to describe these different types of kinetic behaviour (Smelt et al., 2002). The development of a mathematical model describing microbial inactivation generally follows a two-step approach. First, a primary model describing inactivation as a function of time is formulated to describe bacterial inactivation data obtained at constant pressure and temperature. Second, the dependence of the parameters of the primary model on pressure and temperature is described in a secondary model. The objective of the following section is to highlight the strengths and weaknesses of the most commonly used primary and secondary mathematical models for modelling pressure–temperature inactivation of microorganisms.
9.3.2 Primary modelling: kinetics of inactivation

First-order model
As pointed out earlier, this modelling approach assumes that all cells in the population have equal resistance to a lethal treatment and that their inactivation is stochastic, resulting in a linear relationship between the logarithmic inactivation and treatment time at constant pressure and temperature (Schaffner and Labuza, 1997):

\log \frac{N_0}{N} = \log \frac{N_0}{N_0'} + \frac{t}{D} \qquad (t \ge 0) \qquad [9.1]

where N0/N0' is the initial reduction factor (after pressure build-up and temperature equilibration, N0' being the cell count at the start of the holding time), N0 is the initial number of cells (cfu ml–1), N is the number of survivors (cfu ml–1), t is the exposure time (min) and D is the decimal reduction time, i.e. the time required for one log10 reduction of the number of cells. D can also be expressed as 2.303/k, where k is the first-order inactivation constant. Although the underlying assumption of equal resistance of all microorganisms within the population can be questioned, the first-order kinetic model is a commonly used, simple and accurate type of model that has been successfully applied in many, though not all, HP microbial inactivation studies. This is, for example, the case for inactivation of L. innocua in liquid UHT dairy cream at 25 °C and 450 MPa (Raffalli et al., 1994), for E. coli, L. innocua and S. enteritidis in liquid whole egg at 2 and 20 °C and 400 MPa (Ponce et al., 1998b), for E. coli in a physiological saline solution at 20 °C and 200, 250, 300 and 350 MPa (Smelt and Rijke, 1992), for E. coli in broth, milk, peach and orange juice at 300–700 MPa (Erkmen and Dogan, 2004), and for L. monocytogenes in raw milk at 150–400 MPa/20 °C and holding times up to 120 min (Mussa et al., 1999).
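As a minimal numerical sketch (not part of the chapter’s own material), Eq. (9.1) can be coded directly once a D-value has been estimated for a given product and pressure–temperature combination; the parameter values below are hypothetical:

import numpy as np

def first_order_log_reduction(t_min, D_min, initial_reduction=0.0):
    # log10(N0/N) according to Eq. (9.1): the reduction achieved during
    # pressure build-up (initial_reduction) plus t/D during holding.
    return initial_reduction + np.asarray(t_min, dtype=float) / D_min

t = np.array([0.0, 2.0, 4.0, 8.0, 12.0])          # holding times (min)
print(first_order_log_reduction(t, D_min=4.0))     # 0.0, 0.5, 1.0, 2.0, 3.0

# Relation between D and the first-order rate constant k (1/min): D = 2.303/k
k = 0.58
print(2.303 / k)                                   # corresponding D-value (min)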
An important advantage of the first-order inactivation model in expression (9.1) is the clear physical meaning of the parameter D. However, as discussed above (Section 9.3.1), deviations from first-order kinetics such as shoulders and tails are very common, and they require other, more complex model types. In this context, it is important to emphasize that the HP inactivation of a single organism in two different matrices cannot necessarily be described by the same model type, as is often assumed (Van Opstal et al., 2005). The most commonly used non-first-order model types to describe HP microbial inactivation are described below.

Biphasic model
The biphasic model assumes the microbial population to consist of two subpopulations which independently decay according to first-order kinetics (Xiong et al., 1999). The overall equation of this model can be written as:

\log \frac{N_0}{N} = -\log \left[ \alpha \, 10^{-t/D_1} + (1 - \alpha) \, 10^{-t/D_2} \right] \qquad [9.2]

This is in fact an extension of model (9.1) for two subpopulations, with D1 and D2, respectively, the decimal reduction time of the more pressure-resistant subpopulation N1 and the decimal reduction time of the more pressure-sensitive subpopulation N2; α is the fraction of N1 in the original population, expressed as:
\alpha = \frac{N_1(t=0)}{N_1(t=0) + N_2(t=0)} \qquad [9.3]
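A possible implementation of the biphasic model of Eqs (9.2) and (9.3) is sketched below; the parameter values are invented purely to show how a small resistant subpopulation produces a tail:

import numpy as np

def biphasic_log_reduction(t_min, alpha, D1_min, D2_min):
    # log10(N0/N) for two independent first-order subpopulations:
    # a fraction alpha with decimal reduction time D1 (pressure-resistant)
    # and a fraction (1 - alpha) with D2 (pressure-sensitive).
    t = np.asarray(t_min, dtype=float)
    return -np.log10(alpha * 10.0 ** (-t / D1_min) +
                     (1.0 - alpha) * 10.0 ** (-t / D2_min))

# Hypothetical example: 0.01% resistant subpopulation
t = np.linspace(0.0, 30.0, 7)
print(biphasic_log_reduction(t, alpha=1e-4, D1_min=20.0, D2_min=2.0))
# The curve drops steeply while the sensitive cells dominate and then tails
# off at roughly -log10(alpha) once only the resistant fraction remains.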
The biphasic model has been successfully applied to describe the inactivation kinetics of several microorganisms in various food matrices, as a function of pressure and temperature. Reyns et al. (2000), for example, reported a biphasic inactivation curve for the yeast Z. bailii, with a first part covering four to six decades and obeying first-order kinetics, followed by a tail corresponding to a small fraction of cells that were inactivated at a much lower rate. In such circumstances, two rates (or two D-values) can be calculated, one for each assumed subpopulation. Also Van Opstal et al. (2005) reported biphasic pressure or pressure–temperature inactivation curves for E. coli in Hepes-buffer and, to a minor extent, in carrot juice for specific pressure/temperature/time combinations. The biphasic model has more flexibility compared to the first-order model and can provide a suitable alternative for fitting inactivation curves with tails. In addition, it preserves the physical meaning of the parameters used in the first-order model. The models described below provide even more flexibility if the biphasic model also falls short.

Weibull model
The Weibull model takes the population heterogeneity a step further, assuming the existence of several subpopulations, each with its own inactivation kinetics. The survival curve is then the result of several inactivation courses, giving rise to a non-
linear survival curve (Van Boekel, 2002). The Weibull model takes into account biological variation within a culture of cells and considers lethal events as probabilistic rather than deterministic. It is a flexible, yet simple model to describe bacterial inactivation by HP (Heinz and Knorr, 1996; Peleg and Cole, 1998):
\log \frac{N_0}{N} = \frac{1}{2.303} \left( \frac{t}{\alpha} \right)^{\beta} \qquad [9.4]
In this model, α is the scale parameter (min) and β is the shape parameter (dimensionless). Depending on the value of the shape parameter β, the inactivation curve can be upward concave (β < 1), linear (β = 1) or downward concave (β > 1). Although the Weibull model is of an empirical nature and the physical meaning of the model parameters is less obvious compared to the models described above, a link with physiological effects has been proposed by Van Boekel (2002). β < 1 was suggested to indicate that the surviving cells at any time-point in the inactivation curve have the capacity to adapt to the applied stress, whereas β > 1 would indicate that the remaining cells become increasingly damaged. The Weibull model has been successfully used to model inactivation of L. monocytogenes by simultaneous application of HP and mild heat (Chen and Hoover, 2003a) and by HP alone (Chen and Hoover, 2004), HP inactivation of a variety of Vibrio spp. in pure culture and in inoculated oysters (Hu et al., 2005), HP/T inactivation of L. innocua in peptone solution (Buzrul and Alpas, 2004), HP inactivation of Y. enterocolitica in phosphate buffer pH 7.0 and UHT whole milk (Chen and Hoover, 2003b), HP/T inactivation of E. coli in Hepes-buffer pH 7.0 (Van Opstal et al., 2005) and HP/T inactivation of Alicyclobacillus acidoterrestris in culture medium pH 3.0 (Buzrul et al., 2005). Models based on the Weibull distribution are simple, as only two parameters determine the course of the curve, yet they are also versatile, as they can accurately fit straight, upward concave and downward concave survival curves. Probably their most important drawback is the lack of biological significance of the model parameters, which limits their application.
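To illustrate how the two Weibull parameters might be estimated in practice, the sketch below fits Eq. (9.4) to a small synthetic dataset with scipy; both the data and the starting values are invented for illustration:

import numpy as np
from scipy.optimize import curve_fit

def weibull_log_reduction(t_min, alpha_min, beta):
    # log10(N0/N) = (1/2.303) * (t/alpha)**beta, Eq. (9.4)
    return (1.0 / 2.303) * (np.asarray(t_min, dtype=float) / alpha_min) ** beta

# Synthetic, upward-concave (tailing) survival data:
t_obs = np.array([1.0, 2.0, 5.0, 10.0, 15.0, 20.0])
logred_obs = np.array([0.8, 1.1, 1.7, 2.3, 2.6, 2.9])

(alpha_hat, beta_hat), _ = curve_fit(weibull_log_reduction, t_obs, logred_obs,
                                     p0=(1.0, 1.0), bounds=(0, np.inf))
print(alpha_hat, beta_hat)   # beta < 1 is expected here, consistent with tailing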
Modified Gompertz model
The modified Gompertz equation was originally proposed by Gibson et al. (1988) to model growth curves and was later adapted to model heat and pressure inactivation kinetics (Linton et al., 1995; Xiong et al., 1999; Erkmen, 2001):

\log \frac{N_0}{N} = \log \frac{N_0}{N_0'} - A \exp \left\{ -\exp \left[ \frac{\mu e}{A} (\lambda - t) + 1 \right] \right\} \qquad [9.5]
In this equation, A (log N/N0) is the lower asymptote value, µ is the inactivation rate (1/min), λ is the time for complete inactivation (min), t is the time (min) and e is the base of the natural logarithm (exp(1) ≈ 2.718). This highly flexible Gompertz equation is capable of
fitting survival curves which are linear, those which display an initial shoulder followed by a linear course and those which are sigmoidal. While certainly being among the most popular non-linear models to describe microbial inactivation, the modified Gompertz model often produces a poorer fit than the Weibull and log logistic models. This was also the case when this model was applied to HP inactivation, for example in the case of L. monocytogenes in whole milk (Chen and Hoover, 2003a) and Y. enterocolitica in buffer and whole milk (Chen and Hoover, 2003b).

Log logistic model
The log logistic model to describe inactivation kinetics of microorganisms by high pressure was proposed by Cole et al. (1993):

\log N = \alpha + \frac{\omega - \alpha}{1 + e^{\,4\sigma(\tau - \log t)/(\omega - \alpha)}} \qquad [9.6]

In this model, α is the upper asymptote (log cfu ml–1); ω is the lower asymptote (log cfu ml–1); σ is the maximum rate of inactivation [log (cfu/ml)/log (min)] and τ is the logarithmic time to the maximum rate of inactivation [log (min)]. Since log time is not defined at t = 0, a small time (t = 10⁻⁶) was used to approximate t = 0. At t = 10⁻⁶,
\log N_0 = \alpha + \frac{\omega - \alpha}{1 + e^{\,4\sigma(\tau + 6)/(\omega - \alpha)}} \qquad [9.7]
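A direct transcription of Eqs (9.6) and (9.7) into code, with hypothetical parameter values, also makes it easy to verify that log N at t = 10⁻⁶ is essentially equal to the upper asymptote α:

import numpy as np

def log_logistic_logN(t_min, alpha, omega, sigma, tau):
    # log10 N(t) for the log logistic model (Eq. 9.6): alpha = upper asymptote,
    # omega = lower asymptote, sigma = maximum slope of log N versus log t
    # (negative for an inactivation curve), tau = log10 time at the maximum rate.
    x = np.log10(np.asarray(t_min, dtype=float))
    return alpha + (omega - alpha) / (1.0 + np.exp(4.0 * sigma * (tau - x) / (omega - alpha)))

alpha, omega, sigma, tau = 8.0, 1.0, -3.0, 0.8     # hypothetical values
print(log_logistic_logN(1e-6, alpha, omega, sigma, tau))            # ~8.0 = alpha (Eq. 9.7)
print(log_logistic_logN([0.1, 1.0, 10.0, 60.0], alpha, omega, sigma, tau))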
An important disadvantage of this model is the lack of physical meaning of the model parameters.

Selection of a primary model
Several criteria to compare the performance of different models in fitting a set of experimental data are reported in the literature. Among the most frequently used are (i) the mean square error (MSE) (Neter et al., 1996), (ii) the R² and adjusted R² values (Neter et al., 1996), (iii) the accuracy factor Af (Ross, 1996) and (iv) model simplicity. Generally, the fitting performance of a model increases with decreasing MSE and with adjusted R² and Af values as near to 1 as possible. In addition, a good model should be as simple as possible to avoid overfitting. Thus, a correction for the number of parameters should be taken into account, since models with more independent parameters evidently tend to fit better. Criteria such as MSE and adjusted R² take such a correction into account, but it is important to note that the adjusted R² should not be used if there is no Y-intercept. Guan et al. (2005), for example, show a data set of log10 inactivation values of S. typhimurium treated at 500 MPa in whole milk as a function of time that was fitted by four different mathematical models: the first-order kinetic model (linear), the Weibull model, the modified Gompertz model and the log logistic model. As shown in Fig. 9.2 and pointed out by the MSE values, in this specific case the log logistic model produced a significantly better fit than the first-order, modified Gompertz and Weibull models.
Fig. 9.2 Survival curve of Salmonella typhimurium DT104 at 500 MPa in UHT whole milk. Data were fitted with linear, modified Gompertz, Weibull and log-logistic functions. MSE-values are respectively 1.918, 0.601, 0.764 and 0.523 (reprinted from Int J Food Microbiol, 104, Guan D et al., ‘Inactivation of Salmonella typhimurium DT 104 in UHT whole milk by high hydrostatic pressure’, 145–153, Copyright 2005, with permission from Elsevier).
However, this does not imply that the log logistic model will necessarily perform better than the others for other data sets, even with the same organism. This is because HP inactivation is influenced by many factors, and inactivation in a different product may show a different kinetic behaviour. The same is of course true for heat inactivation but, because of pressure as an extra parameter, HP inactivation tends to be more complex. An interesting retrospective study was conducted to analyze the importance of various factors on the kinetics of heat inactivation for a large number of bacteria (van Asselt and Zwietering, 2006). These authors collected and analyzed over 4000 D-values from the literature and concluded that the effects of most factors reported to influence the D-value are smaller than the overall variability of the published D-values. Even the effects of shoulders disappeared in their overall analysis. This does not mean that shoulders do not exist in particular instances, but rather indicates that the first-order model featuring D- and z-values is probably the best generally applicable model. Insufficient data are available to do a similar analysis for HP inactivation of most pathogens but, since HP inactivation also proceeds via a first-order reaction in the majority of cases, a similar recommendation can probably be made here. Even if the first-order model is not the best for a particular dataset according to the quantitative criteria
discussed above, it may still be the best choice, because it is likely to be valid under a wider range of different situations than other more complex models.
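The selection criteria discussed above take only a few lines of code. The sketch below (one common convention for each statistic, illustrative only) returns the MSE, adjusted R² and accuracy factor Af for a fitted model:

import numpy as np

def goodness_of_fit(observed, predicted, n_params):
    # MSE (residual mean square), adjusted R2 and accuracy factor Af (Ross, 1996)
    # for observed versus predicted log10 reductions; n_params = number of fitted parameters.
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    n = obs.size
    sse = np.sum((obs - pred) ** 2)
    mse = sse / (n - n_params)
    r2 = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params)
    af = 10.0 ** np.mean(np.abs(np.log10(pred / obs)))
    return mse, r2_adj, af

# Hypothetical observed and predicted log10 reductions for one model:
print(goodness_of_fit([0.5, 1.2, 2.0, 2.9, 3.6], [0.6, 1.1, 2.1, 2.8, 3.8], n_params=2))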
9.3.3 Secondary modelling
Secondary models express the temperature- and pressure-dependence of the parameters in the primary model. Again, different model types can be used for this purpose. A frequently used choice is a polynomial model, i.e. an equation of the following general form:

Y = a_n X^n + a_{n-1} X^{n-1} + \cdots + a_2 X^2 + a_1 X + a_0 \qquad [9.8]
where n is a non-negative integer that defines the order of the polynomial and a_0…a_n are fitting parameters. Polynomial models are among the most frequently used secondary models to fit primary modelling parameters with respect to pressure and temperature. They are flexible and simple with well-known properties, and they are easy to apply even for complex secondary modelling of microbial inactivation kinetics. On the other hand, their use also exhibits a number of limitations, such as poor inter- and extrapolation properties: polynomials may provide good fits within the range of data, but they frequently deteriorate rapidly outside this range. The poor fitting of asymptotic datasets is also a significant disadvantage. Finally, polynomial models have a shape/degree trade-off: they may have a relatively large number of parameters, resulting in over-parameterization and highly unstable models. In the case of a first-order primary model, the secondary model expresses the dependence of the decimal reduction time D on pressure and temperature. A convenient visualization of this relationship is a p–T contour plot showing iso-D (or iso-inactivation) curves. Such a contour plot provides useful information on the process conditions necessary to obtain a desired level of microbial inactivation in a directly accessible format. Such information is critical for the establishment of safe HP processing conditions. Figure 9.3 shows an example of a p–T contour plot representing a 4 log10 microbial destruction curve of S. cerevisiae and Lb. plantarum in growth medium (a) and a p–T iso-D contour plot of Z. bailii in 40 mM Tris–HCl buffer at pH 6.5 (b). Each line represents combinations of pressure and temperature resulting in the same D-value, i.e. the same inactivation rate. A variant is the pressure–time plot shown in Fig. 9.4, in which the curves represent pressure–time combinations resulting in a 7 log10 reduction of L. rhamnosus in orange juice at different process temperatures (a) and in four different fruit juices at 40 °C (b). p–T contour plots like those in Fig. 9.3 illustrate a remarkable feature that is observed for HP/T inactivation of microorganisms under some circumstances: at a constant pressure, the inactivation rate shows a minimum (a maximum for the D-value) as a function of temperature (Sonoike et al., 1992; Reyns et al., 2000; Perrier-Cornet et al., 2005; Van Opstal et al., 2005). This minimum is usually found around 20–30 °C. Below this minimum, temperature and pressure have an antagonistic effect on microbial inactivation. In this respect, these curves resemble the p–T stability curves of many proteins (Fig. 9.5). Some proteins additionally display a minimum in inactivation (or denaturation) as a function of pressure, and a similar behaviour was also claimed for E. coli (Smeller, 2002), but this has not been confirmed in more extensive experiments (Van Opstal et al., 2005).
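As an illustration of Eq. (9.8) used as a secondary model, the sketch below fits a second-order polynomial in pressure and temperature to hypothetical log10 D values by least squares; the data are invented and serve only to show the mechanics:

import numpy as np

# Hypothetical (pressure in MPa, temperature in °C, log10 D in min) observations:
P = np.array([300, 300, 400, 400, 500, 500, 600, 600], dtype=float)
T = np.array([10, 30, 10, 30, 10, 30, 10, 30], dtype=float)
logD = np.array([1.9, 1.6, 1.4, 1.0, 0.8, 0.4, 0.2, -0.2])

# Design matrix for a second-order polynomial in two variables:
# log10 D = a0 + a1*P + a2*T + a3*P^2 + a4*T^2 + a5*P*T
X = np.column_stack([np.ones_like(P), P, T, P ** 2, T ** 2, P * T])
coef, *_ = np.linalg.lstsq(X, logD, rcond=None)

# Interpolated prediction at an untried p-T combination (extrapolation outside
# the fitted range is unreliable, as noted in the text):
p_new, t_new = 450.0, 20.0
print(np.dot([1.0, p_new, t_new, p_new ** 2, t_new ** 2, p_new * t_new], coef))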
Fig. 9.3 Elliptical pressure–temperature contour plots for 4 log microbial destruction after 10 min treatment time of Saccharomyces cerevisiae and Lactobacillus plantarum in growth medium (a) (reprinted from J Biotechnol, 115, Perrier-Cornet et al., ‘High pressure…’, 405–412, Copyright 2005, with permission from Elsevier) and pressure–temperature iso-D contour plot of Z. bailii in 40 mM Tris–HCl buffer at pH 6.5 (b) (reprinted from Int J Food Microbiol, 56, Reyns et al., ‘Kinetic analysis…’, 199–210, Copyright 2000, with permission from Elsevier). The inner and outer lines on the iso-D contour represent p, T combinations for which D equals 100 min and 10 min, respectively.
Fig. 9.4 Pressure–time conditions to achieve 7 log10 reductions of Lactobacillus rhamnosus GG. (a): in orange juice at different process temperatures; (b): in pear, apricot, apple and mango juice at 40 °C (Buckow et al., 2004).
One consequence of this behaviour is that the temperature- and pressure-dependence of log D cannot be described by a single linear relationship, as is the case with classical D–z models of heat inactivation, and that is why many researchers return to a polynomial model type. For enzyme inactivation by HP, several researchers have used the Eyring equation to model the pressure-dependence of the inactivation rate constant of the primary kinetic model. This equation can be considered as an equivalent of the Arrhenius equation, which expresses the temperature-dependence of the rate constant and which is frequently used in thermal or in HP/T inactivation studies of proteins (Hendrickx et al., 1998; Indrawati, 2000; Ly Nguyen, 2004). Use of both these equations is probably the most generic approach to modelling HP/T inactivation in the field (Weemaes et al., 1998; Indrawati et al., 2002) but, while in principle it is also applicable to microbial inactivation, it has not been used for this purpose to our knowledge.
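For reference, the sketch below gives the generic textbook forms of these two secondary models (not the exact parametrizations of the cited authors), with k_ref the rate constant at a reference temperature T_ref and pressure p_ref, E_a the activation energy and V_a the activation volume:

k(T) = k_{\mathrm{ref}} \exp\!\left[ \frac{E_a}{R} \left( \frac{1}{T_{\mathrm{ref}}} - \frac{1}{T} \right) \right] \qquad \text{(Arrhenius)}

k(p) = k_{\mathrm{ref}} \exp\!\left[ -\frac{V_a}{RT} \,(p - p_{\mathrm{ref}}) \right] \qquad \text{(Eyring)}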
9.3.4 Complexity in real systems
Once a suitable model has been formulated that can accurately describe the original dataset and also independently acquired data under the same experimental setup, it will be important to validate the model in real food systems. Two important complexities of HP inactivation in real food systems will be considered here: inactivation under dynamic pressure–temperature conditions and inactivation in different food matrices.
[Fig. 9.5: (a) pressure (MPa) versus temperature (°C); (b) pressure (kbar) versus temperature (°C), with the native (∆G > 0) and denatured (∆G < 0) regions indicated.]
Fig. 9.5 Schematic pressure–temperature diagram for 2 log inactivation of Escherichia coli after 5 min treatment time (a) (reprinted from Biochim Biophys Acta, 1595, Smeller, ‘Pressure–temperature…’, 11–29, Copyright 2002, with permission from Elsevier) and protein denaturation (b) (reprinted from Comp Biochem Physiol, 116, Balny C et al., ‘Hydrostatic pressure and modelling of combined high pressure – temperature inactivation of the yeast Zygosaccharomyces baillii and proteins: basic concepts and new data’, 299–304, Copyright 1997, with permission from Elsevier).
Dynamic pressure–temperature conditions
The input for developing HP/T inactivation models always consists of kinetic data of inactivation under isobaric and isothermal conditions over a certain p–T domain, and the models are thus in principle only valid to predict inactivation under such conditions. However, real HP/T processes have a dynamic component, mainly during the pressure build-up and pressure release phases, and these can be important in practice. It is not only pressure which is variable during these phases, but also temperature, due to the effect of adiabatic heating/cooling as a result of compression/expansion. If a function describing the time-dependence of pressure and temperature is known, these profiles can be incorporated into the model predictions, but it has to be well understood that this is valid only if the effect of this dynamic phase is equal to the integrated effect of an infinite number of sequential isobaric and isothermal treatments. This need not necessarily be the case: deviations can be anticipated, for example, for very mild processes and long processing times, when microorganisms develop a stress response. As far as we are aware, this type of validation has not been performed for microbial inactivation by HP/T processes. For inactivation of soybean lipoxygenase and Bacillus subtilis α-amylase, it was shown that models developed for static conditions were also valid under dynamic conditions (Ludikhuyze et al., 1997, 1998). Besides the temporal variability of pressure and temperature, there is also a spatial variability of temperature in real HP/T processes. Since pressure acts instantaneously and uniformly through a mass of food independently of its size, shape or composition, a spatial pressure gradient within the vessel does not normally need to be taken into account. However, there are at least two sources of spatial temperature gradients. The first is related to the different degrees to which different products increase in temperature upon compression. The temperature change during adiabatic compression or expansion is given by (Matser et al., 2004):
\frac{dT}{dp} = \frac{\alpha T}{\rho C_p} \qquad [9.9]
In this equation, T is the temperature (K), p the pressure (Pa), α the volumetric expansion coefficient (1/K), ρ the product density (kg/m³) and Cp the specific heat capacity (J/(kg K)). Depending on the type of product, the initial product temperature and the pressure intensity, the adiabatic heating/cooling may vary between 3 and 9 °C per 100 MPa (Balasubramaniam and Balasubramaniam, 2003; de Heij et al., 2003).
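Equation (9.9) can be integrated over the compression phase to estimate the adiabatic temperature rise. The sketch below uses constant, roughly water-like property values purely for illustration (in reality α, ρ and Cp vary with pressure and temperature):

import numpy as np

def adiabatic_temperature_rise(p_final_pa, T0_kelvin, alpha_per_K, rho_kg_m3, cp_J_kgK,
                               n_steps=2000):
    # Stepwise integration of dT/dp = alpha*T/(rho*Cp), Eq. (9.9),
    # from atmospheric pressure up to p_final, assuming constant properties.
    p_grid = np.linspace(1.0e5, p_final_pa, n_steps)
    dp = p_grid[1] - p_grid[0]
    T = T0_kelvin
    for _ in p_grid[1:]:
        T += alpha_per_K * T / (rho_kg_m3 * cp_J_kgK) * dp
    return T - T0_kelvin

# Roughly water-like property values at ambient conditions (assumed, not measured):
print(adiabatic_temperature_rise(600e6, 293.15, alpha_per_K=3.0e-4,
                                 rho_kg_m3=1000.0, cp_J_kgK=4180.0))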
The second source of temperature gradients is heat transfer between the contents of the pressure vessel and the vessel wall and exterior, because the vessel itself is not subjected to significant compression heating (de Heij et al., 2003). As a result, product near the vessel wall will reside at a lower temperature than product in the centre of the vessel (de Heij et al., 2002; Ting et al., 2002). These small temperature differences have been reported to significantly affect the inactivation rate of, for example, B. stearothermophilus spores (de Heij et al., 2002): after pressure treatment at 700 MPa/90 °C/5 min, spore reduction near the vessel wall was 4 log10 units lower than in the centre. When temperature and pressure are known as a function of time and position in the vessel during the HP/T treatment, it will in principle be possible to predict the inactivation of microorganisms more accurately. However, these spatial temperature gradients are equipment and food matrix dependent, and this limits the comparability and transferability of results obtained in different types of equipment and in different food matrices. In other words, it will be important to conduct this type of measurement in every piece of equipment and for every food product with different thermophysical properties (Otero and Sanz, 2003).

Different food matrices
As described in more detail in Section 9.2.2, the pressure sensitivity of microorganisms is not only influenced by general physicochemical properties of the food matrix like pH and aw, but also by its composition. This considerably complicates, if not precludes, the transposition of models developed for one food product to other food products. Generally, bacteria are more resilient in a complex matrix like milk or meat compared to a buffer at the same pH and aw. Furthermore, the same microorganism (E. coli) may not only be more pressure-resistant in a food matrix (carrot juice) than in buffer, but may also reveal completely different kinetic behaviour. A comparison of the iso-D contour plots of E. coli in carrot juice and in buffer is given in Fig. 9.6 (Van Opstal et al., 2005). The top plot shows straight iso-D lines over the entire pressure–temperature domain studied, indicating that the high-pressure resistance of E. coli increases with decreasing temperature in carrot juice. In Hepes-buffer (bottom plot), in contrast, the elliptical shape of the iso-D curves indicates that between 20 and 30 °C a higher pressure is needed to achieve the same inactivation rate than at higher or lower temperatures. The observation in this study that this phenomenon occurs for the same organism in buffer and not in carrot juice indicates that it is not merely a fundamental property of the microbial cell, but notably influenced by the medium in which the cells reside. However, it cannot be excluded that similar shapes of iso-D contour plots can be found in Hepes-buffer and carrot juice, but that for carrot juice the top of the ellipse lies outside the pressure–temperature range studied and consequently also outside the range that is industrially applicable. These data call for studies that aim at a more mechanistic understanding. For the yeast Z. bailii, Reyns et al. (2000) observed that the biphasic model developed in buffer at pH 6.5 was also applicable in low-pH buffer, but not in apple and orange juice, where higher inactivation occurred than predicted by the model.
Fig. 9.6 Iso-D contour plot of Escherichia coli MG1655 in carrot juice (top) and Hepes-buffer (bottom) based on observed D-values. Lines represent P, T combinations with equal D-values (between 1 and 81 min in (a) and between 1 and 71 min in (b)), and D = decimal reduction time in minutes (reprinted from Int J Food Microbiol, 98, Van Opstal I et al., ‘Inactivation of Escherichia coli by high hydrostatic pressure at different temperatures in buffer and carrot juice’, 179–191, Copyright 2005, with permission from Elsevier).
9.4 Further developments in the modelling of pressure–temperature processes
The availability of models that accurately predict bacterial survival would support the food industry in selecting optimum process conditions to achieve the desired target levels of bacterial inactivation, while minimizing production costs and maintaining maximal nutritional and sensorial quality. The major challenges for improving the existing models and making them applicable to the food industry are discussed below.
9.4.1 Standardization of data collection
Standardization of data collection is of major importance to the comparability of different datasets generated by HP processing. In this respect, there is a need to understand the critical issues related to the technology, as described in Section 9.2, and to have knowledge of the common problems and pitfalls that may be encountered when conducting high-pressure microbial inactivation studies. An important problem, particularly in the earlier studies, is the lack of control of thermal effects or the lack of measurement of the temperature during HP treatment (de Heij et al., 2002; Ting et al., 2002). Furthermore, differences in process equipment design and configuration, pressurization fluid, degree of vessel filling, come-up time and time for pressure release, the lack of understanding of important process parameters such as compression heating, and differences in reporting data all complicate comparison of pressure-induced microbial inactivation conducted in different laboratories. Establishing standard definitions and methods would facilitate independent confirmation and meaningful comparison of results.

Balasubramaniam et al. (2004) proposed some guidelines for conducting HP experiments, including the necessity to adequately describe the equipment and samples (vessel size, material of construction, wall thickness, heating and cooling system, reading accuracy of temperature and pressure, the packaging, sample volume) and to provide a sample graph illustrating pressure–time–temperature conditions. Farkas and Hoover (2000) recommended controlling and recording pressure at ± 0.5% (electronic) or ± 1.0% (digital display) level of accuracy. Regular calibration of the high-pressure registration equipment is evidently of critical importance; this usually has to be conducted by specialist companies. Further, as already discussed (Section 9.3.4), knowledge of the temperature fluctuations in the pressure vessel and in the pressurized product is also important. Because of the possible temperature heterogeneity, it is best to incorporate temperature registration equipment at different spots in the product.

One of the main difficulties when modelling heat transfer in HP/T processes is the total lack of thermophysical parameters, such as the adiabatic heat-related temperature change, for specific products, materials, foods or pressurizing fluids in the pressure–temperature range of interest (Meyer et al., 2000). Experimental determination of these properties will be essential to improve the prediction of the temperature changes, and thus the ability of the models to predict microbial inactivation.
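As a concrete illustration of the kind of standardized record implied by the reporting guidelines above, the sketch below defines a minimal experiment descriptor. The schema and field names are my own invention, loosely following the items mentioned by Balasubramaniam et al. (2004); they are not an established standard.

```python
# Illustrative sketch of a standardized record for an HP inactivation experiment.
# Field names are invented for this example; they are not an agreed reporting schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class HPExperimentRecord:
    organism: str                      # unequivocal genus/species/strain designation
    suspending_medium: str
    medium_ph: float
    medium_aw: float
    vessel_volume_ml: float
    vessel_material: str
    wall_thickness_mm: float
    pressurizing_fluid: str
    degree_of_filling_pct: float
    target_pressure_mpa: float
    target_temperature_c: float
    come_up_time_s: float
    pressure_release_time_s: float
    holding_time_s: float
    packaging: str
    sample_volume_ml: float
    # (time in s, value, sensor position): documents the p-T-t history and any gradients
    pressure_log: List[Tuple[float, float, str]] = field(default_factory=list)
    temperature_log: List[Tuple[float, float, str]] = field(default_factory=list)

record = HPExperimentRecord(
    organism="Escherichia coli MG1655", suspending_medium="Hepes buffer",
    medium_ph=7.0, medium_aw=0.99, vessel_volume_ml=590.0, vessel_material="stainless steel",
    wall_thickness_mm=40.0, pressurizing_fluid="water/glycol", degree_of_filling_pct=80.0,
    target_pressure_mpa=400.0, target_temperature_c=20.0, come_up_time_s=90.0,
    pressure_release_time_s=5.0, holding_time_s=600.0, packaging="PE pouch",
    sample_volume_ml=10.0)
```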
Fig. 9.7 Temperature increase of milk samples during pressure treatment due to adiabatic heat. The desired treatment temperature is 40 °C and the applied pressure 400 MPa. The pressure chamber and the contained sebacate were controlled at 40 °C during pressure treatment (reprinted from Innov Food Sci Emerg Technol, 4, Chen H and Hoover DG, ‘Modeling the combined effect of high hydrostatic pressure and mild heat on the inactivation kinetics of Listeria monocytogenes Scott A in whole milk’, 25–34, Copyright 2003, with permission from Elsevier).
Incorporation of heat transfer phenomena in high-pressure models will also allow selection of the best conditions for a given pressure treatment to assure homogeneous results (compression rate, pressurizing fluid, sample geometry, thermoregulating conditions). Therefore, future models should consider the whole high-pressure system so as to take into account all the thermal exchanges involved, including those between the sample, the pressurizing fluid and the steel mass of the vessel.

An interesting and relatively simple approach has recently been proposed to reduce temperature inhomogeneity due to the generation of adiabatic heat by compensating for the temperature increase. The principle of this method, which is illustrated in Fig. 9.7, is to hold the temperature of the sample in the pressure vessel before pressurization below the target temperature to be reached during the pressure holding phase. If this temperature difference is well chosen, the adiabatic heat generated during the pressure build-up will bring the sample temperature close to the target temperature, so that heat transfer and temperature gradients are minimized.
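A minimal sketch of this compensation idea is given below, assuming a fixed compression-heating coefficient (in °C per 100 MPa). In practice this coefficient depends on the product and on temperature and pressure, so the number produced here is illustrative only.

```python
# Sketch of the pre-cooling compensation illustrated in Fig. 9.7: choose the sample
# temperature before pressurization so that adiabatic heating alone brings it to the target.
# The heating coefficient is an assumed constant; real products require measured values.

def required_initial_temperature(target_temp_c, pressure_mpa, heating_per_100mpa=3.0):
    """Initial sample temperature so that compression heating ends near target_temp_c."""
    return target_temp_c - heating_per_100mpa * pressure_mpa / 100.0

target, pressure = 40.0, 400.0
t0 = required_initial_temperature(target, pressure)
print(f"Pre-set sample to ~{t0:.0f} C so that compression to {pressure:.0f} MPa "
      f"reaches ~{target:.0f} C")
```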
Another aspect of standardization relates to the bacterial strains used and their preparation. Unequivocal genus, species and strain designations should be reported and, preferably, the strains used should be available, or be made available, through public strain collections for other investigators. Increasingly, there should be a focus on how to use the modern tools of genomics to fingerprint (‘bar-code’) strains such that their identity, and thus their physiological behaviour, can be unequivocally confirmed. Experiments to generate kinetic inactivation data for model development should be carried out using a pure culture of a single strain of microorganism, because the use of cocktails can lead to results that are difficult to interpret (Balasubramaniam et al., 2004). Strain cocktails can be useful for validation purposes at a later stage.

The suspending medium must be well defined. Factors such as pH, buffering capacity, aw and redox potential should be measured and reported. Methods for enumeration should distinguish between sublethally injured and intact survivors. Plating only on a selective medium gives no information about the injured fraction, which is nevertheless very important for microbiological food safety and stability (Ray, 1989; Everis, 2000). Additionally, the incubation time and temperature of the plates have a marked effect on the recovery of survivors and must therefore also be mentioned. Finally, with respect to the quantification of the synergistic action of high pressure with another preservation factor such as a biopreservative, there is a clear need to develop simple and uniform methodology to identify and quantify synergistic effects (Dufour et al., 2003).
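The enumeration point above can be made quantitative in a very simple way: plating in parallel on a non-selective medium (recovering intact plus injured survivors) and on a selective medium (recovering only intact cells) allows the injured fraction to be estimated. The counts in the sketch below are hypothetical.

```python
# Sketch: estimating the sublethally injured fraction from paired plate counts.
# Assumes the non-selective medium recovers intact plus injured survivors and the
# selective medium recovers only intact cells; the counts are made up for illustration.
import math

n_nonselective = 2.4e5   # cfu/ml on non-selective agar (hypothetical)
n_selective = 3.0e4      # cfu/ml on selective agar (hypothetical)

injured_fraction = (n_nonselective - n_selective) / n_nonselective
print(f"Injured fraction of the surviving population: {injured_fraction:.1%}")
print(f"log10 counts: non-selective {math.log10(n_nonselective):.2f}, "
      f"selective {math.log10(n_selective):.2f}")
```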
9.4.2 Defining reference organisms
One of the major challenges with respect to the implementation and evaluation of HP processes for food pasteurization is to define reference organisms. With respect to food safety, these are organisms that survive the HP/T process at least as well as, or better than, the most resistant pathogen of concern in a particular product. The process can then be optimized to achieve a certain desired minimum reduction of the target organism, and this will ensure that all the pathogens of concern have been reduced by at least the same magnitude. The reference organism can be the most resistant among the pathogens of concern, or it can be a non-pathogenic surrogate organism.

A major complication is that a single ranking of organisms by resistance over a wide p–T domain is often not possible, even for a single food matrix, because the inactivation of different organisms can show different temperature- and pressure-dependence. Thus, for example, HP treatments of 400 MPa/30 min or 500 MPa/5 min at 20 °C were reported to result in an equivalent extension of the spoilage-free shelf life of milk to approximately ten days at 10 °C (Rademacher et al., 1998), but these treatments are not necessarily equivalent with respect to the reduction of various pathogens. In fact, they are probably insufficient, since even 600 MPa/30 min/20 °C results in less than 2 log inactivation of E. coli in milk (Patterson et al., 1995). A similar ranking problem exists also for thermal processing, but it is certainly more prominent in HP-T processes, because of the involvement of an
additional process parameter. Again, to assess equivalence, genome-wide transcript or protein analysis may be employed, since it assesses microbial responses to the environment at a molecular, mechanistic level rather than at the highly aggregated level of cellular growth. Such approaches thus provide a much higher-resolution fingerprint of the cellular state in response to (even mild) environmental perturbations.

Another, possibly more straightforward, solution to the problem of ranking microbial resistance to HP inactivation is to define food classes and related reference organisms. The following paragraphs describe several food classes with their microorganisms of concern.

In acid foods (pH < 4.5), pathogenic bacteria may be able to survive, sometimes even for extended periods, but they cannot grow. Therefore, only pathogens with a low infective dose are of concern for the safety of this product category. The most pressure-resistant members of this group known today are the pressure-resistant E. coli strains (Hauben et al., 1997). Challenge studies have shown that pressure treatment sensitized these strains to low pH, so that inactivation ensued within one or two days after pressurization (Garcia-Graells et al., 1998; Pagan et al., 2001). This means that a 5-D reduction (as required by the FDA for fruit juices) for low infective dose pathogens can be met using relatively mild HP-T treatments (< 500 MPa; 20 °C), provided that the product is held in storage for a certain time before consumption.

In low-acid foods processed to shelf-stability (the equivalent of thermally sterilized foods), all pathogens should be completely eliminated by the HP/T process. It is not clear from the literature whether spores of Clostridium botulinum type A are the most pressure-resistant pathogen as well as the most heat-resistant (Stumbo, 1965), but there should be no doubt that a 12-D reduction of this pathogen is not possible by ambient temperature pressurization (800 MPa max.). Although high pressure at temperatures < 60 °C induces bacterial spores to germinate, the efficiency of pressure-induced germination is limited because a recalcitrant fraction of superdormant spores remains ungerminated (Sale et al., 1970; Wuytack et al., 1998). There is a consensus that HP sterilization of low-acid foods is therefore only possible in combination with an elevated temperature (> 80 °C) (Meyer et al., 2000; de Heij et al., 2003). Further work is necessary to compare the resistance of pathogenic spores to the high pressures and temperatures in these processes.

Low-acid foods with a limited shelf life require refrigerated storage to prevent microbial growth. In this category of products, psychrotrophic and low infective dose pathogens are of major concern. The former are able to grow to high numbers in the product during storage while, for the latter, even small numbers may cause disease. For most psychrotrophic pathogens, except L. monocytogenes (see for example Fig. 9.1), detailed information is lacking, and it is therefore not yet possible to define a reference organism. This does not necessarily imply that pressure treatment of non-acid products has no applications. As a matter of fact, several commercial products in this category exist. Interesting examples are oysters and shellfish, which are normally eaten raw, but which can be contaminated with
pathogenic Vibrio spp. and viruses. Cook (2003) reported a 5-D reduction in the levels of Vibrio spp. after treatments of 250 MPa/2 min and 300 MPa/3 min, respectively, without affecting the fresh character of the oysters.
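Returning to the ranking problem raised at the start of this subsection, it can be illustrated numerically. In the sketch below, two hypothetical organisms are given invented log-linear D-value models, one more pressure-sensitive and one more temperature-sensitive; which of the two is 'more resistant' then depends on whether the process relies mainly on pressure or on temperature.

```python
# Illustrative sketch of why no single resistance ranking may hold over the whole p-T domain.
# Both organisms follow an invented log-linear model, log10 D = a - b*(P-Pref)/100 - c*(T-Tref);
# all parameter values are made up and chosen only to produce a ranking reversal.

def log_reduction(t_min, a, b, c, P, T, Pref=400.0, Tref=20.0):
    D = 10 ** (a - b * (P - Pref) / 100.0 - c * (T - Tref))   # decimal reduction time, min
    return t_min / D

org_A = dict(a=1.0, b=0.3, c=0.08)   # mildly pressure-sensitive, strongly temperature-sensitive
org_B = dict(a=1.0, b=0.9, c=0.02)   # strongly pressure-sensitive, mildly temperature-sensitive

for P, T, t in [(600.0, 20.0, 5.0), (400.0, 50.0, 5.0)]:
    rA = log_reduction(t, P=P, T=T, **org_A)
    rB = log_reduction(t, P=P, T=T, **org_B)
    tougher = "A" if rA < rB else "B"
    print(f"{P:.0f} MPa / {T:.0f} C / {t:.0f} min: A {rA:5.1f} log, B {rB:5.1f} log "
          f"-> organism {tougher} is the more resistant")
```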
9.4.3 Process optimization by combining inactivation models for microorganisms and quality attributes
HP/T processes will have to be optimized not only with respect to microbial inactivation, but also with respect to various non-microbial aspects of product quality, such as inactivation of spoilage-inducing enzymes and retention of nutritional and sensorial quality attributes. This will require the combination of quantitative data for these different components, and it is clear that this is a complex exercise. One reason is that the HP/T-induced inactivation of enzymes and microorganisms and denaturation of proteins often display an elliptical isorate contour diagram as a function of pressure and temperature (Fig. 9.5). As a result, the stability ranking of enzymes towards high hydrostatic pressure depends on the applied pressure and temperature, and rankings at atmospheric pressure differ from those at higher pressures. For example, the following enzymes may be ranked in order of increasing thermostability at atmospheric pressure: pectin methylesterase (PME) from oranges < alkaline phosphatase (ALP) in raw milk < lipoxygenase (LOX) from soybean and green beans < avocado polyphenoloxidase (PPO) < LOX from peas < Bacillus subtilis α-amylase (BSAA). Upon pressure treatment at 400 MPa, the thermostability ranking is altered, with LOX from green beans becoming the least thermostable and ALP in raw milk the most thermostable (Ludikhuyze et al., 2002). Analogous effects may occur in bacteria, but the stress stability of microorganisms as such has not been assessed in such detail.

A few attempts have already been made to combine kinetic models related to food safety and food quality. A theoretical case study aiming at HP process optimization was elaborated by combining pressure–temperature kinetic diagrams of spoilage-causing enzymes with pressure–temperature kinetic diagrams of microbial inactivation (Ludikhuyze et al., 2002). Pressure–temperature combinations resulting in a 6 log-unit reduction of various microorganisms and in 90% loss of enzyme activity and chlorophyll content after a processing time of 15 minutes are shown in Fig. 9.8. The quality attributes shown in this figure are more resistant than the microorganisms with respect to pressure–temperature treatments, and it was concluded that enzymes related to food quality, rather than vegetative microorganisms, seem to be the critical issue in defining optimal HP/T processes. A similar conclusion was reached in an optimization study combining inactivation of E. coli and pectin methylesterase in a carrot simulant system (Valdramidis et al., 2007). However, this type of process optimization certainly needs to be further developed. A particular issue that will require attention is to conduct optimization studies with extremely pressure-resistant vegetative bacteria (see Section 9.2.1) and with bacterial spores, which have resistance levels that are more similar to those of enzymes.
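The flavour of such a combined optimization can be sketched as below: scan a grid of pressure–temperature combinations and retain those that simultaneously satisfy a microbial safety criterion and an enzyme-inactivation criterion within a fixed processing time. Both kinetic models and every parameter value here are invented for illustration; they are not the kinetics used in the cited case study.

```python
# Toy grid search in the spirit of the combined optimization described above.
# Criterion 1: >= 6 log10 reduction of a target microorganism in 15 min.
# Criterion 2: >= 90% inactivation of a spoilage-related enzyme in the same 15 min.
# Both models and all parameters are hypothetical.
import math

HOLD_MIN = 15.0

def microbe_log_reduction(P, T, t_min=HOLD_MIN):
    D = 10 ** (1.2 - 0.9 * (P - 400.0) / 100.0 - 0.03 * (T - 20.0))   # min, invented
    return t_min / D

def enzyme_inactivated_fraction(P, T, t_min=HOLD_MIN):
    k = 5e-3 * math.exp(0.006 * (P - 400.0) + 0.06 * (T - 20.0))      # 1/min, invented
    return 1.0 - math.exp(-k * t_min)

acceptable = [(P, T)
              for P in range(300, 701, 50)
              for T in range(10, 61, 10)
              if microbe_log_reduction(P, T) >= 6.0
              and enzyme_inactivated_fraction(P, T) >= 0.9]

print("p-T combinations (MPa, degC) meeting both criteria:", acceptable)
```

With these invented parameters the microbial criterion is met over a wide part of the grid while the enzyme criterion is met only at the most severe combinations, mimicking the conclusion above that enzymes, rather than vegetative microorganisms, tend to be the limiting factor.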
Fig. 9.8 Simulated pressure–temperature combinations resulting in a 6 log unit reduction of microorganisms, 90% reduction of enzyme activity and 90% reduction of chlorophyll after a pressure treatment of 15 min, for PPO, ALP, BSAA, pea LOX in situ, pea LOX in juice, green bean LOX in situ, green bean LOX in juice, soybean LOX, PME, total chlorophyll content, yeast, Zygosaccharomyces bailii, Lactobacillus casei and Escherichia coli (Ludikhuyze et al., 2002).
9.4.4 Hurdle technology: combination of high pressure with (bio)preservatives
Although HP-processed products generally have a superior quality compared to products that have undergone conventional thermal pasteurization, some products suffer from significant quality loss, even after being treated with high pressure. A possible strategy to retain quality aspects and at the same time still reach the desired inactivation level is to use milder process conditions of pressure and temperature in combination with (bio)preservatives, an approach known as the ‘hurdle approach’. Some successful combinations with temperature, low pH and (natural) antimicrobials are recognized (Ponce et al., 1998a; Masschalck et al., 2000; García-Graells et al., 2003; Raso and Barbosa-Canovas, 2003; Ross et al., 2003). However, despite numerous papers on the use of hurdle technology to control foodborne pathogens or spoilage microorganisms, there is no commonly accepted methodology to detect or quantify synergistic interactions. Quantifying the synergistic action of high hydrostatic pressure and the lactoperoxidase system, for example, depends on the time at which the samples are plated after pressure treatment (García-Graells et al., 2003).
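One simple way such synergy is often expressed, sketched below with hypothetical plate counts, is to compare the log reduction observed for the combined treatment with the sum of the log reductions achieved by each hurdle alone. This is only one possible convention, not an accepted standard, and, as noted above, the outcome can depend on when survivors are enumerated.

```python
# Sketch of one simple synergy measure for pressure plus a biopreservative: compare the
# observed log reduction of the combined treatment with the sum of the individual treatments.
# All counts are hypothetical.
import math

def log_reduction(n0, n):
    return math.log10(n0 / n)

n0 = 1e7                       # initial count, cfu/ml (hypothetical)
n_pressure_only = 1e5          # survivors after HP alone
n_preservative_only = 5e6      # survivors after the biopreservative alone
n_combined = 1e2               # survivors after the combined treatment

additive = log_reduction(n0, n_pressure_only) + log_reduction(n0, n_preservative_only)
observed = log_reduction(n0, n_combined)
print(f"Additive expectation: {additive:.2f} log; observed: {observed:.2f} log; "
      f"synergy: {observed - additive:+.2f} log")
```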
9.4.5 Modelling inactivation of bacterial spores
Modelling pressure-induced inactivation of bacterial endospores is a major challenge in HP processing for several reasons. Treatment at pressures up to 800 MPa and temperatures below about 60 °C induces spore germination, which may be followed by inactivation of the germinated spores because these have lost their high-level pressure and heat resistance. This two-step inactivation mechanism considerably complicates modelling. In addition, as mentioned earlier, a recalcitrant fraction of superdormant spores remains ungerminated, making it difficult to reach the high levels of spore inactivation required in some food applications (Wuytack et al., 1998; Van Opstal et al., 2004). Furthermore, the levels of pressure-induced spore germination depend on the organism, the process conditions and the physicochemical environment. In contrast to vegetative bacteria, for which low pH enhances HP inactivation, low pH inhibits HP-induced spore germination (Wuytack and Michiels, 2001), making it more difficult to inactivate spores under pressure in acid than in non-acid foods. This is the opposite of their behaviour with regard to thermal inactivation. HP treatments at elevated temperatures (typically > 100 °C during the pressure holding phase) are more efficient in spore inactivation and can be used for sterilizing non-acid foods. The inactivation mechanism does not involve spore germination in this case, and it seems to be more straightforward to model this type of process (Meyer, 2000; de Heij et al., 2003).

Margosch et al. (2006) studied the HP-mediated survival of C. botulinum and Bacillus amyloliquefaciens endospores at high temperature. Inactivation curves for both strains showed a pronounced pressure-dependent tailing, which indicates that a small fraction of the spore populations survives conditions of up to 120 °C and 1400 MPa in isothermal treatments. Although the mechanism of pressure-mediated heat tolerance of bacterial endospores remains unknown, spore stabilization under specific parameter combinations is, according to Margosch et al. (2006), apparently analogous to the pressure stabilization observed for proteins. For many proteins, it has been described that pressure induces structural changes and denaturation, which follow elliptic pressure–temperature phase diagrams with areas of pressure stabilization of a protein at a denaturing temperature. This behaviour may also account for the stabilization of endospores at specific pressure–temperature combinations. Still, given the complexity of a bacterial endospore, the behaviour of a single protein can hardly be considered determinative for cell death unless it is required for a vital function or is abundant, e.g. as a structural or protective component. Because of the pressure-dependent tailing and the fact that pressure–temperature combinations stabilizing bacterial endospores vary from strain to strain, food safety must be ensured by case-by-case studies demonstrating inactivation or non-growth of C. botulinum at realistic contamination rates in the respective pressurized food and equipment.
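The two-step mechanism described at the start of this subsection can be put into a small quantitative sketch: dormant spores germinate under pressure, the germinated spores (having lost their resistance) are then inactivated, and a superdormant fraction never germinates and produces a tail. The analytic solution used below assumes distinct germination and inactivation rate constants; all numbers are invented.

```python
# Toy two-step model: germination of dormant spores followed by inactivation of the
# germinated spores, with a superdormant fraction that never germinates (tailing).
# Rate constants and the superdormant fraction are invented; requires k_inact != k_germ.
import math

def surviving_spores(t_min, n0=1e6, k_germ=0.5, k_inact=2.0, superdormant=1e-3):
    dormant_germinable = n0 * (1 - superdormant) * math.exp(-k_germ * t_min)
    dormant_super = n0 * superdormant          # never germinates under these conditions
    # germinated-but-not-yet-inactivated pool (analytic solution of the two-step scheme)
    germinated = (n0 * (1 - superdormant) * k_germ / (k_inact - k_germ)
                  * (math.exp(-k_germ * t_min) - math.exp(-k_inact * t_min)))
    return dormant_germinable + dormant_super + germinated

for t in [0, 2, 5, 10, 20]:
    print(f"t = {t:2d} min: log10 survivors = {math.log10(surviving_spores(t)):.2f}")
```

The printed curve levels off at roughly the size of the superdormant fraction, reproducing the kind of tailing discussed above.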
9.4.6 Development of pressure–temperature–time integrators
In the context of the implementation of high pressure, a scientific basis describing the impact of the process is needed for evaluating the adequacy of an HP treatment. To quantify the impact of a thermal process, time–temperature integrators (TTIs) can be used. Analogous to intrinsic TTIs, intrinsic pressure–temperature–time integrators (PTTIs) can be defined as pressure- and/or heat-sensitive components intrinsically present or formed in the food product that allow the impact of the process to be measured directly and quantitatively without knowledge of the actual process variables. For a compound to be applicable as a PTTI, a complete kinetic characterization of its response to high pressure and/or temperature is required. PTTIs could be applied not only for evaluating food safety in industrial processes, but also to monitor quality retention/loss and process uniformity.

In this context, Claeys et al. (2003) investigated whether intrinsic compounds present or formed in milk could be used as PTTIs for the direct and quantitative measurement of the impact of HP-T processes. It could be concluded from this exercise that alkaline phosphatase and lactoperoxidase, two valuable TTIs for the evaluation of the impact of a thermal treatment on milk, are not appropriate as PTTIs because of their extreme resistance to pressure (Claeys et al., 2003). Minerich and Labuza (2003) developed an extrinsic pressure indicator for HP treatment consisting of a compressed powdered copper tablet whose density increases with pressure and with treatment time. However, no significant impact of temperature on the copper density was found, which limits its application as a PTTI. It is clear that much more work needs to be done before a suitable intrinsic or extrinsic PTTI becomes available.
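The basic idea of reading process impact back from an integrator can be sketched as follows: if an indicator with known first-order kinetics is co-processed with the food, its residual activity translates into an equivalent holding time at reference conditions, which in turn can be converted into an estimated log reduction of a target organism. All rate constants, activities and the D-value below are assumed values for illustration.

```python
# Sketch of the PTTI idea: infer the process impact from the residual activity of a
# co-processed indicator with known (assumed) first-order kinetics at reference conditions.
import math

k_ref = 0.15                   # 1/min, indicator inactivation rate at reference p-T (assumed)
a0, a_measured = 100.0, 18.0   # indicator activity before / after processing (hypothetical)

equivalent_time = -math.log(a_measured / a0) / k_ref
print(f"Equivalent holding time at reference conditions: {equivalent_time:.1f} min")

# If a target microorganism has a known D-value at the same reference conditions,
# the indicator reading translates into an estimated log reduction:
d_target = 1.8                 # min, assumed decimal reduction time of the target organism
print(f"Estimated target reduction: {equivalent_time / d_target:.1f} log10 units")
```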
9.5 Conclusions
As HP processing finds its way into an increasing number of commercial applications, the need for a reliable quantitative description of the process impact on microbial inactivation and on various quality attributes will continue to increase. The quantitative models that are being developed to fill this need will support process design and optimization and help food processors to reliably reach food safety objectives. In this way, quantitative models will in turn stimulate the development of new applications. The development of quantitative models of microbial inactivation by high pressure is generally more complex than for inactivation by heat, because the process is governed by two physical parameters, pressure and temperature. Processes can be designed over a wide p–T domain, making data collection as an input for inactivation models a laborious task. Efforts are therefore needed to improve both the amount and the quality of primary inactivation data. Another important contribution to the improvement of HP inactivation models should come from a better appreciation of the thermal effects acting during pressurization and depressurization due to differential adiabatic heating and heat dissipation. Finally, the modelling of spore inactivation by HP processes at mild temperatures, which proceeds by a two-step mechanism
involving spore germination, and the modelling of hurdle technology type processes combining high pressure with other factors such as low pH or biopreservatives, will be challenging tasks. Further developments towards insight into the molecular basis of high-pressure microbial inactivation will facilitate this task significantly.
9.6 References
Aertsen A, Masschalck B, Wuytack EY and Michiels CW (2003) Na+-mediated baroprotection in Rhodotorula rubra, Extremophiles, 7, 499–504. Alpas H, Kalchayanand N, Bozoglu F, Sikes A, Dunne CP and Ray B (1999) Variation in resistance to hydrostatic pressure among strains of food-borne pathogens, Appl Environ Microbiol, 65, 4248–4251. Balasubramaniam S and Balasubramaniam VM (2003) Compression heating influence of pressure transmitting fluids on bacteria inactivation during high pressure processing, Food Res Int, 36, 661–668. Balasubramaniam VM, Ting EY, Stewart CM and Robbins JA (2004) Recommended laboratory practices for conducting high-pressure microbial inactivation experiments, Innov Food Sci Emerg Technol, 5, 299–306. Balny C, Mozhaev VV and Lange R (1997) Hydrostatic pressure and proteins: basic concepts and new data, Comp Biochem Phys A, 116, 299–304. Benito A, Ventoura G, Casadei M, Robinson T and Mackey B (1999) Variation in resistance of natural isolates of Escherichia coli O157 to high hydrostatic pressure, mild heat, and other stresses, Appl Environ Microbiol, 65, 1564–1569. Buckow R, Ardia A, Heinz V and Knorr D (2004) High hydrostatic pressure effects on microbial stability in acidic beverages. Oral presentation at The Safe Consortium–Novel (mild) preservation technologies in relation to food safety, 22–23 Jan., Brussels. Buzrul S and Alpas H (2004) Modeling the synergistic effect of high pressure and heat on the inactivation kinetics of Listeria innocua: a preliminary study, FEMS Microbiol Lett, 238, 29–36. Buzrul S, Alpas H and Bozoglu F (2005) Use of Weibull frequency distribution model to describe the inactivation of Alicyclobacillus acidoterrestris by high pressure at different temperatures, Food Res Int, 38, 151–157. Casadei MA, Manas P, Niven G, Needs E and Mackey BM (2002) Role of membrane fluidity in pressure resistance of Escherichia coli NCTC8164, Appl Env Microbiol, 68, 5965– 5972. Castillo LA, Meszaros L and Kiss IF (2004) Effect of high hydrostatic pressure and nisin on microorganisms in minced meats, Acta Alimentaria, 33, 183–190. Cheftel JC (1995) High pressure, microbial inactivation and food preservation, Food Sci Technol Int, 1, 75–90. Chen H and Hoover DG (2003a) Modeling the combined effect of high hydrostataic pressure and mild heat on the inactivation kinetics of Listeria monocytogenes Scott A in whole milk, Innov Food Sci Emerg Technol, 4, 25–34. Chen H and Hoover DG (2003b) Pressure inactivation kinetics of Yersinia enterocolitica ATCC 35669, Int J Food Microbiol, 87, 161–171. Chen H and Hoover DG (2004) Use of Weibull model to describe and predict pressure inactivation of Listeria monocytogenes Scott A in whole milk, Innov Sci Emerg Technol, 5, 269–276. Claeys WL, Indrawati, Van Loey AM and Hendrickx ME (2003) Review: Are intrinsic TTIs for thermally processed milk applicable for high-pressure processing assessment?, Innov Food Sci Emerg Technol, 4, 1–14.
Cole MB, Davies KW, Munro G, Holyoak CD and Kilsby DC (1993) A vitalistic model to describe the thermal inactivation of Listeria monocytogenes, J Ind Microbiol, 12, 232– 239. Cook DW (2003) Sensitivity of Vibrio species in phosphate buffered saline and in oysters to high-pressure processing, J Food Prot, 66, 2276–2282. de Heij WBC, Van Schepdael L, Van den Berg R and Bartels P (2002) Increasing preservation efficiency and product quality through control of temperature distribution in high pressure applications, High Pressure Res, 22, 653–657. de Heij WBC, van Schepdael LJMM, Moezelaar R, Hoogland H, Matser AM and van den Berg RW (2003) High-pressure sterilization: Maximizing the benefits of adiabatic heating, Food Technol, 57, 37–41. Dogan C and Erkmen O (2004) High pressure inactivation kinetics of L. monocytogenes inactivation in broth, milk and peach and orange juices, J Food Eng, 62, 47–52. Dufour M, Simmonds RS and Bremer PJ (2003) Development of a method to quantify in vitro the synergistic activity of ‘natural’ antimicrobials, Int J Food Microbiol, 85, 249– 258. Erkmen O (2001) Mathematical modeling of Escherichia coli inactivation under highpressure carbon dioxide, J Biosci Bioeng, 92, 39–43. Erkmen O and Dogan C (2004) Kinetic analysis of Escherichia coli inactivation by high hydrostatic pressure in broth and foods, Food Microbiol, 21, 181–185. Everis L (2000) Significance of injured microorganisms in food, Campden and Chorleywood Food Research Association Review, 19, 96. Farkas D and Hoover D (2000) High pressure processing. Kinetics of microbial inactivation for alternative food processing technologies, J Food Sci Supplement, 65, 47–64. Gao X, Li J and Ruan KC (2001) Barotolerant Escherichia coli induced by high hydrostatic pressure, Sheng Wu Hua Xue Yu Sheng Wu Wu Li Xue Bao, 33, 77–81. García-Graells C, Hauben KJA and Michiels CW (1998) High-pressure inactivation and sublethal injury of pressure resistant Escherichia coli mutants in fruit juices, Appl Environ Microbiol, 64, 1566–1568. García-Graells C, Masschalck B and Michiels CW (1999) Inactivation of Escherichia coli in milk by high-hydrostatic-pressure treatment in combination with antimicrobial peptides, J Food Prot, 62, 1248–1254. Garcia-Graells C, Van Opstal I, Vanmuysen SCM and Michiels CW (2003) The lactoperoxidase system increases efficacy of high pressure inactivation of foodborne bacteria, Int J Food Microbiol, 81, 211–221. Garcia-Risco M, Cortes E, Carrascosa A and Lopez-Fandino R (1998) Microbiological and chemical changes in high-pressure-treated milk during refrigerated storage, J Food Prot, 61, 735–737. Gervilla R, Capellas M, Ferragut V and Guamis B (1997) Effect of high hydrostatic pressure on Listeria innocua 910 CECT inoculated into ewe’s milk, J Food Prot, 60, 33– 37. Gervilla R, Ferragut V and Guamis B (2000) High pressure inactivation of microorganisms inoculated into ovine milk at different fat contents, J Dairy Sci, 83, 674– 682. Gibson AM, Bratchell N and Roberts TA (1988) Predicting microbial growth: growth responses of Salmonellae in a laboratory medium as affected by pH, sodium chloride and storage temperature, Int J Food Microbiol, 6, 155–178. Gould GW (2000) Preservation: past, present and future, Brit Med Bull., 56, 84–96. Guan D, Chen H and Hoover DG (2005) Inactivation of Salmonella Typhimurium DT104 in UHT whole milk by high hydrostatic pressure, Int J Food Microbiol, 104, 145– 153. 
Hauben KJA, Bartlett DH, Soontjens CCF, Cornelis K, Wuytack EY and Michiels CW (1997) Escherichia coli mutants resistant to inactivation by high hydrostatic pressure, Appl Environ Microbiol, 63, 945–950.
Hauben KJA, Bernaerts K and Michiels CW (1998) Protective effect of calcium on inactivation of E. coli by high pressure, J Appl Microbiol, 85, 678–684. Heinz V and Knorr D (1996) High pressure inactivation kinetics of Bacillus subtilis cells by a three-state-model considering distributed resistance mechanisms, Food Biotechnol, 2, 149–161. Hendrickx M, Ludikhuyze L, Van den Broek I and Weemaes C (1998) Effects of high pressure on enzymes related to food quality, Trends Food Sci Technol, 9, 197–203. Hoover DG, Metrick C, Papineau AM, Farkas DF and Knorr D (1989) Biological effects of high hydrostatic pressure on food microorganisms, Food Technol, 43, 99–107. Hoover DG (2001) Microbial inactivation by high hydrostatic pressure, in JN Sofos and VK Juneja (eds), Inactivation of Foodborne Microorganisms, Marcel Dekker, New York, 419–449. Hu X, Mallikarjunan P, Koo J, Andrews LS and Jahncke ML (2005) Comparison of kinetic models to describe high pressure and gamma irradiation used to inactivate Vibrio vulnificus and Vibrio parahaemolyticus prepared in buffer solution and in whole oysters, J Food Prot, 86, 292–295. Hugas M, Garriga M and Monfort JM (2002) New mild technologies in meat processing: high pressure as a model technology, Meat Sci, 62, 359–371. Indrawati (2000) Lipoxygenase inactivation by high pressure treatment at subzero and elevated temperatures: a kinetic study, Doctoral dissertation no 444, Katholieke Universiteit, Leuven. Indrawati, Van Loey AM, Fachin D, Ly Nguyen B, Verlent I and Hendrickx M (2002) Overview: effect of high pressure on enzymes related to food quality – kinetics as a basis for process engineering, High Pressure Res, 22, 613–618. Kalchayanand N, Sikes A, Dunne P and Ray B (1998) Interaction of hydrostatic pressure, time and temperature of pressurization and Pediocin AcH on inactivation of foodborne bacteria, J Food Prot, 61, 425–431. Karatzas KAG and Bennik MHJ (2002) Characterization of a Listeria monocytogenes Scott A isolate with high tolerance towards high hydrostatic pressure, Appl Environ Microbiol, 68, 3183–3189. Kolter R, Siegele DA and Tormo A (1993) The stationary phase of the bacterial life cycle, Annu Rev Microbiol, 47, 855–874. Leadly CE and Williams A (1997) High pressure processing of food and drink – an overview of recent developments and future potential, in New Technologies, Bull. No 14, Chipping Campden, CCFRA. Lee DU, Heinz V and Knorr D (2001) Biphasic inactivation kinetics of Escherichia coli in liquid whole egg by high hydrostatic pressure treatments, Biotechnol Prog, 17, 1020– 1025. Linton RH, Carter WH, Pierson MD and Hackney CR (1995) Use of a modified Gompertz equation to model non-linear survival curves for Listeria monocytogenes Scott A, J Food Prot, 58, 946–954. Linton M, McClements JM and Patterson MF (1999) Survival of Escherichia coli O157: H7 during storage in pressure-treated orange juice, J Food Prot, 62, 1038–1040. Linton M, McClements JMJ and Patterson MF (2000) The combined effect of high pressure and storage on the heat sensitivity of Escherichia coli O157:H7, Innov Food Sci Emerg Technol, 1, 31–37. Linton M, McClements JMJ and Patterson MF (2001) Inactivation of pathogenic Escherichia coli in skimmed milk using high hydrostatic pressure, Innov Food Sci Emerg Technol, 2, 99–104. Ludikhuyze L, Van den Broeck I, Weemaes C and Hendrickx M (1997) Kinetic parameters for pressure-temperature inactivation of Bacillus subtilis α-amylase under dynamic conditions, Biotechnol Progr, 13, 617–623. 
Ludikhuyze L, Indrawati, Van den Broeck I, Weemaes C and Hendrickx M (1998) Effect of combined pressure and temperature on soybean lipoxygenase. II. Modeling inactivation
kinetics under static and dynamic conditions, J Agr Food Chem, 46, 4081–4086. Ludikhuyze L, Van Loey A, Indrawati I, Denys S and Hendrickx M (2002) Effects of high pressure on enzymes related to food quality, in M Hendrickx and D Knorr (eds) Ultra High Pressure Treatments, New York, Kluwer Academic/Plenum Publishers, 115– 160. Ludwig H, Gross P, Scigalla W and Sojka B (1994) Pressure inactivation of microorganisms, High Pressure Res, 12, 193–197. Ly Nguyen B (2004) The combined pressure-temperature stability of plant pectin methylesterase and their inhibitor, Doctoral dissertation no 630, Katholieke Universiteit, Leuven. Mackey BM, Forestière K and Isaacs N (1995) Factors affecting the resistance of Listeria monocytogenes to high hydrostatic pressure, Food Biotechnol, 9, 1–11. Margosch D, Ehrmann MA, Buckow R, Heinz V, Vogel RF and Gänzle MG (2006) Highpressure-mediated survival of Clostridium botulinum and Bacillus amyloliquefaciens endospores at high temperature, Appl Environ Microbiol, 72, 3476–3481. Masschalck B, Garcia-Graells C, Van Haver E and Michiels CW (2000) Inactivation of high pressure resistant Escherichia coli by lysozyme and nisin under high pressure, Innov Food Sci Emerg Technol, 1, 39–47. Masschalck B, Van Houdt R, Van Haver E and Michiels CW (2001) Inactivation of gramnegative bacteria by lysozyme, denatured lysozyme and lysozyme derived peptides under high hydrostatic pressure, Appl Environ Microbiol, 67, 339–344. Matser AM, Krebbers B, van den Berg RW and Barltels PV (2004) Advantages of high pressure sterilisation on quality of food products, Trends Food Sci Tech, 15, 79– 85. McClements JMJ, Patterson MF and Linton M (2001) The effect of growth stage and growth temperature on high pressure inactivation of some psychrotrophic bacteria in milk, J Food Prot, 64, 514–522. Meyer RS (2000) Ultra high pressure, high temperature food preservation process, United States Patent no. 6017572. Meyer RS, Cooper KL, Knorr D and Lelieveld HLM (2000) High-pressure sterilization of foods, Food Technol, 54, 67, 68, 70. Minerich PL and Labuza TP (2003) Development of a pressure indicator for high hydrostatic pressure processing of foods, Innov Food Sci Emerg Technol, 4, 235–243. Molina-Hoppner A, Doster W, Vogel RF and Ganzle MG (2004) Protective effect of sucrose and sodium chloride for Lactococcus lactis during sublethal and lethal high-pressure treatments, Appl Environ Microbiol, 70, 2013–2020. Mussa DM, Ramaswamy HS and Smith JP (1999) High pressure destruction kinetics of Listeria monocytogenes Scott A in raw milk, Food Res Intern, 31, 343–350. Neter J, Kutner M, Nachtsheim C and Wasserman W (1996) Applied Linear Statistical Models, New York, McGraw-Hill. Ogawa H, Fukuhisa K, Kubo Y and Fukumoto H (1990) Pressure inactivation of yeast, molds and pectinesterase in satsuma mandarin juice: effects of juice concentration, pH, and organic acids and comparison with heat sanitation, Agric Biol Chem, 54, 1219– 1255. O’Reilly CE, O’Connor PM, Kelly AL, Beresford TP and Murphy PM (2000) Use of hydrostatic pressure for inactivation of microbial contaminants in cheese, Appl Environ Microbiol, 66, 4890–4896. Otero L and Sanz PD (2003) Modelling heat transfer in high pressure food processing: a review, Innov Food Sci Emerg Technol, 4, 121–134. Oxen P and Knorr D (1993) Baroprotective effects of high solute concentrations against inactivation of Rhodotorula rubra, Lebensm.-Wiss u-Technol, 26, 220–223. 
Pagan RS and Mackey B (2000) Relationship between membrane damage and cell death in pressure-treated Escherichia coli cells: differences between exponential- and stationaryphase cells and variation among strains, Appl Environ Microbiol, 66, 2829–2834.
Pagán RS, Jordan S, Benito A and Mackey B (2001) Enhanced acid sensitivity of pressuredamaged Escherichia coli O157 cells, Appl Environ Microbiol, 67, 1983–1985. Palou E, López-Malo A, Barbosa-Cánovas GV, Welti-Chanes J and Swanson BG (1997) Effect of water activity on high hydrostatic pressure inhibition of Zygosaccharomyces bailii, Lett Appl Microbiol, 24, 417–420. Patterson MF, Quinn M, Simpson R and Gilmour A (1995) Sensitivity of vegetative pathogens to high hydrostatic pressure treatment in phosphate-buffered saline and foods, J Food Prot, 58, 524–529. Patterson MF and Kilpatrick DJ (1998) The combined effect of high hydrostatic pressure and mild heat on inactivation of pathogens in milk and poultry, J Food Prot, 61, 432– 436. Peleg M and Cole MB (1998) Reinterpretation of microbial survival curves, Crit Rev Food Sci Nutr, 38, 353–380. Perrier-Cornet J-M, Tapin S, Gaeta S and Gervais P (2005) High pressure inactivation of Saccharomyces cerevisiae and Lactobacillus plantarum at subzero temperatures, J Biotechnol, 115, 405–412. Ponce E, Pla R, Sendra E, Guamis B and Mor-Mur M (1998a) Combined effect of nisin and high hydrostatic pressure on destruction of Listeria innocua and Escherichia coli in liquid whole egg, Int J Food Microbiol, 43, 15–19. Ponce E, Pla R and Capellasn M (1998b) Inactivation of Escherichia coli inoculated in liquid whole egg by high hydrostatic pressure, Food Microbiol, 15, 265–272. Rademacher B, Pfeiffer B and Kessler H (1998) Inactivation of microorganisms and enzymes in pressure treated raw milk, in Isaacs NS (ed.), High Pressure Food Science, Bioscience and Chemistry, Cambridge, The Royal Society of Chemistry, 145–151. Raffalli J, Rosec JP, Carlez A, Dumay E, Richard N and Cheftel JC (1994) High pressure stress and inactivation of Listeria innocua in inoculated dairy cream, Sci Aliment, 14, 349– 358. Raso J and Barbosa-Canovas GV (2003) Nonthermal preservation of foods using combined processing techniques, Crit Rev Food Sci Nutr, 43, 265–285. Ray B (1989) Enumeration of injured indicator bacteria from foods, in B Ray (ed.), Injured Index and Pathogenic Bacteria: Occurrence and Detection in Foods, Water and Feeds, Boca Raton, FL, CRC Press, 10–54. Reyns KMA, Soontjens CCF, Cornelis K, Weemaes CA, Hendrickx ME and Michiels CW (2000) Kinetic analysis and modelling of combined high pressure-temperature inactivation of the yeast Zygosaccharomyces baillii, Int J Food Microbiol, 56, 199–210. Ross T (1996) Indices for performance evaluation of predictive models in food microbiology, J Appl Bacteriol, 81, 501–508. Ross AI, Griffiths MW, Mittal GS and Deeth HC (2003) Combining nonthermal technologies to control foodborne microorganisms, Int J Food Microbiol, 89, 125–138. Sale AJH, Gould GW and Hamilton WA (1970) Inactivation of bacterial spores by hydrostatic pressure, J Gen Microbiol, 60, 323–334. San Martin MF, Barbosa-Canovas GV and Swanson BG (2002) Food processing by high hydrostatic pressure, Crit Rev Food Sci Nutr, 42, 627–645. Schaffner DW and Labuza TP (1997) Predictive microbiology: where are we, and where are we going?, Food Technol, 51, 95–99. Simpson RK and Gilmour A (1997) The effect of high hydrostatic pressure on Listeria monocytogenes in phosphate-buffered saline and food model systems, J Appl Microbiol, 83, 181–188. Smeller L (2002) Pressure–temperature phase diagrams of biomolecules, Biochim Biophys Acta, 1595, 11–29. 
Smelt JPPM and Rijke AGF (1992) High pressure as a tool for pasteurization of foods, in C Balny, R Hayashi, K Heremans and P Masson (eds), High Pressure and Biotechnology, London, John Libby and Company Ltd, 361–364. Smelt JPPM, Hellemons JC, Wouters PC and Van Gerwen SJC (2002) Physiological and
mathematical aspects in setting criteria for decontamination of foods by physical means, Int J Food Microbiol, 78, 57–77. Sonoike K, Setoyama T, Kuma Y and Kobayashi S (1992) Effect of pressure and temperature on the death rates of Lactobacillus casei and Escherichia coli, in C Balny, R Hayashi, K Heremans and P Masson (eds), High Pressure Biotechnology, London, John Libby and Company Ltd, 361–364. Stewart CM, Jewett FF, Dunne CP and Hoover DG (1997) Effect of concurrent high hydrostatic pressure, acidity and heat on the injury and destruction of Listeria monocytogenes, J Food Safety, 17, 23–36. Stumbo CR (1965) Thermobacteriology in Food Processing, New York, Academic Press. Styles MF, Hoover DG and Farkas DF (1991) Response of Listeria monocytogenes and Vibrio parahaemolyticus to high hydrostatic pressure, J Food Sci, 56, 1404–1407. Tay A, Shellhamer TH, Yousef AE and Chism GW (2003) Pressure death and tailing behaviour of Listeria monocytogenes strains having different barotolerances, J Food Prot, 66, 2057–2061. Ter Steeg PF, Hellemons JC and Kok AE (1999) Synergistic action of nisin sublethal ultrahigh pressure and reduced temperature on bacteria and yeast, Appl Environ Microb, 65, 41–48–4154. Tewari G, Jayas DS and Holley RA (1999) High pressure processing of foods: an overview, Sci Aliment, 19, 619–661. Tholozan JL, Ritz M, Jugiau F, Federighi M and Tissier JP (2000) Physiological effects of high hydrostatic pressure treatments on Listeria monocytogenes and Salmonella Typhimurium, J Appl Microb, 88, 202–212. Ting E, Balasubramaniam VM and Raghubeer E (2002) Determining thermal effects in high pressure processing, Food Technol, 56, 31–35. Tonello C, Kesenne S, Muterel C and Jolibert F (1997) Effect of high hydrostatic pressure treatments on shelf-life of different fruit products, in K Heremans (ed.), High Pressure Research in the Biosciences and Biotechnology, Leuven, Belgium, University Press Leuven, 439–442. Trujillo AJ, Capellas M, Saldo J, Gervilla R and Guamis B (2002) Application of highhydrostatic pressure on milk and dairy products: a review, Innov Food Sci Emerg Technol, 3, 295–307. Ulmer HM, Gänzle MG and Vogel RF (2000) Effects of high pressure on survival and metabolic activity of Lactobacillus plantarum TMW1.460, Appl Environ Microbiol, 66, 3966–3973. Valdramidis VP, Geeraerd AH, Poschet F, Ly-Nguyen B, Van Opstal I, Van Loey AM, Michiels CW, Hendrickx ME and Van Impe JF (2007) Model based process design of the combined high pressure and mild heat treatment ensuring safety and quality of a carrot simulant system, J Food Eng, 78, 1010–1021. van Asselt ED and Zwietering MH (2006) A systematic approach to determine global thermal inactivation parameters for various food pathogens, Int J Food Microbiol, 107, 73–82. Van Boekel MAJS (2002) On the use of the Weibull model to describe thermal inactivation of microbial vegetative cells, Int J Food Microbiol, 74, 139–159. Van Opstal I, Vanmuysen SCM and Michiels CW (2003) High sucrose concentration protects E. coli against high pressure inactivation but not against high pressure sensitization to the lactoperoxidase system, Int J Food Microbiol, 88, 1–9. Van Opstal I, Bagamboula CF, Vanmuysen SCM, Wuytack EY and Michiels CW (2004) Inactivation of Bacillus cereus spores in milk by mild pressure and heat treatments, Int J Food Microbiol, 92, 227–234. 
Van Opstal I, Vanmuysen SCM, Wuytack EY, Masschalck B and Michiels CW (2005) Inactivation of Escherichia coli by high hydrostatic pressure at different temperatures in buffer and carrot juice, Int J Food Microbiol, 98, 179–191.
Wuytack EY, Boven S and Michiels CW (1998) Comparative study of pressure-induced germination of Bacillus subtilis spores at low and high pressures, Appl Environ Microbiol, 64, 3220–3224. Wuytack EY and Michiels CW (2001) A study on the effects of high pressure and heat on Bacillus subtilis spores at low pH, Int J Food Microbiol, 64, 333–341. Xiong R, Xie G, Edmondson AE and Sheard MA (1999) A mathematical model for bacterial inactivation, Int J Food Microbiol, 46, 45–55. Yamamoto K, Matsubara M, Kawasaki S, Bari ML and Kawamoto S (2005) Modeling the pressure inactivation dynamics of Escherichia coli, Braz J Med Biol Res, 38, 1253– 1257.
10 Mechanistic models of microbial inactivation behaviour in foods
A. A. Teixeira, University of Florida, USA
10.1 Introduction
The manner in which populations of microorganisms increase or decrease in response to controlled environmental stimuli is fundamental to the engineering design of bioconversion processes (fermentations) and thermal inactivation processes (pasteurization and sterilization) important in the food, pharmaceutical and bioprocess industries. In order to determine optimum process conditions and controls to achieve desired results, the effect of process conditions on the rates (kinetics) of population increase or decrease needs to be characterized and modeled mathematically. This chapter will focus on the development and application of mechanistic models capable of more accurately predicting thermal inactivation of bacterial spores important to sterilization and pasteurization treatments in the food, pharmaceutical and bioprocess industries.

The complex physiology and morphology of spore-forming bacteria often underlie the growing evidence that traditional microbial inactivation models do not always fit the experimental data. In many cases the data plotted on semi-logarithmic graphs showing the logarithm of survivors over time at constant lethal temperature (survivor curves) reveal various forms of non-log-linear behavior, such as shoulders and tails. These non-log-linear survivor curves often confound attempts to accurately model thermal inactivation behavior on the basis of assuming a traditional single first-order reaction. Although it is frequently possible to adequately reproduce these non-log-linear curves mathematically with
empirical polynomial curve-fitting models, such models are incapable of predicting response behavior to thermal treatment conditions outside the range in which experimental data were obtained, and so have limited utility. In contrast to such empirical models, mechanistic models are derived from a basic understanding of the mechanisms responsible for the observed behavior, and how these mechanisms are affected by thermal treatment conditions of time and temperature. As such, mechanistic models are capable of accurately predicting the microbial inactivation response to thermal treatment conditions outside the range in which experimental data were obtained. This capability is highly valued by process engineers responsible for the design of ultra-high temperature (UHT) or high temperature short-time (HTST) sterilization and pasteurization processes that operate at temperatures far above those in which experimental data can be obtained. This chapter begins by attempting to establish a strong case for the use of mechanistic models with a summary of background information on the history of mechanistic, vitalistic and probabilistic model approaches to thermal inactivation of bacterial spores, followed by a presentation of the scientific rationale in support of first-order kinetic models as true mechanistic models for thermal inactivation of microbial populations. Following sections describe the development of mechanistic models capable of dealing with shoulders and tails in the case of spore-forming bacteria, followed by validation, applications and comparisons of model performance in the case of selected spore-forming bacteria. Subsequent sections address strengths, weaknesses, limitations, future trends, sources of further information and references.
10.2 Case for mechanistic models
10.2.1 History of survivor curve model approaches
The history of survivor curve models provides vital information to address current important problems related to the characteristics observed in semi-logarithmic inactivation curves of microbial populations. Basic mathematical models were proposed very early in the 20th century (Chick, 1908, 1910). Watson (1908) examined two main approaches used to study the inactivation of bacterial spores at that time: mechanistic and vitalistic. He presented Chick’s model mathematically as a first-order reaction (Chick–Watson equation) describing the exponential decay commonly observed in microbial survivor curves. This model is the basis of the mechanistic approach and defines the inactivation of bacterial spores as a pseudo first-order molecular transformation. The temperature-dependency of the corresponding rate constant was found to be appropriately described by the Arrhenius equation. The vitalistic approach was based on the assumption that the observed exponential decay could be explained by differences in resistance of the individuals. Watson noted that in order for this argument to hold, most of the spores would have to be at the low extreme of resistance, instead of following the
normal distribution that would be expected from natural biological variability. He concluded that the vitalistic approach could not be correct. Kellerer (1987) stated that the vitalistic approach is naïve and ignores the rigorous stochastic basis for the inactivation transformation (Maxwell–Boltzmann distribution of speed of molecules or random radiation ‘hits’ on DNA) by using the simplistic assumption that the biological variability of the resistance can explain the observed behavior correctly.

A third option is the approach developed by Aiba and Toda (1965) where the lifespan probability of spores was mathematically defined and used to develop expressions describing inactivation of single and clumped spores. This analysis postulated that for a population of spores (Ni in number and ti in lifespan), the probability associated with the distribution of life span is:

Pr = (1/κ) N0!/Πi(Ni!)     [10.1]
where κ is a normalizing factor. Based on this probability function, Ni could be described as a function of the lifespan (ti) provided that the initial number of spores (N0) was large:

Ni = α′ exp(–β′ti)     [10.2]
Modifying the discrete equation into its continuous form (for large N0):

–dN/dt = α′ exp(–β′t)     [10.3]
Thus, the probabilistic approach led to the same response equation as that of the mechanistic approach. This revelation leads to several important concepts:

1. arbitrary use of frequency distribution equations, such as the Weibull distribution, is an extensive ‘jump’ in logic that cannot be rigorously justified;
2. clumping induces a delay that depends on the number of spores per clump, and can be described using the probabilistic approach – however, clumping will not lead to increments in the number of survivors such as those induced by spore activation;
3. the probabilistic approach established a direct relationship between the average lifespan and the rate constant used in the mechanistic approach – both approaches lead to the same basic mathematical model.
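Concept 3 can be checked directly: drawing individual spore lifespans from an exponential distribution (a constant hazard, which is what the derivation above implies) reproduces the log-linear survivor curve of a first-order reaction. The rate constant and population size in this sketch are arbitrary choices.

```python
# Sketch: the probabilistic and mechanistic descriptions coincide. Exponentially distributed
# individual lifespans (constant hazard) reproduce first-order, log-linear survivor curves.
import math, random

random.seed(1)
k = 0.5                     # 1/min, first-order rate constant (arbitrary)
n0 = 100_000
lifespans = [random.expovariate(k) for _ in range(n0)]

for t in [0, 2, 4, 6, 8]:
    survivors = sum(1 for ls in lifespans if ls > t)
    predicted = n0 * math.exp(-k * t)
    print(f"t = {t} min: simulated {survivors:6d}, first-order prediction {predicted:8.0f}")
```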
10.2.2 Scientific basis for first-order kinetic models
The nature of the inactivation transformation can be addressed and explained in terms of Eyring’s transition-state theory and the Maxwell–Boltzmann distribution of the speed of molecules from molecular thermodynamics. For a given configuration (i.e. mass and degrees of freedom), the fraction of molecules that have enough kinetic energy to overcome the energetic barrier (i.e. the activation energy) for a transformation such as inactivation of spores depends mostly on
Fig. 10.1 Schematic non-linear survivor curves for bacterial spores illustrating ‘shoulders’ and ‘biphasic’ population responses to exposure at constant lethal temperatures.
temperature. Therefore, at a given temperature, the fraction of molecules that will reach the level of energy required for the transformation (in this case inactivation) to happen is constant. The fraction of molecules that have enough energy to react will increase with temperature. For instance, if at a given temperature 10% of the molecules have enough energy for inactivation to occur, the percentage inactivated during a time interval will remain constant. This explains why the instantaneous rate of inactivation is proportional to the number of surviving spores present at that moment. This can be envisioned by random swatting of flies by a blind-folded person trapped in a phone booth. The number of flies successfully swatted per unit of time will depend upon (and be proportional to) the number still in flight at that moment, and will decrease exponentially with time, just as with any concentration-dependent (first-order) reaction. Variability from biological or environmental
factors is reflected by the effect they have on values of the rate constants. For example, an increase in temperature (speed of molecules) could be represented by an increase in rate of swatting (speed of fly swatter).
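The temperature argument above can be made concrete with the Boltzmann/Arrhenius factor: the reacting fraction, and hence the first-order rate constant, scales as exp(–Ea/RT). The activation energy used below is an arbitrary value of the order often quoted for spore inactivation, so the ratios are purely illustrative.

```python
# Sketch: relative change of the Boltzmann/Arrhenius factor exp(-Ea/RT) with temperature,
# i.e. how much larger the fraction of sufficiently energetic molecules (and so the
# first-order rate constant) becomes as temperature rises. Ea is an illustrative value.
import math

R = 8.314            # J/(mol K)
Ea = 300e3           # J/mol, assumed activation energy
T_ref = 121.0 + 273.15

for T_c in (110.0, 121.0, 130.0):
    T = T_c + 273.15
    ratio = math.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))
    print(f"{T_c:5.1f} C: rate relative to 121 C = {ratio:6.2f}")
```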
10.2.3 Non-log-linear survivor curves
Activation of dormant spores and the presence of subpopulations with different resistance give rise to the ‘shoulders and tails’ appearing as deviations from log-linearity in some survivor curves exhibited by spore-forming bacteria. Models that include these concepts based on systems analysis of population dynamics were developed in the late 1980s and early 1990s (Rodriguez et al., 1988, 1992; Sapru et al., 1992, 1993), and verified experimentally under extreme-case conditions. The case of ‘shoulders’ caused by activation of dormant spores, as well as that of ‘biphasic tailing’ (early rapid inactivation of a subpopulation of relatively low heat resistance followed by a slower inactivation of a subpopulation of relatively high heat resistance), is shown in Fig. 10.1. These models may be described by a summation of first-order terms as in the following expression:

Nj(t) = Σi ai exp(–bi t)     [10.4]
where:
Nj(t) = density of population j (i.e. cfu/ml) as a function of time t. Sub-index j may refer to subpopulations of active spores, dormant spores or other spores with different resistance.
ai = parameter related to the initial population density of the different subpopulations in the system.
bi = parameter related to the rate constants for the different transformations in the system.

The parameters needed for each term (ai, bi) can be estimated using non-linear regression or the successive residuals method, similar to the analysis of stress relaxation curves. A step-by-step description of the use of non-linear regression for this purpose can be found in Rodriguez (1987). Curves where two or more subpopulations with different resistances to the lethal agent must be taken into consideration are simulated by letting each of the subpopulations be described by one of the exponential terms, and the total effect by their summation. Moreover, when only tails are present, parameter estimation by the method of successive residuals will automatically reveal the number of terms (subpopulations) required in the model. Finally, perhaps one of the most important advantages of the models presented here is that they have succeeded in simulating inactivation of bacterial spores under transient conditions at temperatures outside the range used for parameter estimation, where extrapolation must be used. HTST pasteurization or UHT sterilization processes operate in temperature ranges far above those in which survivor curves can be generated. Models derived from survivor curves for use in
Fig. 10.2 Family of spore survivor curves on semi-log plot showing viable spore concentration versus time at different lethal temperatures (reprinted from Teixeira, 1992, p. 567 by courtesy of Marcel Dekker, Inc.).
the design/specification of such processes should be mechanistic models that are based upon an understanding of the mechanisms responsible for the behavior patterns observed in the survivor curves. Empirical (curve fitting) models generally should not be used for this purpose.
10.3 Development of mechanistic models for microbial inactivation 10.3.1 Normal log-linear survivor curves The case has now been made for new-found confidence in the long-standing assumption that thermal inactivation of bacterial spores can be modeled by a single first-order reaction, as exemplified by Stumbo (1973). As explained earlier, this can be described as a straight-line survivor curve when the logarithm of the number of surviving spores is plotted against time of exposure to a lethal temperature, as shown in Fig. 10.2 for survivor curves obtained at three different lethal temperatures. The decline in population of viable spores can be described by the following exponential or logarithmic equations:
C = C0 exp(–kt)
[10.5]
ln (C/C0) = –kt
[10.6]
where C is the concentration of viable spores at any time t, C0 is the initial concentration of viable spores and k is the temperature-dependent first-order rate constant. This temperature-dependency is illustrated in Fig. 10.2, in which T1, T2 and T3 represent increasing lethal temperatures with corresponding rate constants k1, k2 and k3 (the slopes of the survivor curves on the semi-log plot being –k1, –k2 and –k3). The temperature-dependency of the rate constant is also an exponential function that can be described by a straight line on a semi-log plot when the natural log of the rate constant is plotted against the reciprocal of absolute temperature. The equation describing this straight line is known as the Arrhenius equation:
ln (k/k0) = –(Ea/R) [(T0 – T)/(T0 T)]
[10.7]
where k is the rate constant at any temperature T and k0 is the reference rate constant at a reference temperature T0. The slope of the line yields the term Ea/R, in which Ea is the activation energy and R is the universal gas constant. Thus, once the activation energy is obtained in this way, equation (10.7) can be used to predict the rate constant at any temperature. Once the rate constant is known for a specified temperature, equation (10.5) or (10.6) can be used to determine the time of exposure required at that temperature to reduce the initial concentration of viable bacterial spores by any number of log cycles or, conversely, to predict the degree of log-cycle reduction in population at any point in time during lethal exposure. The objective in most commercial sterilization processes is to reduce the initial spore population by a sufficient number of log cycles so that the final number of surviving spores is one millionth of a spore, interpreted as a probability of survival of one in a million (in the case of spoilage-causing bacteria), and one in a trillion in the case of pathogenic bacteria such as Clostridium botulinum.
10.3.2 Non-log-linear survivor curves
Actual survivor curves plotted from laboratory data often deviate from a straight line, particularly during early periods of exposure. These deviations can be explained by the presence of competing reactions taking place simultaneously, such as heat activation causing ‘shoulders’ and early rapid inactivation of less heat-resistant spore fractions causing biphasic curves with tails, as shown previously in Fig. 10.1. Activation is a transformation of viable, dormant spores enabling them to germinate and grow in a substrate medium. This transformation is part of the lifecycle of spore-forming bacteria shown in Fig. 10.3. In hostile environments, vegetative cells undergo sporulation to enter a protective state of dormancy as highly heat-resistant spores. When subjected to heat in the presence of moisture and nutrients, they must undergo the activation transformation in order to germinate once again into vegetative cells and produce colonies on substrate media to make their presence known (Keynan and Halvorson, 1965; Lewis et al., 1965; Gould, 1984).
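As a simple numerical illustration of how equations (10.5)–(10.7) are applied, the short Python sketch below computes the rate constant at a target temperature from assumed Arrhenius parameters and then the exposure time required for a given number of log-cycle reductions. The parameter values (k0, T0, Ea) are purely hypothetical and serve only to show the calculation; they are not measured spore data.

    # Illustrative sketch of equations (10.5)-(10.7); parameter values are
    # hypothetical and chosen only to show the calculation, not real spore data.
    import math

    R = 8.314  # universal gas constant, J/(mol K)

    def k_at_T(k0, T0, Ea, T):
        """Equation (10.7): rate constant at temperature T (K), given the
        reference rate constant k0 at reference temperature T0 (K)."""
        return k0 * math.exp(-(Ea / R) * (T0 - T) / (T0 * T))

    def time_for_log_reductions(k, n_log):
        """Equation (10.6): exposure time giving n_log decimal reductions,
        i.e. ln(C/C0) = -k t with C/C0 = 10**(-n_log)."""
        return n_log * math.log(10.0) / k

    # Hypothetical example: k0 = 0.5 min^-1 at 110 C, Ea = 300 kJ/mol
    k0, T0, Ea = 0.5, 383.15, 3.0e5
    k121 = k_at_T(k0, T0, Ea, 394.15)          # rate constant at 121 C
    print(time_for_log_reductions(k121, 12))   # time (min) for a 12-log reduction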
Fig. 10.3 Cycle of spore formation, activation, germination and outgrowth.
In recent years several workers have approached the mechanistic modeling of shoulders in survivor curves of bacterial spores subjected to lethal heat (Shull et al., 1963; Rodriguez et al., 1988, 1992; Sapru et al., 1992, 1993). The most recent model of Sapru et al. encompasses the key features of the earlier models and is based on the system diagram presented in Fig. 10.4. The total population at any time is distributed among storehouses (stores) containing the fraction of the population in a given stage of the lifecycle. Dormant, viable spores, potentially capable of producing colonies in a growth medium after activation, comprise population N1. Activated spores, capable of forming colonies in a growth medium, comprise population N2. Inactivated spores, incapable of germination and growth, comprise populations N3 and N0. Activation, A, transforms many members of N1 to N2 while inactivation, D1, transforms other members of N1 to N0. Inactivation, D2, transforms activated spores from N2 to N3. It is assumed that A, D1 and D2 are independent, concomitant and first-order with respective rate constants Ka, Kd1 and Kd2. The Rodriguez model (Rodriguez et al., 1988, 1992) is obtained from the Sapru model by assuming that inactivation transformations D1 and D2 have identical rate constants, i.e. Kd1 = Kd2 = Kd. The Shull model (Shull et al., 1963) is obtained from the Sapru model by deleting D1 (Kd1 = 0) and N0, and setting Kd2 = Kd. Finally, the conventional model for a normal straight-line survivor curve is obtained by deleting N1, N0, D1 and A (Kd1 = Ka = 0), so that only N = N2, D = D2 and N3 remain, and setting Kd = Kd2 (see Table 10.1).
10.4 Model validation and comparison with others
The mechanistic models described in the previous section were tested by comparing model-predicted with experimental survivor curves in response to constant
Fig. 10.4 Process diagram of the Sapru model of bacterial spore populations during heat treatment (from Sapru et al., 1993).
Table 10.1 Comparison of mathematical models of Sapru, Rodriguez, Shull and conventional models. Dormant population is N1, activated population is N2 and survivors N = N2 (from Sapru et al., 1993)

Model           Mathematical model
Sapru           dN1/dt = –(Kd1 + Ka) N1
                dN2/dt = Ka N1 – Kd2 N2
Rodriguez       dN1/dt = –(Kd + Ka) N1
                dN2/dt = Ka N1 – Kd N2
Shull           dN1/dt = –Ka N1
                dN2/dt = Ka N1 – Kd N2
Conventional    dN/dt = –Kd N
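To make the structure of Table 10.1 concrete, the following sketch writes the Sapru model as a pair of coupled first-order equations and integrates them with a simple Euler step at constant temperature; the Rodriguez, Shull and conventional models are recovered by the parameter restrictions listed in the table. The rate constants and initial populations used here are arbitrary placeholders, not estimates from the cited studies.

    # Sketch of the Table 10.1 model family at constant temperature.
    # Rate constants and initial populations are arbitrary illustrative values.

    def sapru_rhs(N1, N2, Ka, Kd1, Kd2):
        """Right-hand sides of the Sapru model (Table 10.1)."""
        dN1 = -(Kd1 + Ka) * N1       # dormant spores: activation + inactivation
        dN2 = Ka * N1 - Kd2 * N2     # activated spores: gain by activation, loss by death
        return dN1, dN2

    def simulate(N1, N2, Ka, Kd1, Kd2, t_end, dt=0.01):
        """Euler integration; survivors N = N2 (colony-forming spores)."""
        t, history = 0.0, [(0.0, N2)]
        while t < t_end:
            dN1, dN2 = sapru_rhs(N1, N2, Ka, Kd1, Kd2)
            N1, N2, t = N1 + dt * dN1, N2 + dt * dN2, t + dt
            history.append((t, N2))
        return history

    # Rodriguez model: Kd1 = Kd2 = Kd; Shull model: Kd1 = 0; conventional: Ka = 0, N1 = 0.
    curve = simulate(N1=1e6, N2=1e4, Ka=0.8, Kd1=0.3, Kd2=0.3, t_end=10.0)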
Fig. 10.5 (a) Temperature history experienced by spore population during experimental heat treatment. (b) Experimental data points and model-predicted survivor curves in response to temperature history above for the Sapru, Rodriguez, Shull and conventional models (from Sapru et al., 1993).
and dynamic lethal temperature histories. For dynamic temperatures, the curves were predicted by simulation, using Arrhenius equations to vary the rate constants with temperature. In separate experiments with dynamic temperature, sealed capillary tubes containing bacterial spores (Bacillus stearothermophilus) suspended in phosphate buffer solution were placed in an oil bath, and the temperature was varied manually. Tubes were removed at times spaced across the test interval and analyzed for survivor counts. Temperature histories of the oil bath were recorded (Sapru et al., 1993). The dynamic, lethal temperature history for one of these experiments is shown in Fig. 10.5a. Corresponding survivor data and the survivor curves predicted by the various models in response to this dynamic temperature are given in Fig. 10.5b. Note that activation of the significant subpopulation of dormant spores was captured well by the early increase in population predicted by the Rodriguez and Sapru models. Both models performed much better overall than the other two. The conventional model, being limited to a single first-order death reaction, was incapable of predicting any increase in population.
10.5 Applications of microbial inactivation mechanistic models
Advantages of the Rodriguez–Sapru model make it preferable to the classic model for many applications; this section provides an overview of its application. First, the suitability of the model for representing a specific situation is assessed by ascertaining that the situation involves lethal heating of a homogeneous, single species/strain population of spores in potentially mixed dormant/activated states. If a different, possibly more complex, situation exists, a new model should be developed consistent with the concepts and methods embodied in the Rodriguez–Sapru model, as exemplified herein and by the model of a complex of normal and injured spores developed by Rodriguez et al. (1988). In what follows, the Rodriguez–Sapru model is assumed to be suitable for the situation addressed. Quantitative application of the Rodriguez–Sapru model requires species/strain-specific numerical values of the rate constants Ka, Kd1 and Kd2 and of the initial subpopulations N10 and N20, respectively, of dormant and activated spores. Initial subpopulations in a sample of untreated suspension or product are determined under the common assumption that all spores in a direct microscopic count (DMC) of the sample are viable (Gombas, 1987). Incubation of the sample and enumeration of colony-forming units (cfu) yields the initial number of activated spores, N20, and the initial number of dormant spores, N10, is calculated from:
N10 = DMC – N20
[10.8]
Rate constants for a specific species/strain of spores at a specified temperature are obtained by analysis of an experimental, isothermal survivor curve for a suspension of those spores and known values of N10 and N20. Specifically, Ka, Kd1 and Kd2 are estimated by non-linear regression, using the procedure in SAS (SAS, 1985; Sapru et al., 1992) or the Levenberg–Marquardt method (Press et al., 1986; Rodriguez et al., 1988, 1992), fitting the survivor curve of the model to the data defining the
experimental curve. Initial estimates of Ka, Kd1 and Kd2 required by the non-linear procedures are appropriately calculated from the data by the method of successive residuals (Rodriguez et al., 1992). Rate constants estimated at a single temperature apply only to that temperature; applications of the model for other constant and, especially, dynamic temperature regimes require estimates of the rate constants and, preferably, continuous functions describing them as functions of temperature over a range of temperatures. It follows that isothermal experiments and estimations of rate constants must be performed at several temperatures over the prescribed range, and expressions relating rate constants to temperature must be found by regressing graphs of rate constants vs temperature. This was done for B. subtilis spores over 87–99 °C by Rodriguez et al. (1992) and for B. stearothermophilus spores over 105–120 °C by Sapru et al. (1992). In both cases, dependencies of activation and inactivation rate constants on temperature were described well by the empirical Arrhenius equation (Williams and Williams, 1973):
K = A exp(–Ea/RT)
[10.9]
where K denotes a rate constant, A is the frequency constant (units of reciprocal time), Ea is the activation energy (J/mol), T is the absolute temperature (K) and R is the universal gas constant (8.314 J/(mol K)). Regression to estimate A and Ea is better done with
ln(K) = ln(A) – Ea/(RT)
[10.10]
and an Arrhenius plot of the rate constant data, the semi-logarithmic plot of K against the reciprocal of absolute temperature. Use of the Rodriguez–Sapru model at UHT conditions is contingent upon the ability to perform accurate estimation of valid rate constants in that range. Generation of isothermal UHT survivor curves for parameter estimation is difficult. One approach to the matter is extrapolation of results obtained at lower, lethal temperatures to the UHT temperature range using the Arrhenius equations established in the lower range. In a series of UHT experiments with B. stearothermophilus spores over 123–146 °C by Sapru et al. (1992), rate constants in that range estimated with Arrhenius equations established at 105–120 °C gave very good agreement between model-predicted (by simulation) and experimental numbers of survivors at the conclusion of UHT heating. With rate constants and their dependencies on temperature for a specific species/strain of spores known, and with N10 and N20 known for dormant and activated subpopulations of those spores in a specific suspension or product, the dynamics of those subpopulations and the survivor curve caused by specific, lethal heating of the suspension or product may be estimated by computer simulation or analytical solution of the Rodriguez–Sapru model given in Table 10.1. Both simulation and analytical solution are readily accomplished on a microcomputer, and graphs of the temperature regime and response variables, e.g. N1 and N = N2, over the exposure interval are the most useful forms of output. During a simulation or analytical solution, rate constants are varied as temperature varies by means of the Arrhenius equations; those variations of rate constants are the only way in
210
Modelling microorganisms in food
which temperature enters the model and affects population dynamics. Temperature may be constant or dynamic and in the low or UHT lethal range. Such analyses of the behavior of the model enable one to predict, understand and interpret the dynamics and effectiveness of existing and proposed sterilization processes, and the Rodriguez–Sapru model should be a tool in the design and validation of new, thermal sterilization processes.
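The workflow described in this section can be sketched in a few lines of Python: rate constants estimated at several isothermal temperatures are regressed against temperature with equation (10.10), and the resulting Arrhenius functions are then used to vary the rate constants while the model of Table 10.1 is integrated over a dynamic temperature history. For brevity the sketch uses the Rodriguez restriction Kd1 = Kd2 = Kd; all numerical values are hypothetical placeholders rather than the published estimates for B. subtilis or B. stearothermophilus.

    import math

    R = 8.314  # universal gas constant, J/(mol K)

    def fit_arrhenius(temps_K, k_values):
        """Least-squares fit of ln(K) = ln(A) - Ea/(R T) (equation 10.10)."""
        xs = [1.0 / T for T in temps_K]
        ys = [math.log(k) for k in k_values]
        n = len(xs)
        x_bar, y_bar = sum(xs) / n, sum(ys) / n
        slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
                 / sum((x - x_bar) ** 2 for x in xs))
        return math.exp(y_bar - slope * x_bar), -slope * R   # A, Ea

    def arrhenius(A, Ea, T):
        """Equation (10.9): K = A exp(-Ea/(R T))."""
        return A * math.exp(-Ea / (R * T))

    # Hypothetical isothermal estimates (1/min) at 105, 110, 115 and 120 C:
    temps = [378.15, 383.15, 388.15, 393.15]
    A_a, Ea_a = fit_arrhenius(temps, [0.20, 0.45, 0.95, 1.90])   # activation Ka
    A_d, Ea_d = fit_arrhenius(temps, [0.05, 0.12, 0.28, 0.62])   # inactivation Kd1 = Kd2

    def simulate_dynamic(N1, N2, T_of_t, t_end, dt=0.001):
        """Euler integration of the Rodriguez-Sapru model (Table 10.1) while
        the rate constants track a dynamic temperature history T_of_t (K)."""
        t, survivors = 0.0, [(0.0, N2)]
        while t < t_end:
            T = T_of_t(t)
            Ka, Kd = arrhenius(A_a, Ea_a, T), arrhenius(A_d, Ea_d, T)
            dN1 = -(Kd + Ka) * N1          # dormant spores
            dN2 = Ka * N1 - Kd * N2        # activated (colony-forming) spores
            N1, N2, t = N1 + dt * dN1, N2 + dt * dN2, t + dt
            survivors.append((t, N2))
        return survivors

    # Example: a linear come-up from 100 C to 130 C over 10 min, then holding.
    def ramp(t):
        return min(373.15 + 3.0 * t, 403.15)

    curve = simulate_dynamic(N1=1e6, N2=1e4, T_of_t=ramp, t_end=15.0)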
10.6 Strengths, weaknesses, and limitations of mechanistic models 10.6.1 Strengths The key strength of mechanistic models is their ability to predict accurate response to process conditions outside the range of conditions in which valid rate parameters can be estimated. Clearly, this is of enormous value to the design of UHT and HTST sterilization and pasteurization processes in the food and pharmaceutical industries. At such high process temperatures, the exposure times are sufficiently short (seconds) that little if any damage occurs to product quality. Thermal inactivation rates for microbial populations are so fast at these temperatures that they cannot be measured. Therefore, mechanistic models must be relied upon to predict the accurate (valid) values of rate constants at those high process temperatures based solely upon measurements made with experimental data obtained at much lower temperatures. This capability (to predict responses with confidence to process conditions far outside the range in which experimental data are obtained) is what sets mechanistic models apart from empirical models. Empirical models employ polynomial curve-fitting techniques to derive a mathematical equation capable of producing a curve on a graph that closely passes through the original experimental data points. Such models are based upon observation, alone, and bear no scientific representation of mechanisms responsible for the observed behavior. Nonetheless, they are useful for predicting responses to process conditions that lie between the data points within the range of experimental data obtained. They are incapable of, and should not be used for, predictions outside the range of experimental data.
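The extrapolation argument can be illustrated with synthetic data: if rate constants generated from an assumed Arrhenius law over 105–120 °C are fitted with an empirical quadratic in temperature, the two descriptions agree within the data range but diverge sharply outside it. The sketch below (using NumPy for the polynomial fit) is purely illustrative, and the ‘true’ parameters are hypothetical.

    # Illustration (with synthetic data) of why empirical curve fits should not
    # be extrapolated: rate constants are generated from an assumed Arrhenius
    # law at 105-120 C, a quadratic in temperature is fitted to them, and both
    # forms are evaluated at 140 C, far outside the fitted range.
    import math
    import numpy as np

    R = 8.314
    A_true, Ea_true = 1.0e38, 3.0e5           # hypothetical 'true' Arrhenius parameters

    def k_arrhenius(T_C):
        return A_true * math.exp(-Ea_true / (R * (T_C + 273.15)))

    temps_C = np.array([105.0, 110.0, 115.0, 120.0])
    ks = np.array([k_arrhenius(T) for T in temps_C])

    poly = np.polyfit(temps_C, ks, 2)         # empirical quadratic fit, k = f(T)

    for T in (110.0, 140.0):                  # inside vs far outside the data range
        print(T, k_arrhenius(T), np.polyval(poly, T))
    # At 110 C the two agree closely; at 140 C the polynomial extrapolation
    # falls far below the Arrhenius value, illustrating the extrapolation problem.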
10.6.2 Weaknesses The primary weaknesses of mechanistic models lie in the difficulty often encountered in being able to successfully develop them. Recall that a first requirement is the need to understand the mechanism or mechanisms responsible for the observed behavior in order to conceptualize the model. A second requirement is the need to be able to characterize mathematically each element within the responsible mechanisms. In the case of bacterial spores there is an understanding of the mechanisms responsible for much of the observed experimental data seen on survivor curves.
Mechanistic models of microbial inactivation behaviour in foods
211
For example, there is general understanding of why spore populations should decrease as an exponential decay based upon scientific principles of molecular biophysics, and why ‘shoulders’ should occur as a result of competing simultaneous reactions of inactivation and activation, and why ‘tailing’ should be observed when two or more subpopulations with different thermal resistance are present. Such knowledge and understanding of responsible mechanisms is not always available or apparent in many situations. Without such knowledge mechanistic models cannot be developed. Even when mechanisms are understood, it may be difficult to characterize them mathematically. For example, in attempting to develop models for microbial population growth in predictive microbiology, scientists are often frustrated with the difficulty in modeling the initial ‘lag phase’ prior to exponential growth. There exists considerable evidence in the scientific literature pointing to the various mechanisms likely to be responsible for the wide variability and randomness in observed lag phase with replicate experiments. The problem lies in the difficulty with which any of these mechanisms can be characterized mathematically.
10.6.3 Limitations
It should be noted that the mechanistic models described here are those that obey biophysical principles at a high level of cellular organization. They say nothing about the underlying molecular biology and its highly non-linear events. A combination of both would be a valuable contribution to the further understanding of spore damage/repair and outgrowth kinetics. The models discussed here still require many ‘biological assumptions’ to be made in explaining events in spore inactivation and activation. For example, it would be most helpful if we could further explore how genomic-type data that cover relevant signaling networks, metabolism, etc. might improve the robustness and the extrapolation possibilities of the models presented here.
10.7 Future trends with mechanistic models In spite of growing confidence in the long-standing assumption that thermal inactivation of bacterial spores can be modeled by a single first-order reaction, there is general agreement that this classical model inadequately represents biological situations commonly extant in spore suspensions and food and pharmaceutical products involving spore-forming bacteria. The Rodriguez–Sapru model offers many advantages and will be preferred for many applications because it represents the broader range of situations extant during lethal heating of spore populations. Parameters of the Rodriguez–Sapru model for a specific species/ strain of spores are readily determined by isothermal experiments. Dependencies of the parameters on temperature in the lower, lethal range, e.g. 100–120 °C, are well described by Arrhenius equations that may be used to estimate values of the parameters for ultra-high temperature (UHT) process conditions. The Rodriguez–
Sapru model obviates heat shock commonly employed to permit use of the classical model. With rate constants of the Rodriguez–Sapru model and their dependency on temperature known for a specific, single species/strain of spores and with dormant and activated subpopulations of those spores known for a specific, unsterilized suspension or product, the dynamics of those subpopulations in response to a specific thermal sterilization process and the effectiveness of the process may be estimated by computerized analyses of the model. The temperature may be constant or dynamic and in the low or UHT lethal range. The Rodriguez–Sapru model is useful for analysis and interpretation of existing and proposed sterilization processes and in the design and validation of new processes.
10.8 Sources of further information and advice Much of the material presented in this chapter serves as a summary of much more detailed work carried out by others (Rodriguez et al., 1988, 1992; Sapru et al., 1992, 1993), and can be found in the cited references that are listed at the end of the chapter. Of particular interest should be a recent Institute of Food Technologists (IFT) Summit Conference report, edited by Heldman (2003), on ‘Kinetics for Inactivation of Microbial Populations Emphasizing Models for Non-Log-Linear Microbial Survivor Curves’. This was a well-attended conference with considerable informal discussion and dialog concerning the different modeling approaches being promoted by different conference participants. The report highlights the various arguments presented, and concludes with recommendations on how kinetic data should be reported and presented in published scientific literature.
10.9 References Aiba S and Toda K (1965) An analysis of bacterial spores’ thermal death rate. J. Fermentation Technol (Japan), 43(6), 528–533. Chick H (1908) An investigation into the laws of disinfection, J Hygiene, 8, 92–158. Chick H (1910) The process of disinfection by chemical agencies and hot water, J Hygiene, 10, 237–286. Gombas DE (1987) Bacterial sporulation and germination, in TJ Montville (ed.), Topics in Food Microbiology, Vol I – Concepts in physiology and metabolism, Boca Raton, FL, CRC Press, 131–136. Gould GW (1984) Injury and repair mechanisms in bacterial spores, in MHE Andrew and AD Russel (eds), The Revival of Injured Microbes, New York, Academic Press, 199–207. Heldman DR (2003) Kinetics for inactivation of microbial populations emphasizing models for non-log-linear microbial survivor curves, IFT Summit Conference Report, Orlando, FL, Institute of Food Technologists, January 12–14. Kellerer AM (1987) Models of cellular radiation action, in RG Freeman (ed.), Kinetics of Non-homogeneous Processes, New York, John Wiley and Sons, 305–375. Keynan A and Halvorson H (1965) Transformation of a dormant spore into a vegetative cell, in LL Campbell and HO Halvorson (eds), Spores III, Washington, D.C., American Society of Microbiology, 174–180.
Lewis JC, Snell NS and Alderton G (1965) Dormancy and activation of bacterial spores, in LL Campbell and HO Halvorson (eds), Spores III, Washington, D.C., American Society for Microbiology, 47–53. Press WH, Flannery BP, Teukolsky SA and Vetterling WT (1986) Numerical Recipes: The Art of Scientific Computing, Cambridge, Cambridge University Press. Rodriguez AC (1987) Biological validation of thermal sterilization processes, PhD Dissertation, University of Florida, Agricultural and Biological Engineering Department, Gainesville, FL. Rodriguez AC, Smerage GH, Teixeira AA and Busta FF (1988) Kinetic effects of lethal temperatures on population dynamics of bacterial spores, Trans ASAE, 31(5), 1594–1606. Rodriguez AC, Smerage GH, Teixeira AA, Lindsay JA and Busta FF (1992) Population model of bacterial spores for validation of dynamic thermal processes, J Food Proc Engr, 15, 1–30. Sapru V, Teixeira AA, Smerage GH and Lindsay JA (1992) Predicting thermophilic spore population dynamics for UHT sterilization processes, J Food Sci, 57(5), 1248–1257. Sapru V, Smerage GH, Teixeira AA and Lindsay JA (1993) Comparison of predictive models for bacteria spore population response to sterilization temperatures, J Food Sci, 58(1), 223–228. SAS (1985) SAS Users Guide: Statistics, Cary, NC, SAS Institute, Inc, 575–580. Shull JJ, Cargo GT and Ernst RR (1963) Kinetics of heat activation and thermal death of bacterial spores, Appl Microbiol, 11, 485–487. Stumbo CR (1973) Thermobacteriology in Food Processing (2nd edn), New York, Academic Press. Teixeira AA (1992) Thermal process calculations, in DR Heldman and DB Lund (eds), Handbook of Food Engineering, New York, Marcel Dekker, 565–567. Watson HE (1908) A note on the variation of the rate of disinfection with change in the concentration of the disinfectant, J Hygiene, 8, 536–542. Williams VR and Williams HB (1973) Basic Physical Chemistry for the Life Sciences (2nd edn), San Francisco, CA, WH Freeman and Co.
11 Modelling microbial interactions in foods F. Leroy and L. De Vuyst, Vrije Universiteit Brussel, Belgium
11.1 Introduction Food products are generally to be considered as complex microbial ecosystems, consisting of various sets of heterogeneous microbial populations that interact with each other and with their environment (Fig. 11.1). Nevertheless, the complexity of microbial interactions and the implications of competitive growth in foods are frequently overlooked in predictive microbiology and other modelling studies (McDonald and Sun, 1999; Malakar et al., 2003). Such microbial interactions may play an important role in the development of foodborne pathogens or spoilage. The growth of Escherichia coli in ground beef, for example, is dependent on both its initial population density and that of competing organisms (Coleman et al., 2003). In some cases, microbial interactions are promoting microbial growth, such as the protocooperation between Streptococcus thermophilus and Lactobacillus delbrueckii subsp. bulgaricus in yoghurt (Béal et al., 1994; Courtin et al., 2002; Ginovart et al., 2002). In the latter example, interactions are related to the production of growth-promoting compounds by S. thermophilus (formate, CO2) and peptide-generating protease activity by Lb. delbrueckii subsp. bulgaricus. Also, in several food products, the yeast population positively influences the growth of specific bacterial groups by releasing growth factors or fermentable sugars, such as in sourdough (Gobbetti, 1998), or by deacidification of the environment, such as in smear surface-ripened cheeses (Corsetti et al., 2001). In other cases, microbial interactions are of an inhibitory, antagonistic nature, leading to shifts in the microbial ecology towards the growth of the most
Fig. 11.1 Schematic representation of interactions occurring between microbial populations in a food ecosystem.
competitive microorganisms and the disappearance of the less competitive populations. In fermented sausage, for instance, the lactic acid bacteria population develops rapidly, whereas Enterobacteriaceae are outcompeted and disappear from the batter (Drosinos et al., 2005). Similarly, during spontaneous fermentation of flour, the lactic acid bacteria strongly dominate the Enterobacteriaceae (Stolz, 1999). Such antagonistic interactions are determined by several factors. They are related to competition effects for niches and nutrients, induced changes in the food environment regarding pH or redox potential, or the production of antimicrobials that target competing cells. Lactic acid bacteria, for instance, produce several antimicrobials that play a role in microbial interactions and the inhibition of undesirable microbial populations. Examples of such antimicrobials include organic acids, bacteriocins, hydrogen peroxide, reuterin, reutericyclin, phenyllactic acid, cyclic dipeptides and 3-hydroxy fatty acids (De Vuyst and Vandamme, 1994; Schnürer and Magnusson, 2005). For instance, reutericyclin is responsible for the stability of type II sourdough fermentations dominated by Lactobacillus reuteri (Gänzle et al., 2000). The effect of nutrient depletion on microbial interactions is
of particular importance in fermented foods, for instance concerning lactic acid bacterium starter cultures that grow out to high levels (Leroy and De Vuyst, 2001). In non-fermented foods, nutrient depletion usually has a lesser impact, except in the case of severe microbial spoilage.
11.2 Measuring growth and interactions of bacteria in foods
Measuring microbial interactions in foods is intrinsically a multifaceted task and, because of the complexity, a certain degree of rationalization is usually required. In principle, traditional microbiological and analytical approaches can be used to monitor changes in population sizes and in the concentrations of growth-limiting nutrients and inhibitory compounds in foods. However, straightforward monitoring is sometimes hampered, because numerous populations interact not only with each other but also with the food environment, in many ways. Measurement of the various populations and interacting effects is therefore complex, amongst other reasons because detection systems with satisfactory discriminatory and selective power are not always available. Moreover, the food matrix can interfere strongly with measurement methods, and it is therefore not always easy to obtain reliable quantitative data. For instance, extraction of bacteriocin molecules from a food matrix, such as cheese and fermented sausage, generally leads to considerable activity losses (Foulquié Moreno et al., 2003). Moreover, once produced, the bacteriocin molecules rapidly adsorb to the sausage particles or cell surfaces, which hampers their detection. In such cases, liquid simulation media can be used to obtain kinetic data that will permit simulation of bacteriocin production in situ (Doßmann et al., 1996; Neysens et al., 2003; Verluyten et al., 2004; Leroy and De Vuyst, 2005). Another frequently applied method of rationalized simplification is to compare growth in mixed cultures of two or more populations to growth in pure cultures (Bielecka et al., 1998; Pin and Baranyi, 1998). It can, for instance, be assumed that interaction occurs from the moment that the specific growth rates of the microorganisms in mixed cultures deviate by more than 10% from those in pure cultures (Malakar et al., 1999). On some occasions, the growth kinetics remain unaltered if a population is grown in the presence of another population. For instance, the growth of Listeria innocua and Pseudomonas spp. in decontaminated meat could be predicted from their growth kinetics obtained in pure cultures, indicating that the populations did not stimulate or inhibit each other (Lebert et al., 2000). However, if interaction does occur, the responsible compound should be sought. The effect of the responsible compound on the growth of either population can then be further investigated. For instance, in the case of Lb. sanfranciscensis, specific peptides obtained by yeast activity may be necessary for good cell growth (Gobbetti, 1998; Corsetti et al., 2001).
11.3 Developing models of microbial interactions
11.3.1 Developing a general interaction model
Consider, according to a highly generalized approach (Bernaerts et al., 2004), a consortium of n microbial populations with cell densities Ni (in number of cells per unit of volume), with i ranging from one to n, and each evolving at a specific evolution rate µi (in h–1):
dNi(t)/dt = µi Ni(t)
µi = f{Ni(t), Nj(t) with i ≠ j, env(t), P(t), S(t), phys(t), ...}
[11.1]
where t is time (in hours) and µi depends on interactions within and/or between microbial populations (Ni and/or Nj, respectively), physicochemical environmental conditions (env), microbial metabolite concentrations (P), the physiological state of the cells (phys), substrate concentrations (S) and other factors. The presence or absence of the factors of each category depends on their relevance in the microbial process under study. Microbial growth is obtained when µi > 0 and microbial decay when µi < 0. The development of interaction models with a (semi-)mechanistic basis is usually based on the incorporation of growth-influencing effects that can be ascribed to all competing microbial populations into the individual growth kinetics of each single population in equation (11.1) (Breidt and Fleming, 1998; Martens et al., 1999; Leroy and De Vuyst, 2003, 2005; Leroy et al., 2005a,b; Poschet et al., 2005). Examples of such growth-influencing effects are changes in pH and in the concentration of nutrients and inhibitory compounds such as organic acids. Their quantification is a way of incorporating environmental changes due to the action of one or more competing microbial populations into the model (Malakar et al., 1999; Poschet et al., 2005).
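A minimal code skeleton for equation (11.1) can help to fix ideas: each population is given its own specific-rate function that may depend on time, on all population densities and on environmental variables, and the concrete rate expressions developed in the following subsections can then be plugged in. The structure and the placeholder rate functions below are illustrative assumptions, not taken from the cited papers.

    # Minimal skeleton for equation (11.1): n populations, each with its own
    # specific-rate function mu_i(t, N, env) that may depend on all densities
    # and on the (time-varying) environment. Rate functions are placeholders.
    from typing import Callable, Dict, List

    RateFn = Callable[[float, List[float], Dict[str, float]], float]

    def step(N: List[float], mus: List[RateFn], env: Dict[str, float],
             t: float, dt: float) -> List[float]:
        """One Euler step of dNi/dt = mu_i(...) * Ni for all populations."""
        return [Ni + dt * mu(t, N, env) * Ni for Ni, mu in zip(N, mus)]

    # Example with two populations: one grows at a constant specific rate, the
    # other is slowed as the first approaches a (hypothetical) ceiling of 1e9.
    mu_1: RateFn = lambda t, N, env: 0.6
    mu_2: RateFn = lambda t, N, env: 0.4 * (1.0 - N[0] / 1e9)

    N, t, env = [1e3, 1e4], 0.0, {"pH": 6.0}
    for _ in range(1000):
        N, t = step(N, [mu_1, mu_2], env, t, 0.01), t + 0.01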
11.3.2 Developing a model to describe interactions that cause growth inhibition
Growth inhibition of a given microbial population can be ascribed to self-inhibition (Leroy and De Vuyst, 2001), for instance due to its own depletion of nutrients, as well as to inhibitory effects caused by competing populations (Fig. 11.2). Nutrient depletion is generally not of concern in foods that are not subject to a high degree of spoilage, except for some fermented food products such as fermented sausage, where the depletion of sugar or, possibly, of growth-stimulating peptides may be of importance. If one is dealing specifically with a microbial interaction situation that causes growth inhibition, the inhibitory effect γx on a population Ni, due to a specific effect x caused by one or more other populations, can be quantified as the ratio of the actual value of the specific growth rate (µi) to its optimum value in the absence of the
Fig. 11.2 Inhibitory effects on the evolution rate of a microbial population in a food ecosystem containing competing microbial populations.
growth-limiting effect, i.e. at the optimum value of x (µi opt(x)), provided that all other factors remain constant: γx = µi /µi opt(x)
[11.2]
where γx for a studied effect ranges from one (no inhibition) to zero (complete inhibition). Such changes in specific growth rate are usually of a dynamic nature, thus changing over time. For instance, the growth inhibition of a certain bacterial population can be related to a continuous decrease in pH due to acidification by one or more other populations. This can for instance be quantified as (Rosso et al., 1995; Malakar et al., 1999):
γpH = [(pH – pHmin)(pH – pHmax)] / [(pH – pHmin)(pH – pHmax) – (pH – pHopt)^2]
[11.3]
The latter equation is a cardinal function based on the maximum, minimum and optimum values of pH for growth to be estimated in the absence of organic acids. Besides the pH effect, specific inhibition by undissociated organic acid molecules should be considered [see below, equation (11.6)]. Further, growth inhibition can be due to the exhaustion of a certain nutrient or substrate S (e.g. glucose in grams or moles per unit of volume) by a competing population. This effect can be described with Monod-type kinetics (Malakar et al., 1999): γ[S] = S/(KS + S)
[11.4]
with KS the Monod constant (in grams or moles per unit of volume). If the substrate
is a sugar, the changes in S can be calculated with Pirt-type equations that take into account sugar consumption for biomass production as well as for cell maintenance and product formation, for all relevant populations (Pirt, 1975; Malakar et al., 1999). A similar approach is followed in the so-called S-model by Poschet et al. (2005). However, such ‘smooth’ and somewhat simplistic exhaustion kinetics are not always satisfactory. Nutrient depletion sometimes follows more complex and less smooth patterns, especially for lactic acid bacteria that have high and complex nutrient requirements. In such cases, the nutrient depletion model offers an alternative (Leroy and De Vuyst, 2001):
γ[CNS] = 1                                             for [X] ≤ [X1]
γ[CNS] = 1 – I1 ([X] – [X1])                           for [X1] < [X] ≤ [X2]
γ[CNS] = 1 – I1 ([X2] – [X1]) – I2 ([X] – [X2])        for [X] > [X2]
[11.5]
with [X1] and [X2] the critical cell densities for nutrient inhibition (in cfu l–1) and I1 and I2 the dimensionless inhibition slopes. The values of [X1], [X2], I1 and I2 are expected to be a function of the competing microbial populations. Finally, the effect of growth inhibitory metabolites can be quantified. For instance, the inhibition due to the presence of an organic acid A (e.g. lactic acid) is related to the concentration of undissociated organic acid ([HA], in grams or moles per unit of volume) as follows (Breidt and Fleming, 1998; Leroy and De Vuyst, 2001):
γ[A] = 1 – ([HA]/[HA]max)^n
[11.6]
with n a fitting exponent and [HA]max the maximum value of [HA] that still allows growth. The increase in [A], and the resulting, pH-dependent [HA], can be calculated based on growth- and non-growth-related terms of the acid-producing population(s) (Luedeking and Piret, 1959; Malakar et al., 1999). This accumulation of undissociated organic acid as a specific antimicrobial adds to the effect of the decrease in pH caused by acidification [see above, equation (11.3)]. Undissociated organic acid molecules are uncharged and may therefore cross the cell membrane and acidify the cytoplasm, resulting in cell death. A similar approach for product inhibition is followed in the P-model applied by Poschet et al. (2005). It is important to realize that growth inhibition due to microbial interactions is seldom the result of one single growth inhibitory action, but usually of the combination of several effects. Interestingly, the combined effects of different inhibitory actions can sometimes be obtained by simply considering such effects as multiplicative. According to the γ-concept, overall growth inhibition can be obtained by multiplying the individual γ-functions, including the ones listed above (te Giffel and Zwietering, 1999; Leroy and De Vuyst, 2001). If this is not the case, for instance at the growth/no growth interface, more complicated and interdependent quantification methods have to be developed.
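The following sketch shows the multiplicative γ-concept in code, combining the cardinal pH function of equation (11.3), the Monod term of equation (11.4) and the undissociated-acid term of equation (11.6). The cardinal values and other parameters are illustrative only and would in practice be estimated for the organism and food of interest.

    # Sketch of the gamma-concept: the specific growth rate is the optimum rate
    # multiplied by gamma factors for pH (eq. 11.3), substrate (eq. 11.4) and
    # undissociated organic acid (eq. 11.6). All parameter values are illustrative.

    def gamma_pH(pH, pH_min=4.0, pH_opt=6.5, pH_max=9.0):
        # valid for pH_min < pH < pH_max; outside this range growth is assumed zero
        num = (pH - pH_min) * (pH - pH_max)
        return num / (num - (pH - pH_opt) ** 2)

    def gamma_S(S, K_S=0.5):                      # Monod term, S in g/l
        return S / (K_S + S)

    def gamma_A(HA, HA_max=8.0, n=2.0):           # undissociated acid, g/l
        return max(0.0, 1.0 - (HA / HA_max) ** n)

    def mu(mu_opt, pH, S, HA):
        """Overall specific growth rate under the multiplicative gamma-concept."""
        return mu_opt * gamma_pH(pH) * gamma_S(S) * gamma_A(HA)

    print(mu(mu_opt=0.8, pH=5.2, S=2.0, HA=1.5))  # h^-1, illustrative values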
11.3.3 Developing descriptive interaction models
A mechanistic approach requires a complex mathematical model with many variables. Therefore, less complex descriptive interaction models are built by simply quantifying how much the growth of a population is affected by the growth of one or more other populations. To this end, the growth kinetics of pure cultures are usually compared with the growth kinetics obtained in mixed cultures. Statistical tests (e.g. the F-test) are then used to evaluate whether the interaction effects are significant (Pin and Baranyi, 1998). Also, the growth of two mixed populations Ni and Nj (in number of cells per unit of volume) can be described with a relatively simple Lotka-Volterra-type model for two-species competition for a limited amount of resources, as follows (Dens et al., 1999; Vereecken et al., 2000; Dens and Van Impe, 2001):
µi = µmax,i [Qi(t)/(1 + Qi(t))] (1/Nmax,i) (Nmax,i – Ni – αij Nj)
[11.7]
with Nmax,i the maximum population density of species i when no other species is present, Qi(t) the physiological state of the cells needed to describe the lag phase and αij a coefficient of interaction measuring the effect of species j on species i. As is the case for the specific growth rate, it is conceivable that the lag phase could also be affected by microbial interactions. However, conclusive research on this topic is currently lacking.
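A minimal sketch of equation (11.7) for two competing species is given below. The chapter does not specify the dynamics of the adjustment function Qi(t); here it is simply assumed to grow exponentially at µmax, as in the Baranyi and Roberts adjustment, and all parameter values are illustrative.

    # Sketch of the two-species Lotka-Volterra-type model of equation (11.7),
    # with a Baranyi-type adjustment function Q(t) for the lag phase (assumed).
    # Parameter values are illustrative only.

    def mu_lv(mu_max, Q, N_self, N_other, N_max, alpha):
        """Equation (11.7): specific growth rate of one species in mixed culture."""
        return mu_max * (Q / (1.0 + Q)) * (N_max - N_self - alpha * N_other) / N_max

    def simulate(N1, N2, Q1, Q2, t_end, dt=0.01,
                 mu1=0.7, mu2=0.5, Nmax1=1e9, Nmax2=1e9, a12=0.8, a21=1.2):
        t = 0.0
        while t < t_end:
            dN1 = mu_lv(mu1, Q1, N1, N2, Nmax1, a12) * N1
            dN2 = mu_lv(mu2, Q2, N2, N1, Nmax2, a21) * N2
            # Q assumed to grow exponentially at mu_max (Baranyi-type adjustment)
            Q1, Q2 = Q1 + dt * mu1 * Q1, Q2 + dt * mu2 * Q2
            N1, N2, t = N1 + dt * dN1, N2 + dt * dN2, t + dt
        return N1, N2

    print(simulate(N1=1e3, N2=1e3, Q1=0.1, Q2=0.1, t_end=24.0))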
11.3.4 Developing interaction models based on antimicrobial activity
An extension of the model presented above incorporates the antagonistic effects due to the production of antimicrobials (e.g. bacteriocins) by population Ni on a sensitive population Nj, as follows (Pleasants et al., 2001; Leroy et al., 2005a,b):
µj = µmax,j [Qj(t)/(1 + Qj(t))] (1/Nmax,j) (Nmax,j – Nj – αji Ni) – ψB
[11.8]
with ψ the bacteriocidal coefficient (in units of volume per hour per activity unit) and B the concentration of antimicrobial (in activity units per volume). Figure 11.3 gives an illustration of the inactivation of a bacteriocin-sensitive population of L. innocua LMG 13568 in a mixed culture with the bacteriocin-producing strain Lb. sakei CTC 494 (Leroy et al., 2005b). The data are modelled with equation (11.8). A quick drop in Listeria counts is observed from the very moment bacteriocin is being detected. A bacteriocin-resistant subpopulation of L. innocua LMG 13568 is obtained, but is not able to grow out because of the stringent environmental conditions. In such cases, it is convenient to split the target population into a bacteriocin-resistant and a bacteriocin-sensitive subpopulation. In monoculture, L. innocua LMG 13568 does not display an inactivation pattern.
Fig. 11.3 Evolution of Listeria innocua LMG 13568 counts (in cfu ml–1) in MRS medium at 20 °C and a constant pH of 5.2, containing 40 g l–1 of sodium chloride and 200 ppm of sodium nitrite, in monoculture and in mixed culture with the bacteriocin-producing strain Lactobacillus sakei CTC 494. Bacteriocin activity by Lb. sakei CTC 494 is presented in Arbitrary Units (AU) per ml, and detection limits inherent to the associated bioassay are given by the bars. Full lines are according to the interaction model by Leroy et al. (2005a,b). The dashed line indicates the onset of bacteriocin production, resulting in a decrease of Listeria counts.
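Returning to equation (11.8), the bactericidal term –ψB can be bolted onto the previous sketch as shown below. Here B is treated as a given input and the values of ψ and B are hypothetical; in a full model B would itself be described as a function of the producer population and the environment.

    # Equation (11.8): the specific rate of the sensitive population acquires an
    # extra bactericidal term -psi*B. B is taken as a given input here; psi and
    # the B value are hypothetical.

    def mu_sensitive(mu_max, Q, N_self, N_other, N_max, alpha, psi, B):
        base = mu_max * (Q / (1.0 + Q)) * (N_max - N_self - alpha * N_other) / N_max
        return base - psi * B          # becomes negative (inactivation) at high B

    print(mu_sensitive(mu_max=0.5, Q=1.0, N_self=1e5, N_other=1e8,
                       N_max=1e9, alpha=1.0, psi=2e-4, B=5000.0))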
11.4 Applications and implications for food processors
11.4.1 Improvement of food safety and shelf stability
Modelling of microbial processes in foods, in particular predictive food microbiology, yields quantitative information that is crucial for process control, food safety and the prevention of food spoilage (McDonald and Sun, 1999). Validated predictive models represent an essential tool to aid the exposure assessment phase of ‘quantitative risk assessment’ (Ross et al., 2000). In the framework of quantitative risk assessment, models for microbial interaction should not be neglected, because the behaviour of foodborne pathogens may be strongly affected by the growth of competing microbiota, in particular in fermented products or where differences in competitiveness are relatively large.
11.4.2 Fermented foods Fermented foods, characterized by strong antimicrobial production and pronounced microbial interactions, represent an interesting field of application for models that describe microbial interactions. Lactic acid bacteria, yeasts and fungi play an important role in the production of fermented foods. In such foods with high microbial loads, lactic acid bacteria generally display complex patterns of inhibitory actions towards other microbial populations, including thorough acidification and the production of several antimicrobial compounds. As a result, lactic acid bacteria populations are generally highly competitive in food ecosystems, provided that they are adapted to the substrate (Leroy et al., 2002). Based on their antagonistic activities, the use of lactic acid bacteria as functional starter cultures in fermented foods or as bioprotective cultures in non-fermented foods is a way to stabilize the food ecology (De Vuyst, 2000; Leroy and De Vuyst, 2004). Nevertheless, for this strategy to be successful, it is essential to understand the mechanisms that lead to the competitive advantages of lactic acid bacteria and to quantify their interaction effects with spoilage bacteria and foodborne pathogens in food systems (Breidt and Fleming, 1998). Studying such interactions can help to optimize process technology, because it can reveal and quantify crucial elements such as inactivation limitations due to bacteriocin-resistant subpopulations (Leroy et al., 2005a,b).
11.4.3 Product quality and attractiveness Applications should not only focus on antagonisms. In situations where interactions are of a positive nature, for instance during protocooperation by yoghurt cultures (Courtin et al., 2002; Ginovart et al., 2002), beneficial effects such as enhanced acidification and aroma formation may be obtained compared to pure cultures. An optimization of the beneficial effects is to be obtained by steering the interactions towards improved performance of the interacting microbial populations.
11.5 Future trends 11.5.1 Improvement of the model architecture While models are very useful decision support tools, they remain simplified representations of reality. Therefore, model predictions should be used with cognisance of microbial ecology principles that may not be included in the model, and efforts should be made to incorporate such missing principles (Ross et al., 2000). Microbial risk assessment processes, for instance, are evolving to incorporate more science to replace judgements that do not hold up to hypothesis testing in controlled scientific experiments (Coleman et al., 2003). This certainly leaves room for improvement of the interaction models. Future trends will probably deal with the improved extrapolation of laboratory studies in liquid media to real food environments. Growth models based on experiments in broth
frequently overpredict growth in the actual food environment (Houtsma et al., 1996; Ross et al., 2000; Coleman et al., 2003). One of the major reasons is probably because food is often solid or semi-solid and spatially heterogeneous. In such structured, (semi-)solid food matrices, microorganisms generally grow as immobilized colonies (Wilson et al., 2002). In fermented sausage, for instance, the distance between ‘nests’ of lactobacilli varies from 100 to 5000 µm (Katsaras and Leistner, 1988, 1991). As a result, the spatial separation of the colonies and the gradients caused by diffusion limitations (e.g. pH, substrate, oxygen, antimicrobials) will influence potential interactions between them. Indeed, spatial differentiation has an influence on the behaviour (coexistence/extinction) of the populations (Dens and Van Impe, 2001). This opens the way for the development of spatial or territory models. In such models, the statistical geometric distribution of the colonies, the local exhaustion and diffusion of nutrients, and the local accumulation and diffusion of inhibitory compounds have to be taken into account (Thomas et al., 1997; Dens and Van Impe, 2001).
11.5.2 Modelling of positive interactions The study of positive interactions instead of negative, inhibitory-type interactions seems promising. In many foods, food quality and safety rely heavily on the presence of distinct beneficial populations. In yoghurt, for instance, the interplay of formic acid, peptides, and acidity between S. thermophilus and Lb. delbrueckii subsp. bulgaricus should be modelled to understand bacterial growth kinetics leading to protocooperation (Ginovart et al., 2002). Likewise, modelling can be applied to study the remarkably stable associations between certain yeasts and lactic acid bacteria observed in sourdough (Gänzle et al., 1998). In fermented sausage, certain lactic acid bacteria, catalase-positive cocci, yeasts, and fungi all play a role, and it is evident that a certain degree of interaction between the microbial groups will occur, in particular regarding flavour development (Sunesen et al., 2004). Interaction studies should therefore not only focus on growth and inactivation of microorganisms, but also try to quantify other aspects such as the production of aroma compounds or molecules that are advantageous to the health of the consumer. Moreover, little quantitative information is available about the stimulatory effects of proteolysis of food proteins by certain microbial populations on the growth of other populations, despite the important role of hydrolysis of proteins in microbial growth kinetics (de la Broise et al., 1998).
11.5.3 Application of interaction models in population dynamics and ecology Based on the increased knowledge on heterogeneity within microbial strains, interactions between different subpopulations of a microorganism may be considered. Consideration of the fitness costs of certain specific mutations and their advantages in microbial interactions may help to predict the prevalence and stability of a mutant within a certain population. For instance, the occurrence of
bacteriocin-resistant subpopulations of Listeria on strain level may be the result of repeated exposure to bacteriocin-producing lactic acid bacteria. Also, in certain cases, the effects of cell-to-cell communication and quorum sensing should be explored when considering microbial interactions and growth kinetics (Kaprelyants and Kell, 1996). This may for instance help to explain the ecological phenomenon where growth inhibition of slow-growing microorganisms is related to the total density of all the microbial populations present in the food (Coleman et al., 2003). Such interaction models would have to be of a mechanistic nature and may be based on molecular biology data.
11.5.4 Integrating interaction models in risk assessment Finally, integration of microbial interaction models in risk assessment is a challenge for the future. However, one should bear in mind that interactions between microbial populations are part of a highly complex process and that any reduction of an ecosystem to a few populations, or inappropriate selection of strains, involves considerable risks (Coleman et al., 2003). For instance, the level of antagonism in the inhibition of Salmonella Typhimurium in the presence of five representative species of indigenous competitors of the gut microbiota in a continuous culture broth system was reduced if any one of the competitors were removed from the culture (Ushijima and Seto, 1991). Such results caution against oversimplifications of complex ecosystems. Therefore, targeted research will be needed to calibrate adjustments of available predictive models for risk assessment with the dense and diverse populations of food ecosystems (Coleman et al., 2003).
11.6 References Béal C, Spinnler HE and Corrieu G (1994) Comparison of growth, acidification, and productivity of pure and mixed cultures of Streptococcus salivarius subsp. thermophilus 404 and Lactobacillus delbrueckii subsp. bulgaricus, Appl Microbiol Biotechnol, 41, 95– 98. Bernaerts K, Dens E, Vereecken K, Geeraerd AH, Standaert AR, Devlieghere F, Debevere J and Van Impe J (2004) Concepts and tools for predictive modeling of microbial dynamics, J Food Prot, 67, 2041–2052. Bielecka M, Biedrzycka E, Biedrzycka E, Smoragiewicz W and Smieszek M (1998) Interaction of Bifidobacterium and Salmonella during associated growth, Int J Food Microbiol, 45, 151–155. Breidt F and Fleming HP (1998) Modeling of the competitive growth of Listeria monocytogenes and Lactococcus lactis in vegetable broth, Appl Environ Microbiol, 64, 3159–3165. Coleman ME, Tamplin ML, Phillips JG, and Marmer BS (2003) Influence of agitation, inoculum density, pH, and strain on the growth parameters of Escherichia coli O157:H7 – relevance to risk assessment, Int J Food Microbiol, 83, 147–60. Corsetti A, Rossi J and Gobbetti M (2001) Interactions between yeasts and bacteria in the smear surface-ripened cheeses, Int J Food Microbiol, 69, 1–10. Courtin P, Monnet V and Rul F (2002) Cell-wall proteinases PrtS and PrtB have a different role in Streptococcus thermophilus/Lactobacillus bulgaricus mixed cultures in milk, Microbiology, 148, 3413–3421.
de la Broise D, Dauer G, Gildberg A and Guérard F (1998) Evidence of positive effect of peptone hydrolysis rate on Escherichia coli culture kinetics, J Mar Biotechnol, 6, 111– 115. Dens EJ, Vereecken KM and Van Impe JF (1999) A prototype model structure for mixed microbial populations in food products, J Theor Biol, 201, 159–170. Dens EJ and Van Impe JF (2001) On the need for another type of predictive model in structured foods, Int J Food Microbiol, 64, 247–260. De Vuyst L (2000) Technology aspects related to the application of functional starter cultures, Food Technol Biotechnol, 38, 105–112. De Vuyst L and Vandamme EJ (1994) Bacteriocins of Lactic Acid Bacteria: Microbiology, Genetics and Applications, London, Blackie Academic and Professional. Doßmann MU, Vogel RF and Hammes WP (1996) Mathematical description of the growth of Lactobacillus sake and Lactobacillus pentosus under conditions prevailing in fermented sausages, Appl Microbiol Biotechnol, 46, 334–339. Drosinos EH, Mataragas M, Xiraphi N, Moschonas G, Gaitis F and Metaxopoulos J (2005) Characterization of the microbial flora from a traditional Greek fermented sausage, Meat Sci, 69, 307–317. Foulquié Moreno MR, Rea MC, Cogan TM and De Vuyst L (2003) Applicability of a bacteriocin-producing Enterococcus faecium as a co-culture in Cheddar cheese manufacture, Int J Food Microbiol, 81, 73–84. Gänzle MG, Ehmann M and Hammes WP (1998) Modeling of growth of Lactobacillus sanfranciscensis and Candida milleri in response to process parameters of sourdough fermentation, Appl Environ Microbiol, 64, 2616–2623. Gänzle MG, Höltzel A, Walter J, Jung G and Hammes WP (2000) Characterization of reutericyclin produced by Lactobacillus reuteri LTH2584, Appl Environ Microbiol, 66, 4325–4333. Ginovart M, López D, Valls J and Silbert M (2002) Simulation modelling of bacterial growth in yoghurt, Int J Food Microbiol, 73, 415–425. Gobbetti M (1998) The sourdough microflora: interactions of lactic acid bacteria and yeasts, Trends Food Sci Technol, 9, 267–274. Houtsma PC, Kant-Muermans ML, Rombouts FM and Zwietering MH (1996) Model for the combined effect of temperature, pH, and sodium lactate on growth rates of Listeria innocua in broth and Bologna-type sausages. Appl Environ Microbiol, 62, 1616–1622. Kaprelyants AS and Kell DB (1996) Do bacteria need to communicate with each other for growth? Trends Microbiol, 4, 237–241. Katsaras K and Leistner L (1988) Topographie der Bakterien in der Rohwurst, Fleischwirtschaft, 68, 1295–1298. Katsaras K and Leistner L (1991) Distribution and development of bacterial colonies in fermented sausages, Biofouling, 5, 115–124. Lebert I, Robles-Olvera V and Lebert A (2000) Application of polynomial models to predict growth of mixed cultures of Pseudomonas spp. and Listeria in meat, Int J Food Microbiol, 61, 27–39. Leroy F and De Vuyst L (2001) Growth of the bacteriocin-producing Lactobacillus sakei strain CTC 494 in MRS broth is strongly reduced due to nutrient exhaustion: a nutrient depletion model for the growth of lactic acid bacteria, Appl Environ Microbiol, 67, 4407– 4413. Leroy F and De Vuyst L (2003) A combined model to predict the functionality of the bacteriocin-producing Lactobacillus sakei strain CTC 494, Appl Environ Microbiol, 69, 1093–1099. Leroy F and De Vuyst L (2004) Functional lactic acid bacteria starter cultures for the food fermentation industry, Trends Food Sci Technol, 15, 67–78. 
Leroy F and De Vuyst L (2005) Simulation of the effect of sausage ingredients and technology on the functionality of the bacteriocin-producing Lactobacillus sakei CTC 494 strain, Int J Food Microbiol, 100, 141–152.
Leroy F, Verluyten J, Messens W and De Vuyst L (2002) Modelling contributes to the understanding of the different behaviour of bacteriocin-producing strains in a meat environment, Int Dairy J, 12, 247–253. Leroy F, Lievens K and De Vuyst L (2005a) Interactions of meat-associated bacteriocinproducing lactobacilli with Listeria innocua under stringent sausage fermentation conditions, J Food Prot, 68, 2078–2084. Leroy F, Lievens K and De Vuyst L (2005b) Modeling bacteriocin resistance and inactivation of Listeria innocua LMG 13568 by Lactobacillus sakei CTC 494 under sausage fermentation conditions, Appl Environ Microbiol, 71, 7567–7570. Luedeking R and Piret EL (1959) A kinetic study of the lactic acid fermentation. Batch processes at controlled pH, J Biochem Microbiol Technol Eng, 1, 393–412. Malakar PK, Martens DE, Zwietering MH, Béal C, and van ’t Riet K (1999) Modelling the interactions between Lactobacillus curvatus and Enterobacter cloacae. II. Mixed cultures and shelf life predictions, Int J Food Microbiol, 51, 67–79. Malakar PK, Barker GC, Zwietering MH and van ’t Riet K (2003) Relevance of microbial interactions to predictive microbiology, Int J Food Microbiol, 84, 263–272. Martens DE, Béal C, Malakar PK, Zwietering MH, and van ’t Riet K (1999) Modelling the interactions between Lactobacillus curvatus and Enterobacter cloacae. I. Individual growth kinetics, Int J Food Microbiol, 51, 53–65. McDonald K and Sun D-W (1999) Predictive food microbiology for the meat industry: a review, Int J Food Microbiol, 52, 1–27. Neysens P, Messens W and De Vuyst L (2003) Effect of sodium chloride on growth and bacteriocin production by Lactobacillus amylovorus DCE 471, Int J Food Microbiol, 88, 29–39. Pin C and Baranyi J (1998) Predictive models as means to quantify the interactions of spoilage organisms, Int J Food Microbiol, 41, 59–72. Pirt SJ (1975) Principles of Microbe and Cell Cultivation, London, Blackwell. Pleasants AB, Soboleva TK, Dykes GA, Jones RJ and Filippov AE (2001) Modelling of the growth of populations of Listeria monocytogenes and a bacteriocin-producing strain of Lactobacillus in pure and mixed cultures, Food Microbiol, 18, 605–615. Poschet F, Vereecken KM, Geeraerd AH, Nicolaï BM and Van Impe JF (2005) Analysis of a novel class of predictive microbial growth models and application to coculture growth, Int J Food Microbiol, 100, 107–124. Ross T, Dalgaard P and Tienungoon S (2000) Predictive modeling of the growth and survival of Listeria in fishery products, Int J Food Microbiol, 62, 231–245. Rosso L, Lobry JR, Bajard S and Flandrois JP (1995) Convenient model to describe the combined effects of temperature and pH on microbial growth, Appl Environ Microbiol, 61, 610–616. Schnürer J and Magnusson J (2005) Antifungal lactic acid bacteria as biopreservatives, Trends Food Sci Technol, 16, 70–78. Stolz P (1999) Mikrobiologie des Sauerteiges, in: G Spicher and H Stephan (eds), Handbuch Sauerteig: Biologie, Biochemie, Technologie (5th edn), Hamburg, Behr’s Verlag, 35–60. Sunesen LO, Trihaas J and Stahnke LH (2004) Volatiles in a sausage surface model – influence of Penicillium nalgiovense, Pediococcus pentosaceus, ascorbate, nitrate and temperature, Meat Sci, 66, 447–456. te Giffel MC and Zwietering MH (1999) Validation of predictive models describing the growth of Listeria monocytogenes, Int J Food Microbiol, 46, 135–149. 
Thomas LV, Wimpenny JWT and Barker GC (1997) Spatial interactions between subsurface bacterial colonies in a model system: a territory model describing the inhibition of Listeria monocytogenes by a nisin-producing lactic acid bacterium, Microbiology, 143, 2575– 2582. Ushijima T and Seto A (1991) Selected faecal bacteria and nutrients essential for antagonism of Salmonella Typhimurium in anaerobic continuous flow cultures, J Med Microbiol, 35, 111–117.
Modelling microbial interactions in foods
227
Vereecken K, Dens EJ and Van Impe JF (2000) Predictive modeling of mixed microbial populations in food products: evaluation of two-species models, J Theor Biol, 205, 53– 72. Verluyten J, Leroy F and De Vuyst L (2004) Effects of different spices used in production of fermented sausages on growth of and curvacin A production by Lactobacillus curvatus LTH 1174, Appl Environ Microbiol, 70, 4807–4813. Wilson PDG, Brocklehurst TF, Arino S, Thuault D, Jakobsen M, Lange M, Farkas J, Wimpenny JWT and Van Impe JF (2002) Modelling microbial growth in structured foods: towards a unified approach, Int J Food Microbiol, 73, 275–289.
12 A kinetic model as a tool to understand the response of Saccharomyces cerevisiae to heat exposure F. Mensonides, University of Amsterdam (current address: EML Research gGmbH, Germany), B. Bakker, Vrije Universiteit Amsterdam, The Netherlands, and S. Brul, K. Hellingwerf and J. Teixeira de Mattos, University of Amsterdam, The Netherlands
12.1 Introduction The increasing demand for economically feasible, effective means to preserve food without decreasing the quality constitutes a new challenge to microbiology. Although traditional food preservation methods like pasteurisation, pickling, salting and drying are very effective in preventing spoilage of food by microorganisms, it is recognised that these rather harsh treatments not only affect the microbial cells present, but may also negatively affect the physicochemical properties of the food itself and ultimately its nutritional value. In addition, the modern consumer is critical and demands safe food products that have retained organoleptic qualities (i.e. flavour, colour, texture and nutritional value) and that have both a long closed and a long open shelf life. In order to meet these new demands, a thorough understanding of fundamental phenomena such as the upper limits of growth, survival and cell death and, for example, the temperature-dependence of enzyme-catalyzed reaction rates is needed. In addition, knowledge about the mechanisms by which the living cell adapts to adverse conditions and the rate at which such adaptations occur is a prerequisite for designing a rational strategy that guarantees today’s food safety and quality norms on the one hand and economic feasibility on the other.
Clearly then, simple descriptive models for growth, survival or cell death do not suffice. Instead, there is a call for models that have as mechanistic a basis as possible. It is generally assumed that such models will gain in robustness and that their predictive value will increase. In general, unicellular organisms, including Saccharomyces cerevisiae, encounter a constantly changing environment in their natural habitat. Therefore, it is not surprising that they have developed an extensive set of mechanisms to sense and respond adequately to (potentially) threatening conditions like nutrient depletion,30,43,61 changes in temperature,1–4,10,20–22,31,32,40,46–48,52,58,59 external osmolarity and desiccation,8,19,61 oxidative damage,9 etc.6,11,15,17–19,43,44,47,51,53,62 Some of these response mechanisms are constitutive and depend on the prior growth conditions. In the case of yeast, the cell cycle phase and growth phase (exponential, stationary, etc.) are particularly important. Other mechanisms are induced upon stress and depend on the type of stress. Because of an overlap in the responses to various stresses, acquisition of resistance to one type of stress can lead to increased tolerance of another type.43,28,29 These stress responses of microorganisms have been studied for many years, albeit with opposite interests. For instance, there is an interest in increasing the survival or performance of fermenting yeasts under certain stressful conditions, with the baking industry and wine making as the obvious examples. In the former example, cells should not be directly inactivated upon a temperature increase, should be able to cope with high-sugar environments and should withstand freezing temperatures,1 whereas in the latter the ethanol produced by the cells themselves will actually cause stress.44,63 On the other hand, these stress responses have been and still are studied so that unwanted microorganisms can be killed more effectively, for instance in food preservation5,33 or for application in the development of novel antibiotics by the pharmaceutical industry. As outlined, the harsh methods used here are being reevaluated, and today the food industry is developing ‘minimal processing’ techniques to produce safe products while maintaining the functionality and quality of the raw materials. One promising strategy is the application of a series of subsequent but relatively mild treatments that are individually not lethal but collectively present too many (physiological) hurdles for the organism to survive.23,27,50 Obviously, the subtler the separate treatments, the greater the need to understand their effects on the cell’s physiological and defence mechanisms. In this chapter we will elaborate on the response of yeast (Saccharomyces cerevisiae) to heat as this is an obvious stress factor applied in the preservation of food. We opted as an initial study to quantitatively assess the behaviour of the cells upon a continuous mild thermal stress since this minimised the need to include population heterogeneity in the analysis whilst still arriving at meaningful results. A systems biological approach was followed: data on the heat response were collected at the level of anabolism (growth, survival and death), catabolism (carbon and electron fluxes and energetic effects) and gene expression (the latter level will not be discussed here) (see35,37) and used to validate a mathematical kinetic model that
was constructed on the basis of known kinetics of essential pathways in yeast. We will show firstly that the range of growth temperatures in which the cell switches from regular growth to growth arrest and ultimately loss of viability is very narrow and secondly that the initial response to a temperature shift is vastly different from the long-term adaptive response. Finally, our quantitative analysis will illustrate that the maintenance energy requirement (i.e. the energy needed to keep the existing cell functional, in contrast to the energy needed for growth) increases dramatically with increasing temperature. We will first present the experimental data and subsequently present a kinetic model that, despite its simplicity, gives a satisfactory description of our experimental results with respect to the effects of temperature on growth and death and to catabolic fluxes and metabolite changes. We will limit ourselves to this simple model; for a more extended model that includes specific catabolic and anabolic events upon a temperature shift, with an even greater predictive power, the reader is referred to 36.
12.2 Experimental data

12.2.1 The method
The growth conditions and experimental set-up are extensively described in 39. Essentially, Saccharomyces cerevisiae strain X2180-1A (MATα SUC2 mal mel gal2 CUP1) was grown in fermentors at 28 °C with a stirring rate of 700 rpm and an aeration rate of one fermentor volume of air per minute. The growth medium consisted of 0.67% YNB w/o amino acids (Difco) and 1% glucose in 100 mM potassium phthalate at pH 5.0. At mid-exponential growth (OD600 nm 0.5), the growth temperature was raised to the required stress temperatures. This was achieved by switching to a thermostat (water bath) that was 5 °C warmer than the desired temperature. The desired temperature was reached within 5 min and an overshoot was prevented. Because the cells were neither transferred nor harvested nor stressed in any other way, the method guarantees that the observed effects are due to the temperature shift only. In the series of experiments shown here, the growth temperature of exponentially growing cells was raised from 28 °C to a temperature in the range of 37–43 °C. In addition, 37, 41 and 42 °C were used as initial growth temperatures. All temperature shifts were performed in triplicate, and control measurements at 28 °C (non-stressed cells) were carried out. For each temperature shift, a typical data set is shown.
12.2.2 Growth and viability: the thin line between life and death
The specific growth rate of S. cerevisiae was found to increase significantly upon a shift of the growth temperature from 28 °C (0.28 h–1) to 37 °C (0.35 h–1) (Fig. 12.1). No further increase was seen upon the transition from 28 to 39 °C. The transition to 41 °C, however, resulted in large changes in the growth profile. First, a lag phase of approximately two hours was seen, after which the cells resumed growth, albeit at a low specific rate (0.10 h–1). An increase of the growth temperature to 42 °C resulted in a complete arrest of growth.

Fig. 12.1 The effect of several temperature shifts (from 28 °C to the indicated final temperatures) on the specific growth rate. The number of cells was counted with a Coulter counter. From 38.

Counting cells by Coulter counter includes all cells, dead or alive. In order to assess the percentage of cells that survive a temperature shift, viable counts were done by plating samples from the cultures exposed to the shift. From the data presented in Fig. 12.2, it can be seen that throughout all the temperature shifts up to 42 °C, cell viability remained at about 90% of the total number of cells. In contrast, an increase of the growth temperature to 43 °C resulted in a complete and rapid loss of cell viability. Essentially similar results were obtained when higher initial growth temperatures were used, although differences were seen in the adaptation profiles. From these data (not shown), the most important conclusion that could be drawn is that growth at an elevated temperature endows the cells with some heat resistance to higher temperatures. The details of the relationship between initial growth temperature and acquired resistance, however, are complex and non-linear.30

Fig. 12.2 The effect of several temperature shifts (from 28 °C to the indicated final temperatures) on survival. The number of viable cells was counted as colony forming units (CFUs). From 38.

Clearly then, there is a thin line between growth, growth arrest and loss of cell viability with regard to the ambient temperature. The temperature range in which these changes from growth to cell death take place is even smaller than has been reported before.59 In particular, the transition from survival (42 °C) to cell death (43 °C) is abrupt. Further, our studies have shown that both the absolute temperature and the relative temperature change affect cell growth and viability under heat stress conditions. One may speculate that the thin line between regular cell growth and cell death is due to rapid denaturation of one or more enzymes which are essential to cell growth. Growth at higher initial temperatures may result in a cellular makeup that can resist higher temperatures, either by the presence of temperature-resistant proteins or by intracellular conditions that contribute to the protection of enzymes.
12.2.3 Catabolic fluxes and energy conservation Living cells have to invest energy continuously, on the one hand in order to maintain their integrity and on the other to proliferate, divide and grow. The energy
costs of all these processes may be affected, albeit to different extents, in a stressful environment. In addition, there will be steady-state effects on the energy requirements of the cell. Thus, in the case of heat stress, the increased temperature is thought to have a continuous effect on the cell membrane, resulting in changes in permeability and leakage of ions.49 This will call for additional proton pumping to maintain the proton gradient. Furthermore, proteins will denature and aggregate more easily, warranting the presence of a protecting system, or at least an increase in the processes generally involved in protein synthesis and degradation.12,41,53 Finally, signalling pathways are triggered as a consequence of heat shock, involving phosphate transfer at the expense of ATP.39,42 Therefore, one would expect that the energy needed for maintenance of existing cells as well as the energy investment for growth will be increased at elevated temperatures. So, for a successful response to these increased temperatures, it is not only the presence of stress response mechanisms that is important for the cell, but also the potential to use them, for example by having a sufficient supply of energy. In this manner, the available amount of energy may be a determinant in defining to what extent a microbial cell can cope with a stress situation and, ultimately, adapt to the stress circumstances.

As with the analysis of the effects of a temperature upshift on anabolic rates, a quantitative analysis with respect to catabolic rates was carried out. The analysis included both the initial response and the long-term (pseudo-steady state) effects. Furthermore, the temperature-dependence of the maintenance energy requirements was determined in order to separate temperature effects on the energetics of growth from those on the energetics of maintaining the living cell.55,56

Substrate level phosphorylation
The specific ATP production rate can be calculated directly from the specific fermentation product formation rates and the substrate level phosphorylation in the tricarboxylic acid (TCA) cycle. The production of ATP per unit biomass can be calculated from the stoichiometry of all processes in which ATP is produced multiplied by the specific rates at which these processes take place. The latter rates can be derived by measurement of the specific product formation rates and substrate consumption rates. In the case of S. cerevisiae the important products are ethanol, CO2, acetate, succinate and glycerol and the substrates glucose and O2, respectively. For the calculations, the ATP-yielding reactions at the substrate phosphorylation level from glucose to ethanol, succinate and acetate were used, whereas glycerol production was considered to be ATP consuming.

Electron transport phosphorylation
ATP will be produced in proportion to the production of the reducing equivalents NADH2 and FADH2 through oxidative phosphorylation. With respect to oxidative phosphorylation, the production of NADH2 and FADH2 can be calculated from CO2 production corrected for ethanol, succinate and acetate production. All reducing equivalents formed are to be reoxidized by oxygen reduction in the energy-conserving respiratory chain. Thus, the calculated rate of production of
Table 12.1 The growth rate µ, glucose consumption flux, ethanol, glycerol, acetate and CO2 production fluxes and the biomass yield on glucose at several growth temperatures in batch cultures. The rates given here refer to the long-term, steady-state response of the cell to temperature increase. DW = dry weight

| | 28 °C | 37 °C | 39 °C | 41 °C | 42 °C | 43 °C |
| µ (g DW/g DW/h) | 0.29 ± 0.01 | 0.38 ± 0.04 | 0.30 ± 0.01 | 0.10 ± 0.02 | 0.01 ± 0.01 | 0 |
| q(glucose) (mmol/g DW/h) | 8.73 ± 0.35 | 13.0 ± 0.85 | 12.3 ± 0.97 | 10.3 ± 0.87 | 4.18 ± 0.41 | 0 |
| q(EtOH) (mmol/g DW/h) | 14.5 ± 0.35 | 19.7 ± 0.85 | 18.9 ± 0.41 | 12.8 ± 0.87 | 4.35 ± 0.39 | 0 |
| q(glycerol) (mmol/g DW/h) | 1.16 ± 0.09 | 1.83 ± 0.07 | 2.06 ± 0.07 | 2.11 ± 0.04 | 0.76 ± 0.05 | 0 |
| q(acetate) (mmol/g DW/h) | 0.32 ± 0.09 | 0.55 ± 0.05 | 0.62 ± 0.02 | 0.86 ± 0.04 | 0.39 ± 0.01 | 0 |
| q(CO2) (mmol/g DW/h) | 14.4 ± 0.15 | 29.2 ± 3.5 | 24.2 ± 0.57 | 23.3 ± 1.45 | 7.70 ± 0.43 | 0 |
| Y(glucose) (g DW/g glc) | 0.18 ± 0.01 | 0.16 ± 0.02 | 0.10 ± 0.01 | 0.05 ± 0.01 | 0.01 ± 0.02 | 0 |
Table 12.2 The total ATP production flux and the biomass yield on ATP at several growth temperatures in batch cultures. The rates given here refer to the long-term, steady-state response of the cell to temperature increase. The ATP flux was calculated on the basis of the stoichiometries given in the addendum. DW = dry weight

| | 28 °C | 37 °C | 39 °C | 41 °C | 42 °C | 43 °C |
| q(ATP) (mmol/g DW/h) | 13.1 ± 10.34 | 41.1 ± 5.8 | 28.9 ± 1.6 | 32.2 ± 0.56 | 11.9 ± 0.40 | 0 |
| Y(ATP) (g DW/mol ATP) | 22 ± 1.0 | 9.3 ± 1.6 | 10 ± 0.66 | 3.1 ± 0.62 | 0.84 ± 1.7 | 0 |
reducing equivalents and the rate of oxygen consumption should match. Subsequently, ATP production can be calculated assuming an overall P/O ratio (ATP produced per mono-oxygen reduced). Here, a constant P/O ratio of 1 has been assumed. The stoichiometries of the reactions used to calculate the specific ATP production rate are summarized in the Addendum. A general pattern was observed: for all temperature shifts an initial increase in the catabolic fluxes [i.e. the specific glucose consumption flux (mmol g–1 h–1) and the specific CO2 and ethanol production] was seen that was invariably accompanied by a decrease in anabolic activity (µ). Subsequently, the rates declined and after a transition phase the rate of glucose consumption and ethanol and CO2 production stabilized to a level that varied with the final temperature. In Table 12.1 several metabolite fluxes together with the growth rate and the biomass yield on glucose are given for the stress temperatures used here, at the final stabilized phase. Although trehalose and sometimes glycogen accumulated in the initial phase of the stress response,39 the compounds were not detected in the later phase. This does not mean that there is no production and subsequent consumption but, even during the initial accumulation, it can be calculated that the ATP consumption for trehalose synthesis makes up only 1% of the total ATP consumption flux. Therefore, the trehalose and glycogen fluxes were ignored here. The highest rate of metabolism was seen at 37 °C, with a 33% rise in the glucose consumption flux to 13 mmol/g DW/h (DW = dry weight) and a similar rise in the ethanol production flux to 20 mmol/g DW/h. The CO2 flux even doubled, pointing to a large increase in respiration in the cell. At higher temperatures, the measured rates tended to decrease. In these experiments, glucose was used by the cells as both the carbon and the energy source. As a consequence, the glucose consumption flux consists of an anabolic and a catabolic part. When the glucose consumption is compared to the growth rate at the various temperatures, it becomes clear that the relative catabolic contribution to glucose consumption increased with increasing temperature. This is reflected by a large decrease in biomass yield on glucose with increasing temperature. From the stoichiometries given above, specific ATP production rates could be calculated. These calculations are presented in Table 12.2. Again, the highest rate
was seen at 37 °C, while at higher temperatures the rate decreased with increasing temperature. Nevertheless, the biomass yield on ATP decreased with increasing temperature.

ATP and ADP pools
Energy conservation and consumption apparently play an important role in the adaptation of yeast to higher temperatures. To see how changes in the catabolic and anabolic rates related to the intracellular metabolite pools that link these rates, the concentrations of intracellular ATP and ADP in the batch cultures were measured over time. The pools of ATP and ADP (as well as AMP) were to be included in the kinetic model we present below and are shown in Fig. 12.3. Invariably it was found that in incubations using temperature upshifts that allowed the cells to resume growth, the concentrations of intracellular ATP and ADP increased: upon a temperature increase to 37 or 39 °C, the levels rapidly doubled, whereas a shift to 41 °C resulted in a slower increase, with the final levels reaching about three times the initial values. Note that the ATP/ADP ratio remained approximately one in all cases. Upon a temperature increase to the growth-suppressing temperature of 42 °C, the intracellular ATP and ADP concentrations did not change, while at the lethal temperature of 43 °C, they rapidly dropped to about one third of their original value.

Fig. 12.3 The intracellular ATP (closed symbols) and ADP (open symbols) concentrations upon a temperature increase from 28 °C to 39 °C, 41 °C, 42 °C, or 43 °C.

With regard to intracellular concentrations, the observed increases in ATP and
ADP pools are seemingly dramatic but, compared to the total ATP flux through the cell, the change is very small. At 41 °C, the final levels of intracellular ATP and ADP are approximately 5.8 µmol/g dry weight each, while the total ATP production flux rises to 32 mmol/g dry weight/hour in the new steady state and is even higher in the initial phase after the temperature increase. Thus, a rise in concentration can easily be accomplished (see the order-of-magnitude estimate below). The drop in nucleotide pools at the lethal temperature coincides with the disappearance of the peak in catabolic fluxes and with the start of the loss of viability. This suggests that the cells could survive the first minutes after the temperature was raised to 43 °C owing to the increased energy-producing rates, but somehow could not maintain this high catabolic rate. Whether this is due to increased leakiness of the cytoplasmic membrane (resulting in loss of the proton motive force) or is caused by denaturation of enzymes remains to be resolved. In the latter case, it may be that catabolism is impaired, decreasing the rate of ATP production and therefore rendering the cell unable to carry out repair reactions. Here, we will not discuss the physiological interpretation of the data further but limit ourselves to the kinetic model which describes these physiological events in terms of enzymatic conversions and changes in metabolite pools.
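To put these numbers in perspective, a simple order-of-magnitude estimate (our arithmetic, based on the pool size and flux quoted above for 41 °C) of the turnover time of the ATP pool is

\[
\tau_{\mathrm{ATP}} \approx \frac{5.8\ \mu\mathrm{mol\ (g\ DW)^{-1}}}{32\ \mathrm{mmol\ (g\ DW)^{-1}\ h^{-1}}} \approx 1.8\times10^{-4}\ \mathrm{h} \approx 0.7\ \mathrm{s},
\]

so even a several-fold change in the ATP and ADP pools corresponds to only a few seconds' worth of the steady-state ATP production flux.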
12.3 The model Since the focal point of our modelling is protein synthesis and degradation and resulting cell growth, we constructed a model in which an inherent property of the system is that it actually grows. Thus the system is not divided into single cells, but is treated as one expanding intracellular volume, and the increase in biomass is represented as an increase in the volume of the cell population. This changing volume of the system is a rather new approach in the modelling of metabolic systems. The model has been based on the kinetic properties of a set of equations that describe essential (lumped) enzymatic conversion rates. Except for the glucose transporter and adenylate kinase, this core model does not include single enzymes. Instead, it consists of several pathways, each modelled as single reactions. In addition, the effect of temperature on the enzyme-catalyzed reaction rates is included. In short, we constructed a kinetic model that takes into account temperature dependence and time-dependent enzyme content. It contains two major fluxes, one to the formation of new cells (anabolism leading to volumetric increase at the cost of ATP) and one to provide the necessary energy-equivalents (catabolism leading to ATP synthesis). The model presented here is the most basic one (Fig. 12.4) consisting of the essential catabolic pathways of S. cerevisiae (#1, 2, 3 and 4 in Fig. 12.4), some of them generating ATP, as well as synthesis of protein as the anabolic flux (#6 in Fig. 12.4). The protein denaturation reaction (#7 in Fig. 12.4) given in this model is not an enzymatic reaction, but simply an effect of a rise in temperature. Adenylate kinase (#8 in Fig 12.4) is included since it is generally believed to be in equilibrium, thus affecting ATP levels in the cell. Finally, this is meant to be a
growing system. In this model, therefore, there is not only a need for an increase of protein (from glucose), but also of adenine nucleotides, since they are the other set of key components (#5 in Fig. 12.4).

Fig. 12.4 The core model – see the text for explanation of the various numbered pathways.

The stoichiometry of the protein synthesis reaction is based on theoretical calculations with respect to carbon and ATP requirements.13,55,56 The volume is defined as being directly proportional to the amount of protein present. It is mainly Michaelis–Menten kinetics that are used for the equations in the model. Since the volume of the system is time-dependent as well, metabolites are not given as concentrations, but as absolute amounts, in either mmol or mg. The latter unit is used for protein, since it is impossible to express this component in numbers of molecules because of the polymers formed. For easy comparison of the reaction rates, all other components are expressed in mmol. In addition, the constant Vmax is not used; instead, the absolute amount of protein present multiplied by a constant is used. Since the model should deal with changes caused by increases in temperature, all reactions are made temperature-dependent. Furthermore, most reactions in the model are actually whole pathways (with the glucose transporter as the major exception), which necessitates an estimation of kinetic constants, based on our experimental data on the fluxes under the circumstances modelled here. There are two exceptions. The rate equation of the glucose transporter (#1 in Fig. 12.4) is taken from 24,25. The kinetic constants of the reaction representing glycolysis (#2 in Fig. 12.4) are based on an average of the values for single enzymes as given in 57 and references therein.
12.3.1 Rate equations All rate equations that have been used in the kinetic model are summed up in the Addendum. They all contain an Arrhenius term to take into account the temperature-dependency of the reaction rate that includes the Gibbs activation energy (Ea). It has been fitted in such a way that a temperature increase of 10 °C leads to a doubling in the reaction rate, while the constant KT is chosen so that the whole Arrhenius term is equal to 1 at 28 °C. The constant K1 can then be regarded as the specific rate constant at 28 °C.
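These two conditions fix the parameters of the Arrhenius term; the following worked example (our derivation, consistent with the constants listed in the addendum) makes this explicit. Requiring a doubling of the rate between 28 °C (301 K) and 38 °C (311 K) gives

\[
E_a = \frac{R \ln 2}{\dfrac{1}{301\ \mathrm{K}} - \dfrac{1}{311\ \mathrm{K}}}
\approx \frac{8.314\times10^{-3}\ \mathrm{kJ\ mol^{-1}\ K^{-1}} \times 0.693}{1.07\times10^{-4}\ \mathrm{K^{-1}}}
\approx 54\ \mathrm{kJ\ mol^{-1}},
\]

and requiring the whole Arrhenius term to equal 1 at 28 °C then gives

\[
K_T = \exp\!\left[\frac{E_a}{R \times 301\ \mathrm{K}}\right] \approx e^{21.5} \approx 2.3\times10^{9},
\]

in agreement with Ea = 53.9 kJ mol–1 and KT = 2.26 × 10^9 as listed in the addendum.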
The single-step rate equation for glucose transport is based upon facilitated diffusion mediated by a symmetric carrier protein.54 All the glycolytic reactions are lumped into one overall reaction, and this holds also for the conversion of pyruvate to generate ATP via the TCA cycle and oxidative phosphorylation, the 'respiration pathway'. As mentioned, the equations are essentially based on Michaelis–Menten kinetics, taking into account feedback regulatory mechanisms, etc. that ensure proper tuning of each reaction. However, in this model, the temperature-dependent denaturation of protein is considered not to be an enzymatic reaction. Its rate is dependent only on the temperature and the amount of protein present. The activation energy of the thermal denaturation (Ed) of a protein typically lies in the range of 150–400 kJ/mol.7 In general, adenylate kinase is in steady state. To mimic this, the reaction rate has been set very high. The synthesis of adenine nucleotides is also described in one reaction. Since adenylate kinase is assumed to be in equilibrium, the adenosine nucleotides are treated as a single pool with respect to product inhibition. The stoichiometries and constants of the rate equations are summed up in the addendum.
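To make the structure of these lumped rate equations concrete, the following minimal Python sketch (our illustration, not code from the original study; the function and variable names are ours, while the functional form and the constants are taken from the addendum at the end of this chapter) shows the common pattern: a specific rate constant multiplied by the amount of protein (which takes the place of Vmax), a Michaelis–Menten saturation term, and an Arrhenius factor normalised to 1 at 28 °C.

```python
import math

R = 8.314e-3      # gas constant, kJ mol^-1 K^-1
T_REF = 301.15    # 28 degrees C, in K

def arrhenius(T, Ea=53.9):
    """Temperature factor K_T * exp(-Ea/(R*T)), with K_T chosen so that the
    factor equals 1 at 28 degrees C and doubles per 10 degrees C rise."""
    K_T = math.exp(Ea / (R * T_REF))
    return K_T * math.exp(-Ea / (R * T))

def v4_fermentation(pyr, V_in, P, T, K4=11.0e-4, K_pyr4=4.0):
    """Fermentation of pyruvate to ethanol (reaction #4): a one-substrate
    Michaelis-Menten term scaled by the protein amount P (in mg) and the
    Arrhenius factor. pyr is an absolute amount (mmol) and V_in a volume
    (litres), so pyr/V_in is a concentration in mM; the result is in
    mmol min^-1."""
    s = (pyr / V_in) / K_pyr4
    return K4 * P * s / (1.0 + s) * arrhenius(T)
```

At 28 °C the Arrhenius factor is exactly 1, so the Michaelis–Menten part can be read directly as the rate at the reference temperature; at 38 °C the same expression is simply multiplied by approximately 2.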
12.4 Validation We have presented a core model of those metabolic processes in the cell that accompany the transition to an environment with an increased ambient temperature. In Figs 12.5 to 12.7 a comparison is shown between the experimentally assessed anabolic, catabolic and metabolite changes on the one hand and the predictions made by the model on the other. Considering the simplicity of the model, its predictive value is surprisingly powerful. The model can reproduce the thin line between cell growth and cell death, with regard to growth temperature that has been observed experimentally. Qualitatively, the trends in observed changes in nucleotide pools, catabolism and anabolism are well in line with the model’s calculations. Clearly, the validation of the model showed some mismatches between experiment and model, and we have therefore extended the model. These extensions include more processes involved in cell metabolism and heat stress response. Thus, the synthesis of trehalose [known to be transiently accumulated upon an increase in temperature and important for (temporary) protein stabilization2,3,10,12,20,44,45,51,52,60,58] has been included. Furthermore, glycerol synthesis is introduced since, depending on the temperature, the flux through this pathway can be as high as 10% of the flux through glycolysis. In addition, the extended model includes other cellular polymeric constituents like cell wall and cell membrane. Finally, it has been taken into account that the composition of the protein pool in the cell will change at different temperatures by dividing the total protein pool into two subpools each of which is related to one specific temperature. For this model the reader is referred to 35,36.
Fig. 12.5 A schematic representation of the experimentally determined effect of an upshift in temperature on the growth rate (left) and the effect as predicted by the model (right). The arrow indicates the time of the temperature shift.
Fig. 12.6 A schematic representation of the experimentally determined effect of an upshift in temperature on the biomass yield on glucose, Y(glucose) (left), and the effect as predicted by the model (right). The arrow indicates the time of the temperature shift.
12.5 Conclusions

In view of the enormous number of simultaneous and interrelated reactions – involving hundreds of intracellular compounds – that take place in the living cell, it is surprising how well a simple mathematical model, as presented here, can simulate such a network. Essentially, the model is based on the kinetics of a set of enzymatic master reactions (catabolism and anabolism). These are coupled by energy conservation and investment and constrained by the energetic demands of
cellular growth and maintenance. These basic rules are sufficient to describe not only the steady-state growth of a yeast cell but also its behaviour upon perturbation of the steady state.

Fig. 12.7 A schematic representation of the experimentally determined effect of an upshift in temperature on the ATP and ADP pools (left, see also Fig. 12.3) and the effect as predicted by the model (right). The temperature shift was applied at 400 min.

Surely, this is not to be interpreted as a detailed in silico yeast cell. First, the model has been constructed with the specific aim of investigating temperature effects on survival and growth. Second, lumping major pathways into modules by definition decreases the resolution of the model: describing the cell as being made up of a catabolic and an anabolic module limits the predictive value to its catabolic and anabolic properties only. Third, in a number of aspects the model's behaviour does not match the living cell's performance. Yet, the approach presented here is an aid in understanding the mechanistic events that underlie survival and death in relation to temperature. For one, the model suggests that the assumption that cell death at elevated temperatures is mainly caused by an energetic drain due to changes in the fluidity of the cell membrane, and therefore to leakage of ions, is at best incomplete.34 Instead, it is consistent with the idea that the energetic demands due to turnover and compositional shifts of the protein pool of the cell set a firm limit to the magnitude of relative temperature change that yeast cells can cope with. We emphasize that we do not claim that a model as presented here – or for that matter a much more refined model – unequivocally provides all the answers as to
how to go about manipulating the cell’s performance. Rather, we have illustrated that our modelling approach increases the transparency of physiological behaviour and thus our understanding of the mechanisms that govern that behaviour. In this manner in silico simulations based on the systems biological properties of the cell may help to reveal targets within the network that can be addressed in order to gain control over the cell’s performance, be it proliferation, death or whatever type of cell activity we are interested in.
12.6 References 1. Attfield PV (1997) Stress tolerance: The key to effective strains of industrial baker’s yeast, Nat Biotechnol, 15, 1351–1357. 2. Attfield PV (1987) Trehalose accumulates in Saccharomyces cerevisiae during exposure to agents that induce heat shock response, FEBS Lett, 225, 259–263. 3. Attfield PV, Kletsas S and Hazell BW (1994) Concomitant appearance of intrinsic thermotolerance and storage of trehalose in Saccharomyces cerevisiae during early respiratory phase of batch-culture is CIF1-dependent, Microbiology, 140, 2625–2632. 4. Boy-Marcotte E, Lagniel G, Perrot M, Bussereau F, Boudsocq A, Jacquet M and Labarre J (1999) The heat shock response in yeast: differential regulations and contributions of the Msn2p/Msn4p and Hsf1p regulons, Mol Microbiol, 33, 274–283. 5. Brul S, Coote P, Oomes S, Mensonides F, Hellingwerf K and Klis F (2002) Physiological actions of preservative agents: prospective of use of modern microbiological techniques in assessing microbial behaviour in food preservation, Int J Food Microbiol, 79, 55–64. 6. Causton HC, Ren B, Koh SS, Harbison CT, Kanin E, Jennings EG, Lee TI, True HL, Lander ES and Young RA (2001) Remodeling of yeast genome expression in response to environmental changes, Mol Biol Cell, 12, 323–337. 7. Chaplin MF and Burke C (1990) Enzyme Technology. Cambridge, Cambridge University Press. 8. Davenport KR, Sohaskey M, Kamada Y, Levin DE and Gustin MC (1995) A 2nd osmosensing signal transduction pathway in yeast – hypotonic shock activates the Pkc1 protein kinase-regulated cell integrity pathway, J Biol Chem, 270, 30157–30161. 9. Davidson JF, Whyte W, Bissinger PH and Schiestl RH (1996) Oxidative stress is involved in heat-induced cell death in Saccharomyces cerevisiae, Proc Natl Acad Sci USA, 93, 5116–5121. 10. Devirgilio C, Hottiger T, Dominguez J, Boller T and Wiemken A (1994) The role of trehalose synthesis for the acquisition of thermotolerance in yeast. 1. Genetic evidence that trehalose is a thermoprotectant, Eur J Biochem, 219, 179–186. 11. Dickson RC and Lester RL (2002) Sphingolipid functions in Saccharomyces cerevisiae, Biochim Biophys Acta, 1583, 13–25. 12. Dubois MF, Hovanessian AG and Bensaude O (1991) Heat-shock-induced denaturation of proteins. Characterization of the insolubilization of the interferon-induced p68 kinase, J Biol Chem, 266, 9707–9711. 13. Forrest WW and Walker DJ (1971) The generation and utilization of energy during growth, Advan Microb Physiol, 5, 213–274. 14. Fuhrmann GF, Volker B, Sander S and Potthast M (1989) Kinetic analysis and simulation of glucose transport in plasma membrane vesicles of glucose-repressed and derepressed Saccharomyces cerevisiae cells, Experientia, 45, 1018–1023. 15. Gasch AP, Spellman PT, Kao CM, Carmel-Harel O, Eisen MB, Storz G, Botstein D and Brown PO (2000) Genomic expression programs in the response of yeast cells to environmental changes, Mol Biol Cell, 11, 4241–4257.
16. Goffeau A, Barrell BG, Bussey H, Davis RW, Dujon B, Feldmann H, Galibert F, Hoheisel JD, Jacq C, Johnston M, Louis EJ, Mewes HW, Murakami Y, Philippsen P, Tettelin H and Oliver SG (1996) Life with 6000 genes, Science, 274, 546, 563–567. 17. Gustin MC, Albertyn J, Alexander M and Davenport K (1998) MAP kinase pathways in the yeast Saccharomyces cerevisiae, Microbiol Mol Biol Rev, 62, 1264–1300. 18. Hannun YA (1996) Functions of ceramide in coordinating cellular responses to stress, Science, 274, 1855–1859. 19. Hohmann S (2002) Osmotic stress signaling and osmoadaptation in yeasts, Microbiol Mol Biol Rev, 66, 300–372. 20. Hottiger T, Schmutz P and Wiemken A (1987) Heat-induced accumulation and futile cycling of trehalose in Saccharomyces cerevisiae, J Bacteriol, 169, 5518–5522. 21. Jenkins GM, Richards A, Wahl T, Mao C, Obeid L and Hannun Y (1997) Involvement of yeast sphingolipids in the heat stress response of Saccharomyces cerevisiae, J Biol Chem, 272, 32566–32572. 22. Kamada Y, Jung US, Piotrowski R and Levin DE (1995) The protein kinase C activated MAP kinase pathway of Saccharomyces cerevisiae mediates a novel aspect of the heat shock response, Genes Dev, 9, 1559–1571. 23. Kanatt SR, Chawla SP, Chander R and Bongirwar DR (2002) Shelf-stable and safe intermediate-moisture meat products using hurdle technology, J Food Prot, 65, 1628– 1631. 24. Kotyk A (1967) Mobility of the free and of the loaded monosaccharide carrier in Saccharomyces cerevisiae, Biochim Biophys Acta, 135, 112–119. 25. Kotyk A (1989) Kinetic studies of transport in yeast, Meth Enzymol, 174, 567–591. 26. Lang JM and Cirillo VP (1987) Glucose transport in a kinaseless Saccharomyces cerevisiae mutant, J Bacteriol, 169, 2932–2937. 27. Leistner L (2000) Basic aspects of food preservation by hurdle technology, Int J Food Microbiol, 55, 181–186. 28. Levin DE, Bowers B, Chen CY, Kamada Y and Watanabe M (1994) Dissecting the protein kinase C/MAP kinase signaling pathway of Saccharomyces cerevisiae, Cell Mol Biol Res, 40, 229–239. 29. Lewis JG, Learmonth RP and Watson K (1995) Induction of heat, freezing and salt tolerance by heat and salt shock in Saccharomyces cerevisiae, Microbiology, 141, 687– 694. 30. Lillie SH and Pringle JR (1980) Reserve carbohydrate metabolism in Saccharomyces cerevisiae: responses to nutrient limitation, J Bacteriol, 143, 1384–1394. 31. Lindquist S and Craig EA (1988) The heat-shock proteins, Annu Rev Genet, 22, 631– 677. 32. Liu XD, Morano KA and Thiele DJ (1999) The yeast Hsp110 family member, Sse1, is an Hsp90 cochaperone, J Biol Chem, 274, 26654–26660. 33. Loureiro V (2000) Spoilage yeasts in foods and beverages: characterisation and ecology for improved diagnosis and control, Food Res Int, 33, 247–256. 34. Martinez de Maranon I, Chaudanson N, Joly N and Gervais P (1999) Slow heat rate increases yeast thermotolerance by maintaining plasma membrane integrity, Biotechnol Bioeng, 65, 176–181. 35. Mensonides FIC (2007) PhD Thesis, University of Amsterdam. 36. Mensonides FIC, Teixeira de Mattos MJ, Hellingwerf KJ and Brul S, in preparation. 37. Mensonides FIC, Teixeira de Mattos MJ, Hellingwerf KJ and Brul S, in preparation. 38. Mensonides FIC, Schuurmans JM, Teixeira de Mattos MJ, Hellingwerf KJ and Brul S (2002) The metabolic response of Saccharomyces cerevisiae to continuous heat stress, Mol Biol Rep, 29, 103–106. 39. 
Mensonides FI, Brul S, Klis FM, Hellingwerf KJ and Teixeira de Mattos MJ (2005) Activation of the protein kinase C1 pathway upon continuous heat stress in Saccharomyces cerevisiae is triggered by an intracellular increase in osmolarity due to trehalose accumulation, Appl Environ Microbiol, 7(8), 4531–4538.
40. Miller MJ, Xuong NH and Geiduschek EP (1982) Quantitative analysis of the heat shock response of Saccharomyces cerevisiae, J Bacteriol, 151,311–327. 41. Nguyen VT, Morange M and Bensaude O (1989) Protein denaturation during heat shock and related stress. Escherichia coli beta-galactosidase and Photinus pyralis luciferase inactivation in mouse cells, J Biol Chem, 264, 10487–10492. 42. Pace B and Campbell LL (1967) Correlation of maximal growth temperature and ribosome heat stability, Proc Natl Acad Sci USA, 57, 1110–1116. 43. Palhano FL, Orlando MT and Fernandes PM (2004) Induction of baroresistance by hydrogen peroxide, ethanol and cold-shock in Saccharomyces cerevisiae, FEMS Microbiol Lett, 233, 139–145. 44. Parrou JL, Teste MA and Francois J (1997) Effects of various types of stress on the metabolism of reserve carbohydrates in Saccharomyces cerevisiae: genetic evidence for a stress-induced recycling of glycogen and trehalose, Microbiology, 143, 1891– 1900. 45. Parsell DA, Kowal AS, Singer MA and Lindquist S (1994) Protein disaggregation mediated by heat-shock protein Hsp104, Nature, 372, 475–478. 46. Patton JL and Lester RL (1991) The phosphoinositol sphingolipids of Saccharomyces cerevisiae are highly localized in the plasma membrane, J Bacteriol, 173, 3101– 3108. 47. Piper PW (1993) Molecular events associated with acquisition of heat tolerance by the yeast Saccharomyces cerevisiae, FEMS Microbiol Rev, 11, 339–355. 48. Piper PW (1995) The heat shock and ethanol stress responses of yeast exhibit extensive similarity and functional overlap, FEMS Microbiol Lett, 134, 121–127. 49. Piper PW (1997) The yeast heat stress response, in Hohmann S and Mager WH (eds), Yeast Stress Responses, Austin, TX, RG Landes Co., 75–99. 50. Senorans J, Ibanez E and Cifuentes A (2003) New trends in food processing, Crit Rev Food Sci Nutr, 43, 507–526. 51. Schmitt AP and McEntee K (1996) Msn2p, a zinc finger DNA-binding protein, is the transcriptional activator of the multistress response in Saccharomyces cerevisiae, Proc Natl Acad Sci USA, 93, 5777–5782. 52. Simola M, Hanninen AL, Stranius SM and Makarow M (2000) Trehalose is required for conformational repair of heat-denatured proteins in the yeast endoplasmic reticulum but not for maintenance of membrane traffic functions after severe heat stress, Mol Microbiol, 37, 42–53. 53. Singer MA and Lindquist S (1998) Multiple effects of trehalose on protein folding in vitro and in vivo, Mol Cell, 1, 639–648. 54. Smits HP, Smits GJ, Postma PW, Walsh MC and van Dam K (1996) High-affinity glucose uptake in Saccharomyces cerevisiae is not dependent on the presence of glucosephosphorylating enzymes, Yeast, 12, 439–447. 55. Stouthamer AH (1973) A theoretical study on the amount of ATP required for synthesis of microbial cell material, Antonie Van Leeuwenhoek, 39, 545–565. 56. Stouthamer AH and Bettenhaussen C (1973) Utilization of energy for growth and maintenance in continuous and batch cultures of microorganisms. A reevaluation of the method for the determination of ATP production by measuring molar growth yields, Biochim Biophys Acta, 301, 53–70. 57. Teusink B, Passarge J, Reijenga CA, Esgalhado E, van der Weijden CC, Schepper M, Walsh MC, Bakker BM, van Dam K, Westerhoff HV and Snoep JL (2000) Can yeast glycolysis be understood in terms of in vitro kinetics of the constituent enzymes? Testing biochemistry, Eur J Biochem, 267, 5313–5329. 58. Thevelein JM (1984) Regulation of trehalose mobilization in fungi, Microbiol Rev, 48, 42–59. 59. 
van Uden N (1984) Temperature profiles of yeasts, Adv Microb Physiol, 25, 195–251. 60. Wiemken A (1990) Trehalose in yeast, stress protectant rather than reserve carbohydrate, Antonie Van Leeuwenhoek, 58, 209–217.
61. Winderickx J, Holsbeeks I, Lagatie O, Giots F, Thevelein J and de Winde H (2003) From feast to famine: adaptation to nutrient availability in yeast, in Hohmann S and Mager WH (eds), Yeast Stress Responses, Berlin, Springer Verlag, 305–386. 62. Winderickx J, de Winde JH, Crauwels M, Hino A, Hohmann S, Van Dijck P and Thevelein JM (1996) Regulation of genes encoding subunits of the trehalose synthase complex in Saccharomyces cerevisiae: novel variations of STRE-mediated transcription control?, Mol Gen Genet, 252, 470–482. 63. Zuzuarregui A and del Olmo M (2004) Analyses of stress resistance under laboratory conditions constitute a suitable criterion for wine yeast selection, Antonie Van Leeuwenhoek, 85, 271–280.
12.7 Addendum

Stoichiometric reactions used for the calculation of the specific ATP production rates
All catabolic reactions have been assumed to be glycolytic and the respiratory catabolism has been assumed to take place via the TCA cycle.

Fermentative stoichiometries
glucose + 2 ADP → 2 EtOH + 2 CO2 + 2 ATP
glucose + 2 ADP + 2 NAD+ → 2 pyruvate + 2 ATP + 2 NADH + 2 H+
glucose + 2 ATP + 2 NADH + 2 H+ → 2 glycerol + 2 ADP + 2 NAD+
glucose + 2 ADP + 4 NAD+ → 2 acetate + 2 CO2 + 2 ATP + 4 NADH + 4 H+
glucose + 3 ADP + 6 NAD+ → succinate + 2 CO2 + 3 ATP + 6 NADH + 6 H+

Respiratory stoichiometries
glucose + 10 NAD+ + 2 FAD + 4 ADP → 6 CO2 + 10 NADH + 10 H+ + 2 FADH2 + 4 ATP

The last process, which depicts a combination of glycolysis and the TCA cycle, does not have a unique product, but its rate can be derived by comparing the production of CO2 to that of ethanol and succinate. The calculated production rate of reducing equivalents matched the experimentally determined specific oxygen reduction rate.
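The following Python sketch (our illustration; the function merely encodes the stoichiometries listed above, and the argument names and flux values are to be supplied by the user) shows how a specific ATP production rate can be assembled from measured specific fluxes, assuming a constant P/O ratio as in the text.

```python
def specific_atp_flux(q_etoh, q_acetate, q_succinate, q_glycerol, q_co2, po=1.0):
    """Specific ATP production rate (mmol ATP / g DW / h) from measured
    specific product fluxes (all in mmol / g DW / h), following the
    fermentative and respiratory stoichiometries listed above.

    Substrate-level phosphorylation: 1 ATP per ethanol, 1 ATP per acetate,
    3 ATP per succinate; glycerol formation costs 1 ATP per glycerol.
    Respiratory route: CO2 not accounted for by ethanol, acetate or
    succinate is attributed to the combined glycolysis/TCA reaction
    (6 CO2, 4 ATP and 12 reducing equivalents per glucose)."""
    # substrate-level ATP
    atp = q_etoh + q_acetate + 3.0 * q_succinate - q_glycerol

    # rate of the combined glycolysis/TCA route, from the 'corrected' CO2
    q_co2_resp = q_co2 - q_etoh - q_acetate - 2.0 * q_succinate
    q_resp = max(q_co2_resp, 0.0) / 6.0
    atp += 4.0 * q_resp

    # reducing equivalents reoxidised in the respiratory chain; the NADH
    # consumed in glycerol formation is subtracted (our simplification,
    # following the glycerol stoichiometry above)
    nadh_fadh = 12.0 * q_resp + 2.0 * q_acetate + 6.0 * q_succinate - q_glycerol
    return atp + po * nadh_fadh
```

For example, with the 42 °C steady-state fluxes of Table 12.1 and the succinate flux neglected, the function returns about 11.9 mmol ATP/g DW/h, close to the corresponding entry in Table 12.2; the agreement is not equally good at all temperatures, since fluxes not listed in Table 12.1 (succinate, O2) are neglected here.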
Rate equations of the model

The transport of glucose across the cell membrane occurs via facilitated diffusion (14, 24, 26), which is described by a symmetric carrier model (24, 25). The factor KT Exp[–Ea/RT] in each rate equation is the Arrhenius term (including the Gibbs activation energy Ea) reflecting the temperature dependency of the reaction. It has been fitted in such a way that a temperature increase of 10 °C leads to a doubling in the reaction rate, while the constant KT is chosen so that the whole Arrhenius term is equal to 1 at 28 °C. The constant K1 can then be regarded as the specific rate constant at 28 °C.
\[
v_1 = K_1 P \;
\frac{\dfrac{glc_{out}/V_{out}}{K_{glc1}} - \dfrac{glc_{in}/V_{in}}{K_{glc1}}}
{1 + \dfrac{glc_{out}/V_{out}}{K_{glc1}} + \dfrac{glc_{in}/V_{in}}{K_{glc1}} + \dfrac{(glc_{out}/V_{out})(glc_{in}/V_{in})}{K_{I,glc1}\,K_{glc1}}}
\; K_T \exp\!\left[\frac{-Ea_1}{RT}\right]
\]

Glycolysis is given as one reaction:

\[
v_2 = K_2 P \;
\frac{\dfrac{adp/V_{in}}{K_{adp2}}\,\dfrac{glc_{in}/V_{in}}{K_{glc2}}}
{\left(1 + \dfrac{adp/V_{in}}{K_{adp2}} + \dfrac{atp/V_{in}}{K_{atp2}}\right)\left(1 + \dfrac{glc_{in}/V_{in}}{K_{glc2}} + \dfrac{pyr/V_{in}}{K_{pyr2}}\right)}
\; K_T \exp\!\left[\frac{-Ea_2}{RT}\right]
\]

The same is true for the conversion of pyruvate to generate ATP via the TCA cycle and oxidative phosphorylation, the 'respiration pathway':

\[
v_3 = K_3 P \;
\frac{\dfrac{adp/V_{in}}{K_{adp3}}\,\dfrac{pyr/V_{in}}{K_{pyr3}}}
{\left(1 + \dfrac{adp/V_{in}}{K_{adp3}} + \dfrac{atp/V_{in}}{K_{atp3}}\right)\left(1 + \dfrac{pyr/V_{in}}{K_{pyr3}}\right)}
\; K_T \exp\!\left[\frac{-Ea_3}{RT}\right]
\]

The fermentation of pyruvate to ethanol is given by:

\[
v_4 = K_4 P \;
\frac{\dfrac{pyr/V_{in}}{K_{pyr4}}}{1 + \dfrac{pyr/V_{in}}{K_{pyr4}}}
\; K_T \exp\!\left[\frac{-Ea_4}{RT}\right]
\]

The synthesis of adenine nucleotides is also described in one reaction. Since adenylate kinase is assumed to be in equilibrium, the adenosine nucleotides are treated as a single pool for the product inhibition:

\[
v_5 = K_5 P \;
\frac{\dfrac{atp/V_{in}}{K_{atp5}}\,\dfrac{glc_{in}/V_{in}}{K_{glc5}}}
{\left(1 + \dfrac{atp/V_{in}}{K_{atp5}} + \dfrac{axp/V_{in}}{K_{axp5}}\right)\left(1 + \dfrac{glc_{in}/V_{in}}{K_{glc5}}\right)}
\; K_T \exp\!\left[\frac{-Ea_5}{RT}\right]
\]

The synthesis of protein from pyruvate is given by:

\[
v_6 = K_6 P \;
\frac{\dfrac{atp/V_{in}}{K_{atp6}}\,\dfrac{pyr/V_{in}}{K_{pyr6}}}
{\left(1 + \dfrac{atp/V_{in}}{K_{atp6}}\right)\left(1 + \dfrac{pyr/V_{in}}{K_{pyr6}}\right)}
\; K_T \exp\!\left[\frac{-Ea_6}{RT}\right]
\]

The temperature-dependent denaturation of protein is not an enzymatic reaction. Its rate is dependent only on the temperature and the amount of protein present. The activation energy of the thermal denaturation (Ed) of a protein typically lies in the range of 150–400 kJ/mol (7):

\[
v_7 = K_7 P \exp\!\left[\frac{-Ed_7}{RT}\right]
\]

In general, adenylate kinase is in steady state (57). To mimic this, the reaction rate has been set very high:

\[
v_8 = \left(K_{8f}\left(\frac{adp}{V_{in}}\right)^{2} - K_{8r}\,\frac{atp}{V_{in}}\,\frac{amp}{V_{in}}\right) K_T \exp\!\left[\frac{-Ea_8}{RT}\right]
\]

Stoichiometry

\[
\begin{aligned}
\frac{d\,glc_{out}}{dt} &= -v_1\\
\frac{d\,glc_{in}}{dt} &= v_1 - v_2 - 2v_5\\
\frac{d\,pyr}{dt} &= 2v_2 - v_3 - v_4 - v_6 + v_7\\
\frac{d\,atp}{dt} &= 2v_2 + (2 + 12\,po)\,v_3 - 9v_5 - 4.32\,v_6 + v_8\\
\frac{d\,adp}{dt} &= -2v_2 - (2 + 12\,po)\,v_3 + 8v_5 + 4.32\,v_6 - 2v_8\\
\frac{d\,amp}{dt} &= 2v_5 + v_8\\
\frac{dP}{dt} &= 67.8\,(v_6 - v_7)\\
\frac{dV_{in}}{dt} &= 2.56\times10^{-6}\,\frac{dP}{dt}
\end{aligned}
\]

Constants

K1 = 12.0 × 10^–4 mmol (mg protein)^–1 min^–1; Kglc1 = 1.19 mM; KI,glc1 = 0.91 mM; Ea1 = 53.9 kJ mol^–1;
K2 = 11.5 × 10^–4 mmol (mg protein)^–1 min^–1; Kglc2 = 2.0 mM; Kadp2 = 0.8 mM; Kpyr2 = 5 mM; Katp2 = 5 mM; Ea2 = 53.9 kJ mol^–1;
K3 = 2.0 × 10^–4 mmol (mg protein)^–1 min^–1; Kpyr3 = 5.5 mM; Kadp3 = 2.5 mM; Katp3 = 5 mM; Ea3 = 53.9 kJ mol^–1;
K4 = 11.0 × 10^–4 mmol (mg protein)^–1 min^–1; Kpyr4 = 4.0 mM; Ea4 = 53.9 kJ mol^–1;
K5 = 0.001 × 10^–4 mmol (mg protein)^–1 min^–1; Kglc5 = 4.0 mM; Katp5 = 2.0 mM; Kaxp5 = 7.0 mM; Ea5 = 53.9 kJ mol^–1;
K6 = 4.7 × 10^–4 mg (mg protein)^–1 min^–1; Kpyr6 = 3.0 mM; Katp6 = 2.0 mM; Ea6 = 53.9 kJ mol^–1;
K7 = 1.57 × 10^34 mg (mg protein)^–1 min^–1; Ed7 = 225 kJ mol^–1;
K8f = 20 × 10^–4 mmol (mg protein)^–1 min^–1; K8r = 2.22 × K8f; Ea8 = 53.9 kJ mol^–1;
KT = 2.26 × 10^9; Vout = 1 liter; R = 8.314 × 10^–3 kJ mol^–1 K^–1; po = 1.

Abbreviations

glcout: extracellular glucose, in mmol
glcin: intracellular glucose, in mmol
pyr: pyruvate, in mmol
atp: ATP, in mmol
adp: ADP, in mmol
amp: AMP, in mmol
axp: ATP + ADP + AMP, in mmol
Vin: intracellular volume, in liter
Vout: extracellular volume, in liter
P: protein, in mg
po: P/O ratio, dimensionless
vx: reaction rate, in mmol min^–1 for x = 1, 2, 3, 4, 5, 8 and mg min^–1 for x = 6, 7
t: time, in min
T: absolute temperature, in K
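To illustrate how the pieces of this addendum fit together, the following Python sketch (our illustration, not code from the original study) assembles the rate equations, stoichiometry and constants above into an ODE system and integrates it with SciPy. Variable names mirror the abbreviations above; the initial amounts, the culture volume and the simulated time span are hypothetical placeholders, and we have not attempted to reproduce the published simulations quantitatively.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constants from the addendum (units as listed there); time in minutes
R, KT, PO, V_OUT = 8.314e-3, 2.26e9, 1.0, 1.0
K = dict(K1=12.0e-4, Kglc1=1.19, KIglc1=0.91,
         K2=11.5e-4, Kglc2=2.0, Kadp2=0.8, Kpyr2=5.0, Katp2=5.0,
         K3=2.0e-4,  Kpyr3=5.5, Kadp3=2.5, Katp3=5.0,
         K4=11.0e-4, Kpyr4=4.0,
         K5=0.001e-4, Kglc5=4.0, Katp5=2.0, Kaxp5=7.0,
         K6=4.7e-4,  Kpyr6=3.0, Katp6=2.0,
         K7=1.57e34, Ed7=225.0, K8f=20.0e-4, Ea=53.9)
K['K8r'] = 2.22 * K['K8f']

def arr(T, Ea=K['Ea']):
    # Arrhenius factor, equal to 1 at 28 degrees C
    return KT * np.exp(-Ea / (R * T))

def rates(y, T):
    glc_out, glc_in, pyr, atp, adp, amp, P, Vin = y
    go, gi, py = glc_out / V_OUT, glc_in / Vin, pyr / Vin
    at, ad, am = atp / Vin, adp / Vin, amp / Vin
    axp, a = at + ad + am, arr(T)
    v1 = K['K1'] * P * ((go - gi) / K['Kglc1']) / (
        1 + go / K['Kglc1'] + gi / K['Kglc1']
        + go * gi / (K['KIglc1'] * K['Kglc1'])) * a
    v2 = K['K2'] * P * (ad / K['Kadp2']) * (gi / K['Kglc2']) / (
        (1 + ad / K['Kadp2'] + at / K['Katp2'])
        * (1 + gi / K['Kglc2'] + py / K['Kpyr2'])) * a
    v3 = K['K3'] * P * (ad / K['Kadp3']) * (py / K['Kpyr3']) / (
        (1 + ad / K['Kadp3'] + at / K['Katp3'])
        * (1 + py / K['Kpyr3'])) * a
    v4 = K['K4'] * P * (py / K['Kpyr4']) / (1 + py / K['Kpyr4']) * a
    v5 = K['K5'] * P * (at / K['Katp5']) * (gi / K['Kglc5']) / (
        (1 + at / K['Katp5'] + axp / K['Kaxp5'])
        * (1 + gi / K['Kglc5'])) * a
    v6 = K['K6'] * P * (at / K['Katp6']) * (py / K['Kpyr6']) / (
        (1 + at / K['Katp6']) * (1 + py / K['Kpyr6'])) * a
    v7 = K['K7'] * P * np.exp(-K['Ed7'] / (R * T))
    v8 = (K['K8f'] * ad**2 - K['K8r'] * at * am) * a
    return v1, v2, v3, v4, v5, v6, v7, v8

def dydt(t, y, T):
    v1, v2, v3, v4, v5, v6, v7, v8 = rates(y, T)
    dP = 67.8 * (v6 - v7)
    return [-v1,
            v1 - v2 - 2 * v5,
            2 * v2 - v3 - v4 - v6 + v7,
            2 * v2 + (2 + 12 * PO) * v3 - 9 * v5 - 4.32 * v6 + v8,
            -2 * v2 - (2 + 12 * PO) * v3 + 8 * v5 + 4.32 * v6 - 2 * v8,
            2 * v5 + v8,
            dP,
            2.56e-6 * dP]

# Hypothetical initial state (not given in the chapter): 1 litre of medium
# with 55 mmol glucose (roughly 1%), 100 mg protein and small internal pools.
y0 = [55.0, 1e-4, 1e-4, 5e-4, 5e-4, 1e-4, 100.0, 100.0 * 2.56e-6]
sol = solve_ivp(dydt, (0.0, 400.0), y0, args=(301.15,), method='LSODA')
print(sol.y[6, -1] / y0[6])   # fold change in protein (biomass) at 28 degrees C
```

Re-running the integration with T set to, for example, 314 K (41 °C) or 316 K (43 °C) shows qualitatively how the balance between protein synthesis (v6) and thermal denaturation (v7) shifts with temperature.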
13 Systems biology and food microbiology S. Brul, University of Amsterdam, The Netherlands, and H. V. Westerhoff, Free University of Amsterdam, The Netherlands
13.1 Introduction Food microbiology is important with respect to a large number of aspects of human wellbeing, given that we carry more microbial cells than mammalian cells, that we eat many microbial fermented foods and that, on the downside, we become ill from many microbial infections. We shall first discuss the type of microbial systems biology that relates to the production of food by microbes, a field traditionally called ‘metabolic engineering’. Here there is much systems biology activity, including pathways recognition, flux analysis, precise kinetic modelling, control analysis and regulation analysis. Food microbiology is perhaps mainly associated with the safety of food in terms of its possible contamination with pathogenic microorganisms. We will discuss how microbial systems biology may help in designing new types of growth-retarding environmental conditions. We will highlight how systems biology may help us to elucidate the mechanisms behind sorbic acid stress resistance in microbes. Finally, we shall briefly visit the field of microbial ecology and conclude with a consideration of how to apply derived knowledge in the context of food processing as a whole.
13.2 Systems biology: biology at last ‘All science is either physics or stamp collecting’ is a famous statement attributed to Rutherford (Birks, 1962). Biology started out as one of these, i.e. necessary
stamp collecting. Now it has come on a long way and, with the help of other disciplines, physics but certainly also chemistry and mathematics, it is near to its original quest and mission, i.e. to give a logical account of 'life'.1 Starting out with 'stamp collecting', a fair number of highly meaningful insights were obtained in all areas of biology. However, strict testing of models and hypotheses according to the principles put forward by amongst others Popper (1963) was often extremely difficult because of the possible action of yet unknown components of a given system. Functional genomics currently provides many, if not in some cases all, possibly relevant components of living organisms. However, when the resulting datasets are dealt with by, for instance, the methods of traditional physics with their emphasis on minimalism and simplicity, this avoids the issues that transcend the stamp collecting and addresses functionality. In this sense, systems biology addresses the functionality that emerges beyond the physics of the individual components of the living cell, i.e. in the non-linear and functional interactions of the latter. Therewith it installs biology, not as stamp collecting or physics but reaching beyond both, showing that all bold statements have to be looked at with due care.

1 The word 'biology' stems from the Greek βίος (bios), meaning 'life', and λόγος (logos), meaning 'study of; reasoned account of, logics of'.

13.2.1 Stamp collecting in microbiology
The activity of classification has been highly important for microbiology. It started from the concept of species, which is correlated with groupings in terms of anatomy and morphology. The microbial world is, however, much richer than the range of its anatomies and therefore this concept was insufficient. The ability to grow in certain conditions became another important way to define species. This definition of species through biochemical function led to the recognition that speciation appeared to be related to environmental conditions, as if in any reasonable environment some appropriate life form would be found, a principle attributed to Winogradsky and Beyerinck. From the perspective of evolution, the stamp collections of microbiology led to evolutionary trees similar to the ones proposed by macrobiology. They also led to the appreciation of a two-way relationship between genome and function.

13.2.2 '(Micro)biological physics'
Whilst the elementary particles of chemistry (i.e. the elements) are already much more diverse than the elementary particles of physics (electrons and protons and their constituents), the elementary particles of biology, i.e. macromolecules, living cells or organisms, are again much more diverse. It is thus no wonder that biology ranked behind physics in terms of a description of its underlying principles in molecular detail. Biology did begin though to develop general principles, and microbiology played a significant part in this process. An important example was the way in which Pasteur disproved the hypothesis of vitalism, which had hitherto
been considered quite feasible. By sterilizing and then isolating, he was able to show that the life that appeared in spoiling food originated from ‘invisible’ microorganisms entering from the environment rather than being generated de novo. Still, with exceptions, such precise and quantitative verification or falsification of hypotheses often remained beyond the capacity of biology. One well-recognized exception stood at the cradle of a subsequent reductionist approach to biology: the discovery of the B-structure of DNA. This discovery involved the making of a precise structural model, which only then made Watson and Crick realize that they should turn it inside out (Watson and Crick, 1953). The other example is that of the chemi-osmotic coupling mechanism, which Peter Mitchell (1968) precalculated in his two grey books that constituted perhaps the very first example of systems biology.
13.2.3 Genomics In the mid-1990s microbiology was confronted full swing with the genomics revolution. Although the official aim was the sequencing of the human genome, the sequencing group which would ultimately be most successful, i.e. that of Craig Venter, was motivated profoundly by microbiology. This group first went after the sequencing of the smallest genome (and finished this second after a slightly larger one), i.e. that of Mycoplasma. A few years later, the group completed its quest after understanding how small the smallest possible genome could be. The answer was: some 300 genes (Hutchison et al., 1999). It seems that life requires the coexistence of some 300-gene products, which can apparently function only when they are together. Noticeably, many microorganisms have genomes that are ten times this size. A second innovation brought about by the sequencing of entire genomes was that now for a number of living organisms, all of the most important classes of components became known, at least at the level of their DNA sequence. These were the genes, the corresponding mRNAs and the corresponding proteins. This may not be quite true yet, as not all genes could be identified and mRNAs can arise from splicing. On the other hand, it does entail that for any protein fragment of which the sequence or mass spectrum can be identified, the corresponding DNA sequence, and the reading frame of which it may be part, can be identified. Subsequently, its expression at the mRNA level can be measured and, through bioinformatics searches for homologies, its function may be guessed. In addition, through green fluorescent protein (GFP) fusions, its protein localization may be sought. The protein may also be over-expressed, its crystal structure determined and its activity assayed. The genome-wide sequencing has had a tremendous impact on the ability to examine the function of individual molecules. Returning to the statement of Rutherford, with genomics (micro)biology has acquired the ultimate ability for collecting all stamps, examining what they look like and comparing them between collections. The availability of the nucleotide sequence of the entire genome also stimulated another development that was important to the development of systems biology
(discussed extensively in Westerhoff and Palsson, 2004). Briefly, this concerned the spotting of probes for each mRNA of a genome (or for each potentially expressed piece of sequence) onto a surface, for hybridization with mRNA isolated from living organisms. The approach is now common practice and enabled the simultaneous determination of the expression of entire genomes, in terms of all their components. Again this meant that it had become possible to obtain a picture of the whole, rather than of individual components. Using various separation methodologies and mass spectrometry, it has since also become possible to measure the expression of entire microbial genomes at the protein level. The same activity for the metabolome is inherently more difficult, as metabolites cannot be as readily identified; they do not correspond to a piece of DNA well defined through genome sequencing. Yet, there is steady progress in the identification of the metabolites of some microorganisms (Kell, 2006). With these developments microbiology could make two quantum leaps in terms of its scientific status. First, it got rid of the problem that its hypotheses could not be tested because its systems were always incompletely defined in terms of their composition. Now, at least for all mRNAs and proteins and soon for metabolites, both presence and to some extent abundance can be established, and there can be no mRNA or protein the presence of which is not suspected on the basis of the completely known genome. Second, it acquired the capacity to look at functioning living systems as a whole in terms of all their constituents. Therewith it became possible to turn molecular biology into molecules-based biology. After all, the objective of study in biology is the functioning living state, and that living state has the minimum complexity of autonomous life, i.e. of individual microorganisms (at least some 300 genes, see above). Here microbiology is a front runner for the biology disciplines. The heterogeneity of gene expression over the various tissues of multi-cellular organisms, or over the various organisms in an ecosystem, precludes a similar complete level of analysis for macrobiology.
13.2.4 Why genomics alone does not yet lead to systems microbiology
The years since 2000 have witnessed tremendous further advances in our abilities to obtain complete 'microbial stamp collections': increases in speed, reductions in the time to data and in the time to well-accessible Internet formats. Complete expression patterns of genomes under some conditions are available on the Internet, and this has already given rise to analyses of such datasets by groups distinct from the group that had generated the data. Soon everyone will have access to numerous such datasets representing the expression of entire genomes. Having available the genome sequence of a certain microorganism may correspond to having the complete list of all words that occur in a human language. After doing the bioinformatics and homology searches and after crystallization and function determination, this list will have been transformed into a dictionary. Then, having the expression levels of the mRNAs and proteins for a number of conditions
may correspond to knowing, for a couple of chapters of a novel, how many times each word occurs in that chapter. All of this is impressive progress and leads to impressive amounts of information that somehow seems relevant for 'understanding' that novel. Because these are data that can be presented as such in non-controversial ways and without 'speculation', most scientific journals will find it hard to refuse the corresponding papers. However, these compilations of functional genomics data may tell one what is in the book, but they do not provide the reader with the story the novel tells. Thus, for all their might, neither the word count of the novel nor the functional genomics of an organism will by itself enable us to understand how life works, or what the novel intends to tell us.
13.2.5 Microbial physiology
This reminds one of the scientific discipline of microbial (cell) physiology, which was nearly pushed out of the limelight by the more appealing developments in molecular biology. Physiology describes the functioning of unicellular microorganisms in terms of correlations between systems properties. Using an analogy to a car engine, it is the study that reveals how that engine responds to increases in resistance that need to be overcome once the driver moves from flat to mountainous terrain. Note also that here the parts list of the engine fails to reveal how the engine responds to such increased resistance. In microbiology, microbial physiology would determine how the flux of catabolism of a given substrate varies with the change in environmental conditions such as pH, water activity, temperature, etc. Some empirical theories might be formulated, such as the supposition that there is a linear dependence between the catabolic flux and the growth flux. An interpretation could then follow for the intercept of this relationship as 'maintenance catabolism' (the engine idling, in the example discussed above). Where functional genomics did not by itself deliver the understanding of life aimed for by biology, neither did microbial physiology. After all, the empirical laws found could at any time be overturned by the interference of as yet not-understood or undefined processes. Moreover, there was no way in which those empirical laws could be founded in terms of underlying principles other than vague concepts such as that cells might need processes such as 'maintenance'. Indeed, attempts to replace the abstract maintenance concept by more mechanistic processes, such as the ATP hydrolysis needed to maintain ion gradients, had already led to significant effects on the phenomenological equations (Westerhoff et al., 1982). What was needed for physiology to lead us to the logos of life was its founding in the behaviour of the components of living cells. The attempt to do this was called molecular cell physiology (see e.g. Hellingwerf et al., 1995; also discussed more recently in Westerhoff and Palsson, 2004). This combined biochemistry and molecular biology with physiology, but again was limited in scope by the unavailability of complete insight into all the molecules active in the microorganism.
13.2.6 Models in microbiology
Certainly in its union with biochemistry and physiology, microbiology is full of models. The maps of pathways of metabolism, gene expression and signal transduction constitute models for how the cells carry out their functions. There are also models for the modes of action of antibiotics, such as those targeting the cell wall or protein synthesis. In the context of this book and of systems biology it may be more appropriate to discuss quantitative, 'mathematical' models. Various mechanistic modelling approaches may be used that are suitable for incorporation in such quantitative 'mathematical' models of microbial systems. One of these is the simple model for the dependence of growth rate on the concentration of the growth substrate, i.e. the Monod equation, which is isomorphic to the Michaelis–Menten equation. The Monod equation corresponds to a phenomenological description, which is sometimes rationalized by the assumption that the growth rate depends on a rate-limiting step that depends on the concentration of the growth substrate through Michaelis–Menten kinetics. The step could be the uptake step of the growth-limiting substrate. Because the affinity constants for most growth substrates correspond to concentrations below the detection limit, the Monod equation has not been much tested quantitatively. Where it has been tested, it is not particularly superior to other equations such as those of Mosaic Non-Equilibrium Thermodynamics (Westerhoff et al., 1982; Westerhoff and Van Dam, 1987; Senn et al., 1994; Wu et al., 2004). Another model describes the variation of growth yield with growth rate (Pirt, 1982) as a hyperbolic function with the maximum growth yield and a maintenance coefficient as presumed constants (cf. below). This model translates to an equation that prescribes the rate of catabolism to be equal to this maintenance coefficient plus a proportional function of growth rate. In chemostat studies these relationships are often obeyed. However, the maximum growth yields found through this equation were far (some 50%) below those expected on the basis of calculated stoichiometries of ATP production and consumption (Stouthamer, 1979). The maintenance coefficient in these equations was proposed to represent the catabolism used to empower reactions that the cells need to stay alive. These include the reactions maintaining the ion gradients across their plasma membranes. However, more detailed models that described ATP-hydrolysing reactions that would serve this purpose led to equations that contained growth-rate-dependent maintenance in addition to growth-rate-independent maintenance (Westerhoff et al., 1982). Also with these models, however, growth yields were unexpectedly low (Westerhoff and Van Dam, 1987). The above models were descriptive and, in the case of the latter, mechanistic. However, they were not explanatory in the sense of why microbial growth appeared to be so inefficient. In a non-equilibrium thermodynamic analysis, the thermodynamic efficiency of microbial growth was calculated and shown to correspond to efficiencies expected if growth had not been optimized for thermodynamic efficiency itself but rather for growth rate (Westerhoff et al., 1983). This proposal received some further support recently when it was shown that soil bacteria were able to reduce their maintenance energy requirement to very low levels in a retentostat (Lin, 2006).
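For reference, the two relationships referred to above can be written out; the chapter does not spell them out, so the following are their standard textbook forms, with μ the specific growth rate, [S] the growth-substrate concentration, K_S the Monod constant, q_cat the specific rate of catabolism, Y_max the maximum growth yield and m_S the maintenance coefficient:
\[
\mu = \mu_{\max}\,\frac{[S]}{K_S + [S]} \qquad \text{(Monod)}
\]
\[
q_{\mathrm{cat}} = \frac{\mu}{Y_{\max}} + m_S \qquad \text{(linear rate relation following from Pirt, 1982)}
\]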
None of these models was quite systems biology. They did not refer to genome-wide phenomena, nor did they emphasize the properties that emerge from interactions, nor were they constructed in terms of the precise properties of all the molecular components.
13.2.7 What is systems biology, or what should it most profitably be?
Concluding from the foregoing, systems biology should not replace, but rather add to, the existing microbial physiology, functional genomics, molecular biology and mathematical modelling. With these disciplines, we are currently able to identify (virtually) all molecule types of living systems in terms of their individual properties. Additionally, we are able to understand life in terms of correlations between physiological properties. However, we are not yet able to understand these physiological properties in terms of the molecular properties. What is missing are the functional properties of the system that are not yet present in its molecules. Of necessity, these properties must arise in the interactions of the molecules. Systems biology should therefore focus on the functional properties that emerge from the interactions of the components of living systems (Alberghina and Westerhoff, 2005; Westerhoff, 2005). When components only engage in linear interactions, the resulting properties can only be a weighted sum of the properties of the components. Consequently, for new functional properties to arise, the interactions must be non-linear. This non-linearity has a number of important consequences. First, when interactions are non-linear, their strength depends on the state of the system. Therefore, the development of the true functional properties of the systems can be understood only when a system is analyzed under its actual operating conditions. This also has implications for food microbiology since conditions are often suboptimal, with low numbers of microorganisms being exposed to, for instance, low temperatures when chilled products are studied. Generally, experiments therefore have to be done in and around the 'relevant' physiological conditions. Second, the strengths of the interactions will matter, and therefore the experiments need to be quantitative. In the third place, in a non-linear world the effect of the interaction of two components tends to depend on the interactions of these with other components, and this means that, at least potentially, all interacting components need to be considered. This requires the investigation of all components that are necessary for the system to function, which may require the studies ultimately to be genome-wide. In the fourth place, the understanding of non-linear interactions and their effects requires mathematical analysis of non-linear systems. And fifth, of course, when striving to understand how living systems make life work, it is essential to understand what life is. This requires the understanding of what functions are important for life: one needs the functional aspects of biology. Thus, systems biology needs to combine quantitative (i.e. accurate) experimentation around the physiological state, plus precise mathematical analysis, from the perspective of looking at the complete system which is ultimately a genome-wide
perspective, and with a connection to physiology. These latter aspects of systems biology are necessary, although not defining, properties. Accordingly, systems biology is not (just) the implementation of mathematical modelling in biology. There is much mathematical modelling in biology that does not focus on what emerges from the interactions. These aspects of systems biology rationalize its deep roots in various pre-existing disciplines. Although the developments of recent years might suggest an exclusive origin of systems biology in the axis through molecular genetics to functional genomics, there is quite an important second root in physical chemistry and mathematical biology. In a recent historical account of the evolution of molecular biology into systems biology, Westerhoff and Palsson (2004) highlighted this (Fig. 13.1).

Fig. 13.1 A schematic depiction of the evolution from molecular biology and cell-based black- and grey-box analysis to systems biology. The figure shows on the left-hand side the developments in molecular biology that have led to the present-day human genome sequence and numerous microbial genome sequences. At the same time it shows on the right-hand side the somewhat less well-known time-line of cell-based functional analysis as it developed over the years. Adapted from Westerhoff and Palsson (2004). PCR = polymerase chain reaction.

Systems biology has not only been dissected in terms of its roots, but also in
terms of its character. Here the analogy may be drawn with the study of the foliage of a tree with the aim of understanding how, through their interactions, the components of the tree bring about the living state of the latter. One way to study the tree would be to take a picture of all its individual leaves and then to collect these pictures in a photo album. In one way that album would be a complete representation of the system 'tree', yet it would not enlighten us much on how the tree actually functions. Making the collection would correspond to the biology of the system, rather than the systems biology of the tree, as it would not focus on the difference between the tree, i.e. the system, and the sum of all the leaves, roots, branches and trunks. A second way to study the tree would be to look at its dynamics in the wind. One would see that the leaves move irregularly at the fastest timescale. One would also notice that at a somewhat slower timescale, the movements of some leaves correlate with each other more than with the movements of other leaves. One might deduce that those leaves might be attached to each other. By focusing on even longer timescales one might find that some groups of leaves apparently correlating with each other would again correlate at that even longer timescale. In this way one might come to the hypothesis that leaves are attached to something one might call twigs, which are in turn connected to branches. This type of activity would correspond to measuring the concentrations of all macromolecules at one level of gene expression, for example all mRNAs, as a function of time after a number of perturbations of the system. It has been called 'top-down' systems biology as it starts from a bird's-eye view of the behaviour of the entire system and then zooms in on the possible interactions and mechanisms that may underlie that behaviour (Alberghina and Westerhoff, 2005). In terms of a pre-existing duality of science, it corresponds to the empirical branch. It is important that the inspection of the dynamics leads not only to the reporting of that dynamics, but also to hypothesis generation and to the subsequent formation of theory on how the living organism actually functions (Kell, 2006). Although not absolutely necessary, it would make sense to look at all the leaves of the tree in one go, i.e. to go genome-wide. Consequently much of this 'top-down' systems biology tends to engage in genome-wide experimentation. A third way to go about understanding how the tree works would be to study how, by connecting a leaf to a root through a branch, one could get energy out of the sunlight, enabling the pumping of water and minerals out of the soil. One would see how, through their non-linear interaction, root and leaf could both obtain entirely new properties and in fact grow. This approach has been called bottom-up systems biology. It is driven more by pre-existing concepts and theories. Bottom-up systems biology tends to start with a small number of components and work from there to the whole, genome-wide system. Since both top-down and bottom-up systems biology have the aim of understanding how functions of the system come about in interactions of its components, they are really two ways to approach the same problem. They can greatly synergize with each other, and the systems biology that combines the two approaches may be called integrative systems biology (see Westerhoff, 2005).
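The 'watching the leaves move' strategy corresponds computationally to clustering time-resolved expression data by the similarity of their dynamics. The sketch below is purely illustrative: two hidden 'modules' are simulated and then recovered by hierarchical clustering on a correlation distance; none of the data, parameters or names come from the chapter.

```python
# Illustrative only: simulate ten 'mRNA' time courses driven by two hidden modules,
# then group them by the correlation of their dynamics (top-down hypothesis generation).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)

module_a = np.sin(0.8 * t)            # one hidden 'twig'
module_b = np.exp(-0.3 * t)           # another hidden 'twig'
genes = np.vstack([module_a + 0.2 * rng.normal(size=t.size) for _ in range(5)] +
                  [module_b + 0.2 * rng.normal(size=t.size) for _ in range(5)])

corr = np.corrcoef(genes)                                  # pairwise profile correlations
dist = 1.0 - corr[np.triu_indices_from(corr, k=1)]         # condensed correlation distance
clusters = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print("cluster assignment of the 10 simulated genes:", clusters)
```

With these invented data the two groups of five genes are recovered, which is the kind of observation that would then prompt hypotheses about shared regulation or interaction.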
Fig. 13.2 Simulation of the growth of a microorganism contaminating food in the presence of a labile antimicrobial agent. Note that the y axis is given in arbitrary units equivalent to optical density and that it is assumed that the minimal level indicated ≠ zero cells but rather the detection limit of a given experimental analysis tool. From t = 1 the time derivative of x became
\[
\frac{dx}{dt} = x \cdot \left(1 - 5 \cdot e^{-0.25\,t}\right)
\]
13.3 Systems biology and food microbiology
In the context of food microbiology, the predictability of the reaction of biological systems has always been a major issue. This 'predictive microbiology' was driven not least by the need to arrive at quality assurance procedures based on a (very low) risk estimate of the probability that pathogenic microorganisms would be present in an end product of a food production process and cause disease upon its consumption. Thus the '12D' concept of Esty and Meyer (1922) was born. However, the consumer wish for better-tasting foods with higher nutritional value, delivered in ever shorter innovation cycles, has led to significant challenges to the need to always apply this 'botulinum cook' [see e.g. McMeekin and Ross (2002), as well as the web link of the European Technology Platform for Food http://etp.ciaa.be]. These events led to the challenge for 'predictive microbiology' to come up with more robust mechanistic models of microbial behaviour that should remove the need to (over)process, i.e. the need to be 'on the safe side'. In this respect, food microbiology does assume a special position in the sciences due to its focus on the behaviour of low numbers. This is illustrated by Fig. 13.2, which simulates the situation where food contains a limited number of cells of a
certain type of microorganism. A food preservative is added 1 h after the food has been prepared and this reduces the number of cells of this microorganism to far below the detection limit. However, if the food preservative is itself somewhat unstable, then after a time much longer than the characteristic time of the initial decay phase, ‘all of a sudden’ the microorganism reappears and then rapidly overgrows the food. If the latter happens just before consumption, a toxic microorganism may cause a serious health problem. The time of reappearance of the microorganism in Fig. 13.2 is determined by the point in time that the specific growth rate of the microorganism balances its specific killing rate. The latter of these is a function decreasing with time because of the decay of the food preservative. This balance can be a difference between two large numbers each of which may be determined by a number of processes and factors in the system. Consequently, because the system has many factors, which were until recently undeterminable, this important feature of reappearance of contamination was an issue of unpredictability. Now, with the ability of functional genomics to make most if not all important factors known and testable, it would seem that the unpredictability may disappear. For a simple system consisting of linearly interacting components, this should indeed be the case. Suppose that the food preservative concentration would just decay to zero with time and the microorganism would only then start growing with the same time-dependency up to the toxic level. Then the time at which the food would become toxic would be the simple sum of the two times. In reality, however (and in the model used for Fig. 13.2), the microbial growth rate varies non-linearly with the concentration of the preservative and with the concentration of the microorganism. Because of these non-linearities the microorganism re-emerges suddenly and fairly unpredictably. It should be realized of course that at least one survivor needs to be present within a product unit, otherwise growth is not possible. Simple superposition of suspected implications of functional genomics experiments or extrapolation on the basis of initial data cannot lead to predictions in such a non-linear system, unless they are aided by computation. The latter is required to guide intuition through a non-linear system. Systems biology is needed in order to be able to predict the results of the non-linear interactions between system components, and modular approaches may be very useful for this purpose. As is the case with many biological systems of interest, food microbiology is one in which there is a natural division in such modules. One is the human consumer of the food, the others are the microbes that may be present in the food or in the environment in which that food is useful to the eater. The latter may be in the oral cavity where taste is co-determined by the microorganisms present in the mouth and in the intestinal tract where the microorganisms co-determine which part of the food will enter the system of their host. This is the microbial ecology part of food microbiology. In the following we shall discuss some applications and developments of systems biology in three areas of food microbiology, i.e. metabolic engineering, food safety and food ecology.
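The behaviour shown in Fig. 13.2 can be reproduced with a few lines of code. The sketch below integrates the rate equation given in the figure caption; the assumptions that the population simply grows exponentially before the preservative is added at t = 1 h, and that the detection limit equals 0.01 in the same arbitrary units, are ours for illustration only.

```python
# Minimal simulation of the Fig. 13.2 scenario: growth of a contaminant in the presence of a
# labile preservative. Units of x are arbitrary (OD-equivalent); parameter choices before
# t = 1 h and the detection limit are assumptions, not values given in the chapter.
import numpy as np
from scipy.integrate import solve_ivp

def rate(t, x):
    """dx/dt: plain exponential growth before the preservative is added at t = 1 h,
    thereafter growth minus a decaying, preservative-dependent killing term."""
    if t < 1.0:
        return x
    return x * (1.0 - 5.0 * np.exp(-0.25 * t))

sol = solve_ivp(rate, t_span=(0.0, 40.0), y0=[1.0], t_eval=np.linspace(0.0, 40.0, 401))

detection_limit = 0.01                       # assumed detection limit (arbitrary units)
below = sol.y[0] < detection_limit
t_balance = np.log(5.0) / 0.25               # growth balances killing: 1 - 5*exp(-0.25*t) = 0
print(f"growth balances killing at t = {t_balance:.1f} h")
if below.any():
    print(f"population below detection limit between t = {sol.t[below][0]:.1f} "
          f"and t = {sol.t[below][-1]:.1f} h")
else:
    print("population never drops below the assumed detection limit")
```

The run shows exactly the feature discussed above: the culture appears 'sterile' for several hours and then re-emerges, fairly suddenly, once the decaying killing rate falls below the specific growth rate (here at about 6.4 h).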
13.4 Food production: metabolic engineering

13.4.1 Network topology: the blueprint and metabolic potential
The blueprint is the first stop on this line, and it addresses pathway construction. Genome sequences are often annotated in terms of the process they may be encoding. This is mostly based on sequence homology with a gene in a different organism that is known to encode an enzyme with a certain catalytic function. Occasionally the evidence for molecular function is more direct. A network of enzymes with catalytic capabilities has an important emergent property, i.e. the ability to perform a number of those reactions in series. Flux through a network can occur only when the different steps in that network interact through the metabolites between them. Then combinations of the individual reactions constitute pathways. A characteristic of metabolic pathways is their conservation of element mass and often of more complex combinations thereof, such as the AMP moiety of the adenine nucleotides. The consequence is that synthetic reactions for which a gene is present can only realistically occur in the organism when there are also reactions encoded that supply the substrates needed for that reaction, as well as reactions that remove the products. The exceptions are the substrates that are provided from the outside and can be taken up, and the end products that are secreted. Chemical reactions may be represented by their reaction equations. We shall here discuss a simplified example where we suppose that an enzyme (or group of enzymes) has been identified that converts pyruvate to ethanol:

pyruvate → ethanol + CO2    reaction (1), at rate ν1

Ethanol, carbon dioxide, oxygen and phosphate are molecules that we suppose to be able to enter and leave the cell freely, and we shall not discuss here other reaction components such as protons and redox equivalents (NADH) [which are important in reality (Palsson, 2006)]. We suppose that another enzyme has also been found in the genome that oxidizes pyruvate to carbon dioxide, coupled to the synthesis of eight molecules of ATP from ADP and phosphate:

pyruvate + 8 ADP + 8 phosphate + 5 O2 → 3 CO2 + 8 ATP    reaction (2), at rate ν2

[The number 8 derives from an assumed P/O ratio (ATP produced per mono-oxygen reduced) of 1 and the assumed mitochondrial oxidation of the NADH produced by glyceraldehyde-3-phosphate dehydrogenase.] The rate of change of pyruvate with time is given by:
\[
\frac{d[\text{pyruvate}]}{dt} = -1\cdot\nu_1 - 1\cdot\nu_2 = \begin{pmatrix} -1 & -1 \end{pmatrix}\cdot\begin{pmatrix} \nu_1 \\ \nu_2 \end{pmatrix}
\]
where t is time and matrix notation has been used. The analysis shows that the pyruvate concentration would continue to decrease and ultimately become negative for sustained magnitudes of the two rates. No steady-state production of ethanol could be achieved (i.e. pyruvate consumption being matched by pyruvate
production). Consequently reactions 1 and 2 do not constitute a complete pathway; they do not produce the emergent property of a steady flux through the system. This conclusion suggests that one should search in the genome sequence for a gene that is homologous to a gene that is in some organism able to encode an enzyme producing pyruvate. We assume now that this enzyme (group) is found and synthesizes two molecules of pyruvate from one molecule of glucose whilst phosphorylating two ADP molecules:

glucose + 2 ADP + 2 phosphate → 2 pyruvate + 2 ATP    reaction (3), at rate ν3

The above equation becomes:
\[
\frac{d[\text{pyruvate}]}{dt} = N \cdot \nu = \begin{pmatrix} -1 & -1 & 2 \end{pmatrix} \cdot \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}
\]
where N is the matrix describing how reactions 1, 2 and 3 affect the balance in pyruvate. This now allows for a balance in pyruvate provided that the three reaction rates obey the relationship that two times the rate of the third reaction equals the sum of the rates of the first and second reactions. An example is given by the vector:
\[
\begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}_{ss} \equiv p_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}
\]
where the subscript ss denotes steady state. The matrix N (also written as S) is called the stoichiometry matrix of the network, and ν the rate vector. The vector p1 corresponds to a pathway and equals a basis vector for the kernel (or null space) of N. All combinations of reaction rates that correspond to a factor times this p1 are also consistent with a balance in pyruvate, i.e. all cases where all rates are equal to each other. The equation:
\[
\nu_1 + \nu_2 = 2\,\nu_3
\]
also has solutions with rates different from each other. An example is:
\[
\begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}_{ss} \equiv p_2 = \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}
\]
This is a second pathway, which one may recognize as the alcoholic fermentation of glucose, as performed by the yeast Saccharomyces cerevisiae. All linear combinations of the two pathways we have now discussed also make the pyruvate concentration time-independent. This includes the respiration of glucose to carbon dioxide:
\[
\begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}_{ss} \equiv p_3 = \begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix} = 2\,p_1 - p_2
\]
Since chemical reactions are reversible in principle, solutions that have negative rates are also tolerable in principle. The extreme pathway approach developed by Palsson and coworkers, however (Schilling et al., 2000a,b; Papin and Palsson, 2004), starts from the concept that in biochemical networks most processes only run in one direction, and it requires all rates to be positive. Indeed, for most biochemical reactions the standard free-energy difference is sufficiently far from zero to make reversal of the reaction infeasible. The linear combination 2p1–3p2 should thereby be considered unrealistic as it would lead to a negative flux through process 1 (i.e. synthesis of pyruvate from extracellular alcohol and carbon dioxide). It is hard to see in advance, however, which linear combinations of pathways would lead to negative rates. The extreme pathways approach therefore first determines the pathways from which all other pathways can be reconstructed as linear combinations with positive coefficients. If one plots the three pathways we discussed in a 3-D picture, all chemically possible linear combinations of rates correspond to points on the 2-D plane defined by any combination of two of the three pathways. All rates that are biochemically feasible lie in the positive quadrant of this plane. The limits of the plane of biochemically allowable rates correspond with the intersection of the plane of chemically allowed rates with the planes defined by v1 = 0 and by v2 = 0 (the line defined by v3 = 0 lies outside the positive quadrant). All biochemically allowable rate combinations (i.e. all rate combinations with positive rates and the system at steady state) therewith lie on the area of the said plane that is delimited by the vectors p2 and p3. This area of the plane is a two-dimensional cone. p2 and p3 have been called the extreme pathways. In general, extreme pathways are defined as pathways that lie in an (n–r)-dimensional space of a flux map at the edges of a cone that bounds the null space, that is, the process-rate space that is consistent with steady state (see e.g. Papin and Palsson, 2004; n is the number of independent reactions and r the number of independent metabolite concentrations; here they are 3 and 1, respectively). It is of interest to see that for S. cerevisiae the extreme pathways also make biochemical sense, corresponding to the aerobic catabolism and the anaerobic catabolism of glucose. In genome-wide analysis, rates that are consistent with steady state fall into a multidimensional (hyper)cone starting in the origin of the rate space, extending into the positive orthant and delimited by a number of extreme pathways that equals the dimensionality of the cone. Schilling et al. (2000a,b) have developed an algorithm for finding extreme pathways. If all enzymes have been taken into account, the flux space spanned and delimited by the extreme pathways corresponds to the biochemical potential of a genome. Consequently, if one wishes a particular organism to make a particular product from a number of substrates to be supplied, it makes sense to determine this biochemical potential before embarking on the implementation of this organism.
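The steady-state reasoning above is easy to verify computationally. The sketch below uses exactly the stoichiometry matrix and pathway vectors of the pyruvate example; the use of scipy and the variable names are, of course, ours and not part of the chapter.

```python
# Check that p1, p2 and p3 all satisfy N @ v = 0 (pyruvate balance) and that the null space
# of N is two-dimensional, as stated in the text (n - r = 3 - 1 = 2).
import numpy as np
from scipy.linalg import null_space

N = np.array([[-1.0, -1.0, 2.0]])        # one internal metabolite (pyruvate) x three reactions

p1 = np.array([1.0, 1.0, 1.0])           # all rates equal: mixed fermentation/respiration
p2 = np.array([2.0, 0.0, 1.0])           # alcoholic fermentation of glucose
p3 = np.array([0.0, 2.0, 1.0])           # respiration of glucose (= 2*p1 - p2)

for name, p in [("p1", p1), ("p2", p2), ("p3", p3)]:
    print(name, "at steady state:", np.allclose(N @ p, 0.0))

K = null_space(N)                        # orthonormal basis of all steady-state rate vectors
print("dimension of the null space:", K.shape[1])
```

Only the non-negative part of this two-dimensional null space (the cone bounded by p2 and p3) is retained as biochemically feasible in the extreme pathway approach.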
Further details on how to work with the approach are given extensively in Addendum 1 to this chapter. For a comparison of the extreme pathways methodology with alternative methods used in metabolic engineering see also e.g. Koffas and Stephanopoulos (2005) or Kholodenko and Westerhoff (2004a,b).
13.4.2 Network topology: the definition of metabolic pathways
The methodology for identifying extreme pathways has not only proven useful for analyzing the possible performance of metabolic networks, but it has also provided a systematic definition of what a biochemical pathway is. Although the concept of biochemical pathways had been used for many years, there had been no strict definition. That was fine for the early days, but with genome-wide datasets becoming available the situation with undefined pathways became untenable. Schuster and Hilgetag (1994) had already addressed this issue and defined 'elementary modes'. Their definition differs somewhat from that of extreme pathways in that it does take into account both reversible and irreversible steps in a pathway and may therewith better correspond with actual metabolic pathways (Schuster et al., 2000). Extreme pathway analysis dissects reversible steps into two irreversible steps, which is somewhat unnatural biochemically, as both steps are catalyzed by the same gene product and inhibited by the same competitive inhibitors. Extreme pathway analysis and elementary mode analysis both suffer from yet another disadvantage: the number of pathways they generate for genome-scale networks is very high (Papin et al., 2004). To illustrate this we may consider a simple scheme in which pyruvate functions as the central node in metabolism, which can be produced by three catabolic reactions and from which three anabolic reactions can start. This leads to 3 × 3 = 9 extreme pathways. Classical biochemistry would define the three routes to pyruvate as three biochemical pathways and the three routes from pyruvate as three more biochemical pathways, leading to six pathways in total. Indeed, genomes do not tend to encode pathways that lead from any growth substrate to any growth product. They rather encode catabolic pathways that degrade all substrates to a limited set of carriers for carbon, nitrogen and free energy, from which the biosynthetic processes then start again. Therefore it makes sense to define the catabolic and the anabolic pathways separately. In this sense the extreme pathways and elementary modes may not be able to substitute for the classical biochemical pathways. A solution to this problem might be to take ATP and NADH as external metabolites.
13.4.3 Network topology and flux-balance analysis: what would be wise of the cell to do
The extreme pathway analysis given above did not lead to a unique flux pattern on the basis of the pathway reconstruction: various flux patterns were still possible, i.e. all positive linear combinations of the four extreme pathways. Kinetics and
gene-expression regulation determine which of all possible pathways is actually executed by the organism and to what extent. If one wants to determine or predict which fluxes actually occur in the organism, then there are three possible approaches. The first is to determine this experimentally, in an approach that is called flux analysis. The second is to determine the kinetics and regulation of the processes in the organism and predict on this basis what will happen. A third is to assume that the biochemical network fulfils additional criteria, such as optimality of a so-called objective function. We shall briefly discuss these three approaches in reverse order. From its name, it might seem that the so-called flux-balance approach merely balances fluxes such that steady states are obtained and hence leads to all possible metabolic fluxes only as combinations of the extreme pathways. However, it does not. It makes additional assumptions through which it does arrive at unique predictions of the fluxes in the system. Although this may seem useful, a strong warning is in order: flux-balance analysis makes very strong assumptions, some of which are known to be wrong. Claims that the flux-balance method yields the proper metabolic map of microorganisms are therefore unwarranted speculations. The approach is useful, however, to determine what flux patterns should be optimal for a process that one is engineering (Edwards and Palsson, 2000; Schilling et al., 2000a,b; Ibarra et al., 2002). Details on how to work with flux-balance analysis are discussed in Addendum 2. Barrett et al. (2005) applied flux-balance analysis to Escherichia coli 'virtually cultivated' under 15 000 different conditions! The clustering of the predictions showed that the regulatory network governing the response of E. coli to such challenges responded mainly to the carbon source (glucose or not) and the presence of a preferred electron acceptor. Such network pattern recognition and reduction of complexity (from over 1000 genes to a few stable functional states) are the main outputs of research in this area. In our opinion, this shows one utility of a systems approach to such (micro)biological issues. However, again, this concerns an organism which is supposed to function optimally, but it does not always do so (see above). One should note that the conclusions drawn by Barrett et al. may also pertain to the non-optimal real organism. However, this is uncertain and requires further experimental work. Teusink and Smid (2006) recently argued that similar modelling approaches applied to lactic acid bacteria fermentations can assist industry in improving process performance significantly, be it yield of product or other process criteria. Non-linear optimization analyses may be needed to explain the yeast case (Rossell et al., 2006; Rossell and Westerhoff, in preparation).
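As a minimal illustration of how a flux-balance calculation arrives at a unique flux prediction (the real method, with its genome-scale networks and objective functions, is described in Addendum 2 and the cited papers), the sketch below maximizes an assumed objective, ethanol production via reaction 1 of the pyruvate example, subject to the steady-state constraint N·ν = 0, irreversibility, and an arbitrary cap of 10 on glucose uptake. The objective, the bound and the solver choice are our assumptions for illustration.

```python
# Toy flux-balance analysis of the three-reaction pyruvate network: maximize v1 (ethanol
# formation) subject to N @ v = 0, v >= 0 and v3 (glucose uptake) <= 10.
import numpy as np
from scipy.optimize import linprog

N = np.array([[-1.0, -1.0, 2.0]])                   # pyruvate balance; columns = reactions 1-3
c = np.array([-1.0, 0.0, 0.0])                      # linprog minimizes, so minimize -v1
bounds = [(0.0, None), (0.0, None), (0.0, 10.0)]    # irreversibility; assumed uptake cap

res = linprog(c, A_eq=N, b_eq=np.zeros(1), bounds=bounds, method="highs")
print("predicted fluxes (v1, v2, v3):", res.x)      # expected: ethanol 20, respiration 0
```

The unique answer (all glucose fermented, no respiration) is unique only because of the added optimality assumption; the warning in the text applies to this toy case just as much as to genome-scale models.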
13.4.4 Network topology and functional genomics: what is actually present
Because one cannot be certain that the optimum states described by flux-balance analysis are real, one needs to establish which pathways are actually active. The most pertinent then is the flux analysis that we shall discuss in the next section, but functional genomics may help. By determining the transcriptome, or the proteome,
one may assess whether all the enzymes proposed to be present in the state found by flux-balance analysis are actually expressed. Most relevant in that regard is that proteomics has really taken off in the years since 2000 and many approaches are now available (reviewed in Volker and Hecker, 2005). The most common one still is the use of two-dimensional gel electrophoresis coupled to protein identification with mass spectrometry. This technique has been optimized with the introduction of fluorescent labels that allow for the post-labelling of proteins isolated from cell extracts (Bernhardt et al., 1999). The use of three labels, one for a common reference stain for a given situation and two for ‘test’ strains, enables the analysis of multiple samples on one and the same gel. This gives great advantages if one thinks about the improved reproducibility of inter-sample comparisons since the gel-to-gel variation in quality is removed. A recent study on salt stress adaptation in Bacillus subtilis illustrated this point (Höper et al., 2006). Because enzymes may be present but inactive, an ‘enzymome’ analysis, where the activity of all enzymes would be measured in cell extracts, might actually be the most pertinent. However, this has not yet been accomplished in the required highthroughput mode. Metabolomics assessing relaxation after perturbations could also show whether enzymes are active, but again this methodology is not yet genome-wide.
13.4.5 Flux analysis: actual network fluxes
Experimental measurement of the fluxes appears to be necessary in order to know what the fluxes are in microorganisms and then perhaps to determine whether the true flux pattern corresponds with what should be optimal according to flux-balance analysis. However, the experimental approach also has its difficulties. It is only the fluxes into and out of the microorganism which can be measured relatively easily. Fluxes inside the organism cannot. Flux analysis therefore gives only a subset of all fluxes. A further problem is that in certain cases circular reactions or bypasses occur which, without extra information from co-factors, make the determination of the respective fluxes impossible. Numerous examples exist where it has been possible to derive, from existing biochemical knowledge concerning the network topology and the results of the experiments, some internal metabolic fluxes of the microbes under study. In one particular example, the internal flux distribution of carbon was calculated from the distribution of the end products and the increase in biomass in E. coli (Picon et al., 2005). In another example, the redistribution of fluxes in Enterococcus faecalis was calculated over the pyruvate dehydrogenase/pyruvate formate lyase node (Ward et al., 2000). The data showed that, in order to reach redox neutrality also at low pH under anaerobic pyruvate-limited culture conditions, the pyruvate dehydrogenase enzyme complex had to be active. Previously, activity of the latter had been shown only under aerobic conditions! Sauer and colleagues have applied an elegant shortcut. They studied rapidly growing cells and determined the composition of their biomass. By multiplying the
growth rate by that biomass composition they calculated the biosynthetic fluxes of all compounds necessary for that biomass (Fischer and Sauer, 2005). Using this method they found that B. subtilis does not seem to behave optimally. In many cases a microorganism has more than one way in which to convert its external substrates to its external products, and these ways may (obviously) differ in ATP efficiency. Then many, and in fact perhaps the more interesting, fluxes cannot be determined by standard flux analysis. This problem may be approached by the use of isotope labelling, a topic which has been reviewed recently (Moreira dos Santos et al., 2004).
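A toy version of the shortcut attributed above to Sauer and colleagues, in which biosynthetic fluxes follow from multiplying the growth rate by the measured biomass composition; the growth rate and composition values below are invented for illustration, not data from the cited work.

```python
# Biosynthetic fluxes estimated as growth rate x biomass composition (hypothetical numbers).
mu = 0.6                                    # specific growth rate (1/h), assumed
composition = {                             # mmol of precursor per g dry weight, assumed
    "protein amino acids": 4.0,
    "RNA nucleotides": 0.6,
    "lipids": 0.1,
    "cell-wall sugars": 0.25,
}

for precursor, content in composition.items():
    flux = mu * content                     # mmol per g dry weight per hour
    print(f"{precursor}: {flux:.2f} mmol gDW^-1 h^-1")
```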
13.4.6 Actual network dynamics
Knowing what the fluxes are may enable comparison with what they should be if the organism behaves optimally. However, by itself it does not make us understand how the fluxes would change if one were to engineer the organism. For this, one needs to know how the network behaves when concentrations change. This is a complex issue of systems biology, and answers require detailed consideration of the kinetics of the enzymes in the organism. The silicon cell program (www.siliconcell.net) takes this approach by making or collecting precise kinetic models of pathways in microorganisms. These are also made available on the world-wide web such that everyone can use them to engage in metabolic engineering in silico (cf. www.jjj.bio.vu.nl). The approach has been reviewed recently, also in terms of the metabolic engineering of Lactococcus lactis, where it made a successful prediction (Snoep et al., 2004, 2006). This may well be the ultimate approach. However, it requires not only DNA sequences but also the kinetic properties of all the components and will take much longer to complete. In cases with major fluxes, or with minor fluxes branching from major fluxes (Snoep et al., 2004), limited-scale silicon cell models should already be successful and therefore much is to be expected from this approach in the coming years.
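The sketch below illustrates, with entirely hypothetical kinetic parameters, what a very small 'silicon cell'-style model looks like: a two-step pathway with explicit enzyme kinetics is integrated to steady state and, anticipating the next section, flux control coefficients are estimated numerically. Their sum should come out close to 1, in line with the summation theorem of metabolic control analysis; none of the parameter values or the pathway itself are taken from the published models.

```python
# Two-step pathway S -> X -> P: step 1 is product-inhibited, step 2 is Michaelis-Menten.
# Integrate to steady state, then estimate flux control coefficients by small perturbations.
import numpy as np
from scipy.integrate import solve_ivp

S = 10.0                                                   # fixed external substrate (mM)
params = dict(Vmax1=1.0, Km1=2.0, Ki=1.0, Vmax2=1.5, Km2=1.0)   # hypothetical values

def v1(x, p):                                              # supply step, inhibited by X
    return p["Vmax1"] * (S / p["Km1"]) / (1.0 + S / p["Km1"] + x / p["Ki"])

def v2(x, p):                                              # consumption step
    return p["Vmax2"] * x / (p["Km2"] + x)

def dxdt(t, y, p):
    return [v1(y[0], p) - v2(y[0], p)]

def steady_state_flux(p):
    sol = solve_ivp(dxdt, (0.0, 1000.0), [0.01], args=(p,), rtol=1e-9, atol=1e-12)
    return v2(sol.y[0, -1], p)                             # at steady state v1 = v2 = J

J = steady_state_flux(params)

def control_coefficient(key, dp=0.01):                     # C = d ln J / d ln Vmax (numerical)
    perturbed = dict(params, **{key: params[key] * (1.0 + dp)})
    return (np.log(steady_state_flux(perturbed)) - np.log(J)) / np.log(1.0 + dp)

C1, C2 = control_coefficient("Vmax1"), control_coefficient("Vmax2")
print(f"J = {J:.3f}, C1 = {C1:.2f}, C2 = {C2:.2f}, sum = {C1 + C2:.2f}")
```

In this toy case control is shared between the two steps (roughly 0.8 versus 0.2), which is exactly the kind of information a metabolic engineer would want before deciding which enzyme to over-express.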
13.4.7 Network control
Metabolic engineers will want to know which enzymes they should activate or inhibit in order to change production fluxes in a required direction. Metabolic control analysis produces just this information and, therefore, this is the method of choice for metabolic engineering. It comes with a number of additional laws/principles that can be useful in order to understand which steps in the organism an engineering approach should be directed to and why. Recent reviews have shown that the approach can be quite successful (Snoep et al., 2004). It was for instance shown that to increase the glycolytic flux in E. coli, ATP consumption processes should be activated (Koebmann et al., 2002). Metabolic control analysis works in a framework where gene expression is fixed. Its variant, hierarchical control analysis, deals explicitly with control through gene expression (Snoep et al., 2002). Because of a lack of systems with
clear experimental data, the approach has not yet been implemented as much as it should (Kholodenko and Westerhoff, 2004a).

Fig. 13.3 The principle of REDUCE (Bussemaker et al., 2001; see also Boorsma et al., 2005 for a further extension of the tool). REDUCE (Regulatory Element Detection Using Correlation with Expression) uses a statistical approach to find motifs based on (a limited set of) microarray experiments. By applying an unbiased search, REDUCE selects those sequence motifs whose presence in the upstream region of a gene correlates with an induction or repression of gene expression. Regions of up to 600 base pairs located on the coding strand of the genome are generally used for the analysis. In addition to continuous motifs, motifs with spacing of up to 20 base pairs may also be used in the analysis. Sequence motifs may also originate from direct biochemical binding studies, e.g. ChIP-chip studies (see e.g. Boorsma et al., 2005). For those sequence motifs that are shown to be linked to activated genes, the data are subsequently coupled to transcription factor databases. Here the factors are documented and linked to specific microbial signal (stress) response pathways.
13.4.8 Network regulation: hierarchical regulation analysis
In analyses of gene-expression experiments, individual genes are rarely the (initial) end point. Trends are sought that allow for a grouping of genes into metabolic and signalling pathways. Various techniques may help, such as hierarchical clustering and T-profiler (see e.g. Eisen et al., 1998; Boorsma et al., 2005). Both approaches have their pros and cons. The former tool allows for the identification of the behaviour of individual genes whereas the latter is more useful for analyzing the molecular switches that operate at the level of activating or inactivating entire pathways. The latter tool looks at the promoters of genes using both correlation analysis at the genome level [e.g. REDUCE; see Roven and Bussemaker (2003) and Fig. 13.3] and information from direct DNA binding tests assaying protein (transcription factor)–DNA interactions in vitro. Such analysis allowed Zakrzewska et al. (2005) to analyze the response of bakers' yeast to the antimicrobial compound chitosan. The yeast response against the weak-organic
acid preservative sorbic acid has been analyzed similarly (De Nobel et al., 2001). Currently the tool is used for the assessment of the response of vegetative B. subtilis (spoilage) bacteria to such compounds (Ter Beek et al., 2006, submitted). The response of microbes to lipophilic weak acids involves changes in the expression of genes encoding proteins involved in energy metabolism, maintenance of intracellular pH and regulation of the metabolism of fatty acids and lipids. It remains crucial to realize that such genome-wide gene-expression analyses provide indicators of microbial response. Such 'indicators' subsequently need biochemical and physiological confirmation. In a recent account Koffas and Stephanopoulos (2005) have discussed how lysine production may be enhanced in Corynebacterium glutamicum through the use of a contemporary systems approach in metabolic engineering. This was made possible through the availability of the sequence of this organism (Ikeda and Nakagawa, 2003; Kalinowski et al., 2003) and its combination with the assessment of the metabolic profiles of such cells in culture. Thus metabolic engineering studies have increasingly acquired a holistic character, combining genomics, transcriptomics and metabolic data. As such they became a hallmark for similar approaches in other fields of microbiology (see also de Vos, 2001; Hermann, 2004). Figure 13.4 indicates our philosophy behind this.

Fig. 13.4 Schematic depiction of the philosophy behind (microbial) systems biology as applied to the food industry. The iterative cycle of events is valid both for applications studying microbial spoilage events and for applications studying microbial food fermentations. In the former case knowledge on stress response is used to validate its constituent molecules as antimicrobial targets. In the latter case the data are used to come to optimal preparation of the ferments, for example enhancing shelf-life stability and, in the case of probiotics, acid resistance.

From the genome, proteome, transcriptome, metabolome and fluxome data, properly stored in data-
warehouses, the contemporary bioinformatics tools should enable the generation of models that will lead to an integral view of the microbe as a 'system'. Through the use of iteration, this stimulates the generation of new hypotheses upon which validation experiments studying interactions of microbes with the food environment are performed. Perhaps because they are technology driven, many systems biology studies focus on the complete transcriptome of an organism, or perhaps the proteome or the metabolome. Yet, what finally happens is clearly the result of processes operating at various levels of cellular hierarchy, e.g. transcription, translation and metabolism. Hierarchical regulation analysis is an approach that enables one to dissect how much regulation occurs at the various levels of this hierarchy when an organism responds to a change in its environment. Rossell et al. (2006) have examined this for yeast subjected to starvation. They found that the organism regulated both at the metabolic level and at the gene-expression level, and they were able to quantify how much regulation occurred at each level. This approach is quite promising with a view to averting the homeostatic response that organisms display when they are being engineered or, in the promotion of microbial food stability, challenged with preservatives.
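A small worked example of the dissection referred to above. Assuming the enzyme rate can be written as v = Vmax · g(metabolites), the hierarchical regulation coefficient is the part of the change in flux attributable to the change in enzyme capacity (gene expression), and the metabolic coefficient is the remainder; the numbers below are invented for illustration and are not those of Rossell et al. (2006).

```python
# Hierarchical regulation analysis for one enzyme between two conditions:
#   rho_h = dln(Vmax)/dln(J),  rho_m = 1 - rho_h   (gene-expression vs metabolic regulation)
import numpy as np

flux = np.array([2.0, 0.5])     # flux through the enzyme's step in conditions 1 and 2 (assumed)
vmax = np.array([1.0, 0.4])     # enzyme capacity measured in cell extracts (assumed)

rho_h = np.diff(np.log(vmax))[0] / np.diff(np.log(flux))[0]
rho_m = 1.0 - rho_h
print(f"hierarchical regulation: {rho_h:.2f}, metabolic regulation: {rho_m:.2f}")
```

With these hypothetical numbers about two-thirds of the flux change would be attributed to changed enzyme levels and one-third to changed metabolite levels, the kind of split reported for starving yeast.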
13.5 Food safety

13.5.1 From molecules to cells (how to work with the parts list of life)
Food safety is a major issue for food microbiology, and much of what we exemplified above for metabolic engineering can be used to the benefit of food safety systems. Overall, gene-expression profiles have helped us to elucidate the genes operative in resistance against the antimicrobial compounds used in the pharmaceutical and food industries (see below and e.g. Brul et al., 2002b; Cooper and Carucci, 2004). Currently the integration of such data with state-of-the-art physiological data allows us to apply true systems biology approaches where metabolic and molecular data are jointly analyzed and translated into cellular models of homeostasis. A good example of such an approach is EcoCyc (SRI International, Menlo Park, CA, USA) [see Kesseler et al. (2005) and Barrett et al. (2005) for a transcription regulation network model]. Starting with such whole-genome analyses, one can define a minimal set of genes that is required in a given microorganism for it to survive various environmental stress factors successfully (discussed in Pal et al., 2006). Upon listing the individual constituents of (single) cells, the proper annotation of these molecules, and their (presumed) distribution over the various strains (representatives) of a species, is very important. For this, more and more comparative genomics approaches are used, in which this is (partially) achieved through sequencing of (relevant parts of) a genome for a sufficient number of strains to be comprehensive for a certain species. The recent developments in parallel sequencing, as reported in the August 2005 issue of Nature, show that the day when it will really become feasible to sequence a microbial
genome is not too far in the future (Margulies et al., 2005). Such DNA-based analysis leads to a number of open reading frames that potentially encode proteins. In 50–60% of cases a straightforward comparison of sequences will lead to a prediction of gene (protein) function. It is for the other 40–50%, where only a putative protein, i.e. a protein with unknown function, is predicted, that it clearly becomes necessary to dig deeper for a correct initial identification. Current experimental approaches cover the use of either knock-out strains or specific mutants with inducible promoters to study the effects of removing the protein of interest or of artificially inducing it to different cellular levels. Subsequently the phenotypes of such cells should hopefully point to possible functions of the genes/proteins whose synthesis is perturbed in the mutants (see e.g. Mori, 2004). This approach may be complemented by the pathway analysis methods discussed in the previous sections. Microorganisms found in food or suspected to be in food may be sequenced and then their metabolic potential can be determined by extreme pathway analysis. They may then be recognized as making toxic products, or as being able to convert substrates in the food to less healthy products. Subsequently, all extreme pathways leading to the toxic product may be identified. For these pathways one may then examine which other food substances (e.g. nitrogen source) will be required in anabolic reactions. This may then suggest new ways of reducing the unwanted abundance of these microorganisms, or of their toxic products, in foods.
13.5.2 Antimicrobials
Extreme pathway analysis allows one to study in silico the effects of deletion and inhibition of various metabolic enzymes. By itself the extreme pathway analysis should enable one to determine which enzymes one should inhibit strongly so as to prevent growth, and this may suggest new antimicrobial targets. While one needs to be cautious in interpreting the results of the flux-balance method with its assumed maximum efficiency, the results are nonetheless often enlightening. In yeast it was, for instance, shown that the deletion phenotype of a substantial subset of metabolic enzymes depends on growth conditions (see e.g. Almaas et al., 2004; Reed and Palsson, 2004). Overall the method is now applied to an ever-increasing range of microorganisms. It was shown, for instance, that a core set of metabolic reactions present in both prokaryotes and at least one lower eukaryote (E. coli, Helicobacter pylori and S. cerevisiae) carries non-zero fluxes under all assessed growth conditions; these reactions are thus good antimicrobial targets (Almaas et al., 2005). If these could be deleted then growth might at least become less efficient. Indeed, most current antibiotics used in the medical field do interfere with the bacterial core enzymes (Almaas et al., 2005). Food-grade antimicrobials were not included in the comparison but can and will obviously be included in future studies. Such types of study directly demonstrate the spin-off of microbial modelling work towards practical application. It is rare that antimicrobials are completely specific for a certain microorganism and do not affect its potential host. Consequently, one can rarely use them at
dosages that will completely inhibit the target molecules. It should thus be better to engage in differential antibiotic design, where one finds targets that, when inhibited, affect the pathogenic microorganism maximally and the host minimally. Metabolic control analysis deals with incomplete inhibition, and this approach has been developed further into differential network-based drug design (Bakker et al., 2002).
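To make the in-silico deletion idea concrete, the sketch below (a deliberate simplification, not the genome-scale procedure of the cited studies) knocks out each reaction of the small three-reaction example from Section 13.4.1 in turn and re-runs the flux-balance calculation. The reaction names, the choice of 'ethanol production' as the function to be protected and the glucose-uptake bound are our assumptions.

```python
# In-silico single-reaction deletion scan on the toy pyruvate network: which 'targets',
# when fully inhibited, abolish the ability to sustain a steady ethanol-producing flux?
import numpy as np
from scipy.optimize import linprog

N = np.array([[-1.0, -1.0, 2.0]])                         # pyruvate balance; reactions 1-3
names = ["ethanol formation", "pyruvate respiration", "glycolysis"]

def max_ethanol_flux(knockout=None):
    bounds = [(0.0, None), (0.0, None), (0.0, 10.0)]      # irreversible; glucose uptake <= 10
    if knockout is not None:
        bounds[knockout] = (0.0, 0.0)                     # 'delete' (fully inhibit) one step
    res = linprog(c=[-1.0, 0.0, 0.0], A_eq=N, b_eq=[0.0], bounds=bounds, method="highs")
    return res.x[0] if res.success else 0.0

for i, name in enumerate(names):
    print(f"knockout of {name}: maximal ethanol flux = {max_ethanol_flux(i):.1f}")
```

In this caricature, only the reactions that carry flux in every feasible steady state (here glycolysis) would qualify as the kind of 'core' target discussed by Almaas et al. (2005).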
13.5.3 From populations to single cells
Many microorganisms exhibit strain-to-strain variability in genetic makeup and related physiological characteristics. Cell–cell variability may be generated even in genetically homogeneous populations through various epigenetic systems. The microbial persistence of a small fraction of cells in a population was recently described and discussed by Balaban et al. (2004) and Levin (2004). Balaban and co-workers demonstrated non-inherited, highly antibiotic-resistant sub-fractions in a population of E. coli cells using a neat combination of micro-fluidics and optical microscopy. Miller et al. (2004) indicated that the SOS response, which blocks cell division in a period of DNA repair, might be responsible for the observed phenomenon. Single-cell analysis techniques that enable one to reveal particular genetic and physiological characteristics are crucial in correctly analyzing and possibly predicting the behaviour of individual cells in such microbial populations. Such analysis techniques have emerged in the form of fluorescent dyes allowing researchers to assess membrane-related parameters such as membrane potential and the exclusion of small and large molecular-weight compounds. Furthermore, cellular staining for enzyme activities, either cytosolic or organellar, allowed for an assessment of several other physiological properties. These included the free-energy transduction capacity, organelle pH and protein-degradation capacity. The development of fluorescent reporter proteins allowed for real-time measurements of protein localization and, in certain cases, also of protein function. Such measurements are then coupled to flow-cytometry analysis followed by cell sorting to assess the characteristics of individual cells in terms of biochemical and metabolic parameters (reviewed in Brehm-Stecher and Johnson, 2004). Even single cells may nowadays be analyzed in terms of their proteins using highly sensitive proteomics techniques (see e.g. Hellmich et al., 2005). Such 'bio-nanotechnology'-based tools allow for the analysis of individual cells down to the molecular level. For studying the interactions of molecules, which ultimately determine the signalling state and behaviour of cells, the use of fluorescent techniques is already virtually standard technology (Waharte et al., 2006). It is good to realize that in many areas of applied research aimed at antimicrobial strategies, the ability to assess population heterogeneity is crucial to the success of the studies. For instance, in medical microbiology it is the (multiple-)antibiotic-resistant Staphylococcus aureus that is of great concern to the physician. Similarly, in the manufacturing of food it is generally not the major fraction of the cells but a minor fraction of stress-resistant
cells that needs to be understood for a proper risk assessment of food spoilage and food safety (see e.g. Geeraerd et al., 2005).
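A hedged numerical sketch of why such a minor resistant fraction dominates the outcome, in the spirit of the non-log-linear survivor models handled by tools such as GInaFiT (Geeraerd et al., 2005); the population size, resistant fraction and inactivation rate constants below are invented for illustration only.

import math

def biphasic_survivors(n0, f_resistant, k_sensitive, k_resistant, t):
    """Survivors at time t for a population made up of a sensitive majority and
    a resistant minority, each inactivated with first-order kinetics."""
    sensitive = n0 * (1.0 - f_resistant) * math.exp(-k_sensitive * t)
    resistant = n0 * f_resistant * math.exp(-k_resistant * t)
    return sensitive + resistant

# Invented numbers: 10^6 cells, a 0.1% resistant sub-fraction that is
# inactivated ten times more slowly than the bulk of the population.
for t in (0, 5, 10, 20, 40):
    n = biphasic_survivors(1e6, 1e-3, k_sensitive=1.0, k_resistant=0.1, t=t)
    print(f"t = {t:2d}  log10(survivors) = {math.log10(n):5.2f}")
# From roughly 10 time units onwards the curve is dominated by the resistant
# minority, which is why population-average parameters can badly misjudge the
# margin that matters for spoilage and safety.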
13.6 Areas for systems food microbiology in microbial food spoilage research

In the analysis of microbial survival in food, a prime feature is the adaptation to weak organic acid preservatives. Figure 13.5 highlights the pathways that respond in yeast to the presence of these compounds. Various groups have studied sorbic acid in particular, both in an academic setting and from the perspective of the food industry (e.g. Cole and Keenan, 1987; Holyoak et al., 1996; Piper et al., 1998; de Nobel et al., 2001).
[Figure 13.5 (schematic): glucose feeds energy metabolism, yielding ATP and growth; the protonated acid HA (sorbic acid, CH3-CH=CH-CH=CH-COOH) diffuses into the cell and dissociates into H+ and A–, lowering the intracellular pH (pHi); the plasma membrane H+-ATPase and specific pumps such as Pdr12 expel H+ and A– in a possibly futile cycle; further elements indicate cell wall and membrane maintenance, the mating response, events related to problems with protein folding, transposon activity and possibly increased mutagenesis.]
Fig. 13.5 Cellular signalling routes affected by the presence of the weak-acid food preservative sorbic acid in the culture medium. The schematic picture gives the cumulative result of experiments by groups studying cellular physiology before 1990 (see e.g. Cole and Keenan, 1987), the work of Piper et al. in the 1990s focussing on the characteristics of the multi-drug-resistance pump Pdr12 (Piper et al., 1998), the first genome-wide expression and proteome analysis by our group and Unilever's Peter Coote (de Nobel et al., 2001) and the screening of a yeast mutant collection, again by the Piper group (Mollapour et al., 2004). See text for further details.
Whereas initially the role of the proton-pumping ATPase present in the plasma membrane was seen as the prime event (see e.g. Holyoak et al., 1996), it was soon realized that a whole system of processes determines the efficacy of sorbic acid. In the pre-micro-array age, Piper and co-workers had already shown that the plasma membrane pump Pdr12 also plays an important role, presumably in expelling sorbate from the cells (Piper et al., 1998). It was realized that the protonated acid would diffuse into the cells and dissociate into its anion and a proton, thereby reducing the intracellular pH. The cellular reaction to this would be to pump the protons and anions back out. Re-influx would then occur, and a balance between the inward-directed passive influx and the outward-directed pumping of the anion and the proton would arise, at the expense of a continued dissipation of free energy (ATP). A first genome-wide expression and protein analysis study by de Nobel et al. (2001) showed that changes in many more cellular processes are indeed associated with the altered physiology when cells are analyzed two hours after a sorbic acid challenge. Subsequent work by Papadimitriou et al. (2006) showed that merely inducing Pdr12 to its maximum level is clearly not enough to provide cells with acquired resistance to sorbic acid. Recently, Resende et al. (in preparation) showed that down-regulation of the signalling pathways governing the normal progress of the cell cycle occurs 45–120 minutes after stress application. The results of these micro-array studies corroborated and extended the earlier analysis of the knock-out mutant collection (Mollapour et al., 2004). All in all it became clear that many of the observed events have to be analyzed in a time-resolved manner in order to obtain a proper picture of the role that the various cellular regulatory systems play during the various phases of stress adaptation. Furthermore, in yeast it is highly relevant to study the pH homeostasis response of the various cellular compartments to the weak-acid challenge. We have now constructed in our group reporter strains with a pH-sensitive GFP variant, both expressed in the cytosol and targeted to the mitochondria. Detailed time-resolved analyses with these engineered strains are in progress (Orij, R., personal communication and manuscript in preparation; Orij et al., 2006). All such molecular and metabolic events should subsequently be analyzed by systems biology approaches such as those outlined above. Chapter 12 by Teixeira de Mattos et al. elsewhere in this book gives a specific example of what such systems biology mechanistic models, based on biochemical 'first principles', could look like (see also Mensonides et al., 2002, 2005 and Mensonides, F.I., PhD thesis, University of Amsterdam, expected defence winter 2007).
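The free-energy drain described above can be illustrated with a small, hedged calculation based on the Henderson–Hasselbalch relation, assuming that only the neutral species HA equilibrates across the membrane. The pKa of sorbic acid (about 4.76) is the only measured quantity used; the pH values are illustrative.

def total_acid_ratio(pka, ph_out, ph_in):
    """Ratio of total weak acid (HA + A-) inside versus outside the cell,
    assuming only the neutral species HA crosses the membrane so that
    [HA]_in equals [HA]_out (Henderson-Hasselbalch on both sides)."""
    return (1.0 + 10.0 ** (ph_in - pka)) / (1.0 + 10.0 ** (ph_out - pka))

PKA_SORBIC = 4.76          # pKa of sorbic acid
for ph_in in (7.0, 6.5, 6.0, 5.5):
    ratio = total_acid_ratio(PKA_SORBIC, ph_out=4.0, ph_in=ph_in)
    print(f"cytosolic pH {ph_in:.1f}: about {ratio:4.0f}-fold accumulation")
# The larger the pH gradient the cell tries to defend, the more total acid it
# accumulates and the more ATP the H+-ATPase and Pdr12 have to spend pumping
# protons and anions back out (the possibly futile cycle of Fig. 13.5).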
13.7 Models of microbial ecology and food consumption

Systems microbiology enables a comprehensive description, at the cellular level, of the responses to environmental stress conditions. Subsequent challenges, in particular for food microbiology, are then to arrive at models that describe at the population level the resulting distributions of normal versus extremely process-resistant cells. Ultimately, of course, the view is that not only the variation at the cellular level within a certain strain but also the dynamic interaction of the
microorganisms with their dynamic environment, with each other and with their potential host is of crucial importance in predicting microbial food stability. Understanding the microbial ecology of foods is a major challenge to the industry in the manufacturing of microbiologically stable products (Kilsby, 1999; Marthi et al., 2004; see also the strategy document of the European Technology Platform Food for Health at http://etp.ciaa.be). Models describing molecular mechanism-based stress resistance events at an ecological level are generally agent-based systems (Grimm et al., 2005). Pattern-oriented modelling (POM) approaches may be useful here (see e.g. Railsback, 2001). Such POM approaches should allow the researcher to work bottom-up to identify signalling networks in multicellular structures, the best known of which are biofilms. Metabolic control analysis has also been extended to include the control of a system of dynamically interacting microorganisms (Getz et al., 2003). In food microbiology the view is that, with the current molecular tools, including nucleic acid-based (sequencing and typing) tools and fluorescent reporter proteins, we can now address two long-standing issues already touched upon earlier in this chapter. First, the relevance of studies performed on laboratory strains can be verified in actual foodborne isolates; as indicated, these more often than not display different stress-resistance characteristics, generally being more resilient than laboratory strains (see e.g. the original data in Kort et al., 2005; discussed in Brul et al., 2006a). Second, the use of single-cell fluorescent probes allows the expression of specific regulatory pathways to be analyzed also in cells present in complex matrix environments, i.e. biofilms (see e.g. Veening et al., 2006). The latter is seen as highly topical by many in the food industry (discussed e.g. in Marthi et al., 2004). In addition, stress response analysis using micro-array studies performed on cells that experience true processing conditions and recovery in actual foods should allow for a mechanistic underpinning of predictive models and the achievement of what has been coined 'precision processing' (Brul et al., 2002a). Figure 13.6 presents these thoughts in a schematic fashion. Initial experiments in this area were performed in the context of the manufacturing of savoury products (mainly soups). The data indicate that genome biomarkers can be used to arrive at a detection tool for the presence of extremely heat-resistant bacterial endospores (discussed in Brul et al., 2006a,b; also Oomes et al., Int J Food Microbiol, under review). Molecular markers that could be quantitatively determined were predictive for the repair capacity and the linked outgrowth likelihood of such bacterial spores (Keijser, 2006). The term 'likelihood' again introduces the need to translate population data to the single-cell/spore level and to arrive at stochastic models. Instrumental in this is the use of single-cell growth analysis and flow cytometry to separate individual cells from the bulk. Chapter 5 by Smelt and Brul highlights these issues in the context of estimating spore lag-time. Such data are obviously of great value in shortening the time needed for food product stability analysis and in accelerating the innovation cycle in the food industry. Finally, in food manufacturing it should be realized at all times that optimal use of the mechanistic knowledge of microbial behaviour in the context of the total
[Figure 13.6 (schematic): ingredients pass through processing to distribution and a shelf-life setting; microbial typing with genome DNA-chips identifies the species present (species 1, 2 and 3) and informs the process setting, while mechanistic survival models using microarrays inform the shelf-life setting.]
Fig. 13.6 The proposed roles of genomotyping (species-specific gene-chips) and gene-expression microarray analysis (mechanistic models for survival) in the context of a simple food production operation. The composition of the microbial ecology will be shown by the genomotyping chip (see the text for further details). Based on this information, process settings may be derived. The response of microorganisms to a processing treatment can be derived from a micro-array based analysis of the response of a strain (or a given set of strains) in terms of its genome-wide expression pattern. Such a response can be captured per population initially and later on per cell upon identification of suitable single-cell fluorescent probes.
food chain can only be made if these data can be put in the context of organoleptic product parameters such as flavour, texture and nutritional value. These predictive food Quality Assurance models, based on mechanistic insight into microbial behaviour, will (have to) make use of physical parameters pertaining to the heat capacity and heat conductivity of compounds and to the heat and mass transfer occurring during processing, as well as of biochemical and nutritional parameters pertaining to the food. Such multidimensional models will in the future allow us to reach an optimal balance between efficient product processing, microbial product stability and product quality (reviewed in Bruin and Jongen, 2003).
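As a minimal, hedged sketch of how such models couple physical and microbiological parameters, the snippet below integrates a classical D/z-value survival model over an assumed time-temperature profile. The profile and the D-, z- and reference-temperature values are invented placeholders, not recommendations; a full 'precision processing' model would also resolve heat and mass transfer within the product and the stress-physiology corrections discussed in this chapter.

def decimal_reductions(profile, d_ref, t_ref, z):
    """Integrate log10 reductions over a time-temperature profile with the
    classical model D(T) = d_ref * 10**((t_ref - T)/z)."""
    total = 0.0
    for minutes, temp_c in profile:
        d_t = d_ref * 10.0 ** ((t_ref - temp_c) / z)
        total += minutes / d_t
    return total

# Invented profile for a mild heating step: (duration in minutes, temperature in C).
profile = [(2.0, 70.0), (5.0, 90.0), (3.0, 75.0)]
# Placeholder resistance parameters for a hypothetical target organism.
log_reductions = decimal_reductions(profile, d_ref=1.0, t_ref=90.0, z=10.0)
print(f"predicted log10 reductions: {log_reductions:.1f}")   # about 5.1 here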
13.8 Sources of further information and advice

Useful websites regarding (functional) genomics, and in particular the newly developing and highly relevant field of systems biology, are http://www.systemsbiology.org, www.siliconcell.net and http://www.systembiology.net. Applications of a 'systems' approach foreseen in the food industry, including primarily food microbiology, are discussed in the document describing a new European Technology Platform for food research, which can be downloaded at http://etp.ciaa.be. More current literature in the field of predictive (micro)biological modelling of cellular behaviour may be obtained from Westerhoff (2005) in the special issue on
Systems Biology of Current Opinion in Biotechnology, edited by him [see also Pal et al. (2006) for a recent account of the evolution of metabolic networks]. Finally, the books Systems Biology: Properties of Reconstructed Networks (Palsson, 2006), Systems Biology in Practice (Klipp et al., 2005) and Systems Biology: Perspectives and Definitions (Alberghina and Westerhoff, 2005) are recommended reference material for those interested in the basic principles and in further insight into what systems biology may accomplish. In addition, there is a series of Advanced Courses on the topic (www.febssysbio.net).
13.9 Acknowledgements

The authors would like to thank their coworkers at Unilever, the BioCentre Amsterdam and the University of Manchester for helping them to sharpen their thoughts. In particular we would like to thank Dr Balkumar Marthi, Science Area Leader Advanced Food Microbiology at the Unilever Food and Health Research Institute, as well as Sergio Rossell and Dr Gertien Smits of the BioCentre Amsterdam.
13.10 References Alberghina L and Westerhoff HV (eds) (2005) Systems Biology: Perspectives and Definitions, Berlin, Springer. Almaas E, Kovacs B, Vicsek T, Oltvai ZN and Barabasi AL (2004) Global organization of metabolic fluxes in the bacterium Escherichia coli, Nature, 427, 839–843. Almaas E, Oltvai ZN and Barabasi AL (2005) The activity reaction core and plasticity of metabolic networks, PLoS Comput Biol, 1, 557–563 Bakker BM, Assmus HE, Bruggeman F, Haanstra JR, Klipp E and Westerhoff HV (2002) Network-based selectivity of antiparasitic inhibitors, Mol Biol Rep, 29, 1–5. Balaban NQ, Merrin J, Chait R, Kowalik L and Leibler S (2004) Bacterial persistence as a phenotypic switch, Science, 5690, 1622–1625. Barrett CL, Herring CD, Reed JL and Palsson BO (2005) The global transcriptional regulatory network for metabolism in Escherichia coli exhibits few dominant functional states, Proc Natl Acad Sci USA, 102, 19103–19108. Bernhardt J, Buttner K, Scharf C and Hecker M (1999) Dual imaging of two-dimensional electropherograms in Bacillus subtilis, Electrophoresis, 20, 2225–2240. Birks JB (ed) (1962) Rutherford at Manchester, London, Heywood and Co. Boorsma A, Foat BC, Vis D, Klis FM and Bussemaker HJ (2005) T-profiler: scoring the activity of predefined groups of genes using gene expression data, Nucleic Acids Res, 33, W592–595. Brehm-Stecher BF and Johnson EA (2004) Single-cell microbiology: tools, technologies, and applications, Microbiol Mol Biol Rev, 68, 538–559. Bruin S and Jongen T (2003) Food process engineering: The last 25 years and challenges ahead, Comprehen Revs Food Sci Food Saf, 2, 42–81. Brul S, Montijn R, Schuren F, Oomes S, Klis FM, Coote P and Hellingwerf KJ (2002a) Detailed process design based on genomics of survivors of food preservation processes, Trends Food Sci Technol, 13, 325–333. Brul S, Coote P, Oomes S, Mensonides F, Hellingwerf K and Klis FM (2002b) Physiological
action of preservative agents: prospective of use of modern microbiological techniques in assessing microbial behaviour in food preservation, Int J Food Microbiol, 79, 55–64. Brul S, Schuren F, Montijn R, Keijser BJF, van der Spek H and Oomes SJCM (2006a) The impact of functional genomics on microbiological food quality and safety, Int J Food Micr (in press). Brul S, H van der Spek, Keijser BJF, Schuren F, Oomes SJCM and Montijn R (2006b) Functional genomics for optimal microbiological stability of processed food products, Proceedings IFT Spore Workshop 2005, Chicago, IL, IFT Press (in press). Bussemaker HJ, Li H and Siggia ED (2001) Regulatory element detection using correlation with expression, Nat Genet, 27, 167–171. Cole MB and Keenan MHJ (1987) A quantitative method for predicting shelf life of soft drinks using a model system, J Industr Microbiol Biotechnol, 2, 59–62. Cooper RA and Carucci DJ (2004) Proteomic approaches to studying drug targets and resistance in Plasmodium, Curr Drug Targets Infect Disord, 4, 41–51. Edwards JS and Palsson BO (2000) The Escherichia coli MG1655 in silico metabolic genotype: its definition, characteristics, and capabilities, Proc Natl Acad Sci, 97, 5528– 5533. Eisen MB, Spellmann PT, Brown PO and Botstein D (1998) Cluster analysis and display of genome-wide expression patterns, Proc Natl Acad Sci USA, 95, 14863–14868. Esty JR and Meyer KF (1922) The heat resistance of the spores of B. botulinus and allied anaerobes. XI, J Infect Dis, 31, 650–663. Fischer E and Sauer U (2005) Large-scale in vivo flux analysis shows rigidity and suboptimal performance of Bacillus subtilis metabolism, Nat Genet, 37, 636–640. Geeraerd AH, Valdramidis VP and Van Impe JF (2005) GInaFiT, a freeware tool to assess non-log-linear microbial survivor curves, Int J Food Micr, 102, 95–105. Getz WM, Westerhoff HV, Hofmeyr J-HS and Snoep JL (2003) Control analysis of trophic chains, Ecol Modell, 168, 153–171. Grimm V, Revilla E, Berger U, Jeltsch F, Mooij WM, Railsback SF, Thulke HH, Weiner J, Wiegand T and De Angelis DL (2005) Pattern oriented modelling of agent-based complex systems: lessons from ecology, Science, 5750, 987–991. Hellingwerf KJ, Postma PW, Tomasson J and Westerhoff HV (1995) Signal transduction in bacteria: phosphoneural networks in Escherichia coli?, FEMS Microbiol Rev, 16, 309– 321. Hellmich W, Pelargus C, Leffhalm K, Ros A and Anselmetti D (2005) Single cell manipulation, analytics, and label-free protein detection in microfluidic devices for systems nanobiology, Electrophoresis, 26, 3689–3696. Hermann T (2004) Using functional genomics to improve productivity in the manufacture of industrial biochemicals, Curr Opin Biotechnol, 15, 444–448. Holyoak CD, Stratford M, McMullin Z, Cole MB, Crimmins K, Brown AJ and Coote PJ (1996) Activity of the plasma membrane H(+)-ATPase and optimal glycolytic flux are required for rapid adaptation and growth of Saccharomyces cerevisiae in the presence of the weak acid preservative sorbic acid, Appl Environ Microbiol, 62, 3158–1364. Höper D, Bernhardt J and Hecker M (2006) Salt stress adaptation of Bacillus subtilis: a physiological proteomics approach, Proteomics, 6, 1550–1562. Hutchison III CA, Peterson SN, Gill SR, Cline RT, White O, Fraser CM, Smith HO and Venter JC (1999) Global transposon mutagenesis and a minimal Mycoplasma genome, Science, 286, 2165–2169. Ibarra RU, Edwards JS and Palsson BO (2002) Evolution towards predicted optimal growth in Escherichia coli K-12, Nature, 420, 186–189. 
Ikeda M and Nakagawa S (2003) The Corynebacterium glutamicum genome: features and impacts on biotechnological processes, Appl Microbiol Biotechnol, 30, 129–132. Kalinowski J, Bathe B, Bartels D, Bischoff N, Bott M, Burkovski A, Dusch N, Eggeling L, Eikmans BJ, Gaigalat L, Goesman A, Hartmann M, Huthmacher K, Kramer R, Linke B, McHardy AC, Meyer F, Mockel B, Pfefferle W, Puhler A, Rey DA, Ruckert C, Rupp O,
Sahm H, Wendisch VF, Wiegrabe I and Tausch A (2003) The complete Corynebacterium glutamicum ATCC 13032 genome sequence and its impact on the production of Laspartate-derived amino acids and vitamins, J. Biotechnol, 104, 5–25. Keijser BJF (2006) Control of preservation by biomarkers, European patent application P73012PC00. Kell DB (2006) Theodor Bucher Lecture. Metabolomics, modelling and machine learning in systems biology – towards an understanding of the languages of cells, FEBS J, 273, 873–894. Kesseler IM, Collado-Vides J, Gama-Castro S, Ingraham J, Paley S, Paulsen IT, Peralta-Gil M and Karp PD (2005) EcoCyc: a comprehensive database resource for Escherichia coli< Nucleic Acids Res, 33, D334–337. Kholodenko BN and Westerhoff HV (eds) (2004a) Metabolic Engineering in the Post Genomic Era, Wymondham, Horizon Bioscience. Kholodenko BN and Westerhoff HV (2004b) Metabolic engineering: What it was, and What it is; and What it should become, in Kholodenko BN and Westerhoff HV (eds), Metabolic Engineering in the Post Genomic Era, Wymondham, Horizon Bioscience, 1–35 and 436– 442. Kilsby DC (1999) Food microbiology: the challenges for the future, Int J Food Microbiol, 50, 59–63. Klipp E, Herwig R, Kowald A, Wierling Ch and Lehrach H (2005) Systems Biology in Practice: Concepts, Implementation and Application, Weinheim, Wiley-VCH. Koebmann BJ, Westerhoff HV, Snoep JL, Nilsson D and Jensen PR (2002) The glycolytic flux in E. coli is controlled by the demand for ATP, J Bacteriol, 184, 3909–3916. Koffas M and Stephanopoulos G (2005) Strain improvement by metabolic engineering: lysine production as a case study for systems biology, Curr Opin Biotechnol, 16, 361– 366. Kort R, O’Brien AC, van Stokkum IH, Oomes SJ, Crielaard W, Hellingwerf KJ and Brul S (2005) Assessment of heat resistance of bacterial spores from food product isolates by fluorescence monitoring by dipicolinic acid release, Appl Env Microbiol, 71, 3556– 3564. Levin BR (2004) Microbiology. Noninherited resistance to antibiotics, Science, 5690, 1578– 1579. Lin B (2006) Composition and functioning of iron-reducing communities in two contrasting environments, i.e. a landfill leachate-polluted aquifer and estuarine sediments, PhD thesis, Free University Amsterdam (http://dare.ubvu.vu.nl/bitstream/1871/9250/1/ Bin_thesis_VUlibrary.pdf). Margulies M, Egholm M, Altman WE, Attiya S, Bader JS, Bemben LA, Berka J, Braverman MS, Chen YJ, Chen Z, Dewell SB, Du L, Fierro JM, Gomes XV, Godwin BC, He W, Helgesen S, Ho CH, Irzyk GP, Jando SC, Alenquer ML, Jarvie TP, Jirage KB, Kim JB, Knight JR, Lanza JR, Leamon JH, Lefkowitz SM, Lei M, Li J, Lohman KL, Lu H, Makhijani VB, McDade KE, McKenna MP, Myers E, Nickerson E, Nobile JR, Plant R, Puc BP, Ronan MT, Roth GT, Sarkis GJ, Simons JF, Simpson JW, Srinivasan M, Tartaro KR, Tomasz A, Vogt KA, Volkmer GA, Wang SH, Wang Y, Weiner MP, Yu P, Begley RF, Rothberg JM (2005) Genome sequencing in microfabricated high-density picolitre reactors, Nature, 437, 376–380. Marthi B, Vaughan EV and Brul S (2004) Functional genomics and food safety, New Food, 7, 14–18. McMeekin TA and Ross T (2002) Predictive Microbiology: providing a knowledge-based framework for change management, Int J Food Microbiol, 78, 133–153. Mensonides FI, Schuurmans JM, Teixeira de Mattos MJ, Hellingwerf KJ and Brul S (2002) The metabolic response of Saccharomyces cerevisiae to continuous heat stress, Mol Biol Rep, 29, 103–106. 
Mensonides FI, Brul S, Klis FM, Hellingwerf KJ and Teixeira de Mattos MJ (2005) Activation of the protein kinase C1 pathway upon continuous heat stress in Saccharomy-
ces cerevisiae is triggered by an intracellular increase in osmolarity due to trehalose accumulation, Appl Env Microbiol, 71, 4531–4538. Miller C, Thomsen LE, Gaggero C, Mosseri R, Ingmer H and Cohen SN (2004) SOS response induction by beta-lactams and bacterial defense against antibiotic lethality, Science, 5690, 1629–1631. Mitchell P (1968) Chemiosmotic Coupling and Energy Transduction, Bodmin, Glynn Research Ltd. Mollapour M, Fong D, Balakrishnam K, Harris N, Thompson S, Schuller C, Kuchler K and Piper PW (2004) Screening the yeast deletant mutant collection for hypersensitivity and hyperresistance to sorbate, a weak organic acid food preservative, Yeast, 21, 927–946. Moreira dos Santos M, Akesson M and Nielsen J (2004) Metabolic flux analysis in the post genomic era, in Kholodenko BN and Westerhoff HV (eds), Metabolic Engineering in the Post Genomic Era, Wymondham, Horizon Bioscience, 89–106. Mori H (2004) From the sequence to cell modeling: comprehensive functional genomics in Escherichia coli, J Biochem Mol Biol, 37, 83–92. Nobel-de H, Lawrie L, Brul S, Klis F, Davis M, Alloush H and Coote P (2001) Parallel and comparative analysis of the proteome and transcriptome of sorbic acid-stressed Saccharomyces cerevisiae, Yeast, 18, 1413–1428. Orij R, Brul S and Smits G (2006) Online intracellular pH measurements upon sorbic acid stress in yeast, in Kuokka A and Penttilä M (eds), Proceedings of the International Specialised Symposium on Yeasts ISSY 25; Systems Biology of Yeasts – from Models to Applications, 18–21 June, Hanasaari, Espoo, Finland, 151. Pal C, Papp B, Lercher MJ, Csermely P, Oliver SG and Hurst LD (2006) Chance and necessity in the evolution of minimal metabolic networks, Nature, 440, 667–670. Palsson BO (ed) (2006). Systems Biology, Properties of Reconstructed Networks, Cambridge, Cambridge University Press. Papadimitriou MNB, Resende C, Kuchler K and Brul S (2006) High Pdr12 levels in spoilage yeast (Saccharomyces cerevisiae) correlate directly with sorbic acid levels in the culture medium but are not sufficient to provide cells with acquired resistance to the food preservative, Int J Food Microbiol (in press). Papin JA and Palsson BO (2004) Hierarchical thinking in network biology: The unbiased modularization of biochemical network-based pathway analysis methods, Trends Biotechnol, 22, 400–405. Papin JA, Stelling J, Price ND, Klamt S, Schuster S, Palsson BO (2004) Comparison of network-based pathway analysis methods, Trends Biotechnol, 22(8), 400–405. Picon A, Teixeira de Mattos MJ and Postma PW (2005) Reducing the glucose uptake rate in Escherichia coli affects growth rate but not protein production, Biotechnol Bioeng, 90, 191–200. Piper P, Mahe Y, Thompson S, Pandjaitan R, Holyoak C, Egner R, Muhlbauer M, Coote P and Kuchler K (1998) The Pdr12 ABC transporter is required for the development of weak organic acid resistance in yeast, EMBO J, 17, 4257–4265. Pirt SJ (1982) Maintenance energy: a general model for energy-limited and energy-sufficient growth, Arch Microbiol, 133, 300–302. Popper KR (1963) Conjectures and Refutations, London, Routledge and Kegan Paul. Railsback SF (2001) Getting ‘results’: the pattern-oriented approach to analyzing natural systems with individual-based models, Nat Res Modell, 14, 465–474. Reed JL and Palsson BO (2004) Genome-scale in silico models of E. coli have multiple equivalent phenotypic states: assessment of correlated reaction substets that comprise network states, Genome Res, 14, 1797–1805. 
Rossell S, van der Weijden CC, Lindenbergh A, van Tuijl A, Francke C, Bakker BM and Westerhoff HV (2006) Unraveling the complexity of flux regulation: a new method demonstrated for nutrient starvation in Saccharomyces cerevisiae, Proc Natl Acad Sci USA, 103, 2166–2171. Roven C and Bussemaker HJ (2003) REDUCE: an online tool for inferring cis-regulatory
elements and transcriptional module ectivities from microarray data. Nucleic Acids Res, 31, 3487–3490. Schilling CH, Edwards JS, Letscher D and Palsson BO (2000a) Combining pathway analysis with flux balance analysis for the comprehensive study of metabolic systems, Biotechnol Bioeng, 71, 286–306. Schilling CH, Letscher D and Palsson BO (2000b) Theory for the systemic definition of metabolic pathways and their use in interpreting metabolic function from a pathwayoriented perspective, J Theor Biol, 203, 229–248. Schuster S and Hilgetag C (1994) On elementary flux modes in biochemical reaction systems at steady state, J Biol Syst, 2, 165–182. Schuster S, Fell DA and Dandekar T (2000) A general definition of metabolic pathways useful for systematic organization and analysis of complex metabolic networks, Nat Biotechnol, 18, 326–332. Senn H, Lendenmann U, Snozzi M, Hamer G and Egli T (1994) The growth of Escherichia coli in glucose-limited chemostat cultures: a re-examination of the kinetics, Biochim Biophys Acta, 1201, 424–436. Snoep JL, van der Weijden CC, Andersen HW, Westerhoff HV and Jensen PR (2002) DNA supercoiling in Escherichia coli is under tight and subtle homeostatic control, involving gene-expression and metabolic regulation of both topoisomerase I and DNA gyrase, Eur J Biochem, 269, 1662–1669. Snoep JL, Hoefnagel MHN and Westerhoff HV (2004) Metabolic flux analysis in the post genomic era, in Kholodenko BN and Westerhoff HV (eds), Metabolic Engineering in the Post Genomic Era, Wymondham, Horizon Bioscience, pp. 357–375. Snoep JL, Bruggeman F, Olivier BG and Westerhoff HV (2006) Towards building the silicon cell: a modular approach, Biosystems, 83, 207–216. Stouthamer AH (1979) The search for correlation between theoretical and experimental growth yields, Int Rev Biochem, 21, 1–47. Ter Beek A, Keijser BJF, Boorsma A and Brul S (2006) Mild treatment of Bacillus subtilis with the food preservative sorbic acid induces a stringent-type response, but not the general stress response or sporulation, in Parente E, Cocolin L, Ercolini D and Vannini L (eds), Proceedings of the 20th International ICFMH symposium, Food Micro 2006; food safety and food biotechnology: diversity and global impact, Bologna, Italy 29 Aug–2 Sept 92. Teusink B and Smid EJ (2006) Modelling strategies for the industrial exploitation of lactic acid bacteria, Nat Rev Microbiol, 4, 46–56. Veening JW, Kuipers OP, Brul S, Hellingwerf KJ and Kort R (2006) Effects of phosphorelay perturbations on architecture, sporulation and spore resistance in biofilms of Bacillus subtilis, J Bacteriol, 188, 3099–3109. Volker V and Hecker M (2005) From genomics via proteomics to cellular physiology of the Gram-positive model organism Bacillus subtilis, Cell Microbiol, 7, 1077–1085. Vos de W (2001) Advances in genomics for microbial food fermentations and safety, Curr Opin Biotechnol, 12, 493–498. Waharte F, Spriet C and Heliot L (2006) Setup and characterization of a multiphoton FLIM instrument for protein-protein interaction measurements in living cells, Cytometry, 69, 299–306. Ward DE, van der Weijden CC, van der Merwe MJ, Westerhoff HV, Claiborne A and Snoep J (2000) Branched-chain α-keto acid catabolism via the gene products of the bdk operon in Enterococcus faecalis: a new, secreted metabolite serving as a temporary redox sink, J Bacteriol, 182, 3239–3246. Watson JD and Crick FH (1953) Molecular structure of nucleic acids; a structure for deoxyribose nucleic acid, Nature, 171, 737–738. Westerhoff HV (2005) Systems biology . . . 
in action, Curr Opin Biotechnol, 16, 326–328. Westerhoff HV and Palsson BO (2004) The evolution of molecular biology, Nat Biotechnol, 22, 1249–1252.
Westerhoff HV and Van Dam K (eds) (1987) Thermodynamics and Control of Biological Free-energy Transduction, Amsterdam, Elsevier. Westerhoff HV, Lolkema JS, Otto R and Hellingwerf KJ (1982) Thermodynamics of growth. Non-equilibrium thermodynamics of bacterial growth. The phenomenological and the mosaic approach, Biochim Biophys Acta, 683, 181–220. Westerhoff HV, Hellingwerf KJ and Van Dam K (1983) Thermodynamic efficiency of microbial growth is low, but optimal for maximal growth rate, Proc Natl Acad Sci USA, 80, 305–309. Wu L, Wang W, van Winden WA, van Gulik WM and Heijnen JJ (2004) A new framework for the estimation of control parameters in metabolic pathways using lin-log kinetics, Eur J Biochem, 271, 3348–3359. Zakrzewska A, Boorsma A, Brul S, Hellingwerf KJ and Klis FM (2005) Transcriptional response of Saccharomyces cerevisiae to the plasma membrane-perturbing compound chitosan, Eukaryot Cell, 4, 703–715.
13.11 Addendum 1

In the example we discussed in the main text of the chapter the analysis has not yet been completed. Let us continue and also take into account ATP and ADP production and consumption. We then find:
\frac{dX}{dt} \equiv \frac{d}{dt}\begin{pmatrix} \mathrm{pyruvate} \\ \mathrm{ATP} \\ \mathrm{ADP} \end{pmatrix} = N \cdot v \equiv \begin{pmatrix} -1 & -1 & 2 \\ 0 & 8 & 2 \\ 0 & -8 & -2 \end{pmatrix} \cdot \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}
Trying to find a steady-state solution one notices that there is none with positive rates, even though there are three equations with three unknowns (there are solutions with negative rates, such as the one in which v2 runs backwards, making pyruvate from carbon dioxide). The problem is that there are no reactions consuming ATP that have positive rates: inserting the pathways that lead to a steady state for pyruvate leads to a net rate of ATP production. Clearly, yet another reaction must be present in the genome. We now suppose that this is the formation of a product in this biological process. The product could be a new cell:

pyruvate + n ATP → n ADP + bioproduct     (reaction 4, at rate v4)

Matrix N then becomes:

N = \begin{pmatrix} -1 & -1 & 2 & -1 \\ 0 & 8 & 2 & -n \\ 0 & -8 & -2 & n \\ 0 & 0 & 0 & 1 \end{pmatrix}

Here the first row describes the behaviour of pyruvate in the system, the second
ATP, the third ADP and the fourth that of the bioproduct. The time-dependence of the bioproduct cannot become zero. However, this is acceptable, as the bioproduct can be seen as an external product of the system. We can therefore drop the fourth row from the matrix N. Equating dX/dt to zero then leaves three equations with four unknowns. However, two of these equations are dependent on one another, i.e. the third row of N is not informative either. This leaves us with two equations and four unknowns (v1, v2, v3, v4). Finally, we shall also consider the fact that the cells may require ATP consumption for maintenance, which we describe as a fifth reaction that only hydrolyzes ATP. As a further generalization we shall write p for the number of molecules of ATP produced per pyruvate entering respiration rather than the route to ethanol. Matrix N then becomes:
N = \begin{pmatrix} -1 & -1 & 2 & -1 & 0 \\ 0 & p & 2 & -n & -1 \end{pmatrix}

with the columns corresponding to v1 (ethanol formation), v2 (respiration), v3 (glycolysis), v4 (biosynthesis) and v5 (maintenance). The steady-state condition gives two equations for five unknowns. This leads to three extreme pathways, which we give here as the three columns of a kernel matrix of N:
\ker(N) = \begin{pmatrix} 2n-2 & 0 & 2 \\ 0 & 2n-2 & 0 \\ n & n+p & 1 \\ 2 & 2p+2 & 0 \\ 0 & 0 & 2 \end{pmatrix}
One may check that multiplication of the matrix N with the matrix ker(N) yields the zero matrix. One may also check that the columns of ker(N) are indeed extreme pathways, by taking any column and verifying that subtracting even a small multiple of any other column produces a negative rate. The first column corresponds to glucose-fermentation-driven growth, the second to glucose-respiration-driven growth and the third to fermentation just driving maintenance. Any possible steady state of the system, and hence any integral metabolic function of the pathway, now corresponds to a linear combination of the three extreme pathways, i.e.
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_5 \end{pmatrix} \equiv \begin{pmatrix} v_{ethanol} \\ v_{respiration} \\ v_{glycolysis} \\ v_{biosynthesis} \\ v_{maintenance} \end{pmatrix} = \alpha_1 \cdot \begin{pmatrix} 2n-2 \\ 0 \\ n \\ 2 \\ 0 \end{pmatrix} + \alpha_2 \cdot \begin{pmatrix} 0 \\ 2n-2 \\ n+p \\ 2p+2 \\ 0 \end{pmatrix} + \alpha_3 \cdot \begin{pmatrix} 2 \\ 0 \\ 1 \\ 0 \\ 2 \end{pmatrix}
where α can take arbitrary values. However, not all possible steady-state solutions can be obtained with positive values of α. An example is the vector:
p_4 = \begin{pmatrix} 0 \\ 2 \\ 1 \\ 0 \\ 2p+2 \end{pmatrix}

which is obtained for:
\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{pmatrix} = \begin{pmatrix} -\frac{p+1}{n-1} \\ \frac{1}{n-1} \\ p+1 \end{pmatrix}

i.e. for a negative value of α1 (when n > 1). Thus p4 is also a steady state of the system and corresponds to another extreme pathway, namely that of maintenance metabolism driven by respiration of glucose. This three-dimensional system is therefore described in terms of a positive linear combination of four extreme pathways. For all the advantages that the identification of extreme pathways may deliver, this is a disadvantage: the number of elemental descriptors exceeds the dimensionality of the system.
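These checks are easily automated. The following short symbolic sketch, using the sympy library (an illustration added here, not part of the original analysis, with the reaction ordering v1 to v5 defined above), verifies that all four extreme pathways are steady states of N and that expressing p4 in p1, p2 and p3 does require a negative weight on the fermentation-driven growth pathway.

import sympy as sp

n, p = sp.symbols('n p', positive=True)

# Stoichiometry matrix for (pyruvate, ATP) x (v1..v5):
# v1 ethanol formation, v2 respiration, v3 glycolysis, v4 biosynthesis, v5 maintenance.
N = sp.Matrix([[-1, -1, 2, -1,  0],
               [ 0,  p, 2, -n, -1]])

p1 = sp.Matrix([2*n - 2, 0,       n,     2,       0])        # fermentation-driven growth
p2 = sp.Matrix([0,       2*n - 2, n + p, 2*p + 2, 0])        # respiration-driven growth
p3 = sp.Matrix([2,       0,       1,     0,       2])        # fermentation-driven maintenance
p4 = sp.Matrix([0,       2,       1,     0,       2*p + 2])  # respiration-driven maintenance

# Each extreme pathway is a steady state: N * p_i = 0.
for pi in (p1, p2, p3, p4):
    assert sp.simplify(N * pi) == sp.zeros(2, 1)

# Writing p4 as a combination of p1, p2 and p3 forces a negative alpha1:
a1, a2, a3 = sp.symbols('alpha1 alpha2 alpha3')
solution = sp.solve(list(a1*p1 + a2*p2 + a3*p3 - p4), [a1, a2, a3], dict=True)[0]
print(solution)   # alpha1 = -(p+1)/(n-1), alpha2 = 1/(n-1), alpha3 = p+1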
The pathway matrix P is now defined as the matrix with the extreme pathways as its columns:

P = (p_1 \;\; p_2 \;\; p_3 \;\; p_4) = \begin{pmatrix} 2n-2 & 0 & 2 & 0 \\ 0 & 2n-2 & 0 & 2 \\ n & n+p & 1 & 1 \\ 2 & 2p+2 & 0 & 0 \\ 0 & 0 & 2 & 2p+2 \end{pmatrix}
Here a non-zero entry in row i of P indicates that reaction i, i.e. the process encoded by gene i, is used in the corresponding extreme pathway. As the extreme pathways thus encompass the boundaries of all functional states of a given network, the matrix P can be used to compute the biochemical potential defined by the network structure:
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_5 \end{pmatrix} \equiv \begin{pmatrix} v_{ethanol} \\ v_{respiration} \\ v_{glycolysis} \\ v_{biosynthesis} \\ v_{maintenance} \end{pmatrix} = \alpha_1 \cdot \begin{pmatrix} 2n-2 \\ 0 \\ n \\ 2 \\ 0 \end{pmatrix} + \alpha_2 \cdot \begin{pmatrix} 0 \\ 2n-2 \\ n+p \\ 2p+2 \\ 0 \end{pmatrix} + \alpha_3 \cdot \begin{pmatrix} 2 \\ 0 \\ 1 \\ 0 \\ 2 \end{pmatrix} + \alpha_4 \cdot \begin{pmatrix} 0 \\ 2 \\ 1 \\ 0 \\ 2p+2 \end{pmatrix}
with non-negative values of α. For instance, we may wonder whether our product of interest can be made anaerobically. This implies that v2 cannot be involved, and therefore neither the second nor the fourth extreme pathway can contribute. If the genes of the first extreme pathway are expressed, the organism has the potential for α1 to be non-zero. Since there is a 2 in the fourth row of the first extreme pathway, this means that the product could be formed by the system. Strictly speaking, this is a potential for formation only, as the genes may be expressed as mRNA but not be metabolically active because of adverse regulation (see below). We may also wonder whether a deletion of a glycolytic gene would incapacitate our cells in terms of making the product. It would: since v3 occurs in all extreme pathways, all flux would stop. One may also ask what the yield on glucose of the anaerobic production process would be. The answer is:
Y = \frac{v_{biosynthesis}}{v_{glycolysis}} = \frac{2\alpha_1 + 2(p+1)\alpha_2}{n\alpha_1 + (n+p)\alpha_2 + \alpha_3 + \alpha_4}
which can be rewritten as the Pirt equation (see also above):

\frac{Y_{max}^{theor}}{Y} = 1 + \frac{m}{v_{glycolysis}}
with:
Y_{max}^{theor} = \frac{2\alpha_1 + 2(p+1)\alpha_2}{n\alpha_1 + (n+p)\alpha_2}

The maximum yield thus ranges between 2/n (fermentation only) and 2(p+1)/(n+p) (respiration only), which for typical numbers could be between 1 (C3 per C6) and 1.8.
By itself this procedure is not new. Stouthamer (1979) and others have estimated the maximum theoretical growth yield for anaerobic growth on the basis of the known composition of biomass and the known ATP stoichiometries for catabolism and anabolism, and inserted these into an analogue of the above equation. This led to the conclusion that the balance was off by almost a factor of two: almost 50% of the ATP produced in catabolism is used for processes other than growth (Westerhoff et al., 1983). Up to this point the extreme pathway fluxes were bounded only by the requirement that they cannot become negative. Biochemical reactions are normally also bounded in terms of a maximum rate, their Vmax. In a mathematical sense, extreme pathways and the fluxes through them can be represented as (Papin and Palsson, 2004):

v_{ss} = \sum_i \alpha_i p_i \qquad \text{with} \qquad 0 \le \alpha_i \le \alpha_{i,max}

Here vss indicates the biochemically meaningful steady-state flux solutions and the pi values are the extreme pathways. The αi values give the weight of the flux through the ith extreme pathway in the overall steady-state flux distribution. It is clear that the basis vectors must be non-negative. This boundary condition leads to a convex analysis, based on equalities, i.e. that the multiplication of the stoichiometry matrix with the flux vector must be zero (Nv = 0), and inequalities, such as 0 ≤ αi ≤ αi,max. That αi,max represents the maximum strength of an extreme pathway vis-à-vis the Vmax values within it is somewhat problematic, however, as more than one extreme pathway may run through a single biochemical reaction, so that their maximum fluxes may depend on each other's activity. This convex analysis alone does not lead to a unique prediction of the flux pattern; it gives the biochemical potential, but does not reveal which part of that potential the organism will actually use. In addition, it gives many possible flux patterns, whereas in reality there is only one. In this case, whether the organism will choose fermentation or respiration depends on the kinetic properties of the network, or on the expression levels of the various metabolic routes.
13.12 Addendum 2

In flux-balance analysis the stoichiometry matrix is reordered so as to give Nexch:
N_{exch} \cdot \begin{pmatrix} v \\ b \end{pmatrix} = 0 \qquad \text{with} \qquad 0 \le v_i \le v_{i,max} \quad \text{and} \quad b_{i,min} \le b_i \le b_{i,max}

Here the rates v discussed above have been reordered so that the processes that communicate with the cell's environment occur last and are called b rather than v.
These rates may have a lower bound different from zero and represent the external process conditions that the engineer wants to optimize. The relative lack of uniquely identified sequences for specific transport systems is a limitation here, but these systems are often replaced by the first or last step of the biochemical pathway. A key feature is to evaluate the possible solutions using, for example, a general linear objective function Z:

Z = w \cdot b = \sum_i w_i b_i

Here w is a vector of weights wi on the exchange fluxes bi (and occasionally also on internal fluxes). Z is then minimized or maximized as appropriate. Solutions to these equations give the best use of the defined network to meet the stated objective function at steady state. Such a constraint-based optimization procedure is known as linear programming. The clearest example is when one wishes to optimize a single flux, in our example above the bioproduction flux. To illustrate this we return to the example discussed earlier in this chapter. For simplicity we drop the fermentative pathway and the maintenance reaction, and retain the respiration of pyruvate coupled to biosynthesis. However, we now acknowledge that there are two respiratory pathways for pyruvate with two different yields of ATP. We therefore add a pathway p2′ with an ATP yield that is one lower:
P = (p_2' \;\; p_2) = \begin{pmatrix} 0 & 2n-2 \\ 2n-2 & 0 \\ n+p-1 & n+p \\ 2p & 2p+2 \end{pmatrix}

where the rows correspond to the respiration reaction yielding p ATP per pyruvate, the new respiration reaction yielding p − 1 ATP, glycolysis and biosynthesis, respectively (the fermentation and maintenance rows have been dropped). The stoichiometry matrix now also has an extra column, corresponding to this extra process, because the added process should also have a zero pyruvate and ATP balance:
N' = \begin{pmatrix} -1 & -1 & 2 & -1 \\ p & p-1 & 2 & -n \end{pmatrix}
The bioproduction flux is given by:

v_{bioproduction} = (0 \;\; 0 \;\; 0 \;\; 1) \cdot (\alpha_2' \cdot p_2' + \alpha_2 \cdot p_2) = 2\alpha_2' p + 2\alpha_2 (p+1)

Maximization of the bioproduction flux would thus lead to the continued increase of both processes. This would also lead to a continued increase in substrate consumption, however. Therefore it makes more sense to perform this optimization under the constraint that the substrate consumption flux is constant. This constraint translates to:
d\alpha_2' = -\,d\alpha_2 \cdot \frac{n+p}{n+p-1}
Thus maximization leads to the exclusive use of respiration pathway 2 whenever n exceeds 1, i.e. more than one ATP is needed per C3. This can be annotated as:
\frac{dv_{bioproduction}}{2 \cdot d\alpha_2} = n - 1 > 0
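A hedged numerical illustration of this linear-programming argument, using scipy.optimize.linprog with the N′ defined above; the values n = 2, p = 8 and the fixed glycolytic flux are invented for illustration. Maximizing the biosynthesis flux at constant substrate consumption indeed routes all respiratory flux through the higher-yield pathway.

import numpy as np
from scipy.optimize import linprog

n, p = 2, 8                                   # illustrative ATP stoichiometries
# Columns: respiration (p ATP), respiration' (p-1 ATP), glycolysis, biosynthesis.
N_prime = np.array([[-1.0, -1.0, 2.0, -1.0],    # pyruvate balance
                    [   p, p - 1, 2.0,   -n]])  # ATP balance

glycolytic_flux = 10.0                        # invented, fixed substrate-consumption rate
A_eq = np.vstack([N_prime, [0.0, 0.0, 1.0, 0.0]])   # N'v = 0 and v_glycolysis fixed
b_eq = np.array([0.0, 0.0, glycolytic_flux])

c = np.array([0.0, 0.0, 0.0, -1.0])           # linprog minimizes, so maximize v_biosynthesis
result = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)

print(result.x)                       # roughly [2, 0, 10, 18]: only the p-ATP pathway is used
print(result.x[3] / glycolytic_flux)  # yield 1.8, the respiration-only maximum of Addendum 1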
The effect of this flux-balance analysis is therefore that, if there are two pathways in parallel that achieve the same biosynthesis but differ in ATP stoichiometry, only the pathway with the best ATP stoichiometry survives. Indeed, flux-balance analysis leads to optimal states. A problem with this is that microbial growth is known not to attain the maximum theoretical growth yield (see above), not even after correction for growth-rate-independent maintenance (Stouthamer, 1979; Westerhoff et al., 1982). Evidence has been presented that microbial growth may be optimized for maximum growth rate rather than for efficiency (Westerhoff et al., 1983). Once again, therefore, the results of flux analysis should only be taken to correspond to optimal states, which are most probably rarely the real states of actual microorganisms. Of course, the engineer may evolve such organisms. Another case of interest in terms of growth optimization is that of cells that can choose between glucose respiration and glucose fermentation for their biosynthetic needs, such as growth. From the above equation for biosynthesis it is clear that the growth rate would go up with both growth modes, i.e. both with the extreme pathway of respiration-driven growth (increasing α2) and with that of fermentation-driven growth (increasing α1). However, this assumes that all rates can increase to infinity, whereas at some stage there should be some limitation to the system. We may therefore choose to look for the optimum growth rate at a constant glycolytic flux (perhaps due to a constant carbon supply from the environment). We therefore increase α2 and decrease α1 so as to keep the glycolytic flux constant, and obtain for the biosynthetic rate:

\frac{n}{2} \cdot v_{biosynthesis} = v_{glycolysis} + p \cdot (n-1) \cdot \alpha_2 - \alpha_3 - \alpha_4

This shows that the growth rate, and hence the growth yield, increases linearly with the activity of the respiration-driven growth pathway whenever its ATP production exceeds that of the fermentation-driven growth extreme pathway (i.e. n > 1). Because this is a linear optimization strategy, the optimum lies at one end of the scale, with maximum respiration. This result is not in accordance with the experimental observations for S. cerevisiae, which does prefer fermentation, but it may explain why many other cell types prefer respiration. The fact that the result is not in accordance with the experimental observations for S. cerevisiae is a good illustration of the effect of the assumptions made in flux-balance analysis.
Index
A-optimal designs 30 access to models 93–8 activation shoulder 139 actual network dynamics 267 Aeromonas hydrophilia 84 Akaike criterion 38 antagonistic interactions 214–15 antimicrobials 168, 215, 271–2 activity based models 220 applications for models 10, 83–93 interaction models 221–2 mechanistic models 208–10 Arrhenius equation 35, 133–4 artificial intelligence 38–9 ATP/ADP production and consumption 233–7, 246, 255, 282–6 Australian Quarantine Inspection Service (AQIS) 13 Bacillus cereus 45–6, 69 Bacillus megaterium 71 Bacillus subtilis 71–2 bacterial spores dormant stage 69 environmental effects on 72 experimental approaches 72–3 germination and outgrowth 69–72 high-pressure inactivation 189 lag-time models 69–73 Baranyi-Roberts model 132 Bayesian belief networks (BBNs) 39, 65 Bayesian information criterion (BIC) 38 bio preservatives 188
biological variability 52 biphasic model 172 blueprint and metabolic potential 261–4 Box-Behnken design 26 broilers 14–15 CAC (Codex Alimentarius Commission) 15, 121 Campylobacter 67 thermal death kinetics in broilers 14–15 catabolic fluxes 232–7 cell-to-cell variability 272 Chick-Watson equation 199 chilling processes 13 classifications 251 Clostridium botulinum 8, 67 germination kinetics 77 recovery of heat injured spores 71 Clostridium perfiringens 71 Codex Alimentarius Commission (CAC) 15, 121 collection of data 12, 22–3, 183–5 ComBase 11, 84–5, 88, 119 communication of modelling results 59– 60 confidence intervals 47 control charts 99–100 critical analysis of literature 12–15 D-optimal designs 29–30, 33, 42–3 D-values 130, 133, 175 data collection 12, 22–3, 183–5
processing 34–5 raw data 62 transformations 35 databases 11–12, 82–5 information sources 105 death question 90–3 defence mechanisms 229 descriptive interaction models 220 design in practice 30–4 see also experimental design distribution of lag-times 74 DNA sequence 252–3 Doehlert matrix 26 dormant stage of bacterial spores 69 dynamic pressure-temperature conditions 180–1 E-optimal designs 30 ecology models 223–4, 274–6 electron transport phosphorylation 233–6 electronic access to models 93–5 electronic temperature loggers 9, 17–18 elementary mode analysis 264 empirical models 27, 149–57 energy conservation 232–7 Enterobacteriaceae 215 E. sakazakii risk assessment 119 enzymome analysis 266 ERH Calc 95 Escherichia coli and chilling processes 13 flux-balance analysis 265 in ground beef 214 growth on meat 12–13 high-pressure inactivation 163 inactivation during production of UCFM meat 13–14 inactivation in meat gravy 91–3 experimental design 23, 24–34 design in practice 30–4 fractional factorial design 25–6, 27 full factorial design 25 of lag-time models 72–3, 77–8 optimal design 27–30, 34 response surface modelling 24–7 exposure assessment 101 F-tests 36–8 factorial design 25–6 fermented foods 13–14, 222 first-order inactivation kinetics 130–1 first-order inactivation model 171–2 first-order kinetic models 200–2 Fisher information matrix 29
fitting see model fitting flux-balance analysis 264–5, 266–7, 286– 8 food consumption 274–6 food matrices 181 food microbiology 259–60 food production 261–70 product attractiveness 222 sensory/nutritional attributes 8 food safety Food Safety Objectives (FSO) 110, 121–2 and interaction models 221 and systems biology 270–3 triangle 15–16 food spoilage research 273–4 fractional factorial design 25–6, 27 full factorial design 25 functional genomics 265–6 general interaction model 217 genomics 252–3, 261, 265–6 germination of bacterial spores 69–72 GFP (green fluorescent protein) fusions 252 Gompertz model 27, 68, 132 modified Gompertz model 173–4 good microbiology practice (GMP) 60–2 good modelling practice (GMP) 24, 60, 62–3 gravy 91–3 green fluorescent protein (GFP) fusions 252 ground beef 214 growth curves 154–6 Growth Predictor 88, 90, 94 growth question 86–90 growth rate models 68, 86, 149–57, 222– 3, 255 inhibition model 217–19 logistic growth 131–2 growth temperatures 230–2 HACCP 15–16 hazard characterization 101 hazard identification 101 heat treatments, and sensory/nutritional food attributes 8 heteroscedastic data 35 hierarchical regulation analysis 268–70 high-pressure inactivation 161–91 of bacterial spores 189 bio preservatives 188 biphasic model 172
isothermal inactivation 130 juice processing 90–1 kinetic behaviour and inactivation 169–71 kinetic modelling 9, 85–6 lactic acid bacteria 215 Lactococcus lactis 267 lag phase duration 63–4, 68 lag-time models 67–79, 116, 119 and bacterial spores 69–73 definition of lag-time 68 distribution of lag-times 74 experimental design 72–3, 77–8 lag-time microbial growth rate relationship 68–9 quantitative models 73–8 relevance of lag-time 67–8 Le Chatelier principle 161–2 linear regression techniques 34–5 linking models 113–19 Listeria Control Model 95 Listeria innocua 220 Listeria monocytogenes 87–90 HP inactivation 163 risk assessment 119, 120 on smoked salmon 116 Listeria Suppression Model 95 log logistic model 174 log-linear microbial death kinetics 8 log-linear model 133 log-transformations 35 logistic distributions 139 logistic growth 131–2 Lotka-Volterra-type model 220 lysine production 269 mathematical models see quantitative microbiology Maxwell-Boltzman distribution 200 measurement of growth and interactions 216 meat E. coli growth 12–13 gravy 91–3 mechanistic models 27, 198–212, 255 applications 208–10 first-order kinetic models 200–2 future tends 211–12 information sources 212 limitations 211 non log-linear survivor curves 202–3, 204–5
normal log-linear survivor curves 203–4 strengths 210 survivor curve models 199–200 validation 205–8 weaknesses 210–11 metabolic control analysis 267–8 metabolic engineering 261–70 metabolic pathways 264 Michaelis-Menten kinetics 238, 239, 255 microbial ecology models 274–6 microbial interaction models 214–24 antagonistic interactions 214–15 antimicrobial activity based 220 architecture improvements 222–3 descriptive models 220 and ecology 223–4 and fermented foods 222 food safety applications 221 future trends 222–4 general model 217 growth inhibition model 217–19 measurement of growth and interactions 216 nutrient depletion 215–16, 219 and population dynamics 223–4 positive interaction models 223 product attractiveness applications 222 and risk assessment 222, 224 shelf stability applications 221 microbial physiology 254 microbiological physics 251–2 minimum growth temperature 49 model fitting 30–4, 36–8 modified Gompertz model 173–4 modular process risk model (MPRM) 123–4 Monod equation 218, 255 Monte Carlo simulation 17, 39, 50, 52, 58 Most Probable Number (MPN) 64 mRNAs 252–3 nested models 36–7 network control 267–8 network dynamics 267 network regulation 268–70 network topology 261–6 blueprint and metabolic potential 261–4 flux-balance analysis 264–5, 266–7, 286–8 and functional genomics 265–6 metabolic pathways 264 neural networks 38–9 non log-linear survivor curves 202–3, 204–5
non-isothermal growth curves 154–6 non-isothermal inactivation 145–9 non-linear isothermal inactivation models 134–5 non-linear regression techniques 34–5 normal log-linear survivor curves 203–4 nutrient depletion 215–16, 219 nutritional food attributes 8 Ockham’s razor 24 OD (optical density) measurements 78 Opti.Form 95 optimal design 27–30, 34 outgrowth stage of bacterial spores 69–72 parameter sensitivity function 27–9 parsimony principle 24 Pathogen Modelling Program (PMP) 45– 7, 84, 85, 91–2, 93–4 PathogenCombat project 17 pathway analysis 264, 271 pattern-oriented modelling (POM) 275 Perfringens Predictor 94 pH environment 167 PHI (Process Hygiene Index) 13 phosphorylation 233–6 Pirt-type equations 219 Plackett-Burman design 26 PMP (Pathogen Modelling Program) 45– 7, 84, 85, 91–2, 93–4 Poisson process 53 polynomial models 26, 176 POM (pattern-oriented modelling) 275 population dynamics 223–4 positive interaction models 223 predictive microbiology 7–19 applications of 10 critical analysis of literature 12–15 current activity 10–11 data collection 12, 22–3, 183–5 databases 11–12, 82–5, 105 historical development 8–10 imprecise predictive models 46–8 Pathogen Modelling Program (PMP) 45–7, 84, 85, 91–2, 93–4 QMRA (quantitative microbial risk assessment) 15–17, 110–11 technology and the application of 17–18 see also quantitative microbiology; uncertainty and variability pressure treatments see high-pressure inactivation pressure-temperature-time integrators (PTTIs) 190
Index primary inactivation models 171–6 probability distributions 48, 62 probability modelling 9, 86 Process Hygiene Index (PHI) 13 process optimization 187 process variability analysis 100–1 product attractiveness applications 222 protein synthesis reactions 237, 238 proteomics 265–6 published access to models 96–8 pyruvate 264 QMRA (quantitative microbial risk assessment) 15–17, 110–11 Quality Assurance models 276 quantitative microbiology 82–105, 111, 255 access to models 93–8 applications for models 83–93 control charts 99–100 databases 11–12, 82–5, 105 death question 90–3 future trends 103–4 growth question 86–90 lag-time models 73–8 linking models 113–19 process variability analysis 100–1 quantitative risk assessment 101–3 representivity of models 120–1 statistical sampling plans 99 trend analysis 99–101 types of models 85–6, 111 see also predictive microbiology rate equations 238–9, 246–9 Ratkowsky equation 119–20 raw data 62 recontamination model 112–13 reference organisms 185–7 regression techniques 34–5 representivity of models 120–1 residual mean square 36 residual sums of squares 37–8 residual survival 139, 141 residual variance 29, 36 response surface modelling 24–7 reutericyclin 215 risk assessment 119–20, 121–4 Enterobacter sakazakii 119 and Food Safety Objectives 121–2 information sources 105 and interaction models 222, 224 L. monocytogenes 119, 120 modular process risk model (MPRM) 123–4
QMRA (quantitative microbial risk assessment) 15–17, 110–11 quantitative 101–3 SIEFE model 122–3 risk characterization 101 RMS (residual mean square) 36 Rodriguez-Sapru model 205, 208–9, 211– 12 RSS (residual sums of squares) 37–8 Saccharomyces cerevisiae 228–42 catabolic fluxes 232–7 electron transport phosphorylation 233– 6 energy conservation 232–7 experimental setup 230 growth temperatures 230–2 kinetic model 237–42 rate equations 238–9, 246–9 validation 239–40 protein synthesis reactions 237, 238 substrate level phosphorylation 233 Safety Predictor (SSSP) software 94–5 Salmonella 67 HP inactivation 163 inactivation in beef 14 sandwich spreads 113–16 Schwartz criterion 38 Seafood Spoilage 94–5 secondary inactivation models 132–43, 176–8 sensitivity function 27–9 sensory/nutritional food attributes 8 shelf stability applications 221 SIEFE model 122–3 sigmoid growth curves 154–6 sigmoid survival curves 143–5 silicon cell program 267 single-cell analysis 272–3 Smart-Trace 18 smoked salmon 116 spatial models 223 spores see bacterial spores square root transformations 35 Staphylococcus aureus 67 HP inactivation 163 statistical sampling plans 99 stochastic simulation models 16–17 stochasticity 47–8, 63–4 strain-to-strain variability 51, 272 stress response mechanisms 229 substrate level phosphorylation 233 survival curves 136–7, 140, 143–5, 146–9 survival models 86, 199–200
Sym’Previus 12, 119 systems biology 256–60 actual network dynamics 267 classifications 251 enzymome analysis 266 and food consumption 274–6 and food microbiology 259–60 and food production 261–70 and food safety 270–3 and food spoilage research 273–4 genomics 252–3, 261, 265–6 information sources 276–7 microbial physiology 254 microbiological physics 251–2 network control 267–8 network regulation 268–70 network topology 261–6 proteomics 265–6 single-cell analysis 272–3 T-optimal designs 30 T-profiler 268 temperature electronic loggers 9, 17–18 minimum growth temperature 49 temperature-dependence inactivation 137–8 time-temperature integrators (TTIs) 190 territory models 223 thermal death time 130, 141–3 time to inactivation models 86 time-temperature integrators (TTIs) 190
transformations of data 35 trend analysis 99–101 12D concept 259 UCFM meat production 13–14 uncertainty and variability 44–65, 120–1 communication of modelling results 59–60 confidence intervals 47 current developments and future trends 63–5 separating uncertainty and variability 54–7 stochasticity 47–8, 63–4 uncertainty 48, 52–4 variability 47–8, 51–2 worst case predictions 47 validation of models 205–8, 239–40 variability see uncertainty and variability Weibull model 135–6, 172–3 Williams-Landel-Ferry (WLF) model 133 wireless mesh networks 18 worst case predictions 47 Yersinia enterocolitica, UHT treatment 167–8 z-values 133, 175 Zygosaccharomyces bailii 96