Nondestructive Testing of Food Quality
EDITORS
Joseph Irudayaraj and Christoph Reh
The IFT Press series reflects the mission of the Institute of Food Technologists – advancing the science and technology of food through the exchange of knowledge. Developed in partnership with Wiley-Blackwell, IFT Press books serve as leading edge handbooks for industrial application and reference and as essential texts for academic programs. Crafted through rigorous peer review and meticulous research, IFT Press publications represent the latest, most significant resources available to food scientists and related agriculture professionals worldwide.
IFT Book Communications Committee: Dennis R. Heldman, Joseph H. Hotchkiss, Ruth M. Patrick, Terri D. Boylston, Marianne H. Gillette, William C. Haines, Mark Barrett, Jasmine Kuan, Karen Nachay
IFT Press Editorial Advisory Board: Malcolm C. Bourne, Fergus M. Clydesdale, Dietrich Knorr, Theodore P. Labuza, Thomas J. Montville, S. Suzanne Nielsen, Martin R. Okos, Michael W. Pariza, Barbara J. Petersen, David S. Reid, Sam Saguy, Herbert Stone, Kenneth R. Swartzel
Joseph Irudayaraj, PhD, is an associate professor of Agricultural and Biological Engineering at Purdue University, West Lafayette, IN. With over 15 years of research and teaching experience in biological and food engineering, Dr. Irudayaraj has been a faculty member at the University of Saskatchewan, Utah State University, and Penn State. His current role at Purdue is to develop micro- and nanosensors for food, health, and environmental applications.

Christoph Reh, PhD, is a research scientist at the Nestlé Research Center, Lausanne, Switzerland, working on scientific projects for innovative beverage concepts. Prior to his appointment he was involved for more than 10 years in process analytics, including nondestructive testing for factory application and physico-chemical characterization of foods.

© 2008 Blackwell Publishing and the Institute of Food Technologists
All rights reserved
Chapter 7 copyright is held by Malvern Instruments, Ltd.
Blackwell Publishing Professional
2121 State Avenue, Ames, Iowa 50014, USA
Orders: 1-800-862-6657
Office: 1-515-292-0140
Fax: 1-515-292-3348
Web site: www.blackwellprofessional.com
Blackwell Publishing Ltd
9600 Garsington Road, Oxford OX4 2DQ, UK
Tel.: +44 (0)1865 776868

Blackwell Publishing Asia
550 Swanston Street, Carlton, Victoria 3053, Australia
Tel.: +61 (0)3 8359 1011

Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by Blackwell Publishing, provided that the base fee is paid directly to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For those organizations that have been granted a photocopy license by CCC, a separate system of payments has been arranged. The fee codes for users of the Transactional Reporting Service are ISBN-13: 978-0-8138-2885-5/2008.

First edition, 2008

Library of Congress Cataloging-in-Publication Data
Nondestructive testing of food quality / edited by Joseph Irudayaraj and Christoph Reh. – 1st ed.
p. cm. – (IFT Press series)
Includes bibliographical references.
ISBN-13: 978-0-8138-2885-5 (alk. paper)
ISBN-10: 0-8138-2885-6 (alk. paper)
1. Food–Quality. 2. Food industry and trade–Quality control. 3. Food adulteration and inspection. I. Irudayaraj, Joseph, 1961– II. Reh, Christoph. III. Series.
TP372.5.N66 2008
664.117–dc22
2007023792

The last digit is the print number: 9 8 7 6 5 4 3 2 1
Titles in the IFT Press series
• Accelerating New Food Product Design and Development (Jacqueline H.P. Beckley, Elizabeth J. Topp, M. Michele Foley, J.C. Huang, and Witoon Prinyawiwatkul)
• Biofilms in the Food Environment (Hans P. Blaschek, Hua Wang, and Meredith E. Agle)
• Calorimetry and Food Process Design (Gönül Kaletunç)
• Food Ingredients for the Global Market (Yao-Wen Huang and Claire L. Kruger)
• Food Irradiation Research and Technology (Christopher H. Sommers and Xuetong Fan)
• Food Risk and Crisis Communication (Anthony O. Flood and Christine M. Bruhn)
• Foodborne Pathogens in the Food Processing Environment: Sources, Detection and Control (Sadhana Ravishankar and Vijay K. Juneja)
• High Pressure Processing of Foods (Christopher J. Doona, C. Patrick Dunne, and Florence E. Feeherry)
• Hydrocolloids in Food Processing (Thomas R. Laaman)
• Microbiology and Technology of Fermented Foods (Robert W. Hutkins)
• Multivariate and Probabilistic Analyses of Sensory Science Problems (Jean-Francois Meullenet, Rui Xiong, and Chris Findlay)
• Nonthermal Processing Technologies for Food (Howard Q. Zhang, Gustavo V. Barbosa-Canovas, V.M. Balasubramaniam, Editors; C. Patrick Dunne, Daniel F. Farkas, James T.C. Yuan, Associate Editors)
• Nutraceuticals, Glycemic Health and Diabetes (Vijai K. Pasupuleti and James W. Anderson)
• Packaging for Nonthermal Processing of Food (J. H. Han)
• Preharvest and Postharvest Food Safety: Contemporary Issues and Future Directions (Ross C. Beier, Suresh D. Pillai, and Timothy D. Phillips, Editors; Richard L. Ziprin, Associate Editor)
• Processing and Nutrition of Fats and Oils (Ernesto M. Hernandez, Monjur Hossen, and Afaf Kamal-Eldin)
• Regulation of Functional Foods and Nutraceuticals: A Global Perspective (Clare M. Hasler)
• Sensory and Consumer Research in Food Product Design and Development (Howard R. Moskowitz, Jacqueline H. Beckley, and Anna V.A. Resurreccion)
• Thermal Processing of Foods: Control and Automation (K.P. Sandeep)
• Water Activity in Foods: Fundamentals and Applications (Gustavo V. Barbosa-Canovas, Anthony J. Fontana Jr., Shelly J. Schmidt, and Theodore P. Labuza)
• Whey Processing, Functionality and Health Benefits (Charles I. Onwulata and Peter J. Huth)
Contents
Contributors
Preface
Chapter 1. An Overview of Nondestructive Sensor Technology in Practice: The User's View (Christoph Reh)
Chapter 2. The Influence of Reference Methods on the Calibration of Indirect Methods (Heinz-Dieter Isengard)
Chapter 3. Ultrasound: New Tools for Product Improvement (İbrahim Gülseren and John N. Coupland)
Chapter 4. Use of Near Infrared Spectroscopy in the Food Industry (Andreas Niemöller and Dagmar Behmer)
Chapter 5. Application of Mid-infrared Spectroscopy to Food Processing Systems (Colette C. Fagan and Colm P. O'Donnell)
Chapter 6. Applications of Raman Spectroscopy for Food Quality Measurement (Ramazan Kizil and Joseph Irudayaraj)
Chapter 7. Particle Sizing in the Food and Beverage Industry (Darrell Bancarz, Deborah Huck, Michael Kaszuba, David Pugh, and Stephen Ward-Smith)
Chapter 8. Online Image Analysis of Particulate Materials (Peter Schirg)
Chapter 9. Recent Advances in Nondestructive Testing with Nuclear Magnetic Resonance (Michael J. McCarthy and Young Jin Choi)
Chapter 10. Electronic Nose Applications in the Food Industry (Parameswarakumar Mallikarjunan)
Chapter 11. Biosensors: A Theoretical Approach to Understanding Practical Systems (Yegermal Atalay, Pieter Verboven, Steven Vermeir, and Jeroen Lammertyn)
Chapter 12. Techniques Based on the Measurement of Electrical Permittivity (Malcolm Byars)
Index
Contributors
Yegermal Atalay (11) Division Mechatronics, Biostatistics and Sensors, Department of Biosystems, Katholieke Universiteit Leuven, Willem de Croylaan 42, B-3001 Leuven, Belgium
Darrell Bancarz (7) Malvern Instruments Ltd., Grovewood Road, Enigma Business Park, Malvern, Worcestershire, WR14 1XZ, United Kingdom
Dagmar Behmer (4) Bruker Optik GmbH, Rudolf-Plank-Str. 27, 76275 Ettlingen, Germany
Malcolm Byars (12) Process Tomography Ltd., 86 Water Lane, Wilmslow, Cheshire, SK9 5BB, United Kingdom
Young Jin Choi (9) Department of Food Science and Biotechnology, San 56-1, Sillim-dong, Gwanak-gu, Seoul 151-742, Republic of Korea
John Coupland (3) Department of Food Science, 103 Borland Lab, The Pennsylvania State University, University Park, PA 16802, USA
Colette Fagan (5) Biosystems Engineering, UCD School of Agriculture, Food Science and Veterinary Medicine, Earlsfort Terrace, Dublin 2, Ireland
İbrahim Gülseren (3) Department of Food Science, 103 Borland Lab, The Pennsylvania State University, University Park, PA 16802, USA
Deborah Huck (7) Malvern Instruments Ltd., Grovewood Road, Enigma Business Park, Malvern, Worcestershire, WR14 1XZ, United Kingdom
Joseph Irudayaraj (6) 225 S. University Street, Purdue University, West Lafayette, IN 47907, USA
Heinz-Dieter Isengard (2) University of Hohenheim, Institute of Food Science and Biotechnology, Garbenstr. 25, D-Stuttgart, Germany
Michael Kaszuba (7) Malvern Instruments Ltd., Grovewood Road, Enigma Business Park, Malvern, Worcestershire, WR14 1XZ, United Kingdom
Ramazan Kizil (6) Istanbul Technical University, Chemical Engineering Department, Maslak, 34469 Istanbul, Turkey
Jeroen Lammertyn (11) Division Mechatronics, Biostatistics and Sensors, Department of Biosystems, Katholieke Universiteit Leuven, Willem de Croylaan 42, B-3001 Leuven, Belgium
Parameswarakumar Mallikarjunan (10) 312 Seitz Hall, Virginia Tech, Blacksburg, VA 24061, USA
Michael J. McCarthy (9) Department of Food Science and Technology, University of California–Davis, Davis, CA 95616-8598, USA
Andreas Niemoeller (4) Bruker Optik GmbH, Rudolf-Plank-Str. 27, 76275 Ettlingen, Germany
Colm O'Donnell (5) Biosystems Engineering, UCD School of Agriculture, Food Science and Veterinary Medicine, Earlsfort Terrace, Dublin 2, Ireland
David Pugh (7) Malvern Instruments Ltd., Grovewood Road, Enigma Business Park, Malvern, Worcestershire, WR14 1XZ, United Kingdom
Christoph Reh (1) Nestlé Research Center, Vers-Chez-les-Blanc, CH-1000 Lausanne, Switzerland
Peter Schirg (8) PS Prozesstechnik GmbH, Novartis Areal, K-970.1, CH-4002 Basel, Switzerland
Pieter Verboven (11) Division Mechatronics, Biostatistics and Sensors, Department of Biosystems, Katholieke Universiteit Leuven, Willem de Croylaan 42, B-3001 Leuven, Belgium
Steven Vermeir (11) Division Mechatronics, Biostatistics and Sensors, Department of Biosystems, Katholieke Universiteit Leuven, Willem de Croylaan 42, B-3001 Leuven, Belgium
Stephen Ward-Smith (7) Malvern Instruments Ltd., Grovewood Road, Enigma Business Park, Malvern, Worcestershire, WR14 1XZ, United Kingdom
Preface
During the last few years, nondestructive testing of food quality has drawn increasing attention from the food industry and research institutions. Based on the overwhelming need and the motivation provided by the success of past Institute of Food Technologists (IFT) symposia on food quality testing and measurements, we brought together scientists and engineers from academia and industry to provide their perspectives on nondestructive testing methods. When preparing the book we realized the opportunity that nondestructive testing has provided to food science and food technology. On one hand, the food industry is now able to automate a large number of production control analyses, allowing analytical costs to be reduced, processes to be improved, and product quality to be increased to meet quality standards and regulations as well as customer satisfaction. Because of nondestructive testing methods, it is now possible to follow food products during processing without disturbing the product through sampling. These improvements were made possible by developments in related technology areas such as computing, optical devices, and miniaturization. The rapid development of CCD optical chips, combined with a huge drop in their price, is a simple example that attests to this fact. We hope that this book will help readers become aware of the different technologies available and increase the impact of nondestructive testing of food in production and research. We leave the readers with the advice that a holistic approach considering process, product, people, and method will always give the best application for nondestructive testing. We are very thankful to all of our authors from academia and industry for giving us their precious time and providing several interesting perspectives and valuable insights.
Chapter 1
An Overview of Nondestructive Sensor Technology in Practice: The User's View
Christoph Reh
Introduction

This introductory chapter describes the area where nondestructive food testing is relevant and why it is considered to be an area of increasing interest. The chapter should give an idea of the main drivers of this area of analytics and illustrate the limitations users will face when they develop new applications. The requirements of a factory application are different from those of nondestructive instrumentation used in the field, on the farm, in a warehouse, in the supermarket, in central laboratories, or even for specific research purposes. The underlying argument of this chapter is that understanding the operation of the applied sensor is important for validating the application. Often the simple use of nondestructive instruments lets the user believe that the analysis performed is relevant and valid. However, in reality, it might not be so.
Why Do We Need Nondestructive Testing to Increase Food Quality?

The success of nondestructive instrumentation in the food industry is driven by several considerations. Despite the often significant investment, more and more installations are beneficial to the operator because they are well implemented. This can only be achieved when the target
environment is well analyzed so that the desired equipment fits into plant operations in the optimum manner. The planning phase is probably the most important, since the implications for people, instrumentation, methodology, required material, environment, and management are identified and translated into specifications. Success can be planned, and many difficulties can be avoided if the final installation is well understood from the start. The underlying drivers for using nondestructive instruments are either cost reduction or improved operations. During the evaluation phase, often only direct cost reductions and investments are considered. Because investments are often significant, the benefits are not always completely seen at the start of the project. The principal advantages of online applications are reduction of the analysis time, reduction of the cost of analysis, shortening of the release time, and, as a consequence, lowering of production costs. Additionally, operators can improve their process understanding, their control of the process, and, as a consequence, first-time quality as a result of improved product consistency. Nondestructive testing equipment can be used widely throughout the food industry. The following areas are the most relevant:

1. Raw material control in the field or at the factory reception
2. Process control, either online or off-line after sampling
3. Rapid analysis of intermediate or final products in the laboratory
4. Product development and storage testing
5. Research
Raw Material

Raw material is of great importance for the food industry. To keep warehouse stock low, ingredients are often delivered just in time. This requires very rapid release procedures, forcing companies to apply rapid nondestructive testing widely. Another driver of this trend is the increased consumer demand for fresh products, which has led much of the industry to shorten the chain between the farm and the consumer. Wherever time can be cut out of the supply chain, the consumer will benefit. Another aspect is the relatively narrow specifications of raw materials required for more and more products. Nondestructive testing at raw material reception is often an integral part of ensuring compliance with the specifications set.
Figure 1.1. Online near infrared analyzer Corona to perform compositional analysis in food production (Carl Zeiss GmbH, Jena).
This procedure consequently leads to reduced product losses as a result of more narrowly controlled specifications. In the longer term, an improvement of the consumer-perceived product quality is observed. Raw material control is therefore even extended into agricultural production: ingredients can be directed to their optimum use based on an on-site quality assessment.

Process Control

Process control can be done either online or off-line. By online analysis, we normally understand that no human sampling is involved in the measurement process. We further differentiate between direct and bypass solutions. In a direct measurement, the instrument does not affect the process and measures the product directly in the process line, a storage tank, or a mixing operation. Figure 1.1 shows a typical installation of an online analyzer in direct measurement, a diode-array near infrared spectrometer of type Corona from Carl Zeiss GmbH (Jena, Germany). A bypass instrument is placed in a bypass loop to which the product is diverted in order to perform the measurement; the product is returned to the line after measurement. This is applied
when the measurement instrument requires very well-defined measurement conditions that cannot be achieved directly in the process. Examples are requirements for a specific distance between a transmitter and a receiver, the collection of sufficient product, or the compaction of the product. In some cases, the product is discarded, which is uncharacteristic of nondestructive testing. Nevertheless, these applications might still be very beneficial because they have significant advantages in terms of reducing the use of chemicals, reducing the influence of humans, and very short measurement times. The reduction or elimination of the use of chemicals is a strong driver for using indirect nondestructive techniques. To avoid any risk, chemicals are often banned in areas where they come into contact with the food. The alternative to online analysis is measurement either off-line, at-line, or near-line. In all of these applications, a sampling procedure from the process line to the instrument is required. The instrument can be located next to the line, in a production laboratory, or in a central laboratory. The sampling procedure is in all cases a risk for the quality of the analytical result. Operator influence is considered to be a major source of error. Another influence can be the physical modifications a product undergoes before it is measured. On the other hand, it is often easier to install an off-line nondestructive installation because normally a standard instrument setup can be used.

Final Products

The rapid analysis of intermediate or final products in the laboratory is, from a measurement point of view, very similar to off-line analysis. The motivation for this type of application is to increase the efficiency of the analysis required for the release of these products. This is especially advantageous if a large number of samples need to be screened. Typical drivers are also cost reductions because of the reduced cost of analysis and the reduction of chemicals used in the laboratory.

Product Development

For product development and storage testing, nondestructive testing can be an advantage because of the ability to follow the properties of one single product over time. The majority of traditional analyses in the food
industry are based on destructive procedures, and it is not always certain that all products tested are exactly the same. By following a single product, its evolution can often be established more precisely.

Research

The interest of research groups in applying nondestructive analysis is similar to the one mentioned for product development. The ability to follow one single product during a process or during its shelf life gives a huge advantage compared to traditional testing procedures. Even more, it gives the ability to follow changes that could not be detected using traditional approaches. The development of the area of nondestructive analysis for food research is principally driven by the improved resolution of sensors and the mathematical capabilities of today's computers. In the case of food, this allows researchers to follow processes such as drying, cooking, baking, crystallization, homogenization, gelation, or agglomeration.
Changes in the Food Industry and Consequences for the Use of Sensors

The food industry is undergoing a significant process of consolidation. This typically results in an increase in the size of production facilities, allowing the operator to make better use of the installation. It is obvious that real-time online analysis or rapid near-line analysis based on nondestructive techniques leads to clear benefits for the operator. One of the observed trends is the increased automation of production. This allows an increased use of online instrumentation and especially of nondestructive instruments. The following list gives some of the advantages:

• Improved product quality
• Less downtime between production cycles
• Reduction of waste
• Increase of capacity
• Improved operational security
• Better use of energy and resources
• Shorter holding time of raw materials and finished products
A large number of production facilities are still labor intensive with a low level of automation. There is a strong tendency to reduce the human influence on the product by introducing more continuous or automated batch processes. This is only possible if the lot sizes are sufficiently large to generate an economic advantage. Another driver is the increased demand for traceability of production. With an automated process and integrated sensors measuring key attributes, the product quality at any point in time can be mapped. More and more products receive a time coding, which can often be traced back to the data collected during production. On the other hand, not all product parameters can be measured automatically. Whereas the principal chemical composition and some physical product aspects can be measured by nondestructive instruments, the sensory characterization of the product still requires human testing. Despite massive efforts to develop electronic noses or electronic tongues, only very few applications are used industrially for product release. It is more common to use the available measurement capabilities to optimize the process and to keep the process conditions in an operating range where the required sensory parameters are delivered. The products are controlled for their key sensory aspects by a panel of experienced people for release purposes. This procedure is unlikely to change in the coming years because minor components or modifications can lead to significant changes in the product. It should be pointed out that, contrary to the chemical and pharmaceutical industries, food products are generally less defined regarding their chemical composition. The majority of raw agricultural products have quite a wide specification. Additionally, not all processes in the food industry are understood in detail. This is especially true for all aspects related to the aroma and taste of food because of the very complex chemistry occurring during processing. Additionally, if the product at the time of production is not yet in physical and chemical equilibrium, it does not yet provide the characteristics the consumer perceives.
What Are the Central Elements of Successful Use?

The successful use of nondestructive instrumentation relies on a complete understanding of the environment. The nondestructive technology
and its technical capabilities are only one piece. The following elements are of central importance:

1. Staff
2. Instrumentation
3. Method
4. Consumables
5. Place of installation
6. Management
Involving the Right Staff

Adequately trained staff are often the key to installing nondestructive testing instrumentation. This is normally not an issue for an academic application, where the instrument is often a central part of the research. When it is used in an industrial environment, this issue becomes more critical. Management often considers nondestructive testing equipment easy to use and therefore assumes that normal factory staff should easily be able to install and operate the application. This assumption is especially incorrect for the period between the selection of the equipment and its installation. It might be true for the operational phase as long as the staff is well trained to perform the maintenance of the instrument. To develop correct specifications, to define adequate methods, and to find the correct location, a global understanding of the task is required. The most difficult applications are normally online applications because the process defines quite a number of parameters affecting the measurement. Often it is best to bring together a team covering production, engineering, quality assurance, and product development to correctly plan and install the equipment. After training of the operational staff, outside help will be required on an occasional basis.
Specifying the Instrumentation

The specification process is crucial for the success of any large investment in analytical equipment, and this is especially true for nondestructive testing installations. As already mentioned, it would be
preferable to run this process with a team of people with different expertise. The specification process is very well described by Bedson (1996). This paper gives guidance on how to perform equipment validation for any analytical instrument. In practice, it is very difficult to give general advice for the installation process covering a large number of different technologies, because the critical points can vary from one technology to another. It is therefore extremely useful to consider the introduction of a nondestructive application as a process with a generic structure. Bedson (1996) developed guidance for the equipment qualification process, including the following four stages:

1. Design qualification
2. Installation qualification
3. Operational qualification
4. Performance qualification
Design Qualification

Design qualification covers all tasks related to planning and selecting the application, including development of the specifications leading to selection of the supplier. The choice to develop an application should start from a clearly defined need for a certain measurement. The design qualification should lead to the development of instrument specifications, which will be the basis of the relationship with the instrument supplier. These specifications strongly depend on where the instrument will be deployed. For process control, optimum performance within a relatively small range of variation of the targeted parameter might be sought. For a research application, one will choose an instrument with high flexibility regarding its range of application. Apart from purely instrument-related specifications, one should also define the requirements related to staff, methods, installation, and consumables.

Installation Qualification

The next step of the process is installation qualification, covering all of the procedures related to installation of the equipment in its place of use. One of the most time-consuming exercises can be calibration, especially when process parameters are measured indirectly with techniques such
as near infrared spectroscopy, refractometry, microwave absorption, or similar techniques. For a correct calibration, the choice of a well-adapted reference method is often critical. Other issues in this context are reference laboratory performance, sampling procedures, and generation of a sample set covering the required calibration range. In a production environment, the variation in concentration of one naturally occurring ingredient can be quite small, and sometimes the range needs to be extended to develop a stable calibration. In other cases, physicochemical changes originating from the production process can affect either the reference analysis or the nondestructive technique. These are some of the problems that should be explored by the team during either installation or operational qualification.

Operational Qualification

Operational qualification is required for the instrument to operate under defined conditions. This step is usually less critical when the two earlier steps have been performed well. It is obvious that a better specification of an application will lead to fewer surprises at this stage. It is important to accurately document all actions that have been performed; the better the documentation, the more easily required actions can be identified.

Performance Qualification

The final step of the described process is performance qualification, where one has to demonstrate that the installation performs according to the specifications set at the beginning of the process. For process equipment, the focus of this validation step is assessment of the precision, accuracy, and robustness of the instrumental setup. It should be pointed out that the qualification process does not stop after the four qualification steps have been completed. Instruments will have to undergo requalification after any significant change, such as:

• Change of the place of installation
• Modification of the instrument or the operating software
• Replacement of parts
• Maintenance
• Modification of the product or process
The degree of this requalification depends strongly on the type of equipment and the severity of the change. Based on experience gained during the qualification process, a protocol should be established for the most common changes. The effect of changes related to the equipment can often be estimated based on validation studies. This can be best illustrated for the place of installation. Presentation of the food product to the equipment is often the most critical parameter for the overall performance of the equipment. During installation and calibration, aspects such as the distance of the sensor to the product, the aperture of a measurement window, or the angle of measurement are often assessed and documented. These data are very helpful for defining the critical parameters for requalification. In the case of maintenance, repair, or upgrade, requalification is mainly targeted toward assessing the equivalent functioning of the new component. For most of these changes, the equipment supplier normally provides a testing procedure. It is important to ensure the availability of these tests during design qualification because it limits the time required for requalification. Because measurement equipment is normally maintained by the quality assurance department, changes in the product or process are often overlooked, or their influence on the measurement is underestimated. Compositional changes due to recipe adaptations or even natural variations of raw materials can cause differences in the results. Other sources of difference can be changing operating conditions of ovens, mixers, homogenizers, and other processing devices. The effect of homogenization of milk on the measurement of fat and solids-non-fat (SNF) by mid-infrared spectroscopy is widely known and studied; this example will be discussed in more detail later in this chapter. For the reliable compositional analysis of powders by near infrared spectroscopy, particle size is a critical parameter. Particle size distributions of powders often vary as a result of the operating conditions of mills, spray dryers, or agglomerators. This influence can be limited either by including the variation of the particle size in the calibration model or by better controlling the operating conditions of the process unit. In reality, an approach taking both factors into account will probably be chosen. This illustrates that, through the introduction of nondestructive measurement, variation of the process and, as a consequence, of the product can be detected and fixed. This leads to an improved definition of the product and to production with higher consistency.
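As a minimal illustration of the precision and accuracy assessment described above, the following Python sketch compares paired sensor and reference results collected after a change; the function name, acceptance limits, and data are illustrative assumptions, not values taken from this chapter.

```python
import numpy as np

def requalification_check(sensor, reference, bias_limit=0.10, sep_limit=0.15):
    """Compare paired sensor and reference results after a change.

    sensor, reference: one result per check sample, in the same unit
    (e.g. % moisture). bias_limit and sep_limit are illustrative
    acceptance limits, not recommendations from this chapter.
    """
    d = np.asarray(sensor, dtype=float) - np.asarray(reference, dtype=float)
    bias = d.mean()              # systematic deviation of the sensor
    sep = d.std(ddof=1)          # scatter around the bias (standard error of prediction)
    within_limits = abs(bias) <= bias_limit and sep <= sep_limit
    return bias, sep, within_limits

# Example: ten check samples measured after replacing a component.
sensor = np.array([3.1, 3.4, 2.9, 3.8, 4.1, 3.3, 3.0, 3.6, 3.9, 3.2])
reference = np.array([3.0, 3.5, 2.9, 3.7, 4.2, 3.3, 3.1, 3.5, 3.8, 3.3])
bias, sep, ok = requalification_check(sensor, reference)
print(f"bias = {bias:+.3f}, SEP = {sep:.3f}, within limits: {ok}")
```

Such a check against reference results can be documented as part of the requalification protocol.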
Defining the Method

Definition of the method is complementary to instrument specification and focuses more on the operational aspects of the application. This work should be done just after a first decision has been made on which technology will be applied for the testing procedure. It might be that the required method procedures are part of the final decision on which technology will be applied. Defining the method is the outcome of the equipment qualification process mentioned under instrument specification. It is very important to translate all of the knowledge collected into an actionable method. The individual steps need to be well defined and documented to ensure long-term application by the operator. Operator training must include generating alertness to the critical points of any nondestructive testing application.
Ensuring the Supply of Consumables

The permanent availability of consumables is probably the easiest point to achieve. Its importance lies in the operational availability of the nondestructive testing instrumentation. Especially in the case of online applications, an outage can lead to significant losses in production. This point is especially relevant in countries or regions where after-sales support from the supplier is limited or slow. Apart from a service guarantee from the equipment supplier, it is often advantageous to perform some of the maintenance in-house with your own staff. In this context, all consumables required for the normal operation of the application should be kept in stock. This could even include spare parts, in order to be able to perform the repairs that are most likely to be needed. Training of the operator for these tasks needs to be considered as well.
Identifying the Place of Installation

The place of installation refers to technical and operational parameters. The technical parameters are driven by the technology used for the
nondestructive testing procedure. Almost all technologies must respect certain technical constraints in order to achieve an optimal result. These constraints will be covered in more detail in the chapters on the major technologies used in the food industry. At this point it should be mentioned that, in the case of nondestructive testing, the measurement setup should normally be adapted to the product and, in the case of production control, to the process line. Therefore, nondestructive instruments are often used after sample preparation. Sample preparation is applied when the outcome of the testing procedure can be improved by optimizing sample presentation to the equipment. It might therefore be important to decide whether sample preparation would lead to a better result. To clarify the technical impact of the place of installation on the quality of the measurement, thorough knowledge of both aspects is required. Sample presentation to the measurement technology needs to be adapted to produce reliable and actionable results, and the instrument setup needs to base its result on a representative sample. Parameters such as sample volume or measurement time are very important in this context. The operational aspects are most often more important than the technical aspects because the economic advantage is principally driven by optimization of the operation. It is of key importance to measure or control in order to be able to correct, but it is of limited use to measure a parameter that can no longer be changed when the result of the testing procedure becomes available. There might be an advantage to automating a testing procedure used for product release. Nevertheless, determining a process parameter that can be used for process control will lead to important savings for the operator.
Getting Management Support

Management support is the ultimate requirement for successful installation of nondestructive instruments. Automated analysis for product quality control often requires a change in mind-set. It goes along with significant changes on the production floor and in the factory laboratory. This affects the day-to-day work of the staff involved in the specific area and can completely change the way product quality is approached. As a consequence of improved product quality monitoring, more proactive intervention in the production process will be required,
which requires management commitment toward continuous improvement. Often automation is seen solely as a cost reduction exercise, and neither management nor the involved staff is prepared to provide the input necessary for further improvement. Cost is the principal driver of any management support. Sometimes management is confronted with an investment decision for equipment without having a good idea of the overall cost of ownership. Nondestructive testing instrumentation is often expensive; it can therefore easily be overlooked that other savings compensate for the investment. Automated or semiautomated food testing tends to be less labor intensive in routine use and, despite an expensive introduction phase, major savings can be made on salaries. One aspect often overlooked is the gain through improved product quality and process reliability. Because these aspects can be related to consumer satisfaction and subsequently to increased sales, the final gain is difficult to evaluate. Practice has shown that quality improvement via process optimization is increasingly becoming the driver for using nondestructive testing. Modern sensors provide permanent availability, excellent measurement precision, and a high frequency of measurement. As more and more food plants are operated 24 hours a day, this argument becomes increasingly important. Costs that can be calculated more readily are the investment cost, the cost of infrastructure, the operating cost, and the cost of running the project. All critical factors arising from the elements mentioned should be translated into specifications that define the requirements for properly operating nondestructive food control. Here is a summary of 10 points that should be well defined before investment:

1. The overall business environment, to achieve a significant advantage for the operation.
2. Clear documentation related to the use of the instrument, including easy-to-use operating manuals, identification of versions, protocols for equipment qualification, etc.
3. A well-defined level of skill to operate the instrument and the details of the required training.
4. The sample throughput and the sampling details.
5. The requirements for data acquisition, data processing, and transformation of the data into actionable information.
6. The requirements for services, utilities, and consumables, such as electricity, special gases, supports, etc.
7. The location and its environmental conditions affecting the operation of the instrument.
8. Maintenance and installation of the instrument, including aspects such as calibration, validation, and servicing of the instrument (a service contract defining procedures and intervals).
9. Support of the instrument by the supplier or a third party, clarifying points like the delay of intervention or the availability of a replacement unit.
10. Health and safety requirements related to the installation, which could affect either the staff or the product.
PAT Initiative—What Can the Food Industry Learn from the Pharmaceutical Industry?

The Process Analytical Technology (PAT) initiative (Guidance for Industry 2004) is another interesting development, which should have a longer-term impact on the way the food industry operates. This will be especially relevant for the segment of the food industry covering health food or functional food. This product area has grown significantly over recent years, with segments such as infant products, baby food, sports nutrition, or clinical nutrition. Additionally, other products are fortified with vitamins or minerals, for example, beverages, juices, breakfast cereals, or dairy products. For many products the concentration of these ingredients needs to be ensured by the producer within limits given by legislation. Traditionally, this has been done, as in the pharmaceutical industry, by very frequent analysis for product release. This practice can be very expensive in the long run and is highly reactive. The food industry has therefore moved increasingly toward nondestructive online or at-line applications. This gains time and avoids out-of-norm products. The PAT initiative (Guidance for Industry 2004) states that product quality and performance are ensured through the design of effective and efficient manufacturing processes: product quality and compliance are built into the product rather than merely measured. Specifications need to be based on an understanding of the process and of the variability occurring due to the influence of ingredients and process parameters. Continuous real-time quality assurance was therefore identified as a key element to help reduce the variability of release
parameters. The combination of the latest scientific understanding of formulation and manufacturing processes with an adequate process control strategy should lead to continuous improvement of the manufacturing process. Kueppers and Haider (2003) have summarized very well the advantages of process analytical chemistry for industry, which also apply to the food industry. Accordingly, online process analysis, and especially nondestructive food testing, allows the following:

• Collection of constant information on process status
• Identification of problems
• Validation of the process
• Improvement of the applied analytical measurement by error reduction
Nondestructive Sensors for Production Control

In this and the next part of the chapter, some recently published applications of nondestructive food testing are described that are or could be used to control food production or could be applied for research purposes. The border between the two fields of application is quite open, and the techniques are often used for both. The examples are chosen to illustrate some of the issues arising during the use of these technologies. The aim is not to elaborate these techniques in detail or to give a complete overview of all possible applications; these techniques and applications will be discussed in more specific detail later in this book. For production, the way nondestructive food testing is performed is very important and influences the outcome that can be expected. As mentioned earlier, there are three main approaches:

• Off-line analytics
• At-line analytics
• Online analytics
Off-line Analytics

Off-line analytics uses standard laboratory equipment and requires sampling and transport of the food product to the instrument. Placement of the nondestructive equipment in a central laboratory allows the use of
standard equipment without special adaptation to factory floor use. The biggest disadvantage is the slow response time and, as a consequence, the absence of direct process adaptation. Another disadvantage is often modification of the sample during sampling, which can be overcome by a good definition of the sampling procedure.

At-line Analysis

At-line analysis requires equipment well defined for the application. An easy, well-defined sampling procedure is essential because the environment near the line and the qualifications of the staff limit complex procedures. Equipment needs to be robust. In contrast to off-line instruments, at-line instruments are always available because of their dedication to a specific application and clear ownership of the operation. At-line analysis allows adaptation of the process conditions depending on the frequency of sampling. It does not allow real process control.

Online Analytics

Online analytics is the only way of controlling processes in real time because it allows correlating measurement results, processing parameters, and product characteristics. Preferably any measurement is done directly, but in some cases automated sampling is necessary to apply a technology. Online analytics has its highest value for applications targeted at properties that can be easily influenced by the process. Examples are the water content after a dryer, the particle size distribution after a homogenizer, or the composition after a mixer. Probably the most important process parameter in the food industry is the water content. Water has a great influence on the stability of food products during storage, which can be either microbiological or physicochemical. Additionally, narrow regulatory limits are in place because of economic aspects or food safety. A wide range of techniques, such as near infrared (NIR) spectroscopy, mid-infrared (MIR) spectroscopy, refractometry, capacitance sensors, microwave absorption, and nuclear magnetic resonance, has been employed for the nondestructive determination of water during production. NIR is probably the most successful at this time because the water absorption band at 1,940 nanometers (nm) is not as greatly influenced by other variations of a given product. Benson and others (2001) describe very well the specific use of filter-based process analyzers for water or moisture determination.
Figure 1.2. Near infrared analyzer MM710 to determine the water content in food (Infrared Engineering).
Figure 1.2 illustrates a process analyzer, the MM710 from Infrared Engineering, Maldon, United Kingdom, used for the online determination of the moisture content in food. Most of the analyzers today are based on the application of a filter wheel, but in the future we can expect an increase in new technologies like Fourier transform near infrared (FT-NIR) or diode-array-based detectors. The advantage of filter-based technology for process analyzers is its proven robustness and low maintenance. One major issue for the use of near infrared spectroscopy is the need to calibrate the sensor via a reference method. As shown by Reh (2004), the analytical results from the reference method need to link to the signal measured by the sensor, which is, in the case of near infrared spectroscopy, the absorption of the water molecule. Ideally this absorption should not be influenced by the structure of the product on the production line. The quality of the reference method is the most important element for a good calibration. We have therefore dedicated a specific chapter in this book to the relation between the reference method and the calibration of the sensor.
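Graph 1.1 (below) shows such a calibration for moisture in skimmed milk powder, with the analyzer's gauge output plotted against the reference moisture value. As a minimal, illustrative sketch only (the paired readings below are invented placeholders, not data taken from the graph), the calibration line and its standard error can be obtained by ordinary least squares:

```python
import numpy as np

# Paired readings for the same samples: analyzer gauge output and the
# laboratory reference moisture value (%). Values are illustrative.
gauge     = np.array([2.31, 2.75, 3.10, 3.42, 3.88, 4.25, 4.61])
reference = np.array([2.28, 2.70, 3.15, 3.40, 3.92, 4.20, 4.65])

# Least-squares calibration line: predicted moisture = slope * gauge + intercept
slope, intercept = np.polyfit(gauge, reference, 1)
predicted = slope * gauge + intercept

residuals = reference - predicted
sec = np.sqrt(np.sum(residuals**2) / (len(reference) - 2))   # standard error of calibration
r = np.corrcoef(gauge, reference)[0, 1]                       # correlation coefficient

print(f"slope={slope:.3f}  intercept={intercept:.3f}  r={r:.3f}  SEC={sec:.3f} % moisture")
```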
Graph 1.1. Determination of moisture in skimmed milk powder using a near infrared process analyzer (gauge output plotted against the reference moisture value, %).
In Dionisi and others (1998), reference methods for the determination of total fat are reviewed. The total fat content comes second in importance for nondestructive compositional analysis. Because fat is defined via the definition of a reference method, it is sometimes difficult to relate it to an indirect signal. In consequence, variation in the origin of the fat or oil can impact the performance of the measurement. Graph 1.1 presents a typical calibration for moisture determination in skimmed milk powder with an industrial filter-based near infrared process analyzer. Other industrial examples, besides dairy products, are soluble coffee, potato chips, sugar confectionery, tobacco, cereals, grains, pasta, and cookies. The most critical parameters for this application are the distance of the sensor from the product, the measurement time, the sample presentation, and finally, as already mentioned, the choice of the reference methodology, including sampling. The composition of milk products has been determined for quite some time using milk analyzers based on mid-infrared absorption. In the past these instruments were based on filter technology (Lefier et al. 1996), whereas today more and more Fourier transform infrared (FT-IR) technology is applied (Lanher 1996). Filter-based analyzers principally determine only the SNF and fat content of fresh milk. FT-IR analyzers today are used in the production of a variety of milk products, ice cream, beverages, and wine.
Figure 1.3. Mid-infrared analyzer to control the composition of liquid milk online (Foss Electric).
Additional parameters such as protein, several carbohydrates, or certain additives can be quantified. FT-IR analyzers have been used in the dairy industry as process analyzers for some time (Reh 2001). Figure 1.3 shows a ProcesScan FT process analyzer from Foss Electric, Hillerød, Denmark. This instrument quantifies fat, protein, lactose, and water content continuously during production. The instrument is operated in bypass because the FT-IR technology requires very well controlled sample preparation to provide accurate results. A frequency of one measurement per 30 seconds allows the standardization process to be optimized and very narrow targets to be achieved for the final product. Curda and Kukackova (2004) have used NIR to follow processed cheese manufacturing via the assessment of dry matter, fat content, and crude protein content. NIR can be used in combination with fiber optics without any significant sample preparation. On the other hand, variations of the process conditions should be included in the calibration model to avoid false predictions in the case of excessive variations of parameters such as temperature or the particle size of the fat globules.
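A minimal sketch of how such process-condition variation can be folded into a multivariate calibration, assuming a calibration set collected deliberately across the expected range of temperature and fat-globule size; the random placeholder data and the number of latent variables below are illustrative assumptions, not values from the cited work:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Illustrative calibration set: 120 spectra (500 wavelengths) of processed
# cheese spanning the expected temperature and fat-globule-size range, with
# a reference dry matter value (%) for each sample.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(120, 500))
dry_matter = rng.uniform(40.0, 55.0, size=120)

# Partial least squares regression is a common way to calibrate such spectra;
# the number of latent variables is a tuning choice.
pls = PLSRegression(n_components=8)
predicted = cross_val_predict(pls, spectra, dry_matter, cv=10).ravel()

rmsecv = np.sqrt(np.mean((predicted - dry_matter) ** 2))
print(f"RMSECV = {rmsecv:.2f} % dry matter")
```

Because such a calibration remains valid only within the range of conditions it has seen, it would be re-checked whenever temperature or particle size drifts outside the range covered by the calibration set.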
Computer vision is a technology undergoing rapid growth in the food industry. Brosnan and Sun (2004) reviewed this technology recently and identified the analysis of meat, fish, pizza, cheese, and bread as major applications. Tan (2004) specifically reviewed the application of computer vision in meat quality evaluation. Based on several results, Tan concluded that color image processing is a useful technique for meat quality evaluation, allowing efficient characterization of muscle color, marbling, maturity, and muscle texture. To a certain extent, parameters such as textural attributes, sensory scores, and cooked-meat tenderness could be calibrated. One major advantage of the technology compared to human grading is the elimination of inconsistencies between different testers and thereby an increase in consistent quality. Lamb tenderness was predicted by Chandraratne and others (2006) using image surface texture features. Results of texture analysis were correlated with the data from a 3-CCD color digital camera. Results were best using artificial neural networks, as expected for complex biological systems with numerous influencing parameters. Other applications are the sorting of agricultural products such as fruits, vegetables, or grains. Leemans and Daestain (2004) reported a sorting method for apples based on image analysis. It illustrates the complexity of the grading process and how difficult it is to fix the borders between classes. Definition of the borders between classes becomes even more important for food products, for example, for the automated elimination of cookies from the production line. Hatcher and others (2004) described one application in this area: they applied image analysis to sort oriental noodles with black spots out of the production lot. In this case, the eliminated product most of the time represents an operating loss, because there are very limited possibilities to use it in another way. In another agricultural application, Kilic and others (2007) developed a classification system for beans using a computer vision system and artificial neural networks based on size and color quantification of the samples. The system was calibrated with a training set of 69 samples, validated with another set of 71 samples, and then tested with a further group of samples, achieving an overall classification performance of about 91%. Vision systems are replacing human inspection and in most cases provide better performance at even higher line throughput. Commercially, these systems are very useful for products where either the visual aspect of the product is very important or where the safety of the products must be assured.
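A minimal sketch of this kind of color-and-size feature classification, assuming segmented bean images are already available; the feature set, network size, and random placeholder data are illustrative assumptions and do not reproduce the system of Kilic and others (2007):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def bean_features(rgb_image, mask):
    """Mean color and projected area of one segmented bean.

    rgb_image: H x W x 3 array; mask: H x W boolean array marking the
    pixels belonging to the bean.
    """
    pixels = rgb_image[mask]              # N x 3 color values inside the bean
    mean_rgb = pixels.mean(axis=0)        # average R, G, B
    area = float(mask.sum())              # bean size in pixels
    return np.append(mean_rgb, area)

# Placeholder training data: 140 beans, 4 features each, 3 quality classes.
rng = np.random.default_rng(0)
features = rng.normal(size=(140, 4))
labels = rng.integers(0, 3, size=140)

X_train, X_val, y_train, y_val = train_test_split(
    features, labels, test_size=0.5, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
```

In a line application, the same features would be computed for every object detected by the camera, and the trained classifier would drive the accept/reject decision.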
A typical application is the control of the completeness of food products like pizza. Du and Sun (2005a) studied the spread of pizza sauce and the correct presence of the toppings on a pizza using color vision. Munkevik and others (2007) extended the application of a computer vision system to appearance-based descriptive sensory evaluation of meals. This application illustrates the potential of these systems for replacing or reducing sensory analysis. Food safety aspects include the detection of foreign bodies or of spoilage of either a chemical or a microbiological nature. Monitoring should only be seen as an additional assurance because, in general, the design of the production facility has the largest impact on limiting the presence of foreign bodies or avoiding any type of spoilage. The increasing number of applications can be explained by the cost reduction for cameras and computer equipment and the further development of adapted software running these systems. Additionally, cameras able to operate in the infrared region (Wen and Tao 2000) have become available, providing additional information and analytical opportunities. Park and others (2006) illustrated the use of hyperspectral imaging, or imaging spectroscopy, for the detection of surface fecal contamination of chicken carcasses. This application illustrates the possibility of combining imaging capabilities with spectroscopic information, which could be applied even at quite high processing speeds. Kim and others (2005) studied the automated detection of fecal contamination of apples based on multispectral fluorescence image fusion. The application was able to detect 100% of the spots on apples artificially contaminated with cow feces. Fluorescence tools are very sensitive and able to detect contaminations not visible to the naked eye. In another application, Katsumata and others (2007) used photoluminescence to evaluate cereals for quality purposes, allowing, for example, glutinous rice to be differentiated from nonglutinous rice. Noda and others (2006) used energy dispersive X-ray fluorescence (ED-XRF) to determine the phosphorus content of potato starch. This is of interest because potato starch is rich in starch phosphate, which additionally can be related to textural attributes. The work showed that the starch phosphate content predicted by ED-XRF can be related to the peak viscosity of a potato starch paste. The validity of the calibration, built on 20 samples, was checked with a validation set of 15 samples. In a similar approach, Perring and others (2005) have shown the potential of wavelength dispersive X-ray fluorescence to determine several minerals in infant cereal matrices.
Figure 1.4. ED-XRF analyzer MiniPal 4 to determine minerals in food products (PANalytical).
They were able to determine sodium, magnesium, phosphorus, potassium, and calcium as macroelements, and manganese, iron, and zinc as trace elements. It is obvious that the limit of quantification, the limit of detection, and the observed repeatability will depend on the level of the measured mineral and on the matrix of the product it is in. Figure 1.4 shows an ED-XRF analyzer MiniPal 4 from PANalytical, Almelo, Netherlands, that can be used for the applications described above. Physical parameters are another area increasingly tested and applied in industry, especially because textural attributes are heavily related to consumer perception of food products. In every application, a model has to be developed linking measurable parameters to textural attributes. A number of these applications are described throughout this book. The reliability of these models is often weaker than that of models for chemical applications, for several reasons. In some cases, the instrument measures a signal related to the chemistry of the product, which is then related to a physical observation causing variation of texture. In other applications, the precision of the textural method or physical test might limit the prediction capability of the nondestructive technique. In other cases, in-line measurements and off-line results of the reference methodology might not correlate. Singh and others (1997) evaluated a refractometer for total solids, a pycnometer for density, and a pH meter and a viscometer in the continuous processing of fruit preparations and yogurt-based beverages. The viscometer was based on the damping of an oscillation due to the changing viscosity.
Figure 1.5. Pilot plant installation with in-line particle sizer (see left side of picture) for a fermentation process (Messtechnik Schwartz GmbH, Düsseldorf, Germany).
the changing viscosity. All sensors gave satisfactory results applicable to process control. The first three methods related well to the calibration method. Only the viscosity measurement could not be related, because the measured viscosity depends on the instrumental setup and measurement conditions. In quite a few cases, the viscosity of food materials changes during the sampling period prior to the reference measurement. Allais and others (2006) used fluorescence spectroscopy to study the relationship among density, color, and texture of ladyfinger batters and biscuits. They were able to relate fluorescence spectra with macroscopic properties, such as density, hardness, and springiness, and the authors concluded that, after further validation, the method could be applied online in an industrial setting. It seems that some biological (nicotinamide adenine dinucleotide [NADH] content) or chemical (egg content) parameters are linked to structural attributes, which consequently allows the prediction of texture by spectroscopic methods. Particle sizing is very often overlooked when nondestructive food testing is discussed. We therefore included two chapters tackling this subject. Figure 1.5 illustrates an on-line image analyzer from Messtechnik Schwartz GmbH, Düsseldorf, Germany, able to characterize
particles in a process. Automated image analysis allows processes such as crystallization, homogenization, agglomeration, sieving, or dissolution to be followed. Apart from particle size distributions, other aspects such as elongation, sphericity, presence or absence of unexpected particles, abrasion of crystals, or particle shape can be studied. In another example, Hepworth and others (2004) determined the bubble size distributions in beer by applying computer vision. The aim of the research was to develop a system able to follow bubble nucleation, bubble growth, and bubble velocities in beer after it was poured into a glass. Using this method one can optimize beer processing in order to better match consumer expectations. Sometimes simpler instrumental approaches can be used to study food nondestructively. Nunes and others (2006) performed a study of milk using microwave spectroscopy with frequencies between 1 and 20 gigahertz (GHz). The technology allowed milk composition to be roughly determined, although some doubt was raised because the spectra vary significantly with spoilage or other physicochemical changes. That same sensitivity, however, means the equipment is well suited to studying physicochemical modifications caused by microbiological spoilage. In other studies, Tanaka and others (2005) analyzed the dielectric properties of soy sauce and Everard and others (2006) the dielectric properties of processed cheese, and could relate them to the composition of the product. In the second study, 16 cheeses were characterized for their dielectric properties at frequencies from 0.3 to 3 GHz and temperatures between 5°C and 85°C. The results of the statistical model suggest that dielectric measurements can be used as a quality control tool to measure the moisture content and the inorganic salt content of processed cheese. To complete this section on production control, three additional examples should illustrate technologies we have not included in specific chapters in this book. This is not a judgment of their value, and the same rules apply to their use as to the technologies we have covered in more detail. Bairi and others (2007) described a simple method for the determination of the thermal diffusivity of foods based on an analytical solution of the 1D Fourier equation applied to a cylinder. It is obvious that in the food industry, temperature control during production, storage, and transport is essential. Castillo and others (2005) used a simple optical sensor technology to measure the whey fat concentration in cheese making. The method is based on the determination of responses
of light side-scattering and transmission using a fiber optic spectrometer. Specific wavelength ratios can then be used to predict parameters, such as the whey fat concentration. This method can be easily adapted to an industrial process where instrumentation with specific filters could also be applied. In the last example, Chen and others (2004) measured the electric resistance of tubing to detect the fouling of milk production lines and the effect of the cleaning process. This is one application where the placement of the sensor is very important. The sensor should be placed where fouling occurs first, so that it can be used as an indicator for initiating the cleaning process.
Nondestructive Sensors for Product Development and Food Research Nondestructive measurement of products during a treatment has been a huge step forward in understanding food processing. It allows a single product to be followed through the process without any sampling procedure stopping the process or modifying the measured parameter as a result of sample preparation. Magnetic resonance imaging (MRI), ultrasound spectroscopy, and dielectric imaging are technologies that are now available to study product changes during cooking, drying, freezing, or thawing. Because of the calculation speed of computers, the higher resolution of the equipment, and the shorter response time of the sensors, a great deal of information can be collected. This often allows the development of models for the analyzed process. MRI could be used for process control requirements but faces the difficulty that it can only be operated through a limited number of materials. Therefore, it has been applied more successfully to the nondestructive study of food processes. Lucas and others (2005) characterized the ice gradients in a dough stick during freezing and thawing. This type of application allows the influence of process parameters on water distribution in the dough to be followed and, as a consequence, optimization of the process conditions. Measuring the water distribution in a product is practically only possible by nondestructive testing, and MRI is one of the key technologies. Thybo and others (2004) could predict sensory texture attributes of cooked potatoes with nuclear magnetic resonance (NMR) imaging of raw potatoes. The group suggests that MRI relates to the water distribution and some anatomic structures within the raw
potatoes, which are of importance for the perceived textural properties of the cooked potatoes. Veliyulin and others (2006) used nondestructive NMR imaging to study the bursting of the belly of herrings. Belly bursting relates to the decomposition of feed consumed by the fish prior to capture during the heavy feeding season. In the case of belly bursts, the fish is often no longer consumable. Xing and others (2007) used NMR imaging to study the drying of pasta by measuring its moisture distribution. The investigators were able to achieve interesting results on the diffusion of water under different drying conditions, which can be used to understand and optimize industrial drying. Nondestructive texture analysis of porous cereal products was performed by Juodeikiene and Bawsinskiene (2004) using a low frequency acoustic spectrometer. In the study, structural and mechanical properties of cereal products such as wafer sheets, crisp bread, crackers, and ring-shaped rolls were estimated according to the amplitude of a penetrating acoustic signal. This type of application is of interest for product development and industrial process control because the measured parameters are of high relevance for quality as perceived by the consumer. Additionally, these texture parameters are easily modified by the processing parameters. In a similar approach, Gan and others (2006) looked at noncontact ultrasonic quality measurements of food products during processing, giving the example of monitoring a coagulation process. Noncontact operation is a very important innovation for ultrasound applications because in the past the instrument had to be coupled to a process line, limiting these applications to liquid matrices. Resa and others (2007) studied the monitoring of lactic acid fermentation in culture broth using ultrasonic velocity. They demonstrated the use of ultrasound for process control purposes by relating ultrasound velocity to bacterial catabolism. This technique is especially adapted to biotechnological processes because they are often operated as batch processes. Ultrasonic velocity is best suited to following process changes, although physical parameters such as temperature, pressure, or particle size influence the accuracy of the measurement. Other studies looked into monitoring of the milk gelation process (Nassar et al. 2004), the freezing process of food such as gelatin, chicken, or beef (Sigfusson et al. 2004), monitoring of the specific gravity of food batters (Fox et al. 2004), and evaluation of the turgidity and hydration of orange peel (Camarena and Martinez-Mora 2006). These examples show that ultrasound is most useful in the determination of physical changes during processing.
The quality and authenticity of food products are becoming more and more important with free trade between countries. Karoui and others (2006) have reviewed the use of nondestructive techniques for the rapid authentication of dairy products. Dairy products are traded in high volumes and at relatively high prices and are, therefore, sometimes vulnerable to adulteration. Standard chemical or physical analysis is slow, expensive, and does not always provide all the elements required for proper authentication. Alternative techniques, such as near infrared spectroscopy, mid-infrared spectroscopy, front face fluorescence spectroscopy, or nuclear magnetic resonance coupled with chemometrics might not provide all the required information either, but could be a more rapid way of identifying products with significant differences versus a defined standard. Standard analytics can then be used for validation purposes. In another review, Karoui and others (2006) looked at methods to evaluate egg freshness in research and industry. Besides destructive analysis, they identified near infrared, mid-infrared, and fluorescence spectroscopy as potential techniques to screen large numbers of eggs for freshness. For suspicious products, according to Karoui and De Baerdemaeker (2006), further physicochemical analysis is required to establish the cause of an out-of-norm classification. In a similar way, these technologies can be used to monitor raw material quality. Another area for application of spectroscopic techniques is the assessment of fruit or vegetable quality. The quality of tomatoes during storage and maturation was studied by van Dijk and others (2006) using kinetic and near infrared models to describe firmness, loss of firmness, loss of moisture, and pectin degrading enzymes. Zude and others (2006) predicted apple fruit flesh firmness and soluble solids content by measurement on the tree and during shelf life. This shows that with the help of a miniaturized visible/near infrared (VIS/NIR) spectrometer the analysis can be brought to the field, allowing the time of harvest to be defined in real time. Measurement of color changes by video image analysis was used by Lana and others (2006) to describe the effects of temperature during storage and ripening. In a similar approach, Jha and others (2007) used color measurement for the nondestructive evaluation of mango maturity. The quality of the prediction will depend on the information collected by the instrument and on whether this information describes the predicted parameter, for example, maturity. In the case of mango, the color information seems to be sufficient, whereas in other cases information related to the chemical composition collected by near infrared will significantly improve the prediction. FT-Raman spectroscopy was used
by Kimbaris and others (2006) to quantify unsaturated acyclic components in garlic oil. FT-Raman spectroscopy, together with the already mentioned mid-infrared and near infrared spectroscopy, is part of the spectroscopic methods mainly focused on chemical composition. Near infrared spectroscopy has its strength in the quantification of major ingredients and can be easily applied to heterogeneous materials. The other two techniques are often used to determine ingredients at lower concentrations because of the specificity of their signals. Conclusions This chapter shows the potential of nondestructive food testing for industry and research. It illustrates the huge advancements that have been, and will be, made as a result of advances in instrumentation and automation, and the improved performance of modern sensors. The latter point is especially valid for imaging technologies, where technological development has been particularly strong. It can be foreseen that decreasing prices and the resulting increased use of nondestructive testing equipment will lead to more applications and a better understanding of processes, which will result in food products that are safer and of improved quality. Acknowledgement The author would like to express his gratitude to Elizabeth Prior at Nestlé Research Center for her help in finalizing this chapter. References Allais I, R-B Edoura-Gaena, and E Dufour. 2006. Characterisation of lady finger batters and biscuits by fluorescence spectroscopy: relation with density, colour and texture, Journal of Food Engineering, 77, pp. 896–909. Bairi A, N Laraqi, and JM Garcia de Maria. 2007. Determination of thermal diffusivity of foods using 1D Fourier cylindrical solution, Journal of Food Engineering, 78, pp. 669–675. Bedson P and M Sargent. 1996. The development and application of guidance on equipment qualification of analytical instruments. Accred Qual Assur 1, 265–274.
Benson IB and JWF Millard. 2001. Food compositional analysis using near infrared absorption technology, pp. 137–186, In: Kress-Rogers E and CJB Brimelow. Instrumentation and sensors for the food industry, 2nd Ed., CRC Press, Cambridge, England. Brosnan T and D Sun. 2004. Improving quality inspection of food products by computer vision—a review, Journal of Food Engineering, 61, pp. 3–16. Camarena F and JA Martinez-Mora. 2006. Potential of ultrasound to evaluate turgidity and hydration of the orange peel, Journal of Food Engineering, 75, pp. 503–507. Castillo M, FA Payne, MB Lopez, E Ferrandini, and J Laencina. 2005. Optical sensor technology for measuring whey fat concentration in cheese making, Journal of Food Engineering, 71, pp. 354–360. Chandraratne MR, S Samarasinghe, D Kulasiri, and R Bickerstaffe. 2006. Prediction of lamb tenderness using image surface texture features, Journal of Food Engineering, 77, pp. 492–499. Chen XD, DXY Li, SXQ Lin, and N Oezkan. 2004. On-line fouling/cleaning detection by measuring dielectric resistance-equipment development and application to milk fouling detection and chemical cleaning monitoring, Journal of Food Engineering, 61, pp. 181–189. Curda L and O Kukackova. 2004. NIR spectroscopy: a useful tool for rapid monitoring of processed cheeses manufacture, Journal of Food Engineering, 61, pp. 557–560. Dionisi F, B Hug, and C Reh. 1998. Fat extraction from foods: Classical methods and new developments, Recent Res. Devel. In: Oil Chem., 2, pp. 223–236. Du C-J and DW Sun. 2005a. Pizza sauce spread classification using colour vision and support vector machines, Journal of Food Engineering, 66, pp. 137–145. Du C-J and DW Sun. 2005b. Comparison of three methods for classification of pizza topping using different colour space transformations, Journal of Food Engineering, 68, pp. 277–287. Everard CD, CC Fagan, CP O'Donnell, DJ O'Callaghan, and JG Lyng. 2006. Dielectric properties of process cheese from 0.3 to 3 GHz, Journal of Food Engineering, 75, pp. 415–422. Fox P, P Probert Smith, and S Sahi. 2004. Ultrasound measurements to monitor the specific gravity of food batters, Journal of Food Engineering, 65, pp. 314–324. Gan TH, P Pallav, and DA Hutchins. 2006. Non-contact ultrasonic quality measurements of food products, Journal of Food Engineering, 77, pp. 239–247. Guidance for industry. 2004. PAT—A framework for innovative pharmaceutical development, manufacturing and quality assurance, FDA, Pharmaceutical CGMPs, September. Hatcher DW, SJ Symons, and U Manivannan. 2004. Developments in the use of image analysis for the assessment of oriental noodle appearance and colour, Journal of Food Engineering, 61, pp. 109–117. Hepworth NJ, JRM Hammond, and J Varley. 2004. Novel application of computer vision to determine bubble size distributions in beer, Journal of Food Engineering, 61, pp. 119–124. Jha SN, S Chopra, and ARP Kingsly. 2007. Modeling of color values for non-destructive evaluation of maturity of mango, Journal of Food Engineering, 78, pp. 22–26.
Juodeikiene G and L Bawsinskiene. 2004. Non-destructive texture analysis of cereal products, Journal of Food Engineering, 37, pp. 603–610. Karoui R and J De Baerdemaeker. 2006. A review of analytical methods coupled with chemometric tools for the determination of the quality and identity of dairy products, Food Chemistry, in press. Karoui R, B Kemps, F Bamelis, B De Ketelaere, E Decuypere, and J De Baerdemaeker. 2006. Methods to evaluate egg freshness in research and industry: A review, Eur Food Res Technol, 22, pp. 727–732. Katsumata T, T Suzuki, H Aizawa, and E Matashige. 2007. Photoluminescence evaluation of cereals for a quality control application, Journal of Food Engineering, 78, pp. 588–590. Kilic K, IH Boyaci, H Koeksel, and I Kuesmenoglu. 2007. A classification system for beans using computer vision system and artificial neural networks, Journal of Food Engineering, 78, pp. 897–904. Kim MS, AM Lefcourt, Y-R Chen, and Y Tao. 2005. Automated detection of fecal contamination of apples based on multispectral fluorescence image fusion, Journal of Food Engineering, 71, pp. 85–91. Kimbaris AC, NG Siatis, CS Pappas, PA Tarantilis, DJ Daferera, and MG Polissiou. 2006. Quantitative analysis of garlic (Allium sativum) oil unsaturated acyclic components using FT-Raman spectroscopy, Food Chemistry, 94, pp. 287–295. Kueppers S and M Haider. 2003. Process analytical chemistry—future trends in industry, Anal Bioanal Chem, 376, pp. 313–315. Lana MM, LMM Tijskens, and O van Kooten. 2006. Effects of storage temperature and stage of ripening on RGB colour aspects of fresh-cut tomato pericarp using video image analysis, Journal of Food Engineering, 77, pp. 871–879. Lanher BS. 1996. Evaluation of Aegys MI 600 Fourier Transform Infrared milk analyzer for analysis of fat, protein, lactose and solid non-fat: a compilation of eight independent studies, J. of AOAC Int., 79 (6), pp. 1388–1399. Leemans V and M-F Daestain. 2004. A real time grading method of apples based on features extracted from defects, Journal of Food Engineering, 61, pp. 83–89. Lefier D, R Grappin, and S Pochet. 1996. Determination of fat, protein and lactose in raw milk by Fourier Transform Infrared Spectroscopy and by analysis with a conventional filter-based milk analyzer, J. of AOAC Int., 79 (3), pp. 711–717. Lucas T, A Grenier, S Quellec, A Le Bail, and A Davenel. 2005. MRI quantification of ice gradients in dough during freezing or thawing processes, Journal of Food Engineering, 71, pp. 98–108. Munkevik P, G Hall, and T Duckett. 2007. A computer vision system for appearance-based descriptive sensory evaluation of meals, Journal of Food Engineering, 78, pp. 246–256. Nassar G, B Nongaillard, and Y Noel. 2004. Study by ultrasound of the impact of technological parameters changes in the milk gelation process, Journal of Food Engineering, 63, pp. 229–236. Noda T, S Tsuda, M Mori, S Takigawa, C Matsuura-Endo, S-J Kim, N Hashimoto, and H Yamauchi. 2006. Determination of the phosphorus content in potato starch
using an energy-dispersive X-Ray fluorescence method, Food Chemistry, 95, pp. 632–637. Nunes AC, X Bohigas, and J Tejada. 2006. Dielectric study of milk for frequencies between 1 and 20 GHz, Journal of Food Engineering, 76, pp. 250–255. Park B, KC Lawrence, WR Windham, and DP Smith. 2006. Performance of hyperspectral imaging system for poultry surface fecal contaminant detection, Journal of Food Engineering, 75, pp. 340–348. Perring L, D Andrey, M Basic-Dvorzak, and J Blanc. 2005. Rapid multimineral determination in infant cereal matrices using wavelength dispersive X-ray fluorescence, Journal of Agricultural and Food Chemistry, 53 (12), pp. 4696–4700. Reh C. 2001. In-line and off-line FTIR measurements, pp. 213–232, In: Kress-Rogers E and CJB Brimelow, Instrumentation and sensors for the food industry, 2nd Ed., CRC Press, Cambridge, England. Reh C, SN Bhat, and S Berrut. 2004. Determination of water content in powdered milk, Food Chem., 86, pp. 457–464. Resa P, T Bolumar, L Elvira, G Perez, and F Montero de Espinosa. 2007. Monitoring of lactic acid fermentation in culture broth using ultrasonic velocity, Journal of Food Engineering, 78, pp. 1083–1091. Sigfusson H, GR Ziegler, and JN Coupland. 2004. Ultrasonic monitoring of food freezing, Journal of Food Engineering, 62, pp. 263–269. Singh PC, RK Singh, RS Smith, and PE Nelson. 1997. Evaluation of in-line sensors for selected properties measurements in continuous food processing, Food Control, 8 (1), pp. 45–50. Tan J. 2004. Meat quality evaluation by computer vision, Journal of Food Engineering, 61, pp. 27–35. Tanaka F, K Morita, P Mallikarjunan, Y-C Hung, and GOI Ezeike. 2005. Analysis of dielectric properties of soy sauce, Journal of Food Engineering, 71, pp. 92–97. Thybo AK, PM Szcypinski, AH Karlsson, S Donstrup, HS Stodkilde-Jorgensen, and HJ Andersen. 2004. Prediction of sensory texture quality attributes of cooked potatoes by NMR-imaging (MRI) of raw potatoes in combination with different image analysis methods, Journal of Food Engineering, 61, pp. 91–100. Van Dijk C, C Boeriu, F Peter, T Stolle-Smits, and LMM Tijskens. 2006. The firmness of stored tomatoes (cv. Tradiro). 1. Kinetic and near infrared models to describe firmness and moisture loss, Journal of Food Engineering, 77, pp. 575–584. Veliyulin E, HS Felberg, H Digre, and I Martinez. 2006. Non-destructive nuclear magnetic resonance image study of belly bursting in herring (Clupea harengus), Food Chemistry, in press. Wen Z and Y Tao. 2000. Dual-camera NIR/MIR imaging for stem-end/calyx identification in apple defect sorting, Transactions of the ASAE, 43 (2), pp. 449–452. Xing H, PS Takhar, G Helms, and B He. 2007. NMR imaging of continuous and intermittent drying of pasta, Journal of Food Engineering, 78, pp. 61–68. Zude M, B Herold, J-M Roger, V Bellon-Maurel, and S Landahl. 2006. Non-destructive tests on the prediction of apple fruit flesh firmness and soluble solids content on the tree and in shelf life, Journal of Food Engineering, 77, pp. 254–260.
Chapter 2 The Influence of Reference Methods on the Calibration of Indirect Methods Heinz-Dieter Isengard
Introduction Direct methods, also called primary methods, measure the property of the sample or an analyte as such. Such direct or primary methods often serve as reference methods. Primary methods may be relatively complicated and time-consuming, and they may require sophisticated instruments and experienced personnel or be very expensive. The use of other methods may therefore be advantageous to avoid these drawbacks. These other methods, however, do not necessarily measure the analyte or the property of the sample itself, but may measure something that depends on the extent of the property or the concentration or amount of the analyte. They are therefore called indirect or secondary methods. Secondary methods may work with simpler and cheaper equipment and may be easy to carry out. In other situations, the main advantage is the time saved per analysis. The secondary method may even be applicable in-line, or sampling may be avoided altogether. On the other hand, an indirect method does not measure the sample property or the analyte itself. It needs a relation or correlation to a direct method to which it is referred (and which is therefore the reference method). The value measured must allow a conclusion to be drawn about the property of the sample or the concentration or amount of the analyte.
Calibration The property or analyte concentration of a sample must be measured with a direct method, which serves in this sense as the reference method. The values found by this technique are often called true values or laboratory values, because they are usually measured in an analytical laboratory in a more elaborate and time-consuming way, rather than near a production line. Samples with known properties are then measured by the secondary method, and the values obtained are related to the true values obtained by the direct technique. This is done with several samples having different properties or analyte concentrations. In this way, value pairs are obtained. The indirect values are plotted versus the direct values, and a regression curve (usually a straight line) is fitted. This is the calibration line. A sample with unknown property or analyte concentration can now be measured by the indirect method. From the value measured, the property or analyte concentration can be read off from the calibration line. When chemometric methods are applied, the analytical data points are calculated from many measurements. These values are often called predicted values. The calibration line is therefore the graph of the predicted values versus the true values obtained by the reference method. This calibration line should ideally be a straight line with a slope of 1 passing through the origin of the coordinate system. The scattering of data points around this line, expressed as the regression coefficient, is a measure of the precision of the method.
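As a minimal sketch of the procedure just described, the following Python snippet fits a calibration line to a set of value pairs and then reads off the property of an unknown sample from its secondary reading. The reference values and instrument signals are invented, illustrative numbers, not data from this chapter.

```python
# Minimal calibration sketch: indirect (instrument) signal regressed on
# reference (true) values, then inverted to predict an unknown sample.
import numpy as np

reference = np.array([12.5, 12.9, 13.0, 13.2, 13.5, 14.0])        # e.g., water content by KFT, g/100 g
signal    = np.array([0.412, 0.431, 0.436, 0.447, 0.462, 0.488])  # secondary (e.g., NIR) readings

slope, intercept = np.polyfit(reference, signal, 1)   # calibration line: signal = slope*reference + intercept
r = np.corrcoef(reference, signal)[0, 1]              # scatter about the line (precision)

unknown_signal = 0.455
predicted = (unknown_signal - intercept) / slope      # read the property off the calibration line
print(f"r = {r:.4f}, predicted value = {predicted:.2f} g/100 g")
```

In practice, of course, many more samples and a chemometric model built on whole spectra would normally be used; the sketch only shows the logic of the value-pair calibration.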
Correctness of Results It is always possible to construct a regression line through a number of data points. For every secondary measurement, a corresponding value for the analyte concentration will then be found. In the case of chemometric methods like near infrared (NIR) spectroscopy, where the calibration is based on a large number of measurements, the regression coefficient may even be high, indicating a high precision of the measurements. This, however, is no proof of high accuracy of the predicted values. If the reference method used is not really adequate, a mathematically good calibration may be established, based,
however, on erroneous values. These erroneous reference values will then be translated into predicted values with high precision and repeatability. Consequently, they contain the same errors as the reference method. A secondary method cannot yield more accurate results than the reference method used. It is therefore a vital condition for correct results obtained by the indirect method that the reference values be correct. The reference values must be true.
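The point can be illustrated with a small, hypothetical simulation: if the reference method carries a constant bias, a secondary method calibrated against it reproduces that bias faithfully, however small the scatter about the calibration line. All of the numbers below are assumptions chosen only for illustration.

```python
# Illustrative simulation (not from the chapter): a precise secondary method
# calibrated against a biased reference inherits the bias of the reference.
import numpy as np

rng = np.random.default_rng(0)
true_water = rng.uniform(10.0, 15.0, 50)                        # "true" water contents, g/100 g
signal = 0.03 * true_water + 0.05 + rng.normal(0, 1e-4, 50)     # very precise secondary signal

biased_reference = true_water + 0.8                             # reference overestimates by 0.8 g/100 g

slope, intercept = np.polyfit(biased_reference, signal, 1)      # calibration against the biased reference
predicted = (signal - intercept) / slope                        # predictions from that calibration

print("scatter about the calibration line:", np.std(predicted - biased_reference).round(4))
print("systematic error versus the true value:", np.mean(predicted - true_water).round(2))  # ~ +0.8
```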
Examples: Water Content Determination Water content—and thus dry matter—is often determined by drying techniques, particularly by drying the product at a certain temperature for a certain time in a drying oven. Drying techniques, such as classical oven drying, vacuum drying, freeze drying, infrared drying, or microwave drying, do not distinguish between water and other volatile substances. The result of all of these methods is not water content but the mass loss the product undergoes under the conditions applied. These conditions (sample size, temperature, pressure, time, energy input, criteria to stop the analysis) can in principle be freely chosen. The result depends very much on these conditions but may be very reproducible. This alone shows that this technique, leading to different results when the parameters are changed, cannot be the correct one, because water content is a sample property that has a certain, though unknown, value. From the scientific point of view, the results of drying methods should therefore not be called water content but rather mass loss on drying, with indication of the drying conditions. In recent years, the term moisture content was introduced as a compromise. It means the relative mass loss by evaporation of water (though possibly not all of the water) and other volatile compounds under the drying conditions. The problem with all drying techniques is that they do not measure water specifically. All compounds that are volatile under the analytical conditions contribute to the mass loss, even compounds that are not originally contained in the sample but are formed by chemical reactions during the analysis, particularly by decomposition reactions at higher temperatures. On the other hand, strongly bound water may escape detection.
These conflicting errors, inclusion of other volatiles on the one hand and water not detected on the other hand, may compensate for each other when the drying parameters are chosen in an appropriate way. The appropriate choice of the parameters necessitates, of course, that the true water content has been analyzed before with a method selective for water as a primary method. The parameters of the secondary method must then be chosen so that the result corresponds to the water content determined with the primary method. When the secondary method is calibrated in this way, it can be applied for this particular type of product. The calibration is product-specific, and the same parameters cannot be applied for other types of samples. The most important primary method to determine water content is the Karl Fischer titration, which is based on a chemical reaction selective for water:

ROH + SO2 + Z → ZH+ + ROSO2−   (2.1a)

ZH+ + ROSO2− + I2 + H2O + 2Z → 3ZH+ + ROSO3− + 2I−   (2.1b)

Overall reaction:

3Z + ROH + SO2 + I2 + H2O → 3ZH+ + ROSO3− + 2I−   (2.2)
Z is a base (very often imidazole), and ROH is an alcohol, usually methanol, or more recently ethanol in special reagents. In the first step, the alcohol is esterified with sulphur dioxide to form alkyl sulphite. The base ensures a practically complete reaction, as shown in Equation 2.1a. In the second step, this alkyl sulphite is oxidized by iodine to form alkyl sulphate; this reaction requires water, as shown in Equation 2.1b. The overall reaction (Equation 2.2) shows that the consumption of iodine is stoichiometrically equivalent to the water present in the sample. In many situations, water content is determined by drying techniques, most often in a classical drying oven. The results are regarded as water content, although they represent only the mass loss under the applied conditions. This has consequences if this technique is used to calibrate a secondary method like NIR spectroscopy. For a number of products, the drying results correspond quite well to the water content. In other cases, the difference may be enormous.
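As a hedged, illustrative calculation (the reagent titer, titration volume, and sample mass below are invented values), the 1:1 stoichiometry of Equation 2.2 converts the amount of Karl Fischer reagent consumed directly into a water content:

```python
# Worked example of the stoichiometry in Equation 2.2: one mole of iodine
# consumed corresponds to one mole of water in the sample (illustrative numbers).
M_WATER = 18.015                    # g/mol

titrant_volume_ml = 2.35            # Karl Fischer reagent used
titer_mg_per_ml   = 5.02            # water equivalence of the reagent, mg H2O per mL (determined separately)
sample_mass_g     = 1.250

water_mg = titrant_volume_ml * titer_mg_per_ml
water_content = 100 * (water_mg / 1000) / sample_mass_g   # g water per 100 g sample
iodine_mmol = water_mg / M_WATER                          # mmol I2 consumed = mmol H2O found

print(f"water content = {water_content:.2f} g/100 g ({iodine_mmol:.3f} mmol I2 consumed)")
```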
Table 2.1. True values (by Karl Fischer titration [KFT] and oven drying [OD]) and predicted values for six samples of wheat semolina.

            Reference values (true values),    NIR values (predicted values),
            in g/100 g                         in g/100 g
            KFT        OD                      Calibrated by KFT    Calibrated by OD
Sample 1    12.49      12.49                   12.48                12.60
Sample 2    12.88      12.99                   12.79                12.96
Sample 3    12.98      12.99                   12.88                13.13
Sample 4    13.01      13.16                   12.89                12.99
Sample 5    13.47      13.52                   13.46                13.61
Sample 6    13.97      14.23                   13.97                14.23
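A short sketch of how the agreement between predicted and reference values in Table 2.1 can be summarized as a root mean square difference; the comparison mirrors the observation in the text that the statistics are slightly better for the KFT-based calibration.

```python
# Root mean square difference between predicted (NIR) and reference values,
# using the data of Table 2.1.
import numpy as np

kft     = np.array([12.49, 12.88, 12.98, 13.01, 13.47, 13.97])
od      = np.array([12.49, 12.99, 12.99, 13.16, 13.52, 14.23])
nir_kft = np.array([12.48, 12.79, 12.88, 12.89, 13.46, 13.97])
nir_od  = np.array([12.60, 12.96, 13.13, 12.99, 13.61, 14.23])

rmse = lambda pred, ref: float(np.sqrt(np.mean((pred - ref) ** 2)))
print("RMSE, NIR calibrated by KFT:", round(rmse(nir_kft, kft), 3))   # ~0.07 g/100 g
print("RMSE, NIR calibrated by OD :", round(rmse(nir_od, od), 3))     # ~0.11 g/100 g
```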
An example for the first situation is the water content determination in wheat semolina. Figure 2.1 shows the NIR calibration line based on Karl Fischer titration as the primary method. Figure 2.2 depicts the NIR calibration line based on oven drying (2 hours at 130°C). Both figures contain the values used for calibration (calibration set [C-set]) and the values found for validation (validation set [V-set]). The lines for the C-set and the V-set are in both cases nearly identical. This is an indication of a successful calibration. A comparison of Figures 2.1 and 2.2 also shows that they are nearly identical. Table 2.1 gives examples for determinations by the primary methods (Karl Fischer titration [KFT] and drying) and the values that were predicted by NIR measurements calibrated with the corresponding primary method. The juxtaposition shows that both techniques yield practically the same results. The statistical values are, however, slightly better for the KFT-based measurements. For this product, water content can be measured by drying, and NIR determinations can be calibrated on this basis, not only on KFT. For another product, a lactoserum, things are completely different. By KFT, water content can be determined easily. Drying of the sample at 145°C leads to a continuous mass loss, far beyond the Karl Fischer value. Figure 2.3 shows the drying curve. The interesting phenomenon is, however, that the dried product still contains water, which can be found by KFT. This is due to the lactose content. The α-modification of lactose contains one mole of water per mole. This water of crystallisation
Figure 2.1. Calibration line and sample determination based on Karl Fischer titration.
Figure 2.2. Calibration line and sample determination based on oven drying.
Figure 2.3. Drying curve of Lactoserum Euvoserum at 145°C, water content of the original product determined by Karl Fischer titration, and water content of the dried product after various drying times.
cannot be detected completely by drying at this temperature. The mass loss is caused by decomposition reactions in the product, which already occur at lower temperatures. An NIR calibration on the basis of the drying method is possible. This is shown in Figure 2.4. The calibration was established using the samples dried for various times in the oven. The calibration is successful and the mass loss can be predicted. It is, however, obviously not a useful method to determine the water content. Because the original sample contains only 4.5 grams (g) of water per 100 g, water content values above this are not possible. The relevant area is highlighted in Figure 2.4. A correct calibration within the relevant water content range could be established using KFT as the reference method. The calibration line is shown in Figure 2.5.
Figure 2.4. Calibration line for mass loss determination at 145°C of Lactoserum Euvoserum with indication of the highest possible water content in the sample.
Figure 2.5. Calibration line for water content determination of samples predried at 145°C of Lactoserum Euvoserum based on Karl Fischer titration.
Conclusion Secondary methods must be calibrated against a primary or direct method. This direct method must determine the entity to be analyzed. The transfer of the values measured by the secondary method is purely a mathematical process, particularly when chemometric or other analytical tools are involved. The relevance or chemical logic of the secondary values and of the calibration is not checked. The secondary method may measure a different property of the sample. Particularly when the calibration line is good, it may lead to the false conclusion that the sample property to be measured is well analyzed. A spectacular error is possible when water content is to be measured. Often drying techniques are used for this purpose. They yield, however, a mass loss as a result. This is because all compounds that are volatile under the drying conditions contribute to the result, including those formed during drying by chemical reactions. For samples that contain volatile substances other than water or that undergo decomposition reactions, a calibration on the basis of a drying technique may be possible. Such a calibration can then, however, only predict a mass loss, not the water content of the product.
Chapter 3 Ultrasound: New Tools for Product Improvement İbrahim Gülseren and John N. Coupland
Introduction Low-intensity ultrasound is used as a nondestructive evaluation (NDE) technology in fields as diverse as oceanography, geological studies, medicine, and materials science, as well as in foods (Blitz 1963, Hill et al. 2004, Rose 2004, Coupland 2004). The term ultrasound here refers to inaudible, high frequency (that is, >20 kilohertz [kHz]), low energy sound waves that propagate as deformations in the media of interest. Because of its low energy content, ultrasound used for sensing does not influence the physical properties of the materials but is itself influenced by them. In general, ultrasonic sensors are appropriate for online use because they are relatively cheap and robust compared to most other sensing technologies and are suitable for many food applications. In recent years, a number of reviews have been published regarding ultrasound and foods (Coupland 2004, Coupland and Saggin 2003, Povey 1998, McClements 1995), so this chapter focuses mostly on recently published work. The two most common types of ultrasonic waves employed in ultrasonic studies are bulk longitudinal (L) and shear/transverse (T) waves. L-waves propagate in the same direction as the oscillation of the material elements (Figure 3.1), whereas for T-waves, propagation is perpendicular to the direction of oscillation (Figure 3.1). Liquids and gases do not support T-wave propagation over useful distances. Shear waves travel more slowly than longitudinal waves.
Figure 3.1. Diagram showing the displacement of volume elements from (a) their equilibrium position due to the propagation of (b) L-waves and (c) T-waves. Propagation is left to right in both cases.
Wave propagation is described in terms of the ultrasonic velocity (c, that is, distance traveled per unit time) and attenuation coefficient (α, that is, logarithmic loss of wave energy per unit distance). These terms can be expressed concisely as a complex wave number, k (Coupland and Saggin 2003):

k = ω/c + iα   (3.1)

where ω (= 2πf) is the angular frequency. Ultrasonic parameters are useful for sensing because they can be measured and related to the physicochemical properties of the food:

k/ω = √(ρ/E)   (3.2)

where E is the appropriate modulus and ρ the density. Ultrasonic propagation is an adiabatic process rather than an isothermal one, and so the moduli may differ from those obtained in traditional measurements where heat has time to dissipate. For L-wave propagation in low-attenuation fluids, Equation 3.2 is often written as:

c = 1/√(ρκ)   (3.3)

where κ is the adiabatic compressibility (that is, the reciprocal bulk modulus).
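The following sketch simply evaluates Equations 3.1 and 3.3 numerically; the density, compressibility, frequency, and attenuation values are assumed, order-of-magnitude figures for a water-like fluid rather than data from this chapter.

```python
# Sketch of Equations 3.1 and 3.3 with illustrative values for a water-like fluid.
import numpy as np

rho   = 998.0        # density, kg/m^3
kappa = 4.56e-10     # adiabatic compressibility, 1/Pa
c = 1.0 / np.sqrt(rho * kappa)     # Eq. 3.3: ~1482 m/s

f     = 2.0e6        # frequency, Hz
alpha = 25.0         # attenuation coefficient, Np/m (assumed)
omega = 2 * np.pi * f
k = omega / c + 1j * alpha         # Eq. 3.1: complex wave number, rad/m

print(f"c = {c:.0f} m/s")
print("k =", k)
```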
Measurement Methods The most commonly used ultrasonic measurement techniques (that is, pulsed and resonance methods) have all been reviewed in detail elsewhere (for example, Coupland 2004, Buckin and Smyth 1999, Sheen et al. 1995, 1996). Briefly, pulsed methods measure the time taken and energy lost for a pulse of ultrasound to travel through a fixed distance in the sample. Pulsed methods can either be implemented with a single transducer acting as a transmitter and receiver in pulse-echo mode (Figure 3.2a) or with separate transmitter and receiver transducers in through transmission mode (Figure 3.2b). Resonance methods measure the frequencies at which a fixed path length of sample resonates when excited by continuous-wave ultrasound (Figure 3.2c). Although
Figure 3.2. Diagram showing the alignment of transducers and ultrasonic path for (a) pulse-echo, (b) through transmission, (c) resonator, (d) reflectance, (e) guided wave, and (f) ultrasonic Doppler velocimetry measurement systems.
resonance methods tend to be the most precise in a lab setting, it may not be possible to achieve their advantages in an online setting. Pulsed measurements are often more rapid. Precise ultrasonic measurements require a fixed and well-defined path length for the sound to propagate across. The path length must be sufficient to cause measurable changes in the ultrasonic signal, yet not too
great, or diffraction of the wave and attenuation by the food will completely absorb the sound energy. This can also be a limitation for online applications, where it is often challenging to find a suitable path for an ultrasonic signal in process equipment. Finally, ultrasonic properties are highly dependent on temperature, and temperature control can often limit the precision of both online and laboratory ultrasonic measurements. Solid foods that cannot be contained in a pipe or sample cuvette are particularly difficult to analyze because it is hard to couple the ultrasonic transducer to the sample (even a thin layer of air will attenuate ultrasound completely), and it is hard to control the temperature adequately. In addition, because the transducers must be held against the food and manually supported a fixed distance apart (usually on a pair of calipers or via a stepper-motor), it is not possible to calibrate the path length with adequate precision. Two recent developments in measurement technology have increased the range of options available for ultrasonic measurements, particularly online. Noncontact Measurements Unlike most ultrasonic sensors, which must be coupled to the food (either directly or, more likely, via a delay line), noncontact ultrasonic sensors transmit ultrasonic waves through ambient air, which makes them more practical in the monitoring of online processes. However, the acoustic signal will be affected by instability in the air coupling the transducer to the food, and some compensation is required (Cho and Irudayaraj 2003a). Furthermore, any nonparallelism between the food surface and transducer will massively increase the signal losses and limit the value of the readings. Reflectance Measurements In a reflectance measurement, the properties of the food are calculated from the proportion of an ultrasonic pulse reflected at the sample surface. The main ultrasonic parameter that influences the reflection/transmission of ultrasound from one material to another is the acoustic impedance (Z):

Z = ρc   (3.4)
If an ultrasonic wave passes across a boundary between two materials with different impedances (for example, between a food and the container wall), it will be partially reflected and partially transmitted (Figure 3.2d). The amplitude (a) of the signal reflected at the interface between materials 1 and 2 can be formulated as follows:

R = ar/ai = (Z2 − Z1)/(Z1 + Z2)   (3.5)

where subscripts r and i represent the reflected and incident echoes, respectively. The reflectance coefficient can be readily measured and related to the impedance, and hence the physical properties, of the material under investigation. A reflectance measurement is particularly useful when the material is too attenuating to allow a measurement of transmitted sound (that is, shear waves in fluids or otherwise highly attenuating foods). Some recent applications of reflectance methods include the determination of solution viscosity by shear wave reflectance (Saggin and Coupland 2001c) and the estimation of foam bubble size by longitudinal wave reflectance (Kulmyrzaev et al. 2000). In both cases, the wave would be too highly attenuated by the food to allow transmission measurements. In other cases, reflectance measurements may be preferred over transmission measurements because the instrumental setup is frequently simpler and easier to implement online, particularly because a stable path length is no longer required. For example, reflectance measurements have been used to estimate the concentration of simple solutions (Saggin and Coupland 2001c), the crystallization of lipids (Saggin and Coupland 2002a), and the dissolution of powders (Saggin and Coupland 2002b). Reflectance measurements may be limited because they are only sensitive to the surface of the food; in addition, because the transducers may vary in output, they must be regularly calibrated. It is possible to generate combinations of shear and longitudinal waves as a guided wave that can travel long distances along a pipe (or other solid) by frequent reflections with the pipe wall (Figure 3.2e). Each reflection at the wall-food interface will affect the wave properties, and so the guided wave signal will depend on the food material properties in a manner similar to reflectance measurements. Guided waves are particularly useful for the analysis of equipment surfaces (such as commercial steel pipes) to detect fouling, for example (Hay and Rose 2003).
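A minimal sketch of Equations 3.4 and 3.5: the densities and velocities below are assumed, order-of-magnitude values for a polymer delay line and an aqueous food, used only to show how the amplitude reflection coefficient follows from the impedance mismatch.

```python
# Reflection at a delay line/food boundary (Eqs. 3.4 and 3.5); values assumed.
def impedance(density, velocity):
    """Eq. 3.4: acoustic impedance Z = rho * c."""
    return density * velocity

z_delay = impedance(1190.0, 2730.0)   # e.g., a polymer delay line, kg m^-2 s^-1
z_food  = impedance(1030.0, 1520.0)   # e.g., an aqueous food

R = (z_food - z_delay) / (z_food + z_delay)   # Eq. 3.5: amplitude reflection coefficient
print(f"R = {R:.3f}; reflected energy fraction = {R**2:.3f}")
```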
In ultrasonic Doppler velocimetry, the transducer is mounted at a fixed position on the pipe while the wave propagates through the flowing medium (Figure 3.2f).
Applications An important paradigm in the study of food quality is that composition (ingredients) is modified by processing to produce structures that are responsible for the functional properties of foods, particularly texture. Ultrasound has been used to measure composition, structure, and texture of foods, and in this section, we will review some of the major recent developments in these fields. Ultrasonic Measurement of Food Composition The measurement of the composition of simple binary mixtures by ultrasonic velocity measurements is the simplest and often most successful group of applications. Examples include solid fat in liquid oil (McClements and Povey 1987, 1988a), sugar solutions and fruit juices (Contreras et al. 1992), and changes in emulsion composition (Dickinson et al. 1994). Bamberger and Greenwood (2004) used this approach to measure the density of a liquid flowing in a pipe. For simple liquids, attenuation is typically low and relatively hard to measure, whereas velocity shows large and readily measurable differences. Measured velocity can be related to composition either via an empirical calibration curve or by using theoretical approaches. For example, in the analysis of a two-component system, Equation 3.3 can be rewritten as:

c² = 1/{[φρ1 + (1 − φ)ρ2][φ/(ρ1c1²) + (1 − φ)/(ρ2c2²)]}   (3.6)

where φ is the volume fraction of component 1. For a two-component system with known volume fraction and component velocities, a convenient definition of the mixture velocity becomes:

c = c1φ + c2(1 − φ)   (3.7)
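The sketch below evaluates Equations 3.6 and 3.7 for assumed component properties (an oil-like phase 1 and a water-like phase 2; the numbers are not from this chapter) and then inverts the linear rule of Equation 3.7 to estimate the volume fraction from a measured velocity.

```python
# Mixture velocity from component properties (Eqs. 3.6 and 3.7); values assumed.
import numpy as np

rho1, c1 = 920.0, 1450.0      # phase 1 (e.g., a liquid oil)
rho2, c2 = 1000.0, 1480.0     # phase 2 (e.g., an aqueous phase)

def c_urick(phi):
    """Eq. 3.6: velocity from volume-fraction-averaged density and compressibility."""
    rho_mix   = phi * rho1 + (1 - phi) * rho2
    kappa_mix = phi / (rho1 * c1**2) + (1 - phi) / (rho2 * c2**2)
    return 1.0 / np.sqrt(rho_mix * kappa_mix)

def c_linear(phi):
    """Eq. 3.7: simple volume-fraction-weighted velocity."""
    return c1 * phi + c2 * (1 - phi)

phi = 0.2
print(f"Eq. 3.6: {c_urick(phi):.1f} m/s, Eq. 3.7: {c_linear(phi):.1f} m/s")

# Inverting Eq. 3.7 to estimate composition from a measured velocity
c_measured = 1472.0
phi_est = (c_measured - c2) / (c1 - c2)
print(f"estimated volume fraction = {phi_est:.2f}")
```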
It is important to recognize that any changes in structure may overwhelm differences due to changes in composition. One good example is the apparent variation of ultrasonic velocity in salmon muscle due to the alignment of myosepta. It is necessary to account for this effect to allow
the estimation of fat content in salmon muscle by ultrasonic velocity measurement (Shannon et al. 2004). Although ultrasonic velocimetry is useful to characterize simple binary mixtures, most foods contain many more ingredients, and their ultrasonic quantification is more complex. The ultrasonic properties of food vary with frequency, but rarely enough for spectral resolution of multiple components (as is often used in infrared [IR] methods). Therefore, to measure multiple components, it is usually necessary to combine the ultrasonic method with another technique (for example, ultrasonic velocity and density measurements for ethanol determination in wine) (Resa et al. 2004) or to make measurements at different temperatures to exploit the different temperature dependency of the ultrasonic properties of different food components. For example, the speed of sound in the aqueous portion of chicken increases with temperature and solids content while the speed of sound in the fatty portion decreases with temperature. Using the method of mixtures, Chanamai and McClements (1999) were able to make measurements at two temperatures and calculate the fat, water, and solids content of a chicken sample. Similarly, the oil content in waste water from an olive oil extraction process was measured (Benedito et al. 2004) and the composition of processed foods (Simal et al. 2003). In some cases, reflectance (impedance) measurements are preferable to velocity measurements. For example, foams scatter ultrasound strongly and traditionally through transmission measurements cannot usually be made. However, Fox and others (2004) were able to use ultrasonic reflectance measurements to measure overrun in cake batter. This work is particularly interesting because they also considered bubble size distribution in their theoretical analysis. Elmehdi and others (2003a, 2003b, 2004) were able to reach similar conclusions based on attenuation measurements made in through transmission by using thin (1 to 5 centimeter [cm]) bread samples that were freeze-dried and by using bread dough. Phase transitions in lipids, sugars, salts, and water are important in food quality and can typically be readily detected and monitored by ultrasound, because the physical properties of the solid and liquid phases are very different. In some cases, this can be considered as a simple change in composition and approached using similar methods as described above (for example, McClements and Povey 1987, 1988). Martini and others (2005a, 2005b, 2005c) simultaneously measured the
temperature dependence and ultrasonic properties of six fat blends. The ultrasonic attenuation and velocity increased with increasing solid fat content (SFC) values, because velocity values are more sensitive to crystallization (Martini et al. 2005a). Furthermore, SFC data obtained from ultrasonics agreed well with the SFC data from low-resolution pulsed nuclear magnetic resonance (NMR) (Martini et al. 2005b). Ultrasound is typically more sensitive to small changes in solids than the more commonly used NMR methods, and this is particularly important in the study of droplet crystallization in emulsions. For example, the extent of supercooling in emulsified lipids (Kloek et al. 2000) and the effects of additives on droplet nucleation rate (Awad 2004) were measured ultrasonically. More recent work has examined cases where the structures formed can influence the ultrasonic properties and must also be considered. Lipid crystallization is complex because the solid can exist in a variety of different polymorphic forms and microstructures. Recently, several workers have considered whether the ultrasonic properties of a partially crystalline fat system depend simply on the SFC or also on other microstructural properties. Singh and others (2002, 2004) showed that for cocoa butter and anhydrous milk fat, ultrasonic velocity was not a simple function of SFC and argued that more stable (that is, more dense) polymorphic forms had higher ultrasonic velocities. Martini and others (2005c) showed that ultrasonic attenuation depends on crystal size and lipid microstructure. In another study by Hindle and others (2002), ultrasound was suggested to be sensitive to polymorphic changes in a cocoa butter in an oil-in-water emulsion. Beyond approximately 10% solids, many semicrystalline lipids become highly attenuating and it is difficult to make ultrasonic measurements. In their recent work on highly crystalline fats, Martini and others (2005a, 2005b, 2005c) used a novel chirp-wave signal generator and special transducers, which together had exceptional penetrating power, and they were able to make measurements across a substantial (∼8 cm) thickness of various fats with up to ∼20% solids. Saggin and Coupland (2002a) took an alternative approach and measured the impedance of a sample of semicrystalline fats by a reflectance technique and measured the solid fat content. Ultrasound has been applied less often to the study of phase transitions in water in foods (for example, freezing), but some recent work shows
the technique has some possibilities. Measurement of freezing should be a relatively straightforward application, particularly because ice has (under most conditions) none of the issues around polymorphism that are controversial in the lipids work. However, most frozen foods have a very large volume fraction of ice, and the assumptions in the simple formulations often used to relate solids content to ultrasonic velocity (as shown in Equations 3.6 and 3.7) break down. Furthermore, air bubbles are usually formed during freezing, which scatter sound strongly and can dominate the signal. Lee and others (2004) measured the ultrasonic velocity and ice content (by NMR) in frozen orange juice samples (0 to −50°C). Although an increase in ice content corresponded to an increase in ultrasonic velocity, they were unable to provide a simple relationship between these variables akin to that shown in Equation 3.7. Recently we made similar measurements in sucrose solutions (Figure 3.3) and showed that (1) the onset of freezing corresponded to an increase in ultrasonic velocity, (2) the temperature at which the increase initiated decreased with increased sugar content, and (3) the ultimate speed of sound decreased with increased sugar content (presumably because of the reduced ice content). Taking an alternative approach, Sigfusson and others (2004) measured the ultrasonic properties of a block of food parallel to the direction of heat flux. They were able to reflect part of an ultrasonic pulse from the moving ice front and thereby locate it and calculate the proportion of the food frozen. Ultrasonic Measurement of Food Structure Ultrasound can be used to characterize various scales of structure in food. Macroscale Structure Ultrasound can be used to measure millimeter scale structures in foods using imaging techniques. The time of flight of an ultrasonic pulse reflected from a food surface can be used to measure its position, and hence its shape, by scanning the transducer across a two-dimensional grid. Similarly, reflections from internal structures can reveal internal defects. Most imaging is done with L-wave sound. In most imaging operations, both the sample and transducers are immersed in a tank of water to allow good acoustic coupling as the
Figure 3.3. Speed of sound (c, m/s) in sucrose solutions as a function of temperature (T, °C) for 5, 10, 20, 30, 40, 50, 60, and 70% sucrose. The formation of ice corresponded to the discontinuity seen.
transducers move. This is obviously impractical with most nonpackaged foods; however, the recent development of air-coupled ultrasound has opened up a range of new applications. For example, Saggin and Coupland (2001b) and Cho and Irudayaraj (2003b) used noncontact ultrasound to measure the thickness of a variety of sliced meat and cheese products. One useful application of one-dimensional imaging is to position the surface of a liquid during a filling operation. Griffin and others (2001) described how ultrasonic techniques (time of flight and Doppler shift measurements) could be used to monitor and control a bottle-filling operation. Jeffries and others (2002) implemented ultrasonic sensors as part of a control system for liquid level measurements, while other workers used noncontact ultrasonic sensors to measure liquid level in containers (Gan et al. 2002). Foreign bodies (for example, fragments of glass, wood, metal, or plastic) typically have impedances significantly different from that of
the food, so they can be ultrasonically imaged as part of a quality control process. This approach was used to detect foreign bodies in bottled beverages, fruit juices, and pie filling (Zhao et al. 2003, Zhao et al. 2004). Internal defects in foods can also be imaged acoustically if there is an impedance mismatch. Benedito and others (2001) were able to detect defects and air cells in cheese. Some of the difficulties that must be overcome include nonparallelism (that is, oblique incidence), uncertainty in the location of the foreign body, and variation in the acoustic properties of the food (Hæggström and Luukkala 2001).

Microscale Structure

Ultrasonic imaging is limited in resolution by the wavelength of the sound (∼mm) and the finite size of the acoustic beam. However, smaller structures can be characterized if they scatter sound (that is, are acoustically dissimilar to the material in which they are embedded). Many colloidal-scale objects scatter sound in a frequency-dependent manner, and the spectra can be readily measured. If a theoretical prediction of the ultrasonic properties can be made as a function of the structural properties of interest (for example, particle size and concentration), it is possible to calculate details of the structure. This technology has been widely implemented in the commercial development of ultrasonic particle sizers capable of measuring a wider range of size (from 0.01 up to 1,000 micrometers [µm]) and concentration (up to 20 to 30%) than other devices (McClements and Coupland 1996). The principles of scattering theory (McClements and Coupland 1996) and ultrasonic particle sizing (Coupland and McClements 2001) have been described in detail elsewhere. Important recent developments include the extension of scattering theory to higher volume fractions and to flocculated systems (McClements 1991, Dukhin and Goetz 1996). These theoretical developments have allowed characterization of complex instability mechanisms in food emulsions. For example, the flocculation of an emulsion leads to a reduction in the scattering efficiency of the individual droplets that can be detected as changes in the attenuation coefficient (Herrmann et al. 2001, Chanamai et al. 2000). Biopolymer structure can also be characterized ultrasonically. Some workers have used ultrasonic scattering to measure the apparent size of protein aggregates (Griffin and Griffin 1990), but it is important to remember that proteins can also absorb ultrasonic energy through chemical resonance of protonation-deprotonation reactions as well as
scattering from aggregates and changes in molecular hydration (Bryant and McClements 1999). Ultrasonic attenuation and velocity were used to detect thermal denaturation in concentrated bovine serum albumin solutions (Apenten et al. 2000). However, Corredig and others (2004) showed that the ultrasonic measurements (for whey protein) were not identical to those from differential scanning calorimetry, suggesting that the ultrasonic method is responding to distinct molecular processes.

Measurement of Food Texture

Rheology involves the study of the deformation of matter in response to applied forces. Because ultrasonic propagation involves small deformations of the material in response to the sound wave and their elastic recovery, it is reasonable to consider an ultrasonic measurement a microrheological technique and to expect the readings to relate to macrorheological properties. However, ultrasound operates at much higher frequencies (∼10³ to 10⁶ times) than those commonly seen in most real-world material deformations and used in small-deformation rheological measurements. Furthermore, ultrasonic measurements do not depend purely on the elastic modulus but also on density differences and scattering from inhomogeneities. Despite these difficulties, there have been considerable successes in using ultrasound to measure the rheological and flow properties of foods. The various approaches to this problem can be categorized as (1) the use of T-waves to directly probe (or infer) the shear modulus, (2) the measurement of flow rate (or flow profile) under known conditions and back-calculation of viscosity, and (3) the measurement of L-wave properties and correlation with texture.

Shear Wave Methods

Because we are typically more concerned with the shear modulus than the bulk modulus of foods, shear ultrasonic waves (T-waves) provide a more direct method to access the appropriate modulus. However, as noted above, shear waves do not propagate well in fluids. One way around this measurement difficulty is to use reflectance (impedance) measurements at the interface between the food and a solid delay line of known properties. The proportion of ultrasonic energy reflected from the interface of two materials depends on the impedance mismatch (Equation 3.5).
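Equation 3.5, given earlier in the chapter, relates the reflected energy to the impedances of the two media; for a normally incident plane wave the standard form is R = (Z2 − Z1)/(Z2 + Z1), with a reflected energy fraction of R². The short sketch below evaluates this relation with assumed impedance values (it is an illustration only, not the authors' code); for the shear-wave reflectance method described here, the fluid's complex shear impedance takes the place of the simple longitudinal impedance used in the sketch.

# Minimal sketch (not from this chapter): energy reflected at a delay-line/food
# interface for a normally incident plane wave, using R = (Z2 - Z1) / (Z2 + Z1).
# The impedance values below are assumed, for illustration only.

def reflection_coefficient(z1, z2):
    """Amplitude reflection coefficient between media of acoustic impedance z1 and z2."""
    return (z2 - z1) / (z2 + z1)

def reflected_energy_fraction(z1, z2):
    """Fraction of the incident energy reflected back into medium 1."""
    return reflection_coefficient(z1, z2) ** 2

z_delay_line = 3.2e6   # Pa s/m, assumed value for a polymer (e.g., Perspex) delay line
z_food = 1.6e6         # Pa s/m, assumed value for a water-like food
r = reflection_coefficient(z_delay_line, z_food)
print(f"R = {r:.2f}, reflected energy fraction = {reflected_energy_fraction(z_delay_line, z_food):.2f}")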
The normalized shear reflectance for food and hydrocarbon oils was shown to be proportional to the viscosity of the oils (Saggin and Coupland 2001a) and of diluted honey (Kulmyrzaev and McClements 2000). However, mechanical relaxations that occur in the fluid between the high-frequency ultrasonic measurements and the low-frequency conventional measurements mean the relationship is complex (Kulmyrzaev and McClements 2000, Saggin and Coupland 2004). For example, concentrated sugar syrups have a glass transition frequency below the ultrasonic measurement frequency used (that is, 10 megahertz [MHz]) and therefore had a very low ultrasonically measured viscosity while their conventionally measured viscosity remained high (Kulmyrzaev and McClements 2000, Saggin and Coupland 2004). The presence of a biopolymer reduced the glass transition frequency by binding water (Saggin and Coupland 2004). Other workers studied the mechanical properties of a casein gel using a shear resonance method and conventional low-frequency oscillatory rheological techniques (Buckin and Kudryashov 2001). Again, time-scale and wavelength differences prevented extrapolation of physical properties such as viscosity, storage modulus, and loss tangent between the two techniques, but shear resonance was able to reveal complementary information about submicron structure. In other cases, a more empirical approach has been taken to the shear-wave characterization of food texture. For example, a number of studies focused on the ultrasonic evaluation of structural and rheological properties of dough and bakery products (Lee et al. 2004a, Elmehdi et al. 2003, Elmehdi et al. 2004, Ross et al. 2004). Fermentation of the dough was characterized by pronounced velocity dispersion at low frequencies (that is, <6 MHz). In addition, ultrasonic measurements were shown to be more sensitive to changes in rheology compared to extensional viscometry, but problems can arise due to the high attenuation of the dough (Lee et al. 2004b). In this last case, L-waves were employed.

Flow Curve Determination

Although shear waves provide a direct way to probe the material's shear modulus, other workers have taken a more indirect, but frequently more practical, route to rheological measurements by using ultrasonic techniques to measure the flow rate or flow profile of a fluid under known conditions and back-calculating the viscosity.
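As a rough illustration of this back-calculation step (not taken from the studies cited below), for pressure-driven laminar flow of a power-law fluid in a pipe the velocity profile has the form v(r) = v_max·[1 − (r/R)^((n+1)/n)], so the flow behavior index n can be fitted from the shape of a measured profile and the consistency coefficient obtained from the pressure drop. The sketch below does this for synthetic data; the pipe radius, pressure gradient, and noise level are assumed values.

# Illustrative sketch only (assumptions: laminar, pressure-driven pipe flow of a
# power-law fluid; the profile data are synthetic, not from the cited studies).
import numpy as np
from scipy.optimize import curve_fit

R = 0.025          # pipe radius, m (assumed)
dP_per_L = 4000.0  # pressure drop per unit length, Pa/m (assumed)

def power_law_profile(r, v_max, n):
    """Velocity profile v(r) of a power-law fluid in laminar pipe flow."""
    return v_max * (1.0 - (r / R) ** ((n + 1.0) / n))

# Synthetic "UDV" profile: n = 0.5, v_max = 0.4 m/s, plus measurement noise.
r_data = np.linspace(0.0, 0.95 * R, 30)
v_data = power_law_profile(r_data, 0.4, 0.5) + np.random.normal(0, 0.005, r_data.size)

(popt, _) = curve_fit(power_law_profile, r_data, v_data,
                      p0=[0.3, 0.7], bounds=([0.0, 0.1], [1.0, 2.0]))
v_max_fit, n_fit = popt

# Back-calculate the consistency coefficient K from wall shear stress and rate.
tau_wall = dP_per_L * R / 2.0                          # Pa
gamma_wall = v_max_fit * (n_fit + 1.0) / (n_fit * R)   # 1/s
K = tau_wall / gamma_wall ** n_fit                     # Pa s^n
print(f"n = {n_fit:.2f}, K = {K:.2f} Pa s^n")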
Fluid flow rate can be measured by ultrasonic Doppler velocimetry (UDV), which depends on the phase shift caused by propagation in a flowing fluid. The ultrasonic transducer is fixed at one location on the pipe, while the wave propagates through the flowing medium (Figure 3.2f). The change in frequency between the transmitted wave and its corresponding echo after propagation can be related to the flow velocity profile. Based on a rheological model for the flowing liquid, it is possible to construct a flow profile (Dogan et al. 2003). Recently this technique has been applied to more realistic food processing situations (that is, online viscosity determination of a power-law fluid with a temperature gradient) (Wang et al. 2004). An enhancement of the Doppler technique is to generate an entire flow profile directly by tomographic analysis. In this technique, multiple ultrasonic pulses are sent through the fluid; the position of the entrained scatterers is measured by the time of flight of the returning echoes while their velocity is measured from the frequency shift in the sound. By combining several of these signals, it is possible to map the velocity of the entrained scattering particles (and hence the flowing fluid itself) as a function of position across the pipe. The flow profile obtained can be used to calculate the rheological properties of the fluid. Recently this approach was compared to off-line rheological measurements and online magnetic resonance imaging (MRI) of fluid flow for tomato juice and high-fructose corn syrup samples (Choi et al. 2002). Although the values from all three methods agreed, UDV has the advantages of being online, noninvasive, and relatively cheap (compared to MRI). Both UDV and MRI suffer a reduction in the sensitivity of flow velocity measurements near the pipe walls (Choi et al. 2002).

Correlation with L-wave Measurements

Although there is only a limited theoretical basis to expect longitudinal wave properties to depend on the shear modulus, there are often useful empirical correlations. For example, Benedito and others (2002) investigated the changes in composition of food oils during frying and were able to show a useful empirical relationship between viscosity and L-wave velocity. However, as with any correlation-based method, uncertainty as to the exact factor to which the sensor is responding can lead to confusion. For example, Dalgleish and others (2004) used an ultrasonic resonator to detect the onset of formation of a casein gel. However, they were able to show that the ultrasonic measurements were more sensitive to associated
changes in casein micelle microstructure compared to the changes in bulk properties. A popular empirical texture measurement method using L-wave ultrasound is to monitor the ripening and softening of fruits. For example, a reduction of firmness in plums was associated with a parabolic reduction in attenuation (Mizrach 2004), and maturity related parameters were correlated qualitatively and/or quantitatively to ultrasonic properties in avocado and mango (Mizrach 2000). Similarly, measurements of resonance frequency and apparent attenuation were related to chilling injury in tomatoes (Verlinden et al. 2004). However, as with all measurement systems where there is no reliable theoretical basis to relate sensor response with the property of interest, it is difficult to decide a priori if an application will be successful. For example, while mealy apples have some physical properties distinct from nonmealy fruit, these could not be detected ultrasonically (Mizrach et al. 2003). Similar changes in texture, structure, and composition occur during the ripening of cheeses, and ultrasound has also been used to empirically characterize the changes (Benedito et al. 2000). A novel approach to ultrasonic viscosity determination was taken by Herrmann and McClements (1999), who measured the ultrasonic spectra of fine (190 nanometer [nm]) monodisperse silica particles (5% by volume) suspended in different fluids. Rather than solving the scattering problem to measure particle size (see above), they solved for continuous phase viscosity.
Conclusions

Using ultrasonics, we can characterize food composition, monitor some unit operations (for example, fermentation, heating, cooling), or elucidate molecular processes that may not be otherwise measurable (for example, high-frequency rheology, particle sizing in concentrated systems, some online applications). However, the complex structure and composition of foods make it difficult to uniquely characterize foods using ultrasound alone. The development of ultrasonic sensors will depend on their two major strengths:

1. Making measurements in places where other measurement methods are not appropriate, and
2. Measuring events not detectable by other devices.
One of the major advantages of ultrasound is its flexibility in an online environment. We expect more measurement devices to be integrated into production equipment in novel ways. These will allow greater process control but also a better understanding of changes in product structure under process conditions. This will be particularly important in crystallization operations (for example, freezing, confectionery manufacture), baking and dough preparation technologies, and homogenization of fluids. In particular, the combination of ultrasonic and other sensing devices will be used to develop sensor suites capable of a more holistic description of the product and process.

Acknowledgement

The project was supported by the National Research Initiative of the United States Department of Agriculture (USDA) Cooperative State Research, Education and Extension Service, grant number 2003-3550313852.

References

Apenten RKO, B Buttner, B Mignot, D Pascal, and MJW Povey. 2000. Determination of the adiabatic compressibility of bovine serum albumen in concentrated solution by a new ultrasonic method. Food Hydrocolloids, 14(1): 83–91. Awad TS. 2004. Ultrasonic studies of the crystallization behavior of two palm fats O/W emulsions and its modification. Food Research International, 37(6): 579–586. Bamberger JA and MS Greenwood. 2004. Non-invasive characterization of fluid foodstuffs based on ultrasonic measurements. Food Research International, 37(6): 621–625. Benedito J, JA Carcel, N Sanjuan, and A Mulet. 2000. Use of ultrasound to assess Cheddar cheese characteristics. Ultrasonics, 38(1–8): 727–730. Benedito J, J Carcel, M Gisbert, and A Mulet. 2001. Quality control of cheese maturation and defects using ultrasonics. Journal of Food Science, 66(1): 100–104. Benedito J, A Mulet, J Velasco, and MC Dobarganes. 2002. Ultrasonic assessment of oil quality during frying. Journal of Agricultural and Food Chemistry, 50(16): 4531–36. Benedito J, A Mulet, Clements G, and JV Garcia-Perez. 2004. Use of ultrasonics for the composition assessment of olive mill wastewater (alpechin). Food Research International, 37(6): 595–601. Blitz J. 1963. Fundamentals of ultrasonics. Butterworths, London. Bryant CM and DJ McClements. 1999. Ultrasonic spectroscopy study of relaxation and scattering in whey protein solutions. Journal of the Science of Food and Agriculture, 79(12): 1754–1760.
Buckin V and C Smyth. 1999. High-resolution ultrasonic resonator measurements for analysis of liquids. Seminars in Food Analysis, 4(2): 113–130. Buckin V and E Kudryashov. 2001. Ultrasonic shear wave rheology of weak particle gels. Advances in Colloid and Interface Science, 89 (Sp. Iss): 401–422. Chanamai R and DJ McClements. 1999. Ultrasonic determination of chicken composition. Journal of Agricultural and Food Chemistry, 47(11): 4686–92. Chanamai R, N Herrmann, and DJ McClements. 2000. Probing floc structure by ultrasonic spectroscopy, viscometry, and creaming measurements. Langmuir, 16(14): 5884–5891. Cho B and JMK Irudayaraj. 2003a. Design and application of a non-contact ultrasound velocity measurement system with air instability compensation. Transactions of the ASAE, 46(3): 901–909. Cho BK and JMK Irudayaraj. 2003b. A noncontact ultrasound approach for mechanical property determination of cheeses. Journal of Food Science, 68(7): 2243–47. Choi YI, KL McCarthy, and MJ McCarthy. 2002. Tomographic techniques for measuring fluid flow properties. Journal of Food Science, 67(7): 2718–24. Contreras NI, P Fairley, DJ McClements, and MJW Povey. 1992. Analysis of the sugar content of fruit juices and drinks using ultrasonic velocity measurements. International Journal of Food Science and Technology, 27(5), 515–529. Corredig M, E Verespej, and DG Dalgleish. 2004. Heat-induced changes in the ultrasonic properties of whey proteins. Journal of Agricultural and Food Chemistry, 52(14): 4465–71. Coupland JN. 2004. Low-intensity ultrasound. Food Research International, 37(6): 537–543. Coupland JN and DJ McClements. 2001. Droplet size determination in food emulsions: comparison of ultrasonic and light scattering methods. Journal of Food Engineering, 50(2): 117–120. Coupland JN and R Saggin. 2003. Ultrasonic sensors for the food industry. 45: 101–166. Dalgleish D, M Alexander, and M Corredig. 2004. Studies of the acid gelation of milk using ultrasonic spectroscopy and diffusing wave spectroscopy. Food Hydrocolloids, 18(5): 747–755. Dickinson E, M Jianguo, and MJW Povey. 1994. Creaming of concentrated oil-in-water emulsions containing xanthan. Food-Hydrocolloids, 8(5): 481–497. Dogan N, MJ McCarthy, and RL Powell. 2003. Comparison of in-line consistency measurement of tomato concentrates using ultrasonics and capillary methods. Journal of Process Engineering, 25(6): 571–587. Dukhin AS, PJ Goetz, and CW Hamlet Jr. 1996. Acoustic spectroscopy for concentrated polydisperse colloids with low density contrast. Langmuir, 12(21): 4998–5003. Elmehdi HM, JH Page, and MG Scanlon. 2003a. Using ultrasound to investigate the cellular structure of bread crumb. Journal of Cereal Science, 38(1): 33–42. Elmehdi HM, JH Page, and MG Scanlon. 2003b. Monitoring dough fermentation using acoustic waves. Food and Bioproducts Processing, 81(C3): 217–223. Elmehdi HM, JH Page, and MG Scanlon. 2004. Ultrasonic investigation of the effect of mixing under reduced pressure on the mechanical properties of bread dough. Cereal Chemistry, 81(4): 504–510.
Fox P, P Probert-Smith, and S Sahi. 2004. Ultrasound measurements to monitor the specific gravity of cake batters. Journal of Food Engineering, 65(3): 317–324. Gan TH, DA Hutchins, and DR Billson. 2002. Preliminary studies of a novel air-coupled ultrasonic inspection system for food containers. Journal of Food Engineering, 53(4): 315–323. Griffin WG and MCA Griffin. 1990. The attenuation of ultrasound in aqueous suspensions of casein micelles from bovine milk. Journal of the Acoustical Society of America, 87(6): 2541–50. Griffin SJ, JB Hull, and E Lai. 2001. Development of a novel ultrasound monitoring system for container filling operations. Journal of Materials Processing Technology, 109(1–2): 72–77. Hæggstr¨om E and M Luukkala. 2001. Ultrasound detection and identification of foreign bodies in food products. Food Control, 12(1): 37–45. Hay TR and JL Rose. 2003. Fouling detection in the food industry using ultrasonic guided waves. Food Control, 14(7): 481–488. European Physical Journal E, 5(2): 183–188. Herrmann N and DJ McClements. 1999. Influence of visco-inertial effects on the ultrasonic properties of monodisperse silica suspensions. Journal of the Acoustical Society of America, 106(2): 1178–1181. Herrmann N, Y Hemar, P Lemar´echal, and DJ McClements. 2001. Probing particle-particle interactions in flocculated oil-in-water emulsions using ultrasonic attenuation spectrometry. Hill CR, JC Bamber, and GR ter Haar. 2004. Physical principles of medical ultrasonics. 2nd ed. John Wiley & Sons, Ltd. West Sussex, England. Hindle SA, MJW Povey, and KW Smith. 2002. Characterizing cocoa-butter seed crystals by the oil-in-water emulsion crystallization method.Journal of the American Oil Chemists Society, 79(10): 993–1002. Jeffries M, E Lai, and JB Hull. 2002. Fuzzy flow estimation for ultrasound-based liquid level measurement. Engineering Applications of Artificial Intelligence, 15(1): 31–40. Kloek W, P Walstra, and T van Vliet. 2000. Nucleation kinetics of emulsified triglyceride mixtures. Journal of the American Oil Chemists’ Society. 77(6): 643–652. Kulmyrzaev A, C Cancelliero, and DJ McClements. 2000. Characterization of aerated foods using ultrasonic reflectance spectroscopy. Journal of Food Engineering, 46(4): 235–241. Kulmyrzaev A and DJ McClements. 2000. High frequency dynamic shear rheology of honey. Journal of Food Engineering, 45(4): 219–224. Lee S, LJ Pyrak-Nolte, and O Campanella. 2004a. Determination of ultrasonic-based rheological properties of dough during fermentation. Journal of the Texture Studies, 85: 33–51. Lee S, LJ Pyrak-Nolte, P Cornillon, and O Campanella. 2004b. Characterisation of frozen orange juice by ultrasound and wavelet analysis. Journal of the Science of Food and Agriculture, 84(5): 405–410. Martini S, C Bertoli, ML Herrera, I Neeson, and A Marangoni. 2005a. In situ monitoring of solid fat content by means of pulsed nuclear magnetic resonance
spectrometry and ultrasonics. Journal of the American Oil Chemists’ Society, 82(5): 305–312. Martini S, ML Herrera, and A Marangoni. 2005b. New technologies to determine solid fat content online. Journal of the American Oil Chemists’ Society, 82(5): 313–317. Martini S, C Bertoli, ML Herrera, I Neeson, and A Marangoni. 2005c. Attenuation of ultrasonic waves: influence of microstructure and solid fat content. Journal of the American Oil Chemists’ Society, 82(5): 319–328. McClements DJ. 1991. Ultrasonic characterisation of emulsions and suspensions. Advances in Colloid and Interface Science, 37(1–2): 33–72. McClements DJ. 1995. Advances in the application of ultrasound in food analysis and processing. Trends in Food Science and Technology, 6: 293–299. McClements DJ and JN Coupland. 1996. Theory of droplet size distribution measurements in emulsions using ultrasonic spectroscopy. Colloids and Surfaces A: Physicochemical and Engineering Aspects, 117(1–2): 161–170. McClements DJ and MJW Povey. 1987. Solid fat content determination using ultrasonic velocity measurements. International Journal of Food Science and Technology, 22(5), 491–499. McClements DJ and MJW Povey. 1988. Comparison of pulsed NMR and ultrasonic velocity techniques for determining solid fat contents. International Journal of Food Science and Technology, 23(2), 159–170. Mizrach A. 2000. Determination of avocado and mango fruit properties by ultrasonic method. Food Research International, 38(1–8): 717–722. Mizrach A. 2004. Assessing plum fruit quality attributes with an ultrasonic method. Food Research International, 37(6): 627–631. Mizrach A, A Bechar, Y Grinshpon, A Hofman, H Egozi, and L Rosenfeld. 2003. Ultrasonic classification of mealiness in apples. Transactions of the ASAE, 46(2): 397–400. Povey MJW. 1998. Ultrasonics of food. Contemporary Physics, 39(6): 467–478. Resa P, L Elvira, and FM Mondero de Espinosa. 2004. Concentration control in alcoholic fermentation processes from ultrasonic velocity measurements. Food Research International, 37(6): 587–594. Rose JL. 2004. Ultrasonic waves in solid media. Cambridge University Press, Cambridge, UK. Ross KA, LJ Pyrak-Nolte, and OH Campanella. 2004. The use of ultrasound and shear oscillatory tests to characterize the effect of mixing time on the rheological properties of dough. Food Research International, 37(6): 567–577. Saggin R and JN Coupland. 2001a. Oil viscosity measurement by ultrasonic reflectance. Journal of the American Oil Chemists Society, 78(5): 509–511. Saggin R and JN Coupland. 2001b. Non-contact ultrasonic measurements in food materials. Food Research International, 34(10): 865–870. Saggin R and JN Coupland. 2001c. Concentration measurement by acoustic reflectance. Journal of Food Science, 66(5): 681–685. Saggin R and JN Coupland. 2002a. Measurement of solid fat content by ultrasonic reflectance in model systems and chocolate. Food Research International, 35(10): 999–1005.
Saggin R and JN Coupland. 2002b. Ultrasonic monitoring of powder dissolution. Journal of Food Science, 67(4): 1473–1477. Saggin R and JN Coupland. 2004. Rheology of xanthan/sucrose mixtures at ultrasonic frequencies. Journal of Food Engineering, 65(1): 49–53. Shannon RA, PJ Probert-Smith, J Lines, and F Mayia. 2004. Ultrasound velocity measurement to determine lipid content in salmon muscle; the effects of myosepta. Food Research International, 37(6): 611–620. Sheen S, H Chien, and A Raptis. 1995. An in-line ultrasonic viscometer. Rev. Prog, Quant. Nondestructive Evaluation. 14A: 1151–58. Sheen S, H Chien, and A Raptis. 1996. Measurement of shear impedances of viscoelastic fluids. IEEE Ultrason. Symp. Proc., pp. 453–457. Sigfusson H, GR Ziegler, and JN Coupland. 2004. Ultrasonic monitoring of food freezing. 62(3): 263–269. Singh AP, DJ McClements and AG Marangoni. 2002. Comparison of ultrasonic and pulsed NMR techniques for determination of solid fat content.Journal of the American Oil Chemists’ Society, 79(5): 431–437. Simal S, J Benedito, G Clemente, A Femenia, and C Rossell´o. 2003. Ultrasonic determination of the composition of a meat-based product. Journal of Food Engineering, 58(3): 253–257. Verlinden BE, V De Smedt, and BM Nicola¨ı. 2004. Evaluation of ultrasonic wave propagation to measure chilling injury in tomatoes. Postharvest Biology and Technology, 32(1): 109–113. Wang L, KL McCarthy, and MJ McCarthy. 2004. Effect of temperature gradient on ultrasonic Doppler velocimetry measurement during pipe flow. Food Research International, 37(6): 633–642. Zhao B, OA Basir, and GS Mittal. 2003. Detection of metal, glass and plastic pieces in bottled beverages using ultrasound. Food Research International, 36(5): 513–521. Zhao B, Y Jiang, OA Basir, and GS Mittal. 2004. Foreign body detection in foods using the ultrasound pulse/echo method. Journal of Food Quality, 27(4): 274–288.
Chapter 4
Use of Near Infrared Spectroscopy in the Food Industry
Andreas Niemöller and Dagmar Behmer
Introduction

The potential of near infrared (NIR) spectroscopy was discovered in the 1960s, first for the rapid characterization of agricultural and food products. Today NIR is widely used in the agriculture and food industry for nondestructive qualitative and quantitative analysis of raw materials, in-process materials, and finished products throughout the entire manufacturing process. Key to this rapid spread over the past decades was not only the development of easy-to-use instruments, accessories, and software, but also the huge success story of the personal computer (PC). In fact, the availability of affordable computing power was the main limitation on this development because the evaluation of NIR data is a very demanding task. Since the early 1990s, commercially available PCs have been powerful enough for the task, which gave the final impulse to the worldwide use of NIR in many industries. Looking at the advantages of NIR spectroscopy, it becomes clear why its use is so valued. NIR allows fast and nondestructive analysis with little or no sample preparation. With a minimum of training, any person in production or in the lab is able to perform an analysis, without operator-related errors. The fast measurement and the simultaneous determination of multiple components allow the analysis of many more samples in a given time, leading to a better understanding and control of raw materials, products, and the production process. Furthermore, by remote sampling using fiber optic probes, the analysis can be performed in real time without any interaction from operators.
In this chapter, an overview is given of the principles and advantages of NIR in general, followed by a description of the different types of NIR instruments and accessories. Another important aspect is the development of calibrations for NIR, which are based on multivariate statistical methods and thus differ from the classical univariate approach commonly used in analytical chemistry. Finally, applications of NIR to various food materials and products are described. Here we mainly describe what to take care of when analyzing samples and how to develop a robust application. Such practical aspects and details of sampling or measurement conditions are of great importance to make NIR work successfully.

History and Basic Principles of NIR Spectroscopy

History of NIR Spectroscopy

Infrared spectroscopy has been a well-established research tool in academia and industry for several decades. Infrared radiation was first postulated by Sir Isaac Newton in 1666, but it was not until 1800 that Sir William Herschel discovered the relative energy of this radiation by dispersing natural sunlight with a prism and placing a thermometer beyond the red end of the visible spectrum. From the resulting rise in temperature, Herschel postulated that the spectral range of sunlight continues beyond the red part of the visible. It took more than one and one-half centuries before near infrared spectroscopy was recognized as an accepted analytical tool. It was left to Karl Norris in the 1960s at the United States Department of Agriculture (USDA) to bring the technology of NIR spectrometers forward. Only at that time were computers emerging with sufficient power to analyze the large amounts of spectral information from agricultural samples together with compositional data. Soon after this, Phil Williams, a grain research scientist at the Canadian Grain Commission (CGC), achieved remarkable success by replacing Kjeldahl measurements for the determination of protein in grain with NIR spectroscopy. This was the beginning of the NIR era, in which applications spread quickly beyond agriculture into other industries (Williams 2001, Williams and Norris 2001). In the course of the development of analytical technology, the demand for reliable and fast analysis tools is constantly growing. Securing the best possible product quality at every stage in the production chain is
of vital interest in all industries. With NIR spectroscopy, the identity of raw materials can be checked at the goods-in control within seconds, in-line monitoring during the process ensures that everything is under control, and testing the finished product before delivery to the customer safeguards the success of the company. NIR technology (Siesler et al. 2001) has been used for some decades now in the food and feed industry, and in recent years more and more other industries have seen its benefits. Applications can be found in the chemical, petrochemical, and polymer industries as well as in the pharmaceutical industry. Even paper and textile manufacturers take advantage of this technology. But what is the story behind this success?

What is NIR?

Near-infrared spectroscopy can be defined as the analysis of materials according to their tendency to absorb light in a certain region of the electromagnetic spectrum. The resulting spectrum is a molecular fingerprint of the material that allows for very precise identification and quantification of the given material. The near infrared portion of the electromagnetic spectrum extends from 800 to 2,500 nanometers (nm) (12,500 to 4,000 cm−1), between the conventional midinfrared (MIR) region at longer wavelengths and the visible (VIS) range at shorter wavelengths. The NIR spectrum is characterized by overtones and combinations of the fundamental molecular vibrations of molecules containing C–H, N–H, or O–H groups, making NIR spectroscopy the first choice for the analysis of all major parameters in food such as moisture, fat, protein, starch, or amino acids. Figure 4.1 illustrates the high information content of NIR spectra, with multiple pieces of information about components containing C–H, N–H, and O–H groups. The NIR bands usually overlap, leading to a spectrum with broad peaks, which makes the NIR spectrum of a sample more difficult to interpret than its MIR spectrum. However, within these NIR spectra, which are comparably poor in features, there is considerable information about the molecular and physical structure of the sample. This information can be accessed by modern multivariate data processing and evaluation methods to analyze the sample composition.

Advantages of NIR

Most spectroscopic techniques are fast and accurate compared to wet chemistry, but NIR has some other advantages that make it very
Figure 4.1. NIR spectra of food samples with assignment of spectral regions with characteristic absorbance bands for C–H, N–H, and O–H groups. From bottom to top at 7,000 cm−1 : Sunflower seed, milk powder, cheese, and raw meat.
useful for routine analysis in quality control labs and in process control. Because of the low absorption of the overtone and combination bands in the NIR range, it is not necessary to dilute liquid samples, and a larger amount of even heterogeneous solid or pasty samples can be analyzed at a time compared to MIR measurements. In addition, NIR samples can be measured through glass (that is, liquids in glass vials or cuvettes and solid samples in sample cups made of glass or with glass windows). Even remote measurements using quartz fiber optics are possible and are particularly well suited for online process monitoring with remote sampling. Beyond the properties mentioned, the main advantage of NIR is that no sample preparation is required; sample preparation is a potential source of errors in many other techniques. Most samples are filled into a vial or sample cup and simply measured with the appropriate accessory or instrument module. Because of the nondestructive nature of the spectroscopic analysis, the samples can be used for further tests. In addition, the running costs are quite low. No special materials, solvents, preparation time, or waste disposal are needed, factors that can be expensive or add to the cost of other analytical methods.
As a spectroscopic analysis, it is fast (measurement times are 10 to 60 seconds). Without sample preparation, even more time is gained compared with wet chemical analysis. Therefore, NIR provides a high sample throughput in a lab and nearly real-time analysis in process monitoring. Finally, NIR is a unique tool, especially for food analysis, because of the simultaneous determination of multiple components per measurement. In contrast to running different destructive wet chemistry methods on subsamples, all major components such as moisture, fat, protein, and others are derived from only one measurement. The overall analysis time and amount of work are the same for several components, whereas for wet chemistry all efforts and time must be summed up for comparison. An additional advantage is that NIR spectra often contain information about physical properties such as particle size, thermal and mechanical pretreatment, viscosity, density, temperature, etc. However, for certain applications this could be a disadvantage, because such effects could mask the spectral information of interest. NIR is not a primary method, and instruments need to be calibrated for the quantitative evaluation of spectra or the identification of materials. Here the precision of quantitative NIR methods is directly related to that of the reference methods, which are based mainly on wet chemistry and are generally less precise than spectroscopic methods. The high information content of NIR spectra, and therefore their ability to yield multiple components from a single measurement, contrasts with the limited interpretability of absorbance bands and spectral features that are an overlay of several overtone and combination bands. Therefore, NIR is sometimes perceived as complex, but it is mainly used in the analysis of known samples of expected composition and not, like MIR, for the investigation of unknown samples. The complexity of NIR information requires more sophisticated evaluation methods and more samples for building multivariate calibrations.

NIR Instruments

To successfully implement NIR spectroscopy, it is essential to have the best instrument for the given task. Various factors have to be taken into account. The instrument specifications are important, as are the user interface and software provided, the cost of the instrument, and the support from the manufacturer. Another essential issue is the
Figure 4.2. Principle of filter-based instruments.
sample presentation, that is, how much effort needs to be invested before the measurement, and the flexibility of the instrument regarding different types of samples (for example, can solids and liquids be measured with the same instrument without changing the setup).

Different Types of NIR Instrumentation

The following technologies are the most common in NIR spectroscopy.

Filter-based Photometers
Filter instruments are the simplest and normally the cheapest NIR instruments available and are used to analyze only major components. Compared to the other technologies, they do not produce spectra, because they do not scan continuously. Instead, the instrument uses various filters (normally from 2 up to 19), mounted on a rotating wheel (Figure 4.2), to select small ranges of wavelengths or wave numbers in the spectrum (Figure 4.3). The filter wavelengths are chosen depending on the requested analysis task, for example protein, moisture, or oil. Filter photometers generally lack the flexibility and performance of other instrument types and can be prone to errors (for example, if the temperature changes). They have the advantage of low cost and can be useful for dedicated analysis of components in high concentrations, either in the laboratory or online in a production facility.

Scanning Dispersive Grating Spectrophotometers
In this type of spectrometer, broadband light is directed to the sample, and the transmitted or reflected light is then passed through a narrow slit. A concave diffraction grating then disperses the light into separate wavelengths, which are scanned across an exit slit by moving the grating. The discrete wavelengths that pass through the exit slit are
Figure 4.3. Dedicated wave numbers covered by filters of a filter-based instrument.
sequentially measured by the detector. Figure 4.4 shows the principle of operation of a scanning dispersive spectrometer. Like the other types of spectrophotometer, these instruments are flexible and offer a full range of analytical techniques.

Detector Array Dispersive Spectrophotometers
In a detector array dispersive spectrometer, the scanning grating, exit slit, and detector of Figure 4.4 are replaced by a stationary grating and a detector array (Figure 4.5). Since different wavelengths fall on different detector elements, they are measured almost simultaneously, so detector array
Figure 4.4. Principle of scanning dispersive grating spectrophotometers.
Figure 4.5. Principle of detector array dispersive spectrophotometers.
dispersive spectrometers can be very fast. The limited number of elements in the array, however, means that this type of spectrometer usually either has low resolution or covers only a limited portion of the spectrum. With no moving parts, these instruments are also very rugged.

Fourier Transform NIR Spectrophotometers
In a Fourier transform (FT) instrument, the light from a broadband source is directed to a Michelson interferometer. The interferometer consists of a beamsplitter, a fixed mirror, and a mirror that moves back and forth very precisely. The beamsplitter reflects half of the light to the fixed mirror and transmits the other half to the moving mirror. Both the fixed and the moving mirror direct the light back to the beamsplitter, where the two beams interfere. This interference changes periodically as the moving mirror is displaced, and therefore the intensity of the light at the detector changes. The intensity as a function of mirror displacement is called an interferogram, which must then be Fourier transformed to obtain the spectrum. A detailed description of the operation of Fourier transform spectrometers can be found in the literature (Griffiths and de Haseth 1986). A schematic representation of a modern Fourier transform instrument with cube corner mirrors is shown in Figure 4.6, where both mirrors are moved during measurement. Fourier transform spectrometers offer high resolution, good speed, and high signal-to-noise ratios. Their biggest advantage, however, stems from the fact that the position of the moving mirror is controlled using a helium neon (HeNe) laser. The inherent wavelength stability of the laser results in very high wavelength accuracy and precision. This in turn means a calibration model is very stable over time and permits easy transfer of calibration models between instruments.
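The interferogram-to-spectrum step can be illustrated numerically. The short sketch below is a simplified illustration, not vendor software: it simulates an interferogram for two assumed narrow bands and recovers their wave numbers with a fast Fourier transform. Real FT-NIR processing additionally applies apodization, zero filling, and phase correction.

# Simplified numerical illustration of the FT step: an interferogram sampled at
# equal optical path difference (OPD) intervals is Fourier transformed to give
# the spectrum. Synthetic example only.
import numpy as np

n_points = 4096
opd_step_cm = 1.0 / 16384.0              # assumed OPD sampling interval in cm
opd = np.arange(n_points) * opd_step_cm  # optical path difference axis

# Two assumed narrow bands at 4,500 and 7,000 cm-1, purely for illustration.
interferogram = np.cos(2 * np.pi * 4500.0 * opd) + 0.5 * np.cos(2 * np.pi * 7000.0 * opd)

spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(n_points, d=opd_step_cm)  # axis in cm-1

recovered = np.sort(wavenumbers[np.argsort(spectrum)[-2:]])
print("Recovered bands at", recovered, "cm-1")  # -> [4500. 7000.]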
Figure 4.6. Principle of a Fourier transform (FT) NIR spectrophotometer.
Advantages and Disadvantages of Spectrometer Technologies

All types of instruments are able to analyze major components such as fat, moisture (dry matter), or protein. But even among these instruments, there will be differences in the long-term stability and accuracy of the results. The following list details the various instrument types:
• Filter instruments with interference bandpass filters are a cheap solution for analysis of the major components. The limits of their application are set by the low information obtained at a few discrete wavelengths. Filter calibrations are less accurate over time because the filters are sensitive to shifts in the spectra (for example, caused by sample temperature changes) and are not able to compensate for changes in the process and product composition.
• Scanning dispersive grating instruments are widely and successfully used for all kinds of applications with dedicated sampling modules and accessories. The only relevant limitation is the low wavelength accuracy and resolution resulting from the slit arrangement and the limited mechanical precision of the wavelength registration. This can affect instrument stability and existing calibration models, and it makes transfer of a calibration model to a new instrument difficult. In real life, after a change in the optical setup or for the transfer of data between instruments, a standardization procedure is needed.
• Diode array dispersive instruments are used mainly because of their small size, mechanical robustness, and scanning speed. They are suited to applications where fast measurements in the region of milliseconds are required (for example, in sorting of fruits or, in general, for real-time analysis of moving objects such as those on conveyor belts). Because of the limited number of diodes, the resolution is comparably low, which limits the information content and therefore the applications beyond major components. As with the scanning grating instruments, the wavelength accuracy limits the long-term stability of predictions and the transferability of data. Because of the small dimensions of the instruments, the temperature of the environment has an influence on the mechanical setup and thus on the wavelength accuracy as well.
• FT instruments have important advantages, which is why, for example, in MIR only Fourier transform infrared (FT-IR) instruments are available on the market. For standard food applications, the higher resolution of Fourier transform near infrared (FT-NIR) is not in itself important. But internally, the high resolution is used to control the wavelength or wave number axis with the highest precision of the setups discussed. In addition to the long-term stability of a single instrument, this is the key for regulatory agencies and big companies to run multiple instruments (for example, in a network with the same calibrations without standardization). With the high throughput of FT instruments, it is possible to have different measurement channels and modules in one instrument. Such multipurpose instruments allow the analysis of different types of sample (liquid, solid, pasty) in an optimal way, while the light is switched automatically under software control between channels (transmission, reflection, fiber optic probes).

Measurement Modes and Sampling Techniques

There are three important types of optical measuring modes: transmission, diffuse reflection, and transflection. Based on these modes, different sampling accessories are used depending on the optical properties of the sample and the way NIR is applied for a certain application.

Transmission

In transmission measurements, light is directed at a sample with a focused or parallel beam. Some light is absorbed but the remaining
Figure 4.7. Principle of transmission measurements.
light is transmitted to the detector (Figure 4.7). This type of measurement is not only used for clear liquids (direct transmission); even diffusely reflecting or slightly scattering samples like grain and pasty samples can be analyzed in this way (diffuse transmission). For transmission measurements, the following sampling tools and accessories are important:
• Measuring liquids in rectangular quartz cuvettes with path lengths between 1 millimeter (mm) and 10 mm gives optimal optical conditions in terms of material and parallel windows, but the cuvettes must be cleaned afterwards. In routine use, however, custom-sized disposable glass vials are much more important because they are thrown away with the sample after analysis. Vials with diameters between 5 and 22 mm allow easy filling with pipettes without air bubbles. The standard size is 8 mm diameter with around 5 mm path length. It is important to keep the glass material and diameter constant over time to keep calibrations valid (Figure 4.8).
Figure 4.8. Transmission measurement of olive oil samples in 8-mm disposable vials.
Figure 4.9. Setup of fiber optic probes for different measurement modes: transmission probe (left), transflection probe (middle), and reflection probe (right).
• Cuvette or vial holders may include a temperature-regulated block. Control of the temperature is most important for aqueous solutions. Other samples such as oils and fats are heated to obtain a clear liquid by melting.
• With fiber optic-based liquid probes, the measurement is similar to that in a rectangular cuvette, but the light is guided to and from the sample with single fibers. Like a cuvette, the cavity at the probe head has a fixed path length that enables a liquid sample to enter (Figure 4.9, left). Handheld probes are used off-line or at-line to analyze liquids directly in containers (for example, in the warehouse). For online or in-line installation, standard and customized process probes with a broad range of materials and geometric setups are available.
• Petri dishes are used for pasty samples, whose consistency lies between solids and liquids as far as their optical properties are concerned. In contrast to reflection, in transmission polystyrene Petri dishes can be used where glass is not allowed (for example, in production areas in the dairy and meat industries). In addition, the cheap disposable polystyrene Petri dish can be discarded together with the sticky and fatty sample; no cleaning is necessary.
• For grain analysis, special Near Infrared Reflectance and Transmission (NIT) instruments were developed in which the grain moves through the instrument.
Figure 4.10. Principle of transflection measurements.
Transflection

Transflection is an extension of the transmission technique, as shown in Figure 4.10. When a mirror is placed behind the sample, the light transmitted through the sample is reflected back through the sample and into the diffuse reflectance probe or integrating sphere. Transflection thus measures a combination of transmission and reflection. This technique is useful for emulsions, gels, and turbid liquids. Transflection probes (Figure 4.9, middle) are also available to analyze turbid liquids, such as milk, or to follow a fermentation process in-line.

Diffuse Reflection

When light is reflected from solid surfaces or from particles in powders, pellets, or granulates, this is referred to as diffuse reflection. Here the reflected energy has an angular distribution independent of the incident angle of the illuminating beam (Figure 4.11). Depending on the sample, light may penetrate a significant distance beyond the surface (for example, for powders it is approximately 2 to 4 mm depending on particle size, wavelength, and density). Therefore, there is the potential of quantifying the components within the sample. The following two types of accessories measure by diffuse reflection:
• In an integrating sphere, light is directed in a broad, nearly parallel beam onto the sample, as shown in Figure 4.12. The diffusely reflected light is well distributed in the sphere by multiple diffuse reflections at the gold-plated inner surface. In this way, the detector faces homogenized light independent of the specific point of entry and the incident angle of the light. Because of this and the large sampling spot (approximately 14-mm diameter), an integrating sphere is
Figure 4.11. Principle of diffuse reflection measurements.
well suited to inhomogeneous samples and particle sizes above that of a fine powder.
• If the sample is very inhomogeneous or has large particles, a rotating cup, as shown in Figure 4.13, can be placed on the integrating sphere to provide further averaging.
• The diffuse reflectance probe uses a fiber-optic cable with multiple fibers. Half of the fibers in the bundle are typically used to
Figure 4.12. Principle of an integrating sphere.
Figure 4.13. Rotating cup on integrating sphere.
transmit light to the sample, and half to return the reflected light to the spectrometer. However, the sample spot will be between 1- and 5-mm diameter, so these probes can be used only for homogeneous samples or when inhomogeneous samples are moving during measurement. A schematic diagram of a diffuse reflectance probe is shown in Figure 4.9, right.

Categories for Implementation of NIR Instruments

Applications can be implemented in different ways depending on the demands of quality control, the sampling method, and the location of the NIR measurement:
• Laboratory or off-line analysis: Samples collected from the various parts of production are analyzed in the laboratory.
• At-line analysis: The instrument is placed close to the operation area (warehouse, production, or packaging) to reduce analysis time. At-line analysis is beneficial when a few production machines are to be controlled with a centrally placed NIR instrument. Retrieving
samples, accessing the spectrometer, and cleaning the accessories must be designed for unskilled personnel with little knowledge of analytics. Depending on the environment, the instrument may need to be sealed against dust and water.
• Online analysis: The analysis is done automatically without any actions from operators. In contrast to in-line analysis, the samples are taken out of the process for analysis or are measured in a bypass setup.
• In-line analysis: Here, the sampling accessory (for example, a fiber-optic probe) is integrated into the production line without manual sample collection.

For online and in-line analysis, dedicated NIR process instruments are available on the market with specific properties in terms of ruggedness, sealed enclosures and cabinets, cleaning-in-place capability, and integration with existing control systems.

Calibration Development

Successful calibrations always rely on good NIR measurements (such as the correct measurement technique, the right experimental parameters, and appropriate sample presentation). Of course, a good reference analysis for the component or parameter values to be determined by NIR spectroscopy is also important. But even if both are assured, good modeling of the calibration data with the optimal parameters is essential. The amount of time that is initially spent measuring samples, performing wet chemistry, and turning the raw data into a good calibration model is well worth the effort to achieve the ultimate goal: reliable predictions of sample properties, without sample preparation, in a matter of seconds.

NIR Calibrations and Reference Analysis

NIR is not a "primary" method; therefore, a proper calibration for the evaluation of NIR spectra is necessary. For a calibration, accurate reference values obtained from a reliable reference method with good reproducibility are very important. The stability and accuracy of an NIR method are directly linked to the performance of the reference method.
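One way to picture this link, under the simplifying assumption that the reference error and the NIR model's own error are independent and add in quadrature, is sketched below with assumed numbers; as discussed next, very large calibration sets can partly average out the reference error in practice.

# Illustrative only: if the reference-method error (SEL) and the NIR model's own
# error contribution combine independently, the observed prediction error is
# dominated by the larger of the two. All values are assumed for demonstration.
import math

sel_reference = 0.10   # % moisture, assumed standard error of the laboratory method
sep_nir_model = 0.05   # % moisture, assumed error contribution of the NIR model

sep_observed = math.sqrt(sel_reference**2 + sep_nir_model**2)
print(f"Observed SEP under these assumptions: {sep_observed:.2f} % moisture")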
There is a common rule that NIR cannot perform better than the reference method on which the NIR calibration is based. However, in the case of extremely large data sets, NIR could perform better because NIR has better reproducibility as a result of little or no sample preparation. This means the absolute information about the component is available from the spectra, and NIR predictions become slightly better because of averaging effects in the calibration step (that is, the effect of samples with reference values at the edge of the reference method's accuracy is compensated by a high number of samples). In addition, care must sometimes be taken over how well NIR and the chosen reference method match, that is, whether both techniques actually analyze the same quantity. An example is the simple analysis of water and moisture, respectively. Water shows strong absorptions in the near infrared and is therefore easy to determine, although differently bound water (adsorbed, absorbed, and crystalline) will give a global signal in NIR. The different kinds of water can often be calibrated separately, but depending on the reference method, different parts are referenced. This apparently simple determination of moisture is a concise example of the systematic differences that can occur between NIR and the reference method. In the laboratory, there are different ways to determine moisture, based either on the chemical conversion of water (Karl Fischer titration [KFT]) or on weight loss by drying in the microwave oven, the infrared moisture balance, or the classical drying oven method. Especially when drying the sample, it can happen that the NIR spectrum and the reference value do not fit well, and an apparently bad NIR method is the result. If the sample contains other volatile compounds, they are also lost during the drying process, so that the loss on drying is not equal to the evaporated water. On the other hand, the evaporation of moisture depends on the diffusion of water vapor from inside the solid toward the surface. The speed and efficiency of this diffusion process are influenced by the amount of sample, the particle size, and the particle packing. Therefore, the results change with time and sample preparation. The NIR spectrum, however, represents the moisture content absolutely and correctly. The error caused by the difference between moisture and loss on drying can have a great impact on the accuracy of the NIR calibration. Another important influence on the performance of an NIR method is the correct sampling technique. As for any other analytical method, it is important that the sample is representative of the entire batch. The
sample volume should be comparable to that given to the reference analysis. The optimum case would be an NIR measurement followed by the reference analysis of exactly the same sample and volume. The aspect of sample volume is often unaccounted for and can have huge effects, as shown in the following example. One important factor for the manufacturing of sweets is the moisture content in the raw candy mixture. It must be in a narrow range and is controlled in the laboratory by a KFT to an accuracy of 0.1%. To ensure a reliable analysis day and night even by untrained personnel, the analysis should be moved to NIR measurements in reflection with a solid probe, especially since an NIR spectrometer was already available at the goods in control. All samples were pretreated equally, measured with the probe, and referenced by KFT. When comparing the calibrations of sugar-free sweets and sweets containing sugar (Figure 4.14), huge differences could be seen regarding the accuracy. For the sugar-free sweets, the error of the calibration was in the range of the reference analysis, but for the sweets containing sugar, no satisfactory result could be achieved, even when using exactly the same methodology. The only difference between NIR and the reference method was the analyzed sample amount. For KFT, a sample amount of 400 to 500 milligrams (mg) was used, whereas the probe only records a sample amount of approximately 50 mg. Testing smaller amounts than 50 mg with the KFT also showed large differences in the results. So the uneven distribution of the moisture in the sugar-containing samples was the reason of the larger error, compared to the sugar-free samples. By moving the sample in a cup and therefore enlarging the measured sample amount for NIR, a far better calibration method could be achieved, as shown in Figure 4.15. It is also possible that the sample amount measured with NIR spectroscopy is larger than the amount measured with the reference method. The analysis of sausages like salami is a good example of NIR spectroscopy. With NIR, they are scanned in rotating Petri dishes in transmission or reflectance. Because of the rotation of the sample, a good averaging over a reasonably large sample is achieved. For the reference analysis like fat with Soxhlet extraction, only a few grams are used. Because of the local inhomogeneous distribution of fat and meat particles, the fat value derived from Soxhlet could be significantly different from the NIR value.
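The effect of the analyzed sample amount can be made concrete with a small simulation (illustrative assumptions only, not data from this study): if moisture is unevenly distributed on a scale of a few milligrams, results measured on roughly 50-mg aliquots scatter far more than results measured on roughly 500-mg aliquots.

# Illustrative simulation of the sample-amount effect: moisture is assumed to be
# unevenly distributed over 10-mg "grains"; smaller aliquots average over fewer
# grains and therefore scatter more. All numbers are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
grain_mass_mg = 10.0
true_moisture = 2.0      # % moisture of the whole batch (assumed)
grain_sd = 0.8           # % spread of moisture between individual grains (assumed)

def measured_moisture(aliquot_mass_mg, n_repeats=2000):
    """Return simulated moisture readings for repeated aliquots of a given mass."""
    n_grains = max(1, int(aliquot_mass_mg / grain_mass_mg))
    samples = rng.normal(true_moisture, grain_sd, size=(n_repeats, n_grains))
    return samples.mean(axis=1)

for mass in (50, 500):   # roughly probe spot vs. KFT sample amount
    results = measured_moisture(mass)
    print(f"{mass:4d} mg aliquot: SD = {results.std():.2f} % moisture")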
Figure 4.14. NIR calibrations of sweets on moisture (NIR value / % versus reference value / %): (top) sugar free, (bottom) sugar containing.
Figure 4.15. Moisture calibration of sugar-containing samples after recalibration with moving sample (NIR value / % versus reference value / %).
Figure 4.16. Sample areas measured with NIR by static and moving samples.
The relationship between the sample amount or sample size analyzed by NIR and by the reference method can be explained with Figure 4.16. As a visual example, the dark polymer pellets are the component of interest in a matrix of white pellets. With a small measurement spot for NIR, or a small amount taken for reference analysis, the results will vary between 0 and 3 pellets. With a rotating sample for NIR and a reasonable amount for reference analysis, an average value of about 1 is achieved, which is much closer to the real value.

Theoretical Basics of Multivariate Calibration
A calibration allows the operator of an analytical instrument to relate the instrument signal (independent variable) to properties or concentrations of the sample of interest. For many analytical techniques, such as high performance liquid chromatography (HPLC), atomic absorption spectroscopy (AAS), or inductively coupled plasma (ICP), the calculation of calibration models is a simple regression step. The measured readings are single values (peak intensity, peak area) that are directly correlated with the component concentration values. This so-called univariate analysis involves the use of a single instrument signal to determine a single analyte.
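As a minimal illustration of such a univariate calibration, the Python sketch below fits a straight line between a single instrument signal (for example, a peak area) and the corresponding reference concentrations. The numbers and array names are hypothetical and are not data from this chapter.

```python
# Univariate calibration sketch: one signal value per sample regressed against
# the reference concentration with an ordinary least-squares straight line.
import numpy as np

peak_area = np.array([0.12, 0.25, 0.37, 0.51, 0.64])   # single signal per sample
concentration = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # reference values

slope, intercept = np.polyfit(peak_area, concentration, deg=1)
predicted = slope * 0.30 + intercept                    # predict an unknown sample
```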
Figure 4.17. NIR reflectance spectra of meat and salami with different fat content. From top to bottom: minced meat, 2% fat; minced meat, 11% fat; and salami, 42% fat.
In NIR, the situation is different because the primary measurement result is a spectrum that can potentially contain information on multiple analytes. Figure 4.17 shows spectra of meat and salami sausages with 1,150 data points; there are several areas with information about the content of fat (peaks around 8,400, 5,700, and 4,300 cm−1) and water (10,200, 6,900, and 5,150 cm−1), for example. The performance of an NIR calibration can be improved by making use of this multiple information. Therefore, multivariate calibration approaches must be used, in which several independent variables are correlated with a concentration at the same time. The variables taken into account can be selected wavelengths of an NIR spectrum, spectral regions, or even a complete spectrum. The main advantage of these multivariate methods is the ability to calibrate for a component of interest without having to account for any interference in the spectra by other components or the sample matrix. A brief overview of the multivariate analysis, or so-called chemometric, approaches required to evaluate NIR data is provided below. For further information, refer to dedicated publications (Martens and Næs 1989, Massart et al. 1997, Massart et al. 1998, Siesler et al. 2001, Bakeev 2005).
As in other spectroscopic methods, the quantification of component concentrations or other properties is based on the Beer-Lambert law, which states that the absorbance is directly proportional to the concentration of a component. In theory, the absorbance at a single wavelength of a sample with multiple components is calculated as an additive function of each of the component concentrations.

A = ε ∗ c ∗ d    (4.1)
where A is the measured absorbance, ε is the wavelength-dependent molar absorptivity coefficient, c is the analyte concentration, and d is the path length. Here it is assumed that there is no interference in a spectrum between the individual sample components (that is, absorbance at a single wavelength is an additive function of the component concentrations). In addition, the concentrations of all the components in the samples must be known, even for future samples. For real samples, such simple calibration methods will fail because it is impossible to know the entire composition of a mixture. A solution to this problem is to rearrange Beer's law and to use the Multiple Linear Regression (MLR) or Inverse Least Squares (ILS) approach:

c = A/(ε ∗ d)    (4.2)
By combining the absorptivity coefficient ε and the path length d into a single constant p, this can also be expressed as:

c = p ∗ A + E    (4.3)
where E is a matrix of concentration prediction errors. Here the constituent concentrations are expressed as an additive function of absorbance values at selected wavelengths. This is the most common approach because of an important difference to Beer's law: no knowledge of the sample composition is needed beyond the components of interest. There are, however, important drawbacks directly related to the practical application of this approach. Many wavelengths show collinearity (that is, the absorbances in a spectrum tend to increase and decrease simultaneously according to the concentrations of the components). Furthermore, the number of selected wavelengths must
be limited to avoid an effect known as overfitting. Using some wavelengths reflecting the components of interest will improve the prediction accuracy. However, by adding more and more wavelengths, at some point the predictions will start to get worse, because the calibration equations become adapted more and more to the calibration spectra, including spectral noise that is unique to the calibration set but not present in the samples being analyzed. The selection of only a few representative wavelengths out of the several hundred available is a task for sophisticated algorithms and needs to be controlled carefully. One needs to select enough wavelengths for an accurate least squares model not affected by the collinearity of the spectral data, and few enough to avoid the overfitting effect. To overcome the problem of collinearity and to reduce the number of independent variables (wavelengths), an orthogonal transformation of the data is used for multivariate calibration, called Factor Analysis or Principal Component Analysis (PCA) (Malinowski and Howery 1980). Here a number of correlated variables (wavelengths) are transformed into a (much smaller) number of uncorrelated variables called principal components. The transformation is driven by looking for variance in the data set. It is assumed that the greatest variances in the calibration spectra represent the major changes in the different component concentrations of the mixtures. The first principal component (PC) or factor accounts for the largest part of the variance in the data set, and each succeeding PC accounts for as much of the remaining variance as possible. In this way, most of the useful spectral variance based on chemical differences is transformed into the so-called loadings of the first principal components or factors (Figure 4.18). Each individual spectrum can now be represented by a linear combination of these loadings. The coefficients related to a spectrum are called scores and are used as new latent variables in multivariate calibration. Performing a regression on the scores of selected PCs and the concentration values of the component of interest leads to a multivariate model based on the orthogonal latent variables instead of collinear wavelengths. This approach is called Principal Component Regression (PCR), which is simply a PCA followed by a regression step. The major advantage is the ability to use a reduced number of independent latent variables. The selection of a proper combination of the principal components, however, is a challenge in applying PCR (Frost and Molt 1998).
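The following minimal Python sketch shows the PCR idea (PCA on the spectra followed by a regression on the scores). It uses the scikit-learn library rather than any specific NIR software, and the array and function names are illustrative assumptions.

```python
# PCR sketch: reduce the collinear wavelengths to a few orthogonal scores,
# then regress the scores against the reference values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline, make_pipeline

def fit_pcr(X: np.ndarray, y: np.ndarray, n_components: int = 5) -> Pipeline:
    """Principal Component Regression: PCA on X (n_samples x n_wavelengths),
    then ordinary least squares on the resulting scores (latent variables)."""
    model = make_pipeline(PCA(n_components=n_components), LinearRegression())
    model.fit(X, y)
    return model

# Hypothetical usage:
# pcr = fit_pcr(calibration_spectra, moisture_reference, n_components=5)
# predicted = pcr.predict(new_spectra)
```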
Figure 4.18. Example of factorization of simple spectra into corresponding loadings and scores.
A related approach called PLS or PLSR (Partial Least Squares Regression) (Haaland and Thomas 1988, Martens and Næs 1989) has some important advantages over PCR, which make it the most commonly used multivariate technique in NIR spectroscopy. PLS actually uses the concentration information during the decomposition process. When calculating the factors, PLS takes advantage of the correlation that already exists between the spectral data and the component concentrations. That means only those parts of the variance that correlate with the concentration values are transferred into the factors. Therefore, for each component, separate PLS loadings and scores are calculated that store the specifically related variance of the data set. This makes multivariate calibration models based on PLS more specific and more robust, which is important at least for complex mixtures. All application results presented here are based on PLS multivariate models. For further algorithms, refer to the references on Locally Weighted Regression (LWR or LOCAL) (Shenk and Westerhaus 1997, Berzaghi et al. 2000), Artificial Neural Networks (ANN) (Næs et al. 1993, Despagne and Massart 1998), and Support Vector Machines (SVM) (Cogdill and Dardenne 2004).

Building Calibration Models
When the calibration samples have been selected, measured by the NIR spectrometer, and then analyzed by the corresponding reference method(s), the calibration model can be developed and validated. The validation of the calibration model is the most essential step, because it proves its ability to predict the desired component concentration or parameter. Such an evaluation is performed by predicting a certain number of samples with known analyte concentrations with the chemometric model. A comparison of the predicted values with the actual values shows the precision of the model. In general, two types of validation principles are used for PLS models: cross validation and external validation (also referred to as test set validation).

External Validation (Test Set Validation)
The sample set used in external validation consists of samples with known reference data that were not used for the model development. The samples must be completely independent. The predicted values of the independent samples are compared with those from the
reference method, often expressed in a plot of NIR-predicted versus reference values. An indicator of the average error of NIR predictions on future unknown samples is the Root Mean Square Error of Prediction (RMSEP). Good models are characterized by low RMSEP values, close to the error of the reference method. The advantage of external validation is that it is a true independent validation. The main disadvantages are that more samples are required and that valuable calibration data are set aside solely to check the predictive ability of the calibration models.
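A hedged sketch of a PLS calibration with an external (test set) validation is given below; `X_cal`, `y_cal`, `X_val`, and `y_val` are hypothetical arrays holding spectra and reference values, and the code uses scikit-learn only as one possible implementation.

```python
# PLS calibration with external validation: the RMSEP is computed on samples
# that were never part of the calibration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rmsep(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root Mean Square Error of Prediction."""
    return float(np.sqrt(np.mean((np.ravel(y_true) - np.ravel(y_pred)) ** 2)))

# pls = PLSRegression(n_components=7)                 # number of ranks (factors)
# pls.fit(X_cal, y_cal)                               # calibration set
# print("RMSEP:", rmsep(y_val, pls.predict(X_val)))   # independent test set
```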
Cross Validation
When the number of available calibration samples is small, a procedure called cross validation (also called internal validation) is often used. In this case, individual samples (defined by the user) are taken out of the calibration set. Using the remaining samples, an intermediate chemometric model is established and used to analyze the previously extracted samples. A comparison of the results with the actual concentration values shows how precisely the model predicts these samples. Because the samples are extracted beforehand, it is guaranteed that they are not known to the calibration model and are thus independent. An independent data set is very important; only in this way can the actual precision of prediction be assessed realistically. To assess the complete data set, the samples analyzed previously are returned to the data set, and a second set of test spectra is removed for analysis. This procedure of removing samples, analyzing them, and returning them to the calibration data set is continued successively until all samples or sample subsets have been analyzed once. A comparison of the resulting analysis values with the original reference data allows the calculation of the predictive error of the complete data system, the Root Mean Square Error of Cross Validation (RMSECV). For a cross validation, it is important to remove only a few samples at a time from the data set, because the model built from the remaining data must be very similar to the model created from the complete data. For data sets with fewer than 50 samples, it is strongly recommended to remove not more than one sample at a time for the cross validation. Cross validation and external validation should lead to comparable results. If this is not the case, too few samples were analyzed to establish a reliable method.
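The leave-one-out version of this procedure can be sketched as follows; again the arrays `X` and `y` are assumptions standing in for calibration spectra and reference values.

```python
# RMSECV sketch with leave-one-out cross validation: every sample is predicted
# by an intermediate model that was built without it.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def rmsecv(X: np.ndarray, y: np.ndarray, n_components: int = 7) -> float:
    """Root Mean Square Error of Cross Validation (leave-one-out splits)."""
    y_cv = cross_val_predict(PLSRegression(n_components=n_components), X, y,
                             cv=LeaveOneOut())
    return float(np.sqrt(np.mean((np.ravel(y) - np.ravel(y_cv)) ** 2)))
```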
Results and Parameters of Validation
NIR, too, is an analytical method with a certain error in prediction. Errors exist in all measurements and cannot be avoided. It is important to identify the accuracy that is needed for a plant to keep a process under control. For NIR, the main source of error is the reference method; on the other hand, the modeling leading to a PLS calibration must also be performed optimally to keep the error to a minimum. The result of an external or test set validation is the RMSEP. The RMSEP is the mean deviation of the predictions from the respective reference values calculated with the given model. The result of the cross validation is the mean deviation of all predictions, the RMSECV. The validation results (test set or cross validation) are displayed as a plot of predicted values versus reference values. Examples are given in the figures in the application section. The straight line in such plots does not represent the regression line but the line of identity, that is, the case where predicted and reference values are identical. The number of ranks is equal to the number of factors that were chosen during the multivariate calibration. In most cases, NIR calibrations have a rank below 10, but if difficult properties are calibrated or matrix effects have a strong influence, the number can be higher. Experience shows that the fewer ranks a model uses, the more robust it will be in the future. If two models have almost identical RMSEP or RMSECV values, the model with the fewer ranks should be favored, whereas a difference of one rank does not have a big influence. The R² value states how much of the variation in the reference values the model explains. The closer the value is to 100%, the better the model. The value is related to the RMSEP or RMSECV and to the calibration range: the smaller the RMSEP or RMSECV and the broader the range, the closer this number will be to 100%.

How Many Samples in a Calibration Data Set?
It is very important to include an adequate number of samples in the calibration set. In general, one should incorporate as many samples as are available. Moreover, the samples should be evenly distributed over the calibration range of interest. Whether or not there are enough samples to build a stable calibration model can only be determined by performing validation tests with an independent validation set. As a rule of thumb, one should start with a minimum of 10 to 15 samples for each independent chemical component or other source of variation. The American
Society for Testing and Materials (ASTM) provides guidelines that can be used to judge whether a calibration set contains sufficient samples (ASTM 1997).

Calibration Model Updating
When a calibration model is finished, it is good practice to check it periodically to ensure the validity of the results (Martens and Næs 1989). This will normally be done more frequently at the beginning of a calibration development and less frequently once there is more confidence in the robustness of the model. These routine checks are not only important for the maintenance of the calibration models, but also serve to demonstrate the capability of NIR spectroscopy and therefore to build up confidence among those using the technique in the laboratory. In addition, such checks help to ensure that the NIR calibration remains robust over time with regard to changes in the raw materials, in the production process, or in the recipes of final products.
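As a rough sketch of how the R² value discussed above relates predicted and reference values (expressed here as a percentage of explained variation), the following helper could be used alongside the RMSEP/RMSECV calculations shown earlier; it is an illustrative formula, not taken from any particular NIR software package.

```python
# Coefficient of determination in percent: 100% means the model explains
# all of the variation in the reference values.
import numpy as np

def r2_percent(y_ref: np.ndarray, y_pred: np.ndarray) -> float:
    residual = np.sum((y_ref - y_pred) ** 2)
    total = np.sum((y_ref - np.mean(y_ref)) ** 2)
    return float(100.0 * (1.0 - residual / total))
```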
Spectral Transfer
Since the late 1980s, NIR has been used more and more in general, and especially in the food industry, mainly because of improved and cheaper computation power. In addition, the spectrometers have changed in design and performance. Even today, many quite old instruments are still in service; on average, an instrument is used for 10 years before being replaced. Besides the cost of a new instrument, the development of new calibrations would be even more cost and time intensive. In many cases, it would take months or years to collect a new set of representative samples. Here a transfer of the existing calibration to the new instrument is of great benefit. A direct transfer to a new instrument is critical and will not work if the old and new instruments are based on different techniques (for example, dispersive NIR and FT-NIR). Even the same spectrometer technique gives no guarantee, because the new instrument may have different characteristics or sampling modules. In such cases, the only way to use the existing database is to transfer not the calibration model but the spectra. The spectra can be transferred and adapted to the characteristics of the new instrument by using transfer algorithms like Direct Standardization (DS) or Piecewise Direct Standardization (PDS) (Fearn 2001, Wang et al. 1991). Such algorithms
are able to deal with variations in the spectra caused by different instruments and sampling techniques. Sources of such variations are:
• spectral characteristics of baseline and band shape, determined by the principle of the instrument and the sampling accessory
• optical and digital resolution
• differences in absorbance response (y-axis)
• scale and data point grid on the x-axis (wavelength or wave number axis)
• x-axis precision (wavelength alignment), for example, between dispersive and FT instruments
A minimal sketch of a direct standardization transfer is given below.
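The sketch assumes a small set of transfer samples measured on both the master and the slave instrument; the piecewise (windowed) variant and any regularization are omitted, and all names are illustrative.

```python
# Direct standardization (DS) sketch: estimate a transformation matrix that
# maps slave spectra onto the master's spectral space by least squares.
import numpy as np

def fit_ds_transform(slave: np.ndarray, master: np.ndarray) -> np.ndarray:
    """Least-squares matrix F such that slave @ F approximates master.
    slave, master: (n_transfer_samples, n_points) spectra of the same samples."""
    F, *_ = np.linalg.lstsq(slave, master, rcond=None)
    return F

# F = fit_ds_transform(slave_spectra, master_spectra)
# corrected = new_slave_spectrum @ F   # usable with the master calibration
```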
By using modern techniques of spectral transfer, new concepts and business models for calibration development have become possible. Commercial calibration packages are available that are based on sample sets scanned on one type of NIR instrument (for example, the INGOT calibration packages from Central Laboratories in the United Kingdom and CRAW in Belgium). The spectral database is transferred to other types of instruments (for example, from a different supplier), and new, dedicated calibration models are calculated. Using this approach, a great deal of time collecting samples over the years, and a great deal of work and cost for scanning samples and wet chemistry, is saved.

Direct Calibration Transfer
It is important that a method created on one instrument does not lose its validity if an optical component or even the whole instrument is exchanged. Moreover, for the majority of larger companies, the transfer of calibration models from one central spectrometer (the so-called "master") to a network of external instruments all over the world ("slaves") is a precondition for using NIR. The costs for developing only a few chemometric models, whether for the first time or repeatedly, would easily be on the order of the price of the NIR spectrometer itself. Since differences can exist between two spectrometers of even the same type (see above), a standardization procedure is a requirement for most instrument technologies. Several approaches are described in the literature (Osborne et al. 1999, Tillmann 2000, Fearn 2001).
In general, there is always an error introduced by differences between instruments, mainly in the x-axis adjustment (wavelength accuracy) and the y-axis response (absorbance). Note that such differences accumulate in the predictions of multivariate methods, because these make use of many data points of a spectrum. Even when more and more data affected by instrument shifts or drifts are added to a calibration, the error of the final model can increase over time. Here the modeling itself has an influence on how sensitive the multivariate model is to such shifts. Differences in the absorbance response (y-axis) can be balanced, for example, by normalization as part of the data pretreatment sequence. Deviations in the x-axis between instruments are much harder to compensate without standardization using reference samples. A successful direct calibration transfer depends on how large this error is compared to the error of sampling and reference methods. For FT-NIR instruments, a direct calibration transfer is possible in most cases because of the very high accuracy and reproducibility of the x-axis adjustment, so the error between FT-NIR instruments is significantly lower than the overall reference analysis error.
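As one example of such a normalization pretreatment, the sketch below applies the standard normal variate (SNV) transform to each spectrum; SNV is used here only as an illustrative choice, not as the specific pretreatment prescribed by the text.

```python
# SNV pretreatment sketch: each spectrum is mean-centered and scaled by its
# own standard deviation, which reduces y-axis (absorbance response) offsets.
import numpy as np

def snv(spectra: np.ndarray) -> np.ndarray:
    """Row-wise standard normal variate: (x - mean(x)) / std(x) per spectrum."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std
```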
Applications of NIR in Food Industry
Applications in the food industry can be found everywhere along the entire production chain of various food products. In this chapter, the focus is on processed food products. The large field of NIR applications on food crops and agricultural products is beyond the scope of this chapter. For a comprehensive presentation of applications on grain, flour and milling products, cereals, oil seeds, and fruits and vegetables, refer to the book edited by Roberts and others (2004). In the following paragraphs, applications at different stages of the production process and for various food products are presented. For many food processes, the analysis of incoming raw materials and ingredients is the most important step, affecting all further steps of production later on. Examples are:
• the analysis of raw milk for standardization purposes in cheese production
• the adjustment of processed cheese recipes for the melting mixture, depending on the quality and composition of the various cheeses available on the market at that time
• the analysis of meat prior to the production of sausages, to achieve constant composition and quality
Currently, NIR is more and more involved in process control by analyzing intermediate products and also in the real-time analysis of final products. Examples of how to intervene and change parameters to secure the target quality are:
• the composition of chocolate paste after the conche, before molding into forms
• the control of mayonnaise for fat content and acidity
Final products are analyzed to identify products out of specification. Examples are:
• acidity and iodine value of fats and oils
• nutritional value of residual olive cakes for the feed industry
• fat and dry matter of yogurt
Meat and Meat Products
For the meat industry, knowledge of the composition of fresh meat and other ingredients is necessary to adapt the recipes so as to produce a constant quality of sausages, salami, and various other types of meat products. NIR is commonly used for the analysis of water, fat, and protein in minced meat samples to improve the efficiency of the production and maintain final quality. For the finished products, NIR is also used for the determination of additional parameters like salt content (Ellekjær et al. 1993) and pH. Here the standard methods, titration and pH measurement, are generally more accurate, but NIR is able to give a reasonable estimate simultaneously with the analysis of the major components. The measurement of meat and meat products is a good example of how the general conditions in production define the sample presentation and even the technique of the NIR measurement. The spectrometers are often placed in or close to the production area to keep reaction times short,
Figure 4.19. Cross correlation of water and fat content in raw meat and sausages (fat content / % versus water content / %), with r² = 0.86.
if the product runs out of specification. In these areas, glass is normally not allowed for safety reasons, which makes it impossible to use most of the common NIR accessories like quartz glass beakers or glass Petri dishes. Only Petri dishes made from polystyrene can be used, which have the additional advantage of being disposable. The use of polystyrene Petri dishes, however, leads to a strong limitation for reflection measurements. The absorptions of the plastic add strongly to those of the sample, because the light travels twice through the bottom of the Petri dish. Thus, several regions of the spectrum are lost for the evaluation because of too-high absorption values (for example, in the region of fat between 6,300 and 4,000 cm−1 [1,600 and 2,500 nm]). In this case, transmission measurements of the sample through the Petri dish with a Si-diode detector are recommended. This type of detector covers the region between 15,500 and 9,000 cm−1, which contains predominantly the overtones and combination bands of the fat and water components. Because the absorptions are very weak in this region, a sample thickness of 15 mm is not a problem. The sample is measured in a rotating Petri dish, which leads to a good averaging effect and covers much more of the sample than just measuring the surface in reflection. When setting up a calibration, care must generally be taken if components behave collinearly (that is, if they change in the same way). With meat and sausage, this is the case even for the main components fat and water (Figure 4.19). For some sample types, this relation is constant and therefore does not influence the stability of the calibrations.
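A quick way to check such collinearity before building a calibration is to compute the squared correlation of the two reference components, as sketched below with hypothetical fat and water reference arrays.

```python
# Collinearity check sketch: squared Pearson correlation of two reference
# components across the calibration samples.
import numpy as np

def cross_correlation_r2(a: np.ndarray, b: np.ndarray) -> float:
    """Squared correlation coefficient between two reference value arrays."""
    r = np.corrcoef(a, b)[0, 1]
    return float(r ** 2)

# cross_correlation_r2(fat_ref, water_ref)  # about 0.86 for the data in Figure 4.19
```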
Figure 4.20. Salami.
For methods that cover several types of meat or sausage, care has to be taken in the selection of the frequency ranges so that the calibrations for the different parameters are independent (no intercorrelation or cross-correlation). Correlating methods are not stable if the raw material or the recipe changes slightly. Practically speaking, the regions of the fat absorptions should not be used to calibrate water, or vice versa. Fortunately, the absorptions of fat and water lie in widely separated areas of the spectrum, so that a dedicated selection can be performed. (See Figure 4.20.) The variety of different meats and meat products has to be taken into account. More than 1,500 different kinds of sausage are claimed to exist, with an extremely wide range of recipes. German sausages can be divided into three categories: Brühwurst (fresh sausages), Kochwurst or Kochpökel (precooked sausages and ham), and Rohwurst (raw cured or keeping sausages such as salami). Depending on the accuracy desired, generalized methods can be developed for meat products, or product-specific methods (for example, only for salami) can be devised for a better estimation of the composition. Table 4.1 shows calibration results for different meat products.

Milk and Dairy Products
Today, NIR is already widely established for the analysis of dairy products and has drastically reduced the amount of wet chemical analysis
Table 4.1. Calibration results for meat and sausages.

                      Calibration
                      No. Samples   Range/%      No. PCs   r²      RMSECV/%
Meat & Sausages
  Water               145           22.9–74.5    5         99.54   1.1
  Fat                 143           1.5–85.9     5         99.58   1.1
  Protein             142           2.7–28.0     9         97.63   0.73
  Salt                101           1.7–5.0      8         97.69   0.17
Sausages w/o Salami
  Water               70            44.2–74.5    8         99.31   0.72
  Fat                 70            1.5–42.1     3         99.51   0.81
  Protein             70            10.2–21      7         99.82   0.40
  Salt                62            1.7–5.0      6         50.37   0.14
Low Fat Ham
  Water               23            72.1–74.5    6         91.3    0.19
  Fat                 23            1.5–4.6      4         99.54   0.20
  Protein             23            19.0–21.0    5         90.94   0.17
  Salt                20            1.8–2.5      4         52.14   0.12
required. In recent years, many dairy companies in many countries have merged, and the production of dairy products has been concentrated at larger facilities. The high output of modern plants can become very critical, because a lot of money can be wasted in a short time if a process runs out of control or does not follow specifications. In this context, NIR is becoming even more important, allowing fast analysis of an increasing number of samples. In addition, process control is currently of growing interest. It offers financial benefits by (1) avoiding bad product already during production and (2) increasing yield.

Milk
Milk is the starting point of any dairy product and is one of the most controlled food products in the world. The composition of milk changes from season to season and even from cow to cow, which makes a standardization step necessary to maintain milk quality for all further process steps in dairy production. For example, the most important step in ensuring the quality of cheese is to use milk with a defined composition, because the quality of the final cheese cannot be changed afterward.
Commonly, dedicated analyzers based on FT-IR technology with transmission flow cells are used to analyze milk and liquid products like whey and cream. Because of the high sensitivity of MIR to water, a narrow flow cell with a 6-micrometer (µm) path length is used. At such small dimensions, milk is a heterogeneous emulsion of fat in water, and an ultrasonic homogenizer is required to reduce the size of the fat droplets, especially in raw milk. In addition, the sample must be brought to a defined temperature to maintain the reproducibility of the analysis over time. The flow cell is cleaned automatically with a rinsing cycle, and afterwards pure water is used as a standard sample for the background measurement. Overall, FT-IR-based milk analyzers work well, but the complex instruments have noticeable maintenance costs. In the past, NIR was routinely used only to analyze finished milk products like cheese and yogurt. Recently, more and more interest has been shown in the analysis of milk by NIR, so that only one instrument in the laboratory is needed for all kinds of samples. NIR instruments are an alternative in general, as a backup for the FT-IR-based milk analyzers, or for small companies that want to invest in only one instrument. In the NIR region, milk is not easy to measure, but it is feasible (Tsenkova et al. 1999, Tsenkova et al. 2000). Some light is transmitted through the sample as in a clear liquid, but another portion of the light is reflected because of the opacity of the milk. Transmission measurements can be done with a small path length of 1 mm or less to avoid total absorbance due to the high water content and losses by scattering. Transflectance measurements are another option to collect both transmitted and reflected light. The path length of 1 mm, combined with a larger sample spot compared to FT-IR, allows the analysis even of raw milk without a homogenizer. The sample temperature must still be adjusted and controlled before the measurement. The sample is then pumped into a flow cell housed in a heatable vial holder, or a transmission or transflectance probe is held in the sample. Table 4.2 shows that NIR spectroscopy gives reasonably good results compared to the typical laboratory reproducibility.

Milk Powder
In the laboratory, the measurement of milk powder is a simple reflection measurement through the quartz glass bottom of a sample cup. By measuring from the bottom, reproducible sampling is possible because
Table 4.2. Calibration results for milk.

           Typical lab       Calibration                                  Validation
           reproducibility   No. Samples   No. PCs   r²      RMSECV      No. Samples   r²      RMSEP
Fat        0.04              122           8         99.76   0.046       18            99.89   0.044
Protein    0.04              111           7         88.50   0.050       18            67.03   0.048
Lactose    0.05              64            6         98.68   0.055       18            79.43   0.042
SNF        0.05              49            5         69.78   0.078       18            45.44   0.114
such a fine powder gives a homogeneous and compact surface in contact with the glass. Most milk powders are highly homogeneous, so that even with a measurement spot of 15 mm, a representative sample amount can be measured. However, the sample cup is normally rotated eccentrically during the measurement to collect more sample information and to level out slight inhomogeneities. For the lab analysis of fat, moisture, and protein in whey powder, the calibration results are given in Table 4.3. In-process quality control of solids in general, and especially of milk powder, is challenging. Compared to liquids, the transport and the measurement of solid materials in the process are very demanding, which also makes the use of spectroscopic methods difficult. For NIR analysis (for example, with fiber-optic probes), on the one hand, good and reproducible contact with the sample is necessary. On the other hand, the sample should be exchanged frequently to be able to monitor the changes in the process. It is therefore highly necessary to know the
Table 4.3. Calibration results for milk powder (whey powder) in a lab environment.

            Calibration
            No. Samples   Range/%      No. PCs   r²      RMSECV/%
Fat         148           3.4–9.4      4         99.17   0.09
Protein     193           72.4–79.1    9         95.70   0.17
Moisture    152           5.2–6.8      3         88.14   0.07
Figure 4.21. Automatic sampler for milk powder.
process stream and flow properties of the material to find the optimum position for the fiber-optic probe. Another difficulty regarding the analysis of milk powder is the high fat content. It often leads to a stationary grease film on the probe. Even regular cleaning during shutdown periods may not avoid such problems if the fat content of the milk powder is high. One possible solution is a process NIR system (Niemöller et al. 2004) with a single reflectance probe inserted into an automatic sampler (Figure 4.21). The sampler uses a pneumatic ram to take subsamples into a pipe (spool) in which the probe is located. The scanned sample is then returned to the process or diverted and collected for reference analysis. The advantage of using a sampler is better control of the sample position, which is very important for the repeatability and accuracy of probe-based analysis of solids in general. Moreover, it is possible to collect for reference analysis exactly the sample that had been scanned by the analyzer. Especially for high-fat types of milk powder, the probe tip must be cleaned regularly, which is easier in such a setup. The results obtained under process conditions are shown in Table 4.4 and are very comparable to the lab results (Table 4.3).
Table 4.4. Calibration results for milk powder under process conditions.

            Calibration                                               Validation
            No. Samples   Range/%      No. PCs   r²      RMSECV/%     No. Samples   r²      RMSEP/%
Fat         146           25.8–30.2    7         99.14   0.11         139           98.50   0.14
Protein     147           23.3–26.0    7         95.61   0.14         139           92.49   0.18
Moisture    153           2.1–3.4      4         94.54   0.06         140           93.30   0.08
Butter
An important aspect of butter production is the amount of butterfat in the finished product. In the United States, all products sold as "butter" must contain a minimum of 80% butterfat by weight, whereas European-style butters generally have a higher ratio, up to 85% butterfat. For cost efficiency, producers aim to run the process at the maximum of the allowed water content to save the more expensive ingredient, the butterfat itself. Close process control is therefore necessary, and it is crucial to obtain results fast so as to be able to intervene immediately if the product fails to meet the specifications. Compared to milk powder, the NIR setup for butter analysis is normally easier (Niemöller et al. 2004). In this case, an FT-NIR spectrometer system with an optical multiplexer operating in the 10,000 to 4,000 cm−1 (1,000 to 2,500 nm) region was installed with a single reflectance probe inserted downstream from the butter churn (Figure 4.22). A sampling valve was located close to the probe so that a representative scanned sample could be obtained. The aim was to analyze the moisture content in butter as well as the salt content of salted butter. Reference moisture tests were performed using the Kohman method, and salt estimations were performed using the Mohr titration method (Niemöller et al. 2004). The more difficult part of the work was to obtain a representative sample set (Figure 4.23), because the process constantly ran at about 16% water. It was therefore necessary to obtain samples during the start and the termination of the process. Only in this way could out-of-specification samples be taken to broaden the calibration range. The results of the calibration and validation of butter samples are shown in Table 4.5.
Table 4.5. Calibration results for butter.

                 Calibration                                               Validation
                 No. Samples   Range/%       No. PCs   r²      RMSECV/%    No. Samples   RMSEP/%
Salted
  Moisture       171           15.2–16.2     10        87.44   0.06        32            0.06
  Salt           64            1.16–1.47     7         80.86   0.03        32            0.04
Unsalted
  Moisture       382           15.2–16.9     9         79.22   0.06        79            0.07
Cheese
As for the sausages discussed above, there is a great abundance of cheese varieties (about 2,000 worldwide). The cheeses differ in the type of milk used, the way the curd is formed, the ripening process, and the final content of water and fat. A possible classification of important cheese varieties could be made according to the following guidelines:
1. Unripened Cheese
• Cottage cheese, cream cheese, and mozzarella (heated plastic curd), for example
Figure 4.22. Automatic sampler for butter.
Figure 4.23. Plot of NIR predictions versus reference values for moisture in butter (both in %).
2. Ripened Cheese
• Hard cheeses, such as Chester, Cheddar, Emmentaler, etc.
• Slicing cheeses, such as Edam, Gouda, Tilsiter, etc.
• Semisolid slicing cheeses, such as Roquefort, Gorgonzola, etc.
• Soft cheeses, such as Brie, Camembert, etc.
3. Processed Cheese
• Processed or melting cheese is made from hard cheeses by heating in the presence of 2–3% melting salts and other ingredients like milk powder and cream.
The first step of quality control in cheese production is the standardization of the milk used for the curd formation. As mentioned already, this is the most important step because of its extremely high impact on the final quality of the cheese products. However, there are other critical steps in the process where fast analysis tools like NIR are beneficial. After curd formation, the whey is pressed out to a certain amount of residual moisture. The molded pieces of curd can be analyzed for water (whey) content, or conversely for dry matter, by a contactless NIR sensor to adjust and control the following pressing step. This leads to a more uniform dry matter content before the ripening step and finally to a more consistent quality. The main area of NIR application regarding cheese is the control of the final product to be within specifications. Depending on the type of cheese, some aspects have to be taken into account to set up an NIR method. As for the sausages (see discussion above), the easiest way of measuring cheeses is the transmission mode with a silicon detector, with samples
in polystyrene Petri dishes. Using this method, glass material is avoided for safety reasons in or close to the process, and the sample can be discarded with the dish after analysis. The extreme differences of cheeses in consistency (solid, soft, gelatinous, grainy) and composition influence the sampling procedure for NIR and lead to different optical properties.
• Hard and slicing cheeses are ground to fill the Petri dish to a height of 15 mm. In hard cheeses, the NIR light is transmitted not only through the bulk but also by diffuse transmittance based on multiple scattering at the particles. The filling and compaction of the ground cheese is an important step, but with fairly wide limits. In the beginning, this step can be controlled by checking the weight, but after a short time, operators are used to performing it with adequate repeatability.
• Other cheeses, like soft cheeses, can be spread in a Petri dish without any preprocessing. Because of the higher optical density compared to ground hard cheeses, sometimes the lids of Petri dishes are used for transmission measurements with a sample thickness of about 6 mm.
• Some cheeses, like mozzarella with a high content of residual whey, or special products like feta with edible oil added, are sometimes not easy to present to the NIR instrument with high repeatability.
It is clear that for each different way of sampling (Petri dish or lid), different calibration models are required. Even for the same sampling, the setup of calibrations must be done carefully in terms of combining different types of cheeses. Combining is beneficial to gain a broader range of component values, because a certain type of cheese may have limits for fat and dry matter that are too narrow to set up a dedicated calibration model. On the other hand, only similar types of cheese should be part of a calibration to keep the errors of prediction low. Hard and slicing cheeses can easily be handled with joint calibration models, whereas soft cheeses must be selected carefully to form different sets of calibrations. Another option for analyzing cheeses is the reflectance mode, which is independent of sample thickness and therefore makes sampling easier. Reflectance is only feasible if glass Petri dishes are allowed and cleaning them is acceptable. Depending on the sample homogeneity, scanning the surface by reflectance measurement gives results equal to the transmission mode. Compared to transmission with the limited spectral range of a silicon detector, the reflectance measurement can be an advantage because of the multiple pieces of information in the
Table 4.6. Calibration results for hard and slicing cheeses.

              Calibration
              No. Samples   Range/%      No. PCs   r²      RMSECV/%
Fat           110           15.4–34.3    4         99.84   0.26
Dry Matter    110           48.5–65.9    6         99.29   0.35
broader spectral range of the standard NIR detectors (InGaAs or PbS) used here. Processed or melted cheese is a special case even for the application of NIR (Blazquez et al. 2004). The main ingredients are hard cheeses, which are analyzed by NIR to obtain the component values needed to adapt the recipes for the formulation step. The raw materials (hard cheeses, melting salts, and other ingredients) are mixed prior to the melting step. The analysis of the final mixture is another good example of the control of an important process step by NIR. Checking the final mixture before melting is the last point at which the composition of the final product can be changed if necessary. This is the critical step for meeting the final specifications. Until the mixture is analyzed, the mixer is blocked, so analysis time becomes the limiting factor for the overall throughput. In this case, NIR is greatly beneficial with its short analysis time and the capability to be used at-line by process operators. See Table 4.6.

Yogurt
The worldwide sales of yogurt have shown a dramatic increase in recent years. Yogurt is a cultured product with a short shelf life; it is not difficult to produce, but high throughput is required considering the small unit size and low profit margins. Also, many companies produce a wide range of flavors and textures, from plain fruit yogurts to chocolate-flavored yogurt drinks. With NIR spectroscopy, the common quality parameters like fat and dry matter can be determined very quickly (for example, at-line in the production area). Interestingly, only one calibration model per parameter is enough, no matter what flavors or ingredients are chosen. Yogurt with chocolate flakes can be measured with the same model as strawberry or hazelnut yogurt. This is due to the low sensitivity of NIR compared to other spectroscopic techniques and to the use of chemometrics.
Table 4.7. Calibration results for yogurt.

              Calibration
              No. Samples   Range/%      No. PCs   r²      RMSECV/%
Fat           56            0.2–8.4      2         99.61   0.14
Dry Matter    56            13.8–31.7    4         99.01   0.36
Depending on whether glass is allowed for sampling at the location of the spectrometer, different techniques can be used. If Food and Drug Administration (FDA) or Hazard Analysis and Critical Control Point (HACCP) requirements raise no objections, glass beakers or Petri dishes can be used for the analysis of yogurt in reflection on the integrating sphere. However, if glass is an issue, the use of polystyrene Petri dishes is advised. In this case, transmission measurements of the sample through the Petri dish with a Si-diode detector are recommended. See Table 4.7.
Mayonnaise, Edible Oil, and Olives

Mayonnaise
Mayonnaise consists mainly of oil, water, and emulsifiers. NIR spectroscopy is used to control the amount of fat in the finished mayonnaise product as well as the dry matter. The mayonnaise under investigation in this study is a high-fat product, containing mainly rapeseed oil (78%) and water (14%). The rest of the composition is egg matter, salt (1%), sugar, mustard, nature-identical aroma, and β-carotene as a vegetable coloring agent. For the calibration model, 21 samples were prepared in the lab, with a total preparation time of 15 minutes. The samples covered a range between 73.54 and 82.11% fat by weight. In addition, another 6 samples were considered for the analysis, of which 4 samples around the target value of 78% fat by weight were taken from normal production. The quantitative determination of the fat content of mayonnaise was performed by Soxhlet extraction, with a time demand of about 10 hours per analysis. Diethyl ether was used as the solvent, and the amount of extracted fat was calculated from its weight relative to the weighed sample input. The accuracy of this method is 0.2% absolute with respect to the measured
Table 4.8. Calibration results for mayonnaise.

              Calibration
              No. Samples   Range/%      No. PCs   r²      RMSECV/%
Fat           21            73.5–82.1    3         99.11   0.24
Dry Matter    21            80.0–87.1    5         98.95   0.22
value. The values for the dry matter content were obtained by infrared drying. The NIR spectra of the mayonnaise samples were acquired with an integrating sphere, collecting the scattered radiation from the emulsion in a cup. The advantage of this setup is the relatively large measurement area of approximately 15-mm diameter, which is further enhanced by a rotating device for inhomogeneous samples. All of the mayonnaise samples were filled into flat Petri dishes, which were measured successively. The time for the measurement and the quantitative evaluation of the NIR spectra is about 9 seconds (no sample preparation required). This calibration model was applied to both lab and production samples, and the amount of fat predicted was found to be in good agreement with the results from the classical Soxhlet and drying methods, as shown in Table 4.8. For mayonnaise it is beneficial to analyze the mixture of oil, water, and the other ingredients even in-line for real-time process control. This can be done, as in the case of butter, with a simple installation of a reflection probe with a fiber bundle. Unfortunately, for high-fat mayonnaise a layer of fat can appear on the probe window, which causes drift over time in the prediction of fat. In such situations, optical methods reach a natural limit.

Iodine Value for Edible Oils
The iodine value (IV) is a measure of the total number of unsaturated double bonds present in fats and oils. It is generally expressed as the number of grams of iodine that will react with the double bonds in 100 grams of fat or oil. A high-IV oil contains a greater number of double bonds than a low-IV oil. Fats with a high iodine value are usually soft or liquid and are less stable to oxidation. The American Oil
Table 4.9. Calibration results for iodine value in edible oils.

        Calibration
        No. Samples   Range    No. PCs   r²      RMSECV
IV      479           0–186    10        99.96   1.1
Chemists' Society's (AOCS) recommended method D1959-97 (Wijs procedure) is widely used for the determination of IV. The method involves the addition of halogen in the presence of potassium iodide and the titration of the liberated iodine with sodium thiosulfate using a standard starch solution as the indicator. This analysis takes at least 30 minutes and has to be performed on samples pulled out of the process line. NIR spectroscopy, on the other hand, is fast (analysis time on the order of less than 1 minute) and does not require the use of any solvents or reagents. Measurements in the laboratory were performed by the AOCS Cd1e-1 method using an FT-NIR spectrometer with 8-mm disposable vials. The results for calibration and validation can be found in Table 4.9.

Olives and Olive Oil
The vast majority of olives grown all over the world are used for the manufacture of olive oil. The value of an olive crop is mostly determined by its oil content. Depending on the time of the harvest and the olive type, the olive oil content may vary between 10 and 30%. Determining the exact oil content is essential for farmers and industry alike to estimate the value of the collected olives and to optimize further harvests. Traditionally, the oil content is determined by Soxhlet extraction, a wet chemical method requiring a large amount of solvents. The solvent usage creates health and safety risks as well as environmental problems and is therefore becoming increasingly unacceptable to the industry. Furthermore, the results are operator dependent, and the procedure is slow compared to spectroscopic methods. The measurement was carried out in diffuse reflection on an FT-NIR spectrometer equipped with an integrating sphere (Behmer et al. 2006). A sample rotator was used to enhance the averaging effect of the integrating sphere. The reference values for the oil content were obtained using the Soxhlet extraction method. The reference values for moisture
Table 4.10. Calibration results for olive paste.

            Calibration                                            Validation
            No. Samples   Range/%      No. PCs   r²      RMSEE/%   No. Samples   RMSEP/%
Oil         372           12.5–26.9    5         95.05   0.67      342           0.67
Humidity    386           43.4–60.5    7         98.30   0.53      —             —
were obtained by drying in the oven and reweighing. The results of the olive paste analysis can be found in Table 4.10. A second application in this field was the determination of the acidity of olive oil (Behmer et al. 2006, Garrido-Varo et al. 2000). The quality of an olive oil is a major economic issue; therefore, adulteration is a common concern. The acidity is, apart from the organoleptic evaluations, a major criterion for the classification of olive oil into the categories "Virgin" and "Extra Virgin." According to the current Council Regulation (EC) 1513/01, the maximum level of free acidity of an extra virgin olive oil must not be higher than 0.8 gram per 100 grams (0.8%). The acidity level in the oil is increased if the olives were attacked by insects, if they were collected from the ground instead of straight from the tree, or if there was a time lag between harvest and processing to oil. The analysis was performed on olive oil samples with acidity values in the range of 0.1 to 1.1 degrees. The reference values were obtained by wet chemical analysis. The samples were measured in transmission with an FT-NIR spectrometer with a temperature-controlled sample compartment in 8-mm glass vials. (See Figure 4.8.) For calibration results, see Table 4.11. FT-NIR spectroscopy is used to determine the most important quality parameters of water and oil content in milled olive paste with an
Table 4.11. Calibration results for olive oil.

            Calibration
            No. Samples   Range/%    No. PCs   r²      RMSECV/%
Acidity     199           0.1–1.1    10        99.12   0.02
excellent degree of accuracy. NIR also offers an excellent prediction rate for the analysis of acidity, the key parameter for the quality of olive oil. With its ease of use, rapid analysis time, and lack of sample preparation, it is a useful tool for the quality control of extra virgin olive oil, but it will never be able to replace organoleptic testing.

Chocolate
Chocolate is a valuable product with comparatively expensive ingredients. For quality control reasons and to avoid wasting such ingredients, it is of high interest to control the critical steps in the process. The art of chocolate making requires, for example, carefully mixing a variety of ingredients into the chocolate liquor. The determination of the fat content is particularly important, because cocoa butter is the most important and a very expensive ingredient. To save money by using a minimum of cocoa butter and still maintain the quality, a precise method is needed for an accurate fat analysis. The reference lab method is Soxhlet extraction, which takes 2 hours and is too slow to react adequately to deviations in the process. In addition, at a cost of 13 € per Soxhlet analysis, the approximately 5,000 samples a year are a considerable cost factor. NIR is a tool that saves time and money. Chocolate production is a batch process with several production lines running in parallel. The yield is most affected when the process is controlled at the conche. Here the final composition can be checked before the chocolate is brought into its final form. The NIR instrument should be installed at-line to be able to analyze samples from the different production lines. The measurement accessory for reflection has to be appropriate for the pasty samples and must allow easy handling by various operators. A sampling cup is not the right choice because it is not easy to fill with pasty chocolate and is difficult to clean. The best solution was an optical probe with a fiber bundle that can easily be stuck into the sample and afterwards cleaned simply by wiping the tip of the probe. In Table 4.12 and Figure 4.24, the results of a method for analyzing fat in chocolate paste and chocolate liquor are shown. One general method is used for more than 100 recipes, and it is used in two factories. This is a good example of a global method that can be transferred to another site, saving all the work of re-creating such a method there. Furthermore, other properties like moisture, protein, and sugars can be
Table 4.12. Calibration results for chocolate.

        Calibration                                     Validation
        No. Samples   Range/%      r²     RMSECV/%      No. Samples   r²      RMSEP/%
Fat     486           26.2–65.9    99     0.29          290           99.9    0.30
analyzed in chocolate paste and chocolate liquor. Other applications are used to control other important steps, like the lecithin blending of the chocolate.

Beverages
NIR technology is also being used more and more for various applications in the beverage industry. Currently, no product is put on the shelf without passing intensive quality tests. The following examples show the wide span of applications in the beverage industry.

Caffeine in Instant Coffee
The aim was to replace the HPLC analysis for monitoring the caffeine level in instant coffee, in order to reduce the analysis time and the chemicals used; in general, a method was needed that was easier and less prone to individual errors. NIR spectroscopy fulfilled those requirements. The coffee samples were measured on an integrating sphere with a sample rotator to average out different particle sizes and packing effects. It was shown that even at low concentrations, a good correlation was found
Figure 4.24. Validation with independent samples of the calibration of fat in chocolate (NIR value / % versus reference value / %).
Table 4.13. Calibration results for instant coffee.

            Calibration
            No. Samples   Range/%    No. PCs   r²      RMSECV/%
Caffeine    20            0.8–3.6    4         99.57   0.08
between the reference values from the HPLC analysis and the NIR spectroscopic values. See Table 4.13.

Alcohol in Alcoholic Beverages
The standard method for testing the alcohol content of beverages is currently based on density measurements. This technique, though very accurate, is invasive, time consuming, and difficult to perform online. NIR spectroscopy is a good option here, especially because the alcohol content can be measured in the process itself with fiber-optic transmission probes, guaranteeing close control with minimum manual intervention by the personnel. An array of different alcoholic beverages ranging from cider to whiskeys was analyzed using a fiber-optic transmission probe with an optical path length of 1 mm. The concentration of alcohol in percent volume per volume (V/V) ranged from 4 to 60%. The accuracy of the NIR calibration (Table 4.14) was equally good compared to the manual density measurements. Again, NIR is able to analyze major components in samples of very different composition, thanks to the high information content of the spectra and the assistance of the multivariate calibration approach. Even during the production of ethanol, for example by fermentation processes (Table 4.15), NIR can be used to detect and control the increase of the ethanol content over time (Livermore et al. 2003). This can be done at-line with samples taken from the process, which is suitable for a process that
Alcohol
No. Samples
Range/%
No. PCs
r2
RMSECV/%
18
4.3–59.7
5
99.99
0.18
116
Nondestructive Testing of Food Quality
Table 4.15. Calibration and validation results for ethanol in fermented corn mash. Calibration No. No. Samples Range/% PCs Ethanol
111
0–12
r2
5 99.86
Validation No. RMESCV/% Samples r 2 RMSEP/% 0.14
40
99.8
0.15
is well understood and running for hours. On the other hand, in-line monitoring with a transflectance probe is advantageous to determine automatically the endpoint of the fermentation process.
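The calibration and validation statistics quoted in Tables 4.12 to 4.15 (r2, RMSECV, RMSEP) are straightforward to reproduce once reference values and NIR predictions are available. The following is a minimal sketch, not tied to any particular vendor software; the numbers are illustrative placeholders, and r2 is reported as a percentage to match the tables.

```python
import numpy as np

def validation_stats(y_ref, y_pred):
    """Return r2 (as a percentage) and RMSEP for an independent validation set."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_ref - y_pred
    rmsep = np.sqrt(np.mean(residuals ** 2))          # root mean square error of prediction
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y_ref - np.mean(y_ref)) ** 2)
    r2 = 100.0 * (1.0 - ss_res / ss_tot)              # coefficient of determination, in %
    return r2, rmsep

# Illustrative call with made-up fat values (% w/w). RMSECV is computed the same
# way, but from cross-validation predictions on the calibration set.
print(validation_stats([30.1, 42.5, 55.0, 61.2], [30.4, 42.1, 55.3, 60.9]))
```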
Conclusions

NIR is definitely a powerful tool that should not be ignored in the food industry. Water and common constituents such as starch, protein, oil, fiber, ash, acids, sugars, and ethanol can be monitored quickly, easily, and reliably by NIR spectroscopy at every stage of the operation. The quick and reliable NIR analytical result can help the food industry keep raw materials, production processes, and finished products in control. NIR is a tool of the future that will replace some of the older and slower techniques, mainly those related to wet chemistry. NIR technology can also assist in research projects, where more analyses can be done with fewer resources. Many samples can be screened in a short time to select noticeable samples for further investigation using more expensive or time-consuming methods. There are exciting times ahead for the next generation of NIR spectrometers, which will continue to improve as hardware and software become more advanced and as the precision and accuracy of NIR technology steadily increase. One of the biggest hurdles in implementing NIR methods is generating reliable calibration models after the NIR instrument is purchased. Today, and even more so in the foreseeable future, instrument vendors or companies that specialize in NIR model development may be able to provide turnkey calibrations for the food industry, leaving more time for companies to focus on optimization of operations.
References ASTM. 1997. Annual Book of ASTM Standards. Standard Practices for Infrared, Multivariate, Quantitative Analysis. Vol. 03.06. 827–852. Bakeev KA, Ed. 2005. Process Analytical Technology. Blackwell Publishing. Behmer D, A Montasell, and C Villar Pascual. 2006. Applications with a Mediterranean flavour—Quality control of olives and olive oil with FT-NIR. Near Infrared spectroscopy: Proceedings of the 12th International Conference, NIR Publications, Chichester, UK. Berzaghi P, JS Shenk, and MO Westerhaus. 2000. LOCAL prediction with near infrared multi-product databases. J. Near Infrared Spectrosc. 8:1–9. Blazquez C, G Downey, C O’Donnell, D O’Callaghan, and V Howard. 2004. Prediction of moisture, fat and inorganic salts in processed cheese by near infrared reflectance spectroscopy and multivariate data analysis. J. Near Infrared Spectrosc. 12:149–158. Cogdill RP and P Dardenne. 2004. Least-squares support vector machines for chemometrics: an introduction and evaluation. J. Near Infrared Spectrosc. 12:93–100. Despagne F and DL Massart. 1998. Neural networks in multivariate calibration. Analyst 123:157R–178R. Ellekjær MR, KI Hildrum, T Næs, and T Isaksson. 1993. Determination of the sodium chloride content of sausages by near infrared spectroscopy, J. Near Infrared Spectrosc. 1:65–75. Fearn T. 2001. Review: Standardisation and calibration transfer for near infrared instruments: a review. J. Near Infrared Spectrosc. 9:229–244. Frost JF and K Molt. 1998. Use of a genetic algorithm for factor selection in principal component regression. J. Near Infrared Spectrosc. 6:185–190. Garrido-Varo A, C Cobo, J Garcia-Olmo, MT Sanchez-Pineda, R Alcala, JM Horcas, and A Jimenez. 2000. The feasibility of near infrared spectroscopy for olive oil quality control. In: AMC. Davies and R Giangiacomo, Eds. Near Infrared spectroscopy: Proceedings of the 9th Int. Conference, NIR Publications, Chichester, UK. Griffiths PR and JA de Haseth. 1986. Fourier Transform Infrared Spectrometry. John Wiley & Sons, New York. Haaland DM and EV Thomas. 1988. Partial least squares methods for spectral analyses. 1. Relation to other quantitative calibration methods and the extraction of qualitative information. Anal. Chem. 60:1193–1202. Livermore D, Q Wang, and RS Jackson. 2003. Understanding near infrared spectroscopy and its applications in the distillery. In: KA Jacques, TP Lyons, and DR Kelsall, Ed. The Alcohol Textbook, 4th edition, Nottingham University Press, Nottingham, UK. Malinowski EH and DG Howery. 1980. Factor Analysis in Chemistry, John Wiley & Sons, New York. Martens H and T Næs. 1989. Multivariate Calibration. John Wiley and Sons, New York. Massart JA, BGM Vandeginste, LMC Buydens, S De Jong, PJ Lewi, and J SmeyersVerbeke. 1997. Handbook of Chemometrics and Qualimetrics, Part A. Elsevier.
Massart JA, BGM Vandeginste, LMC Buydens, S De Jong, PJ Lewi, and J SmeyersVerbeke. 1998. Handbook of Chemometrics and Qualimetrics, Part B. Elsevier. Næs T, K Kvaal, T Isaksson, and C Millera. 1993. Artificial neural networks in multivariate calibration. J. Near Infrared Spectrosc. 1:1–11. Niem¨oller A, D Behmer, D Marston, and B Prescott. 2004. Online analysis of dairy products with FT-NIR spectroscopy. In: AMC. Davies and A Garrido-Varo, Ed. Near Infrared spectroscopy: Proceedings of the 11th Int. Conference, NIR Publications, Chichester, UK. Osborne BG, Z Kotwal, IJ Wesley, L Saunders, P Dardenne, and JS Shenk. 1999. Optical matching of near infrared reflectance monochromator instruments for the analysis of ground and whole wheat. J. Near Infrared Spectrosc. 7:167–178. Roberts CA, J Workman Jr, and JB Reeves III. 2004. Near-infrared spectroscopy in Agriculture. ASA, CSSA and SSSA Publications, Madison, Wisconsin, USA. Shenk JS and MO Westerhaus. 1997. Investigation of a LOCAL calibration procedure for near infrared instruments. J. Near Infrared Spectrosc. 5:223–232. Siesler HW, Y Ozaki, S Kawata, and HM Heise, Eds. 2001. Near-Infrared Spectroscopy: Principles, Instruments, Applications. Wiley-VCH, Weinheim. Tillmann P. 2000. Networking of near infrared spectroscopy instruments for rapeseed analysis: a comparison of different procedures. J. Near Infrared Spectrosc. 8:101– 107. Tsenkova R, S Atanasova, K Toyoda, Y Ozaki, K Itoh, and T Fearn. 1999. Near Infrared Spectroscopy for Dairy Management: Unhomogenized Milk Composition Measurement. J. Dairy Science, Vol. 82, 11, 2344–2352. Tsenkova R, Atanassova S, K Itoh, Y Ozaki, and K Toyoda. 2000. Near Infrared Spectroscopy for Biomonitoring: Cow Milk Composition Measurement in a Spectral Region from 1100 to 2400 Nanometers. Journal of Animal Science, 78:515–522. Wang Y, D Veltkamp, and BR Kowalski. 1991. Multivariate instrument standardization. Anal Chem. 63:2750–2756. Williams PC. 2001. Implementation of near infrared technology. In: Near-infrared technology in the agricultural and food industries. 2nd ed. American Association of Cereal Chemists. 153–170. Williams PC and K Norris. 2001. Variables affecting near-infrared spectroscopic analysis. In: Near-infrared technology in the agricultural and food industries. 2nd ed. American Association of Cereal Chemists. 171–199.
Chapter 5
Application of Mid-infrared Spectroscopy to Food Processing Systems
Colette C. Fagan and Colm P. O’Donnell
Introduction

Mid-infrared (MIR) spectroscopy is based on the absorption of radiation in the 4,000 to 400 cm−1 region of the electromagnetic spectrum. An MIR spectrum of a food product will reveal information pertaining to the molecular bonds present and hence provide details of its molecular structure. Therefore, MIR spectroscopy may be used in both the qualitative and quantitative analysis of a food product. Spectroscopy has been defined as the interaction of radiation with matter (Pomeranz and Meloan 1994). MIR spectroscopy has been widely used in the pharmaceutical industry for the identification of chemical compounds as a result of the direct link between the chemical nature of a compound and its absorption of MIR radiation. Exposure to infrared radiation causes the excitation of covalent bonds within a molecule, resulting in promotion from the ground energy state to higher rotational or vibrational levels. The wavelength at which a vibration occurs can be associated with a particular bond type and mode of vibration (that is, stretching or bending). The vibrational energy is also directly related to the strength of the bond and the mass of the molecular system (Reh 2001). This allows for the identification of specific chemical entities. The food industry today is faced with a number of challenges. Consumers are demanding food products that are of both high quality and consistency. Consumers, as well as producers and distributors, are also
concerned about the issue of product safety, authenticity, and traceability. Therefore the food industry requires techniques that will assist in achieving the objectives of high quality, consistency, and authentication as well as facilitating competitiveness and cost efficiency. As in any manufacturing process, high quality food products can only be achieved through precise control and monitoring of both quality indices and the factors that affect the final quality of the product. In 2001 the Food and Drug Administration (FDA) launched an initiative with the pharmaceutical industry, the goal of which is to design and develop processes that can consistently ensure a predefined quality at the end of the manufacturing process. The concept, called Process Analytical Technology (PAT), is defined by the FDA as a system for designing, analyzing, and controlling manufacturing through timely measurements of critical quality and performance attributes of raw and in-process materials and processes, with the goal of ensuring final product quality. This concept applies not only to the pharmaceutical industry but also to the food processing industry. Therefore, the food processing industry requires tools for real-time control of production lines to determine if in-process material, during a given processing step, meets the necessary compositional or functional specifications of each predetermined quality standard (Karoui et al. 2006a). Advances in MIR spectroscopy (Fourier transform infrared [FTIR] spectroscopy) and in sample presentation techniques (attenuated total reflectance crystals) have led to increased interest in MIR spectroscopy because it is a rapid, nondestructive, relatively low-cost method that provides a great deal of information with only one test. These attributes make MIR spectroscopy an ideal PAT tool. A number of studies have assigned various food constituents (lipids, amides, moisture, sugars) to specific bands in the MIR spectra. A selection of these regions and their associated modes of vibration for some food constituents is given in Table 5.1. This principle has meant that MIR spectroscopy can provide rapid characterization of food products; hence, this technology has been applied to the prediction of composition, quality, and authenticity of a diverse range of food products, including meat, cheese, and apples. There is a growing body of work in which MIR spectroscopy has been used in the prediction of numerous attributes of food products (McQueen et al. 1995, Downey 1998, Chen et al. 1998, Irudayaraj et al. 1999).
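Two standard relations underpin the discussion above and the band assignments in Table 5.1; they are given here for reference as textbook expressions rather than results from the cited studies. The first, the harmonic approximation for a vibrating pair of atoms, shows why the vibrational wavenumber depends on bond strength and on the masses involved; the second, the Beer–Lambert law, is the basis for relating band absorbance to constituent concentration in quantitative work.

```latex
\[
\tilde{\nu} \;=\; \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}},
\qquad \mu = \frac{m_1 m_2}{m_1 + m_2}
\]
\[
A \;=\; \log_{10}\!\frac{I_0}{I} \;=\; \varepsilon\, l\, C
\]
```

Here k is the bond force constant, μ the reduced mass of the two atoms (masses m1 and m2), c the speed of light, A the absorbance at a given wavenumber, ε the molar absorptivity, l the path length, and C the concentration of the absorbing constituent.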
Table 5.1. Selected molecular group mid-infrared absorption frequencies.

Peak wave number (cm−1)         Functional group        Mode of vibration       Food constituent    Food product

Fingerprint region
1,036, 1,088                    C-O                     Stretch                                     Cheese
1,060                           C-O                     Stretch                 Carbohydrates       Apple
900–1,200                       C-O, C-C, O-H           Stretch                 Carbohydrates       Apple
1,115–1,170                     C-O                     Stretch                                     Cheese
1,232                           C-H                     Bend                                        Cheese
1,240                           C-O                     Stretch                                     Cheese
1,371                           C-H                     Bend                                        Cheese
1,274, 1,372, 1,445, 1,486      O-C-H, C-C-H, C-O-H     Bend                                        Cheese
1,400–1,477                     C-H                     Bend                                        Cheese

Functional group region
1,535–1,570                     Amide II                Stretch                 Protein             Cheese
1,620–1,690                     Amide I                 Stretch                 Protein             Cheese
1,640                           O-H                     Bend                    Moisture            Cheese
1,600–1,900                                                                     Organic acids       Apple
1,700–1,765                     C=O                     Stretch                 Lipids              Cheese, Oil
2,869                           CH2                     Symmetric stretch       Lipid               Cheese
2,926                           CH3                     Antisymmetric stretch   Lipid               Cheese
3,047–3,703                     O-H                     Stretch                 Moisture            Cheese
Although MIR spectra (Figure 5.1) contain a great deal of information on the molecular makeup of a food product, all spectroscopic signals of a molecular group are strongly influenced by neighboring molecular groups (Reh 2001). The complexity of food substances adds to these difficulties, as the presence of various substances can result in peak shifts. Although this presents a challenge in developing calibration models, powerful statistical techniques such as chemometrics can be applied. Chemometric techniques such as principal component analysis (PCA) and partial least squares (PLS) regression can be employed
Figure 5.1. Mid-infrared spectra of processed cheese, goat’s milk, and olive oil (absorbance plotted against wavenumber, 930–3,930 cm−1).
to compute a new smaller set of variables, which are linear combinations of the spectral data, to be used in the prediction model.
Equipment

Fourier Transform Infrared Spectroscopy

In recent decades the development and improvement of instrumentation have increased the feasibility of applying MIR spectroscopy to the precise control and monitoring of a manufacturing process. Fourier transform (FT) spectrometers (Figure 5.2) provide the advantages of greater sensitivity and speed of spectral acquisition, and employ relatively simple sample presentation techniques (Downey 1998). The main component of an FTIR spectrometer is the interferometer (Figure 5.3), which contains a fixed mirror, a movable mirror, and a beam splitter. Traditionally, the amount of energy absorbed by a substance as the frequency of the infrared radiation was varied was recorded using a monochromator. Interferometry, in contrast, is the applied science of combining two or more waves, which interfere with each other, so that the entire spectrum is measured at once. Infrared radiation from the
Figure 5.2. A Fourier transform infrared spectrometer (Thermo Fisher Scientific).
source is directed to the beam splitter where it is split in two. While one beam is reflected to the fixed mirror, the other is reflected to the movable mirror. The position of the movable mirror induces a time lag as both beams are reflected back to the beam splitter. Therefore, at the beam splitter, when the reflected beams recombine, they undergo
Figure 5.3. Schematic of an interferometer.
constructive and destructive interference. This beam is directed through the sample compartment to the detector, producing an interferogram containing spectral information on the sample. The interferogram is then transformed digitally, from detector response versus optical path difference, using a fast FT algorithm on a personal computer (PC), providing a typical absorbance versus wavelength spectrum. To improve the signal-to-noise ratio, a number of interferograms are generally averaged. Studies have reported the coaddition of 32 interferograms (Karoui et al. 2006b) up to 200 interferograms (Dogan et al. 2007) at resolutions of between 4 and 8 cm−1. Before scanning a sample, a background scan is collected using a blank sample cell. Ratioing the single-beam sample data against the single-beam background data yields a transmittance or absorbance spectrum of the sample in which noise arising from external sources (for example, the environment) is eliminated (a short numerical sketch of this step is given below). FTIR spectrometers can also be fitted with a purge system to eliminate instabilities caused by the presence of air. A spectrometer, for example, may be purged with liquid nitrogen. These systems assist in eliminating carbon dioxide and moisture interferences in the spectra. The presence of atmospheric carbon dioxide can be observed in the olive oil spectrum (Figure 5.1) as a peak in the region of 2,400 to 2,295 cm−1. Another advantage of FTIR is that it is internally calibrated, providing long-term instrument stability. The FTIR spectrometer uses a helium/neon (HeNe) laser because of its constant wavelength, its stability, and the high accuracy at which it can be read by the instrument (0.01 cm−1). Fixing the distance of the laser to the interferometer, in conjunction with the known wavelength, allows the instrument to perform an internal calibration. The evolution of guided-wave optics and linear detector technology has created new possibilities for the realization of miniature infrared spectrometers that can rival much larger FTIR bench spectrometers (Kruzelecky and Ghosh 2002). Such spectrometers enhance the possibility of online or at-line process monitoring as well as handheld use.

Sample Presentation Methods

Sample presentation is possibly the most critical factor in the successful application of MIR spectroscopy to food analysis. To ensure that a relevant spectrum is obtained, a representative portion of the sample must be analyzed. This is directly connected with choosing an
appropriate sample presentation technique. Advances in sample presentation techniques have led to MIR spectroscopy being more accessible to the analysis of food samples and in particular solid samples. Reh (2001), Wilson and Tapp (1999), and Coates (1998) have reviewed relevant sample presentation techniques and identified those most appropriate for analyzing various products.
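The background-ratioing step described in the Equipment section above can be expressed in a few lines. This is a minimal sketch assuming two single-beam spectra recorded at the same wavenumber points; the array names and intensity values are illustrative placeholders, not data from any cited study.

```python
import numpy as np

def absorbance(single_beam_sample, single_beam_background):
    """Ratio a single-beam sample spectrum against a single-beam background
    scan and convert the transmittance to absorbance, A = -log10(T)."""
    sample = np.asarray(single_beam_sample, dtype=float)
    background = np.asarray(single_beam_background, dtype=float)
    transmittance = sample / background     # removes source, optics, and detector response
    return -np.log10(transmittance)

# Illustrative intensities at a few wavenumber points (made-up numbers)
background = np.array([1.00, 0.98, 0.95, 0.97])
sample = np.array([0.80, 0.60, 0.90, 0.96])
print(absorbance(sample, background))
```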
Transmission Windows and Cells

Transmission-based sample presentation techniques have been most commonly used in sampling gases, liquids, pastes, and powders. However, because of the ease of preparation and cleaning, attenuated total reflectance (ATR) methods have become more popular for liquids, pastes, and even powders. The handling of solid samples has traditionally been the most difficult, due to problems in obtaining suitable contact between the sample and the source beam. Transmission techniques have been used in the analysis of solid samples; however, some form of sample pretreatment is usually required. Transmission cells are typically used to present gas samples. The required path length is determined by the concentration of the analyte. Cells with a short path length (1 to 20 cm) are used for analyses of gases at high concentrations, and cells with long path lengths (1 to 20 meters [m]) are used for analyses of gases at very low concentrations (Coates 1998). Sealed or semipermanent transmission cells with a short path length can also be used to analyze nonvolatile liquids. Cells are available with fixed or variable path lengths. An appropriate path length, usually in the range 0.01 to 1 mm, must be selected that optimizes the absorption of major and minor bands. For analyzing pastes and viscous liquids, smear or capillary films may be formed using either one or two transmission windows. The film must be homogeneous and free of air bubbles. In any of the mentioned applications, it is imperative to select a transmitting cell or window manufactured from the most appropriate material. Compressed potassium bromide (KBr) pellets have been used for the presentation of powdered or granular products. As outlined by Coates (1998), some general guidelines should be followed in the preparation of KBr pellets. The sample should be ground to less than 0.1 micrometer (µm) and well mixed with dry KBr powder. The mixture is then compressed at a nominal 10-ton pressure for 5 to 10 minutes. The pressed pellet is placed in a holder and into the instrument. This
Figure 5.4. Schematic of internal reflection in an ATR crystal.
technique has been reported for the characterization of hazelnuts (Dogan et al. 2007) and coffee beans (Kemsley et al. 1995). Polyethylene cards have also been used in presenting Emmental cheese (Karoui et al. 2006b). This time-intensive technique involved dispersing the sample in water and applying an aliquot to the card, which is left to dry overnight prior to analysis.

Attenuated Total Reflectance

As mentioned previously, ATR can be used in conjunction with various liquid and solid samples. ATR is based on the principle of the transmission of radiation through an optical element called an internal reflectance element (IRE) (Coates 1998). As shown in Figure 5.4, the source radiation enters the ATR crystal and is reflected at the sample-element interface. Because of the angle of incidence used in the accessory, multiple reflections are generated. At each point where a reflection occurs along the sample-element interface, radiation penetrates a short distance into the sample as an evanescent wave that decays exponentially into the sample medium. Therefore, the reflected energy is reduced by the energy absorbed by the sample, which is detected, and a spectrum is produced. IREs are manufactured from materials with a high refractive index, such as zinc selenide (ZnSe) (Figure 5.5) or germanium (Ge). As with other techniques, even and immediate contact between the sample and the crystal is required. ATR is easily applied to liquids such as milk (Iñón et al. 2004), olive oil (Tay et al. 2002), and honey (Justino et al. 1997), and the technique has been used to predict both qualitative and quantitative parameters in these products. ATR has also
Figure 5.5. Attenuated total reflectance ZnSe crystal (Specac Ltd.).
been successfully used for solid products. However, when considering ATR for a solid product, it is critical to attain complete coverage of the crystal surface, and the surface analyzed should be representative of the product. To avoid air bubbles between the sample and the ATR crystal, a clamp mechanism may be used to apply evenly distributed pressure to the sample. Care should be taken to avoid applying excessive pressure, which will damage the ATR crystal. ATR techniques have been used in conjunction with ground coffee beans (Dupuy et al. 1995), minced meats (McElhinney et al. 1999), and processed cheese (Fagan et al. 2007). In all applications, it is important to ensure that the crystal surface is clean and dry before spectral measurements are taken.

Diffuse Reflectance

Diffuse reflectance methods are generally used for products with textured surfaces and for powders (Coates 1998). Sample preparation often involves grinding to a fine powder, which may be mixed with KBr and placed in a sample holder. Infrared radiation is directed to the sample holder, and the reflected light is collected by a mirror and sent to the detector (Reh 2001).

Chemometric Methods

The successful application of MIR spectroscopy to food processing has been driven by its straightforward and powerful linkage to
Figure 5.6. Principal component scores plot of cheddar cheese samples at the 6-, 9-, and 12-month ripening stage.
chemometric methods. A number of available references review these methods (Karoui et al. 2003, Kramer 1998, Martens and Næs 1989). Two of the most widely used chemometric methods are principal component analysis (PCA) and partial least squares (PLS) regression. Both PCA and PLS are used to compress the spectral data and find a few linear combinations of the original data. In PCA these new uncorrelated and standardized variables are called principal components (PCs), and only spectral data are used in their calculation. These data can then be used to develop calibration equations for the prediction of various parameters. By plotting the PCs, it is also possible to distinguish interrelationships between different variables and to detect and interpret sample patterns, groupings, similarities, or differences. Figure 5.6 shows a PCA scores plot of cheddar cheese samples at the 6-, 9-, and 12-month ripening stages using PC 1 and PC 2. It can be observed that the cheese samples cluster: it is possible to differentiate between the 6-month samples and the 9-/12-month samples along PC 1, and further distinctions were possible between the 9- and 12-month samples along PC 2. PLS regression is a bilinear modeling method in which the new, smaller set of variables is called the PLS loadings (L). In PLS regression,
the reference data are actively used in estimating L to ensure that the first components are those that are most relevant for prediction. This method again allows for a simplified interpretation of the relationship between the response variables and the predictor variables.
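A minimal illustration of the PCA/PLS workflow described above, using scikit-learn and a Savitzky–Golay second derivative as one typical spectral pretreatment. The data here are random placeholders standing in for an absorbance matrix (samples × wavenumbers) and a vector of reference values; in practice these would come from the spectrometer and the reference laboratory method, and the window length, polynomial order, and number of components are illustrative choices only.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))      # placeholder spectra: 60 samples, 500 wavenumber points
y = rng.normal(size=60)             # placeholder reference values (e.g., fat content)

# Second-derivative pretreatment (window and polynomial order are typical, not prescriptive)
X_d2 = savgol_filter(X, window_length=15, polyorder=2, deriv=2, axis=1)

# PCA: uncorrelated linear combinations of the spectral variables,
# e.g., for a scores plot such as Figure 5.6
scores = PCA(n_components=2).fit_transform(X_d2)

# PLS regression: components chosen to be relevant for predicting y
pls = PLSRegression(n_components=5).fit(X_d2, y)
y_hat = pls.predict(X_d2).ravel()
```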
Applications

MIR spectroscopy has been applied to a wide range of food products. Table 5.2 provides a summary of a selection of these reported applications. It is clear from the number of emerging applications that the potential of MIR spectroscopy will continue to grow. A number of applications for meat, poultry, dairy, beverage, and other products are discussed below.

Meat and Poultry

MIR analysis of meat products has been applied to the prediction of compositional variables as well as to the issue of authenticity. van de Voort (1992) highlighted the advantages of FTIR-ATR spectroscopy as a quantitative quality control tool for the food industry. He outlined methods under development for the analysis of meat in which evidence of the benefits of FTIR in quality control applications was given. Villé and others (1995) developed a method for the determination of total fat and phospholipid content in intramuscular pig meat with FTIR. Meat samples were extracted with chloroform and methanol, and FTIR was used in transmission mode with a cell path length of 0.5 millimeter (mm). A linear regression equation was developed relating total fat to the C=O band in the spectra (1,785 to 1,697 cm−1), which had a coefficient of determination of 99%. Villé and others (1995) found that the most useful region for the determination of phospholipid content was 1,282 to 1,020 cm−1. However, because of the complexity of the spectra, the coefficient of determination was only 62%. Al-Jowder and others (1997) went on to investigate the potential of MIR ATR spectroscopy in conjunction with PCA to address the authenticity of selected fresh meats. In this study, fresh samples of turkey, pork, and chicken were minced and a portion frozen for 15 days. When the spectra were subjected to PCA, it was possible to discriminate between chicken, pork, and turkey meats. The basis for this discrimination was
Table 5.2. Selection of reported food analysis applications of mid-infrared spectroscopy.

Application                     Category                        Product                             Analyte                                  Reference

Established applications
Meat and poultry                Compositional analysis          Pork                                Fat and phospholipids                    Villé et al. (1995)
Dairy                           Compositional analysis          Nonfat dry milk                     Whey protein content                     Mendenhall and Brown (1991)
Dairy                           Compositional analysis          Condensed milk                      Fat and total solids                     Nathier-Dufour et al. (1995)
Dairy                           Compositional analysis          Raw milk                            Fat, protein, lactose                    Lefier et al. (1996)
Dairy                           Compositional analysis          Cheddar cheese                      Protein, fat, moisture                   McQueen et al. (1995)
Dairy                           Compositional analysis          Emmental cheese                     WSN, TN, pH, NaCl, fat                   Karoui et al. (2006b, 2006c)

Emerging applications
Meat and poultry                Authenticity or traceability    Chicken, pork, turkey mix           Chicken, pork, turkey; fresh or frozen   Al-Jowder et al. (1997)
Meat and poultry                Authenticity or traceability    Beef and lamb mix                   Lamb                                     McElhinney et al. (1999)
Meat and poultry                Authenticity or traceability    Beef and kidney or liver mix        Kidney and liver                         Al-Jowder et al. (1999)
Dairy                           Authenticity or traceability    Emmental cheese                     Country of origin                        Pillonel et al. (2003)
Dairy                           Authenticity or traceability    Jura, Gruyère and L'Etivaz cheese   Country of origin                        Karoui et al. (2005a)
Fruit and alcoholic beverages   Authenticity or traceability    Apple juice                         Pure apple juice                         Gomez-Carracedo et al. (2004)
Fruit and alcoholic beverages   Authenticity or traceability    Apple juice                         Apple variety                            Reid et al. (2005)
Fruit and alcoholic beverages   Authenticity or traceability    Apple juice and syrup               Beet and cane syrup                      Sivakesava et al. (2001)
Fruit and alcoholic beverages   Authenticity or traceability    Red wine                            Vintage year and origin                  Picque et al. (2005)
Fruit and alcoholic beverages   Authenticity or traceability    White wine must                     Variety                                  Roussel et al. (2003)
Other products                  Authenticity or traceability    Cake formulation                    Lard adulteration                        Syahariza et al. (2005)
Other products                  Authenticity or traceability    Honey                               Adulterant                               Tewari and Irudayaraj (2004)
Dairy                           Process or product monitoring   Cheddar cheese                      Texture development                      Irudayaraj et al. (1999)
Dairy                           Process or product monitoring   Soft cheese                         Cheese microflora/Lactococcus sp.        Lefier et al. (2000)
Dairy                           Process or product monitoring   Cheddar cheese                      Age                                      Dufour et al. (2000)
Dairy                           Process or product monitoring   Processed cheese                    Texture and meltability                  Fagan et al. (2007)
Other products                  Contamination monitoring        Brucella                            Identify Brucella sp.                    Gomez et al. (2003)
Other products                  Contamination monitoring        Salmonella                          Classify serotypes                       Kim et al. (2005)
Other products                  Contamination monitoring        Spiked apple juice                  Identify, quantify bacterial species     Yu et al. (2004)

WSN = water-soluble nitrogen; TN = total nitrogen.
found to be the lipid (1,740 cm−1) and protein (1,650 and 1,550 cm−1) bands, with the protein bands increasing in intensity for chicken, turkey, and pork successively. Al-Jowder and others (1997) also found that it was possible to differentiate between fresh samples and frozen samples that had been thawed. Their results also highlighted the potential of the technique as a tool for the semiquantitative analysis of turkey (R = 0.80) or pork (R = 0.84) in chicken. McElhinney and others (1999) also investigated the potential of MIR spectroscopy and chemometrics for the quantitative analysis of meat mixtures. Minced beef and lamb were mixed in varying proportions and analyzed using an ATR sample accessory. PLS regression models successfully predicted the lamb content of the mixtures with an R value of 0.97; however, the study also developed models using near-infrared (NIR) spectra, which were found to have an R value of 0.99 (McElhinney et al. 1999). Al-Jowder and others (1999) investigated the authentication of meat products. They found that it was possible to develop a model to correctly identify an unadulterated beef sample while rejecting samples adulterated with kidney or liver (Al-Jowder et al. 1999). PLS regression models were also successful at predicting the percentage of added kidney (R = 0.98) and added liver (R = 0.99). These authors also found that it was possible to use MIR spectroscopy to discriminate between pure beef and beef containing 20% offal (heart, kidney, liver, and tripe) in both raw and cooked samples (Al-Jowder et al. 2002). They also observed discrimination between the different adulterants. Ripoche and Guillard (2001) applied MIR spectroscopy to the determination of the fatty acid content in fat extracts of pork meat. Samples were extracted using chloroform and methanol, and placed in an ATR cell. Models were developed that predicted four fatty acids with R2 values between 0.91 and 0.98. Flåtten and others (2005) recently reported on the prediction of iodine values and fatty acids in samples of pork adipose tissue using MIR spectrometry and PLS regression. They found that iodine values (R = 0.99) and fatty acids (R = 0.98) could be successfully predicted.
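As an illustration of the kind of single-band calibration described above for the Villé and others (1995) work, the area of the carbonyl band can be integrated and regressed against the reference fat values. This is only a sketch under the assumption that spectra are stored as absorbance-versus-wavenumber arrays; the band limits are those quoted in the text, and the spectra and fat values shown are synthetic placeholders, not data from that study.

```python
import numpy as np

def band_area(wavenumbers, absorbance, lo=1697.0, hi=1785.0):
    """Integrate the absorbance over the C=O band (1,697-1,785 cm-1)."""
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    return np.trapz(absorbance[mask], wavenumbers[mask])

# Placeholder data: a wavenumber axis and a few synthetic C=O peaks whose size
# scales with hypothetical total fat values
wn = np.linspace(900, 3000, 1051)
fat_ref = np.array([1.0, 2.0, 3.5])
spectra = [np.exp(-((wn - 1745.0) / 25.0) ** 2) * f for f in fat_ref]

areas = np.array([band_area(wn, s) for s in spectra])
slope, intercept = np.polyfit(areas, fat_ref, 1)   # simple linear calibration
print(slope, intercept)
```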
Dairy Products

The two dairy products that have been most widely characterized using MIR spectroscopy are milk and cheese.
Mendenhall and Brown (1991) investigated the potential of MIR spectra in the range 1,200 to 1,400 cm−1 to predict whey protein concentration in nonfat dry milk. Whey protein and nonfat dry milk powder were mixed in varying proportions and reconstituted to a constant total solids content prior to analysis. An ATR sample accessory was used to collect spectra. They found that the PLS regression model successfully predicted the concentration of whey protein in adulterated samples (R = 0.99) and that accuracy was not affected by processing conditions, source of nonfat dry milk (NDM), or origin of whey protein concentrate powder (Mendenhall and Brown 1991). FTIR-ATR spectroscopy has also been investigated for the compositional analysis of sweetened condensed milk (Nathier-Dufour et al. 1995). Fat and total solids content were predicted with accuracy and reproducibility in the order of ±0.09 and ±0.03 for fat and ±0.55 and ±0.30 for total solids, respectively. They concluded that with the standardization of the sample preparation protocol, the method produced good results and offered both ease of sample handling and rapid sample processing (Nathier-Dufour et al. 1995). The following year, predictions of the fat, crude protein, true protein, and lactose content of raw milk by FTIR spectroscopy and a traditional filter-based milk analyzer were assessed (Lefier et al. 1996). They concluded that because the FTIR instrument provided more spectral information related to milk composition than the filter instrument, the single-calibration FTIR analysis of milk samples collected in different seasons was more accurate (Lefier et al. 1996). Because of the success of the FTIR measuring principle for the analysis of milk, this technology has been successfully commercialized with products such as the MilkoScanTM FT 120, which employs this principle in compliance with International Dairy Federation (IDF) and Association of Official Analytical Chemists (AOAC) standards. The quality of any given type of cheese is related to a large extent to its texture, which in turn is influenced by moisture and other composition components, and processing conditions. Therefore, MIR spectroscopy has been investigated as a tool for predicting not only compositional parameters but also textural attributes. McQueen and others (1995) used FTIR and ATR spectroscopy on 24 cheese samples to obtain protein, fat, and moisture content. Standard reference methods were used to determine reference values for protein, fat, and moisture content. Prediction correlation coefficients between 0.81 and 0.92 and standard errors of prediction between 4% and 9% were obtained using this technique.
Although this study used an ATR crystal, Chen and Irudayaraj (1998) suggested the use of a microtome sampling technique prior to collecting MIR spectra of cheese. They observed well-separated fat- and protein-related bands in the spectra of cheddar and mozzarella cheese samples. The intensity of these bands was found to increase with increasing fat and protein contents (Chen and Irudayaraj 1998). MIR spectroscopy has also been used for the determination of fat and protein contents of full fat and reduced fat cheddar cheese during ripening (Chen et al. 1998). The organoleptic quality of cheese is determined by complex changes that occur during ripening (McSweeney and Fox 1993, Irudayaraj et al. 1999). The three reactions primarily responsible for the development of texture and flavor are proteolysis, glycolysis, and lipolysis. Irudayaraj and others (1999) investigated the use of MIR spectroscopy to follow texture development in cheddar cheese during ripening. They demonstrated that springiness could be successfully correlated with a number of bands in the MIR spectra. The development of the cheese microflora during ripening is extremely important in the development of flavor and texture. Lefier and others (2000) demonstrated that FTIR spectroscopy could be used as a rapid and robust method for the qualitative analysis of cheese flora. They developed a model for the identification of Lactococcus sp. using a number of strains of Lactococcus lactis ssp. lactis and cremoris. Lucia and others (2001) investigated the suitability of FTIR spectroscopy to assess the contribution of different strains of Yarrowia lipolytica to cheese ripening. They found that significant differences occurred during ripening in the amide I and amide II bands of the spectra of curds and cheeses obtained from milk inoculated with Lactococcus lactis subsp. lactis and various strains of Y. lipolytica. Further studies have also shown that MIR spectroscopy is a useful technique for characterizing changes in proteins during cheese ripening (Mazerolles et al. 2001). Dufour et al. (2000) also demonstrated that MIR spectroscopy has the ability to discriminate between cheddar cheese samples at various ripening stages. The level of water-soluble nitrogen (WSN) in cheddar cheese is known to increase during the aging process and can be taken as an indicator of cheese ripening. Karoui and others (2006b, 2006c) have successfully predicted the WSN content of Emmental cheese using MIR spectroscopy and a polyethylene card sample presentation technique (R2 = 0.80). These studies also predicted a number of other chemical parameters such as nonprotein nitrogen (NPN), total nitrogen (TN), pH, sodium chloride content (NaCl), and
fat content. In summer-produced Emmental cheese, NPN and pH were predicted with R2 values of 0.71 and 0.56 while NaCl and TN were not successfully predicted (Karoui et al. 2006c). In winter-produced Emmental cheese, PLS regression models for predicting NPN, TN, pH, NaCl, and fat content had R2 values of 0.85, 0.62, 0.82, 0.70, and 0.69, respectively (Karoui et al. 2006b). Cattaneo and others (2005) applied MIR spectroscopy (4,000 to 700 cm−1 ) to the evaluation of the shelf-life period in which freshness is maintained in Crescenza cheese. PCA was found to detect the decrease of Crescenza freshness and to define the critical day during shelf life (Cattaneo et al. 2005). Fagan and others (2007) determined the potential of MIR spectroscopy coupled with PLS regression for the prediction of processed cheese instrumental texture and meltability attributes. A meltability model allowed for discrimination between high and low melt values (R2 = 0.64). The hardness and springiness models gave approximate quantitative results (R2 = 0.77), and the cohesiveness (R2 = 0.81) and Olson and Price meltability (R2 = 0.88) models gave good prediction results. Karoui and others (2004) investigated the potential of FTIR spectroscopy to discriminate 166 Emmental cheeses produced during summer (72 samples) and winter (93 samples) from five European countries: Germany, Austria, Finland, France, and Switzerland. The analyses were carried out on dry matter of the samples, and the second derivatives of the spectra between 1,050 and 1,800 cm−1 were calculated. The data were analyzed by PCA and PLSR according to the season of production, the treatment of the milk, and the geographic origin. PCA showed a good discrimination of summer and winter cheeses; 85% and 81% of the calibration and validation samples were correctly classified by PLS regression, respectively. The effect of the milk heat treatment was not detected by PCA. PLS regression showed a good classification for 93% of the calibration and validation samples from raw and pasteurized milk. The discrimination between the five production regions was weaker; 71% and 68% of the calibration and validation samples were correctly classified, respectively. It was concluded that MIR spectroscopy is a promising method for the discrimination of cheeses according to the season of manufacture and the heat treatment applied. Pillonel and others (2003) used MIR spectroscopy in combination with chemometrics to investigate the potential for discriminating Emmental cheeses of various geographic origins. The normalized
spectra were analyzed by PCA and linear discriminant analysis (LDA) of the PCA scores. The MIR transmission spectra achieved 100% correct classification in LDA when differentiating the Swiss Emmental from the other samples pooled as one group. Karoui and others (2005a) have also used MIR spectroscopy to determine the geographic origin of cheese. They investigated the potential of MIR and fluorescence spectroscopy for determining the geographic origin of experimental French Jura hard cheeses and Swiss Gruyère and L'Etivaz protected denomination of origin cheeses. Although it was possible to discriminate between the samples based on origin, it was found that fluorescence spectra produced better results than the MIR spectra (1,700 to 1,500 cm−1) (Karoui et al. 2005a). Karoui and others (2005b) also investigated the potential of NIR, MIR, and front-face fluorescence spectroscopy to discriminate Emmental cheeses from different European geographic origins. Almost 90% of cheese samples were classified by factorial discriminant analysis using either MIR (3,000 to 2,800 cm−1) or NIR spectra. The classification obtained with the tryptophan fluorescence spectra was considerably lower. However, it did allow a good discrimination of Emmental cheeses made from raw milk or from thermised milk.

Fruit and Alcoholic Beverages

MIR spectroscopy has been used for a variety of beverage applications. In the case of apple juice, the majority of these studies have investigated the classification of samples based on variety or on the quantity of pure juice present. Gomez-Carracedo and others (2004) found that, using a combination of FTIR spectroscopy and a chemometric technique known as potential curves, it was possible to classify apple juice beverages on the basis of the percentage of pure apple juice present. Apple juice samples have also been differentiated on the basis of apple variety (Bramley, Elstar, Golden Delicious, and Jonagold) and heat treatment by Reid and others (2005) using PLS regression and linear discriminant analysis. Their results showed correct classification of between 78.3% and 100% using MIR spectroscopy, which was slightly lower than when NIR spectra were used. Both MIR and NIR spectroscopy were found to classify 77.2% of the samples according to heat treatment (Reid et al. 2005). A recent study reported on the discrimination of apple varieties using MIR spectroscopy. Rudnitskaya and others (2006) found that FTIR-ATR
had the potential to discriminate between apples of different varieties as well as to determine the apples' organic acid content. The discrimination based on variety was improved when data from an electronic nose were combined with the MIR data. However, no significant difference was found when the combination was applied to quantitative data processing (Rudnitskaya et al. 2006). Sivakesava and others (2001) reported the use of MIR spectroscopy and chemometrics to detect adulteration of apple juice with beet and cane syrup, with correct classification of 100% and 96.2% of samples, respectively. MIR spectra of wine have primarily been used to determine the geographic origin or variety of samples. The use of MIR for the authentication of red wines on the basis of vintage year enabled correct classification levels of up to 100%, while correct geographical classification of the same wines achieved average levels of 85% (Picque et al. 2005). Roussel and others (2003) also differentiated musts of white wine grapes on the basis of their variety. MIR spectroscopy was used to correctly classify 90.3% of the samples; however, combining the MIR spectra with aroma data and ultraviolet data led to the correct classification of 95.3% of samples (Roussel et al. 2003). MIR spectroscopy has also been applied to the quality control and authentication of spirits and beer. Lachenmeier (2007) used a purpose-built FTIR interferometer with an injection unit for liquids and automatic thermostating of the sample. Although no sample preparation was required for spirits, beer samples were degassed before analysis. Lachenmeier reported that the PLS regression models for predicting the spirit parameters density, ethanol, methanol, ethyl acetate, propanol-1, isobutanol, and 2-/3-methyl-1-butanol (R2 = 0.90−0.98), as well as the beer parameters ethanol, density, original gravity, and lactic acid (R2 = 0.97−0.98), had excellent accuracy, while other beer attributes were only semiquantitatively predicted (Lachenmeier 2007).
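Several of the classification results reported above (for example, Pillonel et al. 2003 for Emmental origin and Reid et al. 2005 for apple juice) are obtained by applying linear discriminant analysis to PCA scores and reporting the percentage of correctly classified samples. A minimal sketch of that general approach is given below; the spectra and class labels are random placeholders, and the number of components and folds are illustrative choices, not the settings used in those studies.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 400))                            # placeholder spectra (samples x wavenumbers)
y = np.repeat(["origin_A", "origin_B", "origin_C"], 30)   # placeholder class labels

# Compress the spectra with PCA, then classify the scores with LDA
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
accuracy = cross_val_score(model, X, y, cv=5)             # fraction correctly classified per fold
print(f"mean correct classification: {100 * accuracy.mean():.1f}%")
```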
Other Food Products

MIR spectroscopy has also been applied to a wide variety of other food products. These include characterization of the following:

• irradiated hazelnuts (Dogan et al. 2007)
• classification of corn starch (Dupuy et al. 1997)
• detection of lard adulteration in cake formulation (Syahariza et al. 2005) and in chocolate and chocolate products (Che Man et al. 2005)
• detection of adulterated honey (Kelly et al. 2004, Tewari and Irudayaraj 2004)
• authenticity and quality determination of olive oils (Tay et al. 2002, Innawong 2004, Yang et al. 2005)

Another emerging area is the application of MIR spectroscopy in the detection and identification of spoilage and pathogenic microorganisms. Rapid detection of such microorganisms will increase food safety and quality and hopefully reduce the number of foodborne illnesses. Such techniques may also have a role to play in counteracting potential bioterrorism in the food supply chain. Gomez and others (2003) determined that FTIR spectroscopy could be used to classify and identify a number of Brucella sp. Kim and others (2005) applied FTIR spectroscopy to the classification of intact cells and bacterial lipopolysaccharides (LPS) of Salmonella enterica serotypes. Although they found that it was not possible to correctly classify intact cells, they correctly classified 100% of the LPS. Baldauf and others (2006) went on to differentiate between Salmonella enterica serotypes by preparing ethanol or buffer suspensions of single colonies, which were then mounted on ZnSe crystals or a diamond plate. The potential of this technology for the identification of microorganisms in food products has been investigated by Yu and others (2004) for apple juice. They differentiated and quantified the level of eight bacterial species (including Enterobacter sp., Salmonella, Serratia sp., Pseudomonas sp., Vibrio cholerae, and Hafnia alvei) that had been used to spike apple juice samples. The differentiation of microorganisms in apple juice was possible at a level of 10³ colony-forming units per milliliter (CFU/ml).
Conclusions and Future Developments

In conclusion, MIR spectroscopy in conjunction with an appropriate sample preparation technique is rapid, easy to use, and has potential for the analysis of a wide variety of food products. Global demands for increased food safety, quality assurance programs, and an increased concern regarding food bioterrorism are affecting the world's food supply chain. MIR spectroscopy has a potential role in maintaining the integrity
of the food supply chain, particularly in the areas of authenticity of both raw ingredients and final products, traceability of a product back through the food chain from “food to farm,” and identification of microbial contaminants, as well as in its more traditional role in quality control and assurance. The areas of authenticity, traceability, and food safety are emerging research areas that represent important future applications for this technology. The development of miniature MIR spectrometers will facilitate the installation of this technology online or at-line in food processing facilities, and equipment manufacturers are likely to put renewed emphasis on developing online instrumentation rather than laboratory instrumentation to facilitate improved process monitoring of product quality. This will also further the aims of PAT, which will result in improved process monitoring and control throughout the manufacturing process. Online monitoring will also greatly assist food processors in adhering to increasingly stringent authenticity legislation.
References Al-Jowder O, EK Kemsley, and RH Wilson. 1997. Mid-infrared spectroscopy and authenticity problems in selected meats: a feasibility study. Food Chem., 59:195– 201. Al-Jowder O, M Defernez, EK Kemsley, and RH Wilson. 1999. Mid-infrared spectroscopy and chemometrics for the authentication of meat products. J. Agric. Food Chem., 47:3210–3218. Al-Jowder O, EK Kemsley, and RH Wilson. 2002. Detection of adulteration in cooked meat products by mid-infrared spectroscopy. J. Agric. Food Chem., 50:1325–1329. Baldauf NA, LA Rodriguez-Romo, AE Yousef, and LE Rodriguez-Saona. 2006. Differentiation of selected Salmonella enterica serovars by Fourier transform mid-infrared spectroscopy. Appl. Spectrosc., 60:592–598. Cattaneo TMP, C Giardina, N Sinelli, M Riva, and R Giangiacomo. 2005. Application of FT-NIR and FT-IR spectroscopy to study the shelf-life of Crescenza cheese. Int. Dairy J., 15:693–700. Che Man YB, ZA Syahariza, MES Mirghani, S Jinap, and J Bakar. 2005. Analysis of potential lard adulteration in chocolate and chocolate products using Fourier transform infrared spectroscopy. Food Chem., 90:815–819. Chen M and J Irudayaraj. 1998. Sampling technique for cheese analysis by FTIR spectroscopy. J Food Sci, 63:96–99. Chen M, J Irudayaraj, DJ McMahon, and MX Chen. 1998. Examination of full fat and reduced fat Cheddar cheese during ripening by Fourier transform infrared spectroscopy. J Dairy Sci, 81:2791–2797.
Coates JP. 1998. Review of infrared sampling methods. In: Applied Spectroscopy, a compact guide for practitioners. Eds. J Workman Jr. and AW Springsteen. Academic Press, San Diego, pp. 50–89. Dogan A, G Siyakus, and F Severcan. 2007. FTIR spectroscopic characterization of irradiated hazelnut (Corylus avellana L.). Food Chem., 100:1106–1114. Downey G. 1998. Food and food ingredient authentication by mid-infrared spectroscopy and chemometrics. Trends Anal Chem 17:418–424. Dufour E, G Mazerolles, MF Devaux, G Duboz, MH Duployer, and MN Riou. 2000. Phase transition of triglycerides during semi-hard cheese ripening. Int. Dairy J., 10:81–93 Dupuy N, JP Huvenne, L Duponchel, and P Legrand. 1995. Classification of green coffees by FT-IR analysis of dry extract. Appl. Spectrosc. 49:580–585. Dupuy N, C Wojciechowski, CD Ta, JP Huvenne, and P Legrand. 1997. Mid-infrared spectroscopy and chemometrics in corn starch classification. J. Mol. Struct. 410:551–554. Fagan CC, C Everard, CP O’Donnell, G Downey, EM Sheehan, CM Delahunty, DJ O’Callaghan, and V Howard. 2007. Prediction of processed cheese instrumental texture and meltability by mid-infrared spectroscopy coupled with chemometric tools. J. Food Eng., 80:1068–1077. Fl˚atten A, EA Bryhni, A Kohler, B Egelandsdal, and T Isaksson. 2005. Determination of C22:5 and C22:6 marine fatty acids in pork fat with Fourier transform mid-infrared spectroscopy. Meat Sci, 69:433–440. Gomez MAM, MAB Perez, FJM Gil, AD Diez, JFM Rodriguez, PG Rodriguez, AO Domingo, and AR Torres. 2003. Identification of species of Brucella using Fourier Transform Infrared spectroscopy. J. Microbiol. Methods, 55:121– 131. Gomez-Carracedo MP, JM Andrade, E Fernandez, D Prada, and S Muniategui. 2004. Evaluation of the pure apple juice content in commercial apple beverages using FTMIR-ATR and potential curves. Spectr. Lett., 37:73–93. Innawong B, P Mallikarjunan, J Irudayaraj, and JE Marcy. 2004. The determination of frying oil quality using Fourier transform infrared attenuated total reflectance. Lebensmittel-Wissenschaft und-Technologie, 37:23–28. I˜no´ n FA, S Garrigues, and M de la Guardia. 2004. Nutritional parameters of commercially available milk samples by FTIR and chemometric techniques. Analytica Chimica Acta, 513:401–412. Irudayaraj JS, M Chen M, and DJ McMahon. 1999. Texture development in Cheddar cheese during ripening. Can Agr Eng 41:253–258. Justino LG, M Caldeira, VMS Gil, MT Baptista, AP Da Cunha, and A Gil. 1997. Determination of changes in sugar composition during the aging of honey by HPLC, FTIR and NMR spectroscopy. Carbohydr. Polym., 34:435. Karoui R, G Mazerolles, and E Dufour. 2003. Spectroscopic techniques coupled with chemometric tools for structure and texture determinations in dairy products. Int. Dairy J., 13:607–620. Karoui R, E Dufour, L Pillonel, D Picque, T Cattenoz, and J-O Bosset. 2004. Determining the geographic origin of Emmental cheeses produced during winter
and summer using a technique based on the concatenation of MIR and fluorescence spectroscopic data. Eur. Food Res. Technol. 219:184–189. Karoui R, J-O Bosset, G Mazerolles, A Kulmyrzaev, and E Dufour. 2005a. Monitoring the geographic origin of both experimental French Jura hard cheeses and Swiss Gruy`ere and L’Etivaz PDO cheeses using mid-infrared and fluorescence spectroscopies: a preliminary investigation. Int. Dairy J., 15:275–286. Karoui R, E Dufour, L Pillonel, E Schaller, D Picque, T Cattenoz, and J-O Bosset. 2005b. The potential of combined infrared and fluorescence spectroscopies as a method of determination of the geographic origin of Emmental cheeses. Int. Dairy J., 15:287–298. Karoui R, AM Mouazen, E Dufour, L Pillonel, E Schaller, J De Baerdemaeker, and J-O Bosset. 2006a. Chemical characterisation of European Emmental cheeses by near infrared spectroscopy using chemometric tools. Int. Dairy J. 16:1211–1217. Karoui R, AM Mouazen, I Dufour, L Pillonel, D Picque, J-O Bosset, and J Baerdemaeker. 2006b. Mid infrared spectrometry: a tool for the determination of chemical parameters of Emmental cheeses produced during winter. Lait, 86:83–97. Karoui R, AM Mouazen, I Dufour, L Pillonel, D Picque, J De Baerdemaeker, and J-O Bosset. 2006c. Application of the MIR for the determination of some chemical parameters in European Emmental cheeses produced during summer. Eur. Food Res. Technol. 222:165–170. Kelly JFD, G Downey, and V Fouratier. 2004. Initial study of honey adulteration using midinfrared (MIR) spectroscopy and chemometrics. J. Agric. Food Chem., 52:33–39. Kemsley EK, S Ruault, and RH Wilson. 1995. Discrimination between Coffea arabica and Coffea canephora variant robusta beans using infrared spectroscopy. Food Chem., 54:321–326. Kim S, BL Reuhs, and LJ Mauer. 2005. Use of Fourier Transform Infrared spectra of crude bacterial lipopolysaccharides and chemometrics for differentiation of Salmonella enterica serotypes. J. Appl. Microbiol., 99:411–417. Kramer R. 1998. Chemometric techniques for quantitative analysis. Marcel Dekker, New York. Kruzelecky RV and AK Ghosh. 2002. Miniature spectrometers. In: Handbook of vibrational spectroscopy. Vol. 1. J Chalmers and PR Griffiths, Eds. Wiley and Sons, pp. 423–435. Lachenmeier DW. 2007. Rapid quality control of spirit drinks and beer using multivariate data analysis of Fourier transform infrared spectra. Food Chem., 101:825–832. Lefier D, R Grappin, and S Pochet. 1996. Determination of fat, protein, and lactose in raw milk by Fourier transform infrared spectroscopy and by analysis with a conventional filter-based milk analyzer. J. AOAC Int., 79:711–717. Lefier D, H Lamprell, and G Mazerolles. 2000. Evolution of Lactococcus strains during ripening in Brie cheese using Fourier transform infrared spectroscopy. Lait, 80:247–254. Lucia V, B Daniela, and L Rosalba. 2001. Use of Fourier transform infrared spectroscopy to evaluate the proteolytic activity of Yarrowia lipolytica and its contribution to cheese ripening. Int. J. Food Micro., 69:113–123.
Martens H and T Næs. 1989. Multivariate calibration. Wiley Chichester, UK. Mazerolles G, MF Devaux, G Duboz, MH Duployer, N Mouhous Riou, and E Dufour. 2001. Infrared and fluorescence spectroscopy for monitoring protein structure and interaction changes during cheese ripening. Le Lait 81:509–527. McElhinney J, G Downey, and C O’Donnell. 1999. Quantitation of lamb content in mixtures with raw minced beef using visible, near and mid-infrared spectroscopy. J Food Sci 64:587–591. McQueen DH, R Wilson, A Kinnunen, and EP Jensen. 1995. Comparison of two infrared spectroscopic methods for cheese analysis. Talanta, 42:2007–2015. McSweeney PLH and PF Fox. 1993. Cheese: methods of chemical analysis In: Cheese: chemistry, physics and microbiology. Vol 1. 2nd ed. PF Fox, Ed. Chapman and Hall, London, pp. 341–388. Mendenhall IV and RJ Brown. 1991. Fourier transform infrared determination of whey powder in nonfat dry milk. J. Dairy Sci. 74:2896–2900. Nathier-Dufour N, J Sedman, and FR van de Voort. 1995. A rapid ATR/FTIR quality control method for the determination of fat and solids in sweetened condensed milk. Milchwissenschaft, 50:462–466. Picque D, T Cattenoz, G Corrieu, and JL Berger. 2005. Discrimination of red wines according to their geographical origin and vintage year by the use of mid-infrared spectroscopy. Sci. Aliments, 25:207–220. Pillonel L, W Luginbuhl, D Picque, E Schaller, R Tabacchi, and J-O Bosset. 2003. Analytical methods for the determination of the geographic origin of Emmental cheese: mid- and near-infrared spectroscopy. Eur. Food Res. Technol., 216:174–178. Pomeranz Y and CE Meloan. 1994. Food analysis theory and practice. 3rd ed. Chapman and Hall, London; New York, p. 10. Reh C. 2001. In-line and off-line FTIR measurements. In: Instrumentation and sensors for the food industry. 2nd ed. E Kress-Rogers and CJB Brimelow, Eds. Woodhead and CRC Press, Cambridge and Boca Raton, FL, p. 213–232. Reid LM, T Woodcock, CP O’Donnell, JD Kelly, and G Downey. 2005. Differentiation of apple juice samples on the basis of heat treatment and variety using chemometric analysis of MIR and NIR data. Food Res. Int., 38:1109–1115. Ripoche A and AS Guillard. 2001. Determination of fatty acid composition of pork fat by Fourier transform infrared spectroscopy. Meat Sci., 58:299–304. Roussel S, V Bellon-Maurel, JM Roger, and P Grenier. 2003. Fusion of aroma, FT-IR and UV sensor data based on the Bayesian inference. Application to the discrimination of white grape varieties. Chemometrics Intell. Lab. Syst., 65:209–219. Rudnitskaya A, D Kirsanov, A Legin, K Beullens, J Lammertyn, BM Nicola¨ı, and J Irudayaraj. 2006. Analysis of apples varieties—comparison of electronic tongue with different analytical techniques. Sens. Actuator B-Chem. 116:23–28. Sivakesava S, JMK Irudayaraj, and RL Korach. 2001. Detection of adulteration in apple juice using mid infrared spectroscopy. Appl. Eng. Agric., 17:815–820. Syahariza ZA, YB Che Man, J Selamat, and J Bakar. 2005. Detection of lard adulteration in cake formulation by Fourier transform infrared (FTIR) spectroscopy. Food Chem., 92:365–371.
Tay A, RK Singh, SS Krishnan, and JP Gore. 2002. Authentication of olive oil adulterated with vegetable oils using Fourier Transform Infrared spectroscopy. Lebensmittel-Wissenschaft und-Technologie, 35:99–103. Tewari J and J Irudayaraj. 2004. Quantification of saccharides in multiple floral honeys using Fourier transform infrared microattenuated total reflectance spectroscopy. J. Agric. Food Chem., 52:3237–3243. van de Voort FR. 1992. Fourier transform infrared spectroscopy applied to food analysis. Food Res. Int., 25:397–403. Vill´e H, G Maes, R De Schrijver, G Spincemaille, G Rombouts, and R Geers. 1995. Determination of phospholipid content of intramuscular fat by Fourier Transform Infrared spectroscopy. Meat Sci., 41:283–291. Wilson RH and HS Tapp. 1999. Mid-infrared spectroscopy for food analysis: recent new applications and relevant developments in sample presentation methods. Trends Anal. Chem., 18:85–93. Yang H, J Irudayaraj, and MM Paradkar. 2005. Discriminant analysis of edible oils and fats by FTIR, FT-NIR and FT-Raman spectroscopy. Food Chem., 93:25–32. Yu CX, J Irudayaraj, C Debroy, Z Schmilovtich, and A Mizrach. 2004. Spectroscopic differentiation and quantification of microorganisms in apple juice. J. Food Sci., 69:S268–S272.
Chapter 6
Applications of Raman Spectroscopy for Food Quality Measurement
Ramazan Kizil and Joseph Irudayaraj
Basic Principles of Raman Spectroscopy
Because it is nondestructive and applicable to a wide range of food products without any need for sample preparation, Raman spectroscopy is becoming one of the more promising analytical tools for quality control and structure elucidation in food science. A wealth of information can be obtained on the chemical composition and molecular conformation of substances through well-resolved scattering bands at characteristic frequencies. Raman spectroscopy is versatile and applicable to molecules of any size and physical state. In addition, the introduction of chemometric tools for signal preprocessing and the application of multivariate calibration and pattern recognition models have made Raman spectroscopy a practical tool for qualitative and quantitative assessment of food components and systems.
Like other spectroscopic techniques, the working principle of Raman spectroscopy relies on the physics of the interaction between electromagnetic waves and matter. Impingement of light on a substance results in scattering and absorbance. Although most of the scattered light has the same frequency/energy as the incident light, a small fraction of the incident light donates or receives energy and thereby changes the vibrational and rotational state of the molecules. The change in photon energy as a result of inelastic scattering of light by molecules is known as the "Raman shift" and is expressed in wavenumbers. Since only molecules with distorted electron densities or polarizabilities due
to energy exchange are Raman active, Raman spectroscopy is considered to be selective in detecting apolar molecules, ring structures, and double- or triple-bonded structures. Four basic selection rules apply to Raman spectroscopy (Pelletier 1999):
1. Nonpolar or slightly polar groups often show strong Raman scattering due to stretching vibrations.
2. Raman scattering bands arising from stretching vibrations are much more intense than those arising from deformation vibrations.
3. Symmetrical vibrations, which do not deform the molecule, give much stronger Raman scattering than unsymmetrical vibrations.
4. The presence of multiple bonds (for example, C=C) in the sample molecule often gives intense Raman scattering bands arising from stretching vibrations.
Selected parameters pertinent to band positions and vibrational frequencies are the spatial arrangements of functional groups, Fermi resonance, interatomic distances, the physical state of the sample, the polarity of the environment, and the nature of hydrogen bonds.
Classification of Raman Spectrometers
Raman spectroscopy can be classified into three distinct categories based on the source of excitation used in the instrumentation. When visible excitation is used, the instrument is known as a visible excitation Raman spectrometer. The advantage of visible Raman spectroscopy is the integration of a sensitive yet inexpensive charge coupled device (CCD) for detection of the Raman-scattered portion of the incident light. Another unique opportunity of visible Raman spectroscopy is that a microscope unit can be coupled to the spectrometer so that the microstructure of food products can be studied. The advent of confocal microscopes has made Raman applications more useful because, with spatial filtering by an optically conjugated pinhole in the confocal setting, contributions from the out-of-focus region are eliminated (Baia et al. 2002). However, a strong fluorescence background is a disadvantage in visible Raman spectroscopy of biological samples (Hanlon et al. 2000).
Raman spectrometers equipped with ultraviolet (UV) lasers are called UV resonance Raman spectrometers. UV light has the potential to create an excited electronic state. A resonance effect can be attained using a UV excitation whose photon wavelength matches an optical absorption of internal electronic excitations in the scattering molecules. Since the magnitude of the induced dipole created by UV excitation is immense, the intensity of Raman scattering can be increased by a factor of 10² to 10⁶ compared to a dispersive Raman system (McCreery 2000). The use of UV excitation is an effective way of circumventing fluorescence; however, UV resonance excitation may cause photothermal damage in samples.
For excitation sources with longer wavelengths, an interferometer-based instrumental approach known as Fourier transform Raman (FT-Raman) spectroscopy is employed to collect Raman-scattering signals. A compact diode-bar-pumped Nd-YAG laser, which provides continuous-wave excitation at 1,064 nanometers (nm), is the most frequently used light source of current FT-Raman instruments. The Michelson interferometer is the most commonly used wavelength-stabilizing system in FT-Raman spectrometers. Using a longer wavelength eliminates both the fluorescence background and, owing to the low laser powers, photodecomposition of samples. Discussions of the early history of the Raman effect are provided by Long (1988), and the current advances in the instrumentation of Raman spectroscopy are reviewed by Pitt and others (2005).
Regardless of the type of instrumentation, the ease of experimentation is a unique advantage of Raman spectroscopy. Samples are irradiated by the monochromatic laser light source without any need for pretreatment, and collection optics with the necessary Rayleigh scattering filters deliver the scattered photons to the detector. CCD cameras are well suited to UV and visible light sources, whereas germanium detectors are used when near-infrared (NIR) irradiation is used. Using fiber optics, it is even possible to collect signals in vivo or to monitor continuous processing.
In a typical Raman analysis of food material, assignment of Raman-scattering bands to the corresponding vibrational modes of molecules is integral. The fingerprint information can be used to identify a molecule of interest in a complex food matrix or to elucidate intermolecular or intramolecular interactions in food systems. In addition, the changes in chemical composition or molecular structure of food due to
processing can be monitored by intensity or frequency shifts of specific vibrational modes. The key concerns in selecting an analytical technique include consideration of the limitations imposed by the concentration of the target, optical and physical nature of the experimental environment, sensitivity of the technique, and operational cost. The relative advantages and strengths of the chosen technique over other methods should also be considered in choosing an appropriate Raman collection system for food quality analysis.
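As a minimal illustration of the Raman shift definition given above, the short sketch below converts between wavelength and wavenumber; the helper names are mine and not part of any instrument software, and the only assumptions are the 1,064-nm Nd-YAG excitation mentioned earlier and a band position taken from Table 6.1.

def raman_shift_cm1(excitation_nm: float, scattered_nm: float) -> float:
    """Raman shift (cm-1) = 1e7/lambda_exc - 1e7/lambda_scat, with wavelengths in nm."""
    return 1.0e7 / excitation_nm - 1.0e7 / scattered_nm

def scattered_wavelength_nm(excitation_nm: float, shift_cm1: float) -> float:
    """Wavelength (nm) at which a band with the given Raman shift is scattered."""
    return 1.0e7 / (1.0e7 / excitation_nm - shift_cm1)

if __name__ == "__main__":
    # The cis C=C stretch near 1,654 cm-1 observed with 1,064-nm excitation
    # is scattered at roughly 1,291 nm.
    print(round(scattered_wavelength_nm(1064.0, 1654.0), 1))   # ~1291.2
    print(round(raman_shift_cm1(1064.0, 1291.2), 1))           # ~1654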
Raman Spectroscopy in Structural and Qualitative Analysis of Basic Food Components
Proteins
Raman spectroscopy is increasingly employed with proteins and protein-rich food products to study protein interactions and to elucidate structural information at the molecular level after processing. Since structure is directly related to the functional properties of proteins, estimation of the secondary structure of proteins through Raman spectroscopy provides insight into protein functions, such as the whipping or gelation properties of high-protein foods. Raman spectra provide valuable information on the chemistry of side chains and the conformation of the polypeptide backbone that can be used to predict the secondary structure of proteins in solution or in the crystalline state (Tuma 2005, Parker 1983, Tu 1986). Raman band intensities at various frequencies, caused by vibrational motions of the polypeptide backbone and amino acid side chains, are often modulated by the microenvironment surrounding those functional groups (Li-Chan 1997). Raman bands attributed to the amide I, amide III, and skeletal stretching modes furnish useful information about the backbone conformations of proteins, whereas Raman bands related to the stretching or bending vibrational modes of assorted amino acid functional groups are employed to probe the environment around these groups (Carey 1982, Przybycien and Bailey 1991). Several methods have been developed for the interpretation of Raman spectra to obtain protein structures. All
of these interpretations are based on conformationally sensitive amide vibrations of the protein backbone. The most accurate interpretation of protein secondary structure can be obtained by concurrent inspection of the amide I and amide III Raman bands. In the literature there are many illustrations of protein secondary structure determination using either Fourier deconvolution or least squares analysis (Sushi and Byler 1988, Przybycien and Bailey 1991).
Thermal processing brings about changes in the global structure of proteins in food, altering their functionality in a negative manner. Understanding how protein structure changes with processing is important for predicting the functional ability of protein-containing foods. Li Chan and Nakai (1991a) applied Raman spectroscopy to study structural changes of proteins during gelation. The effect of heating on the formation of coagulum-type lysozyme gels was easily monitored using Raman spectroscopy (Li Chan and Nakai 1991b). As a result of heating, the authors reported that one of the disulfide bonds of the protein left the lowest-energy conformation and shifted into the gauche-gauche-trans conformation.
Aromatic amino acid side chains have unique Raman spectra that can be used to monitor the polarity of a microenvironment or to estimate hydrogen bonding structures of proteins. The intensity of the tryptophan Raman band around 760 cm−1 is reduced if polar solvents are used to dissolve proteins with internally buried tryptophan residues. Heating of proteins (Nonaka et al. 1993) resulted in a decrease in the intensity of the tryptophan band. Tyrosine is a good indicator of the environment as well as of hydrogen bonding of the phenolic hydroxyl groups. Honzatko and Williams (1982) have suggested that unusually high tyrosine intensity indicates strong hydrogen bonding, with tyrosine being a proton acceptor. Quaternary structure transitions involving aromatic amino acid residues can be detected by the more recently developed UV resonance Raman spectroscopy (Jayaraman et al. 1995, Nagai et al. 1996). Tryptophan and tyrosine show intensified Raman bands when the wavelength of excitation is set around 230–235 nm (Nagai et al. 1995). Chi and Asher (1998) employed UV resonance Raman spectroscopy to determine the changes in the structure of myoglobin when it is exposed to acid denaturation. Amide I and II bands were used to determine the protein secondary structure; tyrosine and tryptophan bands were observed to monitor the change in the environment.
Fats and Oils
The early Raman instrumentation using visible excitation was applied only to pure lipids (Bailey and Horvat 1972, Butler et al. 1979), because visible Raman spectroscopy was insufficient for the analysis of commercial oils and margarines due to the strong fluorescence of carotene and other coloring agents. Because fluorescence severely degraded the quality of the Raman spectra of edible fats and oils, Fourier transform infrared (FT-IR) spectroscopy was the leading vibrational spectroscopic technique for fats and oils before the advent of FT-Raman spectroscopy. The superiority of the Raman technique over IR spectroscopy is its sensitivity in detecting double-bonded structures, such as C=C, which are weak in the IR spectrum.
Precise determination of the total degree of unsaturation of oils is important to the oil and food industry, considering the nutritional value of fats. Traditional titration techniques using iodine are the earliest, and most time-consuming, methods of estimating the total unsaturation of fats. Sadeghi-Jorabchi and others (1991) studied the relationship between the unsaturation-associated Raman scattering band and the iodine value (IV) for various oils and margarines using an FT-Raman setting. Applying modern chemometric calibration tools, the fatty acid unsaturation of salmon (Afseth et al. 2006) and the fat composition of a complex model food system (Afseth et al. 2005) were quantified using Raman spectroscopy. The sensitivity of Raman spectroscopy in detecting unsaturation-related changes has been used to develop various pattern recognition models to classify commercial and essential oils and fats (Baeten et al. 1988, Yang et al. 2005, Strehle et al. 2006). Kizil (2003) developed a pattern recognition model for the classification of gamma irradiation-treated adipose tissues using Raman spectral features related to the radiochemical changes in C-C and C-H bonds. Figure 6.1 provides sample spectra of the vibrational modes associated with fatty acid and phospholipid constituents in tissue, and Table 6.1 lists the tentative band assignments, depicted in Figure 6.1, that are indicative of the structure or molecular conformation of the functional groups of lipid molecules. Initial Raman studies on fats and oils were restricted to pure substances only; however, advances in Raman instrumentation have made it possible to investigate lipid-rich real food samples, such as olives, fish, and adipose tissue.
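To make the idea of an unsaturation calibration concrete, the sketch below fits a purely illustrative univariate line relating the ratio of the C=C stretch (~1,654 cm−1) to the CH2 bend (~1,439 cm−1) band intensities to the iodine value, in the spirit of the FT-Raman/IV correlations cited above. The band ratios and IV values are invented placeholders, not data from the works cited.

import numpy as np

band_ratio = np.array([0.35, 0.48, 0.62, 0.77, 0.90])      # I(1654)/I(1439) for calibration standards
iodine_value = np.array([55.0, 75.0, 95.0, 118.0, 135.0])  # reference IV from titration (g I2/100 g fat)

slope, intercept = np.polyfit(band_ratio, iodine_value, deg=1)

def predict_iv(ratio: float) -> float:
    """Predict the iodine value of an unknown oil from its 1654/1439 intensity ratio."""
    return slope * ratio + intercept

print(round(predict_iv(0.70), 1))  # IV estimate for an "unknown" sample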
Table 6.1. Tentative FT-Raman band assignments for adipose tissue characterization (Kizil 2003).
Raman Responses (cm−1) and Band Assignments:
847: C-N stretch coupling with PO2 stretch in phospholipids
1,060–1,090: Stretch of carbonyl C-C and POC in phospholipids
1,264: In plane =C-H deformation (cis transform)
1,302: In phase CH3 twist
1,439: CH2 bend
1,654: cis C=C stretch
1,746: C=O stretch in ester
2,852: Symmetric stretch of aliphatic C-H
2,895: Asymmetric stretch of olefinic C-H
3,004: Asymmetric stretch of aliphatic cis (=C–H)
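One way such a band-assignment table is used in practice is as a simple lookup against observed peak positions. The sketch below is a minimal illustration of that idea, assuming the assignments of Table 6.1 are stored in a dictionary (the 1,060–1,090 range is represented by its midpoint) and that a ±10 cm−1 matching tolerance is acceptable; the helper name is hypothetical.

TABLE_6_1 = {
    847: "C-N stretch coupling with PO2 stretch in phospholipids",
    1075: "Stretch of carbonyl C-C and POC in phospholipids",  # midpoint of 1,060-1,090
    1264: "In plane =C-H deformation (cis transform)",
    1302: "In phase CH3 twist",
    1439: "CH2 bend",
    1654: "cis C=C stretch",
    1746: "C=O stretch in ester",
    2852: "Symmetric stretch of aliphatic C-H",
    2895: "Asymmetric stretch of olefinic C-H",
    3004: "Asymmetric stretch of aliphatic cis (=C-H)",
}

def assign_peak(position_cm1: float, tolerance: float = 10.0) -> str:
    """Return the tentative assignment of the closest tabulated band, if within tolerance."""
    nearest = min(TABLE_6_1, key=lambda band: abs(band - position_cm1))
    if abs(nearest - position_cm1) <= tolerance:
        return TABLE_6_1[nearest]
    return "unassigned"

print(assign_peak(1657.0))  # cis C=C stretch
print(assign_peak(1500.0))  # unassigned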
Marquardt and Wold (2004) performed an exploratory study on Raman analysis of fish fat, collagen, and muscle. Muik and others (2003) studied prediction of free fatty acid composition of olives and olive oils. Beattie and others (2006) have also studied the potential of Raman spectroscopy in prediction of individual fatty acids of adipose tissue.
Figure 6.1. FT-Raman spectra of beef, lamb, and porcine adipose tissues.
The texture of hydrogenated vegetable oil is affected by the trans isomer content of its fatty acids. Johnson and others (2002) investigated the potential of FT-Raman spectroscopy for the determination of cis/trans isomers. Heat-induced lipid oxidation in vegetable oils was monitored using FT-Raman spectroscopy (Muik et al. 2005). The formation of aldehydes and conjugated double-bond structures and the isomerization of cis to trans bonds due to heating were monitored in the C=C stretching region. Raman spectroscopy has also been employed to study phase transitions in the cacao butter of commercial chocolates by probing the C-C stretching region with micro-Raman spectroscopy (Celedon and Aguilera 2002).
Carbohydrates and Carbohydrate-based Foods
The structural sensitivity of Raman spectroscopy has made the technique attractive for analyses of sugars and sugar-containing foods. Raman spectra contain information on the nature of the glycosidic linkage in polysaccharides (Sekkal et al. 1995, Kacurakova and Mathlouthi 1996, Nikonenko et al. 2005), the anomeric configuration of simple sugars can be determined from specific Raman modes (Arboleda and Loppnow 2000), and the crystalline or amorphous properties of carbohydrates can be characterized by Raman spectroscopy (Soderholm et al. 1999). Gelatinization and retrogradation processes of starch have long been investigated using both visible and FT-Raman spectroscopy (Bulkin et al. 1987, Kim et al. 1989, Schuster et al. 2000). Gelatinization of different starches was screened by recording the changes in the Raman "fingerprint" characteristics of granular starches, particularly at 478, 1,082, 1,123, and 1,340 cm−1. Kim and others (1989) proposed a molecular-level mechanism for the gelatinization of maize starch based on the information obtained from Raman spectral responses recorded during the course of gelatinization. FT-Raman spectroscopy was employed to investigate the involvement of water in starch gelatinization (Schuster et al. 2000). Celedon and Aguilera (2002) studied the loss of birefringence in starch granules as a function of temperature using a visible Raman microprobe. Tentative band assignments for carbohydrate-related constituents are provided in Table 6.2. The dominant Raman band resulting from the skeletal C-C stretching mode at 480 cm−1 was screened to determine the level of birefringence and to monitor the advance of gelatinization with temperature. Kizil (2003) investigated the quality of gamma irradiation-treated starches, starch
Table 6.2. Tentative FT-Raman band assignments for carbohydrates (Kizil and Irudayaraj 2007).
Raman Responses (cm−1), in crystalline state / in solution, and Band Assignments:
below 700 / below 700: Skeletal modes of the ring structure
763 / 779: C-C stretching
860 / 840: C(1)-H, CH2 deformation
1,087 / 1,076: C-O-H bending
1,122 / 1,124: C-O stretching, C-O-H bend
1,260–1,280 / 1,272: CH2OH (side chain) related mode
1,339 / 1,335: C-O-H bending, CH2 twist
1,382 / 1,370–1,410: CH2 scissoring, C-H and C-O-H deformation
1,460 / 1,462: CH2 bend
2,800–3,000 / 2,800–3,000: C-H stretch
3,100–3,600 / 3,000–3,600: O-H stretch
gels, honey, and fructose using FT-Raman spectroscopy. Irradiation-induced radiochemical damage to starch was characterized by monitoring the changes in the glycosidic linkage (Kizil et al. 2002). Hydrogen abstraction from the C-H bonds of starch gels due to the attack of hydroxyl radicals was detected in the C-H stretching region of the FT-Raman spectra (Kizil et al. 2006). Raman spectroscopy was found to be sensitive in detecting irradiation-induced damage to the anomeric carbon and C-H bonds of honey and fructose (Kizil and Irudayaraj 2007).
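Before any of the chemometric treatments mentioned earlier in this chapter are applied, Raman spectra are usually smoothed, baseline-corrected, and normalized. The sketch below is a generic preprocessing example of that kind, not the protocol of any of the cited studies; the synthetic spectrum, the linear baseline, and the unit-area normalization are all assumptions made for illustration.

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
shift = np.linspace(400, 3200, 1401)                       # Raman shift axis (cm-1)
spectrum = np.exp(-0.5 * ((shift - 1654) / 12.0) ** 2)     # synthetic C=C band
spectrum += 0.0002 * shift + 0.02 * rng.standard_normal(shift.size)  # sloping baseline + noise

smoothed = savgol_filter(spectrum, window_length=11, polyorder=3)     # Savitzky-Golay smoothing
baseline = np.polyval(np.polyfit(shift, smoothed, deg=1), shift)      # crude linear baseline
corrected = smoothed - baseline
normalized = corrected / np.trapz(np.abs(corrected), shift)           # unit-area normalization

print(shift[np.argmax(normalized)])  # recovered peak position, close to 1,654 cm-1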
Contemporary and Special Applications of Raman Spectroscopy for Food Quality Measurements
Due to its high spatial resolution and nondestructive nature, the application of confocal Raman microscopy to cereal grains has significantly contributed to the understanding of grain texture (Piot et al. 2002). The microscope unit of a Raman spectrometer is focused onto a grain sample, and the Raman signal is collected from a spot a few microns in radius. The microscope stage could be moved a small distance at a time until scanning of
the desired line or area is complete, with Raman signals collected at each position. Such a Raman system was used to study the microstructure of the wheat kernel (Piot et al. 2000) and the conformation of lipid-binding proteins and the composition of the starchy endosperm of wheat grains (Bihan et al. 1996). Piot and others (2000) determined the degree of branching and the conformation of the glycosidic bonds of wheat polysaccharides using a confocal Raman setup. Regarding lipids, the authors predicted the degree of unsaturation of the fatty acid acyl chains and the phospholipid composition using the C=C and PO2− related vibrational modes. Piot and others (2001) investigated the quality parameters of the wheat kernel through confocal Raman investigation of the composition of endosperm cell walls at different stages of grain development, to estimate cell wall structure and the mechanical resistance of grains. Raman spectra of endosperm cell walls at different maturation stages were collected, and the analysis revealed that the amide I band at 1,656 cm−1 and the phenylalanine residue band at 1,003 cm−1 could be used to monitor changes in protein content. Similarly, the change in lipid content during maturation was probed using the band appearing at 867 cm−1. The Raman results of Piot and others (2001) have shown that the endosperm cell wall structure might contain not only arabinoxylan chains with ferulic ester branching but also components such as lipids and proteins. The difference in mechanical resistance of endosperm cell walls was attributed to the involvement of proteins and lipids in the cell wall structure. The usefulness of Raman microspectroscopy in the quality assessment of grains was investigated by probing specific molecular changes involved in grain cohesion and fracture generation during milling, so that flour characteristics could be predicted from grains of different degrees of hardness (Piot et al. 2001). Hardness is an important quality criterion for grains to be processed into flour. FT-Raman spectroscopy was applied to sodium hydroxide (NaOH)-soaked wheat to understand the molecular mechanism of the wheat color class test (Ram et al. 2003). Combining partial least squares (PLS) algorithms with Raman measurements, Archibald and others (1998) determined the total dietary fiber content of a wide variety of cereal foods. The quality of rice and its cooking were assessed through Raman analyses by monitoring the changes in protein-related bands due to denaturation (Barton et al. 2002).
UV resonance Raman spectroscopy has long been considered to be a powerful technique for the identification of microorganisms in
pure cultures (Nelson et al. 1991). However, UV-Raman identification of microorganisms or quantification of specific constituents in a complex medium is difficult. The spectral fingerprints associated with bacteria must be extracted from a stronger extrabacterial background in the Raman spectrum to identify microorganisms in a complex medium. Harhay and Siragusa (1999) applied a hydrogen-deuterium exchange (HDE) protocol to UV resonance Raman spectroscopy to detect bacteria in a complex food matrix. The idea behind HDE, which has long been used to investigate molecular dynamics (Englander and Mayne 1992), is that the exchange of hydrogen atoms for deuterium perturbs the sample, and this induces time-dependent changes in the vibrational spectra of the constituents of the food. If the response of bacteria-specific components to the perturbation is predetermined, the target bacteria can be distinguished from the rest of the constituents.
Multivariate Qualitative Raman Spectroscopy for Food Quality Assessments
Historically, Raman spectroscopy has been considered to be a qualitative analytical tool because it provides useful information on molecular composition, structure, and molecular interactions. The combination of chemometrics and multivariate statistical methods with Raman spectroscopy is becoming an attractive protocol for studying definitive authentication and quality indices of food products, because chemometrics can often be used to extract useful chemical information that is present but hidden or overlapped in Raman spectra due to the complexity of food systems. One of the most common multivariate tools for studying food quality is discriminant analysis (DA). As a subset of chemometrics, DA is used to develop a classification rule that can be applied to determine the identity or quality of an unknown sample. One of the widely used DA techniques is linear discriminant analysis (LDA). Canonical variate analysis (CVA) is another discriminant analysis method that can be used to discriminate between groups of observations. Like principal component analysis (PCA) and PLS, CVA is a projection and data reduction method. CVA scores seek to maximize the between-groups variance relative to the within-groups variance. The aim of the CVA transformation is to provide the best possible one- or two-dimensional
representations for illustrating the differences between groups (Kemsley 1998). CVA can also be used to define the group membership of an individual observation. In most chemometrics software, group membership is assigned through the use of "tolerance" regions or "confidence" levels. The most commonly used tolerance region is 95%. The assignment rules are the same as those defined in statistical language; for example, when an observation does not fall within the (1 − α) × 100% tolerance region of a group, where α is the tolerance level, the observation is rejected from that group. Kizil et al. (2007) studied the irradiation quality of food samples using FT-Raman spectroscopy and showed the differentiation of various doses of gamma-irradiated basic food components and food samples, such as starch, honey, and fats, by applying CVA to FT-Raman measurements. Although the conventional one-dimensional representation of the Raman spectra of irradiated and control (nonirradiated) beef adipose tissues does not show visually detectable patterns of change in the spectrum (Figure 6.2), Kizil (2003) has shown that the control tissues can successfully be discriminated from the irradiated tissues (Figure 6.3). In addition, dose-dependent discrimination of irradiated samples was achieved using CVA implemented with data from the radiochemical damage-related changes in the C=C and C-H stretch regions. As seen in Figure 6.3, the separation between the control and irradiated samples becomes larger as the applied dose increases. Using CVA, Kizil (2003) demonstrated the potential of Raman spectroscopy for detecting radiation-induced chemical changes to be used in the discrimination of foods with respect to the extent of irradiation.
Raman quality measurements of commercial fats and oils have been a main application area of Raman spectroscopy, since the unsaturation of fatty acids and cis/trans conformers are well reflected in the C=C region of the Raman spectrum. Baeten and others (1998) studied the classification of oils and fats using FT-Raman spectroscopy and chemometrics. Oils and fats from 21 different vegetable or animal sources were analyzed, and PCA was employed to classify the samples based on the level of unsaturation. The results showed that monounsaturated and polyunsaturated oil sources were clearly separated in the discriminant space by applying stepwise linear discriminant analysis. It was one of the first reports demonstrating the ability of Raman spectroscopy to classify edible oils and fats according to their degree of unsaturation.
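The discriminant workflow described above can be sketched in a few lines. The example below is a hedged illustration only: it uses scikit-learn's LinearDiscriminantAnalysis applied to PCA scores as a stand-in for CVA, and the "spectra" and dose labels are synthetic placeholders rather than the FT-Raman data of the cited studies.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_points = 20, 300
classes = ["control", "0.5 kGy", "10 kGy"]

X, y = [], []
for i, label in enumerate(classes):
    base = np.sin(np.linspace(0, 6, n_points)) + 0.05 * i     # class-dependent mean "spectrum"
    X.append(base + 0.1 * rng.standard_normal((n_per_class, n_points)))
    y += [label] * n_per_class
X = np.vstack(X)

# Data reduction by PCA followed by a linear discriminant rule, evaluated by cross-validation.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated classification rate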
Figure 6.2. FT-Raman spectra of beef adipose tissue at various irradiation doses (control and 0.2, 0.5, 1, 3, 4.5, and 10 kGy).
Figure 6.3. Discrimination of porcine adipose tissues using CVA (scores on canonical variates CV 1 and CV 2 for nonirradiated tissue and tissues irradiated at 0.2–10 kGy).
One recent study investigated the quality of essential oils from different species of aroma plants (Strehle et al. 2006). In this study, Raman spectroscopy was found to be sensitive for the identification of the main ingredients of essential oils, mostly monoterpenes and phenylpropane derivatives. A pattern recognition technique called hierarchical cluster analysis, applied to the Raman data of oils of different botanical origins, grouped the oils chemotaxonomically, meaning that the chemical composition of the oils rather than their botanical origin was the determining factor in assessing oil quality. FT-Raman spectroscopy has been employed to discriminate the botanical origin of green and roasted coffees for rapid quality assessment of coffee (Rubayiza and Meurens 2005). Two distinct Raman modes appearing at 1,478 and 1,567 cm−1 were found to arise from the kahweol content of coffee. Kahweol is a diterpene found only in the beans of Coffea arabica, and it elevates serum cholesterol and liver alanine aminotransferase levels (Urgert et al. 1995). PCA was applied to the kahweol-related region of the Raman spectra to differentiate the samples based on the amount of kahweol present in the coffee. Olives of different quality were analyzed using FT-Raman spectroscopy to identify the most appropriate olives for the production of high-quality virgin olive oil (Muik et al. 2004). Two different pattern
recognition techniques were tested to discriminate sound olives, frostbitten olives, olives collected from the ground, fermented olives, and diseased olives. Marquardt and Wold (2004) demonstrated that a Raman system equipped with 785-nm laser excitation and a CCD camera can be used for rapid quality screening of fish. Raman measurements of fish fillets from species known to be either high or low in carotenoids, collagen, or fat were treated with semiquantitative PCA. The potential of Raman spectroscopy for rapid and nondestructive analysis of fish quality was demonstrated by correlating the spectral information with the relative concentration information.
Multivariate Quantitative Raman Spectroscopy for Food Quality Assessments
The Raman spectral features of most food products, which contain chemical compositional information, are often in the form of overlapped peaks of Raman-active molecular groups. This necessitates the application of special spectral enhancement procedures and mathematical treatments to carry out a quantitative investigation for the determination of the concentration of a molecule of interest in food products. The application of mathematical and statistical procedures to extract useful chemical information from spectroscopic data is called chemometrics. Chemometrics makes it possible to treat Raman signals, which can be considered multivariate spectroscopic data, with mathematical operations and statistical tests to develop calibration models for predicting the concentrations of samples whose compositions fall within the universe of the training set. Special chemometric tools known as multivariate calibration or eigenvector quantification methods are applied to spectroscopic data to create separate calibration and prediction models from Raman measurements. A series of calibration standards spanning a certain concentration or physical property range is first prepared. Empirical calibration models are then derived using the calibration standards to correlate the Raman spectra with the analyte concentrations or physical properties. In the prediction step, the concentrations or physical properties of unknown samples are estimated using the calibration model. If the calibration
samples were chosen to cover the unknown concentration range of the samples, the estimation would have a precision similar to that obtained in the calibration set. Eigenvector quantification methods, such as PCA and PLS, involve a mathematical operation that decomposes the measurement matrix A into its eigenvectors (principal components) and removes all of the eigenvectors whose corresponding eigenvalues are nearly zero (Martens and Næs 1989). Conceptually, only the eigenvectors that represent the largest variation in the raw data are determined and retained in the analysis. In other words, the raw data (matrix A) are reduced to a set of eigenvectors that retains most of the information offered by the raw measurements. Since the principal components are computed from the original data to represent the variations in each spectrum, the concentrations of the constituents that make up the samples can be regressed from the principal components. The regression of concentration on the principal components is known as principal component regression (PCR). The principal component calibration phase involves two steps: first, the calculation of the principal components and scores from the original measurement matrix, and second, the regression of the scores against the concentrations. The difference between PLS and PCR is that prior information on concentration is incorporated in the calculation of the eigenvectors; hence, the prediction ability of PLS regression is enhanced.
There are increasing numbers of quantitative Raman applications to food. For example, Afseth and others (2005) applied the PLS regression method to Raman spectra of a real food, Norwegian quality salmon with an iodine value range of 147.8–170 grams (g) I2/100 g fat, to predict the fatty acid unsaturation of the samples. The authors used 785-nm excitation and a fiber-optic probe to collect the Raman signal directly from cuts. In another application involving an FT-Raman data collection protocol, the fructose and glucose content of honey was predicted using the PLS method (Batsoulis et al. 2005). The calibration model was prepared using high performance liquid chromatography (HPLC) analysis of the fructose and glucose content of honey. Raman spectroscopy was also applied to predict the extent of adulteration of honey with cheap sugars. Paradkar and Irudayaraj (2001) predicted the extent of beet and cane invert sugar adulteration in honey by applying PLS-based regression analysis to FT-Raman data.
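The calibration/prediction workflow described above can be illustrated with a minimal sketch using scikit-learn's PLSRegression; the "spectra" and concentrations below are synthetic placeholders (a single Gaussian band whose height scales with concentration plus noise), not the honey or salmon data sets cited in the text.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_points = 40, 500
concentration = rng.uniform(0, 20, n_samples)               # e.g., % added invert sugar
pure_component = np.exp(-0.5 * ((np.arange(n_points) - 250) / 15.0) ** 2)
spectra = np.outer(concentration, pure_component) + 0.05 * rng.standard_normal((n_samples, n_points))

# Calibration step on a training set, prediction step on held-out "unknowns".
X_cal, X_test, y_cal, y_test = train_test_split(spectra, concentration, random_state=1)
pls = PLSRegression(n_components=3).fit(X_cal, y_cal)
print(pls.score(X_test, y_test))          # R2 of prediction on held-out samples
print(pls.predict(X_test[:1]).ravel())    # predicted concentration of one unknown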
Future and Conclusion
Raman spectroscopy offers a unique opportunity for both quantitative and qualitative investigation of food products because of its nondestructive nature and ease of sampling. The uniqueness of the Raman effect provides valuable and fruitful information on the chemical composition and the intermolecular or intramolecular interactions of food components. Since molecular-level investigations contribute significantly to the understanding of chemical and physical changes in food, Raman spectroscopy has high potential in quality measurements. The application of multivariate analyses, such as CVA or LDA, to Raman measurements can be used to develop models that classify food products with respect to quality-related properties. With the introduction of efficient spectrometers and advances in optics, more sensitive and portable units can be expected to enter the quality control/analysis sector.
References Afseth NK, VH Segtnan, BJ Marquardt, and JP Wold. 2005. Raman and near-infrared spectroscopy for quantification of fat composition in a complex food model system. Applied Spectroscopy, 59:1324–1332. Afseth NK, JP Wold, and VH Segtnan. 2006. The potential of Raman spectroscopy for characterization of fatty acid unsaturation of salmon. Analytica Chimica Acta. 572:85–91. Allotta F, MP Fontana, R Giordano, P Migliardo, and F Wanderlingh. 1981. Raman scattering in lysozyme solutions.Journal of Chemical. Physics 75:4307–4309. Arboleda PH and GR Loppnow. 2000. Raman spectroscopy as a discovery tool in carbohydrate chemistry. Analytical Chemistry. 72:2093–2098. Archibald DD, SE Kays, DS Himmelsbach, and FE Barton. 1998. Raman and NIR spectroscopic methods for determination of total dietary fiber in cereal foods: A comparative study. Applied Spectroscopy. 52:22–31. Baeten V, P Hourant, MT Morales, and R Aparicio. 1988. Oil and fat classification by FT-Raman Spectroscopy. Journal of Agriculture and Food Chemistry. 46:2638– 2646. Baia L, K Gigant, U Posset, G Schottner, W Kiefer, and J Popp. 2002. Confocal microRaman spectroscopy: theory and application to a hybrid polymer coating. Applied Spectroscopy. 4:536–540. Bailey GF and RJ Horvat. 1972. Raman scattering analysis of cis/trans isomer composition of edible vegetable oils. Journal of American Oil Chemistry Society. 49:494– 498.
Barton FE, DS Himmelsbach AM McClung, and El Champagne. 2002. Twodimensional vibration spectroscopy of rice quality and cooking. Cereal Chemistry. 79:143–147. Batsoulis AN, NG Siatis, AC Kimbaris, EK Alissandrakis, CS Pappas, PA Tarantilis, PC Harizanis, and MG Polissiou. 2005. FT-Raman spectroscopic simultaneous determination of fructose and glucose in honey. Journal of Agricultural and Food Chemistry. 53:207–210. Beattie JR, SEJ Bell, C Borgaard, A Fearin, and BW Moss. 2006. Prediction of adipose tissue composition using Raman spectroscopy: average properties and individual fatty acids. Lipids. 41:287–294. Bihan TL, JE Blochet, A Desormeaux, D Marion, and M Pezelot. 1996. Determination of the secondary structure and conformation of puroindolines by infrared and Raman spectroscopy. Biochemistry. 35:12712–17722. Bulkin BJ, Y Kwak, and CM Dea Iaian. 1987. Retrogradation kinetics of waxy-corn and potato starches—a rapid Raman spectroscopic study. Carbohydrate Research. 160:95–112. Butler M, N Salem, W Hoss, and J Spoonhower. 1979. Raman spectral analysis of the 1300 cm−1 region for lipid and membrane studies. Chemistry and Physics of Lipids. 24:99–102. Carey SP. 1982. Biological application of Raman spectroscopy, New York: Academic Press. Celedon A and JM Aguilera. 2002. Applications of microprobe Raman spectroscopy in food science. Food Science and Technology International. 8:101–108. Chi Z and A Asher. 1998. UV resonance Raman spectroscopy to monitor structural change in the myoglobin. Biochemistry. 37:2872–2884. Englander SW and L Mayne. 1992. Protein folding studies using hydrogen-deuterium exchange labeling and 2-dimensional NMR. Annual Review of Biophysics and Biomolecular Structure. 21:243–265. Hanlon EB, R Manoharan, T-W Koo, KE Shafer, JT Motz, M Fitzmaurice, JR Kramer, I Itzkan, RR Dasari, and MS Feld. 2000. Prospects for in vivo Raman Spectroscopy. Physics in Medicine and Biology. 45:R1–R59. Harhay GP and FR Siragusa. 1999. Hydrogen-deuterium exchange and ultraviolet resonance Raman spectroscopy of bacteria in a complex food matrix. Journal of Rapid Methods and Automation in Microbiology. 7:25–38. Honzatko RB and RW Williams. 1982. Raman spectroscopy of avidin; secondary structure, disulfide conformation, and the environment of tyrosine. Biochem. 21(24):6201–6205. Jayaraman R, M Hata, and Y Soma. 1995. UV Resonance spectroscopy of amino acids. Trends in Analytical Chemistry. 10:337–342. Johnson GL, RM Machado, KG Friedl, ML Achenbach, PJ Clark, and SK Reidy. 2002. Evaluation of Raman spectroscopy for determining cis and trans isomers in partially hydrogenated soybean oil. Organic Process Research & Developments. 6:637–644. Kacurakova M and M Mathlouthi. 1996. FTIR and laser-Raman spectra of oligosaccharides in water: Characterization of the glycosidic. Carbohydrate Research. 284:145– 157.
Kemsley EK. 1998. Discriminant analysis and class modeling of spectroscopic data. Chichester; John Wiley & Sons. Kim I-H, Yeh An-I, BL Zhao, and SS Wang. 1989. Gelatinization kinetics of starch by using Raman spectroscopy. Biotechnology Progress. 5:172–174. Kizil R, J Irudayaraj, and K Seetharaman. 2002. Characterization of irradiated starches by using FT-Raman and FTIR spectroscopy. Journal of Agriculture and Food Chemistry. 50:3912–3918. Kizil R. 2003. Characterization of gamma irradiation damages on major food components using vibrational spectroscopy. PhD Dissertation, Penn State University: University Park. Kizil R and J Irudayaraj. 2006. Discrimination of irradiated starch gels using FT-Raman spectroscopy and chemometrics. Journal of Agriculture and Food Chemistry. 54:13– 18. Kizil R and J Irudayaraj. 2007. Rapid evaluation and discrimination of γ -irradiated carbohydrates using FT-Raman spectroscopy and canonical discriminant analysis. Journal of the Science of Food and Agriculture. 87:1244–1251. Li Chan E and S Nakai. 1991a. Importance of Hydrophobicity in Food Emulsions. In: Microemulsions and Emulsions in Foods. pp 193. London, Blackie. Li Chan E and S Nakai. 1991b. Raman spectroscopic study of thermally and/or dithio-threitol induced gelation of lysozyme, J Agric Food Chem. 39(4):1238– 1246. Long D. 1988. Early history of the Raman effect. International Review of Physical Chemistry. 7:314–349. Marquardt BJ and JP Wold. 2004. Raman analysis of fish: a potential method for rapid quality screening. Lebesmittel Wissenschaft und Technologie 37:1–8. Martens H and T Næs. 1989. Multivariate Calibration. Wiley: Chichester. McCreery RL. 2000. Raman Spectroscopy for Chemical Analysis. John Wiley: New York. Muik B, B Lendl, A Molina-Diaz, and MJ Ayora-Canada. 2003. Direct, reagent-free determination of free fatty acid content in olive oil and olives by Fourier transform Raman spectrometry. Analytica Chimica Acta. 487:211–220. Muik B, B Lendl, A Molina-Diaz, D Ortega-Calderon, and MJ Ayora-Canada. 2004. Discrimination of olives according to fruit quality using Fourier transform Raman spectroscopy and pattern recognition techniques. Journal of Agricultural and Food Chemistry. 52:6055–6060. Muik B, B Lendl, A Molina-Diaz, and MJ Ayora-Canada. 2005. Direct monitoring of lipid oxidation in edible oils by Fourier transform Raman spectroscopy. Chemistry and Physics of Lipids. 134:173–182. Nagai M, H Wajcman, T Nakatsukasa, and A Lahary. 1995. UV resonance Raman Spectroscopy of amino acids. Biochemistry. 34:734–742. Nagai M, A Lahary, and T Nakatsuakska. 1996. Detection of quaternary structure transition using UV resonance Raman spectroscopy. Analytical Chemistry. 70:235– 245. Nelson WH, R Manoharan, and JF Sperry. 1991. UV resonance Raman studies of bacteria. Applied Spectroscopy Reviews. 27:67–124.
Nikonenko NA, DK Buslov, NI Sushko, and RG Zhbankov. 2005. Spectroscopic manifestation of stretching vibrations of glycosidic linkage in polysaccharides. Journal of Molecular Structure. 752:20–24. Nonaka M, E Li Chan, and S Nakai. 1993. Raman spectroscopic studies of thermally induced gelation of whey proteins. Journal of Agriculture and Food Chemistry. 39:1238–1245. Paradkar M and J Irudayaraj. 2001. Discrimination and classification of beet and cane inverts in honey by FT-Raman spectroscopy. Food Chemistry. 76:231–239. Parker FS. 1983, Application of infrared, Raman and resonance Raman spectroscopy, New York: Plenum Press. Pelletier MJ. 1999. Analytical applications of Raman Spectroscopy, Oxford: Blackwell Science. Piot O, J-C Autran, and M Manfait. 2000. Spatial distribution of protein and phenolic constituents in wheat grain as probed by confocal Raman spectral imaging. Journal of Cereal Science. 30:57–71. Piot O, J-C Autran, and M Manfait. 2001. Investigation by confocal Raman microspectroscopy of the molecular factors responsible for grain cohesion in the triticum aestivum bread wheat: Role of the cell walls in the starchy endosperm. Journal of Cereal Science. 34:191–205. Piot O, JC Autran, and M Manfait. 2002. Assessment of cereal quality by micro-Raman analysis of the grain molecular composition. Applied Spectroscopy. 56:1132– 1138. Pitt GD, DN Batchelder, R Bennet, RW Bormett, AW Hayward, BJE Smith, KPJ Williams, YY Yang, KJ Baldwin, and S Webster. 2005. Engineering aspects and applications of the new Raman instrumentation. IEE Proceedings—Science, Measurement and Technology. 152:241–318. Przybycien TM and JE Bailey. 1991. Secondary structure perturbations in salt-induced protein precipitation. Biochim et Biophysica Acta. 1076:103–111. Ram MS, FE Dowell, and LM Seitz. 2003. FT-Raman spectra of unsoaked and NAOHsoaked wheat bran, kernel and ferulic acid, Cereal Chemistry. 80:188–192. Rubayiza AB and M Meurens. 2005. Chemical discrimination of arabica and robusta coffees by FT-Raman spectroscopy. Journal of Agricultural and Food Chemistry. 53:4654–4659. Sadeghi-Jorabchi H, H Hendra, PJ Wilson, and PS Belton. 1990. Determination of the total unsaturation in oils by FR-Raman spectroscopy. Journal of American Oil Chemistry Society. 67:483–486. Sadeghi-Jorabchi H, RH Wilson, PS Belton, JD Edwards-Webb, and DT Coxon. 1991. Quantitative analysis of oils and fats by FT-Raman Spectroscopy. Spectrochim Acta. 47A:1449–1458. Schuster KC, H Ehmoser, JR Gapes, and B Lend. 2000. On-line FT-Raman spectroscopic monitoring of starch gelatinization and enzyme catalyzed starch hydrolysis. Vibrational Spectroscopy. 22:181–190. Sekkal M, V Dincq, P Legrand, and JP Huvenne. 1995. Investigation of the glycosidic linkages in several oligosaccharides using FT-IR and FT-Raman spectroscopies. Journal of Molecular Structure. 348:349–352.
Soderholm S, YH Roos, N Meinander, and M Hotokka. 1999. Raman spectra of fructose and glucose in amorphous and crystalline states. Journal of Raman Spectroscopy. 30:1009–1018. Strehle KR, P Rosch, D Berg, H Schulz, and J Popp. 2006. Quality control of commercially available essential oils by means of Raman spectroscopy. Journal of Agriculture and Food Chemistry. 54:7020–7026. Sushi H and DM Byler. 1988. Fourier deconvolution of amide I Raman band of proteins as related to conformation. Applied Spectroscopy. 42:819–826. Tu AT. 1986. Spectroscopy in biological systems. New York: Wiley. Tuma R. 2005. Raman spectroscopy of proteins: from peptides to large assemblies. Journal of Raman Spectroscopy. 307–319. Urgert R, AGM Schulz, and MB Katan. 1995. Effects of cafestrol and kahweol from coffee grounds on serum lipids and serum liver enzymes in humans. The American Journal of Chemical Nutrition. 61:149–154. Yang H, J Irudayaraj, and MM Paradkar. 2005. Discriminant analysis of edible oils and fats by FTIR, FT-NIR and FT-Raman Spectroscopy. Food Chemistry. 93:25–32.
Chapter 7
Particle Sizing in the Food and Beverage Industry
Darrell Bancarz, Deborah Huck, Michael Kaszuba, David Pugh, and Stephen Ward-Smith
Introduction
Particle size is an extremely important parameter in the food and drinks sector. It influences many physicochemical behaviors, from how a bulk powder flows to its dissolution rate, to mouthfeel and product stability. The science of particle sizing is not covered in any great detail in this chapter. However, the application of various particle-sizing techniques within the food and drinks industry is discussed with the use of pertinent examples.
Particle Size Concepts
Before discussion of the common techniques used for particle size analysis in the food industry, it is important to highlight some concepts about particle sizing. Firstly, what is actually meant by the term "particle"? A particle can be defined as a three-dimensional body with finite mass and small to negligible dimensions that is discontinuous to its surrounding matrix (Jillavenkatesa et al. 2001). Examples include oil droplets in water (an emulsion), water droplets in air (an aerosol), and starch particles in a liquid (a suspension).
Figure 7.1. A cuboid with dimensions of 360 × 140 × 120 mm.
Secondly, how can the particle size of a three-dimensional object that is often complex in shape be reported as a single number? As an
example, how can the size of a three-dimensional object such as the cuboid shown in Figure 7.1 be defined? From this example, it can be seen that to fully characterize the cuboid, it is necessary to use three numbers (360 × 140 × 120 millimeters [mm]). The situation becomes even more difficult as the complexity of the shape increases. There is only one shape that can be completely described with a single dimension and that is the sphere (or circle if we are thinking in two dimensions). For this reason, many particle-sizing techniques report the size of the measured particles in terms of spherical or circular equivalence of some measured property (Allen 1992). For example, the cylinder shown in Figure 7.2 would require two dimensions (360 micrometers [µm] × 120 µm) to describe it. But, if the equivalent sphere method is used, its volume could be calculated and the diameter of a sphere with the same volume as the cylinder can be reported. In this example, this would be equal to 213 µm. Many particle-sizing techniques therefore measure some property of the particle (or group of particles) and report this response as a diameter of a sphere that would produce the same response. As an example, if the cylinder shown in Figure 7.2 were measured by microscopy, the operator may well report the size as 360 µm diameter because this would be the diameter of a sphere having the same length. If the same cylinder were measured using a sieve, the operator could well report the size as 120 µm diameter, because this is the second longest dimension and the cylinder could pass through an aperture of
Figure 7.2. A cylinder with dimensions of 360 µm × 120 µm has the same volume as a sphere 213 µm in diameter.
120 µm. If the cylinder were measured by laser diffraction, the size would be reported as 213 µm, this being the diameter of the sphere with the same volume as the cylinder. We have three different answers from three different techniques, and each of them is correct! This example illustrates why comparing results obtained with different techniques can be fraught with difficulty. Particle size standards are often spherical to simplify calibration and overcome this ambiguity. Discussion of the most common particle-sizing techniques used in the food and drink industry follows.
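As a quick, self-contained illustration of the equivalent-sphere idea (mine, not part of the chapter), the sketch below computes the diameter of the sphere having the same volume as an arbitrary particle, using the cuboid of Figure 7.1 as the example; the function name is hypothetical.

import math

def equivalent_sphere_diameter(volume: float) -> float:
    """Diameter of the sphere whose volume equals the given particle volume."""
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)

# Cuboid of Figure 7.1: 360 x 140 x 120 mm
v_cuboid = 360.0 * 140.0 * 120.0
print(round(equivalent_sphere_diameter(v_cuboid), 1))  # roughly 226 mm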
Commonly Used Sizing Techniques in the Food Industry
Sieving
Sieving has been used as a separation technique in the food industry for thousands of years (for example, the separation of wheat from chaff by the Egyptians). Sieving is still in use for many applications to this day. The most attractive feature of the technique is its price; compared to most analytical techniques, it is extremely low in cost. Another advantage is that separation of undesirable contaminants and particle sizing can be done simultaneously. There are various types of sieves including punched hole sieves and mesh sieves of many diameters. The size range extends from the mm range (woven wire sieves can measure up to 125 mm) to the micron range (micromesh sieves can measure down to
5 microns). The user can also choose to sieve their product dry or wet. For most food applications, dry sieving is preferred because the product could be affected by the application of moisture (for example, flour and coffee would swell). The main problem with sieving is that it will not disperse most food samples, because the fine particles tend to stick to the larger particles and are separated with them. Therefore, it is not an ideal technique for measurement of samples in cases when monitoring the level of fines is important. Coffee is a good example of this. The fines provide much of the flavor, as shown later in this chapter. Some degree of dispersion can be achieved by shaking the sieve on a mechanical shaker, but full dispersion is unlikely to be achieved. Prolonged shaking may also cause attrition. Fragile samples such as freeze-dried coffee or milk would not be suitable for sieve analysis. Sieving is also shape dependent. An elongated particle will be separated on the basis of its second largest dimension, so the size obtained may be substantially smaller than that obtained by other measurement techniques. However, for cases in which the material being measured is low in value and a more expensive technique would not be economically justified, it represents a useful measurement technique.
Light Scattering Techniques
Light scattering is a consequence of the interaction of light with the electric field of a particle or small molecule. This interaction induces a dipole in the particle electric field that oscillates with the same frequency as that of the incident light. Inherent to the oscillating dipole is the acceleration of charge, which leads to the release of energy in the form of scattered light.
Dynamic Light Scattering
Dynamic light scattering (DLS) (sometimes referred to as photon correlation spectroscopy or quasi-elastic light scattering) is a noninvasive technique for measuring the size of particles, typically in the submicron region. Particles and macromolecules in solution undergo Brownian motion, which arises from collisions between the particles and the solvent molecules. As a consequence of this particle motion, light scattered from the particle ensemble will fluctuate with time. In DLS, a digital
correlator continually adds and multiplies these short time scale fluctuations in the measured scattering intensity to generate an autocorrelation function. This is analyzed to extract the diffusion coefficients and subsequently the particle size information (International Standard on Photon Correlation Spectroscopy ISO13321 1996, Lawson and Hanson 1995, Twomey 1997, Provencher 1979, Provencher 1982a, Provencher 1982b). In most DLS instruments, a monochromatic coherent helium neon (HeNe) laser with a fixed wavelength of 633 nanometers (nm) is used as the light source that converges to a waist of focus in the sample by use of a focusing lens. Light is scattered by the particles at all angles. However, a DLS instrument normally only detects the scattered light at one angle and this, conventionally, is 90 degrees.
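The extraction of a size from the correlation function can be sketched as follows. This is a hedged illustration, not the algorithm of any commercial instrument: a single-exponential field correlation function is fitted to obtain the diffusion coefficient D, which is then converted to a hydrodynamic diameter with the Stokes-Einstein equation. The optical and physical constants (633 nm, 90 degrees, water at 25°C) and the 200-nm test particle are assumptions made for the example.

import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 298.15                 # temperature, K
eta = 0.00089              # viscosity of water at 25 C, Pa*s
n_ri = 1.33                # refractive index of water
lam = 633e-9               # laser wavelength, m
theta = np.deg2rad(90.0)   # detection angle

q = 4.0 * np.pi * n_ri / lam * np.sin(theta / 2.0)    # scattering vector, 1/m

# Synthetic correlation data for a 200-nm particle
D_true = kB * T / (3.0 * np.pi * eta * 200e-9)
tau = np.logspace(-6, -2, 200)                         # delay times, s
g1 = np.exp(-D_true * q**2 * tau)                      # field correlation function

def model(tau, D):
    return np.exp(-D * q**2 * tau)

D_fit, _ = curve_fit(model, tau, g1, p0=[1e-12])
d_hydro = kB * T / (3.0 * np.pi * eta * D_fit[0])      # Stokes-Einstein hydrodynamic diameter, m
print(round(d_hydro * 1e9, 1))                         # ~200 nm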
Latest Advances in Dynamic Light Scattering
Historically, the technique of DLS has been restricted to dilute dispersions, or the single scattering regime in which only singly scattered photons are detected. For concentrated dispersions, the individual photon may be scattered by particles many times before it reaches the detector. This phenomenon is termed multiple scattering. It has been demonstrated that, even for samples with very low turbidity levels, the effect of multiple scattering can be significant enough to complicate the evaluation of DLS data (Berne and Pecora 1976). Hence, even though direct particle sizing of samples at high concentrations is highly desired in both modern laboratories and industrial processes, samples typically have to be considerably diluted (0.01–0.1% volume per volume [V/V]) for classical DLS measurements. Overdilution of samples, on the other hand, can result in large errors in the size measurements as well, because of the low signal-to-noise ratio and number fluctuations of particles at low concentrations (Finsy 1994). Each type of sample material will have its own ideal range of sample concentration where optimal measurements should be made (International Standard on Photon Correlation Spectroscopy ISO13321 1996). Recently, cross-correlation DLS instrumentation has been developed for particle sizing of concentrated dispersions (Schatzel 1991). In this technique, two beams illuminating the same scattering volumes of a sample are measured by two detectors with identical scattering vectors, and then their outputs are cross-correlated. In theory, since multiply scattered signals do not contribute to the cross-correlation of the signals,
the intensity cross-correlation function is determined solely by singly scattered light, thereby avoiding the effect of multiple scattering. Diffusive wave spectroscopy (DWS) is another new technique for particle sizing of concentrated dispersions (Pine et al. 1988). In contrast to the cross-correlation technique, the DWS technique requires multiple scattering as a necessity rather than something to be avoided. Similar to the DLS method, the DWS technique involves measurement of fluctuations in the intensity of detected light to determine the particle diffusion coefficient and corresponding particle size using the Stokes-Einstein relationship. DWS can overcome the problem of multiple scattering by modeling the light propagation as a diffusion process. This aspect allows DWS to be used in more concentrated samples than its single scattering counterpart. Unlike conventional DLS however, no independent information about polydispersity or the particle size distribution can be extracted from DWS measurements because of the inherent averaging over a large range of spatial length scale (Horne and Davidson 1993). Consequently, the DWS technique is more suitable for study of particle interactions and dynamic properties of dense dispersions with known particle size information. Another way in which the concentration limit can be extended in dynamic light scattering is to use backscatter detection (German patent 19725211, U.S. patent 6016195, Japan patent 2911877). In this optical configuration, the illuminating laser does not have to travel through the entire sample and because the scattered light passed through a shorter path length of sample, multiple scattering is reduced and therefore higher sample concentrations can be measured. Large particles, such as dust, mainly scatter in the forward direction. Therefore, backscatter detection minimizes the effects of dust. If the measurement position within the cuvette can be changed, the system can be optimized for both high and low sample concentrations. For small particles, or samples of low concentration, it is beneficial to maximize the amount of scattering from the sample. As the laser passes through the wall of the cuvette and into the dispersant, the laser will “flare.” This flare may swamp the scattering signal. Moving the measurement point away from the cuvette wall toward the center of the cuvette will minimize the chance of detecting this flare (Figure 7.3a). Large particles or samples of high concentration scatter much more light. In this situation, measuring closer to the cuvette wall will reduce the effect of multiple scattering by minimizing the path length over
Figure 7.3. Schematic diagram showing the measurement position for (a) small, weakly scattering samples and for (b) concentrated, opaque samples. The measurement position is achieved by moving the focusing lens accordingly.
which the scattered light has to pass (Figure 7.3b). The measurement position can be automatically determined through a combination of the intercept of the correlation function and the intensity of the scattered light that has been detected.
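The data reduction behind such a DLS measurement can be sketched in a few lines. The example below is a hedged, minimal illustration rather than the algorithm of any particular instrument: it assumes an intensity autocorrelation function of the single-exponential form appropriate to a monodisperse sample, extracts the decay rate, and converts it to a hydrodynamic diameter through the Stokes-Einstein relationship. The correlogram is synthetic stand-in data, and the optical parameters (633 nm, 173° backscatter, water at 25°C) simply mirror the configuration discussed in the following section.

```python
# Hedged sketch of the standard DLS data reduction: an intensity
# autocorrelation function g2(tau) is reduced to a decay rate, a diffusion
# coefficient, and finally a hydrodynamic diameter via Stokes-Einstein.
import numpy as np

kB = 1.380649e-23        # J/K
T = 298.15               # K (25 C)
eta = 0.00089            # Pa*s, viscosity of water at 25 C
n_med = 1.33             # refractive index of the dispersant
wavelength = 633e-9      # m
theta = np.deg2rad(173)  # backscatter detection angle

q = 4 * np.pi * n_med * np.sin(theta / 2) / wavelength   # scattering vector (1/m)

# --- synthetic "measured" correlogram for a 200 nm sphere (stand-in data) ---
d_true = 200e-9
D_true = kB * T / (3 * np.pi * eta * d_true)
gamma_true = D_true * q**2
tau = np.logspace(-7, -2, 200)                        # lag times (s)
g2_minus_1 = 0.85 * np.exp(-2 * gamma_true * tau)     # intercept (beta) = 0.85

# --- data reduction: linear fit of ln(g2 - 1) versus tau gives the decay rate ---
mask = g2_minus_1 > 1e-3                              # keep the well-defined part
slope, intercept = np.polyfit(tau[mask], np.log(g2_minus_1[mask]), 1)
gamma = -slope / 2                                    # g2 - 1 = beta * exp(-2*Gamma*tau)
D = gamma / q**2                                      # translational diffusion coefficient
d_h = kB * T / (3 * np.pi * eta * D)                  # Stokes-Einstein hydrodynamic diameter
print(f"recovered hydrodynamic diameter: {d_h * 1e9:.1f} nm")
```

Commercial software uses more robust reductions (cumulants or regularized inversion), but the chain from correlogram to decay rate to diffusion coefficient to size is the same.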
Dynamic Light Scattering Food Applications

Conventional DLS instrumentation requires the sample to be significantly diluted to reduce multiple scattering effects. Dilution of any sample could change its morphology, and so the ability to measure samples at or near the neat concentration is very desirable. An instrument incorporating backscatter detection allows measurements to be made at much higher concentrations for the reasons discussed above. To illustrate the capabilities of a backscatter detection instrument, measurements of a flavoured alcoholic beverage emulsion (∼10% by weight [w/w]) were made using a Zetasizer Nano system at 25°C. The Nano S (Malvern Instruments Ltd., United Kingdom [UK]) uses a 4-milliwatt (mW) HeNe laser operating at a wavelength of 633 nm and a detection angle of 173°. Measurements were collected on both neat and dilute forms of the sample, with the dilute forms being prepared by dispersing the neat sample in 10 mM sodium chloride (NaCl) solution.
Figure 7.4. Intensity size distributions obtained for various concentrations of the emulsion sample using the viscosity of water.
In a concentrated dispersion, the diffusion of the particles may be hindered by the presence of other particles. This phenomenon normally manifests itself by an increase in the viscosity. These effects may be taken into account by replacing the dispersant viscosity with the actual viscosity of the bulk dispersion in the Stokes-Einstein equation. Figure 7.4 shows the concentration dependence of the intensity particle size distributions for the emulsion sample, using the solvent viscosity (i.e., water) in the Stokes-Einstein equation. An increase of sample concentration leads to an apparent increase of particle size, with no noticeable increase in the width of the distribution, suggesting the influence of hindered diffusion at higher concentrations. Figure 7.5 shows the concentration-dependent size distributions from Figure 7.4, recalculated using the viscosity of the bulk sample to compensate for hindered diffusion effects. The use of the bulk sample viscosity virtually eliminates the concentration dependence, yielding a mean size of circa 200 nm for all sample concentrations.
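In practice the correction amounts to a single substitution in the Stokes-Einstein relationship. The short sketch below illustrates the arithmetic only; the diffusion coefficient and the bulk viscosity are assumed example values, not data from the measurement described above.

```python
# Illustration of the hindered-diffusion correction: the same apparent
# diffusion coefficient is converted to a size once with the dispersant
# viscosity and once with the (assumed) measured bulk viscosity.
from math import pi

kB, T = 1.380649e-23, 298.15   # J/K, K (25 C)
D_measured = 1.6e-12           # m^2/s, apparent diffusion coefficient (example value)
eta_water = 0.00089            # Pa*s, viscosity of water at 25 C
eta_bulk = 0.00137             # Pa*s, assumed measured viscosity of the neat emulsion

def stokes_einstein_diameter(D, eta):
    """Hydrodynamic diameter d = kB*T / (3*pi*eta*D)."""
    return kB * T / (3 * pi * eta * D)

d_apparent = stokes_einstein_diameter(D_measured, eta_water)   # overestimates the size
d_corrected = stokes_einstein_diameter(D_measured, eta_bulk)   # compensates for hindered diffusion
print(f"apparent: {d_apparent * 1e9:.0f} nm, corrected: {d_corrected * 1e9:.0f} nm")
```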
Laser Diffraction

Laser diffraction has become one of the most widely used techniques for particle size analysis in the food industry, with applications from
Figure 7.5. Intensity size distributions obtained for various concentrations of the emulsion sample recalculated using the viscosity of the bulk sample to compensate for hindered diffusion effects.
product development through to production and quality control. The major applications involve the sizing of dairy products, flavor emulsions, coffee, sugar, flour, and chocolate. It relies on the fact that particles passing through a laser beam will scatter light at an angle that is directly related to their size. As particle size decreases, the observed scattering angle increases logarithmically. Scattering intensity is also dependent on particle size, diminishing with particle volume. Large particles therefore scatter light at narrow angles with high intensity, whereas small particles scatter at wider angles but with low intensity (Figure 7.6). It is this behavior that instruments based on the technique of laser diffraction exploit to determine particle size. A typical system consists of a laser, to provide a source of coherent, intense light of fixed wavelength; a series of detectors to measure the light pattern produced over a wide range of angles; and some kind of sample presentation system to ensure that material under test passes through the laser beam as a homogeneous stream of particles in a known, reproducible state of dispersion. The dynamic range of the measurement is directly related to the angular
Figure 7.6. Light scattering patterns observed for different particles. The angular range and intensity changes observed relate directly to particle size.
range of the scattering measurement, with modern instruments making measurements from around 0.02 degrees to 140 degrees (Figure 7.7). The wavelength of light used for the measurements is also important, with smaller wavelengths (e.g., blue light sources) providing improved sensitivity to submicron particles. In laser diffraction, particle size distributions are calculated by comparing a sample's scattering pattern with an appropriate optical model. Traditionally, two different models are used: the Fraunhofer approximation and Mie theory. The Fraunhofer approximation was used in early diffraction instruments. It assumes that the particles being measured are opaque and scatter light at narrow angles. As a result, it is only applicable to large particles and will give an incorrect assessment of the fine particle fraction.
Figure 7.7. A typical laser diffraction system.
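As a rough illustration of the inverse relationship between size and scattering angle that these instruments exploit: in the Fraunhofer limit, the first diffraction minimum of a particle of diameter d illuminated at wavelength λ falls near sin θ ≈ 1.22 λ/d. The sketch below evaluates this for a few sizes. It is a simplification for intuition only; as discussed next, real instruments fit the full scattering pattern with Mie theory, and the 633 nm wavelength is just a representative red laser line.

```python
# First Fraunhofer diffraction minimum versus particle diameter, showing why
# large particles scatter into narrow angles and small particles into wide
# ones.  Illustrative only; instruments fit the whole pattern, not one minimum.
import math

wavelength_um = 0.633                      # red HeNe line, in microns
for d_um in [1000, 100, 10, 2, 1]:         # particle diameters in microns
    s = 1.22 * wavelength_um / d_um
    theta = math.degrees(math.asin(min(s, 1.0)))
    print(f"d = {d_um:6.1f} um  ->  first minimum near {theta:6.2f} degrees")
```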
Mie theory provides a more rigorous solution for the calculation of particle size distributions from light-scattering data. It predicts scattering intensities for all particles, small or large, transparent or opaque. Mie theory allows for primary scattering from the surface of the particle, with the intensity predicted by the refractive index difference between the particle and the dispersion medium. It also predicts the secondary scattering caused by light refraction within the particle. This is especially important for particles below 50 microns in diameter, as stated in the international standard for laser diffraction measurements (ISO 13320 Particle Size Analysis—Laser Diffraction Methods 1999). Laser diffraction is a nondestructive, nonintrusive method that can be used for either dry or wet samples. Because it derives particle size data using fundamental scientific principles, there is no need for external calibration. Well-designed instruments are easy to set up and run, and require very little maintenance. Additionally, the technique offers the following advantages:

1. A wide dynamic measuring range: Modern systems allow users to measure particles in the range from 0.02 micron to a few millimeters without changing the optical configuration, ensuring that both well-dispersed and agglomerated particles are detected equally well.

2. Flexibility: The technique is equally applicable to sprays, dry powders, suspensions, and emulsions, allowing different product formulations to be compared in a realistic way.

3. Generation of volume-based particle size distributions: This is normally equivalent to a weight distribution (like traditional techniques such as sieving) and is relevant to many processes because it indicates where most of the mass of material is located in terms of particle size (a short sketch of how such a distribution is summarized into percentiles and moment means follows this list).

4. High repeatability: The ability to acquire data rapidly allows many thousands of measurements to be averaged when reporting a single result. This, coupled with standardized operating procedures, ensures that the instrument-to-instrument variation is less than 1%, enabling direct comparison of data from different sites.

5. Ease of verification: As a first-principles technique, laser diffraction does not require calibration but can be easily verified using a variety of readily available National Institute of Standards and Technology (NIST)-traceable standards (e.g., from Duke Scientific, Whitehouse Scientific, NIST).
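To make the volume-weighted quantities used throughout this chapter concrete, the sketch below shows how a binned volume distribution is typically summarized into volume percentiles such as Dv(50) and Dv(90) and into the moment means D[4,3] and D[3,2] referred to later. The size classes and volume fractions are invented example data, not output from any instrument, and real software interpolates the cumulative curve more carefully than this minimal version.

```python
# Summarizing a laser-diffraction volume distribution: volume percentiles by
# interpolating the cumulative curve, plus the D[4,3] and D[3,2] moment means.
import numpy as np

# size-class centres (microns) and volume percent in each class (made-up data)
sizes = np.array([0.5, 1, 2, 5, 10, 20, 30, 50, 100], dtype=float)
vol_pct = np.array([1, 3, 8, 18, 28, 24, 12, 5, 1], dtype=float)
vol_frac = vol_pct / vol_pct.sum()

cum = np.cumsum(vol_frac)                      # cumulative volume fraction

def dv(p):
    """Size below which a fraction p of the total volume lies, e.g. dv(0.9) = Dv(90)."""
    return float(np.interp(p, cum, sizes))

d43 = float(np.sum(vol_frac * sizes))          # volume (De Brouckere) mean D[4,3]
d32 = float(1.0 / np.sum(vol_frac / sizes))    # surface-area (Sauter) mean D[3,2]

print(f"Dv(10)={dv(0.10):.1f}  Dv(50)={dv(0.50):.1f}  Dv(90)={dv(0.90):.1f} microns")
print(f"D[4,3]={d43:.1f}  D[3,2]={d32:.1f} microns")
```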
Applications of Laser Diffraction in the Food Industry
Chocolate

Chocolate is without doubt one of the world's best-loved foodstuffs. Fry and Sons produced the first plain chocolate bar in 1847, with the first milk chocolate product being launched by Nestle in the 1870s (Tannebaum 2004). Initially product consistency was poor. However, the introduction of the chocolate kneading process, referred to as conching, by Lindt in 1879 yielded improved flavor and texture (Tannebaum 2004). Since then the world's desire for chocolate products has expanded rapidly; in 2001, chocolate consumption in the U.S. reached over 1.75 million metric tons, with worldwide sales values of more than $13 billion (Deis 2003). For the consumer, taste is the overriding factor in selecting a chocolate product; for the producer, consistent high quality using optimized, economical, and efficient production systems is vital. Although there are many parameters to be considered in the production of chocolate, a major factor at all stages is the solid ingredient particle size distribution because this has a significant effect both on the final product and on the cost and efficiency of the production process itself. This section examines why particle size analysis is so important in the manufacture of chocolate. Examples of the characterization of different chocolate products using laser diffraction are also described.

Achieving Efficient Production

For many years, chocolate manufacture was regarded as a highly skilled process, heavily dependent on the expertise and experience of those involved at each stage of production. However, given the expanding and competitive market for chocolate, there have been moves toward increased mechanization and automation of the production processes to achieve higher output. This change has required a greater analysis and knowledge of the underlying processes involved in chocolate production. Understanding, monitoring, and controlling particle size has therefore become an important factor in ensuring a consistent and high quality product.

The Manufacturing Process

To understand the significance of particle size and particle size analysis, it is necessary to take a brief look at the various stages between cocoa bean and final product.
Chocolate is basically a suspension of sugar, cocoa, and milk particles in a continuous fat phase, and the aim of chocolate production is to give the product the optimum flow properties for further processing. Through the processes of fermentation, drying, and roasting, the cocoa bean remains reasonably intact with a particle size of several millimetres. Although subsequent processing may take many different forms, there is a common requirement for cocoa particles, sugar, and any milk solids to be too small for detection on the tongue (typically less than 30 microns). This demands particle size reduction, or grinding, for which a number of processes are used depending on the final quality required and the raw materials used. Control of particle size is important for a number of reasons not least the final taste. For example, if cocoa and sugar particles within the product are too coarse, consumers will describe the mouthfeel as “gritty,” and flavor release will be poor. Conversely, if the particle size is too fine, the product requires higher amounts of cocoa butter to achieve the correct flow properties, resulting in a mouthfeel that is “sticky” or “sickly.” Cocoa Mass, Cocoa Powder, and Cocoa Butter. Cocoa bean pods are the fruit of the Theobroma cacao tree. Each tree produces around 20–30 pods a year, yielding around 2 ounces of cocoa beans. Once harvested, the beans are fermented and dried before being shipped to the chocolate producers for processing. During processing only the nib, the crushed and skinned bean, is ground. Shell removal breaks the nib into coarse pieces and a relatively small proportion of fine material. Whether the final product is to be cocoa powder, cocoa butter, or chocolate, the nib must be further ground to a fine homogenous mass. Pregrinding of the nib results in an increase in temperature and produces cocoa butter as a liquid mass, producing a liquid called “chocolate liquor.” During the initial grinding state, it is important that the cocoa butter is completely released from the cocoa cells. It is also vital that the proportion of very fine cocoa solids is kept low because finer particles bind fat and lead to a cocoa mass with poor flow properties. At the end of this process, the cocoa butter and cocoa solids are separated, ready for further processing. Sugar. In chocolate production, there is a trend toward grinding sugar in a two-step process, when mixed with cocoa mass, milk powder, and other ingredients. A major objective in grinding the sugar is to
produce a closely defined particle size distribution, because this leads to well-defined physical properties within the chocolate mass. For sensory reasons, the particle size in the chocolate should not exceed 30 microns, whereas for optimum rheology, it should not fall below 7 microns. Milk. A number of factors are important when considering milk products for chocolate production. Milk proteins, some components of the milk fat, and the milk fat triglyceride structure have an influence on the physical and processing properties of milk. When bespoke milk-based ingredients are produced, the crystalline structure of the lactose is of tremendous importance because it influences the particle size distribution in the final chocolate product. Milk and Chocolate Crumb. Milk and chocolate crumb are used specifically in the manufacture of milk chocolate. Crumb is produced by blending the cocoa mass with milk and sugar. Originally, this crumb was developed as a means of storing fresh milk during the peak milk production times of spring and summer, which are low seasons for chocolate. Here the cocoa mass stabilizes the crumb to prevent it from becoming rancid when exposed to air.

Conching

Conching is a final mixing stage prior to the formation of the final chocolate product. The chocolate crumb is slowly mixed with cocoa butter, emulsifiers, and flavoring. It is then subjected to shear at relatively high temperatures for long periods of time (sometimes up to 72 hours). The nature of the changes that occur within the product is poorly understood. However, it is believed to eliminate unwanted aromas and flavors associated with volatile organic compounds while the required flavors are developed in the chocolate paste. Cocoa butter is also added to improve fluidity. The conching process results in a smooth glossy product that has a relatively fine particle size. This is then tempered and molded to produce the final chocolate product.

Optimizing Chocolate Production

When conching is complete, every particle should be coated with fat to ensure good lubrication. The most expensive constituent is the cocoa butter. As its price has increased, it has become important to achieve
the same product properties while minimizing its use. Manipulation and control of the particle size distributions of the solid materials is crucial in achieving this. Small particles have a large specific surface and therefore a high fat requirement, whereas large particles have a small specific surface and need less fat. However, because a chocolate is perceived to be gritty when particles greater than 30 microns in size exist, strictly defined size distributions must be maintained.

Challenges of Particle Size Measurement

It is clear that particle size at many different stages of chocolate production will have a significant impact either on downstream processing or on the final product. Particle size measurement is therefore critical. Laser diffraction is the most effective measurement method for this type of system. It requires that a sample of the product, with its agglomerates, be dispersed in an appropriate solvent that suspends the particles and at the same time dissolves fats and other intermediates. Originally chocolate was dispersed in trichloroethane during laser diffraction measurements. However, this solvent can no longer be used in analysis laboratories. Instead, because of its polarity, Volasil 344 can be used because it has similar solvent properties to trichloroethane and therefore yields equivalent results. Isopropyl alcohol (IPA) or sunflower oil can also be used, although these yield slightly different results to Volasil because they dissolve out some of the nonvolatile organic material.

Characterizing Different Chocolates

The chocolate products available for resale have different properties dependent upon the country of origin and the target market within each country. As such, there is no standard chocolate recipe, and significant differences both in raw ingredients and the particle size distribution can be observed. Here, the Mastersizer 2000 has been used to characterize different chocolate products. Each measurement was carried out in IPA, with 3 minutes of low-power sonication being used to melt and disperse the chocolate prior to measurement. American and United Kingdom Chocolate Products. Figure 7.8 shows a comparison of the particle size of two leading brands, one sold in America and the other in the UK. The requirements for mouthfeel and
Figure 7.8. Particle size distributions recorded for UK and American chocolate bars.
flavor differ between these two countries. The UK consumer is used to a smoother mouthfeel. This not only affects the composition of the chocolate but also defines the end particle size. As can be seen in Figure 7.8, the size distribution for the American chocolate extends to much larger particle sizes compared to the UK product. This yields a grittier product with different melting and taste release characteristics compared to the UK product. Luxury Brands. Some brands of chocolate market themselves as providing a luxurious smooth feel. Again the required melting characteristics and lack of grittiness are achieved both through the choice of ingredients and through controlling the particle size. This is shown in Figure 7.9 where a so-called luxury brand is compared against a standard brand and an economy brand. As can be seen, the particle size distribution moves to larger particle sizes in moving from the luxury product to the economy product. Obviously the milling and conching duration used for the economy brand
Figure 7.9. Particle size distributions recorded for standard, luxury, and economy (supermarket’s own) brands.
will be much shorter. In addition, the amount of expensive cocoa butter required during processing will be less. The above example relates to the UK market. A similar trend is also observed for the U.S. market. In Figure 7.10, the U.S. brand shown in Figure 7.8 is compared against another product designed to yield a smoother, more European mouthfeel. Again, to achieve this, the ingredients are ground to a finer particle size, although the difference observed is less pronounced than would be expected considering the properties of most UK brands. More noticeable perhaps is the change in the cocoa and milk solids content required to mimic a European brand.

Comparing Dark and Milk Chocolate
The difference between dark and milk chocolate is shown in Figure 7.11. Here, the difference observed relates to the ingredients used. Dark chocolate generally contains a large amount of sugar to produce a palatable product. This yields a coarser particle size and a grittier mouthfeel.
Figure 7.10. Comparison of a standard U.S. brand against a chocolate product designed to be more European in taste comparing dark and milk chocolate.
Differences are also observed at the fine end of the distribution—these relate to the presence of milk solids within the milk chocolate.

Dairy and Food/Flavor Emulsions
The particle size of the fat droplets present in dairy and other food emulsions is important in defining properties such as flavor release, mouthfeel, and the emulsion stability. Large emulsion droplets can lead to poor flavor release, a greasy mouthfeel, and poor stability as a result of creaming. Emulsification to a smaller droplet size tends to reduce creaming and improve the taste of a product. However, in so doing, a balance is required, because decreasing the particle size increases the available surface area, which in turn can lead to flocculation if the emulsifier concentration is not controlled. It will also change the rheological properties of the emulsion. In other products, such as ice cream, the particle size of the fat droplets is important in defining structural characteristics. Aggregated fat clusters are known to be involved in the stabilization of the air cells within
Figure 7.11. Particle size distributions recorded for milk chocolate and dark chocolate (UK market).
whippable dairy products. The formation of these clusters can only be achieved by the controlled destabilization of the fat emulsion. Thus, a knowledge of the particle size is important in defining the functionality and taste of different food emulsion products.

Emulsion Measurements
Laser diffraction provides an excellent tool to the food industry for the characterization of food emulsions. Its wide dynamic range permits both fine emulsion droplets and larger flocculated or coalesced droplets to be characterized. This range also allows for the measurement of large protein micelles, such as casein, enabling the interaction between the protein and emulsified fat phase to be understood (McCrae and LePoetre 1996). Characterizing Different Milk Products The particle size of milk products can be easily assessed using laser diffraction, allowing changes in the fat phase to be detected. An example of this is shown in Figure 7.12 where typical results for full fat (3.6% fat),
Figure 7.12. Size distributions recorded for full fat, semi-skimmed (half and half) and skimmed milk.
semi-skimmed (half and half, 1.7% fat), and skimmed milk (0.1% fat) are shown. As can be seen, two modes can be detected in each sample, one relating to the fat phase and one relating to free casein micelles. In moving from full fat to skimmed milk, the relative proportions in each mode change, tracking the reduction in fat content.

Milk Homogenization

During processing, milk emulsions are normally homogenized to reduce creaming during storage. Laser diffraction can be used to track the progress of homogenization, as shown in Figure 7.13. During the homogenization of a milk emulsion (red curve, Figure 7.13), a decrease in particle size is initially observed as the homogenization pressure is increased. However, at high pressures the observed decrease becomes less pronounced. This is because of fat-cluster formation caused by bridging of the casein protein between the fat droplets within the emulsion. This occurs when the surface area of
Figure 7.13. Variation of the surface area mean -D(3,2) with homogenization pressure for a standard milk emulsion and a cluster free emulsion containing the caseindissolving solution (Muir et al. 1991).
the fat droplets becomes too large to be covered by the available protein. Formation of these fat clusters can be inhibited using an appropriate "casein-dissolving" solution (Muir et al. 1991). Adding this to the milk emulsion disperses the fat clusters, yielding a smaller particle size (blue curve, Figure 7.13).

Emulsion Storage
The behavior of dairy emulsions during storage can also be related to particle size. Often emulsions such as cream liqueurs are found to increase in viscosity and even gel during prolonged storage. Figure 7.14 shows how the D (v, 0.9) (particle size below which 90% of the volume of droplets exists) varies as a function of the viscosity measured over time for different liqueurs. Changes in the D (v, 0.9) can be used to detect the appearance of large particles. As can be seen, a direct correlation can be observed between the D (v, 0.9) and the viscosity, with a move
Figure 7.14. Variation in particle size observed during the storage of cream liqueurs.
to the coarser particle sizes as the viscosity increases. This is caused by the formation of a flocculated droplet network.

Milk Powder Rehydration
Milk products are often spray dried before being shipped and reconstituted. The process of reconstitution of the spray–dried powder is an important factor in the production of many foodstuffs. This can be followed using laser diffraction. Figure 7.15 shows the evolution of the particle size of an aqueous solution containing 5% weight per volume (w/v) milk powder. The initial size of the powder was relatively larger (>10 microns). Over time, samples of the solution were collected and measured. As can be seen, a mode at very fine particle sizes was observed as the larger powder mode decreased in volume. This fine mode relates to protein micelle formation during the rehydration of the powder. Hydration is initially rapid but then slows down dramatically, with the process taking several hours to complete. In this case, skimmed-milk powder was used, thus no fat was detected.
Figure 7.15. Following milk powder reconstitution using Mastersizer 2000.
Flavor Emulsions

Flavor emulsions are oil-in-water emulsions that are normally prepared as a concentrate and can be diluted to form a final product. Two types of flavor emulsion are used in the food industry. One is a high-concentration emulsion of essential oil stabilized with emulsifiers, stabilizers, and other additives. The other is a flavored oil emulsion with added vegetable oil that is formulated to give a cloudy appearance. The emulsions must be stable over time (both in their concentrated and diluted forms). Sedimentation of flocculated material at the bottom of the container, or creaming of oil to the top of the container, are both undesirable. The system is often stabilized by the addition of hydrocolloids such as xanthan gum, gum Arabic, alginates, and carrageenans. These adsorb to the oil–water interface and can impart both electrostatic and steric (electrosteric) stability to the emulsion. The optimal particle size is often considered to be <3 microns (especially if caramel color is used as an emulsifier). The concentrate will also contain sugar and preservatives. By monitoring the change in particle size
Figure 7.16. Laser diffraction used to detect outsize particles in flavor emulsions.
upon storage or upon dilution, laser diffraction can be used to screen new formulations so that unstable formulations can be rejected. Crystallization of sugar within flavor emulsions can also occur and can lead to batch-to-batch problems. Figure 7.16 shows two different flavor emulsions of similar sizes, one of which had a stability problem. There is a peak of large particles (which could be coalesced oil droplets or sugar crystals) in the 10–100 µm region seen on the particle size distribution of the unstable formulation.

Coffee
The best-tasting coffee comes from whole beans that have been freshly ground minutes before brewing. Over time, the coffee will oxidize and the flavor will change. The grinding process often occurs in a small-scale batch process in the shop or café where the coffee is purchased (or using a home grinder) as well as on a large scale (for pre-ground coffee). In both cases, the user needs to establish how long the coffee should be ground and understand how this impacts the flavor of the coffee. As will be demonstrated here, the flavor of the coffee does depend on the particle size, so the suppliers and users of the grinders need to know what size of product results under certain grinder settings.

Industry Background

Coffee is harvested from either single-stem or multistem coffee bushes. These bushes tend to be kept to a maximum height of 1.6 meters in order to keep within the reach of harvesters. To obtain good quality coffee, only red ripe cherries are harvested, and at the last round of picking, all of the cherries, ripe or otherwise, are collected. This ensures good
Figure 7.17. Particle size of coffee produced by a grinder at its fastest (green), a medium speed (red), and its slowest (blue) speed.
pest control of the coffee berry borer. Before drying, the cherries are generally poured into a vat. If they float, they are bad and are discarded. The cherries are dried quickly over raised mats until they are quite dry; drying on the ground produces poorer quality coffee and harder beans. To avoid fermentation, which also lowers quality, the cherries are spread very thinly; periodic turning of the cherries is necessary to ensure even drying. When the coffee beans are dry, they are then moved to a dry place to avoid wetting by rain, which can also impair product quality. The next stage in the process is hulling, where the dried cherries are passed through a huller that separates the dry outer covering from the coffee bean. The beans are then separated from the outer coatings and any other foreign matter, and they are then ready for roasting.

Coffee Flavors

The factors that affect the quality of a coffee are the quality of the grain, the degree of roasting, and the particle size. The additional factors affecting the taste are the hardness of the water used, the temperature of the water, or the temperature and pressure of the steam, which affects the degree of extraction of the coffee components. Research has shown that particle size directly affects the infusion of the soluble material within the coffee (Smith and Thomas 2003): the finer the grind, the greater the amount of soluble material released. The rate was also found to be temperature dependent. The following results show coffee ground in an electric mill (Figure 7.17) at the slowest (maximum size) and the fastest (minimal size) speed achievable. The smaller particle size coffee will produce a far
Figure 7.18. Particle size of a pre-ground espresso and pre-ground filter coffee.
more bitter brew than the large one from exactly the same beans due purely to its size. The smallest coffee will be more suited for espresso and the larger one for a general filter coffee, with the medium one being somewhere in between. If we compare this trend with pre-ground, off-the-shelf coffees for espresso and filter coffee (Figure 7.18), the same trend is seen.

Online Particle Sizing Techniques

In the past, particle size analysis was always performed in the laboratory, using periodic sampling and small, potentially unrepresentative, sample volumes. This was a perfectly suitable procedure from a quality control perspective, but it was insufficient to make a genuine contribution to real process understanding or the automation of process control. Recently, online particle size analyzers that make continuous measurements on large sample volumes directly on the process line and in real time have been developed. The Insitec L (Malvern Process Systems, Malvern Instruments Ltd., UK) has the capability to provide fully automated control and monitoring of the manufacturing process. The Insitec is a laser diffraction-based instrument that continuously monitors the scattering of light from particles passing through the measurement zone. The scattered light is measured and the resulting particle size distribution is calculated using rigorous Mie light scattering theory. In a typical emulsion system, material passes through a liquid flow cell where the measurement is made (Figure 7.19). Since material continuously passes through the measurement zone, a particle size distribution is generated as a function of time.
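Because the analyzer produces a stream of size results rather than a single report, even a very simple trending layer can turn it into a process-monitoring tool. The sketch below is a generic, hedged illustration of that idea (smoothing a stream of Dv(50) readings and raising an alarm on drift); the readings, smoothing constant, and limit are invented values, and this is not a description of the Insitec software itself.

```python
# Minimal trending of an online size parameter: a stream of Dv(50) readings is
# smoothed with an exponentially weighted moving average (EWMA) and an alarm
# is raised when the smoothed value drifts past a limit.
def trend_dv50(readings_um, limit_um, alpha=0.2):
    ewma = None
    for t, dv50 in enumerate(readings_um):
        ewma = dv50 if ewma is None else alpha * dv50 + (1 - alpha) * ewma
        status = "ALARM" if ewma > limit_um else "ok"
        print(f"t={t:02d}  Dv50={dv50:5.2f} um  smoothed={ewma:5.2f} um  {status}")

# invented example readings: droplet size drifting upward over time
trend_dv50([1.10, 1.08, 1.12, 1.11, 1.35, 1.60, 1.72], limit_um=1.5)
```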
Figure 7.19. Schematic representation of an Insitec online particle size analyzer.
Typical process control parameters such as the average particle size, Dv(90), %<, and mesh can be trended as a function of time. Further software can export data to the plant for archiving and process control. The data attained compares closely to traditional off-line analysis because laboratory systems are based on the same laser diffraction principle. Online particle size analyzers have the distinct advantage of being right at the process line where the material is in its natural state and particles are less prone to coalesce or agglomerate. Because this technique is continuously monitoring material at the outlet of the homogenizer, shifts in the particle size can be observed immediately. One exercise involved varying homogenizer conditions during production of a food emulsion, monitoring the effects, and then optimizing the homogenizer settings based on droplet size information gleaned from this exercise. This may be done at the research and development or product development stages or during the start-up of the production process. In this case, the Insitec is the key to monitoring the cause and effect of process changes with respect to droplet size. Figure 7.20 is a screenshot of the Insitec software, monitoring the droplet size of a food emulsion produced by an APV Homogenizer run at various pressures. The concentration of droplets inside the homogenizer is extremely high and cannot be measured at this concentration by laser diffraction without dilution. For this reason, an innovative Process Interface Solution (cascade diluter) is used to tackle the
Figure 7.20. A screenshot from the Insitec software showing the change in size for four different operating conditions.
problem of sample dilution in real time without any subsampling bias. This diluter comprises a series of coaxial venturis, each one providing a dilution ratio of 5:1. The higher the concentration, the greater the number of stages used. All stages are triclamped together and the device is self-cleaning. This simple but highly effective device (currently being patented) extracts sample from the process, dilutes it, and transports it to the Insitec flow cell in a matter of seconds, where it is measured in real time. The screenshot in Figure 7.20 shows the real-time data collected over the test period. It is apparent from the trend line that there were four different conditions run during this test. The Dv(50) average particle size (blue line) shows a noticeable shift in droplet size at each of the four pressures used during this process test. What is remarkable is that this test was performed and the data for the entire run, including all four conditions, was collected in only 20 minutes. If this test were to be performed in a traditional manner, the operators would have to take samples intermittently during the run and perform the analysis in the lab after the test was complete, an operation typically lasting several hours.
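The cascade-diluter arithmetic is simple enough to sketch: with a 5:1 ratio per venturi stage, n stages give an overall dilution factor of 5^n. In the sketch below, the process concentration and the target concentration for a multiple-scattering-free measurement are assumed illustrative figures, not published specifications of the device.

```python
# Back-of-the-envelope stage count for a cascade diluter with 5:1 per stage.
import math

stage_ratio = 5.0      # dilution per venturi stage (5:1, from the text)
c_process = 30.0       # vol %, assumed droplet concentration at the homogenizer outlet
c_target = 0.05        # vol %, assumed upper limit for a clean laser-diffraction measurement

n_stages = math.ceil(math.log(c_process / c_target, stage_ratio))
c_final = c_process / stage_ratio**n_stages
print(f"{n_stages} stages -> {c_final:.3f} vol %")
```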
Figure 7.21. (a) Three different shapes with the same CE diameter; (b) fines identified due to number-based measurement; (c) images of particles providing qualitative information.
This real-time data can be used as an aid in product development and as a process control and optimization tool in addition to ensuring quality control in production.

Image Analysis

Image analysis is a technique that is particularly suited to analyzing particle size and particle shape because it generates data by capturing direct images of each particle. This provides users with the ultimate sensitivity and resolution, as subtle differences in particle size and particle shape can be accurately characterized. Images of each individual particle are also recorded, providing a further visual verification of the data and also enabling detection of important phenomena such as agglomeration, breakage, and foreign particles. There are three main advantages of image analysis:

1. It measures the shape as well as the size of particles. The three shapes shown in Figure 7.21 all have the same circular equivalent (CE) diameter; therefore, size measurement by traditional ensemble techniques would not distinguish between them. However, the three shapes are
Figure 7.22. Example images of (a) tea stalks and (b) tea leaves.
clearly different, and since the morphology of particles may affect the way particles behave in a process or end product, measuring their shape as well as their size can be important. By using one of the many different morphological parameters, for example circularity, provided by image analysis techniques such as the Morphologi G2 or the FPIA-3000 systems, the different shapes would be identified.

2. Image analysis provides extra resolution since it is a number-based technique. This means that every particle is equally weighted rather than larger particles being more heavily weighted than smaller ones, as is the case with volume-based techniques. Therefore, the presence
Classification result (percentage area to total included): stalks 14.03%, leaves 85.97%, unclassified 4.62%.
Figure 7.23. Proportion of tea leaves and stalks in measured sample.
of fines that may have been masked by volume-based measurements may be identified using image analysis (Figure 7.21b).

3. With image analysis, the images of the particles measured can be seen and recorded, thus providing qualitative information to support the quantitative information of the measurement. For example, Figure 7.21c clearly shows that the first sample is made up of agglomerates of spheres whereas the second sample is made up of fibers.

An example where image analysis could be useful in the food industry is in the analysis of tea when ensuring that no more than a certain percentage of stalks are mixed in with tea leaves. The resulting images from measurement of a sample of tea leaves on the Morphologi G2 (Figure 7.22) can be classified into tea leaves and stalks and then the proportion of each type evaluated (Figure 7.23), as sketched in the example below.
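A minimal sketch of this style of per-particle size-and-shape classification is given below, using scikit-image on a synthetic binary image (one compact "leaf-like" object and one elongated "stalk-like" object). The circularity and aspect-ratio thresholds are arbitrary illustrative choices, not the classification rules of the Morphologi G2 or any other commercial system.

```python
# Per-particle CE diameter, circularity, and aspect ratio from a binary image,
# followed by a crude leaf/stalk classification.
import numpy as np
from skimage.measure import label, regionprops

img = np.zeros((200, 200), dtype=bool)
yy, xx = np.ogrid[:200, :200]
img[(yy - 60) ** 2 + (xx - 60) ** 2 <= 20 ** 2] = True   # compact particle ("leaf")
img[140:146, 30:170] = True                              # long, thin particle ("stalk")

for region in regionprops(label(img)):
    area = region.area
    ce_diameter = np.sqrt(4 * area / np.pi)              # circular-equivalent diameter (pixels)
    circularity = 4 * np.pi * area / max(region.perimeter, 1) ** 2
    aspect = region.major_axis_length / max(region.minor_axis_length, 1e-6)
    kind = "stalk-like" if aspect > 3 or circularity < 0.4 else "leaf-like"
    print(f"CE diameter {ce_diameter:5.1f} px, circularity {circularity:4.2f}, "
          f"aspect {aspect:5.1f} -> {kind}")
```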
References

Allen T. 1992. Particle Size Measurement, 4th Edition, Chapman and Hall.
Berne B and R Pecora. 1976. Dynamic Light Scattering, Wiley, New York.
Deis RC. 2003. Food Product Design, March Edition.
Finsy R. 1994. Adv. Colloid Interface Sci., 52, 79.
German patent 19725211.
Horne DS and CM Davidson. 1993. Colloids Surf. A, 77, 1.
International Organization for Standardization (ISO) on Photon Correlation Spectroscopy. ISO 13321. 1996.
ISO 13320. Particle Size Analysis—Laser Diffraction Methods. 1999.
Japan patent 2911877.
Jillavenkatesa A, SJ Dapkunas, and L-SH Lum. 2001. Particle Size Characterization, NIST.
Lawson CL and RJ Hanson. 1995. Solving Least Squares Problems, SIAM, Society for Industrial and Applied Mathematics.
McCrae CH and A LePoetre. 1996. In: Dairy Journal, 6, 247–256.
Muir DD, CH McCrae-Hornsma, and AWM Sweetsur. 1991. Milchwissenschaft, 46, 691–691.
Pine DJ, DA Weitz, PM Chaikin, and E Herbolzheimer. 1988. Phys. Rev. Lett., 60, 1134.
Provencher SW. 1979. Makromol. Chem., 180, 201.
Provencher SW. 1982a. Comput. Phys. Commun., 27, 213.
Provencher SW. 1982b. Comput. Phys. Commun., 27, 229.
Schatzel K. 1991. J. Mod. Opt., 38, 1849.
Smith AJ and DL Thomas. 2003. The Infusion of Coffee Solubles Into Water: Effect of Particle Size and Temperature, Loughborough University, Leicestershire, LE11 3TU, UK.
Tannebaum G. 2004. J. of Chem. Education, 81, 1131–1135.
Twomey S. 1997. Introduction to the Mathematics of Inversion of Remote Sensing and Indirect Measurements, Dover Publications.
U.S. patent 6016195.
Chapter 8

Online Image Analysis of Particulate Materials

Peter Schirg
Introduction

Very often particle characterization is understood as particle size analysis. In the beginning, either the microscope or sieve analysis for dry powders was used. Later the most common particle size measurement became laser diffraction. Only the latest mathematical models for laser diffraction can calculate an average particle shape, and laser diffraction can only measure percent-based distributions. Particle counters like the Coulter counter (electric sensing zone) or optical systems using light blockage have the advantage of measuring every single particle and thus have a number distribution as a primary result, but they cannot measure shape. Imaging techniques can give more information, such as size and number of particles as well as the shape and darkness/reflectivity of each particle. We will show that new instruments can measure under much more difficult circumstances than we find under a microscope, such as online at high concentrations and directly in the process. We will discuss the possibilities and limits of this technology in three examples:
• in-process measurement of size and number of dark particles in a white powder
• measurement of size and number of big particles in concentrated submicron dispersions
• detection of fibrous impurities in particle suspensions
Measurement Principles

We will describe XPT Particle Analysis instruments in two setups: XPT-P is a probe-based system and XPT-C is a system with a flow-through cell. Both systems use CCD or complementary metal–oxide–semiconductor (CMOS) cameras, and the images taken are directly analyzed by a dedicated computer program. Resolution of the images is 1024 × 768 pixels, with a pixel representing 0.3 micrometer (µm) at the highest magnification. XPT-P has a camera, optics, and lighting via fiber optics integrated in a probe, which has either a 60-millimeter (mm) diameter for large fields of view up to 15 mm or a small probe tip of 20-mm diameter for a 1-mm field of view. The principle is shown in Figure 8.1. Short exposure times of strobe light down to 1 millisecond (ms) allow sharp imaging of fast-moving particles. Images are taken with reflected light, which has two advantages: first, there is only a flat window at the probe tip and no probe gap, and second, there is no particle concentration limit. These probes can be used in dry and wet applications. XPT-C systems have a camera, optics, and accessory components like a peristaltic pump integrated in one housing, onto which a flow-through cell is attached. Lighting is attached onto the flow-through cell and can be stroboscopic for very fast flows or constant light using the camera shutter (which already allows exposure times down to 10 ms, which is short enough for many applications). The principle is shown in Figure 8.2. The standard flow-through cells have a channel width of 3.5 or 10 mm and a channel depth of 0.2 to 5 mm depending on the application. The guidelines for choosing the right cell follow:
• small channel depth for high particle concentrations, but the largest particle in the process or sample must not block the cell
• large channel width and depth for large particles to be measured, but the flow speed in the cell (and in all pipes) must be high enough to avoid sedimentation

The software transforms the raw image into a filtered image (using filters like subtraction of background to eliminate nonmoving particles, changing contrast and brightness, inverting from black to white, and removing particles of no interest depending on size and shape) and then detects the remaining particles depending on the gray level threshold. The detected particles can be shown in different colors to distinguish
Figure 8.1. Schematic of an XPT-P system.

Figure 8.2. Schematic of an XPT-C system.
Figure 8.3. Original image (top), filtered image (middle), and analyzed particles (bottom).
whether a particle has been detected as one or several separate particles, which is useful during method testing. An example in Figure 8.3 shows white ceramic beads contaminated with black particles. Filtering changes the contrast so that the darker areas around the white particles disappear, and then inverts black to white. As a result, only the originally black particles remain and now appear white. The threshold to detect particles is set, for example, to 200, so pixels with a grey level from 200 to 255 are accepted as particles and shown in the third image in color. The results are displayed as number or percent size distributions, table results, or trends over time. For size, a large variety of values can be chosen, such as equivalent spherical diameter, max Feret diameter (longest dimension), or perimeter. Shape information is also measured
Figure 8.4. Scatter plot of shape versus particle size.
and can be displayed in different ways. One very interesting display is the scatter graph shown as an example in Figure 8.4. On the y-axis, a number for shape is plotted (like the ratio of equivalent rectangular sides), and on the x-axis, a number for size such as the max Feret diameter. Every point in the graph represents one particle. It is now easy to see clusters (for example, in a mixture of small spheres with long needles, the cluster for the spheres is in the lower left corner and the one for the needles in the upper right corner). Another option of such software is to store every image with at least one particle fulfilling criteria, such as a certain size and/or shape. This is useful in contamination detection applications to view the image in case a large contaminant was detected.

In-process Measurement of Size and Number of Dark Particles in a White Powder

Dark impurities in more or less white powders produced in the food industry are undesirable not only because of the look of the product but also because they may be "burned" product, which is an indication of process problems and of subsequent changes in other product qualities such as taste. The case described here has dust inside the process (Ex-zone 0). A starch-like product is falling through a pipe of 200-mm diameter.
Figure 8.5. Image from sample 5.
The first question was if and how the dark particles can be detected, preferably without extracting product out of the process. This would entail examining it with a probe.

Offline Test Results

As a first test, five samples with different amounts of contamination were measured offline with an XPT-P probe. It was important to have a closed layer of product in the image zone without air gaps, because air gaps between the white particles appear dark in the image, similar to dark particles. Results from 30 images on each sample follow:
• Sample 5: 300 dark particles, as shown in Figure 8.5
• Sample 4: 15 dark particles
• Sample 3: 7 dark particles
• Sample 2: 2 dark particles
• Sample 1: no dark particles
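A hedged sketch of the kind of grey-level thresholding and blob counting described earlier in this chapter is shown below: the frame is inverted so that dark impurities become bright, the grey-level threshold of 200 from the text is applied, and connected blobs of at least 10 pixels are counted. The frame here is a synthetic stand-in for an XPT-P image, not real data, and the filtering is reduced to its simplest form (no background subtraction).

```python
# Count dark specks in a bright powder image by inversion, thresholding, and
# connected-component labelling.
import numpy as np
from scipy import ndimage

frame = np.full((768, 1024), 230, dtype=np.uint8)      # bright, closed powder layer
frame[100:104, 200:204] = 30                            # a small dark speck
frame[500:510, 700:707] = 45                            # a larger dark speck

inverted = 255 - frame                                  # dark specks become bright
binary = inverted >= 200                                # grey-level threshold from the text
labels, n_blobs = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, range(1, n_blobs + 1))
n_specks = int(np.sum(sizes >= 10))                     # ignore blobs below 10 pixels

area_fraction_ppm = 1e6 * binary.sum() / binary.size
print(f"{n_specks} dark specks; speck area = {area_fraction_ppm:.0f} ppm of the frame")
```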
To give an estimate about the detection limit for dark particles, we can calculate that one image has 1024 × 768 pixels. Ten pixels will be a very safe detection limit for a particle. This is 1.3 × 10⁻³% of the image area of one image, or about 13 parts per million (ppm).
The detection limit can be improved as desired by increasing the number of images taken. For example, with 130 images, the limit will be at 0.1 ppm. The measurement time for this example is 15 seconds.

In-process Installation

The XPT-P probe is attached directly onto a sight glass in the product pipe. Inside, a collecting cup fills with powder falling down. A timer counts down a few seconds to be sure that the cup is full, then it gives a signal to the XPT instrument to take some images. It then stops the measurement and opens a magnetic valve to blow the powder out of the cup. This runs continuously. The probe, computer, and electronics are all IP65 or purged with air to prevent dust from entering the instruments, because the operation environment can be very dusty.

Measuring Size and Number of Big Particles in Concentrated Submicron Dispersions

There are many such submicron dispersions and emulsions in food applications where the primary particles are between 200 nanometers (nm) and 1 micron at high concentrations of 10 to 50%. Large particles are contaminants either from the same substance formed in the process or of different matter. The goal is to detect, for example, 10 large particles of 30 to 100 microns in one milliliter (ml) of dispersion, which means in about 10¹² small particles. With laser diffraction, dilution would have to be so extreme that it would become extremely improbable that a large particle could be detected in the measured volume. XPT-C instruments with flow-through cells with small channel depth (but bigger than the largest particle) and high light intensity are used on the highly concentrated dispersion. The small primary particles cannot be seen and give a gray or white background with dark contaminants in it. Figure 8.6 shows an example of such a β-carotene dispersion of 300 nm primary particles at 10% concentration by volume. The grid in the images is 500 microns.

Trend Over Time

As a typical online application, a display of results like mean particle size and number over time is useful. See Figure 8.7. The same numbers
Figure 8.6. Raw image and detected particles in β-carotene dispersion.
can be provided as an analog output signal or can trigger an alarm when a predefined level is exceeded.
Detection of Fibrous Impurities in Particle Suspensions

Raw material from different sources or batches can contain fibers (for example, from cloth or filter bags). With conventional methods, the only possibility to detect them is time-consuming work under the microscope. With XPT-C, the suspension can be analyzed automatically by using an imaging filter that only accepts particles of a high length-to-thickness
Figure 8.7. Online trend display.
Figure 8.8. Raw image with fiber in product suspension and analyzed image (only fibers measured).
ratio. Figure 8.8 shows a comparison of one raw image with fibers and the corresponding analyzed image, where all nonfibrous product particles have been automatically removed. The grid in the images is again 500 microns. Figures 8.9a and 8.9b show a measurement result from 1,000 images with and without the fiber filter on the same sample. In both cases the number of particles per size class is displayed, and the size value chosen was the max Feret diameter. The image volume was 0.8 × 3.74 × 2.80 mm. This corresponds to an analyzed volume of 8.4 ml in 1,000 images, which takes less than 2 minutes.
Figure 8.9a. Size and count of fibers only.
Figure 8.9b. Size and count of all particles.
The total number of fibers was 44, compared to the total number of all particles, which was 19,968. The two diagrams have different scales on the y-axis; otherwise, the spectrum of the fibers would be invisible. The product granules are around 200 microns long, and the fine material (broken product granules) is around 50 microns. This would not be seen clearly in a volume distribution.

Conclusion

Particle analysis by online imaging allows selective particle counting as well as size and shape analysis. This was shown in three examples:
black speck detection, large contaminants in concentrated submicron dispersion, and fiber detection in product granules. Such an instrument has the following features that are especially useful in online applications:

• various measurement modes: single measurement, series of measurements, and trend over time
• start and stop of measurement by digital input to the instrument
• analysis results can be sent to hardware outputs: as an analog output, for example, for a mean particle size or shape, and as a digital output (low or high depending on events like an alarm on each particle above a certain size)
• automatic storage of images (for example, all images with a fiber), which is a very helpful feature for later quickly inspecting the images containing a rare special particle without saving and reviewing thousands of images
• materials in contact with the medium have certificates for food and pharma (3.1b, FDA, USP class VI)
• wide temperature and pressure range
• instruments are IP65, including the computer
Chapter 9

Recent Advances in Nondestructive Testing with Nuclear Magnetic Resonance

Michael J. McCarthy and Young Jin Choi
Introduction

Nuclear magnetic resonance (NMR) spectroscopy and magnetic resonance imaging (MRI) provide a wealth of information on the quality, structure, component functionality, and ingredient interactions in food systems (McCarthy 1994). This information has been used to measure solid-liquid ratios in fats and oils, emulsion particle size distribution, emulsion stability, molecular mobility, mixing efficiency, rheological properties, and basic structure. MRI and NMR have many advantageous properties for use in process analysis, process control, and high-throughput systems. The measurement is noncontacting and noninvasive. The magnetic resonance signal is directly proportional to the number of nuclei in the sampling volume, and the response is linear from the detection limits (∼100 parts per thousand [ppt]) to 100% (Skloss et al. 1994). The signal can be made to be very specific by obtaining information from only one type of nucleus, for example ¹³C, ³¹P, ²³Na, ¹⁹F, or ¹H, or a combination of nuclei. The specific information available includes concentration, self-diffusion coefficients, droplet size distribution, shear viscosity, or copolymer blend ratios. These properties enable NMR to be applied to a wide range of materials, from polymers to consumer products to foods, for a wide range of measurements including reaction rates, molecular weight distribution, and phase distribution. Despite the advantages of NMR, application in process analysis and control has been very limited in comparison to other analytical
Figure 9.1. Basic components of a magnetic resonance imaging spectrometer.
methodologies. This low application rate is primarily a result of difficulties in implementing NMR in an in-line or online mode. Barriers to more widespread use of NMR and MRI include cost, complexity of equipment, equipment size, and difficulty incorporating the equipment into a high-throughput system. The major components of an NMR/MRI system are a computer for control/data analysis, a spectrometer, a radio frequency coil, a magnet, and a shim/gradient system. A schematic of the major components is shown in Figure 9.1. Using NMR or MRI for process measurements in actual environments would be facilitated by advances in information processing and engineering of the spectrometer. This chapter reviews current applications to process analysis and recent advances in information processing and engineering of the spectrometer that should enable a greater number of NMR and MRI applications for process control.
Theory and Practical Implications

Magnetic resonance is based on the interaction between an external magnetic field and atomic particles (electrons and nuclei) that possess spin, more precisely spin angular momentum. NMR focuses only on the nuclear interactions. Atomic nuclei have a spin property that is quantized to discrete values. Nuclear spin or, more precisely, nuclear spin angular momentum depends on the precise atomic composition. Only nuclei that have a nonzero spin will have a magnetic moment that interacts with an external magnetic field. Almost every element in the periodic table has an isotope with a nonzero nuclear spin. However, protons, 1H (spin-1/2 nuclei), are of central interest because they are the most abundant isotope of hydrogen and give a strong signal. An accurate mathematical description of a nucleus with spin and its interactions requires quantum mechanical and/or statistical mechanical principles. However, when performing NMR, exceedingly large numbers of nuclei act largely independently, so that at the macroscopic level the collection of particles appears continuous, the so-called "ensemble" behavior. All states of the ensemble can then be characterized by a simple vector quantity, referred to as the nuclear magnetization. Thus, we can discuss the NMR phenomenon using classical mechanics (Callaghan 1991). In a macroscopic sample, the net magnetization, M = (Mx, My, Mz), is the sum of all the nuclear magnetic moments. In such a model, the macroscopic angular momentum vector is simply M/γ, where M is the magnetization and γ is the gyromagnetic ratio. The equation of motion for the classical magnetization vector is then

\frac{d\mathbf{M}}{dt} = \gamma\,\mathbf{M} \times \mathbf{B}   (9.1)

The solution of Equation 9.1 when B is a static magnetic field of amplitude B0 corresponds to a precession of the magnetization about the field at the Larmor frequency, ω0 = γB0, which is one of the fundamental relationships in NMR. The resonance phenomenon occurs after the application of a transverse magnetic field oscillating at the Larmor frequency ω0. This perturbing radio frequency (RF) pulse, applied perpendicular to B0, is given by

\mathbf{B}_1(t) = B_1\left[\cos(\omega_0 t)\,\mathbf{i} - \sin(\omega_0 t)\,\mathbf{j}\right]   (9.2)
where i, j, and k are unit vectors along the x, y, and z axes, respectively. The applied RF pulse then yields

\frac{dM_x}{dt} = \gamma\left[M_y B_0 + M_z B_1 \sin(\omega_0 t)\right]
\frac{dM_y}{dt} = \gamma\left[M_z B_1 \cos(\omega_0 t) - M_x B_0\right]
\frac{dM_z}{dt} = \gamma\left[-M_x B_1 \sin(\omega_0 t) - M_y B_1 \cos(\omega_0 t)\right]   (9.3)
Under the initial condition M(t = 0) = M0 k, the solution of Equation 9.3 is

M_x = M_0 \sin(\omega_1 t)\sin(\omega_0 t)
M_y = M_0 \sin(\omega_1 t)\cos(\omega_0 t)
M_z = M_0 \cos(\omega_1 t)   (9.4)
where ω1 = γB1. Equation 9.4 implies that, on application of a rotating magnetic field of frequency ω0, the magnetization simultaneously precesses about the static field B0 at ω0 and about the RF field B1 at ω1. In the laboratory frame, the transverse magnetization precessing at the Larmor frequency induces an oscillatory electromotive force (EMF) at ω0 in the NMR probe. This alternating current, or signal, is heterodyned with a reference frequency ωr. This signal detection method is inherently phase sensitive, so separate quadrature-phase output signals are obtained. The signal may oscillate at an offset frequency δω = ω0 − ωr. By applying a 90° RF pulse (ω1tp = π/2, where tp is the pulse duration), the laboratory magnetization at time t is given by

\mathbf{M}(t) = M_0\left[\cos(\omega_0 t)\,\mathbf{i} + \sin(\omega_0 t)\,\mathbf{j}\right]   (9.5)

which can be expressed in complex notation as

M(t) = M_0 \exp(i\omega_0 t)   (9.6)

and the resulting complex NMR signal at the offset δω is out of phase by φ, described as

S(t) = S_0 \exp(i\phi + i\,\delta\omega\,t)   (9.7)
where S0 is an arbitrary initial signal intensity that depends upon the electronics of the detection system and is proportional to the total transverse magnetization of the sample, M0. In the time domain, this primary NMR signal is measured as an oscillating and decaying EMF induced by the magnetization in free precession. It is therefore known as the free induction decay (FID). The acquisition time for the FID is determined by the range of frequencies recorded (the sweep width) and the number of points recorded; the acquisition time equals the number of points divided by twice the sweep width (for example, 8,192 points recorded over a 4 kHz sweep width require about 1 second). In practice, this yields acquisition times on the order of seconds and hence limits the use of FID
(spectral data) for in-line analysis (generally an online or at-line mode is needed to achieve useful results). Relaxation occurs when the spins dissipate the absorbed RF energy and return to their equilibrium state. After the application of the RF pulse, the net magnetization is once again influenced by the main magnetic field B0 and returns toward a state of equilibrium. As relaxation occurs, the magnetization recovers along the longitudinal direction. Longitudinal and transverse relaxation processes occur simultaneously. T1 relaxation, alternatively referred to as longitudinal relaxation or spin-lattice relaxation, is the return of longitudinal magnetization to thermal equilibrium (that is, from a higher energy state to a lower energy state) and is associated with the exchange of energy between the spin system and the surrounding thermal reservoir or lattice. This process can be described as

\frac{dM_z}{dt} = \frac{-(M_z - M_0)}{T_1}   (9.8)

and the magnetization at time t is

M_z(t) = M_z(0)\exp\left(-\frac{t}{T_1}\right) + M_0\left[1 - \exp\left(-\frac{t}{T_1}\right)\right]   (9.9)

where T1 is known as the spin-lattice or longitudinal relaxation time, the time required for Mz to recover to 63% of its equilibrium value following an excitation pulse. At room temperature, T1 is a few seconds for protons in water and shorter for most other liquids. A typical T1 relaxation curve is shown in Figure 9.2.

Figure 9.2. Typical T1 relaxation curve (Mz recovers to 0.63 M0 at τ = T1). The arrows are the size of the z component of the net magnetization.

The implications of the time required for measurements in-line are extremely important since T1 values for foods are in the range of 0.1 to 1.5 seconds, and hence the samples will not in general be at equilibrium for measurement. The spin-lattice relaxation time is important both at equilibrium (signal strength is a strong function of T1) and because, if different regions in a sample (for example, oil and water) have different T1 values, signal strength differences based on these values will be observed. Transverse relaxation, also called T2 relaxation or spin-spin relaxation, is the return of transverse magnetization to equilibrium. T2 relaxation involves intermolecular and intramolecular interactions such as vibrations or rotations among spins. As the excited spins release their energy among themselves, the coherence of transverse magnetization
disappears gradually. T2 relaxation is often well described by a first-order relaxation process:

\frac{dM_{x,y}}{dt} = \frac{-M_{x,y}}{T_2}   (9.10)

with solution

M_{x,y}(t) = M_{x,y}(0)\exp\left(-\frac{t}{T_2}\right)   (9.11)

where T2 is the spin-spin or transverse relaxation time, the time required for Mx,y to decay to 37% of its initial value. Figure 9.3 shows a typical T2 relaxation curve.

Figure 9.3. Typical T2 relaxation curve (Mxy decays as exp(−τ/T2), falling to 0.37 of its maximum at τ = T2). The arrows are the size of the transverse component of the net magnetization.

T2 ranges from microseconds for protons in solid foods (for example, ice, solid fat) to greater than 1 second for protons in liquid-like samples (for example, fruit). T2 values are critical to data acquisition since they effectively limit the time available for data acquisition. If the sample is moving during measurement, the effect is to increase the effective relaxation time (McCarthy and Bobroff 2000). The Bloch equations are a combination of Equations 9.3, 9.8, and 9.10 describing the behavior of a magnetization vector in the laboratory
frame and are given by

\frac{dM_x}{dt} = \gamma\left[M_y B_0 + M_z B_1 \sin(\omega t)\right] - \frac{M_x}{T_2}
\frac{dM_y}{dt} = \gamma\left[M_z B_1 \cos(\omega t) - M_x B_0\right] - \frac{M_y}{T_2}
\frac{dM_z}{dt} = \gamma\left[-M_x B_1 \sin(\omega t) - M_y B_1 \cos(\omega t)\right] - \frac{M_z - M_0}{T_1}   (9.12)

Stationary MRI

In conventional NMR spectroscopy, the spectrum of nuclear precession frequencies provides information on the chemical and electronic environment of the spins. The polarizing magnetic field, B0, is uniform across the sample. Static field inhomogeneities can be removed by careful adjustment of the currents in the magnet shim coils. The resonance frequency is determined by the exact magnetic field strength that the spins experience. MRI techniques use this field dependence to localize proton frequencies to different spatial positions. This localization is accomplished by applying linear magnetic field gradients to disturb the main magnetic field homogeneity so that the local Larmor precession
frequency becomes spatially dependent. Although these magnetic field gradients perturb the static magnetic field, B0, they are much smaller, so the Larmor frequency is affected only by components parallel to B0. Hence, the local Larmor frequency can be defined as

\omega(\mathbf{r}) = \gamma B_0 + \gamma\,\mathbf{G}\cdot\mathbf{r}   (9.13)

where G is the field gradient vector and r is the position vector. For a small element of volume dV at position r with local spin density ρ(r) in the presence of the field gradient, the NMR signal is given by

dS(\mathbf{G}, t) = \rho(\mathbf{r})\,dV \exp\left[i\gamma(B_0 + \mathbf{G}\cdot\mathbf{r})t\right]   (9.14)

When the reference frequency is chosen to be γB0, the so-called on-resonance condition, the effect of the main magnetic field can be neglected and the signal oscillates at γG·r. The integrated signal over all volume elements is given by

S(t) = \int \rho(\mathbf{r}) \exp(i\gamma\,\mathbf{G}\cdot\mathbf{r}\,t)\,d\mathbf{r}   (9.15)

where dr represents a volume integration. For simplification, and to further display that Equation 9.15 is a sum of oscillating terms that has the form of a Fourier transformation, Mansfield (Callaghan 1991) introduced the concept of a reciprocal-space vector k given by

\mathbf{k} = \frac{\gamma\,\mathbf{G}\,t}{2\pi}   (9.16)

The k-vector has a magnitude expressed in units of reciprocal space, m⁻¹, and it is clear that k-space may be traversed by moving either in time or in gradient magnitude. The factor of 2π appears since γ is expressed in units of radians per second per Gauss and G in units of Gauss per meter. The direction in which k-space is traversed is determined by the direction of the gradient G. Using the k-space construct, and the concept of the Fourier transform and its inverse,

S(\mathbf{k}) = \int \rho(\mathbf{r}) \exp(i2\pi\,\mathbf{k}\cdot\mathbf{r})\,d\mathbf{r}   (9.17)

\rho(\mathbf{r}) = \int S(\mathbf{k}) \exp(-i2\pi\,\mathbf{k}\cdot\mathbf{r})\,d\mathbf{k}   (9.18)
Equations 9.17 and 9.18 state that the signal S(k) and the nuclear density ρ(r) are mutually conjugate. This is the fundamental relationship in NMR imaging. In the actual NMR experiment, data are acquired over a limited range in k and at discrete intervals. Hence, Equations 9.17 and 9.18 should be interpreted as discrete Fourier transforms, for which there exists a simple relation between the range and resolution in both spaces. For two-dimensional imaging, k-space is acquired in two dimensions. When the echo is sampled in the presence of a gradient, points are obtained along a single line in k-space. This line is oriented along one of the Cartesian axes, and the associated gradient is known as the read gradient. The Gy gradient is named the phase gradient since it imparts a phase modulation to the signal, dependent on the position of volume elements along the y axis. The third dimension is called the slice selection and is a plane in the sample that is excited using a frequency-selective RF pulse in the presence of a static magnetic field gradient, where the selective pulse is composed of an on-resonance RF field modulated by a lower frequency envelope of finite duration. The resulting signal will represent only the magnetization from the selected plane or slice. This approach avoids having to acquire the entire three-dimensional time-domain data set. A spin echo pulse sequence is one of the most commonly used pulse sequences in MRI (Figure 9.4). The Gaussian-shaped RF pulse, along with the applied Gz gradient, confines the region of interest to a slice in the x–y plane. During the interval τ, the pre-read gradient pulse, Gx, allows the acquisition of an echo by encoding a constant amount of phase, and simultaneously, a variable amount of phase from the applied Gy pulse is encoded along the y-direction. The amount of phase encoded is dependent on the magnitude of Gy. To acquire the signal to generate the x–y image of the slice, a series of experiments (or excitations) is performed where Gy is incremented in steps δGy from (Ny/2)δGy to −(Ny/2)δGy. During the signal acquisition, the phase along x is constantly evolving and the signal is composed of the sum over the frequency contributions from all the sample volume elements. For the pulse sequence of Figure 9.4, kx-space is acquired by sampling the echo at specific time intervals, designated as t, and ky-space is traversed by changing the Gy gradient amplitude in steps denoted by δGy. Two-dimensional Fourier transformation of the k-space data set yields an image of the magnetization density in the sample. Then the NMR signal
Figure 9.4. Spin echo pulse sequence (RF: 90° selective pulse and 180° pulse; slice selection on Gz; phase encode on Gy, stepped in δGy; frequency encode on Gx; signal: FID and echo) and a sample image of internal browning in an apple.
is given by

S(k_x, k_y) = \int_{-b/2}^{b/2}\left[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \rho(x, y, z)\,\exp\!\left[i2\pi(k_x x + k_y y)\right]dx\,dy\right]dz   (9.19)

where b is the slice thickness. For convenience, the outer integral is ignored, since it represents the process of averaging across the slice, giving

S(k_x, k_y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \rho(x, y)\,\exp\!\left[i2\pi(k_x x + k_y y)\right]dx\,dy   (9.20)
Reconstruction of ρ(x, y) from S(kx, ky) simply requires an inverse Fourier transform

\rho(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} S(k_x, k_y)\,\exp\!\left[-i2\pi(k_x x + k_y y)\right]dk_x\,dk_y   (9.21)

where k_x = \frac{\gamma G_x t}{2\pi} and k_y = \frac{\gamma G_y \tau}{2\pi}.

Taking the viewpoint of the imaging experiment that a certain region of k-space should be sampled with some specific resolution in order to obtain an image with the appropriate range and resolution, it is obvious that any imaging scheme will work as long as k-space is adequately sampled. For example, ky-space could be acquired by starting with negative Gy amplitude and proceeding to positive Gy, or vice versa. Secondly, an x gradient of negative amplitude could be employed during τ, omitting the 180° RF pulse. Furthermore, the sampling of k-space need not be confined to a Cartesian grid (Callaghan and Eccles 1987, Callaghan 1991); whatever the sampling strategy, a means of transforming the acquired data from the time domain to the spatial domain is required. All of the sampling approaches require a specific amount of time to complete the experiment and acquire the data. In the case of the basic spin-echo sequence, the time for the entire experiment is given by the time to acquire a single line of k-space plus the delay between lines, multiplied by the total number of lines. Since images are usually acquired as a square matrix with a dimension that is a power of two, a 128 × 128 image can require anywhere from 0.5 seconds to several minutes. More rapid data acquisition schemes are possible and require from 0.1 to several seconds to acquire the data. Sample motion during the image acquisition will generally result in blurring and other artifacts. Motion compensation may be successful if the sample is moving along the direction of a single gradient. Optimally, the sample would be stationary for the duration of the image acquisition, limiting the range of applications for this type of implementation.
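The Fourier relationship in Equations 9.17 through 9.21 can be illustrated numerically. The sketch below is a hypothetical illustration in Python using NumPy (not part of the original chapter): it builds a k-space data set from a simple invented phantom and reconstructs the image with a discrete two-dimensional inverse Fourier transform, in the spirit of Equation 9.21. The phantom, matrix size, and variable names are assumptions made only for this example.

```python
import numpy as np

# Hypothetical phantom: a 128 x 128 proton-density map with one bright block
# (standing in for a slice through a food sample).
density = np.zeros((128, 128))
density[48:80, 40:90] = 1.0

# Forward model (discrete form of Eq. 9.20): k-space is the 2D Fourier
# transform of the spin density across the slice.
kspace = np.fft.fftshift(np.fft.fft2(density))

# Reconstruction (discrete form of Eq. 9.21): the inverse 2D Fourier transform
# of the sampled k-space recovers the image of magnetization density.
reconstructed = np.fft.ifft2(np.fft.ifftshift(kspace)).real

# The reconstruction matches the phantom to numerical precision.
print(np.allclose(reconstructed, density, atol=1e-10))
```

In an actual spin-echo acquisition, the k-space matrix would instead be filled line by line by stepping the phase-encode gradient, as described above.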
Dynamic MRI

Dynamic imaging for detecting motion is more complicated than static imaging. In dynamic imaging, the phase and frequency of the volume
elements within the sample are sensitive to the motion of these volume elements within the magnetic gradient fields. Various methods have been developed to measure flow. Typical flow imaging methods can be classified as inflow/outflow, time-of-flight, or phase-encoding methods. All of these methods require an evolution of the spin system over a well-defined time scale. The inflow/outflow method and the time-of-flight technique are similar in that they do not depend on phase measurements, but they have subtle differences. The inflow/outflow method relies on a rapid repetition of RF and gradient pulses so that the saturation recovery or free-precession signal amplitude depends on the spin motion (Callaghan 1991). In the inflow/outflow method, the initial RF pulse selectively saturates the spins in a thin slice so that subsequent excitation of the same slice elicits an NMR signal from only those nuclear spins that have flowed into the slice since saturation. This method monitors the changes in signal intensity resulting from motion of the spins into or out of the selected region (Caprihan and Fukushima 1990, Pope and Yao 1993). Saturation effects arise when excited nuclei do not have time to recover back to equilibrium before the next RF excitation pulse. For these nuclei, the transverse magnetization component after the RF excitation pulse will be smaller than for nuclei that were fully relaxed, causing a decrease in signal intensity. Since the signal intensity depends on spin density, relaxation times, and flow, this method is suitable only for quantitative flow measurement of homogeneous samples (Pope and Yao 1993). In the time-of-flight method, a particular group of spins is selectively excited or magnetically tagged, and its Larmor frequency is then examined after the tagged elements move in a static magnetic field gradient (Caprihan and Fukushima 1990, Pope and Yao 1993). Here, different velocity components are distinguished by frequency encoding their displacements during the image sequence. The moving element will have a changed frequency-domain spectrum since spatial position is encoded as frequency by the magnetic field gradient. The imaging slice is different from the excited slice, which distinguishes this method from the inflow/outflow method described above. The excited slice is chosen to be normal to the principal flow direction, whereas the imaging region can be parallel or perpendicular to the flow. In the former case, the image reveals the distributions of displacements (that is, velocities) along the line of intersection of slice
selection and image planes. The latter provides the spatial distribution of spins with a particular velocity. Obtaining the full displacement profile requires images at different positions to obtain the full distribution of velocities. The phase-encoding method provides velocity information from the flow-dependent phase shift of the magnetization. Here, the position, velocity, or higher order of fluid motion is phase encoded directly. Equation 9.13 expresses the position dependence of the spins in the presence of magnetic field gradients. It can be rewritten as

\omega[\mathbf{r}(t)] = \omega_0 + \gamma\,\mathbf{G}\cdot\mathbf{r}(t)   (9.22)

where r(t) = [x(t), y(t), z(t)]. The moving spins experience a frequency difference; hence, the phase of the nuclei changes:

\frac{d\phi}{dt} = \omega[\mathbf{r}(t)] - \omega_0 = \gamma\,\mathbf{G}\cdot\mathbf{r}(t)   (9.23)

and integration yields

\phi(t) = \gamma \int_0^{t} \mathbf{G}(t)\cdot\mathbf{r}(t)\,dt.   (9.24)
The gradient field is written as time dependent, G(t), since the phase accumulation of the nuclei results from both the duration of the applied pulse and the motion of the nuclei. For simplicity, assume one-dimensional flow along the z-direction, so that r(t) becomes z(t), and expand the nuclei's position along the z-direction in terms of time derivatives,

z(t) = z_0 + \left.\frac{\partial z}{\partial t}\right|_{t=0} t + \frac{1}{2}\left.\frac{\partial^2 z}{\partial t^2}\right|_{t=0} t^2 + \cdots + \frac{1}{n!}\left.\frac{\partial^n z}{\partial t^n}\right|_{t=0} t^n   (9.25)

Substituting Equation 9.25 into Equation 9.24 yields

\phi(t) = \gamma z_0 \int G(t)\,dt + \gamma v_0 \int G(t)\,t\,dt + \cdots + \gamma\,\frac{1}{n!}\left.\frac{\partial^n z}{\partial t^n}\right|_{t=0} \int G(t)\,t^n\,dt   (9.26)
Figure 9.5. An example of a one-dimensional phase-encode NMR pulse sequence for flow measurement (RF: 90° selective pulse and 180° pulse; slice selection and velocity phase encode on Gz; position frequency encode on Gx).
which includes phase contributions from the nuclei's position, velocity, and higher orders of motion. The first integral of the above equation is called the zeroth moment, which is position dependent; the second integral is the first moment, which is velocity dependent; and the third integral is the second moment, which is acceleration dependent (Pope and Yao 1993). Importantly, Equation 9.26 shows that the longer the gradient is applied, the more the higher orders of motion influence the phase measurement of the nuclei. In practice, however, the gradient configuration in the NMR pulse sequence is manipulated so that all moments except the one for the motion being investigated either integrate to zero or have a negligible effect on the phase measurement. Figure 9.5 shows an example of a one-dimensional phase-encode NMR pulse sequence for simple flow measurement. It selects a slice along the z-direction, allowing the acquisition of a two-dimensional flow image in a two-dimensional experiment. The acquired data are the Fourier transform of a map of the magnetization density in the selected z–x plane. Thus, both the fluid velocity along the z-direction, vz, and the position along the x-axis become the image coordinates. The phase-encoding gradient is stepped through different gradient amplitudes to sample the oscillatory magnetization. Here, the Gz gradient pulse is configured so that only the first moment of Equation 9.26 is left. The phase shift is independent of position, and the velocity information is independent of spin
Figure 9.6. An example of a dynamic NMR pulse sequence (PGSE pulse sequence): RF 90° selective pulse and 180° pulse; displacement phase encode on Gz using two gradient pulses of equal duration (δ1 = δ2 = δ) separated by the interval Δ; position frequency encode on Gx.
position. The NMR signal from such a pulse sequence can be represented by

S(k_x, p_z) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \rho(v_z, x)\,\exp(i2\pi k_x x)\,\exp(i2\pi p_z v_z)\,dv_z\,dx   (9.27)
Performing a two-dimensional inverse Fourier transform provides ρ(vz, x), the joint density function of spin density along the x-axis and the velocity component along the z-axis. The conjugate variable pz is given by

p_z = \frac{\gamma}{2\pi}\int_0^{\tau} G_z(t)\,t\,dt   (9.28)
This approach provides a useful description for samples comprising simple velocity fields (Callaghan 1991) and is very powerful for the measurement of fluid rheological properties (Powell et al. 1994). A more general approach is allowed through an alternate combination of k-space and q-space, known as dynamic NMR microscopy (Callaghan and Eccles 1987, Callaghan and Xia 1991, Callaghan 1991). Figure 9.6 shows a dynamic NMR microscopy pulse sequence, which is a displacement-sensitive pulsed gradient spin echo (PGSE). In
contrast to the pulse sequence of Figure 9.5, it encodes the displacement of the fluid along z using two direct phase encodings. Equation 9.26 can be applied to each gradient pulse individually. The first gradient pulse, of duration τ1, induces a phase given by

\phi_1 = \gamma z_1 \int_0^{\tau_1} G_z(t)\,dt = \gamma z_1 G_z \tau_1   (9.29)

which is inverted by the subsequent 180° RF pulse and added to the second position-dependent phase,

\phi_2 = \gamma z_2 \int_0^{\tau_2} G_z(t)\,dt = \gamma z_2 G_z \tau_2   (9.30)
This results in a net phase of φ2 − φ1, which represents the distance traveled over the time interval T, referred to as the flow time. In practice, the duration of each phase measurement is minimized and the flow time T is maximized to reduce phase contributions from higher orders of motion. However, this approach cannot discern between the sources of motion. Hence, it is only under experimental conditions of steady flow that one can assume that the displacement rate is equal to the fluid velocity. The acquisition of qz- and kx-space for the pulse sequence of Figure 9.6 is similar to that for Figure 9.5. Here, the variable amount of net phase encoded along the z-direction, or qz-space, results from changing the amplitude of Gz in steps denoted by δGz. For acquiring kx-space, echo sampling is divided into specific time intervals, designated as t. The NMR signal obtained from the pulse sequence of Figure 9.6 is given by

S(k_x, q_z) = \int_{-\infty}^{\infty} \rho(x)\,\exp(i2\pi k_x x)\left[\int_{-\infty}^{\infty} P(z, x; T)\,\exp(i2\pi q_z z)\,dz\right]dx   (9.31)

where P(z, x; T) is the conditional probability density that a fluid element at position x will displace by z within the pulse sequence time interval T of Figure 9.6. A two-dimensional Fourier transform with respect to kx and qz produces a map of P(z, x; T)ρ(x) for each transverse position x. If the flow is steady-state, division of the fluid displacement by the flow time yields the velocity. The resulting image may
be referred to as either the position-displacement conditional probability density or the velocity profile (Caprihan and Fukushima 1990). The time needed to acquire dynamic MRI data is very similar to that for spin-echo-based images as described previously. The important difference is that the measurement of motion is usually applied to fluid systems, and hence the data are acquired on flowing material. The material is assumed to be constant in properties over the time of the data acquisition.

Relationship of NMR Properties to Food Quality

The consumer and producer of foods are interested in how foods taste, smell, feel, and look, as well as in shelf-life and nutritional value. These qualitative/quantitative descriptors often are highly correlated to physical/chemical attributes of the food product. The ability of NMR/MRI to measure and quantify physical and chemical properties, directly and indirectly, provides a powerful tool for quality assessment (McCarthy 1994). Direct measurements of composition are often possible using simple one-dimensional spectra or just the initial signal intensity in the time domain. Indirect measurements of composition may often be achieved through correlations with relaxation times. These decay times for the NMR signal may be strong functions of moisture content, textural properties, and/or physical phases. Diffusion coefficient measurements have a strong dependence on food material structure, such as particle size and other structural features in porous food matrices. MRI measurements of component spatial distribution give insight into the rates of drying, freezing, and other transient processes that impact final product quality, shelf-life, and process efficiency. Collectively, NMR and MRI measurement protocols permit a wide range of quality measurements in food systems. Yet these have been primarily restricted to laboratory environments. The instruments and measurements that have been used in processing environments are primarily based on low-field magnets making relaxation time measurements.
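Relaxation-based quality measurements of the kind described above usually reduce to fitting a multiexponential decay. The following sketch is a hypothetical illustration using NumPy and SciPy (not taken from the chapter); the data are simulated, the two T2 values of 258 and 1027 ms are borrowed from the navel orange CPMG data shown later in Figure 9.11, and the amplitudes, noise level, and function names are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, a1, t2_1, a2, t2_2):
    """Two-component CPMG echo decay: amplitudes a1, a2 and relaxation times t2_1, t2_2 (ms)."""
    return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

# Simulated CPMG echo train (echo times in ms); T2 values of 258 ms and
# 1027 ms correspond to the navel orange data of Figure 9.11.
t = np.arange(2.0, 2000.0, 2.0)
rng = np.random.default_rng(0)
signal = biexponential(t, 3.0e5, 258.0, 5.5e5, 1027.0) + rng.normal(0, 2e3, t.size)

# Fit the decay to recover the component amplitudes and T2 values.
params, _ = curve_fit(biexponential, t, signal, p0=(1e5, 100.0, 1e5, 800.0))
a1, t2_1, a2, t2_2 = params
print(f"T2 components: {t2_1:.0f} ms and {t2_2:.0f} ms")
```

In practice, the relative amplitudes of the two components can be correlated with quality attributes such as moisture distribution or tissue breakdown, which is the basis of the low-field process measurements discussed next.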
Recent Advances

Several commercial companies have been offering process-compatible NMR systems. These systems are either low-resolution, based on
relaxation time measurements, or high-resolution, based on Fourier transform spectral measurements. The low-resolution systems are manufactured by Process Control Technologies (www.pctnmr.com) and Progression, Inc. (www.progression-systems.com). Progression's systems have been applied primarily to controlling polyolefin and/or thermoplastic production. The high-resolution sensors have been used to measure component concentrations in industrial petroleum refineries (system manufactured by Qualion Ltd.). Recently this system has been used to measure quality factors in fluid foods, for example alcohol content in beer, and water and fat content in dairy products (www.processnmr.com). The extension of process NMR/MRI systems to a more complete range of experiments and property measurements is only now being realized. The advances enabling these extensions are described below.

Hardware: Magnets

Fundamental to all magnetic resonance spectroscopy and imaging applications is the use of a magnet. Magnets are available in three different types: electromagnets, superconducting magnets, or permanent magnets. Operating and capital costs for electromagnets and superconducting magnets are considerably higher than for permanent magnets. Thus, permanent magnets have historically been chosen for in-line, online, and at-line applications (McCarthy and Bobroff 2000). In recent years, significant advances have occurred in permanent magnet design and construction, permitting easier integration into manufacturing operations. These advances are primarily in the construction of large homogeneous-volume permanent magnets and small unilateral magnets. Large-volume homogeneous permanent magnets have the potential for use in a variety of measurements including chemical composition, physical properties, and structural properties. ASPECT Magnet Technologies LLC (www.aspect-mr.com) has recently introduced a permanent magnet with a large, rectangular homogeneous volume; a picture of one of these is shown in Figure 9.7. Magnets have been constructed with volumes ranging from 30 × 50 × 50 millimeters (mm) to 110 × 250 × 250 mm with homogeneity of up to 10 parts per million (ppm) over 90 mm. Images acquired using fast spin echo techniques demonstrate the performance of the magnet and are shown in Figure 9.8. The magnetic field strengths available for this design range
Figure 9.7. A one Tesla permanent magnet designed for in-line magnetic resonance imaging of structural defects in food products (Photo courtesy of Uri Rapoport, ASPECT Magnet Technologies, LLC).
from 1.0 Tesla to 1.5 Tesla. Unique key features of this system include shielding of the magnetic field so that there are no fringe magnetic fields, passive shimming, high-performance gradient coils, and the ability to perform both NMR spectroscopy and NMR imaging experiments. See Figure 9.7. This is a remarkable achievement enabling an expanded range of measurements for process environments. This magnet, spectrometer, and associated gradient technology enable practical in-line measurement of shear viscosity as a function of shear rate (Powell et al. 1994), inspection of packaged foods, and inspection of the internal quality of fruit, vegetables, and meats. Current applications of these systems include physical property measurements (for example, viscosity), proton spectroscopy (for example, moisture and fat content), and imaging of structural defects (for example, damage in fresh fruit). See Figure 9.8. An alternate configuration of a magnetic resonance magnet is the unilateral geometry. Shown in Figure 9.9 is one type of unilateral magnet with a watermelon as a sample. Recent work in magnet design, spectrometer design, and the analysis of NMR signals acquired in nonuniform magnetic fields has resulted in a range of portable NMR systems. The most versatile portable NMR system is the NMR-MOUSE (MObile Universal Surface Explorer, a registered trademark of RWTH Aachen) (Blümich et al. 2002, Blümich et al. 2005, and www.nmr-mouse.de). This device has been applied to measure properties of polymers, automobile tires, and foods, as well as human subjects
Figure 9.8. Four slices from a three-dimensional fast spin echo data set on a small lime using a permanent magnet designed for in-line magnetic resonance imaging (Data courtesy of Uri Rapoport, ASPECT Magnet Technologies, LLC).
(www.nmr-mouse.de). The NMR-MOUSE is based on a small magnet and RF coil assembly that achieves an active volume external to the magnet. This active volume is on the order of millimeters away from the surface of the NMR-MOUSE (Blümich et al. 2002). Because the magnet assembly can be easily lifted and moved with one hand, the NMR signal can be acquired from small regions on very large objects. Small regions can even be imaged using this assembly. The main magnetic field is shaped to produce a linearly decreasing field orthogonal to the magnet/RF coil surface. The frequency of excitation is used to select a plane above the surface, and phase encoding is used to spatially resolve signals in the other two dimensions. The main limitation of the NMR-MOUSE is the small active region for measurements and the limited depth of penetration.
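The frequency-based depth selection described above follows from the Larmor relationship ω0 = γB0 introduced earlier. The sketch below is a hypothetical illustration only; the surface field of 0.4 T and the 10 T/m fall-off are assumed values for demonstration and are not specifications of the NMR-MOUSE or of any magnet described in this chapter.

```python
# Gyromagnetic ratio of 1H expressed as gamma / (2*pi), in MHz per tesla.
GAMMA_1H_MHZ_PER_T = 42.577

def larmor_frequency_mhz(b0_tesla: float) -> float:
    """Proton Larmor frequency f0 = gamma * B0 / (2*pi) for a given field."""
    return GAMMA_1H_MHZ_PER_T * b0_tesla

def depth_for_frequency(f_mhz: float, b_surface_t: float, gradient_t_per_m: float) -> float:
    """Depth (m) selected by an excitation frequency when the field decreases
    linearly with distance from the magnet surface: B(z) = B_surface - g*z."""
    b_needed = f_mhz / GAMMA_1H_MHZ_PER_T
    return (b_surface_t - b_needed) / gradient_t_per_m

# Protons resonate at about 42.6 MHz in a 1.0 T magnet such as the one in Figure 9.7.
print(f"{larmor_frequency_mhz(1.0):.1f} MHz at 1.0 T")

# Hypothetical unilateral magnet: 0.4 T at the surface, falling off at 10 T/m.
# Exciting at 12.8 MHz would then select a plane roughly 1 cm above the surface.
print(f"{depth_for_frequency(12.8, 0.4, 10.0) * 100:.1f} cm")
```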
Figure 9.9. Watermelon on top of a unilateral NMR magnet and radio frequency coil (Photo courtesy of Eiichi Fukushima, ABQMR, Inc.).
Achieving a larger active volume than that of the NMR-MOUSE requires an alternate magnet design. The magnet shown in Figure 9.9 has a special design to extend the depth of measurement further into the sample (Fukushima and Jackson 2004). This extended depth of penetration allows for measurements on volumes deeper inside objects (5 to 40 mm). This is an important advance since the magnet is easily portable and can literally be taken to the field to inspect fruit such as watermelon or cantaloupe. The additional depth penetration also permits the inspection of packaged foods and the interior of other large pieces of food (for example, meats, breads). The limitations on these types of systems are that they can perform primarily relaxation time measurements, diffusion coefficient measurements, or density depth profiling over a limited region, and that the measurements are not yet at a speed compatible with in-line operation.
Figure 9.10. Portable NMR spectrometer manufactured by Tecmag, Inc. (Photo courtesy of Paul Kanyha, Tecmag, Inc.).
With the continual development of modeling software and magnet technology, many other types of magnets are possible, from easier designs of traditional systems to novel styles. The most common traditional designs are the Halbach cylinder and the C-shaped magnet. Figure 9.11 shows a Halbach magnet designed to measure spin-spin relaxation rates of whole navel oranges. This system is very similar to one recently constructed to measure relaxation decays for rock cores (Anferova et al. 2004). Extension of the Halbach design was recently demonstrated using an array of dipolar magnets to form a magnetic field of improved homogeneity (Anferova et al. 2004). This has been taken even further by including a latch and hinge to open and close the magnet about cylindrical samples such as a tree limb (Blümler et al. 2006). Examples of other magnet styles include a triangular-shaped magnet system to measure the moisture content of wood chips (Barale et al. 2002), an array of cylindrical magnets to create a large homogeneous-volume unilateral NMR system (Hunter et al.
2003), and an earth's field system for use in Antarctica (Dykstra et al. 2003).

Hardware: Spectrometers

Recent advances in electronics and digital signal processing have resulted in a number of portable NMR spectrometers. Figure 9.10 is a picture of one such portable spectrometer produced by TecMag (Houston, TX, USA). This device is a single-board NMR spectrometer with one broadband channel from 40 kilohertz (kHz) to 120 megahertz (MHz). The spectrometer allows for phase modulation and is controlled using a USB 2.0 connection to a laptop computer. Other vendors with portable NMR spectrometers include Magritek Limited (www.magritek.com) and Resonance Systems (www.mobilenmr.com). Single-board, broadband radio frequency excitation and data acquisition systems are available from SpinCore Technologies, Inc. (www.spincore.com). These are general-purpose NMR spectrometers. Quantum Magnetics Corporation (San Diego, CA, USA, now part of General Electric Security Systems) has built a small portable system for measuring only spin-spin relaxation times. This system is battery operated and consists of two small boards. One PC board is primarily a digital signal-processing chip for excitation and data acquisition, and the other board is a class-D switching amplifier with up to 1 kilowatt (kW) of power. This system is shown in Figure 9.11 and includes a portable computer and a Halbach cylindrical magnet weighing approximately 15 kilograms (kg). The magnet is made from ferrite and is not temperature controlled. The unit is powered by two lithium batteries permitting operation for approximately 8 hours. The computer is connected to the single-board spectrometer by an Ethernet cable. The software has a graphical user interface, which displays the data and experimental parameters. A sample data set for a navel orange is shown below the photograph in Figure 9.11.

Pulse Sequence Advances

Hardware developments are only part of the major advances in mobile and process NMR/MRI. The other major aspect is pulse sequence and experimental design. The main emphasis in pulse sequence design has
Figure 9.11. Photograph of a portable NMR system. The magnet is a 15-kg Halbach cylinder enclosed in an aluminum and wood case and is shown on the right side of the photograph. In the middle is the spectrometer electronics in an aluminum box that also contains batteries for powering the system. Data from a navel orange are shown below the photograph as a CPMG decay curve (T2 values of 258 and 1027 ms; amplitude in arbitrary units versus time in milliseconds).
been to extend experiments that are easily performed in homogeneous magnetic fields to inhomogeneous magnetic fields. Recent advances include the measurement of velocity profiles (Casanova et al. 2004, Perlo et al. 2005), three-dimensional imaging with a single-sided sensor (Perlo et al. 2004), novel single-shot measurements of relaxation
times or diffusion (Hürlimann 2007), and even 19F spectroscopy (Perlo et al. 2006). The measurement of spectra is based on using a nonuniform radio frequency field that matches the inhomogeneous field of the magnet. Although these spectroscopy experiments require about 3 minutes on volumes of approximately 3 mm3, larger samples will probably require an extension of the technique to achieve volume selection. This type of system should have widespread use in package inspection and for measurements on large food objects such as watermelons or meat carcasses.
References

Anferova S, V Anferov, DG Rata, B Blümich, J Arnold, C Clauster, P Blümer, and H Raich. 2004. A mobile NMR device for measurements of porosity and pore size distribution of drilled core samples, Concepts in Magn Reson Part B 23B(1), 26–32.
Barale PJ, CG Fong, MA Green, PA Luft, AD McInturff, JA Reimer, and M Yahnke. 2002. The use of a permanent magnet for water content measurements of wood chips, IEEE Trans Appl Superconductivity, 12(1), 975–978.
Blümich B, V Anferov, S Anferova, M Klein, R Fechete, M Adams, and F Casanova. 2002. Simple NMR-MOUSE with a bar magnet, Concepts Magn Reson, 15(4), 255–261.
Blümich B, F Casanova, J Perlo, S Anferova, V Anferov, K Kremer, N Goga, K Kupferschlager, and M Adams. 2005. Advances of unilateral mobile NMR in nondestructive materials testing, Magnetic Resonance Imaging, 23, 197–201.
Blümler P, N Hermes, H Soltner, D van Dusschoten, and N Schneider. 2006. The NMR-Cuff: A hinged Halbach. In: Conference Program and Abstracts, 6th Colloquium on Mobile Magnetic Resonance, P. Blümler, Ed., Aachen, Germany, September, P9.
Callaghan PT. 1991. Principles of Nuclear Magnetic Resonance Microscopy. Oxford: Oxford Science Publications.
Callaghan PT and CD Eccles. 1987. Sensitivity and resolution in NMR imaging. J Magn Reson, 71, 426–445.
Callaghan PT and Y Xia. 1991. Velocity and diffusion imaging in dynamic NMR microscopy. J Magn Reson, 91, 326–352.
Caprihan A and E Fukushima. 1990. Flow measurements by NMR. Phys Rep, 198, 195–235.
Casanova F, J Perlo, and B Blümich. 2004. Velocity distributions remotely measured with a single-sided sensor, J Magn Reson, 171, 124–130.
Dykstra R, PT Callaghan, CD Eccles, and MW Hunter. 2003. A portable NMR for remote measurements. In: Book of Abstracts, 7th International Conference on Magnetic Resonance Microscopy, Snowbird, Utah, USA, September 20–23, The University of Utah, P-8.
Fukushima E and JA Jackson. 2004. Unilateral magnet having a remote uniform field region for nuclear magnetic resonance. U.S. Patent No. 6,828,892; December 7.
Hunter MW, PT Callaghan, R Dykstra, and CD Eccles. 2003. Design and construction of a portable one-sided access NMR probe. In: Book of Abstracts, 7th International Conference on Magnetic Resonance Microscopy, Snowbird, Utah, USA, September 20–23, The University of Utah, P-7.
Hürlimann MD. 2007. Encoding of diffusion and T1 in the CPMG echo shape: Single-shot D and T1 measurements in grossly inhomogeneous fields, J Magn Reson, 184, 114–129.
McCarthy MJ. 1994. Magnetic Resonance Imaging in Foods. Chapman and Hall, New York, 110 pp.
McCarthy MJ and S Bobroff. 2000. Nuclear magnetic resonance and magnetic resonance imaging for process analysis. p. 8264–8281. In: RA Meyers, Ed., Encyclopedia of Analytical Chemistry, John Wiley & Sons Ltd., Chichester, England.
Perlo J, F Casanova, and B Blümich. 2004. 3D imaging with a single-sided sensor: an open tomograph, J Magn Reson, 166, 228–235.
Perlo J, F Casanova, and B Blümich. 2005. Velocity imaging by ex situ NMR, J Magn Reson, 173, 254–258.
Perlo J, F Casanova, and B Blümich. 2006. Single-sided sensor for high-resolution NMR spectroscopy, J Magn Reson, 180, 274–279.
Pope JM and S Yao. 1993. Quantitative NMR imaging of flow, Concepts Magn Reson, 5, 281–302.
Powell RL, JE Maneval, JD Seymour, KL McCarthy, and MJ McCarthy. 1994. Nuclear magnetic resonance imaging for viscosity measurements, Journal of Rheology, 38(5), 1465–1470.
Skloss TW, AJ Kim, and JF Haw. 1994. High resolution NMR process analyzer for oxygenates in gasoline, Anal Chem, 66(4), 536–542.
Chapter 10

Electronic Nose Applications in the Food Industry

Parameswarakumar Mallikarjunan
Introduction

In the food industry, characterizing a food product in terms of aroma or smell is a critical factor for the success of the food in the marketplace. Before putting the food in the mouth, consumers will smell the product, and any objectionable odor will prevent them from consuming it. To characterize this attribute, many descriptive terms have been used: smell, aroma, odor, and flavor. Of these, flavor refers to the volatiles released during smelling and during mastication. The smell is the resultant reaction between volatile chemicals and the nose. The volatiles are often released from biological materials due to physical and chemical reactions occurring in the material. Primarily, these volatile compounds are organic in nature and include aldehydes, ketones, and esters. Other volatile compounds that are not organic chemicals include sulfur, ammonia, hydrogen sulphide, etc. During consumption or selection of a product, the human nose recognizes the smell of a food product. Volatile compounds from the food enter the olfactory area from the nose and the mouth. When the human nose sniffs, currents of air swirl up over the turbinate bones and onto a sheet called the olfactory epithelium. The odor from the food is also brought into the olfactory epithelium through the mouth. The olfactory epithelium is very small, approximately one square centimeter per nostril, and yet contains an estimated 50 million primary sensory cells in humans. Each of the sensory cells has minuscule filaments that extend
from the surface of the epithelium into the watery mucus. Each filament contains a protein, the molecular receptor, on cilia that interact with the incoming odorant molecules. An odor molecule binds to these cilia to trigger a neuron, causing a smell to be perceived. Humans can distinguish more than 10,000 different smells. It has always been a challenge to correlate this human experience with analytical methods. Conventional analytical processes mostly use gas chromatography (GC), gas chromatography-mass spectrometry (GC-MS), and gas chromatography-olfactometry (GCO) methods, and involve several time- and labor-intensive steps including developing the methods, sampling, transporting the sample to the lab, preparing the sample for analysis, separating specific volatile compounds using an appropriate chromatographic column, interpreting the chromatogram, taking the decision back to the sampling site, and acting on the decision. Complex samples have to be separated into their individual chemical components before a decision can be made. Even with the use of MS methods, the correlation of sensory smell with instrumental data has not been very successful due to the interactions of several volatiles in forming the sensory experience. The aroma of a particular sample is a complex mixture of compounds, and even a large number of statistical calculations or multiple sniff ports cannot yield the exact smell print of the sample. Electronic noses are gas multisensor arrays that are able to measure aroma compounds in a manner that is closer to sensory analysis than GC. Generally, an electronic nose system is composed of several sensors that may be set to achieve various levels of sensitivity and selectivity. The adsorption of volatiles on the sensor surface causes a physical or chemical change of the sensor, and a specific reading is obtained for that sample. Electronic nose systems consist of an array of chemical sensors that respond to the volatile flavors from a sample (Bartlett et al. 1997) in a unique pattern (Haugen and Kvaal 1998). Though an electronic nose is not a substitute for human sensory panels, which remain the most reliable and sensitive means of measuring aroma, it can be used as a rapid, automated, and objective alternative to detect, measure, and monitor aroma (Mielle 1996). In recent years, there have been major advances in sensor technologies for odor analysis. Electronic noses have been around for approximately 35 years, and the last 15 years have seen dynamic advancements in both sensor technology and information processing systems. Chemosensor array-based systems using conducting polymer technology were
initially developed in the early 1980s (Payne 1998). Initial work on the electronic nose with this technology stemmed from polymer development achieved by the U.S. Air Force. The Air Force was attempting to use certain polymers found to conduct electricity to build an airplane that would better evade radar. Researchers at Britain's Warwick University in the mid-1980s used the findings from the military research to develop the first electronic nose chemosensory system based on conducting polymer sensors (Pope 1995). Since that time, other new sensor technologies have been developed that have properties more suitable for particular applications. Metal oxide semiconductor gas sensors were first used in the 1960s in Japan in home gas alarms. Conducting ceramic or oxide sensors were invented by Taguchi and produced by the company Figaro (Schaller and Bosset 1998). Electronic nose instruments have been tested successfully for use as a complementary tool in the discrimination of many consumer products. Currently, commercial electronic nose systems have been developed based on three major technologies: metal oxide semiconductors, semiconducting polymers, and quartz crystal microbalance sensors. A select number of companies also produce systems using metal oxide semiconductor field effect transistor technology. In addition, technologies using surface acoustic wave sensors (for example, the zNose) provide additional opportunities to convert traditional GC methods into something similar to electronic nose systems. Determining which of the aforementioned types of electronic nose systems, if any, would be most suitable as a discriminatory analysis tool for quality control is of interest in the food industry. The other major issue with electronic nose technology is that it is perceived as market pushed rather than demand driven. Many companies have sprung up but failed to sustain a market presence due to the push versus pull of the technology. The electronic nose manufacturers are trying to find applications for their systems and trying to convince the user base, especially the food industry, to adopt them. As a result, one can see rapid changes in the marketplace. A few examples follow:

- Cyrano Sciences merged with Smith Detection Systems.
- AromaScan changed to Osmetech.
- Nordic Sensors merged into Applied Sensors (and moved to other areas).
- Perkin Elmer dropped the support for their system with HKR GMBH, letting that company go out of business.
- The Agilent 4440A from Hewlett Packard is no longer available.

Electronic noses are comprised of (1) chemical sensors that are used to measure aromas, (2) electronic system controls, and (3) information processing systems and a pattern-recognition system that is capable of recognizing complex aromas. Although various sensor technologies are used among the currently manufactured instruments, most systems work using the same series of steps. They analyze compounds in a complex aroma and produce a simple output. Usually, the steps in using an electronic nose system include (1) generating an odor from a sample, (2) exposing the sensor array to the aroma, (3) measuring changes in the array of sensors when they are exposed to the odor, (4) establishing a recognition pattern for the sample from the responses of all or a number of sensors in the system, and (5) using this information in statistical analyses or pattern-recognition neural networks to compare to a database of other chemosensory measurements. Each of the sensors in the electronic nose is made with a unique material and, when exposed to a particular vapor mixture, each sensor reacts in a different but reproducible manner, producing a "smellprint" (the combination of responses from all sensors) for each volatile mixture. A database of smellprints, or digital images of chemical vapor mixtures, is created by training the electronic nose system. Then, using a prediction algorithm such as a multivariate technique (principal component analysis, canonical discriminant analysis, etc.), a model can be developed (a brief illustrative sketch follows the sampling discussion below). When a new unknown vapor mixture is to be identified, the electronic nose system digitizes the vapor mixture and compares this digital image with the previously established database (model) of smellprints in its memory. The unique feature of an electronic nose system is that its response takes into account all of the characteristic features (chemical and physical properties) of a sample but does not provide information about the composition of the complex mixture. Thus this system can be used when the decision about the chemical vapor of a sample is more important than its contents, such as a spoiled versus nonspoiled food sample, the age of a fruit, the type of cheese, adulteration in the product, etc. An aroma may be taken at ambient conditions to mimic what the human nose would experience under normal circumstances or the
sample may be heated to intensify aroma concentrations. Aroma exposure to the sensor array is generally accomplished by one of two methods: static headspace analysis and flow injection analysis. Static headspace analysis involves direct exposure to a saturated vapor taken from the headspace above a sample. Flow injection analysis involves injecting the aroma sample into a control gas that is constantly pumped through the sensor chamber (Payne 1998).
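The training and prediction workflow outlined above can be sketched in a few lines. The example below is a hypothetical illustration using scikit-learn (not part of the original text); the 12-sensor array, the fresh/spoiled class labels, and the simulated smellprints are invented for demonstration and simply mirror the multivariate (PCA plus discriminant analysis) approach described in the text.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Hypothetical training smellprints: 40 samples x 12 sensor responses,
# two classes (0 = fresh, 1 = spoiled) with slightly shifted sensor patterns.
fresh = rng.normal(loc=1.0, scale=0.1, size=(20, 12))
spoiled = rng.normal(loc=1.3, scale=0.1, size=(20, 12))
X_train = np.vstack([fresh, spoiled])
y_train = np.array([0] * 20 + [1] * 20)

# Build the "database" (model): compress smellprints with PCA, then classify
# the scores with a linear discriminant, as in the multivariate approach
# described in the text.
model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
model.fit(X_train, y_train)

# Predict the class of a new, unknown smellprint.
unknown = rng.normal(loc=1.3, scale=0.1, size=(1, 12))
print("predicted class:", model.predict(unknown)[0])  # 1 = spoiled
```

In practice the training set would come from headspace or flow injection measurements of reference samples rather than simulated values, but the pattern-recognition step is the same.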
Electronic Nose Niche

The electronic nose has both advantages and disadvantages compared with human sensory panels and GC/MS analyses. Therefore, it is used as a complementary instrument to monitor odors. The human olfaction system, which is the basis of sensory panels, is still the most sensitive device available for aroma measurement. It is also the odor measurement method used by consumers when assessing the odor of consumable products. Therefore, it is important that any odor monitoring method used in quality control or quality assurance be capable of detecting odors that may be found offensive by the human olfactory system. This is also the reason that human sensory panels are still the basis of aroma measurements in the food industry. Although electronic noses cannot compete with the sensitivity and final correlation of sensory panels and cannot replace them, they are objective instruments and involve a comparatively small capital investment compared to maintaining a standing sensory panel. They can also be used on the production floor and can even be implemented for in-line measurements. Work performed by Strassburger (1998) demonstrated that a metal oxide semiconductor (MOS) sensor-based instrument showed great potential in aiding flavor analysis in moving from the research and development phase to the production floor, as it produced results that were directly correlated to sensory and analytical results. Sensory panels are inherently subjective, and the physical condition of panelists may vary from one day to another. This brings inherent error into any scientific quantification of experimental results. Human panels require sustained training for each type of product or sample, and standardization between different panels at different sites is extremely difficult. Sensory panels have high costs associated with training, maintaining, and testing, and they experience fatigue. Therefore, they are not
run continuously for extended periods of time. A trained electronic nose provides a complementary, objective tool available for 24-hour complex aroma analysis (Payne 1998). Newman and others (1999) used a conducting polymer sensor-based electronic nose as a complementary tool to sensory analysis in the odor analysis of raw tuna quality. Electronic nose measurements were successfully correlated with sensory scores, with correct classification rates of 88%, 82%, and 90% for raw tuna stored at three temperatures. The electronic nose is an instrument that can be either portable or connected to an auto-sampler to reduce the need for human involvement in multiple sample testing. It is also an instrument available to potentially test odors that a human sensory panel would not be willing to test, although this facet is not particularly pertinent to the food industry. Past objective odor monitoring options involved the use of analytical GC/MS techniques. These techniques offer identification and quantification of the compounds comprising an aroma. However, GC/MS techniques often have difficulty identifying which of the constituent compounds contribute to the recognized odor, and to what extent, particularly for complex odors. Electronic nose technology has a unique advantage over GC and MS techniques because it is an analytical technique that samples an entire aroma rather than identifying its constituent components. It is also a faster method of aroma analysis (Payne 1998). A portable electronic nose unit could also be used to directly sample headspace aromas from bulk raw materials or food containers where sampling for GC/MS analysis becomes difficult (Hodgins and Conover 1995). Electronic nose analysis is also a technique that may be nondestructive and incurs low operational costs. Overall, it fills a number of gaps in odor analysis not filled by sensory panels and GC/MS techniques used in conjunction. Although the electronic nose has a number of weak points that prevent its exclusive use, it is a powerful tool that enhances aroma monitoring when used as a complement to sensory panels and GC/MS techniques.
Electronic Nose Market

The market for electronic nose instrumentation has developed more as a result of 'technology push' than of 'market pull' (Payne
1998). The technology has been continually developed without an existing market demand, and manufacturers have actively pushed sales and pursued applications in which electronic nose instruments would be useful. There are numerous and expanding applications in which such an instrument would be complementary and enhance a product or process. As a result of this market situation, manufacturers are able to offer strong product support and aid in the implementation of their instruments. However, until the recent development of portable electronic noses, long research development times and associated costs resulted in high pricing of most electronic nose systems. This has in turn retarded their expansion into new applications. End users are hesitant to purchase these instruments without being fully assured that they will work as the manufacturers claim. The production of a reliable industrial electronic nose is still in the development phase, and most systems currently available are best suited to a laboratory environment (Payne 1998). Current commercial electronic nose system manufacturers that are most involved in the market include Airsense (Germany), Alpha MOS (France), Applied Sensor (US), merged from Nordic Sensor Technologies (Sweden) and Motech GmbH (Germany), AromaScan (UK), Bloodhound Sensors (UK), HKR Sensorsystems (Germany), Lennartz electronic (Germany), Neotronics (USA, UK), RST Rostock (Germany), and OligoSense (Belgium) (Payne 1998).
Types of Chemosensory Systems

The major types of chemosensory-based electronic nose technology include MOS sensors, conducting polymer (CP) sensors, quartz microbalance (QMB) sensors, and metal oxide semiconductor field effect transistors (MOSFET). In recent years, certain manufacturers have also been developing hybrid or modular chemosensory systems that use multiple sensor types. The MOS and MOSFET sensors are considered to be ‘hot’ sensors, whereas the CP and QMB sensors are considered to be ‘cold’ sensors, due to their operating temperatures (Schaller and Bosset 1998). Recently, there has been an increase in the development of nanoscale sensors (primarily metal oxide based) with the aim of miniaturizing the sensing device. MOS sensors and CP sensors are the two technologies that have been used the longest in commercial electronic nose systems. Conducting
polymer sensors are easily fabricated with a high degree of reproducibility. They also have the greatest range of selectivity and sensitivity. However, the MOS-based systems are less susceptible to water vapor variations, are more robust, have a longer useful life, and are cheaper to replace.

Metal Oxide Sensors

MOS sensors consist of a ceramic substrate heated by a wire and coated with a metal oxide semiconducting film. The metal oxide coatings used are often n-type oxides such as zinc oxide, tin dioxide, titanium dioxide, or iron (III) oxide. P-type oxides such as nickel oxide or cobalt oxide are also used. The main difference between sensors using the two types of oxide coatings is the type of compound with which they react. Sensors using n-type (n = negative electron) coatings respond to oxidizing compounds because excitation of these sensors results in an excess of electrons in their conduction band. The p-type (p = positive hole) sensors develop an electron deficiency when excited and are therefore more prone to react with reducing compounds (Schaller and Bosset 1998). MOS sensors have a low sensitivity to moisture and are robust. They typically operate at temperatures ranging from 400°C to 600°C to avoid moisture effects. These sensors are not typically sensitive to nitrogen- or sulfur-based odors, but they are sensitive to alcohols and other combustibles (Bartlett et al. 1997).

Conducting Polymer Sensors

Conducting polymer sensors are composed of a conducting polymer, a counter ion, and a solvent that are grown from a solution onto an electrode bridging a 10-micrometer (µm) gap to produce a resistor. Measurements are made by monitoring changes in resistance. Altering one or more of the three constituent materials produces different sensors. The single-stage fabrication technique gives consistent reproducibility from one sensor to the next. Conducting polymer sensors are formed electrochemically onto a silicon or carbon substrate. This results in a polymer in an oxidized form that has cationic sites and anions from the electrolyte. Sensors made
from polymers based on aromatic or heteroaromatic compounds, such as polypyrrole, polythiophene, and polyaniline, are sensitive to many volatile compounds and experience a reversible change in conductivity (Persaud et al. 1999). Conduction is achieved in the electrically conductive polymer by electron transport, not ion transport; the carriers are associated with the cation sites (Hodgins and Simmonds 1995). Although conducting polymer sensors have the greatest range and balance between selectivity and sensitivity, they are more sensitive to water vapor and are more expensive to produce and replace. They can be used at room temperature and at moderately higher temperatures. This allows for the future development of handheld electronic nose instruments and avoids problems associated with the breakdown of volatiles at the sensor surface in systems using increased heating (Persaud et al. 1999).

Quartz Microbalance

Quartz microbalance (QMB) sensors, or quartz crystal microbalance (QCM) sensors, evolved from a larger group of piezoelectric crystal sensors. These sensors use crystals that can be made to vibrate in a surface acoustic wave (SAW) mode or a bulk acoustic wave (BAW) mode. The sensors are made from thin discs composed of quartz, lithium niobate (LiNbO3), or lithium tantalate (LiTaO3) that are then coated. The coating materials are usually GC stationary phases but may be any nonvolatile compounds that are chemically and thermally stable (Schaller and Bosset 1998). Quartz microbalance sensors respond to an aroma through a change in mass. When an alternating voltage is applied at a constant temperature, the quartz crystal vibrates at a very stable and measurable frequency, provided that viscoelastic effects are negligible (Bartlett et al. 1997). Upon exposure to volatile compounds in an aroma, the volatiles adsorb onto the GC-phase coating of the sensor, which changes the mass of the sensor and produces a measurable change in its oscillating frequency. QMB sensors have developed into a useful electronic nose technology because they produce stable responses and are formed through a simple fabrication process. In reporting on trends and developments in quartz microbalance chemosensory systems, Nakamoto and Moriizumi (1999) performed
work examining QMB sensor responses with different aroma injection systems as well as model development for response prediction. QMB sensor technology continues to improve, and Applied Sensor has recently released a handheld unit that is currently the least expensive electronic nose system on the market.
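The mass-to-frequency relationship that underlies QMB sensing is commonly described by the Sauerbrey equation for a rigid, thin film. The short Python sketch below applies that relation; the crystal frequency, electrode area, and the 5 Hz shift are illustrative assumptions and are not values taken from this chapter.

# Minimal sketch: estimating adsorbed mass on a QMB sensor from its frequency
# shift using the Sauerbrey equation (rigid, thin film assumed).
# The crystal parameters and the 5 Hz shift are illustrative values only.

def sauerbrey_mass_ng(delta_f_hz, f0_hz=10e6, area_cm2=0.2):
    """Return adsorbed mass (ng) for a frequency change of delta_f_hz."""
    rho_q = 2.648          # quartz density, g/cm^3
    mu_q = 2.947e11        # quartz shear modulus, g/(cm*s^2)
    # Sauerbrey: delta_f = -2 f0^2 delta_m / (A * sqrt(rho_q * mu_q))
    delta_m_g = -delta_f_hz * area_cm2 * (rho_q * mu_q) ** 0.5 / (2.0 * f0_hz ** 2)
    return delta_m_g * 1e9  # convert g -> ng

if __name__ == "__main__":
    # A 5 Hz decrease on a 10 MHz crystal corresponds to a few nanograms.
    print(f"{sauerbrey_mass_ng(-5.0):.1f} ng adsorbed")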
Metal Oxide Semiconductor Field Effect Transistors (MOSFET)

MOSFET sensors respond to aroma volatiles with a measurable change in electrostatic potential. Each sensor in a MOSFET system consists of three layers: a silicon semiconductor, a silicon oxide insulator, and a catalytic metal. The catalytic metal component, also called the gate, is usually palladium, platinum, iridium, or rhodium (Schaller and Bosset 1998). The standard transistor is an example of an "active" circuit component, a device that can amplify, producing an output signal with more power than the input signal. The field-effect transistor (FET) controls the current between two points, but does so differently from the bipolar transistor. The FET operates by the effect of an electric field on the flow of electrons through a single type of semiconductor material. Current flows within the FET in a channel, from the source terminal to the drain terminal. A gate terminal generates an electric field that controls the current. Placing an insulating layer between the gate and the channel allows for a wider range of control (gate) voltages and further decreases the gate current (and thus increases the device input resistance). The insulator is typically made of an oxide (such as silicon dioxide [SiO2]). This device is the metal oxide semiconductor FET (MOSFET). MOSFET sensors are similar to MOS sensors in that they are also robust and have a low sensitivity to water.
Surface Acoustic Wave (SAW)-Based Sensors

SAW sensors are piezoelectric quartz crystals that detect the mass of chemical vapors absorbed into chemically selective coatings on the sensor surface. This absorption causes a change in the resonant frequency of the sensor, similar to quartz microbalance-based sensors. An internal microcomputer measures these changes and uses them to determine the presence and concentration of volatiles. The SAW sensor coatings
have unique physical properties that allow a reversible adsorption of chemical vapors. The zNose™ is a novel device that combines a GC system with a SAW-based sensor, allowing nondestructive aroma profiling by producing a spectrum much like a GC but operating at the speed of an electronic nose. The sensing system is based upon fast chromatography techniques and a single high-Q acoustic sensor that simulates a virtual sensor array containing hundreds of orthogonal sensors. The zNose™ consists of a heated inlet port, a vapor preconcentrator, a temperature-programmed GC column, and a solid-state SAW detector. The SAW sensor is a temperature-controlled quartz crystal that absorbs vapors as they exit the GC capillary column. The changes in the fundamental frequency of the SAW detector caused by the absorbed mass of each condensed analyte produce a chromatogram showing retention times and total counts per second. Analysis of any odor is accomplished by serially polling this virtual sensor array, or a spectrum of retention times. Any analyte can be calibrated according to the retention times of a standard mixture of linear-chain n-alkanes. The chromatogram provides a qualitative and quantitative analysis of specific chemicals in the headspace analyzed. Data resulting from zNose™ measurements can be analyzed by either a chromatographic or a spectroscopic approach. In the chromatographic approach, selected peaks in the chromatogram are considered and their relative areas compared. In the spectral approach, the entire frequency plot is compared after baseline and timeline correction. Polar olfactory images of specific vapor mixtures, with frequency change as the radial (r) coordinate and elution time as the angular (θ) coordinate, can be obtained using the software (VaporPrint™ images) provided by the manufacturer (Staples 2000).
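To make the polar (VaporPrint-style) representation concrete, the sketch below wraps a frequency-shift trace onto polar axes with elution time as the angle. The two Gaussian peaks are synthetic data, and the plotting choices are illustrative assumptions rather than the manufacturer's algorithm.

# Minimal sketch: rendering a zNose-style polar "olfactory image" from a
# frequency-shift trace. Synthetic data illustrate the mapping of
# r = frequency change and theta = elution time.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 10.0, 500)                    # elution time, s (synthetic)
signal = (800 * np.exp(-((t - 2.5) ** 2) / 0.05)   # two synthetic analyte peaks
          + 400 * np.exp(-((t - 6.0) ** 2) / 0.1))

signal -= signal.min()                             # crude baseline correction
theta = 2 * np.pi * t / t.max()                    # map elution time to angle

ax = plt.subplot(projection="polar")
ax.plot(theta, signal)
ax.set_title("Synthetic VaporPrint-style polar image")
plt.show()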
Issues or Drawbacks with Electronic Nose Technology

Major problems with the use of electronic nose systems are sensor drift and poor repeatability and reproducibility, caused by system sensitivity to changes in operational conditions and by poor gas selectivity and sensitivity (Roussel et al. 1998). To overcome these difficulties, it is necessary to develop a successful and efficient testing methodology at optimum parameter settings, and periodic calibration or retraining of the nose is warranted.
Issues such as sensor drift and the nature of the instruments' discriminatory abilities are major concerns because electronic nose technology is selected and developed for particular applications. To achieve the necessary repeatability, the sensors in electronic nose systems must react reversibly with the compounds in a sample aroma. Sensor drift occurs when the sensors experience small additive changes over time and usage. The aging of sensors, or sensor drift, has been a major concern throughout the history of electronic nose development. However, some of the most recent advancements in electronic nose technology have been aimed at this issue. Advances in sensor design and manufacturing have helped to increase the useful life of sensors, and calibration standards and artificial neural networks have been further developed to increase the reliability and longevity of the reference measurements to which unknowns are compared. In addition, optimization of system and experimental parameters can establish more stable conditions and combat sensor drift. Mielle and Marquis (1998) examined several parameters of electronic nose analysis, including sensor temperature, number of sensors, and sample incubation time, to stabilize system response and lengthen the useful life of library patterns in the system database. Roussel and others (1999) examined the influence of various experimental parameters on multisensor array measurements using an electronic nose with tin dioxide (SnO2) sensors and attempted to quantify them. Volatile concentration in the headspace increased as the sample temperature was increased. In screening factorial experimental designs, the response to the experimental parameters must be monotonic within the studied range. Alternatively, response surface designs must be generated to develop a model of the multisensor response. The discriminatory power of any electronic nose chemosensory system is based upon its ability to respond measurably and repeatedly to components of aromas and to respond differently to aromas with varying components. The chemical nature and concentration of the volatiles in an aroma, the reaction kinetics and dynamics of those volatiles, and the system parameters and sample preparation all affect the fundamental response of each sensor. Schaak and others (1999) examined the effects of system parameters (injection volume, incubation time, and incubation temperature) on sensor response and discriminatory power for the MOS sensor-based Alpha M.O.S. FOX 3000.
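One common way to implement the calibration-standard idea mentioned above is to re-measure a stable reference sample at intervals and rescale each sensor's response accordingly. The sketch below shows a generic multiplicative correction of this kind; the array size, reference values, and correction scheme are illustrative assumptions, not procedures taken from the cited studies.

# Minimal sketch: multiplicative drift correction for a sensor array using a
# periodically measured reference (calibration) sample. Array shapes and the
# reference strategy are illustrative assumptions.
import numpy as np

def drift_correct(sample, reference_now, reference_at_training):
    """Rescale each sensor so the current reference matches the training-time reference."""
    gain = reference_at_training / reference_now   # per-sensor correction factor
    return sample * gain

rng = np.random.default_rng(0)
ref_train = np.array([1.00, 0.80, 1.20, 0.95])     # reference response when library was built
ref_now = ref_train * (1 + 0.05 * rng.standard_normal(4))  # drifted reference response
sample = rng.uniform(0.5, 1.5, size=4)             # raw response to an unknown sample

print(drift_correct(sample, ref_now, ref_train))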
Optimizing the system response of the sensors in an electronic nose through controlling system and experimental parameters is key to its usefulness as an analytical tool in most applications. Nakamoto and Moriizumi (1999) reported that QMB sensor responses could be predicted using computational chemistry, which allows optimal sensor selection for target odors. Hansen and Wiedemann (1999) performed optimization work using the Alpha M.O.S. (Toulouse, France) FOX 4000. The experimental and system parameters were investigated to optimize the response range of the sensors and enhance their discriminatory power. This work used a full factorial design and examined four experimental parameters: incubation time, incubation temperature, sample mass, and sample agitation rate. It was found that only the oven temperature had a major influence on volatile generation in the sample headspace. Bazzo and others (1999) performed optimization work for the MOS sensor-based FOX 4000 system to analyze high-density polyethylene (HDPE) packaging. The optimization allowed the selection of discriminating sensors as well as appropriate sample throughput conditions. It is necessary to optimize electronic nose instrumentation to ensure sensitivity at the lowest detection thresholds. The threshold detection levels of 30 food aroma compounds with varying chemical structures for a MOS sensor-based electronic nose system were found to be similar to reported ortho-nasal human detection thresholds (Harper and Kleinhenz 1999). Harper also found that the matrix solution used strongly influenced electronic nose threshold levels, and the use of a 4% ethanol matrix solution resulted in sensor resistance changes above their usable range. Consequently, it is necessary to find a workable range of sensitivity for the sensors in a chemosensory array for particular samples to achieve an appropriate sensor response. It must also be acknowledged that although electronic nose technology continues to improve, it still responds very differently to many compounds than does the human nose. For example, the human nose is not sensitive to water vapor or to several other compounds. However, such compounds affect most electronic nose systems, particularly those operating at lower temperatures. Consequently, electronic nose systems may be blinded by such compounds or may be unsuited to discriminating others to which the human nose is sensitive. The other major issue with the adoption of commercially available systems is the limitations of the software and user interface. Many
systems allow only a limited number of samples (around 10) to develop the smell print for that particular aroma. However, when dealing with biological products having wide variations, this becomes very limited for practical use. In addition, the presence of moisture in the biological materials also creates unique problems with respect to identifying the aroma.
Statistical Analysis

Problems exist with the use of electronic nose sensors, such as sensor drift and poor repeatability and reproducibility caused by system sensitivity to changes in operational conditions or poor gas selectivity and sensitivity (Roussel et al. 1998). To overcome these difficulties, it is necessary to develop a successful and efficient testing methodology at optimum parameter settings. Roussel and others (1999) examined the influence of various experimental parameters on multisensor array measurements using an electronic nose with SnO2 sensors and attempted to quantify them. Volatile concentration in the headspace increased as the sample temperature was increased. In screening factorial experimental designs, the response to the experimental parameters must be monotonic within the studied range. Alternatively, response surface designs must be generated to develop a model of the multisensor response. Response surface analysis involves the investigation of linear and quadratic effects of two or more factors. The fundamental principle of response surface methodology is to develop a simple mathematical expression, usually a first- or second-order polynomial, that approximates the relationship between the response and the examined factors (Devineni et al. 1997). An experimental design is selected that allows a minimal number of experiments to be used to examine a full range of values for a particular factor. Popular Box-Behnken designs are fractions of 3^N designs used to estimate a full quadratic model in N factors. They consist of all 2^k possible combinations of high and low levels for different subsets of the factors of size k, with all other factors at their central levels; the subsets are chosen according to a balanced incomplete block design for N treatments in blocks of size k. A number of center points, with all factors at their central levels, may also be added (Box and Draper 1987, SAS System Help 1988).
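To make the polynomial-fitting idea concrete, the sketch below fits a full second-order response surface in two coded factors by ordinary least squares; the factor settings and responses are synthetic and do not correspond to any experiment discussed here.

# Minimal sketch: fitting a second-order response surface
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# to coded factor settings by least squares. Data are synthetic.
import numpy as np

# Coded settings for two factors (e.g., incubation temperature and time)
x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1, 0], dtype=float)
x2 = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0, 0], dtype=float)
y = np.array([4.1, 5.0, 6.2, 7.5, 6.8, 5.9, 6.6, 5.2, 7.0, 6.9])  # sensor response (synthetic)

# Design matrix with intercept, linear, quadratic, and interaction terms
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(dict(zip(["b0", "b1", "b2", "b11", "b22", "b12"], np.round(coef, 3))))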
The response surface analysis procedure uses the method of least squares to fit quadratic response surface regression models. The models focus on characteristics of the fitted response function and, in particular, on where the optimum estimated response values occur.

Multivariate Factor Analysis

Statistical analysis is key to understanding the sensor responses in an electronic nose instrument and realizing their discriminatory power. Discrimination and identification of sample recognition patterns require the use of multivariate factor analysis. Factor analysis is a type of multivariate analysis concerned with the internal relationships of a set of variables (Lawley and Maxwell 1971). Several multivariate statistical methods are used among electronic nose systems. Multivariate discriminant analysis and principal component analysis (PCA) are the factor analysis methods most common in electronic nose data analysis software and are the primary topics discussed here. Other types of factor analysis, such as cluster analysis, partial least squares, Soft Independent Modeling of Class Analogy, and artificial neural networks, are also discussed briefly. Particular attention is given to the descriptive statistics quantifying the amount of separation between sample classes and the identification of unknowns, particularly the Mahalanobis distance.

Principal Components Analysis

PCA allows data exploration and was initially developed and proposed by Hotelling in 1933. It is the extraction of principal factors through the use of a component model, accomplished by assessing the similarities between samples and the relationships between variables. It is a linear technique and uses the assumption that response vectors are well described in Euclidean space (Bartlett et al. 1997). The object is to determine whether samples are similar or dissimilar and can be separated into homogeneous groups, and to determine which variables are linked and the degree to which they correlate. PCA summarizes the information contained in a database in subspaces with the object of reducing the number of variables and eliminating redundancy (Gorsuch 1983, Jolliffe 1986). In PCA, the principal factor method is applied to a correlation matrix with unit values as the diagonal elements. The factors then give the most
suited least squares fit to the full correlation matrix, with each factor ranked based upon the amount of the total correlation matrix that it accounts for. The principal components of the analysis are linear combinations of the original variables, and the information discerned from the analysis is presented in two- or three-dimensional spaces relative to the chosen components, which are classified based upon the level of information that they produce. The smaller factors are generally dropped from the model because they carry a trivial portion of the total variance and do not provide significant information (Gorsuch 1983, Alpha M.O.S. 2000). PCA is a form of dimension-reduction factor analysis in which the relationships of a set of quantitative variables are examined and transformed into factors based on the amount of variability they contribute to the system. Although PCA does not ignore covariances and correlations, it concentrates on variances. The principal components are selected and ranked based on the amount of total variation, not the variation that most discriminates among classes of observations. This method of analysis is commonly used to reduce the number of variables prior to performing discriminant analysis, in order to make the calculations in the latter more manageable (Jolliffe 1986).

Discriminant Analysis

Multivariate discriminant analysis, also known as discriminant function analysis, discriminant factorial analysis (DFA), Gaussian discriminant function (GDF), and canonical discriminant analysis (CDA), is the most common analysis method used by electronic nose systems to separate classes of observations in a database (Lawley and Maxwell 1971). CDA is a dimension-reduction technique that creates new canonical variables by taking special linear combinations of the original response variables. The canonical variables of CDA are, in some sense, similar to the principal components of PCA. The principal advantage of CDA is its ability to allow the researcher to visualize the observations, classified into the different categories, in two- or three-dimensional space. Another advantage of CDA is that the output from a PCA can be used as input for the CDA and the data thus visualized. If possible, one can attempt to interpret the canonical variables (Johnson 1998). Canonical correlation analysis (CCA) is generally performed when there is a need to compare groups of variables. It helps in reducing the dimensionality of the data. CCA can be used to summarize the
underlying relationship between groups of variables by creating new variables from the existing groups of variables. These new variables are called canonical functions. When performing CCA, the optimum number of canonical functions can be known only after performing a preliminary CCA. Generally the option NCAN = 2 is used to limit the number of canonical functions generated to two. Interpretation of canonical functions is generally considered to be difficult (Johnson 1998). CDA may be used to determine descriptive variables that predict the divisions between groupings when information regarding sample groupings is known ahead of time. An algorithm is used to determine linear combinations of new descriptive variables that separate the predetermined groups as much as possible. A set of data N_x is partitioned into m subsets {N_x^1, N_x^2, ..., N_x^k, ..., N_x^{m-1}, N_x^m} that represent different quality descriptors. CDA then develops an algorithm with new variables {F_1, F_2, ..., F_j, ..., F_s} that correspond to the directions that separate the subsets. This method allows the classification prediction of an unknown as one of these groups through the computation of the distance to the centroid of each of the groups. The unknown is classified with the closest associated group (Lawley and Maxwell 1971, Harman 1976, Alpha M.O.S. 2000). CDA is a dimension-reduction type of factor analysis related to principal component analysis and canonical correlation. The manner in which the canonical coefficients are derived parallels that of a one-way MANOVA. In CDA, linear combinations of the quantitative variables are found that provide maximal separation between the classes or groups. Given a classification variable and several quantitative variables, this procedure derives canonical factors, linear combinations of the quantitative variables that summarize between-class variation in much the same way that principal components summarize total variation. The discriminant function procedure in the SAS software (Cary, North Carolina), PROC DISCRIM, develops a discriminant criterion to classify each observation into one of the groups for a set of observations containing one or more quantitative variables and a classification variable defining groups of observations. The discriminant criterion derived from a training or calibration data set can be applied to an unknown data set (Harman 1976, SAS System Help 1988). In CDA, the classification of an unknown observation involves plotting the unknown observation and determining whether the point falls near the mean point of one of the groups. If the unknown observation is
close enough, the sample can be classified as being the same material. If the point is far away from all groups in a database, the sample may be a different material or have a different concentration from the sample observations used as the training set data. The approach is relatively straightforward except for the question of how being near a group is actually defined. Visual inspection of a CDA projection plot provides useful initial information. However, it is not a viable method for real-world discriminant analysis applications. Quantification with a mathematical equation is needed to measure the closeness of the unknown point to the mean point of the groups in a database. The Euclidean distance is one such measurement technique. The unknown response can be used in a formula to calculate the distance of the unknown point to the group mean point. This would be an acceptable method except for two facts: the Euclidean distance does not give any statistical measure of how well the unknown matches the training set, and the Euclidean distance only measures a relative distance from the mean point of the group. The method does not take into account the distribution of the points in the group, even though the variation along one axis is often greater than the variation along another axis. The training set group points tend to form an elliptical shape, whereas the Euclidean distance describes a circular boundary around the mean point. The Euclidean distance method is not an optimum discriminant analysis algorithm because it does not take into account the variability of the values in all dimensions (Jolliffe 1986, Marcus 2001). The Mahalanobis distance (D), however, does take the sample variability into account. It weights the differences by the range of variability in the direction of the sample point. Therefore, the Mahalanobis distance constructs a space that weights the variation in the sample along the axis of elongation less than in the shorter axis of the group ellipse. In terms of Mahalanobis measurements, a sample point that has a greater Euclidean distance to a group than another sample point may have a significantly smaller distance to the mean if it lies along the axis of the group that has the largest variability (Jolliffe 1986, Marcus 2001). Mahalanobis distances examine not only the variance between the samples within a group, but also the covariance among groups. Another advantage of using the Mahalanobis measurement for discrimination is that the distances are calculated in units of standard deviation from the group mean. Therefore, the calculated circumscribing ellipse formed around the cluster of a class of observations actually defines a preset
number of standard deviations as the boundary of that group (Jolliffe 1986). The user can then assign a statistical probability to that measurement. For relatively large samples and under normality assumptions, D/2 behaves like a normal multivariate z with standard deviation 1. In theory, D/2 can be examined to obtain an indication of the separation between samples and their estimated populations, and the probability of incorrect assignment. A D value of 5 would correspond to about five standard deviations of separation, which covers approximately 99% of a population given a multivariate normal distribution. Separation of groups quantified with a Mahalanobis distance greater than 5 would indicate very little overlap. In practice, the determination of the cutoff value depends on the application and the type of samples (Marcus 2001). The Mahalanobis distance, expressed as D² or D, is consequently the statistic most often used in multivariate analyses to identify unknown samples and to quantify the probability that they belong to the identified class. The Mahalanobis distance is the most appropriate measure of multivariate relationships when the data are normally distributed, homoscedastic, and have equal covariance matrices across groups. Most software packages give a classification matrix of the Mahalanobis distances to each group centroid and identify the sample as belonging to the group with the smallest distance (Jolliffe 1986). The Mahalanobis metric in a minimum-distance classifier is generally used as follows. Let m_1, m_2, ..., m_c be the means for the c classes, and let C_1, C_2, ..., C_c be the corresponding covariance matrices. An unknown vector x is classified by measuring the Mahalanobis distance from x to each of the means and assigning x to the class for which the Mahalanobis distance is minimum (Knapp 1998). Articles are often not specific about whether D or D² is being reported, and usually this can be discerned only from context. Mahalanobis D or D² is a descriptive measure of similarity that adjusts for the pooled within-group variances and covariances. D² is calculated first and is often preferred, much as variance is to standard deviation, because it is additive and has a known distribution; however, only D, like the standard deviation, is in the original measurement units. Also, D is used as the ruler in canonical variate space or canonical projection plots and so is the more useful form when examining the data graphically (Marcus 2001). D² is approximately chi-square distributed with p degrees of freedom. Therefore, an unknown is still assigned to the population with
the smallest D². Furthermore, using this idea, one can decide not to assign an unknown if all D² values are larger than some cutoff based on the chi-square distribution with p degrees of freedom. It is also a useful statistic for finding multivariate outliers in a sample. If the data follow a multivariate normal distribution, then the D² values will be approximately chi-square distributed with p degrees of freedom. For standardized principal components, D² is the sum of squares (Jolliffe 1986). One problem with the Mahalanobis distance is that, because it is a summation of coefficient products, it is affected by the number of observations and independent variables used in the calculation. As with many multivariate quantitative methods, the Mahalanobis distance solves for multiple dimensions simultaneously. However, the Mahalanobis model tends to become overfit very quickly as more independent variables are added. This is similar to the increase in R² when more independent variables are added to a model, and it follows logically from the way the Mahalanobis matrix is calculated. D² is always greater than zero; therefore, D² is a biased estimate whether the null hypothesis is true or not. The size of the bias can be substantial when the sample sizes are small relative to the number of variables measured. An unbiased Mahalanobis distance (D²_u) is given by Equation 10.1 (Marcus 2001):

D_{u(1|2)}^{2} = \frac{n_1 + n_2 - p - 3}{n_1 + n_2 - 2}\, D^2 - \frac{(n_1 + n_2)\, p}{n_1 n_2}    (10.1)
where D² = Mahalanobis distance between classes 1 and 2, dimensionless; n1 and n2 = number of observations in classes 1 and 2; and p = number of independent variables.
The answer to combining these apparently opposing requirements into one method for sample discrimination lies in first reducing the sensor data of electronic nose systems into its component variations with principal component analysis. A commercially available electronic nose, the Cyranose 320, uses this method to avoid over-fitting and instability in the calculations. The PCA method indirectly performs a sensor selection and reduces the number of variables used in building a canonical model. It is recommended for that instrument that a breakpoint of 5 for
the Mahalanobis distance, D, indicates well-separated groups (Cyrano Sciences 2000).
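The PCA-then-Mahalanobis strategy described above can be sketched in a few lines of Python. The sensor count, the three classes, and the chi-square cutoff are illustrative assumptions, and the code is a generic sketch rather than the Cyranose 320's actual algorithm.

# Minimal sketch: reduce sensor responses with PCA, then assign an unknown to
# the class with the smallest Mahalanobis distance to its centroid, rejecting
# it as an outlier if every distance exceeds a chi-square based cutoff.
# Synthetic data; a pooled covariance matrix is used, as in CDA.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n_per_class, n_sensors, n_pcs = 20, 32, 4
classes = ["fresh", "marginal", "spoiled"]

# Synthetic training responses: 3 classes x 20 samples x 32 sensors
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_sensors))
               for c in (0.0, 1.0, 2.0)])
y = np.repeat(classes, n_per_class)

# PCA by hand: center the data, keep the leading right singular vectors
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
scores = (X - mean) @ Vt[:n_pcs].T

# Class centroids and pooled within-class covariance in PC space
centroids = {c: scores[y == c].mean(axis=0) for c in classes}
pooled = sum((scores[y == c] - centroids[c]).T @ (scores[y == c] - centroids[c])
             for c in classes) / (len(y) - len(classes))
pooled_inv = np.linalg.inv(pooled)

def classify(sample, alpha=0.01):
    """Return the nearest class, or None if all D^2 exceed the chi-square cutoff."""
    z = (sample - mean) @ Vt[:n_pcs].T
    d2 = {c: (z - m) @ pooled_inv @ (z - m) for c, m in centroids.items()}
    best = min(d2, key=d2.get)
    cutoff = chi2.ppf(1 - alpha, df=n_pcs)   # D^2 ~ chi-square with p df (approx.)
    return best if d2[best] <= cutoff else None

print(classify(rng.normal(loc=1.0, scale=0.5, size=n_sensors)))  # expected: "marginal"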
For cases where the number of observations or independent variables used in the discriminant analysis differ, it is not fair to compare Mahalanobis distances directly, and so a standardized value must be compared. An average or unbiased Mahalanobis distance, calculated using a proportionality constant that accounts for the number of independent variables and observations, may be used, or the F-value for the Mahalanobis distance may be used for comparison. Hotelling's T-square, T², equals this distance except for an included proportionality constant. A problem with both D² and T² is that they are based on the inverse of the variance-covariance matrix, S = X'X, with the assumption that X has been centered and scaled. This inverse can be calculated only when the number of variables, p, is small compared to the number of training set samples, N. Mahalanobis D² is also part of the formula for the two-sample extension of Student's t test, which tests whether the centroids of two multivariate populations have the same mean. This is Hotelling's T² test, a maximum likelihood test. D² can be converted to T² and then to an F statistic, which has p and n1 + n2 − p − 1 degrees of freedom, by multiplying by appropriate constants based on the number of observations in each class and the number of variables used. This statistic is known to be fairly robust to violations of normality assumptions but is more sensitive to equality-of-variance assumptions, particularly for disparate sample sizes (SAS System Help 1988). Because the F-value incorporates the Mahalanobis distance and also takes into account the number of observations used, the degrees of freedom, and the number of independent variables used, it is a useful term for comparing discriminant analyses performed by different systems. The Wilks' Lambda value is calculated from the inverse of the product of each of the eigenvalues incremented by one. Because Lambda is a type of inverse measure, values of Lambda close to zero denote high discrimination among groups. The F-value for the Wilks' Lambda provides a quantitative value for the overall discrimination of all the classes involved in the discriminant analysis. While it is a useful number for quickly quantifying the amount of separation between classes, it denotes total discrimination and does not indicate whether the total amount of separation is due to a balanced separation of all the classes or to a very large separation of some classes with little separation between other classes. Consequently, the most useful values in comparing Mahalanobis distances from different systems are the F-values for the Mahalanobis distances, because they give a standardized value of the separation between each pair of classes analyzed in the discriminant analysis. The percent correct during cross-validation also provides additional information regarding the degree of separation. After the discriminant model is developed, the most common method of validation is what is commonly referred to as the "leave one out" method, or cross-validation. In this cross-validation, each data point is removed in turn and tested as an unknown against the model developed with the remaining data points. A value of 100% indicates complete separation of all classes; a value of 90% is usually considered sufficient validation for a database model. The user sets the actual required percent recognition for training set validation based on the application requirements. This is often called the "leave one out" procedure, as each observation is left out in turn and then identified using all of the remaining data (SAS System Help 1988). Equations 10.2–10.5a, used to calculate the terms discussed, are given as follows (Jolliffe 1986):

D_{(1|2)}^{2} = (x_1 - x_2)'\, \mathrm{COV}^{-1} (x_1 - x_2)    (10.2)

F_{\mathrm{Mahalanobis}(1|2)} = \frac{(n_1 - 1) + (n_2 - 1) + (n_3 - 1) - p + 1}{[(n_1 - 1) + (n_2 - 1) + (n_3 - 1)]\, p} \cdot \frac{n_1 n_2}{n_1 + n_2}\, D^2    (10.3)

\Lambda = \frac{1}{1 + \lambda_1} \cdot \frac{1}{1 + \lambda_2}    (10.4)

F = \frac{1 - \Lambda^{1/t}}{\Lambda^{1/t}} \cdot \frac{[N - 1 - 0.5(p + k)]\, t - 0.5[p(k - 1) - 2]}{p(k - 1)}    (10.5)

t = \sqrt{\frac{p^2 (k - 1)^2 - 4}{p^2 + (k - 1)^2 - 5}}    (10.5a)

where D²_(1|2) = Mahalanobis distance between classes 1 and 2, dimensionless;
F_Mahalanobis(1|2) = F-value for the Mahalanobis distance between classes 1 and 2, dimensionless; x1 and x2 = the mean vectors (centroids) of classes 1 and 2, dimensionless; n1, n2, and n3 = number of observations in each class; p = number of independent variables; Λ = Wilks' Lambda, dimensionless; λ1 and λ2 = first and second eigenvalues derived from the discriminant analysis, dimensionless; F = F-value for Wilks' Lambda, dimensionless; COV = pooled variance-covariance matrix, dimensionless; k = number of classes; and N = total number of observations in all classes.
Discriminant analysis is used primarily to answer three basic questions:
1. Are the number of sensors and the sensor data obtained from the training set useful for building a model to classify the apples into their maturity levels or stages?
2. Can the model correctly classify unknown apples of varying maturity levels?
3. If not, what is the misclassification percentage?
Discriminant analysis, also known as classification analysis, is a multivariate method for classifying observations into appropriate categories (here, apples into appropriate maturity levels) (Johnson 1998). The concept of discriminant analysis is analogous to regression analysis: the goal of the latter is to predict the value of the dependent variable, while that of the former is to predict the category of the individual observation (Johnson 1998). The main difference is that the multivariate (discriminant analysis) approach is used when the variables are not independent, a condition that violates the assumptions of regression. According to Johnson (1998), there are four nearly equivalent ways to develop a discriminant rule to classify observations into categories: the Likelihood Rule, the Linear Discriminant Function Rule, the Mahalanobis Distance Rule, and the Posterior Probability Rule. Three different methods can be used to verify or estimate the probability of correct classification of the observations; they are described in detail below (Johnson 1998).
1. Resubstitution Method: The resubstitution method applies the discriminant rule to the same data that were used to develop the rule and checks how many observations are classified into the correct categories. The presumption is that if a rule cannot properly classify the original data used to build it, there is little chance of it doing well with a new data set. The major drawback of this method is that it overestimates the probability of correct classification. In SAS, this method can be invoked using the DATA = option (lists).
2. Holdout Method: This method uses a holdout or test data set, for which the category of each observation is known but which is not used to develop the discriminant rule. The major drawback of this method is that the holdout data must be sacrificed in order to test the rule, so the best possible discriminant rule cannot be developed from all of the data. In SAS, this method can be invoked using the DATA = option (test data).
3. Cross-Validation Method: Lachenbruch and Mickey (1968) first proposed the cross-validation method, also known as jackknifing. This is the preferred method compared to the two above. The first observation vector is held out and the remaining data are used to construct the discriminant rule; the rule is then used to classify the first observation, and a check is made as to whether it is classified into the correct category. In the next step, the second observation vector is removed, the first observation is returned to the data, the rule is reconstructed, and the second observation is classified. The same process is continued for the entire data set, noting the category into which each observation is classified (a brief code sketch of this procedure is given below, after the variable selection discussion). This method is claimed to be almost unbiased. In SAS, this method can be invoked using the DATA = option (cross lists).

Variable Selection Procedure

Since the number of variables involved in this study is high (32), a variable selection procedure is used to reduce the variables to those really necessary for effective discrimination of the data. The three types of variable selection procedures are the Forward Selection Procedure, the Backward Elimination Procedure, and the Stepwise Selection Procedure. Johnson (1998) recommends the Stepwise Selection Procedure when the number of variables exceeds 15.
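The leave-one-out (cross-validation) procedure described in item 3 above can be sketched as follows; scikit-learn is assumed to be available, and the 32-sensor array with three maturity classes is synthetic data invented for the example.

# Minimal sketch: leave-one-out ("jackknife") cross-validation of a linear
# discriminant model on electronic nose data. Synthetic data only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_sensors = 15, 32
X = np.vstack([rng.normal(loc=c, scale=0.6, size=(n_per_class, n_sensors))
               for c in (0.0, 0.7, 1.4)])            # three maturity levels
y = np.repeat(["immature", "mature", "overmature"], n_per_class)

# Each observation is left out in turn, the model is refit, and the held-out
# observation is classified; the mean score is the percent correct.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"Leave-one-out correct classification: {100 * scores.mean():.1f}%")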
Other statistical analyses used include partial least squares (PLS), Soft Independent Modeling of Class Analogy, cluster analysis, and artificial neural networks. Discriminant analysis should not be confused with cluster analysis or principal component analysis, because discriminant analysis requires prior knowledge of the classes. The data used in cluster analysis do not include information on class membership; its purpose is to construct the classification (Guertin and Bailey 1970, SAS System Help 1988). PLS is a statistical method that may be used to extract quantitative information. It is an algorithm based on linear regression and can be used to extract concentration or sensory score predictions. The PLS algorithm attempts to correlate a matrix containing quantitative measurements to a predictive matrix using a matrix of sensor measurements from the electronic nose instrument. After the model is built, the predictive matrix is used to predict the quantitative information contained in an unknown sample (Gorsuch 1983, Alpha M.O.S. 2000). Soft Independent Modeling of Class Analogy (SIMCA) is a factor analysis method similar to PCA and CDA. This method classifies unknown samples by comparison to a database composed of one group only. PCA is first performed on the data with the objective of finding the subspace that most precisely contains the samples. Each sample is explained in terms of its projection on the subspace and its projection on the orthogonal subspace; the matrix composed of a set of sensor observations thus induces two new matrices. The threshold identification criteria are set with theoretical values for the norm of the residual part of the predictive matrix and the Mahalanobis distance of the quantitative scores matrix to the centroid of the values projected in the subspace. SIMCA modeling works with as few as five observations from each population, with no restriction on the number of independent variables (Jolliffe 1986, Alpha M.O.S. 2000). Cluster analysis deals with data sets that are to be divided into classes when very little is known beforehand about the groupings. It provides an entry into factor analysis by establishing groupings within a data set. Within cluster analysis, principal components are calculated and used to provide an ordination or graphical representation of the data, or to construct distance measures. The majority of cluster analysis techniques require the computation of similarity or dissimilarity among each pair of observations, with the objective of clearly identifying group structures. The PCA graphical representation is often useful in verifying a cluster structure. This method of analysis is also often used in conjunction
with artificial neural networks to perform the classifications (Guertin and Bailey 1970, Jolliffe 1986).

Artificial Neural Networks

In many applications, there may be many references or combinations of sensor data to which the unknown needs to be compared. In these cases, an artificial neural network (ANN) is often used to analyze data from the sensor array. ANNs are particularly useful in analyzing data from hybrid electronic nose instruments where combined data must be analyzed. They are also particularly useful when the data to be analyzed exhibit a non-Gaussian distribution. The artificial neurons carry out a summation or other simple equation using predetermined weighted factors. The weighted factors are determined during the training of the neural network and are set arbitrarily before it is trained (Hodgins and Simmonds 1995). The training process for any ANN is a defining factor in its success. Training is accomplished by inputting data from the sensor array to the artificial neurons along with the desired answer for those data. The neural network calculates the values at all the neurons in the hidden and output layers. A back propagation technique is then used to adjust the weighted factors until the correct output is achieved. This is repeated for all sensor data for all samples in a training set. A common breakpoint for determining whether the ANN is sufficiently trained for an application is that the weighting factors vary by no more than 10% during a training run (Hodgins and Simmonds 1995). A trained ANN can then be used to identify an unknown sample by comparing it to all of the references in the training set. In practice, ANNs do not always identify unknowns that match one of their references with 100% confidence. However, ANNs do provide a means for performing numerous comparison calculations quickly to provide identification.
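The training loop described above can be illustrated with a small single-hidden-layer network trained by back propagation; the layer sizes, learning rate, and synthetic three-class data are assumptions made for this example and are not parameters of any commercial system.

# Minimal sketch: a one-hidden-layer neural network trained by back
# propagation to classify sensor-array patterns into three classes.
# Synthetic data; sizes and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_sensors, n_hidden, n_classes = 20, 8, 6, 3

# Synthetic sensor-array patterns for three classes
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, n_sensors))
               for c in (0.0, 0.5, 1.0)])
labels = np.repeat(np.arange(n_classes), n_per_class)
T = np.eye(n_classes)[labels]                      # one-hot target outputs

# Weights start at small arbitrary values before training
W1 = 0.1 * rng.standard_normal((n_sensors, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, n_classes))
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(3000):
    H = sigmoid(X @ W1)                            # hidden-layer activations
    Y = sigmoid(H @ W2)                            # output-layer activations
    err_out = (Y - T) * Y * (1 - Y)                # output delta (squared-error loss)
    err_hid = (err_out @ W2.T) * H * (1 - H)       # back-propagated hidden delta
    W2 -= lr * H.T @ err_out                       # adjust weighted factors
    W1 -= lr * X.T @ err_hid

pred = np.argmax(sigmoid(sigmoid(X @ W1) @ W2), axis=1)
print(f"Training-set accuracy: {100 * np.mean(pred == labels):.1f}%")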
Applications in the Food Industry

Many food industry professionals are skeptical about the claims and capabilities of the technology, and the need to develop training sets for each application is also slowing the adoption of this technology on a wider scale. Schaller and Bosset (1998) reviewed the
applications of different electronic nose systems to different foods including meat, grains, coffee, beer, mushrooms, cheese, sugar, fish, blueberry, orange juice, cola, and alcoholic beverages, as well as packaging. They concluded that the electronic nose can be regarded as an interesting tool for a quick "yes or no" quality test, and could occasionally replace sensory analysis, or even perform better, in cases where nonodorous or irritant gases need to be detected. In the last few years, more and more electronic nose applications have been developed for implementation in the food industry (Table 10.1). The technology has excellent potential for use in quality assurance and quality control applications and in checking the compliance of ingredients from suppliers. Baby and others (2005) evaluated the discrimination ability of a modular sensor system (MOSES) for classifying medicinal plants and found that discrimination between Valeriana officinalis and Valeriana wallichii types was achieved very successfully. Classification of milk samples from different dairies, and by fat content within a particular dairy product, was performed using a support vector machine (SVM) approach with metal oxide sensors (Brudzewski et al. 2004). Similarly, Collier and others (2003) attempted to classify various dairy products using metal oxide sensors and compared the results to screen-printed electrochemical arrays. In addition, the technology can be used to monitor quality changes, especially oxidative changes in foods with an appreciable lipid content and off-flavor development in food products caused by spoilage microorganisms. Aparicio and others (2000) studied rancidity in olive oils using conducting polymer-based sensors and found that they could detect rancidity at very low levels. Similarly, oxidative rancidity in milk was detected using metal oxide semiconductor thin-film sensors (Capone et al. 2001). Another promising area in which this technology is finding widespread adoption is the evaluation of fruit maturity. Tin oxide-based sensors were used by Simon and others (1996) to monitor blueberry flavor. Benady and others (1992) related electronic nose data to various ripeness indices such as slip pressure and classical volatile measurements in melons. Data from sensory panels were correlated with electronic nose data that registered gases from the degradation reactions in tomatoes (Simon et al. 1996). Young and others (1999) demonstrated that electronic nose technology using metal oxide sensors could be used as a potential maturity indicator to predict the harvest date for Royal Gala apples.
Table 10.1. Applications of different electronic nose systems to different foods.

Product | Application | Sensor Technology | Instrumental System/Manufacturer | Reference
Apple | Maturity | CP | Cyranose 320, Cyrano Sciences, Pasadena, CA | Pathange et al. 2006
Apple | Maturity | QMB | LibraNose, Technobiochip, Italy | Saevels et al. 2003
Barley grain | Mycotoxin contamination | MOSFET | VCM 422, S-SENCE, Linköping University, Linköping, Sweden | Olsson et al. 2002
Beef | Microbial quality | CP | Cyranose 320, Cyrano Sciences, Pasadena, CA | Balasubramanian et al. 2004
Bread | Spoilage molds | CP | BH114, Bloodhound Sensors Ltd, Leeds, UK | Keshri et al. 2002
Cod roe | Flavor profile | No information | FreshSense, IFL and Bodvaki-Maritech, Kópavogur, Iceland | Jonsdottir et al. 2004
Coffee | Classification based on sensory parameters | MOS | Pico1, laboratory-made | Pardo and Sberveglieri 2002
Crescenza cheese | Shelf life | MOS, MOSFET | Model 3320, Applied Sensor Laboratory Emission Analyser, Applied Sensor Co., Linköping, Sweden | Benedetti et al. 2005
Dairy products | Review article | — | — | Ampuero and Bosset 2003
Egg | Freshness assessment | MOS | Laboratory-made | Ritaban et al. 2003
Emmental cheese | Ripening | QMB | Laboratory-made | Bargon et al. 2003
Fish | Freshness | MOS | Laboratory-made | O'Connell et al. 2001
Fruit and grape wine | Discrimination based on type of fruit | MOS | FOX 3000, Alpha MOS, France | McKellar et al. 2005
Fuji apples | Maturity | QMB | LibraNose, Technobiochip, Italy | Echeverria et al. 2004
Ginseng | Change in aroma profile during preparation | MOS | FOX 3000, Alpha MOS, France | Lee et al. 2005
Ground red peppers | Capsaicin, dihydrocapsaicin, and total capsaicinoid levels | CP | e-NOSE 4000, EEV Inc., Amsford, NJ | Korel et al. 2002
Grapes | Grape aroma | SAW | zNose, Electronic Sensor Tech., Newbury Park, CA | Watkins and Wijesundera 2006
Hazel nuts | Varietal aroma | CP | eNOSE 4000, EEV Inc., NJ, USA | Alasalvar et al. 2004
Honey | Classification of samples of different geographical and botanical origin | MOS, MOSFET | Model 3320, Applied Sensor Laboratory Emission Analyser, Applied Sensor Co., Linköping, Sweden | Benedetti et al. 2004
Honey | Classification and detecting adulteration | SAW | zNose, Electronic Sensor Tech., Newbury Park, CA | Lammerty et al. 2004, Veraverbeke et al. 2005
Mandarin | Maturity | MOS | PEN2, WMA Airsense Analysentechnik GmbH, Schwerin, Germany | Gomez et al. 2006
Many fruits | Maturity | MOS | Laboratory-made | Brezmes et al. 2005
Milk | Spoilage | CP | Model BH-114, Bloodhound Sensors Ltd., Leeds, UK | Magan et al. 2001
Milk | Classification of samples from different dairies | MOS | Laboratory-made | Brudzewski et al. 2004
Milk | Rancidity | MOS | IME-CNR laboratory, Lecce | Capone et al. 2001
Milk | Microbial quality | CP | e-NOSE 4000, EEV Inc., Amsford, NJ | Korel and Balaban 2002
Oak barrels | Monitoring of toasting homogeneity | MOS | FOX 4000, Alpha MOS, France | Chatonnet and Dubourdieu 1999
Olive oil | Rancid defect | CP | AromaScan plc, Crewe, UK | Aparicio et al. 2000
Olive oil | Discrimination of different types of oil | CP | Laboratory-made | Stella et al. 2000
Oranges/Apples | Defects in post-harvest fruits | TSMR | University of Rome Tor Vergata and Technobiochip | Di Natale et al. 2001
Packaging | Odor of retained solvents | CP, MOS, QMB | Cyranose 320, Cyrano Sciences, Pasadena, CA; HKR Sensorsystems, Munich, Germany; FOX 3000, Alpha MOS, France | Van Deventer and Mallikarjunan 2002
Peanut | Off-flavor detection | CP | A-32S, AromaScan Inc., Hollis, NH | Osborn et al. 2001
Peanuts | Flavor fade | QMB | HKR Sensorsystems, Munich, Germany | Williams et al. 2006
Pectin gel | Changes in flavor release with aging | MOS | Laboratory-made | Monge et al. 2004
Pink salmon | Detecting spoilage | CP | Cyranose 320, Cyrano Sciences, Pasadena, CA | Chantarachoti et al. 2006
Pink Lady apple | Maturity | MOS | Laboratory-made | Brezmes et al. 2001
Poultry meat | Microbial quality | MOS | FOX 3000, Alpha MOS America, Inc., Hillsborough, NJ, USA | Dorothy and Boothe 2002
Raw and cooked cod fish | Quality degradation during storage | CP | e-NOSE 4000, EEV Inc., Amsford, NJ | Korel et al. 2001
Salmon fillets | Microbial quality | CP | AromaScan, AromaScan Inc., Hollis, NH | Du et al. 2002
Shrimp | Monitoring quality changes under different cooling conditions | No information | FreshSense, IFL and Bodvaki-Maritech, Kópavogur, Iceland | Zeng et al. 2005
Strawberry ice cream | Discrimination based on fat content | IMCELL, MOS | MGD-1, Environics Oy, Kuopio, Finland | Miettinen et al. 2002
Tea | Flavors of teas manufactured under different processing conditions | MOS | Laboratory-made | Dutta et al. 2003
Tomato | Various quality levels | QMB | enQbe, University of Rome 'Tor Vergata', Italy | Berna et al. 2005
Truffle | Aging | MOS | Pico1, laboratory-made | Falasconi et al. 2005
Various dairy products | Classification based on type of product | MOS | FOX 4000, Alpha MOS, France | Collier et al. 2003
Wine | Discrimination based on sensory quality | MOS | Laboratory-made | Penza and Cassano 2004a
Wine | Discrimination based on sensory quality | TSMR | University of Rome Tor Vergata and Technobiochip | Di Natale et al. 2004
Wine | Off-flavor detection | MOS | FOX 4000, Alpha MOS, France | Ragazzo-Sanchez et al. 2005
Wine | Discrimination based on quality | MOS | Laboratory-made | Garcia et al. 2006
Wine | Discrimination based on variety and origin | MOS | Laboratory-made | Santos et al. 2004
Yellowfin tuna | Microbial quality | CP | AromaScan, AromaScan Inc., Hollis, NH | Du et al. 2001
Yogurt | Fermentation control | MOSFET, MOS | No information | Cimander et al. 2002
According to Young and others (1999), electronic nose analysis was approximately 40 times more sensitive than headspace gas chromatography. With its high correlation to human sensory panels, this technology also has great potential in product development activities. With recent developments in real-time sensing with gas sensors, online implementation for process control is very promising. Cimander and others (2002) developed a system using MOSFET-based sensors integrated with a near-infrared sensing system for online monitoring of yogurt fermentation. Results showed that the proposed online sensor fusion improves monitoring and quality control of yogurt fermentation, with implications for other fermentation processes. To determine the ripening stage of Emmental cheese, a quartz microbalance-based sensing system was developed by Bargon and others (2003) to monitor the ripening process continuously. Similarly, the shelf life of Crescenza cheese stored at different temperatures was measured with a metal oxide-based sensing system (Benedetti et al. 2005). The technology also has potential for detecting pathogen contamination in selected foods when the contaminating population is sufficiently large yet still within the range relevant to the risk of human illness. In addition to applications in the food and bioprocess industries, electronic nose technology has been explored for use in the medical field as well. An electronic nose can examine odors from the body and identify possible health-related problems. Odors in the breath can be indicative of infections, diabetes, and gastrointestinal, sinus, and liver problems. Infected wounds and tissues emit distinctive odors that can be detected by an electronic nose. Odors from body fluids such as blood and urine can indicate liver and bladder problems and are typically measured with a blood gas analyzer. There is extensive literature available in this area. The scope of this book is limited to applications of electronic nose technology in the food industry; therefore, information related to medical applications is not included here.

Case Studies

Researchers at Virginia Tech have used electronic nose technology for various food-related applications including, but not limited to, detection of plasticizers in packaging material (van Deventer and Mallikarjunan 2002), discrimination of oil quality (Innawong et al. 2005),
determination of fruit maturity (Pathange et al. 2006, Athamneh et al. 2006), detection of oxidation-related quality changes in meat, peanuts, and milk (Ballard et al. 2005, Mallikarjunan et al. 2006), and spoilage detection in seafood products (Hu et al. 2005). Rather than relying on a single type of electronic nose, the research facility at Virginia Tech houses all three major electronic nose systems, in both handheld and desktop formats. This provided an opportunity to compare the technologies for a given application and to identify the system best suited to that application.
Detection of Retained Solvent Levels in Printed Packaging Material

Packaging suppliers use plasticizers to make printing adhere to packaging material. Some of the plasticizer can transfer into the food product and alter its taste and flavor. In addition, at higher levels these plasticizers pose health risks, and the industry wants to limit the level of plasticizer in printed packaging materials. The industry currently relies on a human sensory panel-based sniff test, having found that chromatographic techniques did not correlate with sensory results. It was therefore decided to explore the feasibility of using electronic nose technology for this application. Three different types of electronic nose systems were tested for their ability to discriminate the packaging based on contamination level, ease of use for training and prediction, and repeatability. (See Figure 10.1.) First, each system was optimized to obtain the maximum sensor response. The results of the optimization are described by van Deventer and Mallikarjunan (2002). Performance analyses of these systems, which use three leading sensor technologies, showed that the conducting polymer sensor technology demonstrated the most discriminatory power. All three technologies proved able to discriminate among different levels of retained solvents. Each complete electronic nose system was also able to discriminate between assorted packaging having either conforming or nonconforming levels of retained solvents. Each system correctly identified 100% of unknown samples. Sensor technology had a greater effect on performance than the number of sensors used. Based on discriminatory power and practical features, the FOX 3000 and the Cyranose 320 were superior (van Deventer and Mallikarjunan 2002).
Figure 10.1. Comparison of three types of electronic nose systems in discriminating packaging material based on the level of plasticizers.
Discrimination of Frying Oil Quality Based on the Usage Level

Various criteria are used to judge when frying oils need to be discarded. In restaurants and food services, changes in physical attributes of frying oils, such as color, odor, and foam level, have been used as indicators of oil quality. In the food industry, not only physical tests but also chemical tests are used to measure oil quality, including acidity, polymer content, and/or total polar content. Many of these methods do not correlate well with oil quality, and they are often time consuming and expensive. Previous methods for monitoring volatile compounds and aroma in food required either a highly trained sensory panel or GC/MS techniques. Thus, there has been a genuine need for a quick, simple, and powerful objective test for indicating oil deterioration. This study was conducted to determine the possibility of using a chemosensory system to differentiate among varying intensities of oil rancidity and to investigate discrimination between good, marginal, and unacceptable frying oils. Fresh, 1-day used, 2-day used, and discarded frying oils were obtained from a fast food restaurant in each frying cycle for 4 weeks. The oil samples were analyzed using a quartz microbalance-based chemosensory system. The discrimination between good, marginal, and unacceptable frying oils with regard to rancidity was examined, and the results were compared to physicochemical properties such as dielectric constant, peroxide value, and free fatty acid content. The different qualities of frying oils were successfully evaluated and discriminated using the chemosensory system. Good correlations (0.87 to 0.96) were found between changes in the physicochemical properties of the oil and the sensor signals (Innawong et al. 2005). Based on these results, oils with different usage levels were obtained from two different restaurants to discriminate between the usage levels (Bengtson et al. 2005). See Figure 10.2.

Evaluating Apple Maturity

Harvested apples are sometimes a mixture of mature, immature, and overmature fruits, and the quality of an apple depends primarily on its level of maturity at the time of harvest. Even though the external appearance of an immature fruit may look perfect for harvest, storage, and sale, these apples do not ripen normally because of their preclimacteric physiological condition, and thus their taste is strongly impaired by the lack of full-flavor compounds. On the other hand, overmature fruits have
Figure 10.2. Discrimination of frying oil using a chemosensory system (panels for Restaurant A and Restaurant B; sample groups: fresh, used, marginal, and discarded).
Figure 10.3. Evaluation of apple maturity using a conducting polymer-based sensing system.
a shorter storage life, soften rapidly, develop storage disorders such as off-flavors, lack firm texture, and are unattractive in appearance. Currently, random and destructive sampling techniques are used to evaluate apple quality. Thus, there is a need for a nondestructive technique to assess apple quality. Gala and York apples were harvested at different times to obtain different maturity groups (immature, mature, and ripe). Headspace evaluation was performed first, and maturity indices were measured within 24 hours after harvest. Individual apples were placed in a 1.5-liter glass bottle, and the headspace gas from the bottle was exposed to the electronic nose. A conducting polymer-based sensing system was used for the apple maturity evaluation. Maturity indices such as starch index, puncture strength, total soluble solids, and titratable acidity were used to categorize the apples into three maturity groups referred to as immature, mature, and overmature fruits. See Figure 10.3. Multivariate analysis of variance (MANOVA) of the electronic nose sensor data indicated that there were distinct maturity groups (Wilks' Lambda F = 3.7, P < 0.0001). From the discriminant analysis (DA), the electronic nose could effectively categorize Gala apples into the
three maturity groups with a correct classification percentage of 83% (Pathange et al. 2006).

Detection of Spoilage and Discrimination of Raw Oyster Quality

The effectiveness of two handheld electronic nose systems in assessing the quality of raw oysters was studied on live oysters stored at 4 and 7°C for 14 days. Electronic nose data were correlated with a trained sensory panel evaluation by Quantitative Descriptive Analysis (QDA) and with microbial enumeration. Oysters stored at both temperatures exhibited varying degrees of microbial spoilage, with bacterial load reaching 10⁷ colony-forming units per gram (CFU/g) at day 7 for 7°C storage. See Figure 10.4. The Cyranose 320 electronic nose system was capable of generating characteristic smell prints to differentiate oysters of varying age (100% separation). The validation results showed that the Cyranose 320 can identify the quality of oysters in terms of storage time with 93% accuracy. Comparatively, the correct classification rate for the VOCChek electronic nose was only 22%. Correlation of electronic nose data with microbial counts suggested that the Cyranose 320 was able to predict the microbial quality of oysters. Correlation of sensory panel scores with electronic nose data showed that the electronic nose has potential as a quality assessment tool for mapping varying degrees of oyster quality.
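As an illustration of the multivariate workflow used in these case studies, the sketch below shows how sensor-array responses could be classified into maturity or quality groups with linear discriminant analysis and cross-validation. It is a hypothetical example built on simulated data, not the data set or software used in the Virginia Tech studies; the array size, class labels, and noise level are all assumed.

```python
# Hypothetical sketch: classifying electronic-nose sensor responses into
# three quality/maturity groups with linear discriminant analysis (LDA),
# the same family of methods (MANOVA/DA) cited above. Shapes, labels, and
# noise level are illustrative, not the published data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated responses of a 12-sensor array for three groups
# (0 = immature, 1 = mature, 2 = overmature).
n_per_class, n_sensors = 30, 12
class_means = rng.normal(0.0, 1.0, size=(3, n_sensors))
X = np.vstack([m + 0.8 * rng.normal(size=(n_per_class, n_sensors))
               for m in class_means])
y = np.repeat([0, 1, 2], n_per_class)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"mean correct classification: {scores.mean():.0%}")
```

The cross-validated accuracy reported by such a script plays the same role as the correct classification percentages quoted for the apple and oyster studies.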
Summary

Electronic nose technology is an emerging analytical tool with excellent potential for implementation in the food industry for quality control, quality assurance, product safety evaluation, new product development, and process control. Many systems are available in the marketplace, using a multitude of sensing technologies, software capabilities, and hardware configurations. System costs range from $5,000 to $120,000 with varying capabilities and sensitivities. The technology has not yet been widely adopted by the market, mainly because of a lack of confidence in the technology and the limited number of applications available for immediate adoption in the industry. In addition, the technology is perceived as a technology push from the instrument manufacturers without clear implementation strategies for the food industry. Successful development
Figure 10.4. Discrimination of raw oyster quality by two types of handheld electronic nose systems.
and implementation of this technology across a wide range of applications, together with the resulting research publications, should in the near future generate sufficient support for it within the food industry.
References Alasalvar C, AZ Odabasi, N Demir, MO Balaban, F Shahidi, and KR Cadwallader. 2004. Volatiles and flavor of five Turkish hazelnut varieties as evaluated by descriptive sensory analysis, electronic nose, and dynamic headspace analysis/gas chromatography-mass spectrometry. J. Food Sci. 69(3):SNQ99–SNQ106. AlphaM.O.S. 2000. AlphaM.O.S. Introductory Manual. AlphaM.O.S. Toulouse, France. Ampuero S and JO Bosset. 2003. The electronic nose applied to dairy products: a review. Sensors and Actuators B: Chemical. 94(1):1–12. Aparicio R, SM Rocha, I Delgadillo, and MT Morales. 2000. Detection of rancid defect in virgin olive oil by the electronic nose. J. Agric. Food Chem., 48 (3):853 –860. Athamneh A, P Mallikarjunan, and B Zoecklin. 2006. A comparison of chemosensory and analytical analysis for evaluating grape maturity. Annual Meeting of American Society of Enology and Viticulture, Eastern Section, Rochester, NY, July 9–12. Baby RE, M Cabezas, A Kutschker, V Messina, and N Wals¨oe de Reca. 2005. Discrimination of different valerian types with an electronic nose. J. Argent. Chem. Soc. 93:1–3. Balasubramanian S, S Panigrahi, CM Logue, M Marchello, C Doetkott, H Gu, et al. 2004. Spoilage Identification of Beef Using an Electronic Nose System Trans. ASAE. 47:1625–1633. Ballard T, P Mallikarjunan, S O’Keefe, and H Wang. 2005. Optimization of electronic nose response for analyzing volatiles arising from lipid oxidation in cooked meat. Annual Meeting of Institute of Food Technologists, July 16–20, New Orleans, LA. Bargon J, S Brascho, J Florke, U Herrmann, L Klein, JW Loergen, et al. 2003. Determination of the Ripening State of Emmental Cheese Via Quartz Microbalances. Sensor Actuat B-Chem. 95:6–19. Bartlett PN, JM Elliot, and JW Gardner. 1997. Electronic noses and their application in the food industry. Food Tech. 51(12):44–48. Bazzo S, F Loubet, S Labreche, and TT Tan. 1999. Optimization of Fox sensor array system for QC packaging in factory environment. Robustness, sample throughput and transferability. In: Hurst J, Ed. Electronic noses & sensor array based systems: design & applications. Lancaster, Penn: Technometric. pp. 225–144. Benady JE, D Simon, J Charles, and GE Miles. 1992. Determining melon ripeness by analyzing headspace gas emissions. ASAE Paper # 92-6055, ASAE, St. Joseph, MI 49085. Benedetti S, S Mannino, AG Sabatini, and GL Marcazzan. 2004. Electronic Nose and Neural Network Use for the Classification of Honey. Apidologie. 35:397–402.
Benedetti S, N Sinelli, S Buratti, and M Riva. 2005. Shelf Life of Crescenza Cheese as Measured by Electronic Nose. J. Dairy Sci. 88:3044–3051. Bengtson R, P Mallikarjunan, R Moreira, K Muthukumarappan, L Wilson, and D Weisenborn. 2005. Measurement of Frying Oil Quality by Various Objective Methods. Status: Published or completed this reporting year. ASAE Paper No. 056168. American Society of Agricultural Engineers, St. Joseph, MI. Berna AZ, S Buysens, CD Natale, IU Grun, J Lammertyn, and BM Nicolai. 2005. Relating Sensory Analysis with Electronic Nose and Headspace Fingerprint Ms for Tomato Aroma Profiling. Postharvest Biology and Technology. 36:143–155. Box G and NR Draper. 1987. Empirical model-building and response surfaces. New York: John Wiley & Sons. 669 p. Brezmes J, E Llobet, X Vilanova, J Orts, G Saiz, and X Correig. 2001. Correlation between Electronic Nose Signals and Fruit Quality Indicators on Shelf-Life Measurements with Pinklady Apples. Sensor Actuat B-Chem. 80:41–50. Brezmes J, MLL Fructuoso, E Llobet, X Vilanova, I Recasens, J Orts, et al. 2005. Evaluation of an Electronic Nose to Assess Fruit Ripeness. IEEE Sensors Journal. 5:97–108. Brudzewski K, S Osowski, and T Markiewicz. 2004. Classification of Milk by Means of an Electronic Nose and Svm Neural Network. Sensor Actuat B-Chem. 98:291–298. Capone S, M Epifani, F Quaranta, P Siciliano, A Taurino, and L Vasanelli. 2001. Monitoring of Rancidity of Milk by Means of an Electronic Nose and a Dynamic Pca Analysis. Sensor Actuat B-Chem. 78:174–179. Chantarachoti J, ACM Oliveira, BH Himelbloom, CA Crapo, and DG McLachlan. 2006. Portable Electronic Nose for Detection of Spoiling Alaska Pink Salmon (Oncorhynchus Gorbuscha). J. Food Sci. 71:S414–S421. Chatonnet P and D Dubourdieu. 1999. Using Electronic Odor Sensors to Discriminate among Oak Barrel Toasting Levels. J. Agric. Food Chem. 47:4319–4322. Cimander C, M Carlsson, and C-F Mandenius. 2002. Sensor Fusion for on-Line Monitoring of Yoghurt Fermentation. J. Biotechnol. 99:237–248. Collier WA, DB Baird, ZA Park-Ng, N More, and AL Hart. 2003. Discrimination among Milks and Cultured Dairy Products Using Screen-Printed Electrochemical Arrays and an Electronic Nose. Sensor Actuat B-Chem. 92:232–239. Cyrano Sciences. 2000. Cyranose 320 User’s Manual. Cyrano Sciences. Pasadena, CA. Devineni N, P Mallikarjunan, MS Chinnan, and RD Phillips. 1997. Supercritical fluid extraction of lipids from deep-fried food products. J Amer Oil Chem Soc. 74(12):1517–1523. Di Natale C, A Macagnano, E Martinelli, R Paolesse, E Proietti, and A D’Amico. 2001. The Evaluation of Quality of Post-Harvest Oranges and Apples by Means of an Electronic Nose. Sensor Actuat B-Chem. 78:26–31. Di Natale C, R Paolesse, M Burgio, E Martinelli, G Pennazza, and A D’Amico. 2004. Application of Metalloporphyrins-Based Gas and Liquid Sensor Arrays to the Analysis of Red Wine. Analytica Chimica Acta. 513:49–56. Dorothy DH and JWA Boothe. 2002. Electronic Nose Analysis of Volatile Compounds from Poultry Meat Samples, Fresh and after Refrigerated Storage. J. Sci. Food Agric. 82:315–322.
Du W-X, J Kim, JA Cornell, T-S Huang, MR Marshall, and C-I Wei. 2001. Microbiological, Sensory, and Electronic Nose Evaluation of Yellowfin Tuna under Various Storage Conditions. J. Food Prot. 64:2027–2036. Du WX, CM Lin, T Huang, J Kim, M Marshall, and CI Wei. 2002. Potential Application of the Electronic Nose for Quality Assessment of Salmon Fillets under Various Storage Conditions. J. Food Sci. 67:307–313. Dutta R, EL Hines, JW Gardner, KR Kashwan, and M Bhuyan. 2003. Tea Quality Prediction Using a Tin Oxide-Based Electronic Nose: An Artificial Intelligence Approach. Sensor Actuat B-Chem. 94:228–237. Echeverria G, E Correa, M Ruiz-Altisent, J Graell, J Puy, and L Lopez. 2004. Characterization of Fuji Apples from Different Harvest Dates and Storage Conditions from Measurements of Volatiles by Gas Chromatography and Electronic Nose. J. Agric. Food Chem. 52:3069–3076. Falasconi M, M Pardo, G Sberveglieri, F Battistutta, M Piloni, and R Zironi. 2005. Study of White Truffle Aging with Spme-Gc-Ms and the Pico2-Electronic Nose. Sensor Actuat B-Chem. 106:88–94. Garcia M, M Aleixandre, J Gutierrez, and MC Horrillo. 2006. Electronic Nose for Wine Discrimination. Sensor Actuat B-Chem. 113:911–916. Gomez AH, J Wang, G Hu, and AG Pereira. 2006. Electronic Nose Technique Potential Monitoring Mandarin Maturity. Sensor Actuat B-Chem. 113:347–353. Gorsuch RL. 1983. Factor analysis 2nd ed. Hillsdale, New Jer: Lawrence Erlbaum Assoc. 425 p. Guertin W, and JP Bailey. 1970. Introduction to modern factor analysis. Michigan: Edwards Brothers, Inc. 472 p. Hansen WG, and SC Wiedemann. 1999. Evaluation and optimization of an electronic nose. In: J Hurst, Ed. Electronic noses & sensor array based systems: design & applications. Lancaster, Penn: Technometric. pp. 131–144. Harman HH. 1976. Modern factor analysis. Chicago: University of Chicago Press. 487 p. Harper WJ and JP Kleinhenz. 1999. Factors affecting sensory and electronic nose threshold values for food aroma compounds. In: J Hurst, Ed. Electronic noses & sensor array based systems: design & applications. Lancaster, Penn: Technometric. pp. 308–317. Haugen JE and K Kvaal. 1998. Electronic nose and artificial neural network. Meat Sci, 49, Supplement 1:273–286. Hodgins D and D Conover. 1995. Evaluating the electronic NOSE. Perfumer & Flavorist 20(6):1–8. Hodgins D and D Simmonds. 1995. Sensory technology for flavor analysis. Cereal Foods World 40(4):186–191. Hu X, RC Quillin, BM Matanin, B Cheng, P Mallikarjunan, and D Vaughan. 2005. Development of non-destructive methods to evaluate oyster quality by electronic nose technology. ASAE paper #05-6097. American Society of Agricultural Engineers, St. Joseph, MI. Innawong B, P Mallikarjunan, and JE Marcy. 2005. The determination of frying oil quality using a chemosensory system. Lebensmittel-Wissenschaft und Technologie, 37 (1):35–41.
Johnson DE. 1998. Applied Multivariate Methods for Data Analysis, Duxbury Press, Brooks/Cole Publishing Company, Pacific Grove, CA. p.1–8, 93–105, 113–119, 217–164, 494–511. Jolliffe IT. 1986. Principal component analysis. New York: Springer-Verlag. 271 p. Jonsdottir R, G Olafsdottir, E Martinsdottir, and G Stefansson. 2004. Flavor Characterization of Ripened Cod Roe by Gas Chromatography, Sensory Analysis, and Electronic Nose. J. Agric. Food Chem. 52:6250–6256. Keshri G, P Voysey, and N Magan. 2002. Early Detection of Spoilage Moulds in Bread Using Volatile Production Patterns and Quantitative Enzyme Assays. J. Appl. Microbiol. 92:165–172. Knapp RB. 1998. Mahalanobis metric. http://www.engr.sjsu.edu/∼knapp/HCIRODPR/ PR Mahal/M metric.htm. San Jose State University. Posted July 16, 1998. Korel F, DA Luzuriaga, and M Balaban. 2001. Quality Evaluation of Raw and Cooked Catfish (Ictalurus Punctatus) Using Electronic Nose and Machine Vision. Journal of Aquatic Food Product Technology. 10:3–18. Korel F and MO Balaban. 2002. Microbial and Sensory Assessment of Milk with an Electronic Nose. J. Food Sci. 67:758–764. Korel F, N Bagdatlioglu, MO Balaban, and Y Hisil. 2002. Ground Red Peppers: Capsaicinoids Content, Scoville Scores, and Discrimination by an Electronic Nose. J. Agric. Food Chem. 50:3257–3261. Lachenburg PA and MR Mickey. 1968. Estimation of Error Rates in Discriminant Analysis. Technometrics, 10, No. 1, pp. 1–11. Lammerty J, EA Veraverbeke, and J Irudayaraj. 2004. zNose technology for the classification of honey based on rapid aroma profiling. Sensors and Actuators B 98:54– 62. Lawley DN and AE Maxwell. 1971. Factor analysis as a statistical method. New York: American Elsevier Pub. Co. 153 p. Lee SK, JH Kim, HJ Sohn, and JW Yang. 2005. Changes in Aroma Characteristics During the Preparation of Red Ginseng Estimated by Electronic Nose, Sensory Evaluation and Gas Chromatography/Mass Spectrometry. Sensor Actuat B-Chem. 106:7–12. Magan N, A Pavlou, and I Chrysanthakis. 2001. Milk-Sense: A Volatile Sensing System Recognises Spoilage Bacteria and Yeasts in Milk. Sensor Actuat B-Chem. 72:28– 34. Mallikarjunan S, P Mallikarjunan, and SE Duncan. 2006. Evaluating levels of photo oxidation in milk using an electronic nose. Description: Abstract # 78E-05, Annual Meeting of Institute of Food Technologists, Orlando, FL, June 24–28. Marcus L. 2001. Mahalanobis Distance http://www.qc.edu/Biology/fac stf/marcus/ multisyl/fourth.htm. Queens College. Posted August 3, 2001. McKellar RC, HPV Rupasinghe, X Lu, and KP Knight. 2005. The Electronic Nose as a Tool for the Classification of Fruit and Grape Wines from Different Ontario Wineries. J. Sci. Food Agric. 85:2391–2396. Mielle P. 1996. ‘Electronic noses’: towards the objective instrumental characterization of food aroma. Trends in Food Sci & Technol. 7(12):432–438. Mielle P and F Marquie. 1998. ‘Electronic nose’: improvement of the reliability of the product database using new dimensions. Sem in Food Anal. 3:93–105.
Miettinen SM, V Piironen, H Tuorila, and L Hyvoenen. 2002. Electronic and Human Nose in the Detection of Aroma Differences between Strawberry Ice Cream of Varying Fat Content. J. Food Sci. 67:425–430. Monge ME, D Bulone, D Giacomazza, DL Bernik, and RM Negri. 2004. Detection of Flavour Release from Pectin Gels Using Electronic Noses. Sensor Actuat B-Chem. 101:28–38. Nakamoto T and T Moriizumi. 1999. Developments and trends in QCM odor sensing systems. In: J Hurst, Ed. Electronic noses & sensor array based systems: design & applications. Lancaster, Penn: Technometric. pp. 123–130. Newman DJ, DA Luzuriaga, and MO Balaban. 1999. Odor and Microbiological Evaluation of raw tuna: correlation of sensory and electronic nose data. In: J Hurst, Ed. Electronic noses & sensor array based systems: design & applications. Lancaster, Penn: Technometric. pp. 170–184. O’Connell M, G Valdora, G Peltzer, and R Martin Negri. 2001. A Practical Approach for Fish Freshness Determinations Using a Portable Electronic Nose. Sensor Actuat B-Chem. 80:149–154. Olsson J, T Borjesson, T Lundstedt, and J Schnurer. 2002. Detection and Quantification of Ochratoxin and Deoxynivalenol in Barley Grains by Gc-Ms and Electronic Nose. Int. J. Food Microbiol. 72:203–214. Osborn GS, RE Lacey, and JA Singleton. 2001. Non-Destructive Detection of Peanut Off-Flavors Using an Electronic Nose. Trans. ASAE. 44:939–944. Pardo M and G Sberveglieri. 2002. Coffee Analysis with an Electronic Nose. IEEE T. Instrum. Meas. 51:1334–1339. Pathange LP, P Mallikarjunan, RP Marini, S O’Keefe, and D Vaughan. 2006. NonDestructive Evaluation of Apple Maturity Using an Electronic Nose System. J. Food Eng. 77:1018–1023. Payne JS. 1998. Electronic nose technology: an overview of current technology and commercial availability. Food Sci and Technol Today. 12(4):196–200. Penza M and G Cassano. 2004a. Chemometric Characterization of Italian Wines by Thin-Film Multisensors Array and Artificial Neural Networks. Food Chemistry. 86:283–296. Perkin-Elmer Corporation. 1999. Operators manual for the chemosensory-system QMB6 / HS40XL. Norwalk, CT. HKR Sensorsysteme GmbH. Persaud KC, RA Bailey, AM Pisanelli, HG Byun, DH Lee, and JS Payne. 1999. Conducting polymer sensor arrays. In: J Hurst, Ed. Electronic noses & sensor array based systems: design & applications. Lancaster, Penn: Technometric. pp. 318–328. Pope K. 1995. Technology improves on the nose as scientists try to mimic smell. Wall Street J March 1:B1. Ragazzo-Sanchez JA, P Chalier, and C Ghommidh. 2005. Coupling Gas Chromatography and Electronic Nose for Dehydration and Desalcoholization of Alcoholized Beverages: Application to Off-Flavour Detection in Wine. Sensor Actuat B-Chem. 106:253–257. Ritaban D, LH Evor, WG Julian, DU Dociana, and B Pascal. 2003. Non-Destructive Egg Freshness Determination: An Electronic Nose Based Approach. Meas. Sci. Technol. 14:190.
Roussel S, G Forsberg, V Steinmetz, P Grenier, and V Bellon-Maurel. 1998. Optimization of electronic nose measurements. Part I: Methodology of output feature selection. J Food Eng. 37:207–222. Roussel S, G Forsberg, P Grenier, and V Bellon-Maurel. 1999. Optimization of electronic nose measurements. Part II: Influence of experimental parameters. J Food Eng, 39:9–15. Saevels S, J Lammertyn, AZ Berna, EA Veraverbeke, C Di Natale, and BM Nicolai. 2003. Electronic Nose as a Non-Destructive Tool to Evaluate the Optimal Harvest Date of Apples. Postharvest Biology and Technology. 30:3–14. Santos JP, T Arroyo, M Aleixandre, J Lozano, I Sayago, M Garcia, et al. 2004. A Comparative Study of Sensor Array and Gc-Ms: Application to Madrid Wines Characterization. Sensor Actuat B-Chem. 102:299–307. SAS System Help. 1988. SAS Software Version 7. SAS Institute Inc., Cary, NC, USA. Schaak RE, DB Dahlberg, and KB Miller. 1999. The electronic nose: studies on the fundamental response and discriminative power of metal oxide sensors. In: J Hurst, Ed. Electronic noses & sensor array based systems: design & applications. Lancaster, PA: Technometric. pp. 14–26. Schaller E and JO Bosset. 1998. ‘Electronic noses’ and their application to food: a review. Seminars in Food Anal. 3:119–124. Simon JE, A Hetzroni, B Bordelon, GE Miles, and DJ Charles. 1996. Electronic sensing of aromatic volatiles for quality sorting of blueberries. J. Food Sci. 61(5):967–969. Staples EJ. 2000. The zNoseTM , a New Electronic Nose Using Acoustic Technology. J Acoust Soc Am. 108, 2495. Stella R, JN Barisci, G Serra, GG Wallace, and D De Rossi. 2000. Characterisation of Olive Oil by an Electronic Nose Based on Conducting Polymer Sensors. Sensor Actuat B-Chem. 63:1–9. Strassburger KJ. 1998. Electronic nose technology in the flavor industry: moving from R&D to the production floor. Seminars in Food Anal. 3:5–13. Van Deventer D and P Mallikarjunan. 2002. Comparative performance analysis of three electronic nose systems using different sensor technologies in odor analysis of retained solvents on printed packaging. J. Food Sci. 67:3170–3183. Veraverbeke EA, J Irudayaraj, and J Lammertyn. 2005. Fast aroma profiling to detect invert sugar adulteration with zNose. J Sci Food Agric. 85:243–250. Watkins P and C Wijesundera. 2006. Application of zNoseTM for the analysis of selected grape aroma compounds. Talanta. 70:595–601. Williams JE, SE Duncan, RC Williams, K Mallikarjunan, WN Eigel, and SF O’Keefe. 2006. Flavor Fade in Peanuts During Short-Term Storage. J. Food Sci. 71:S265– S269. Young H, K Rossiter, M Wang, and M Miller. 1999. Characterization of Royal Gala apple aroma using electronic nose technology-potential maturity indicator. J. Agric. Food. Chem. 47:5173–5177. Zeng QZ, KA Thorarinsdottir, and G Olafsdottir. 2005. Quality changes of shrimp (pandalus borealis) stored under different cooling conditions. J. Food Sci. 70:s459– s466.
Chapter 11
Biosensors: A Theoretical Approach to Understanding Practical Systems
Yegermal Atalay, Pieter Verboven, Steven Vermeir, and Jeroen Lammertyn
Introduction

As a consequence of several recent food crises, consumer attention has been directed toward improved food quality and safety. Whereas food safety concerns the presence of harmful components such as toxins and chemicals (polychlorinated biphenyls [PCBs], dioxins, etc.), food quality is determined by sensory properties such as taste, aroma, texture, and appearance. The measurement of many of these quality attributes often requires sophisticated analytical techniques such as liquid chromatography and gas chromatography-mass spectrometry. Screening food products for toxins, off-flavors, taste components, etc., becomes an expensive and laborious task, since traditional analytical techniques are often too costly and time consuming to be included in routine quality measurements on food products. The development of biosensors offers an alternative approach to this problem. Biosensors are a subgroup of chemical sensors in which the detection of a chemical component is based on a specific interaction of this component with a biorecognition molecule, which can be an enzyme, antibody, aptamer, microorganism, or even a whole cell. This biological sensing element is integrated with, or is in intimate contact with, a physicochemical transducer (Thèvenot et al. 2001, Lammertyn et al. 2006). A wide range of transducers is available to detect the interaction between the analyte and the biorecognition molecule and convert it into an electronic signal. Electrochemical,
Figure 11.1. Principle of a biosensor. There are three main parts in a biosensor: biorecognition elements, which recognize the substance of interest; a transducer, which converts the biorecognition event into a measurable signal; and a signal processing system, which converts the signal into a workable form.
optical, thermal, and mass-sensitive transduction mechanisms have been used in biosensor development over the past decade (Ramsay 1998). The typical composition of a biosensor is depicted in Figure 11.1. High selectivity and specificity, a relatively low production cost, a limited sample preparation time, and the potential for miniaturization are the main advantages of biosensors over conventional analytical methods. Even though the driving force for the development of biosensors has come from the health care industry, there have been many suggested applications in food, bioprocessing, agriculture, and the environment. For instance, biosensors have great potential for monitoring food composition (for example, carbohydrates and organic acids) and product freshness (for example, fish, fruits, and vegetables), as well as for online process control and fermentation processes. Many food safety applications are reported and discussed in the literature, such as rapid detection of pathogenic organisms, pesticides, microorganisms, and toxins (Whitaker 1994, Bilitewski and Rohm 1997, Ivnitski et al. 2000, Mello and Kubota 2002, Prodromidis and Karayannis 2002, Rajendran and Irudayaraj 2002, Kim and Park 2003, Sharma et al. 2004). Despite this potential, not many biosensors have reached the point of commercialization, due to factors such as stability (the limited lifetime of the biological component), mass production, storage, and
sensitivity (practicality in handling real samples), which will have to be studied further before those biosensors can be commercialized. Scientists are now trying to apply sensor technology in every aspect of life, so considerable effort is being devoted to improving the performance of biosensors and reducing their production cost. Biosensor technology also benefits from the fast growth of microelectronics, which has resulted in advanced biochips that combine knowledge of microfluidics with microelectronics (Weigl et al. 2003, Erickson and Li 2004).

Although the performance of a biosensor is evaluated for a particular application, the basic performance criteria should be addressed in the design of any successful biosensor. These include calibration characteristics (that is, sensitivity, detection and quantitative determination limits, and operational and linear concentration range), selectivity and reliability, response time, high sample throughput, reproducibility, stability, and lifetime (Buerk 1993, Thèvenot et al. 2001, Eggins 2002). In addition, the complete biosensor should be cheap, small, portable, and capable of being used by semiskilled operators.

Miniaturization of biosensors is one of the recent trends, aiming at increased performance and portability as well as low-cost mass production. To achieve this, all the geometric and operational parameters have to be optimized properly (Li 2004, Squires and Quake 2005). However, it is both time-consuming and costly to study the effects of these parameters on performance by conventional prototyping. The modern approach of numerical prototyping formulates mathematical models that best describe the system and uses powerful computers to find the optimum design parameters. This not only cuts the cost of experimentation by reducing the number of experiments needed to analyze a particular problem (for example, by avoiding the use of a factorial experimental design), but it can also be used to explore problems that are difficult or expensive to test and to extrapolate to unexplored or unexplorable regions. Mathematical modeling is defined as the description of a physical system by a set of mathematical relationships that allow the response of the system to various inputs to be calculated. Appropriate mathematical models for biosensors are most often sets of partial differential or integro-differential equations and boundary conditions that govern the fluid flow, species transport (with possible biochemical kinetics), and energy transport in the system. This chapter outlines the main aspects of modeling biosensors and presents a few case studies.
Figure 11.2. A Flow Injection Analysis (FIA) biosensor configuration. By means of the valve system, the sample is automatically injected into the carrier fluid and transported to the biorecognition unit where interaction occurs between the biorecognition element and target molecule. The generated signal is then recorded for further analysis.
Flow-type Biosensors

Principles

Flow-type biosensors are particularly suited for online assays and are largely based on concepts of flow injection analysis (FIA). The FIA technique was developed in the mid-1970s to automate wet chemistry assays (Bilitewski and Rohm 1997). Automation is achieved by carrying out analyses in a flow system in which a pump continuously draws sample and reagent solutions into plastic tubing and pushes them forward through the system toward the detector (Figure 11.2). The sample solution is dispensed into the carrier stream by an injection valve. A specific product or signal is generated as a result of a reaction (mostly by means of enzymes). By connecting a detector at the end of the sample's flow path, automated detection of the processed sample is ensured. Compared to manual analyses, the tubing lines serve as reactor and transfer vessels, the injection valve serves as a micropipette, and the pump replaces the lab technician performing the assay. FIA technology has been very successful in simplifying chemical assays because of several advantages: lower analysis costs due to automation, higher sample throughput, higher precision, and lower sample and reagent consumption and, hence, less waste production. The biorecognition elements can be immobilized by different techniques (for example, DNA molecules and antibodies) or can be used in solution (for example, enzymes) and injected with the substrate. In the latter
case, it is easier to perform multiple experiments in a row, but biorecognition elements are lost after each measurement.

Miniaturization and Microfluidics

Besides practical advantages such as portability and minimal operating cost, miniaturization has the principal advantage of improving the analytical performance of the process (Weigl et al. 2003, Erickson and Li 2004). This is due to the compactness and the high surface area-to-volume ratios of microscopic fluidic devices, which make them an attractive alternative to conventional flow systems. For a cylindrical microtube with a 50-micrometer (µm) radius, the surface area-to-volume ratio reaches 4 × 10⁴ m⁻¹, which is very large (Bousse et al. 2000, Koch et al. 2000, Koo and Kleinstreuer 2003). Furthermore, it is possible to reduce the molecular diffusion time significantly by handling microvolumes of fluids in small channels, in comparison to handling large volumes of reactants in ordinary macroscale devices. This is because the diffusion time (t) varies with the square of the distance x the molecules travel (t ∝ x²/D, with D the diffusion coefficient) (Bousse et al. 2000). Microfluidics is the science of designing, manufacturing, and formulating devices and processes that deal with volumes of fluid on the order of nanoliters, which is particularly important when the reagents used are expensive. A microfluidic device can be identified by the fact that it has one or more channels with at least one dimension less than 1 millimeter (mm). The fabrication of microdevices is relatively inexpensive and very amenable both to highly elaborate, multiplexed devices and to mass production (Bousse et al. 2000, Koch et al. 2000, Koo and Kleinstreuer 2003, Karniadakis et al. 2005). Most microdevices have dimensions in the range of 30 to 300 µm (Sharp et al. 2002). Similar to microelectronics, microfluidic technologies enable the fabrication of highly integrated devices with different functions on one substrate chip. One of the long-term goals in the field of microfluidics is to create integrated, portable diagnostic devices for home and bedside use, thereby eliminating time-consuming laboratory analysis procedures. For these reasons, not surprisingly, the medical industry has shown a dedicated interest in microfluidics technology. However, experimental evidence has shown that fluid flow in microchannels differs from macrochannel flow behavior. Moreover, the laboratory observations are often inconsistent and contradictory. Thus, it is important to be aware of the theory of
microfluidics before directly applying flow theory that is applicable to a macrofluidic system (Bousse et al. 2000, Polson and Hayes 2001, Koo and Kleinstreuer 2003, Li 2004).

Flow Mechanisms in Microchannels

The flow of a fluid through a microfluidic channel can be characterized by the Reynolds number (Re = ρud/µ, where ρ [kg m⁻³] is the density, u [m s⁻¹] is the average velocity, d [m] is the characteristic dimension of the channel, and µ [kg m⁻¹ s⁻¹] is the dynamic viscosity). Due to the small dimensions of microchannels, the Reynolds number is usually lower than 100, often even lower than 1. In this Reynolds number regime, flow is completely laminar and no turbulence occurs. The transition to turbulent flow generally occurs in the range of Reynolds numbers larger than 2,000 (Koch et al. 2000). Laminar flow provides a means by which molecules can be transported in a relatively predictable manner through microchannels (Weigl et al. 2003). In the long, narrow geometries of microchannels, flows are also predominantly uniaxial; the entire fluid moves parallel to the local orientation of the walls. The significance of uniaxial laminar flow is that all transport of momentum, mass, and heat in the direction normal to the flow is by molecular mechanisms: molecular viscosity, molecular diffusivity, and thermal conductivity (Squires and Quake 2005). Diffusion becomes the main method to move particles, mix fluids, and control reaction rates in biochips (Erickson and Li 2004). As the channel dimension diminishes, the relative importance of surface and interfacial phenomena (such as surface tension, roughness, and electrokinetic effects) increases (Sharp et al. 2002, Li 2004). Exploiting the latter phenomena for fluid transport is what currently makes microfluidics such an interesting field of study (Squires and Quake 2005). Fluid can be transported through a microfluidic device in different ways. The driving force can be a pressure or a voltage difference applied over the microchannel, or surface forces such as capillarity. Hydrodynamic pressure is customarily used, but for small channels, pressure-driven flow exhibits a parabolic velocity profile, and the pressure drop (inversely proportional to the second power of the transverse dimension of the channel) will be very large, making the method impractical for some applications (Karniadakis et al. 2005). Even though other creative solutions have been proposed, electrokinetics has been and still is
Figure 11.3. Cross-sectional flow profiles due to pressure-driven (hydrodynamic) flow and electrokinetic flow in a microchannel.
the basis for most microfluidic devices (Polson and Hayes 2001). Electrokinetics is a phenomenon that involves the interaction between solid surfaces (such as glass or polymer-based substrates), ionic solutions, and applied electric fields. Electroosmosis and electrophoresis are the two important classes used to transport fluids and particles in microfluidic devices. Electroosmosis is widely used for sample injection and transport in microchannels, whereas electrophoresis is widely used in capillary gel electrophoresis and capillary zone electrophoresis, which are both suitable for the separation of chemical species, such as DNA fractionation (Weigl et al. 2003, Erickson and Li 2004). Electroosmotic flow (EOF) is the most popular electrokinetic technique and has considerable advantages over pressure-driven flow, especially in microfluidic and nanofluidic devices. In contrast to pressure-driven flow, the velocity profile is uniform across the microfluidic channel (Figure 11.3). The pluglike velocity profile results in significantly less dispersion than pressure-driven flow (Karniadakis et al. 2005, Squires and Quake 2005), which is of high importance for the sensitivity of the biosensor. With the appropriate application of potentials, its valveless control of fluid flow is favored in high-performance sample separation techniques. It is also well suited to miniaturization, as the need for additional structures such as pumps and valves becomes less important. EOF also has a serious drawback: it is difficult to control, because the flow depends on the physicochemical properties of the solution and the channel walls. The
surface charge density varies with solution pH, ionic strength, and the solutes adsorbed on the walls, which implies that the electroosmotic velocity may change over the course of the process (Bousse et al. 2000, Liu et al. 2004).
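To make the flow regime discussed in this section concrete, the short sketch below evaluates the Reynolds number, the transverse diffusion time, and the surface area-to-volume ratio for a typical water-filled microchannel. The channel dimension, velocity, and diffusivity are assumed, illustrative values rather than figures taken from the chapter.

```python
# Illustrative order-of-magnitude check for a water-filled microchannel,
# using Re = rho*u*d/mu and the diffusion-time scaling t ~ x^2/D quoted
# earlier. All numbers are typical values chosen only for illustration.
rho = 1000.0      # density of water, kg/m^3
mu = 1.0e-3       # dynamic viscosity of water, kg/(m*s)
u = 1.0e-3        # mean velocity, m/s (1 mm/s)
d = 100e-6        # channel dimension, m (100 micrometers)
D = 1.0e-9        # small-molecule diffusivity in water, m^2/s

Re = rho * u * d / mu
t_diff = d**2 / D                  # time to diffuse across the channel
sa_to_v = 2.0 / (d / 2.0)          # surface-to-volume ratio of a cylinder, 2/r

print(f"Re = {Re:.2f} (laminar: well below the ~2000 transition)")
print(f"diffusion time across {d*1e6:.0f} um: {t_diff:.1f} s")
print(f"surface-to-volume ratio: {sa_to_v:.0e} m^-1")
```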
Governing Equations for Biosensor Modeling

This section describes the mathematical equations of the different physicochemical phenomena involved in biosensors: bulk flow, component transport, and reaction kinetics.

Bulk Flow Modeling

Hydrodynamic Flow

The governing equations of fluid flow are derived from the conservation laws of physics. Hence, the flow of fluid through a channel, including microfluidic devices, is studied using the Navier-Stokes equations (Gaskell 1992, Bird et al. 2002):

\[ \nabla \cdot \mathbf{u} = 0 \tag{11.1} \]

\[ \rho \frac{\partial \mathbf{u}}{\partial t} - \eta \nabla^2 \mathbf{u} + \rho (\mathbf{u} \cdot \nabla)\mathbf{u} + \nabla p - \rho \mathbf{g} + \mathbf{F} = 0 \tag{11.2} \]
where p [N m⁻²] is the pressure, η [kg m⁻¹ s⁻¹] is the dynamic viscosity, g [m s⁻²] is the gravitational body force, and F [N m⁻³] is any other body force. For a pressure-driven process in a rectangular channel, solving the above equations with a no-slip wall boundary condition results in a parabolic velocity profile (Figure 11.3).

Electroosmotic Flow

Most surfaces possess a negative charge resulting from ionization of the surface or adsorption of ionic species, and a layer of cations builds up near the surface when it comes into contact with polar liquids to maintain the charge balance. This creates an electric double layer (EDL) of ions near the surface and a potential difference between the fixed charges on the wall and the diffuse charges of the mobile ions, called the zeta potential (ζ). The EDL is resolved into two regions, as shown in Figure 11.4: a compact layer and a diffuse layer. The compact layer is immobile due to a strong electrostatic force, whereas the diffuse
Figure 11.4. Schematic diagram of the EDL next to a negatively charged solid surface. Here, ψ is the electroosmotic potential, ψ₀ is the surface potential, ζ is the zeta potential, and x is the distance measured from the wall.
layer has a net charge different from zero, so it is this layer that is mobile when a longitudinal electric field, E, is applied, and it sets the bulk in motion through viscous forces (the electric body force). This collective movement induces fluid motion in the channel, creating what is called electroosmotic flow. The potential in the EDL can be described by the Poisson-Boltzmann equation (Li 2004). Therefore, the magnitude of the EOF depends on the electric field strength and the local net charge density, which is a function of the EDL field. The thickness of the diffuse layer depends on the bulk concentration and the electrical properties of the liquid. The thickness of the EDL is described by the Debye length (1/k), given as (Sharp et al. 2002):

\[ \frac{1}{k} = \lambda_D = \sqrt{\frac{\varepsilon k_B T}{2 z^2 F^2 C_B}} \tag{11.3} \]
where k_B [J K⁻¹] is Boltzmann's constant, T [K] is the temperature, z is the valence number of the ion, F is Faraday's constant, and C_B [mol m⁻³] is the bulk concentration of the ion. ζ [V] is the zeta potential, which is determined empirically from EOF measurements. ε [C² N⁻¹ m⁻²] is the permittivity of the fluid, the degree to which a medium resists the flow of electric charge. The actual permittivity is calculated by multiplying the relative permittivity (ε_r), sometimes also called the dielectric constant, by the permittivity of vacuum (ε₀). It is possible to model either EOF alone or the combination of EOF and pressure-driven flow by introducing the electrokinetic body force into the general Navier-Stokes equation (Equation 11.2):

\[ \nabla \cdot \mathbf{u} = 0 \tag{11.4} \]

\[ \rho \frac{\partial \mathbf{u}}{\partial t} - \eta \nabla^2 \mathbf{u} + \rho (\mathbf{u} \cdot \nabla)\mathbf{u} + \nabla p - \rho \mathbf{g} - \rho_E \mathbf{E} = 0 \tag{11.5} \]
where ρ_E [C m⁻³] is the local net charge density per unit volume and E [V m⁻¹] is the electric field in the fluid. Both variables need to be solved for with additional equations. The electric field is given by:

\[ \mathbf{E} = \nabla \phi \tag{11.6} \]

where φ is the potential externally applied to the system. The potential distribution in the system is given by the Laplace equation:

\[ \nabla \cdot (\sigma \nabla \phi) = 0 \tag{11.7} \]

with σ [S m⁻¹] the electric conductivity of the medium, a physical property of the fluid. The electric charge density is related to the internal potential field ψ of the EDL that results from the charge at the internal wall. According to the theory of electrostatics, the relationship at any point in the solution is described by the Poisson equation (Karniadakis et al. 2005, Li 2004):

\[ \nabla^2 \psi = -\frac{\rho_E}{\varepsilon} \tag{11.8} \]

Under certain assumptions on the ionic characteristics of the fluid (Li 2004), one can write:

\[ \rho_E = -2 z e n_0 \sinh\!\left(\frac{z e \psi}{k_B T}\right) \tag{11.9} \]
Introducing this equation into the Poisson equation results in the Poisson-Boltzmann equation:

\[ \nabla^2 \psi = \frac{2 z e n_0}{\varepsilon} \sinh\!\left(\frac{z e \psi}{k_B T}\right) \tag{11.10} \]

where e [C] is the charge of a proton and n_0 [1/m³] is the bulk ionic concentration. Equations 11.7 and 11.10 are solved to obtain the external and internal potential fields, which are required to solve the flow equations 11.4 and 11.5. The following assumptions can simplify the solution procedure (Li 2004):

- low velocity and low Reynolds number flow (that is, the transient and convective terms in the Navier-Stokes equations can be neglected)
- flow in a slit microchannel, where the width is much larger than the height of the channel, resulting in developed one-dimensional flow
- relatively thin EDL in comparison to the channel dimension (channel height in this case)

Under these assumptions, the Poisson-Boltzmann equation (Equation 11.10) need not be solved, but can be replaced by an effective boundary condition to the Navier-Stokes equations, namely the constant EDL slip velocity u_s (Karniadakis et al. 2005):

\[ u_s = u_{eo,i} = -\frac{\varepsilon \zeta}{\eta} E_i = \mu_{eo} E_i \tag{11.11} \]
where u_{eo,i} is the velocity of the liquid in the channel due to electroosmotic flow, µ_eo [m² V⁻¹ s⁻¹] is the electroosmotic mobility, which is the electroosmotic flow velocity per unit electric field strength, and E_i is the global electric field strength in the i-th direction, equal to the potential gradient in that direction. From the simplified expression in Equation 11.11, which is valid in many cases, it is clear that the flow rate depends on the channel length, the liquid properties, the zeta potential at the channel wall, and the applied electric field strength. Ionization of the surface is one of the possible origins of the surface charge. When the surface contains acidic groups, a
negatively charged surface results from dissociation and release of H⁺ into the solution. A positively charged surface results from the dissociation of a basic group (for example, OH on the surface) releasing negative OH⁻ into the solution. This process depends on the pH of the medium. For instance, for a material containing acidic groups, ionization is slower if the solution has a lower pH; a smaller surface charge density results, with a consequently lower EO velocity. The reverse is true for higher pH. Therefore, EO control is possible through coating of the channel surface and manipulation of the solution viscosity, the electric field, and the pH and concentration of the buffer (Hayes et al. 1993, Liu et al. 2004). Longitudinal diffusion, sample overloading, adsorption of species on capillary walls, nonuniform zeta potential and electric field, nonuniform geometry, and Joule heating are some of the factors that cause sample dispersion in electrokinetically driven microdevice systems and result in reduced separation efficiency (Tang et al. 2006). The Joule heating effect is inherent to electroosmotic microchannel flows. It results from the inevitable volumetric heating when an electric field is applied across conducting media such as electrolytes. This causes a rise in buffer temperature and temperature gradients in all directions. As a consequence, the EOF and the electrophoretic transport of solutes are affected, causing a flow profile that resembles hydrodynamic flow, and band broadening occurs. In general, microfluidic systems can be thermocontrolled very accurately, provided that active cooling (using a liquid coolant or air by natural or forced convection) or materials that readily dissipate Joule heat are used. For designing such systems, it is therefore necessary to couple the energy balance equation with the equations for fluid flow and mass transport. An extensive discussion of complete model equations that consider thermal effects during electroosmotic transport of liquids in microchannels is given by Zhao and Liao (2002).

Modes of Component Transport

Molecular diffusion, convection, and/or ionic migration are the basic modes of transport that bring analytes to the biological recognition elements and carry the products formed to the point of detection (if they are not detected on the spot). In the following sections, the model equations used to characterize each mode of transport are presented.
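Before the individual transport modes are treated, the electroosmotic relations introduced above can be illustrated numerically. The sketch below evaluates the EDL thickness and the electroosmotic slip velocity of Equation 11.11 for a dilute aqueous buffer; the Debye length is written here in its standard per-ion form (permittivity, Boltzmann constant, elementary charge, and ionic number concentration), and the buffer concentration, zeta potential, and field strength are assumed values, not data from the chapter.

```python
# Illustrative evaluation of the EDL thickness and the electroosmotic slip
# velocity (Eq. 11.11) for a dilute 1:1 aqueous buffer. The Debye length is
# computed per ion: lambda_D = sqrt(eps*kB*T / (2*z^2*e^2*n0)). Buffer
# concentration, zeta potential, and field strength are assumed values.
import math

eps0 = 8.854e-12          # vacuum permittivity, C^2/(N*m^2)
eps_r = 78.5              # relative permittivity of water near 25 C
eps = eps_r * eps0
kB = 1.381e-23            # Boltzmann constant, J/K
e = 1.602e-19             # elementary charge, C
NA = 6.022e23             # Avogadro's number, 1/mol
T = 298.0                 # temperature, K
z = 1                     # valence of a symmetric 1:1 electrolyte
C_B = 1.0                 # bulk concentration, mol/m^3 (= 1 mM)
n0 = NA * C_B             # bulk ionic number concentration, 1/m^3

zeta = -0.05              # zeta potential, V (assumed)
eta = 1.0e-3              # dynamic viscosity, kg/(m*s)
E = 1.0e4                 # applied field, V/m (e.g., 100 V over 1 cm)

lambda_D = math.sqrt(eps * kB * T / (2 * z**2 * e**2 * n0))
mu_eo = -eps * zeta / eta          # electroosmotic mobility, m^2/(V*s)
u_eo = mu_eo * E                   # slip velocity, m/s (Eq. 11.11)

print(f"Debye length: {lambda_D*1e9:.1f} nm")
print(f"EOF mobility: {mu_eo:.2e} m^2/(V*s), slip velocity: {u_eo*1e3:.2f} mm/s")
```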
Diffusion

Diffusion can be defined as the random walk of an ensemble of particles from regions of high concentration to regions of lower concentration. The rate of movement of material by diffusion can be predicted mathematically, and Fick proposed two laws to quantify the process. Fick's first law is given by (Gaskell 1992, Bird et al. 2002, Lebedev et al. 2006):

\[ \mathbf{J} = -D \nabla C \tag{11.12} \]

where J [mol m⁻² s⁻¹] is the diffusion flux, D [m² s⁻¹] is the diffusivity, C [mol m⁻³] is the component concentration in the bulk fluid, and x [m] is position. The negative sign signifies that the material moves down a concentration gradient. However, in many cases we need to know how the concentration of a component varies with time, which is described by Fick's second law (Gaskell 1992, Bird et al. 2002, Lebedev et al. 2006):

\[ \frac{\partial C}{\partial t} = \nabla \cdot (D \nabla C) \tag{11.13} \]
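As a minimal numerical illustration of Fick's second law, the sketch below integrates Equation 11.13 in one dimension with an explicit finite-difference scheme, assuming a constant diffusivity and an arbitrary initial concentration pulse; the grid size and time step are illustrative choices that respect the explicit stability limit.

```python
# Minimal sketch: explicit finite-difference integration of Fick's second
# law (Eq. 11.13) in one dimension, dC/dt = D * d2C/dx2, for a concentration
# pulse spreading along a channel. Geometry, diffusivity, and the initial
# condition are illustrative values only.
import numpy as np

D = 1.0e-9                    # diffusivity, m^2/s (small molecule in water)
L = 200e-6                    # domain length, m
nx = 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D          # respects the stability limit D*dt/dx^2 <= 0.5

C = np.zeros(nx)
C[nx // 2] = 1.0              # initial pulse at the channel center (arbitrary units)

for _ in range(2000):
    C_new = C.copy()
    # central difference for the second derivative; end values held at C = 0
    C_new[1:-1] = C[1:-1] + D * dt * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    C = C_new

print(f"simulated time: {2000 * dt:.2f} s, peak concentration: {C.max():.3f}")
```

In a full biosensor model, this diffusion term would be coupled with the convection, migration, and reaction terms of Equation 11.17 introduced below.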
The diffusion coefficient at different temperatures is often found to be well predicted by (Gaskell 1992):

\[ D = D_0 \, e^{-E/RT} \tag{11.14} \]
where D_0 is the maximum diffusion coefficient (at infinite temperature), E [J mol⁻¹] is the activation energy for diffusion, T [K] is the absolute temperature, and R [J mol⁻¹ K⁻¹] is the gas constant.

Convection

Convective transport occurs when a constituent of the fluid (mass, energy, or a component in a mixture) is carried along with the bulk fluid as a result of a force acting on the solution, which may be natural or forced. Natural convection is generated by fluid density differences and often causes the solution to mix in a random and unpredictable manner. Forced convection is caused by external forces such as rotational forces, pressure gradients, or electrokinetic forces. Convective transport is then driven by the resultant velocity u of the fluid. For laminar flow of an incompressible fluid, which is usually the case in microbiosensors due to the small dimensions of the system, the amount of species carried
past a plane of unit area perpendicular to the velocity (the flux) is the product of the velocity and the species concentration:

\[ \frac{\partial C}{\partial t} = -u_x \frac{\partial C}{\partial x} \tag{11.15} \]

where u_x is the velocity of the solution in the x-direction.

Electrokinetic Migration (Electrophoresis)

Applying electrical fields to ionic solutions induces migration transport in addition to the existing diffusion and convection processes. Migration implies that positive ions migrate from a positive potential to a negative potential along the direction of the electric field, and vice versa for negatively charged ions. The electrokinetic migratory flux is mathematically expressed as (Li 2004):

\[ \frac{\partial C}{\partial t} = z_i \mu_{ep} C \frac{\partial \phi}{\partial x} \tag{11.16} \]

where µ_ep [m² V⁻¹ s⁻¹] is the ionic (electrophoretic) mobility and z_i is the valence (charge number) of the ion.

Total Mass Balance

Modern biosensors are designed to work continuously with minimal human intervention, and all of the mass transport phenomena above can occur in the transport system. The transport and mass balance of every dissolved species in solution is then rendered as follows:

\[ \frac{\partial C_i}{\partial t} + \nabla \cdot \left( -D_i \nabla C_i - z_i \mu_{ep} C_i \nabla \phi + C_i \mathbf{u} \right) = R_i \tag{11.17} \]

where R_i [mol m⁻³ s⁻¹] denotes the reaction term, which is discussed in the next section. The velocity vector u is equal to the velocity of the solvent. This can be the result of electroosmosis, pressure-driven flow, or both. The velocity is calculated by solving the Navier-Stokes equations for the system (Equations 11.1 and 11.2), and the potential gradient by the Laplace equation (Equation 11.7); these are then coupled with the mass balance (Equation 11.17) for a complete model of the transport process.
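A compact way to see how the terms of Equation 11.17 combine is to evaluate the total species flux (diffusion, migration, and convection) on a one-dimensional grid, as in the sketch below. The function name, the profiles, and the parameter values are placeholders chosen for illustration, not part of the chapter's model.

```python
# Sketch of the three flux contributions appearing in Equation 11.17,
# evaluated on a 1D grid: N = -D*dC/dx - z*mu_ep*C*dphi/dx + C*u.
# Profiles and parameter values are placeholders for illustration.
import numpy as np

def species_flux_1d(C, phi, u, D, z, mu_ep, dx):
    """Return the total 1D species flux [mol/(m^2 s)] at each grid point."""
    dCdx = np.gradient(C, dx)          # central-difference concentration gradient
    dphidx = np.gradient(phi, dx)      # potential gradient
    diffusion = -D * dCdx
    migration = -z * mu_ep * C * dphidx
    convection = C * u
    return diffusion + migration + convection

# Example: linear concentration and potential profiles over a 1 mm channel
x = np.linspace(0.0, 1e-3, 51)
dx = x[1] - x[0]
C = 1.0 - x / x[-1]                    # mol/m^3, decreasing toward the outlet
phi = 100.0 * (1.0 - x / x[-1])        # V, potential drop along the channel
u = 1.0e-4                             # bulk (electroosmotic) velocity, m/s

N = species_flux_1d(C, phi, u, D=1e-9, z=1, mu_ep=5e-8, dx=dx)
print(N[:3])
```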
Reaction Kinetics

Biosensors can be categorized with respect to their reaction mechanisms. A first concept has both the substrate and the receptors (such as enzymes or aptamers) in solution, either in batch form (for example, cuvettes) or in continuous form (for example, FIA biosensors). The second, and increasingly popular, concept is a biosensor design in which the receptors are confined by a permeable membrane or immobilized on a surface, and the substrate is transported from the solution to the receptor site. A third option is that the receptors are attached to suspended particles, with both the receptors and the substrate found in the solution. The responses of the biosensors will depend on the kinetics of the recognition and transduction reactions or on mass transfer rates. Determination of the rate-limiting step is essential for understanding the optimization and control of biosensor performance criteria. When the reaction rate is slow in comparison to the transport, the kinetics will not be affected by the transport and the system can be considered well mixed, but if the reaction is relatively fast, the kinetics will be influenced by the transport.

Enzyme Kinetics

In an enzymatic reaction the substrate (S) is converted into a product (P) with the help of the enzyme (E):

\[ S \xrightarrow{E} P \tag{11.18} \]
The rate of reaction r_P can be expressed in terms of either the change in substrate concentration C_S or the change in product concentration C_P:

\[ r_P = -\frac{dC_S}{dt} = \frac{dC_P}{dt} \tag{11.19} \]

It is important to know how the reaction rate is influenced by reaction conditions such as substrate, product, and enzyme concentration in order to understand the effectiveness and characteristics of an enzyme reaction. Figure 11.5 shows a typical Michaelis-Menten curve in which the enzymatic conversion rate is depicted as a function of substrate concentration, given a fixed enzyme concentration. Figure 11.5 shows that the reaction rate is proportional to the substrate concentration (first-order reaction) at low values of substrate concentration and does not depend on the substrate concentration (zero-order reaction) at high values of substrate concentration, which means the
Figure 11.5. Michaelis-Menten kinetics: reaction rate at different levels of substrate for a given concentration of enzyme. Vmax is the maximum reaction rate and KM is the Michaelis-Menten constant, which is the substrate concentration required for an enzyme to reach half of its maximum rate. Both are kinetic parameters that need to be determined experimentally. Inset: double-reciprocal Lineweaver-Burk plot for enzyme kinetics; from the slope and intercept, the Vmax and KM values for the given enzyme can be estimated.
reaction goes gradually from first-order to zero-order as the concentration of the substrate is increased. The maximum reaction rate, Vmax, is proportional to the enzyme concentration. This was what Henri observed in 1902, and he proposed the following rate equation (Bailey and Ollis 1986):

$$V = r_P = \frac{V_{max} C_S}{K_M + C_S} \qquad (11.20)$$
where Vmax [mol m−3 s−1 ] and K M [mol m−3 ] are kinetic parameters that need to be experimentally determined. This equation describes many experimental results. Leonor Michaelis and Maud Menten proposed a quantitative theory to support the observed enzyme kinetics, which is still widely used today under the name Michaelis-Menten kinetics (Moser 1985). For an in-depth discussion of enzyme kinetics and other possible enzyme kinetic models, the reader is referred to Moser (1985), Bailey and Ollis (1986), and Marangoni (2003).
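As a worked illustration of Equation 11.20 and of the Lineweaver-Burk estimation discussed below, the sketch below generates synthetic rate data for assumed values of Vmax and KM, adds a little noise, and recovers both parameters from the slope and intercept of the double-reciprocal plot. All numbers are invented for demonstration only.

```python
import numpy as np

# Assumed "true" kinetic parameters (illustrative only)
Vmax_true = 1.0e-3   # mol m^-3 s^-1
Km_true = 0.5        # mol m^-3

rng = np.random.default_rng(0)
Cs = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])     # substrate concentrations, mol m^-3
V = Vmax_true * Cs / (Km_true + Cs)                      # Michaelis-Menten rate (Eq. 11.20)
V_obs = V * (1 + 0.02 * rng.standard_normal(Cs.size))    # add 2% measurement noise

# Lineweaver-Burk: 1/V = (Km/Vmax) * (1/Cs) + 1/Vmax
slope, intercept = np.polyfit(1.0 / Cs, 1.0 / V_obs, 1)
Vmax_est = 1.0 / intercept
Km_est = slope * Vmax_est

print(f"estimated Vmax = {Vmax_est:.3e} mol m^-3 s^-1 (true {Vmax_true:.3e})")
print(f"estimated Km   = {Km_est:.3f} mol m^-3 (true {Km_true:.3f})")
```

In practice a direct nonlinear fit of Equation 11.20 is often preferred, because the double-reciprocal transform amplifies the noise of the low-concentration points, but the linearized form mirrors the graphical procedure shown in the inset of Figure 11.5.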
Enzymes can perform up to several million catalytic reactions per second; to determine the maximum conversion rate of an enzymatic reaction, the substrate concentration is increased until a constant rate of product formation is achieved. This is the maximum reaction velocity (Vmax) of the enzyme, at which all of the enzyme active sites are saturated with substrate. The other parameter is the amount of substrate needed to achieve a given rate of reaction, called the Michaelis-Menten constant (KM), which is the substrate concentration required for an enzyme to reach half of its maximum conversion rate. Each enzyme has a characteristic KM for a given substrate. Both KM and Vmax are usually determined by extrapolating from a limited data set using what is known as a double-reciprocal, or Lineweaver-Burk, plot (see inset of Figure 11.5) (White and Turner 1997). In the simplest case, when the diffusion of substrate molecules is neglected and steady-state conditions are assumed for the enzyme reaction (that is, the rate of enzyme conversion exceeds the rate of mass transfer), the mathematical model of enzyme kinetics is given by the Michaelis-Menten equation (Equation 11.20). When the enzyme is immobilized in a thin layer of finite thickness on the surface of the biosensor, any modeling of the system response must consider the diffusion and partitioning effects of the enzyme layer. It is interesting to note that, unlike solution reactions where the rate depends on a steady-state concentration of the enzyme-substrate complex, with immobilized enzymes the diffusion of the substrate to the enzyme is often, but not necessarily, the rate-determining step (Bilitewski 1994, Baronas et al. 2002, Barak-Shinar et al. 2004, Lebedev et al. 2006). The mass transfer by diffusion is a first-order process with respect to substrate concentration. Imposing diffusion thus has the effect of extending the linear range of the initial reaction velocity beyond the KM value of the normal enzyme. Because of this linear relationship, however, the observed rate of the reaction, and therefore the analytical signal, is lower than it would be in a kinetically controlled enzyme reaction (Bailey and Ollis 1986).

Electrode Kinetics

Enzyme-based electrochemical biosensors are well developed for the determination of glucose in both the food industry and medical diagnostics (Baronas et al. 2002, Mello and Kubota 2002). They use the biospecificity of an enzymatic reaction, along with an electrode reaction of the
reaction product (for example, hydrogen peroxide), which generates an electric current or a potential difference for quantitative analysis. When a potential is applied at the electrodes, electrolysis of the enzyme reaction product occurs, which causes a faradaic current that is proportional to the concentration of the electroactive species (Mason et al. 1999, Baronas et al. 2002, Prodromidis and Karayannis 2002, Lammertyn et al. 2006). The half reaction is given by:

$$O + ne^- \rightarrow R \qquad (11.21)$$

where n is the number of electrons (e−) transferred between the oxidant (O) and the reductant (R). Before the initiation of the electrolysis, the concentration of O is assumed to be uniform at the electrode. When a voltage is applied, the concentration of O at the electrode surface becomes less than in the bulk solution due to the electrochemical conversion of O into R. Because diffusion is the main mode of transport close to the electrode, the flux of O to the electrode surface at time t is proportional to the steepness of the concentration gradient. The flux J [mol m−2 s−1] of O is therefore given simply by Fick's first law of diffusion (Equation 11.12). The current flowing in the cell depends on this flux of material at the electrode surface. The faradaic current I [A = C s−1] for an electrode with area A [m2] can therefore be given as (Grattarola et al. 1996, Rajendran and Irudayaraj 2002, Kaunietis et al. 2005):

$$I = nFAJ(t) \qquad (11.22)$$
where F is the Faraday constant (96,485 C mol−1). If we combine the above equation with Fick's law of diffusion, we obtain an expression for the time-dependent current (the diffusion current):

$$I_d = nFAD_O \frac{\partial C_O(0,t)}{\partial x} \qquad (11.23)$$
where D_O and C_O are the diffusion coefficient and the concentration of the product from the enzymatic reaction, respectively. The concentration gradient at the electrode surface depends on the way the substrate, enzyme, and product are transported in the system. Therefore, the solution obtained from the appropriate numerical model can be compared with the amperometric current observed experimentally during validation of the optimized biosensor design.
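Equations 11.22 and 11.23 can be turned into numbers once a concentration profile near the electrode is available. The sketch below assumes an illustrative linear hydrogen peroxide profile across a thin diffusion layer and evaluates the resulting faradaic current; the layer thickness, area, diffusion coefficient, and concentrations are assumptions, not values from the chapter.

```python
# Diffusion-limited current from Equation 11.23: Id = n * F * A * D_O * dC_O/dx at x = 0
F = 96485.0          # Faraday constant, C/mol
n = 2                # electrons transferred per H2O2 molecule (assumed half reaction)
A = 1.0e-6           # electrode area, m^2 (assumed)
D_O = 1.0e-9         # diffusion coefficient of the electroactive product, m^2/s (assumed)

C_bulk = 0.05        # bulk product concentration, mol/m^3 (assumed)
C_surface = 0.0      # surface concentration ~ 0 under diffusion-limited electrolysis
delta = 50e-6        # assumed diffusion-layer thickness, m

grad_at_electrode = (C_bulk - C_surface) / delta   # simple linear-profile estimate of dC/dx
I_d = n * F * A * D_O * grad_at_electrode          # amperes

print(f"estimated diffusion current: {I_d * 1e9:.0f} nA")
```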
Kinetics of the Interaction Between Target and Bioreceptor

A crucial aspect in biosensor technology is the interaction between the target molecule and the biorecognition molecules. In the case of enzymes, the target molecule is converted into a product that can be measured directly or indirectly with a transducer. As discussed in the previous section, this conversion process is described by means of enzyme kinetics models. Other types of biosensors work with receptor molecules, for instance DNA or antibodies. Knowledge of the kinetics of the interaction between target and receptor is crucial to model and describe the behavior and performance of the biosensor. Two types of interactions are often encountered in biosensors: DNA hybridization and antibody-antigen interactions. Hybridization is the process of joining two complementary strands of DNA, or one each of DNA and RNA, to form a double-stranded molecule by hydrogen bonding. Hybridization receptors such as DNA and RNA probes have shown promising applications in food analysis, for example, in microorganism detection (Newman et al. 1997). The principle of selective detection is based on the detection of a unique sequence of nucleic acid bases through hybridization. Likewise, immunosensors are also widely used in food analysis and biodiagnostics, based on a specific interaction of the antigen with an immobilized antibody to form a thermodynamically stable complex. The physicochemical change induced by antigen-antibody binding does not necessarily generate an electrochemically detectable signal. Sometimes, enzymes, fluorescent compounds, electrochemically active substrates, radionuclides, or avidin-biotin complexes are used to label either the antigen or the antibody and thus generate a signal. The most common transducers for immunosensors are acoustic and optical systems. Hybridization kinetics can be modeled based on an approach that accounts for both direct hybridization from the bulk phase and hybridization after an initial nonspecific adsorption onto the solid surface, followed by two-dimensional diffusion over the surface and hybridization. Taking into account only the direct DNA hybridization, the heterogeneous DNA hybridization reaction can be described by (Erickson et al. 2003, Kim et al. 2006):

$$R + C \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; B \qquad (11.24)$$
The symbol R represents the single-stranded DNA molecules (DNA capture probes) immobilized on the solid surface and available for hybridization. The target DNA molecules in the sample above the capture probes, C, bind specifically to the DNA capture probes and form hybridized double-stranded DNA molecules, B, on the surface of the capture elements. The equation is also applicable to the bimolecular antibody (Ab)-antigen (Ag) interaction:

$$Ag + Ab \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; Ag\text{-}Ab \qquad (11.25)$$
The rate of change of the product formed by a monovalent analyte at concentration C and a monovalent soluble receptor at concentration R under well-mixed conditions (that is, when both species are found freely in the solution) is given by (Glaser 1993, Mason et al. 1999, Lebedev et al. 2006):

$$\frac{dB}{dt} = k_1 C R - k_{-1} B \qquad (11.26)$$
Since both the receptor and the substrate are uniformly dispersed in the solution, the rate coefficients are independent of time and of the concentration of the reactants (Sadana and Beelaram 1995). Of course, they may depend on other parameters such as temperature and viscosity. For the case of receptors attached to the surface, the rate of the hybridization reaction is represented by (White and Turner 1997, Erickson et al. 2003, Kim et al. 2006):

$$\frac{dB}{dt} = k_1 C_{bulk} (R_t - B) - k_{-1} B \qquad (11.27)$$
Rt is the total concentration of the immobilized DNA or receptor. B is the concentration of hybridized DNA or bound complex (ligand-receptor or antigen-antibody) at time t, and (Rt − B) is the concentration of unhybridized immobilized receptor remaining at the surface. k1 is the forward (hybridization) rate constant, and k−1 is the backward (dehybridization) rate constant. For other binding models that consider indirect binding, the effect of lateral interactions, and other complexities, we refer to Neikov and Sokolov (1995), Sadana and Beelaram (1995), and Erickson et al. (2003).
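Equation 11.27 is an ordinary differential equation that is straightforward to integrate numerically for a given set of rate constants. The sketch below uses a simple explicit Euler loop with invented values of k1, k−1, Cbulk, and Rt to show how the surface concentration of hybridized probe approaches its equilibrium value, B_eq = k1·Cbulk·Rt/(k1·Cbulk + k−1).

```python
import numpy as np

# Illustrative constants (assumed, not from the chapter)
k1 = 1.0e3       # forward (hybridization) rate constant, m^3 mol^-1 s^-1
k_1 = 1.0e-3     # backward (dehybridization) rate constant, s^-1
C_bulk = 1.0e-6  # target concentration in the bulk, mol m^-3
Rt = 1.0e-8      # total immobilized probe concentration (surface units)

dt, t_end = 1.0, 4000.0
times = np.arange(0.0, t_end + dt, dt)
B = np.zeros_like(times)

for i in range(1, times.size):
    dBdt = k1 * C_bulk * (Rt - B[i - 1]) - k_1 * B[i - 1]   # Equation 11.27
    B[i] = B[i - 1] + dt * dBdt

B_eq = k1 * C_bulk * Rt / (k1 * C_bulk + k_1)
print(f"B(t_end)/B_eq = {B[-1] / B_eq:.3f}  (approaches 1 as equilibrium is reached)")
```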
Numerical Approach

The combined electrokinetics, fluid flow, and species transport equations are solved with dedicated software programs. Examples are COMSOL Multiphysics (COMSOL AB, Stockholm, Sweden), in which the physical phenomena can be coupled in an interactive environment using either the built-in conventional physics modes or free custom partial differential equations (PDEs) to simulate complex problems, and ANSYS CFX (Ansys, Inc., Canonsburg, PA, USA) and Fluent (Fluent, Inc., Lebanon, NH, USA), which are dedicated fluid flow analysis packages. The solution algorithms used most commonly are the finite volume method (FVM) and the finite element method (FEM). FVM starts from the integral form of the governing equations. The computational domain is subdivided into a number of interconnected but nonoverlapping subdomains called control volumes. Computational nodes are situated at the centroids of the control volumes. By application of the equations to the control volumes, the conservation form of the equations is transferred from the original infinitesimal scale to the discrete scale. Surface integrals and volume integrals are approximated in terms of values of the variables at the cell faces and nodes, respectively. Cell face values are themselves expressed in terms of nodal values by means of interpolation. As a result, algebraic equations are obtained for all nodes, which can be solved by well-known solution methods. FVM can easily be applied to any geometry, because the grid only defines the control volume boundaries. The combination of integration and interpolation, however, makes it difficult to apply higher-order approximations on unstructured grids. In FEM, a given computational domain is subdivided into a collection of finite elements, subdomains of variable size and shape, which are interconnected in a discrete number of nodes. Such elements are typically quadrilaterals or triangles in two dimensions and tetrahedra or hexahedra in three dimensions. The solution of the governing partial differential equation is approximated in each element by a low-order polynomial in such a way that it is defined uniquely in terms of the (approximate) solution at the nodes. The global approximate solution can then be written as a series of low-order piecewise polynomials with the coefficients of the series equal to the approximate solution at the nodes. Substitution of the approximate solution in the differential equation produces, in general, a nonzero residual. In the Galerkin finite
element method, the unknown coefficients of the low-order piecewise polynomials are then found by orthogonalization of this residual with respect to these polynomials. This results in a system of algebraic or ordinary differential equations, which can be solved using well-known techniques. The advantages of the FEM are its ability to deal with complex shapes in a straightforward way and the ease with which meshes can be refined simply by subdividing elements.
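To make the Galerkin procedure concrete, the sketch below assembles linear (hat-function) elements for a one-dimensional steady diffusion-reaction problem, −D u″ + k u = f on (0, 1) with u(0) = u(1) = 0, and solves the resulting algebraic system. It is only a toy illustration of the element-by-element assembly described above, with invented coefficients, and is not the 2D/3D machinery used in the commercial packages mentioned.

```python
import numpy as np

# 1D Galerkin FEM with linear elements for -D u'' + k u = f, u(0) = u(1) = 0
D, k, f = 1.0, 10.0, 1.0       # illustrative coefficients and constant source
n_el = 20                      # number of elements
n_nodes = n_el + 1
x = np.linspace(0.0, 1.0, n_nodes)
h = x[1] - x[0]

# Element matrices for linear shape functions on an element of length h
Ke = D / h * np.array([[1.0, -1.0], [-1.0, 1.0]])        # stiffness (diffusion)
Me = k * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])    # consistent mass (reaction)
fe = f * h / 2.0 * np.array([1.0, 1.0])                  # element load vector

A = np.zeros((n_nodes, n_nodes))
b = np.zeros(n_nodes)
for e in range(n_el):                       # element-by-element assembly
    idx = [e, e + 1]
    A[np.ix_(idx, idx)] += Ke + Me
    b[idx] += fe

# Dirichlet boundary conditions u(0) = u(1) = 0: solve for interior nodes only
interior = slice(1, -1)
u = np.zeros(n_nodes)
u[interior] = np.linalg.solve(A[interior, interior], b[interior])

print(f"u at midpoint is approximately {u[n_nodes // 2]:.4f}")
```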
Case Studies

Electrokinetic Sample Injection in Micro-FIA Biosensors

Electrokinetic Dispensing Mechanism

Injection of the sample into the biosensor is one of the key elements in the sample handling process, and its characteristics determine the quality of the chemical analysis (Polson and Hayes 2001). The design and fabrication of a pressure-driven injection system for limited sample volumes is difficult because of the need to integrate a valve system into the microfluidic biosensor device. Electrokinetics offers a means of implementing valveless switching, a technique used for introducing precisely metered samples into a microfluidic channel. In this case study, a sample injection system based on electrokinetics is modeled. Design and optimization of the injection process includes a study of the geometry of the channel intersection, the selection of the appropriate voltages, and precise timing for switching the electric field (Fu et al. 2003, Li 2004). Figure 11.6 illustrates the principle of the microfluidic injection system. Two microchannels, an injection channel and a loading channel, are fabricated perpendicular to each other and coupled to reservoirs. Initially, all the reservoirs and channels, except the sample reservoir, are filled with buffer. During the loading mode, a voltage is applied along the axis of the loading channel, causing the sample to flow from reservoir 1 to reservoir 2 due to electroosmotic forces. Simultaneously, a potential is applied at reservoirs 3 and 4, causing a potential gradient between each reservoir and the junction (Figure 11.7). As a result, the sample is squeezed at the junction by the buffer flowing from these reservoirs toward the junction. This phenomenon is called electrokinetic focusing. Consequently, the sample at the crossing of the injection and loading channel has a
Figure 11.6. Simple channel intersection used for electrokinetic flow in micro device. Reservoir 1 is a sample reservoir, reservoirs 2 and 4 are waste reservoirs, and reservoir 3 contains buffer.

Figure 11.7. Velocity and potential profile inside the channel during loading and injection mode for one case of the optimization study.
Table 11.1. Parameter values used in the numerical simulation.

Description                                        Symbol   Value
Sample solute diffusion coefficient, m2 s−1        D        2 × 10−10
Electric permittivity of the medium, C2 N−1 m−2    ε        7.1 × 10−10
Electrophoretic mobility, m2 V−1 s−1               µep      15 × 10−9
Zeta potential, V                                  ζ        0.1
Electrical conductivity of the medium, S m−1       σ        0.0575
trapezoidal shape. Afterward, the applied voltages are switched from the sample-loading mode to the sample-injection mode; that is, a higher potential is applied to buffer reservoir 3 with reservoir 4 grounded, and the potentials at reservoirs 1 and 2 are set at approximately half the potential of buffer reservoir 3. Consequently, buffer starts flowing from reservoir 3 to reservoir 4, pinches the sample at the junction, and carries it along. A rectangular shape of the injection plug indicates sharp separation and is beneficial for good, noise-free performance. Optimization of the potentials at the different reservoirs during the loading and injection steps is therefore critical to obtain a rectangular plug.

Optimization of the Dispensing Process

Using the system of equations outlined in the previous section, the mass transfer in a cross-flow channel (channel width of 200 µm and all legs equal to 1 mm) was solved to optimize the injection and loading potentials for a high-performance injection process, characterized by a minimal amount of leakage and, hence, a high signal-to-noise ratio. COMSOL Multiphysics 3.2 was applied to implement and solve the model equations (that is, the Laplace equation, the incompressible Navier-Stokes equations with slip conditions, and the mass transport equations described previously). The parameter values used in the numerical simulation are listed in Table 11.1, and Table 11.2 gives the potential values used for the injection and loading modes in the optimization study. Figure 11.7 gives the calculated profiles of the electrical potential and fluid velocity in the channels. The electroosmotic transport mode results in a flat velocity profile in the channels. Because of the potential setup during the loading mode, buffer is also transported to the junction, leading to an increased velocity in the bottom leg of the junction. For a square channel with a width of 200 µm, this setup makes it possible to pump
Table 11.2. Values of the applied electrokinetic potentials of the injection and loading stages in the optimization study of a microfluidic injection system. The columns give the potentials at the reservoirs in Figure 11.6.

Stage        ø1 [V]   ø2 [V]   ø3 [V]   ø4 [V]
Loading      10       0        8.5      8.5
Loading      10       0        7        7
Loading      10       0        5        5
Loading      10       0        2        2
Injection    10       10       12       0
Injection    6        6        12       0
Injection    0        0        12       0
10 nanoliters of sample per second during sample loading. Switching the potentials to injection mode results in buffer flow from left to right. The simulated case in Figure 11.7 has a potential setting in the orthogonal reservoirs such that no net flow occurs in these legs. In Figure 11.8, the sample concentration profiles resulting from the optimization study are given. The numerical analysis shows that it is possible to control the volume of loaded sample by adjusting the focusing potential (the potential at the reservoirs of the loading channel) during the loading stage: the larger the value of the potential, the smaller the sample volume at the intersection (Figure 11.8). However, by increasing the focusing potential beyond a certain limit (larger than 8.5 V in the case considered, not shown), no sample will reach the cross junction, which will be completely filled with buffer. In Figure 11.8, the injection-phase potential settings are insufficient to avoid leakage. To reduce the degree of leakage, the electric potential at the lateral ports 1 and 2 can be manipulated during the injection stage. By creating a gradient from the cross-section toward the sample reservoirs of the loading channel, it is possible to pinch the sample back toward the reservoirs instead of letting it leak into the injection channel along with the injected plug. This is achieved by applying a smaller potential or by grounding those ports, resulting in a potential gradient from the junction toward the reservoirs (Figure 11.9). This optimization study shows the advantages of numerical optimization to find the best dispensing process by controlling only one
Figure 11.8. Sample transport as a function of variable focusing potential. The left column shows the sample concentration during loading and the right column shows 1 second after injection.

Figure 11.9. The concentration profiles with different potential values at the reservoirs in the loading channel while the loading potential remains the same (that is, ø3 = 12 V and ø4 = 0 V). The higher the potential at the lateral reservoirs, the more pronounced the leakage (from a to d).
parameter, the potential. However, it is also possible to vary other parameters (for example, the geometry) to improve the performance of injection processes. Fu and others (2003), Li (2004), and Yang and others (2005) studied electrokinetic focusing and its diverse applications in microfluidic devices.
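The order of magnitude of the electroosmotic transport in this case study can be estimated from the slip-velocity relation u_eo = ε ζ E / µ (the Helmholtz-Smoluchowski expression used in the electrokinetics literature cited in this chapter, e.g., Li 2004). The sketch below combines the permittivity and zeta potential of Table 11.1 with the 10 V loading potential of Table 11.2; the buffer viscosity and the effective channel length are assumptions for illustration.

```python
# Order-of-magnitude estimate of the electroosmotic velocity during loading
eps = 7.1e-10      # electric permittivity of the medium, C^2 N^-1 m^-2 (Table 11.1)
zeta = 0.1         # zeta potential, V (Table 11.1)
mu = 1.0e-3        # dynamic viscosity of the buffer, Pa s (assumed, ~water)

V_applied = 10.0   # potential across the loading channel, V (loading stage, Table 11.2)
L_channel = 2.0e-3 # loading channel length: two 1 mm legs (assumed from the geometry)
E = V_applied / L_channel            # nominal electric field, V/m

u_eo = eps * zeta * E / mu           # Helmholtz-Smoluchowski slip velocity, m/s
Q = u_eo * (200e-6) ** 2             # volumetric flow in a 200 um square channel, m^3/s

print(f"electroosmotic velocity ~ {u_eo * 1e3:.2f} mm/s")
print(f"volumetric flow ~ {Q * 1e12:.1f} nL/s")
```

With these assumed values the estimate comes out at a few tenths of a millimeter per second and roughly ten nanoliters per second, which is the same order as the velocities in Figure 11.7 and the pumping rate quoted above.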
Optimization of a Glucose Flow Injection Analysis Biosensor

Biosensors based on flow injection analysis are commonly used in quality control systems because of their accurate and fast results. The performance characteristics of an FIA biosensor strongly depend on the design and operational parameters. The influence of, for instance, flow rate, microfluidic channel dimensions, and flow cell geometry should be carefully investigated to optimize the signal outcome. Because it is time consuming and costly to build all possible FIA biosensor configurations, a mathematical modeling approach, describing the relation between biosensor parameters and performance, offers a cheap solution to overcome this problem. In this case study, we will illustrate the application of a convection-reaction-diffusion model to study the principles and the design of an amperometric glucose FIA biosensor. In the first section, we briefly describe the biosensor setup. The second and third sections deal with model development and model simulations, respectively. Finally, the model will be used to determine the optimal set of design and operational parameters.
Figure 11.10. Flow injection amperometric biosensor: pulse-free syringe pump (1), 10-port injection valve (2), electric valve actuator (3), Plexiglas detector cell (4), potentiostat (5), syringes to inject enzyme (6), and substrate (7), flow rate switch (8).
Flow Injection Analysis Amperometric Sensor

The principle of the biosensor is based on the conversion by glucose oxidase of D-glucose into D-gluconic acid and hydrogen peroxide, which can be detected amperometrically. A picture of the flow injection biosensor is given in Figure 11.10. The buffer is pumped through the system with a pulse-free syringe pump with variable flow rate (8). Sample (6) and enzyme (7) are injected into the carrier flow through a 10-port injection valve (2). The enzyme and sample react chemically in the tubing leading to the detector. The Plexiglas detector cell (4), with an internal volume of 9.5 milliliters, houses two electrodes: a Pt working electrode and an Ag/AgCl reference electrode. The electrodes are connected to a potentiostat (5) operating at a potential of +0.65 V with respect to the Ag/AgCl reference electrode. Data acquisition and processing are carried out in LabVIEW (National Instruments, Austin, TX, USA). For a detailed description of the FIA biosensor setup, the reader is referred to Lammertyn and others (2006).

Model Formulation

The transport of the different dissolved species in the FIA biosensor is described by transient convection-diffusion-reaction equations coupled to the steady-state
incompressible Navier-Stokes equations for the fluid flow; species diffusion follows Fick's law, and the reaction terms obey Michaelis-Menten kinetics (Shuler and Kargi 1992). A detailed mathematical description of the model and its parameters is given in Lammertyn and others (2006). The model was implemented in the computational fluid dynamics (CFD) code CFX 4.4 (ANSYS, Inc., Canonsburg, PA, USA). First, a solution for the steady-state laminar velocity field is generated, after which the transient solution of the species distribution is calculated. The inset in Figure 11.11 depicts sensor readings at different glucose concentrations. The model was successfully validated in a practical situation, resulting in a close match between the simulated and measured hydrogen peroxide versus time profiles (Figure 11.11).

Figure 11.11. Simulated and measured hydrogen peroxide versus time profiles. The inset shows 18 consecutive measurements of different glucose concentrations.

Model Simulations

The convection-diffusion-reaction model was used to simulate the fluid flow and reaction kinetics in the biosensor as a function of the design and operational parameters. In Figure 11.12, the concentration profiles of glucose, glucose oxidase, and hydrogen peroxide in the detector are shown as a function of time. Glucose enters the detector after 50 s, followed by glucose oxidase and hydrogen peroxide.
Figure 11.12. Concentration profiles of glucose, glucose oxidase, and hydrogen peroxide in the detector as a function of time.
The highest concentrations are found at the axis of the detector, from where the hydrogen peroxide diffuses toward the Pt electrode to be oxidized. Remnants of hydrogen peroxide are still present in the detector cell after 160 s, and it takes a long time to completely flush the detector cell and hence to reach the baseline again. This peak tailing can be avoided by changing the dimensions of the detector cell. Figure 11.13 (top) illustrates the influence of the geometry of the detector on the H2O2 concentration profiles near the electrodes. An increase of the inner detector cell diameter results in a decrease of the peak height and in extended peak tailing. The time to peak also increases, since it takes some time for the H2O2 entering the detector cell to reach the electrodes through diffusion and convection. The best signal is obtained for the detector diameter of 0.5 mm, which corresponds to the inner diameter of the incoming tubing. It can be concluded that the optimal signal will occur when the diameter of the detector cell is equal to the diameter of the incoming tubing. An increasing flow rate results in a shorter but less sensitive measurement (Figure 11.13, bottom).
Figure 11.13. Effect of the geometry of the detector cell on the appearance of H2O2 at the electrodes (top) and effect of the flow rate on the biosensor response (bottom).
Optimization of the FIA Sensor

The performance of a biosensor is determined by its sensitivity, specificity, selectivity, and repeatability, but also by the time it takes to measure a sample or the regeneration time. Depending on the application, one of these performance characteristics is more or less important. To account for this, an objective function was established as a linear combination of three standardized signal parameters: peak height (PH), response time (RES), and recovery time (REC). Response time is the time that elapses between the start of the measurement and the moment when the maximal peak height is observed. Recovery time is defined as the time between the appearance of the maximal peak height and the time when the signal reaches the baseline.

$$\text{Objective Function} = \omega_1 \times \frac{PH}{PH_{max}} + \omega_2 \times \left(\frac{RES}{RES_{min}}\right)^{-1} + \omega_3 \times \left(\frac{REC}{REC_{min}}\right)^{-1} \qquad (11.28)$$
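A minimal sketch of how Equation 11.28 could be evaluated for a set of candidate designs is given below. The peak heights and time parameters are invented placeholders; in the study itself these come from the CFD simulations described above.

```python
def objective(ph, res, rec, ph_max, res_min, rec_min, w1, w2, w3):
    """Equation 11.28: standardized peak height plus inverted, standardized time factors."""
    return (w1 * ph / ph_max
            + w2 * (res / res_min) ** -1
            + w3 * (rec / rec_min) ** -1)

# Hypothetical simulated signal parameters for three candidate FIA designs (invented values)
designs = {
    "A": {"ph": 0.06, "res": 120.0, "rec": 260.0},   # peak height (mM), times (s)
    "B": {"ph": 0.08, "res": 150.0, "rec": 300.0},
    "C": {"ph": 0.05, "res": 100.0, "rec": 220.0},
}
ph_max = max(d["ph"] for d in designs.values())
res_min = min(d["res"] for d in designs.values())
rec_min = min(d["rec"] for d in designs.values())

w1, w2, w3 = 0.4, 0.3, 0.3   # weighting factors, w1 + w2 + w3 = 1
for name, d in designs.items():
    score = objective(d["ph"], d["res"], d["rec"], ph_max, res_min, rec_min, w1, w2, w3)
    print(f"design {name}: objective = {score:.3f}")
```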
Figure 11.14. Response surfaces of the objective function for two different settings of the weighting parameters, with a flow rate of 0.25 mm3 s−1. (A) ω1 = 0.4, ω2 = ω3 = 0.3; (B) ω1 = 1, ω2 = ω3 = 0.
Standardization was achieved by dividing each signal parameter by the minimal or maximal value of that parameter observed in the design experiment. Peak heights have to be maximized for optimal sensitivity of the measurement, and the time factors have to be minimized to increase the throughput of the system. In the linear combination, weighting factors ωi (with ω1 + ω2 + ω3 = 1) were introduced. The ideal FIA setup has a maximal value of the objective function of 1. Figure 11.14 shows the objective function response surfaces for two different settings of the weighting parameters. In the simulations, the flow rate was kept constant at 0.25 mm3 s−1. The first response surface is the result of a maximization process in which the objective function includes the peak height (ω1 = 0.4) and both time parameters (ω2 = ω3 = 0.3). The
maximal value of the objective function, which results in an optimal flow injection analysis setup, corresponds to a total length of 160 mm and a tubing inner diameter of 0.25 mm. The second response surface represents the outcome of a maximization of the objective function that only includes the peak height. The emphasis here is on the design of an extremely sensitive sensor, neglecting the response time and the sensor recovery time (ω1 = 1; ω2 = ω3 = 0). This corresponds to an FIA setup with a total length of 140 mm and a tubing inner diameter of 0.5 mm. Theoretically, an increasing tubing length induces higher peak heights, due to longer reaction times of the enzyme with the substrate, but this is countered by the dispersion of the H2O2 band in the microfluidic channel. It was observed that for a glucose/glucose oxidase system, the effect of dispersion is more pronounced than the reaction time effect when microfluidic channels with a length of more than 140 mm are involved. It is clear that other enzyme-substrate systems with different enzyme kinetics and different diffusion properties will have different optimal system parameters.
Conclusions

There is a growing potential for biosensors in the food industry as online detection methods, as industry looks to replace conventional methods that do not allow high sampling rates and are time consuming. The effective development and application of biosensors for food quality control depends on the elimination of some of their shortcomings, such as short-term stability, low measurable ranges of analytes, limited shelf life of the biosensor, and sensitivity to process conditions such as temperature and pH (Bilitewski et al. 1997, Mello and Kubota 2002). Because of the wide variety of fields involved in the development of online biosensors, successful research and development will require a multidisciplinary approach. In this regard, the role of numerical methods is important because it reduces the cost of constructing dozens of prototypes and provides a tool to determine the relevant design parameters before rushing into the prototyping stage. The response of biosensors is controlled by the kinetics of the recognition and transduction reactions and by mass transfer phenomena. Determination of the rate-limiting step is clearly essential for the understanding, optimization, and control of biosensor performance criteria. This chapter provided an introduction to the
transport phenomena involved and to the principles of the numerical design of flow-type biosensors.

Acknowledgements

The Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen) and the Fund for Scientific Research - Flanders (Belgium) (Research Grant and Research Project [FWO G.0298.06]) are gratefully acknowledged for their financial support. Steven Vermeir is holder of a research grant of IWT and Pieter Verboven is a postdoctoral fellow of the Fund for Scientific Research - Flanders (Belgium).

References

Barak-Shinar D, M Rosenfeld, J Rishpon, T Neufeld, and S Abboud. 2004. Computational fluid dynamic model of diffusion and convection processes in electrochemical sensor. IEEE Sensors Journal, 4(1).
Bailey JE and DF Ollis. 1986. Biochemical engineering fundamentals. New York, NY: McGraw-Hill Book Co. pp. 86–108.
Baronas R, F Ivanauskas, and J Kulys. 2002. Modelling dynamics of amperometric biosensors in batch and flow injection analysis. Journal of Mathematical Chemistry, 32(2), 225–237.
Bilitewski U and I Rohm. 1997. Biosensors for process monitoring. In: Handbook of Biosensors and Electronic Noses: Medicine, Food and the Environment, E Kress-Rogers, Ed. USA: CRC Press, Inc. pp. 435–468.
Bilitewski U. 1994. Enzyme electrodes for food analysis. In: Food Biosensor Analysis, G Wagner and GG Guilbaut, Eds. New York: Marcel Dekker, Inc. pp. 31–61.
Bird RR, WE Stewart, and EN Lightfoot. 2002. Transport Phenomena, 2nd ed. USA: John Wiley & Sons, Inc.
Bousse L, C Cohen, T Nikiforov, A Chow, AR Kopf-Sill, R Dubrow, and JW Parce. 2000. Electrokinetically controlled microfluidic analysis systems. Annu Rev Biophys Biomol Struct. 29, 155–181.
Buerk DG. 1993. Biosensors: Theory and Applications. Lancaster, PA: Technomic Publishing Company, Inc.
Eggins BR. 2002. Chemical Sensors and Biosensors. West Sussex, England: John Wiley & Sons, Inc. pp. 1–9.
Erickson D and D Li. 2004. Integrated microfluidic devices. Analytica Chimica Acta, 507, 11–26.
Erickson D, D Li, and UJ Krull. 2003. Modeling of DNA hybridization kinetics for spatially resolved biochips. Analytical Biochemistry, 317, 186–200.
Fu LM, RJ Yang, and GB Lee. 2003. Electrokinetic focusing injection methods on microfluidic devices. Anal Chem. 75(8), 1905–1910.
Gaskell DR. 1992. An Introduction to Transport Phenomena in Material Engineering. New York: Macmillan Publishers, Ltd. pp. 102–115, 522–527.
Glaser RW. 1993. Antigen-antibody binding and mass transport by convection and diffusion to the surface: a two dimensional computer model of binding and dissociation kinetics. Analytical Biochemistry, 213, 152–161.
Grattarola M, A Cambiaso, L Delfino, G Verreschi, D Ashworth, P Vadgama, and A Maines. 1996. Modelling and simulation of a diffusion limited glucose biosensor. Sensors and Actuators B: Chemical, 33(5), 203–207.
Hayes MA, I Kheterpal, and AG Ewing. 1993. Effects of buffer pH on electroosmotic flow control by an applied radial voltage for capillary zone electrophoresis. Anal Chem. 65(17), 27–31.
Ivnitski D, I Abdel-Hamid, P Atanasov, E Wilkins, and S Stricker. 2000. Application of electrochemical biosensors for detection of food pathogenic bacteria. Electroanalysis, 12(5), 317–325.
Karniadakis G, A Beskok, and N Aluru. 2005. Microflows and Nanoflows: Fundamentals and Simulation. Interdisciplinary Applied Mathematics, Vol. 29, SS Antman, JE Marsden, and L Sirovich, Eds. Springer Science+Business Media, Inc., USA.
Kaunietis I, R Šimkus, V Laurinavičius, and F Ivanauskas. 2005. Apparent parameters of enzymatic plate-gap electrode. Nonlinear Analysis: Modelling and Control, 10(3), 211–221.
Kim JH, A Marafie, X Jia, JV Zoval, and MJ Madou. 2006. Characterization of DNA hybridization kinetics in a microfluidic flow channel. Sensors and Actuators B: Chemical, 113(1), 281–289.
Kim N and I Park. 2003. Application of a flow-type antibody sensor to the detection of Escherichia coli in various foods. Biosensors and Bioelectronics, 18, 1101–1107.
Koch M, A Evans, and A Brunnschweiler. 2000. Microfluidic Technology and Applications. Research Studies Press Ltd: Baldock, Hertfordshire, England.
Koo J and C Kleinstreuer. 2003. Liquid flow in microchannels: experimental observations and computational analyses of microfluidics effects. J. Micromech. Microeng., 13, 568–579.
Lammertyn J, P Verboven, EA Veraverbeke, S Vermeir, J Irudayaraj, and BM Nicolai. 2006. Analysis of fluid flow and reaction kinetics in a flow injection analysis biosensor. Sensors and Actuators B, 114, 728–736.
Lebedev K, M Salvador, and P Stroeve. 2006. Convection, diffusion and reaction in a surface-based biosensor: Modeling of cooperativity and binding site competition on the surface and in the hydrogel. Journal of Colloid and Interface Science, 296, 527–537.
Li D. 2004. Electrokinetics in Microfluidics. Interface Science and Technology, Vol. 2. Elsevier Academic Press.
Liu X, D Erickson, D Li, and UJ Krull. 2004. Cationic polymer coatings for design of electroosmotic flow and control of DNA adsorption. Analytica Chimica Acta, 507, 55–62.
Marangoni AG. 2003. Enzyme Kinetics: A Modern Approach. Hoboken, NJ: John Wiley & Sons, Inc. pp. 41–60, 174–192.
Mason T, AR Pineda, C Wofsy, and B Goldstein. 1999. Effective rate models for the analysis of transport-dependent biosensor data. Mathematical Biosciences, 159, 123–144.
Mello LD and LT Kubota. 2002. Review of the use of biosensors as analytical tools in the food and drink industries. Food Chemistry, 77(2), 237–256.
Moser A. 1985. Rate equations for enzyme kinetics. In: Biotechnology, Volume 2: Fundamentals of Biochemical Engineering. H Brauer, Ed. pp. 199–226. VCH Verlagsgesellschaft: Germany.
Neikov A and S Sokolov. 1995. Generalised model for enzyme amperometric biosensors. Analytica Chimica Acta, 307, 27–36.
Newman DJ, Y Olabiran, and CP Price. 1997. Bioaffinity agents for sensing systems. In: Handbook of Biosensors and Electronic Noses: Medicine, Food and the Environment. E Kress-Rogers, Ed. pp. 59–90. USA: CRC Press, Inc.
Polson NA and MA Hayes. 2001. Microfluidics controlling fluids in small places. Anal Chem. 73(11), 312A–319A.
Prodromidis MI and MI Karayannis. 2002. Enzyme based amperometric biosensors for food analysis. Electroanalysis, 14(4), 241–261.
Rajendran V and J Irudayaraj. 2002. Detection of glucose, galactose, and lactose in milk with a microdialysis-coupled flow injection amperometric sensor. Journal of Dairy Science, 85, 1357–1361.
Ramsay G. 1998. Commercial Biosensors. John Wiley & Sons, New York, USA.
Sadana A and AM Beelaram. 1995. Antigen-antibody diffusion-limited binding kinetics of biosensors: a fractal analysis. Biosensors and Bioelectronics, 10(3), 301–316.
Sharma SK, R Singhal, BD Malhotra, N Sehgal, and A Kumar. 2004. Lactose biosensor based on Langmuir–Blodgett films of poly(3-hexyl thiophene). Biosensors and Bioelectronics, 20(3), 651–657.
Sharp KV, RJ Adrian, JG Santiago, and JI Molho. 2002. Liquid flow in microchannels. In: The MEMS Handbook, Mohamed Gad-el-Hak, Ed. CRC Press.
Shuler ML and F Kargi. 1992. Bioprocess Engineering, Basic Concepts, 1st ed., Prentice Hall, NJ.
Squires TM and SR Quake. 2005. Microfluidics: Fluid physics at the nanoliter scale. Reviews of Modern Physics, 77.
Tang G, D Yan, C Yang, H Gong, J Chee Chai, and YC Lam. 2006. Assessment of Joule heating and its effects on electroosmotic flow and electrophoretic transport of solutes in microfluidic channels. Electrophoresis, 27, 628–639.
Thévenot DR, K Toth, RA Durst, and GS Wilson. 2001. Electrochemical biosensors: recommended definitions and classification. Biosensors & Bioelectronics, 16, 121–131.
Weigl BH, RL Bardell, and RC Catherine. 2003. Lab-on-a-chip for drug development. Advanced Drug Delivery Reviews, 55, 349–377.
Whitaker JR. 1994. The need for biosensors in the food industry and food research. In: Food Biosensor Analysis, G Wagner and GG Guilbaut, Eds. pp. 13–30. New York: Marcel Dekker, Inc.
White SF and APF Turner. 1997. Enzymes, cofactors and mediators. In: Handbook of Biosensors and Electronic Noses: Medicine, Food and the Environment, E Kress-Rogers, Ed. pp. 43–57. USA: CRC Press, Inc.
Yang RJ, CC Chang, SB Huang, and GB Lee. 2005. A new focusing model and switching approach for electrokinetic flow inside microchannels. J. Micromech. Microeng. 15, 2141–2148.
Zhao TS and Q Liao. 2002. Thermal effects on electro-osmotic pumping of liquids in microchannels. J. Micromech. Microeng. 12, 962–970.
Chapter 12

Techniques Based on the Measurement of Electrical Permittivity

Malcolm Byars
Overview

Many foods are composed of ingredients that are electrical dielectrics, that is, they are insulators and do not conduct electricity (at least not in the way a copper wire conducts electricity). Examples of dielectric food ingredients include oils, fats, and most dry cereals. Some examples of finished products that are dielectrics include chocolate, butter, biscuits, bread, breakfast cereals, and vegetable oils. For these classes of materials, measurement of their dielectric properties using capacitive measurement techniques can be a very useful tool in both testing and production environments. In this chapter, we will first explain the fundamental concepts of electrical capacitance and dielectric measurements and then describe some techniques that are just emerging from the research phase and that have the potential for routine use in food testing and monitoring.
Electrical Capacitance

When an alternating voltage (V) is connected between two parallel metallic plates (electrodes) that are separated by an air gap, a small current (IA) will flow between the two plates. This arrangement of two metallic plates, shown in Figure 12.1(a), is the simplest form of an electrical capacitor. The property of a capacitor to pass more or less current for a fixed applied voltage is termed its capacitance. More current will
Figure 12.1. Basic electrical capacitors.
flow as the capacitance increases and if the applied voltage is sinusoidal, the current waveform will lead that of the voltage by 90 degrees. This is in marked contrast to the current flow in a conducting material, where the current will be in-phase with the applied voltage. The capacitance depends on the electrode geometry. It increases with electrode area, decreases with electrode spacing, and also depends on the nature of the material located between the two electrodes. The unit of capacitance is the farad (F), but this is an unrealistically large unit for most practical applications in electronics and physics. In most of the applications described in this chapter, the capacitances between electrodes are very small and are typically measured in femtofarads (fF), where 1 fF = 10−15 F. Modern electronic circuitry can measure changes in capacitance of 0.1 fF or less at speeds in excess of 10,000 measurements per second. If the air gap between the electrodes is replaced by an insulating material (dielectric), as shown in Figure 12.1(b), the current that flows between the plates will increase to a new value (IK ). The increase in the current is caused by a property of the insulating material known
as permittivity, and the introduction of the insulating material therefore increases the capacitance between the two electrodes. The permittivity of a dielectric material is normally defined relative to that of air, which, by definition, has a relative permittivity of 1. The relative permittivity (K) of any insulating material is simply the ratio (IK/IA), which is also known as the dielectric constant of the material. Typical values of K for most common insulating materials lie in the range from 1 to 10, although a few materials have much higher values (for example, pure water has a value of K = 80).

Capacitance Sensors

In their simplest form, capacitance sensors consist of two electrodes. The test sample is either located in the space between the two electrodes or close to them. In most cases, the capacitance between the two electrodes will be very small compared with the capacitance between either electrode and earth. This situation will occur, for example, if coaxial cables are used to connect the electrodes to the measurement circuitry, or earthed screens are used to shield the electrodes from external effects. If one of the sensor electrodes is itself earthed, it becomes very difficult to measure small changes in the sensor capacitance in the presence of the much larger parallel capacitance to earth of the connecting cables and screens. Moreover, these capacitances may change significantly, for example, if the cables are flexed. For these reasons, the preferred configuration for many capacitance sensors is to use two unearthed electrodes together with a capacitance measurement method that ignores any capacitance to earth. It then becomes much easier to measure small changes in capacitance between the electrodes.
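The effect of a dielectric on a simple two-electrode sensor can be put into numbers with the textbook parallel-plate relation C = K ε0 A / d. This formula is not given in the chapter but is the standard expression for the idealized geometry of Figure 12.1; the electrode dimensions and the dielectric constant below are assumptions chosen only to land in the small-capacitance range discussed above.

```python
EPS0 = 8.854e-12          # permittivity of free space, F/m

def parallel_plate_capacitance(area_m2, gap_m, K=1.0):
    """Ideal parallel-plate capacitance C = K * eps0 * A / d (fringing fields ignored)."""
    return K * EPS0 * area_m2 / gap_m

area = 10e-3 * 10e-3      # 10 mm x 10 mm electrodes (assumed)
gap = 5e-3                # 5 mm electrode spacing (assumed)

c_air = parallel_plate_capacitance(area, gap, K=1.0)    # air gap
c_fat = parallel_plate_capacitance(area, gap, K=2.5)    # assumed K for a fat/oil-like dielectric

print(f"air-filled sensor:       {c_air * 1e15:.0f} fF")
print(f"dielectric-filled sensor: {c_fat * 1e15:.0f} fF")
```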
Capacitance Measurement Methods

There are many possible methods available for measuring the capacitance between pairs of unearthed electrodes, and Baxter (1997) gives a good summary of available techniques. Traditional methods apply a sinusoidal voltage between the electrodes and measure the current that flows between them. However, an alternative method, which is simple,
Figure 12.2. Basic capacitance measurement circuit.
robust, and has proved very successful in many practical applications, uses square wave excitation of the electrodes. The basic idea is shown in Figure 12.2. One electrode of the unknown capacitor Cx is connected to the junction of two electronic switches, S1 and S2, which operate alternately to generate a high frequency square waveform at a frequency (f) of a few megahertz (MHz). The other electrode is held at virtual ground potential (0V) by connecting it to the inputs of two inverting operational amplifiers, A1 and A2, via a further pair of electronic switches, S3 and S4. These switches also operate alternately at the same frequency, but are in phase quadrature with S1 and S2. On each transition of the square excitation waveform, the unknown capacitor Cx is alternately charged and discharged. Consequently, current pulses of opposite polarity flow into the capacitors (C) via the switches S3 and S4, which act as synchronous demodulators. Positive pulses flow into C (A1) via S3, and negative pulses flow into C (A2) via S4. The resultant build up of charge causes the voltage across these capacitors to increase. These increasing voltages cause the outputs of each inverting amplifier to increase by a much larger amount, and this, in turn, generates a current of opposing polarity through the feedback resistors (Rf) into the capacitors until the net stored charge is zero. Hence, the voltage across
each capacitor (C) is maintained at zero (or virtual earth) potential (0 V) by the complementary output voltages, Va and Vb. Ignoring any secondary effects, the output voltage from this circuit, Vo (= Va − Vb), is given by Equation 12.1:

$$V_o = 2 \cdot f \cdot V_s \cdot R_f \cdot C_x \qquad (12.1)$$
where Vs is the amplitude of the applied voltage (15V in Figure 12.2) and f is the excitation frequency. Rf and Cf determine the frequency response of the circuit. Equation 12.1 shows that the output voltage is directly proportional to the unknown capacitance Cx . Moreover, this circuit has the useful property of not responding to capacitance to earth and is therefore a very effective method for measuring changes in very small capacitances between pairs of unearthed electrodes. It has the further advantage that the circuit can be made to measure conductance rather than capacitance by operating S3 and S4 in-phase with S1 and S2, rather than in quadrature.
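Equation 12.1 makes it easy to see what output voltage a given sensor capacitance produces. The sketch below evaluates it for a few capacitance values; the excitation amplitude matches the 15 V shown in Figure 12.2, while the excitation frequency and feedback resistance are assumed example values, not figures taken from the chapter.

```python
def output_voltage(cx_farads, f_hz=2.5e6, vs=15.0, rf=1.0e5):
    """Equation 12.1: Vo = 2 * f * Vs * Rf * Cx (secondary effects ignored)."""
    return 2.0 * f_hz * vs * rf * cx_farads

# f = 2.5 MHz and Rf = 100 kohm are assumed example values
for cx_fF in (1.0, 10.0, 100.0):
    vo = output_voltage(cx_fF * 1e-15)
    print(f"Cx = {cx_fF:5.1f} fF  ->  Vo = {vo * 1e3:7.2f} mV")
```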
Applications of Capacitance Sensors

Capacitance sensors can be used for a very wide range of applications. Examples include online uniformity monitoring of many conveyed products, product moisture measurement, two-phase flow monitoring, and high-speed check weighing. The measurement technique is usually noninvasive and can operate at high speeds. In the remaining pages of this chapter, we will give examples of some innovative applications that have been developed under a number of separate research projects.
Applications Using Single-electrode Pairs

Missing Biscuit Detection

A typical application is the detection of missing wafers in chocolate-coated wafer bars. Faulty bars are detected by measuring the loss-free (capacitive) and lossy (conductive) components of the impedance of a capacitance sensor when the bar is located inside the sensor. The main difference between a normal finger containing a wafer and a solid
Figure 12.3. Prototype missing wafer detector system.
chocolate finger is in the value of the lossy component (conductance) of the sensor impedance. The ratio of the sensor capacitance/conductance has been found to be a reliable measurement of the status of the bar finger. Figure 12.3(a) shows an experimental measurement system, and Figure 12.3(b) shows the capacitance sensor with a four-finger test chocolate wafer bar. The capacitance electrodes can be arranged in a number of different ways; some possible arrangements are shown in cross-section in Figure 12.4 for a test object in the form of a two-finger wafer bar. In the figures, S indicates a source (excitation) electrode, D indicates a detector (virtual ground) electrode, and E indicates an earthed screen. The third arrangement (c) is particularly attractive in many applications, because there is no physical contact with the conveyed objects under test. In this configuration, the source electrode (S) is excited with an alternating signal and the current that flows into the detector electrode
Figure 12.4. Capacitance electrode configuration options.
(D) is measured. When a bar enters the sensor, some of the electric field lines between the source and detector electrodes are diverted to the earthed base plates by the bar, causing the capacitive impedance measured between the source and detector electrodes to change. This allows nonstandard bars to be detected and rejected online.

Moisture Measurement

Capacitance sensors can be used to measure the moisture content of a wide range of materials in solid or particle format. The measurement is nondestructive, and either individual samples or complete packs of products can be measured on a continuous basis. Moreover, if the sample material is homogeneous, then the measurement is largely independent of sample size or mass. The measuring technique is instantaneous and is based on measuring the capacitive admittance (= 1/impedance) of a sample inside the sensor. Depending on the shape of the sample, the sensor can consist of a set of either parallel or adjacent electrodes, and some examples of possible sensor electrode configurations are shown in Figure 12.5. The technique requires the sensor to be calibrated initially using samples of material of known moisture content. The measured parameters of capacitance (C), conductance (G), or their ratio (G/C) may then be used to determine the moisture contents of unknown samples of similar material. In a series of tests on breakfast cereal bars with moisture contents below 20% by weight, the ratio G/C was found to be largely independent of the sample size or mass, and this parameter can therefore be used as a particularly useful indicator of moisture content. Figure 12.6 shows a set of measured values of G/C for cereal wheat-flake biscuits taken from various stages of the production process for
Figure 12.5. Basic capacitance moisture sensors.
both whole and half biscuits, and confirms that the technique is largely unaffected by the size of the sample. A further example, in this case for loose wheat grains of varying moisture content, is shown in Figure 12.7.

Applications Using Multiple Electrode Pairs

Electrical Capacitance Tomography

One very interesting application of capacitance measurement technology is for producing permittivity images of the contents of closed
Figure 12.6. Measured moisture content of cereal biscuits.
Figure 12.7. G/C against moisture content for wheat.
vessels. This technique is known as electrical capacitance tomography (ECT) and can be used to measure and display the concentration distribution of a mixture of two insulating (dielectric) fluids, such as oil, gas, plastic, glass, and some minerals, located inside a vessel. The measurement can be completely noninvasive if the vessel walls are nonconducting. The basic idea is to surround the vessel with a set of electrodes, as shown in Figure 12.8(a) and to take capacitance measurements between each unique pair of electrodes. From these measurements, the permittivity distribution of the mixture (which is related to the concentration of one of the fluids) can be deduced. In principle, vessels of any cross-section can be imaged and an example of a simple cylindrical eight-electrode sensor is shown in Figure 12.8(b). The concentration distribution is normally plotted on a fairly coarse pixel grid, because the relatively small number of available measurements limits the possible image resolution. In the sample images shown below, a red/green/blue color scale shows areas of high concentration as red and areas of low concentration as blue. See Figure 12.9. Although the resolution of ECT images is relatively low, they can be captured at high speeds, typically 200 frames (images) per second
Figure 12.8. A basic eight-electrode cylindrical ECT sensor.
Figure 12.9. Sample ECT images.
for an eight-electrode sensor. If the fluid is in motion and images are captured at two axial locations, correlation techniques can be used to calculate the velocity profile across the vessel cross-section as well as the concentration profile. This enables the flow profile and overall flow rate in two-phase flow systems to be calculated. ECT can be used in a wide range of applications, including monitoring fluidized beds, flow rate measurement in pneumatic conveying systems, flame and combustion imaging, product uniformity monitoring and sensing, high-speed check weighing, and the monitoring of oil-gas flows. A few applications that have been used for food follow.

High-speed Check Weighing

In many manufacturing processes, nominally identical products are transported through the production process on moving conveyor belts. Each item produced by the same manufacturing plant should be similar and, in an ideal world, the plant would operate within a closed feedback loop to ensure that the products remain within an acceptable normal range. A typical example of this occurs in processed food manufacturing, where products are usually required to have similar shapes and identical masses. For these manufacturing processes to operate under closed-loop control, a sensor that can measure the bulk or mass of the product is required so that the sensor output can be used to correct the process continuously. In some applications, a conventional online check weigher can be used as the sensor. However, for many processes, check weighers cannot be used because of speed or contact problems. Where the materials used in the products are predominantly dielectrics, ECT can be used as a form of high-speed noninvasive check weigher and can also provide an output to monitor and control the manufacturing process. Some examples of food products that can be measured in this way include chocolate, butter, and most fat- or oil-based products where the water content is low. Other suitable nonfood products are plastics, glass, many minerals, and hydrocarbons. A typical ECT bulk measurement sensor electrode configuration for conveyed products is illustrated in Figure 12.10, which shows a view of the cross-section of the conveyor belt, the sensor electrodes, and the sample test object, which is moving along an axis orthogonal to the page. A photograph of an experimental sensor of this type is shown in Figure 12.11(a).
Figure 12.10. Capacitance sensor electrode configuration (cross-section).
Figure 12.11. (a) Test bar inside a 12-electrode ECT sensor; (b) ECT image of the bar.
The number of electrodes that can be used above and below the bars depends on the sensitivity of the capacitance measurement circuitry. In the example shown here, 12 electrodes have been used, with 6 located above and 6 below the conveyor belt. If the conveyor belt is made from an insulating material, such as plastic, it is possible to locate the lower array of electrodes below the belt, so the measurement electrodes need not be in contact with the product. Hence, in this application, the bulk sensor can be completely noninvasive, which minimizes the risk of product contamination.

The width of each bar sensor electrode array is constrained by the widths of individual bars and the spacing between the lines of bars. Because there is little point in making the sensors much longer than the bars themselves (this simply increases the standing sensor capacitances), the axial length of the sensor is also defined approximately by the bar dimensions.

Figure 12.11(a) shows a test bar located inside a 12-electrode ECT sensor and Figure 12.11(b) shows a typical ECT image of the bar (permittivity profile) obtained using this sensor. The object bulk or volume can be calculated from the permittivity profiles. Tests carried out on a set of accurately machined plastic bars showed that it was possible to obtain measured values for the bar volumes to within 2% of the actual bar volumes using a standard laboratory ECT system and the 12-electrode sensor shown in Figures 12.10 and 12.11(a).
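A minimal sketch of how the bulk of a conveyed product might be estimated from such a permittivity profile is given below. It assumes the normalized pixel values scale with the fraction of each pixel occupied by product; the calibration factor, pixel dimensions, and tolerance check are hypothetical and would in practice be derived from reference objects such as the machined test bars mentioned above.

```python
import numpy as np

def estimate_bar_volume(perm_image, pixel_area, axial_length, cal_factor=1.0):
    # The sum of the normalized pixel values times the pixel area
    # approximates the product's cross-sectional area; multiplying by
    # the imaged axial length and a calibration factor gives a volume.
    cross_section = float(np.sum(perm_image)) * pixel_area
    return cal_factor * cross_section * axial_length

def within_tolerance(measured, nominal, tolerance=0.02):
    # Closed-loop check weighing: flag products whose measured bulk
    # falls outside a nominal band (here 2%, echoing the reported
    # accuracy for the plastic test bars).
    return abs(measured - nominal) / nominal <= tolerance
```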
Two-phase Flow Measurement Using ECT

In two-phase flows, there is a mixture of two materials inside the vessel or pipeline. Examples include gravity or pneumatic conveying of solids, such as wheat grains, where the two components of the mixture are a solid and air. A second category is the flow of a mixture of a liquid (such as oil) and a gas. It is usually difficult, and often impossible, to measure these types of flow accurately using conventional flow measurement technology.
Figure 12.12. Tomoflow measurement principle.
Any successful technique for measuring two-phase flows, where the concentration and velocity vary across the vessel, must be based on the general equation (Equation 12.2) for calculating the instantaneous flow Q(t) through a vessel of cross-sectional area S:

Q(t) = \int_S C_o(s) \cdot V(s) \, dS    (12.2)
where Q(t) is the flow rate as a function of time t, C_o(s) is the concentration profile over the vessel cross-section S, and V(s) is the velocity profile over this section. The overall mass flow can be obtained by multiplying Q(t) by the material density and then integrating over time.

ECT is one of the few techniques that can be used to implement this equation for a mixture of two dielectric materials, and it has been used successfully to measure the flow of mixtures such as granular solids and gases, or oils and gases, where the flow is nonuniform across the vessel or pipe. The measurement can be completely noninvasive for many material mixtures.

The principle of operation is illustrated in Figure 12.12. A twin-plane ECT system and capacitance sensor are used to produce sets of concentration profile frames at two axial pipe locations at high frame rates. Correlation techniques are then used to convert these two sets of concentration profiles into velocity profiles across the pipe. The products of the concentration and velocity profiles are then integrated across the pipe area to produce an accurate measurement of the volumetric flow as defined in Equation 12.2.
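In discrete form, Equation 12.2 reduces to a sum over the image pixels. The sketch below shows one way this might be computed from a pair of concentration and velocity images; the pixel area, material density, and frame interval are assumed inputs rather than values given in the chapter.

```python
import numpy as np

def volumetric_flow(concentration, velocity, pixel_area):
    # Discrete Equation 12.2: Q(t) is the sum over pixels of
    # C_o(s) * V(s) * dS, where dS is the area represented by one pixel.
    return float(np.sum(concentration * velocity)) * pixel_area

def total_mass(q_series, density, frame_interval):
    # Overall mass flow: multiply Q(t) by the material density and
    # integrate over time (here a simple rectangular sum over frames).
    return density * frame_interval * float(np.sum(q_series))
```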
This technique measures the velocity of the interfaces between the two dielectric materials. In the case of a weak mixture of granular solids and air, this velocity corresponds to the speed of the granules, so the technique measures the flow rate of the granules. However, in the case of a bubbly oil-gas mixture, the interface velocity will be that of the gas bubbles if the oil is the majority component of the mixture, and so the measured flow rate will be that of the gas bubbles.

The range of measurable flow velocities depends on the particular flow regime and the design of the capacitance sensor. The measurement principle requires the concentration profile at the first measurement plane to be sufficiently unchanged when it reaches the second measurement plane to allow correlation techniques to be used to extract the flow velocity. The achievable velocity resolution depends on the number of frames of data that can be captured during the time taken for the flow to move between the two sensing planes. If the flow is reasonably stable, the two concentration profiles will be similar even if the sensing planes are widely separated, allowing a relatively large number of data frames to be captured and successfully correlated; this means that relatively high flow velocities can be measured with reasonable resolution. However, if the flow regime is more chaotic, the sensing planes will need to be closely spaced to ensure that the concentration profiles are sufficiently similar to be correlatable, and this limits the highest flow velocities that can be measured in practice.

The maximum measurable flow rate also depends on the number of sensing electrodes used at each measurement plane. A larger number of electrodes increases the accuracy of the concentration measurement but reduces the maximum frame capture rate. Conversely, a smaller number of electrodes produces less accurate concentration profile measurements, but data can be captured at higher frame rates. Practical capacitance sensors for flow measurement currently have between 4 and 12 electrodes located around the pipe circumference at each measurement location. ECT has been used successfully to measure flow rates up to 20 meters per second.

A complete flow measurement system consists of a twin-plane guarded multielectrode capacitance sensor, a multiplane capacitance measurement unit controlled by a personal computer, and a comprehensive suite of data capture and analysis software.
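The correlation step described above can be sketched as follows: the concentration signals from the upstream and downstream sensing planes are cross-correlated, and the lag of the correlation peak gives the transit time between the planes. The plane spacing, frame rate, and signal names are assumptions used only for illustration.

```python
import numpy as np

def transit_velocity(upstream, downstream, plane_spacing, frame_rate):
    # Cross-correlate mean (or per-pixel) concentration signals from the
    # two sensing planes; the peak lag is the transit time in frames.
    a = np.asarray(upstream, dtype=float)
    b = np.asarray(downstream, dtype=float)
    a -= a.mean()
    b -= b.mean()
    corr = np.correlate(b, a, mode="full")
    lag_frames = int(np.argmax(corr)) - (len(a) - 1)
    if lag_frames <= 0:
        return None  # profiles too dissimilar, or no downstream delay found
    transit_time = lag_frames / frame_rate
    return plane_spacing / transit_time

# Example call (hypothetical values): a 0.1 m plane spacing imaged at
# 200 frames per second, as in an eight-electrode sensor.
# v = transit_velocity(sig_plane1, sig_plane2, plane_spacing=0.1, frame_rate=200.0)
```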
Figure 12.13. Experimental tomographic two-phase flow measurement system.
Figure 12.14. Tomographic flow measurement of wheat.
An experimental flow sensor and ECT measurement system installed on an oil/gas flow rig is shown in Figure 12.13. Some typical flow measurement results for a two-phase mixture of wheat and air in a gravity flow are shown in Figure 12.14.

Conclusions

The precision capacitance measurement technology described in this chapter has been developed over the last 15 years and is largely based on original research carried out by Professor Maurice Beck and his team at UMIST (now part of the University of Manchester) in the UK. As of 2006, most applications have been in the applied research field. However, we are aware that at least one major international company is now developing this technology for use in food processing and manufacture, and it seems likely that commercial measuring devices based on this technology will start to appear over the next 2 to 3 years.

Further Reading

Additional information on the technology described in this chapter can be found in the following references:
Baxter LK. 1997. Capacitive Sensors. IEEE Press. ISBN 0-7803-1130-2.
Beck MS and A Plaskowski. 1987. Cross-Correlation Flowmeters: Their Design and Application. IOP Publishing, Bristol, UK.
Byars M. 2001. Developments in electrical capacitance tomography. Proceedings of the 2nd World Congress on Industrial Process Tomography, Hannover, Germany, August 29–31.
Byars M and J Pendleton. 2005. A high-speed non-contact check weigher. Proceedings of the 4th World Congress on Industrial Process Tomography, Aizu, Japan, September.
Hunt A, J Pendleton, and M Byars. 2004. Non-intrusive measurement of volume and mass using electrical capacitance tomography. ESDA2004, Manchester, UK, July 19–22.
Williams RA and MS Beck. 1995. Process Tomography. Butterworth-Heinemann Ltd. ISBN 0-7506-0744-0.
Index
A AAS. See Atomic-absorption spectroscopy Acoustic impedance, 50–51 Acoustic spectrometer, low frequency, nondestructive texture analysis of porous cereal products with use of, 26 Adiabatic compressibility, 47 Adipose tissue characterization, tentative FT-Raman band assignments for, 149t Adsorption of species on capillary walls, 294 Afseth, N. K., 158 Agglomeration automated image analysis and, 24 image analysis and detection of, 193 Aguilera, J. M., 150 Airsense (Germany), 243 Alcoholic beverages MIR spectroscopy and, 135–136 Alcoholic beverages, alcohol in, NIR analysis and, 115–116 Alcoholic beverages, calibration results for, 115t Alginates, 187 Al-Jowder, O., 131 Allais, I., 23 Alpha M.O.S. FOX 3000, 248, 249 discrimination of packaging material based on level of plasticizers and, 270 Alpha MOS (France), 243
American chocolate, 179–180 particle size distributions for bars of, 180 American Oil Chemists’ Society, 110–111 American Society for Testing and Materials, 93–94 Amide I Raman band envelopes, 147 Amide III Raman band envelopes, 147 Angular frequency, 47 ANN. See Artificial neural networks ANSYS CFX, 303 Antibody-antigen interactions, 301 AOAC. See Association of Official Analytical Chemists AOCS. See American Oil Chemists’ Society Aparicio, R., 263 Apolar molecules, detecting, Raman spectroscopy and, 144 Appearance of food, 283 Apple juice beverages, MIR spectroscopy and, 135–136 Apples evaluating maturity of, 271, 273–274 with conducting polymer-based sensing system, 273 sorting method for, based on image analysis, 20 spin echo pulse sequence and sample image of internal browning in, 220 Applied Sensors, 239, 243, 246 APV Homogenizer, screenshot of Insitec software monitoring droplet size of food emulsion produced by, 191, 192 Archibald, D. D., 152
Aroma, 283 characterizing food product in terms of, 237 Aroma exposure, to sensor array, 241 AromaScan (UK), 239, 243 Aromatic amino acid side chains, Raman spectroscopy and, 147 Artificial neural networks, 91, 251, 261, 262 Asher, A., 147 ASPECT Magnet Technologies LLC, 228 Association of Official Analytical Chemists, 132 ASTM. See American Society for Testing and Materials At-line analysis, 4, 16, 81–82 Atomic-absorption spectroscopy, 86 Attenuated total reflectance (ATR), 125, 126–127 crystals, 120 schematic of internal reflection in crystal, 126 ZnSe crystal, 127 Authenticity, 120 mid-infrared spectroscopy and, 130t MIR spectroscopy and, 138 Automatic sampler for butter, 105 for milk powder, 103 Automation of production, increased use of, 5 B Baby, R. E., 263 Background scanning, 124 Backscatter detection, 170, 171 Backward elimination procedure, 260 Bailey, J. E., 298 Bairi, A., 24 Baldauf, N. A., 137 Band positions, selected parameters pertinent to, 144 BAW mode. See Bulk acoustic mode Bawsinskiene, L., 26 Baxter, L. K., 323
Bazzo, S., 249 Beamsplitter, 74, 123 Beans, vision system and classification of, 20 Beck, M., 337 Bedson, P., 8 Beef, FT-Raman spectra of, 149t Beef adipose tissue, FT-Raman spectra of, at various irradiation doses, 155 Beelaram, A. M., 302 Beer, computer vision and determining bubble size distributions in, 24 Beer-Lambert Law, 88 Benady, J. E., 263 Benedito, J., 57, 60 Benson, I. B., 16 ß-carotene dispersion, raw image and detected particles in, 205 Beverages, NIR analysis and, 114–116 Biopolymer structure, ultrasonic characterization of, 57–58 Bioprocess industries, electronic nose technology and, 268 Biorecognition molecules, biosensor technology, interaction between target molecules and, 301 Biosensor response, effect of geometry of detector cell on appearance of hydrogen peroxide at electrodes and effect of flow rate on, 313 Biosensors, 283–316 case studies, 304–315 electrokinetic sample injection in Micro-FIA biosensors, 304–309 optimization of a glucose flow injection analysis biosensor, 309–315 concluding remarks about, 315–316 flow-type, 286–290 flow mechanisms in microchannels, 288–290 miniaturization and microfluidics, 287–288 principles, 286–287 governing equations for modeling of, 290–302 bulk flow modeling, 290–294
Index modes of component transport, 294–296 reaction kinetics, 297–302 numerical approach, 303–304 overview of, 283–286 principles of, 284 successful, performance criteria in design of, 285 Bioterrorism, food, MIR spectroscopy and, 137 Birefringence, loss of, in starch granules as a function of temperature, 150 Biscuits, missing, detection of, 325–327 Black speck detection, 197, 202–204, 209 Bloch equations, magnetization vector behavior and, 216–217 Bloodhound Sensors (UK), 243 Bosset, J. O., 263 Box-Behnken designs, 250 Breakage, image analysis and detection of, 193 Brosnan, T., 20 Brown, R. J., 132 Brownian motion, 168 Brucella sp., 137 Bulk acoustic mode, 245 Bulk flow modeling, 290–294 electroosmotic flow, 290–294 hydrodynamic flow, 290 Bulk longitudinal (L) waves, 45 Butter automatic sampler for, 105 calibration results for, 105t NIR analysis and, 104 plot of NIR predictions versus reference values for moisture in, 106 Bypass instruments, 3 C Caffeine, in instant coffee, NIR analysis and, 114–115 Cake formulation, detection of lard adulteration in, MIR spectroscopy and, 137
Calibration, 8–9, 10, 34 of indirect methods, influence of reference methods on, 33–43 multivariate, theoretical basics of, 86–89, 91 reference method quality and, 17 Calibration data set, number of samples in, 93–94 Calibration line, 34, 43 for mass loss determination at 145◦ C of Lactoserum Euvoserum, 41 and sample determination based on Karl Fischer titration, 38 and sample determination based on oven drying, 39 for water content determination of samples predried at 145◦ C of Lactoserum Euvoserum based on Karl Fischer titration, 42 Calibration models building, 91–94 cross validation, 92 external validation, 91–92 results and parameters of validation, 93 Calibration model updating, 94 Cameras, 21 Canadian Grain Commission, 68 Canonical correlation analysis, 252–253 Canonical discriminant analysis, 240, 252, 253–254 Canonical variate analysis, 153, 154 discrimination of porcine adipose tissues and, 156 Capacitance electrode configuration options, 327 Capacitance measurement circuit, basic, 324 Capacitance measurement methods, 323–325 Capacitance moisture sensors, basic, 328 Capacitance sensor electrode configuration, cross-section, 332 Capacitance sensors, 16, 323 applications of, 325 Carageenans, 187
Carbohydrates Raman spectroscopy and, 150–151 tentative FT-Raman band assignments for, 151t Cascade diluters, 191 Casein-dissolving solution, homogenization pressure for standard milk emulsion and cluster free emulsion with, 185 Casein micelles, laser diffraction, milk products and, 183, 184 Case studies, electronic nose technology, 268–274 detection of retained solvent levels in printed packaging material, 269 detection of spoilage and discrimination of raw oyster quality, 274 discrimination of frying oil quality based on usage level, 271 evaluating apple maturity, 271, 273–274 Castillo, M., 24 Cattaneo, T. M. P., 134 CCA. See Canonical correlation analysis CCD. See Charged coupled device CDA. See Canonical discriminant analysis Celadon, A., 150 Ceramic sensors, 239 Cereal biscuits, measured moisture content of, 328 Cereal products, porous, nondestructive texture analysis of, 26 CFD. See Computational fluid dynamics CGC. See Canadian Grain Commission Chanamai, R., 53 Chandraratne, M. R., 20 Charged coupled device, 144 Cheddar cheese, principal component scores plot of, at 6-, 9-, and 12-month ripening stage, 128 Cheese analyzing dielectric processes of, 24 hard and slicing, calibration results for, 108t mid-infrared spectroscopy and, 131–135 NIR analysis and, 105–108 processed, mid-infrared spectra of, 122
Chemical assays, FIA technology and, 286 Chemometric methods, 127–129 Chemometrics, 157 Chemosensory system types conducting polymer sensors, 243, 244–245 metal oxide field effect transistors, 243, 246 metal-oxide sensors, 243, 244 quartz microbalance sensors, 243, 245–246 surface acoustic wave-based sensors, 246–247 Chen, M., 133 Chen, X. D., 25 Chi, Z., 147 Cho, B., 56 Chocolate calibration results for, 114t detection of lard adulteration in, MIR spectroscopy and, 137 early manufacture of, 176 ingredients in, 177 laser diffraction and applications with, 176–183 achieving efficient production, 176 American and United Kingdom chocolate products, 179–180 challenges of particle size measurement, 179 cocoa mass, cocoa powder, and cocoa butter, 177 conching, 178 dairy and food/flavor emulsions, 182–183 dark chocolate vs. milk chocolate, 181–182 emulsion measurements, 183 luxury brands, 180–181 manufacturing process, 176–177 milk, 178 milk and chocolate crumb, 178 optimizing production of, 178–179 sugar, 177–178 NIR analysis and, 113–114 particle size distributions for standard, luxury, and economy brands, 181
Index particle size distributions for UK and American chocolate bars, 180 validation with independent samples of calibration of fat in, 114 Chocolate-coated wafer bars, detection of missing wafers in, 325–327 Chocolate crumb, milk chocolate manufacture and, 178 Chocolate liquor, 177 Cimander, C., 268 Circular equivalent (CE) diameter image analysis, particle shape and, 193 three different shapes with same diameter, 193 Cis/trans isomers, FT-Raman spectroscopy in determination of, 150 Classical magnetization vector, motion for, 213 Classification analysis, 259 Cluster analysis, 251, 261 CMOS cameras. See Complementary metal-oxide-semiconductor cameras Coates, J. P., 125 Cocoa bean pods, 177 Cocoa butter, 177, 178 determination of fat in, 113 Cocoa powder, 177 Coffee flavors of, and factors affecting quality of, 189–190 green and roasted, FT-Raman spectroscopy and discriminating botanical origin of, 156 industry background, 188–189 instant, calibration results for, 115t particle size of pre-ground espresso and pre-ground filter coffee, 190 produced by grinder at varying speeds, 189 Coffee berry borer, 189 “Cold” sensors, 243 Collier, W. A., 263 Color changes measurement, video image analysis and, 27
Complementary metal-oxide-semiconductor cameras, 198 Component transport modes, 294–296 convection, 295–296 diffusion, 294, 295 electrokinetic migration (electrophoresis), 296 total mass balance, 296 Composition, direct and indirect measurements of, 227 Compositional analysis mid-infrared spectroscopy and, 130t performing, 3 Compressed potassium bromide (KBr) pellets, 125 Computational fluid dynamics, 311 Computer vision, 20–21 bubble size distributions in beer and, 24 COMSOL Multiphysics, 303 COMSOL Multiphysics 3.2, 306 Concentrated dispersions, particle sizing of, 169–170 Conching, 178, 180 introduction of, 176 Condensed milk, FTIR-ATR spectroscopy and analysis of, 132 Conducting polymer-based sensing system, apple maturity evaluation with, 273 Conducting polymer sensors, 243, 244–245 Conducting polymer technology, 238 Confocal microscopes, Raman applications and, 144 Consolidation, in food industry, 5 Consumables, ensuring supply of, 11 Contamination monitoring, mid-infrared spectroscopy and, 130t Control volumes, 303 Convection, 294, 295–296 Corn, fermented mash, calibration and validation results for ethanol in, 116t Corn starch, classification of, MIR spectroscopy and, 136 Correctness of results, 34–35 Corredig, M., 58
Cost, management support and, 13 Coulter particle counters, 197 Coupland, J. N., 54, 55 CP sensors. See Conducting polymer sensors Cream liquers, variations in particle size and storage of, 185, 186 Crescenza cheese metal oxide-based sensing system and shelflife of, 268 MIR spectroscopy and analysis of, 134 Cross-correlation DLS instrumentation, 169–170 Cross validation method, 92, 93, 260 Mahalanobis distance and, 258 Crystallization, automated image analysis and, 24 C-shaped magnet, 232 Cuboids, defining size of, 166, 166 Curda, L., 19 Cuvette holders, 78 CVA. See Canonical variate analysis Cylinder, with same volume of given sphere, 166–167, 167 Cylindrical magnets, 232 Cyrano Sciences, 239 Cyranose 320, 256 discrimination of packaging material based on level of plasticizers and, 270 raw oyster quality differentiation and, 274 D DA. See Discriminant analysis Daestain, M -F, 20 Dairy and food/flavor emulsions, particle size of fat droplets in, 182–183 Dairy emulsions, storage of, particle size and, 185–186 Dairy products, mid-infrared spectroscopy and, 131–135 Dalgleish, D., 60 Dark chocolate milk chocolate vs., 181–182, 182 particle size distributions for, 183
Dark particles in a white powder, in-process measurement of size and number of, 197, 202–204 De Baerdemaeker, J., 27 Debye length, thickness of EDL described by, 291 Defining the method, 11 Design qualification, 8 Detector array dispersive spectrophotometers, 73–74 principle of, 74 DFA. See Discriminant factorial analysis Dialectric materials, permittivity of, 323 Dielectric constant, 292 of the material, 323 Dielectric food, examples of, 321 Dielectric imaging, 25 Diffuse reflectance, 127 methods, 127 Diffuse reflection, 79–81 Diffuse reflection measurements, 79–81 principle of, 80 Diffusion, 294, 295 Diffusion coefficient measurements, food material structure and, 227 Diffusive wave spectroscopy, 170 Dilute dispersions, dynamic light scattering and, 169 Diode array dispersive instruments, advantages and disadvantages of, 76 Dionisi, F., 17 Dioxins, 283 Direct calibration transfer, 95–96 Direct measurement, 3 Direct method, secondary methods calibrated against, 43 Direct methods, 33 Direct standardization, 94 Discriminant analysis, 153, 240, 252–262 variable selection procedure, 260–262 Discriminant factorial analysis, 252 Discriminant rule, development of, for classifying observations into categories, 259
Index Dispensing process, optimization of, parameter values used in numerical simulation, 306t Dissolution, automated image analysis and, 24 DLS. See Dynamic light scattering DNA, hybridization and, 301–302 Double-bonded structures, detecting, Raman spectroscopy and, 144 Double reciprocal, 299 Drying curve, of Lactoserum Euvoserum, 40 Drying oven, 36 Drying oven method, 83 Drying techniques, 43 water content determination, 35, 36 Dry sieving, 168 DST. See Direct standardization Du, C-J, 21 DWS. See Diffusive wave spectroscopy Dynamic light scattering, 168–172 food applications with, 171–172 latest advances in, 169–171 measurement positions for small, weakly scattered samples, and concentrated, opaque samples, 171 Dynamic MRI, 221–227 Dynamic NMR microscopy, 225 Dynamic NMR pulse sequences (PGSE pulse sequence), example of, 225 E ECT. See Electrical capacitance tomography Edible oils calibration results for iodine value in, 111t iodine value for, 110–111 EDL. See Electric double layer ED-XRF. See Energy dispersive X-ray fluorescence Eigenvector quantification methods, 157, 158 Eight-electrode cylindrical ECT sensor, 330 Electrical capacitance, 321–323
Electrical capacitance tomography, 328–329, 331 sample images, 330 test bar inside 12-electrode sensor and image, 332, 333 two-phase flow measurement and use of, 333–335, 337 Electrical capacitance tomography sensor, eight-electrode cylindrical, 330 Electrical capacitors, basic, 322 Electrical dielectrics, 321 Electrical permittivity, techniques based on measurement of, 321–337 Electric double layer, 290 schematic diagram of, next to a negatively charged solid surface, 291 Electric valve actuator, flow injection amperometric biosensor, 310 Electrode kinetics, 299–300 Electrokinetic flow in microchannel, 289 simple channel intersection used for, in micro device, 305 velocity and potential profile inside channel during loading and injection mode, 305 Electrokinetic focusing, 304 Electrokinetic migration (electrophoresis), 296 Electrokinetic sample injection in Micro-FIA biosensors (case study) electrokinetic dispensing mechanism, 304–306 optimization of the dispensing process, 306–307, 309 Electromagnets, 228 Electronic nose applications in food industry, 237–276, 264–267t case studies, 268–274 chemosensory systems types, 243–247 conducting polymer sensors, 244–245 metal oxide semiconductors field effect transistors, 246 metal-oxide sensors, 244
Electronic nose applications in food industry, (cont.) quartz microbalance, 245–246 surface acoustic wave-based sensors, 246–247 electronic nose market, 242–243 electronic nose niche, 241–242 issues or drawbacks with electronic nose technology, 247–250 overview, 237–241 statistical analysis and, 250–262 artificial neural networks, 262 discriminant analyses, 252–262 multivariate factor analyses, 251 principal components analysis, 251–252 Electronic noses components of, 240 description of, 238 Electronic nose systems comparison of, in discriminating packaging material based on level of plasticizers, 270 handheld, discrimination of raw oyster quality by two types of, 275 Electronic nose technology steps in usage of, 240 summary remarks about, 274, 276 Electroosmosis, 289 Electroosmotic flow, 289, 290–294 Electrophoresis, 289 Eliminated product, image analysis and, 20 Elmehdi, H. M., 53 Emmental cheese, 135 determining ripening stage in, 268 MIR spectroscopy and predicting WSN content of, 133–134 Energy dispersive X-ray fluorescence, 21 “Ensemble” behavior, 213 Enzyme kinetics, 297–299 EOF. See Electroosmotic flow Equipment qualification process design qualification, 8 installation qualification, 8–9 operational qualification, 9 performance qualification, 9–10 stages in, 8
Erickson, D., 302 Espresso, pre-ground, particle size of, 190 Essential oils, Raman spectroscopy and quality of, 156 Ethanol, calibration and validation results for, in fermented corn mash, 116t Euclidean distance, 254 Everard, C. D., 24 External validation (test set validation), 91–92 F Factor analysis, 89, 251 Factorization, example, of simple spectra into corresponding loadings and scores, 90 Fagan, C. C., 134 Faraday’s constant, 292 Fast spin echo techniques, magnets and images acquired with use of, 228–229 Fats raman instrumentation and, 148–150 Raman quality measurements of, 154 FDA. See Food and Drug Administration Fecal contamination, imaging technologies and detection of, 21 FEM. See Finite element method Fermi resonance, 144 FET. See Field-effect transistor FIA. See Flow injection analysis FIA sensor, optimization of, 313–315 Fiber optic-based liquid probes, 78 Fiber optic probes, setup of, for different measurement modes, 78 Fibrous impurities, detection of, in particle suspensions, 197, 205, 207–208 Fick’s law of diffusion, 300, 311 Fick’s second law, 295 FID. See Free Induction Decay Field-effect transistor, 246 Figaro, 239 Filter-based instruments dedicated wave numbers covered by, 73 principle of, 72 Filter-based photometers, 72
Index Filter-based process analyzers, for water or moisture determination, 17 Filter coffee, pre-ground, particle size of, 190 Filter instruments with interference bandpass filters, advantages and disadvantages of, 75 Final products, 2, 4 Finite element method, 303 Finite volume method, 303 Fish, Raman measurements of, 157 Fish fat, Raman analysis of, 149 Flare, laser, 170 Flatten, A., 131 Flavor, 237 Flavor emulsions laser diffraction used to detect outsize particles in, 188 oil, 187 use of in food industry, types of, 187 Flocculated systems, 57 Flow curve determination Flow curve determination, for food texture measurement, 59–60 Flow injection amperometric biosensor, 310 Flow injection analysis aroma and, 241 biosensor configuration, 286 Flow injection analysis amperometric sensor, 310 Flow profiles, 60 Flow rate switch, flow injection amperometric biosensor, 310 Flow time, 226 Flow-type biosensors, 286–290 flow mechanisms in microchannels, 288–290 miniaturization and microfluidics, 287–288 principles, 286–287 Fluent, 303 Fluorescence spectroscopy, 23 Fluorescence tools, 21 Food and Drug Administration, 109, 120 Food composition, ultrasonic measurement of, 52–55
Food emulsions, laser diffraction and characterization of, 183 Food industry challenges facing, 119–120 changes in, and consequences for use of sensors, 4–5 Food quality relationship of NMR properties to, 227 sensory properties related to, 283 Food quality assessments multivariate qualitative Raman spectroscopy for, 153–157 multivariate quantitative Raman spectroscopy for, 157–158 Food quality measurements, contemporary and special applications of raman spectroscopy for, 151–153 Food research, nondestructive sensors for, 25–28 Food safety consumer’s attention to, 283 MIR spectroscopy and, 137 Food samples, NIR spectra of, with assignment of spectral regions, 70 Food structure, ultrasonic measurement of, 55–58 Food texture measurement, 58–61 correlation with L-wave measurements, 60–61 flow curve determination, 59–60 shear wave methods, 58–59 Foreign bodies/particles detecting, in bottled beverages, fruit juices, and pie fillings, 57 image analysis and detection of, 193 Forward selection procedure, 260 Fourier deconvolution, protein secondary structure determination and, 147 Fourier transformation, stationary MRI and, 218 Fourier transform (FT) instrument, 74 Fourier transform (FT) spectrometers, 122 Fourier transform infrared (FT-IR) instruments, advantages and disadvantages of, 76
Fourier transform infrared (FT-IR)spectrometers, 123 detection of fats and oils and, 148 Fourier transform infrared (FT-IR) spectroscopy, 120, 122–124, 129 cheese flora analysis and, 133 Fourier transform infrared (FT-IR) technology, 18 dairy industry and use of, 19 Fourier transform near infrared (FT-NIR) detector, 17 Fourier transform near infrared (FT-NIR) spectrophotometer, principle of, 75 Fourier transform near infrared (FT-NIR) technology, advantages and disadvantages of, 76 Fourier transform Raman (FT-Raman) spectrometer, 145 Fourier transform Raman (FT-Raman) spectroscopy, quantifying unsaturated acyclic components in garlic oil and, 27–28 Fox, P., 53 FPIA-3000 system, circularity, image analysis and, 194 Fraunhofer approximation, 174 Free Induction Decay, acquisition time for, 214 Free trade, quality and authenticity of food products and, 27 Freeze drying, 35 Freezing, measurement of, 55 Fresh products, consumer demand for, 2 Fruit beverages, MIR spectroscopy and, 135–136 Fruits L-wave ultrasound and monitoring of ripening and softening of, 61 maturity of, sensor systems and evaluation of, 263, 268 spectrosopic techniques and assessment of quality in, 27 Fry and Sons, 176 Frying oil, discrimination of, using a chemosensory system, 272
Frying oil quality (case study), discrimination of, based on usage level, 271 19 F spectroscopy, 235 FT-NIR. See Fourier transform near infrared detector Fu, L. M., 309 Full fat milk, size distributions recorded for, 183–184, 184 FVM. See Finite volume method G Galerkin finite element method, 303–304 Gan, T. H., 26 Garlic, FT-Raman spectroscopy and quantifying unsaturated acyclic components in, 27–28 Gas chromatography, 238 Gas chromatography-mass spectrometry, 238, 283 Gas chromatography olfactory methods, 238 Gas sensors, real time sensing with, 268 Gate, in metal oxide semiconductors field effect transistors, 246 Gauche-gauche-trans conformation, 147 Gaussian discriminant function, 252 GC. See Gas chromatography GC-MS. See Gas chromatography-mass spectrometry GCO methods. See Gas chromatography olfactory methods GDF. See Gaussian discriminant function Gelation, Raman spectroscopy and structural changes in proteins during, 147 Geometry, nonuniform, 294 German sausages, categories of, 99 Glucose, concentration profile of, 311, 312 Glucose flow injection analysis biosensor (case study) flow injection analysis amperometric sensor, 310 model formulation, 310–311 model simulations, 311–312 optimization of, 309–315 optimization of the FIA sensor, 313–315
Index Glucose oxidase, concentration profile of, 311, 312 Glycolysis, 133 Goat’s milk, processed, mid-infrared spectra of, 122 Gomez-Carracedo, M. P., 135–136 Grains image analysis and, 20 Raman microspectroscopy and quality assessment of, 152 Griffin, S. J., 56 Guided waves, 51 alignment of transducers and ultrasonic path for, 48–49 Guillard, A. S., 131 Gum Arabic, 187 H HACCP. See Hazard Analysis and Critical Control Point Haider, M., 15 Halbach cylinder magnet, 232, 234 Handheld probes, 78 Hansen, W. G., 249 Harhay, G. P., 153 Harper, W. J., 249 Hatcher, D. W., 20 Hazard Analysis and Critical Control Point, 109 Hazelnuts, irradiated, MIR spectroscopy and, 136 HDE. See Hydrogen-deuterium exchange HDPE packaging. See High-density polyethelyne packaging Helium neon (HeNe) laser, 74, 124 dynamic light scattering and, 169 Hepworth, N. J., 24 Herrmann, N., 61 Herschel, Sir William, 68 Hewlett Packard, 240 High-density polyethelyne packaging, 249 High performance liquid chromatography, 86, 158 High-speed check weighing, 331–333 HKR Sensorsystems (Germany), 243 Holdout method, 260
Homogenization automated image analysis and, 24 of milk, particle sizes and, 184–185 pressure for standard milk emulsion and cluster free emulsion containing casein-dissolving solution, 185 Honzatko, R. B., 147 Hotelling, 251 Hotelling’s T2 test, 257 “Hot” sensors, 243 HPLC. See High performance liquid chromatography Hybrid chemosensory systems, 243 Hybridization, 301 Hydrocolloids, 187 Hydrodynamic flow, 290 in microchannel, 289 Hydrogenated vegetable oil, trans isomer content of fatty acids and texture of, 150 Hydrogen-deuterium exchange, 153 Hydrogen peroxide, concentration profile of, 311–312, 312 Hyperspectral imaging, 21 I Ice cream, particle size of fat droplets in, 182 ICP. See Inductively coupled plasma IDF. See International Dairy Federation ILS. See Inverse Least Squares Image analysis, 20–21, 197. See also Online image analysis of particulate materials advantages with, 193–195 Imaging spectrosocpy, 21 Indirect methods (or secondary methods), 33 influence of reference methods on calibration of, 33–43 Inductively coupled plasma, 86 Infant cereal matrices, use of wavelength dispersive X-ray fluorescence and minerals in, 21–22 Inflow/outflow method, dynamic MRI and, 222 Infrared drying, 35
Infrared Engineering (United Kingdom), 17 In-line analysis, 82 In-line magnetic resonance imaging four slices from three-dimensional fast spin echo data set on a small lime, 230 Tesla permanent magnet and, 229 In-line particle sizer, pilot plant installation with, 23 In-process measurement, of size and number of dark particles in a white powder, 197, 202–204 Insitec L, 190 Insitec online particle size analyzer, schematic representation of, 191 Insitec software, screenshot, showing change in size for four different operating conditions, 191, 192 Installation identifying place of, 11–12 qualification, 8–9 Instrumentation specifications, central importance of, 7–10 Insulating material, relative permittivity of, 323 Integrating sphere principle of, 80 rotating cup on, 81 Interferograms, 124 Interferometer, schematic of, 123 Interferometry, 122 Internal reflectance element, 126 Internal reflection, in ATR crystal, 126 Internal validation, 92 International Dairy Federation, 132 Inverse Least Squares, 88 Iodine value calibration value for, in edible oils, 111t for edible oils, 110–111 Ionic migration, 294 Ionic solutions, electrical fields applied to, 296 IRE. See Internal reflectance element Irudayaraj, J., 56, 133, 158 IV. See Iodine value
J Jackknifing, 260 Jeffries, M., 56 Jha, S. N., 27 Johnson, D. E., 259, 260 Joule heating effect, 294 Juodeikiene, G., 26 Just-in-time delivery, 2 K Kahweol, 156 Karl Fischer titration, 83, 84 calibration line and sample determination based on, 38 calibration line for water content determination of samples of Lactoserum Euvoserum, based on, 42 true values by, and oven drying and predicted values for wheat semolina samples, 37, 37t water content determination and, 36 Karoui, R., 27, 133, 134, 135 Katsumata, T., 21 KFT. See Karl Fischer titration Kilic, K., 20 Kim, I-H, 150 Kim, M. S., 21 Kim, S., 137 Kimbaris, A. C., 28 Kinetics electrode, 299–300 enzyme, 297–299 hybridization, 301 of interaction between target and bioreceptor, 301–302 Kizil, R., 148, 154 Kohman method, butter and, 104 Kueppers, S., 15 Kukackova, O., 19 L Laboratory (or off-line analysis), 81 Laboratory values, 34 LabVIEW, 310 Lachenbruch, P. A., 260 Lachenmeier, D. W., 136
Index Lactic acid fermentation, ultrasonic velocity and, 26 Lactococcus lactis spp. lactis and cermoris, 133 Lactoserum, water content determination for, 37 Lactoserum Euvoserum calibration for mass loss determination at 145◦ C, 41 calibration line for water content determination of samples of, based on Karl Fischer titration, 42 drying curve of, 40 Lamb, FT-Raman spectra of, 149t Laminar flow, uniaxial, 288 Lammertyn, J., 310, 311 Lana, M. M., 27 Laplace equations, 292, 296, 306 Larmor frequency, transverse magnetization precessing at, 213, 214 Larmor precession frequency defined, 218 stationary MRI and, 217 Laser, 173 Laser diffraction, 172–175, 197 advantages with, 175 applications with, 172–173 chocolate, and applications with, 176–183 coffee, and applications with, 188–190 flavor emulsions and applications with, 187–188 and detection of outsize particles with, 188 milk products, and applications with, 183–186 particle size distribution calculations, 174 Laser diffraction system, typical, 174 Lateral reservoirs, potential values, leakage and, 309 LDA. See Linear discriminant analysis Least square analysis, protein secondary structure determination and, 147 “Leave one out” method, 258 Lee, S., 55
Leemans, V., 20 Lefier, D., 133 Lennartz electronic (Germany), 243 L’Etivaz cheese, MIR spectrosocpy and, 135 Li, D., 309 Liao, Q., 294 Li Chan, E., 147 Light scattering patterns, for different particles, 174 Likelihood Rule, 259 Lindt chocolates, 176 Linear discriminant analysis, 135, 153 Linear Discriminant Function Rule, 259 Lineweaver-Burk plot, 299 Lipid crystallization, 54 Lipids, ultrasound and phase transitions in, 53 Lipolysis, 133 Lipopolysaccharides, 137 Liquid chromatography, 283 Liquid milk, mid-infrared analyzer to control composition of, online, 19 Local Weighted Regression, 91 Longitudinal diffusion, 294 Longitudinal relaxation time, magnetic resonance and, 215 Longitudinal wave reflectance, estimation of foam bubble size by, 51 Low-intensity ultrasound, 45 LPS. See Lipopolysaccharides Lucas, T., 25 Lucia, V., 133 Luxury brand chocolates, 180–181 L-wave measurements, food texture measurement and correlation with, 60–61 L-waves, 45, 46 LWR. See Local Weighted Regression M Magnetic resonance dynamic, 221–227 relationship of NMR properties to food quality and, 227 stationary, 217–221
theory and practical implications with, 212–227 Magnetic resonance imaging, 60, 211 applications with, 25 barriers to more widespread use of, 212 Magnetic resonance imaging spectrometer, basic components of, 212 Magnets availability of, for NMR/MRI systems, 228–233 unilateral, 229 Magnet technology, continual development of, 232 Magritek Limited, NMR spectrometers through, 233 Mahalanobis distance, 251, 254–259, 261 most useful value in comparison of, from different systems, 258 unbiased, 256, 257 Mahalanobis Distance Rule, 259 Mallikarjunan, P., 269 Management support, getting, 12–14 Mango maturity, color measurement and nondestructive evaluation of, 27 MANOVAS. See Multivariate analysis of variance Marangoni, A. G., 298 Marquardt, B. J., 149, 157 Marquis, F., 248 Martini, S., 54 “Master” central spectrometer, 95 Mastersizer 2000, 179 following milk powder reconstitution with use of, 187 Mathematical modeling, 285 Mayonnaise calibration results for, 110t NIR analysis and, 109–110 McClements, D. J., 53, 61 McElhinney, J., 131 McQueen, D. H., 132 Measurement time, 12 Meat and meat products calibration results for, 100t computer vision and quality evaluation of, 20
cross correlation of water and fat content in, 98 measurement of, 97–99 MIR spectroscopy applications and, 129, 131 NIR reflectance spectra of, 87 Mechanics classical, NMR phenomenon discussion and, 213 Medical field, electronic nose technology applications in, 268 Meltability models, cheeses and, 134 Mendenhall, I. V., 132 Menten, Maud, 298 Mergers, electronic nose manufacturers, 239 Mesh sieves, 167 Messtechnik Schwartz GmbH (Germany), 23 Metal oxide semiconductor gas sensors, 239 Metal oxide semiconductors, 239 Metal oxide semiconductors field effect transistors, 239, 243, 246 Metal-oxide sensors, 243, 244, 263 Method, defining, 11 Michaelis, Leonor, 298 Michaelis-Menten constant, 299 Michaelis-Menten curve, 297, 298 Michaelis-Menten equation, 299 Michelson interferometer, 74, 145 Mickey, M. R., 260 Microchannels flow mechanisms in, 288 hydrodynamic and electrokinetic flows in, 289 Microelectronic developments, biosensor technology and, 285 Microelectronics, microfluidics and, 285 Microfluidic injection system, values of applied electrokinetic potentials of injection and loading stages in optimization study of, 307t Microfluidics flow-type biosensors and, 287–288 microelectronics and, 285 Micromesh sieves, 167
Index Microorganisms, UV-Raman identification of, 153 Microscale structure, 57–58 Microwave absorption, 9, 16 Microwave drying, 35 Microwave spectrosocpy, milk study and, 24 Mid-infrared absorption frequencies, selected molecular group, 121t Mid-infrared (MIR) spectroscopy, 16 application of, to food processing systems, 119–138 applications, 129–137 dairy products, 131–135 fruit and alcoholic beverages, 135–136 meat and poultry, 129, 131 other food products, 136–137 chemometric methods, 127–129 equipment, 122–127 Fourier transform infrared spectroscopy, 122–124 sample presentation methods, 124–127 selection of reported food analysis applications of, 130t Mid-infrared spectra, of processed cheese, goat’s milk, and olive oil, 122 Mie Light Scattering Theory, 190 Mielle, P., 248 Mie theory, 174–175 Milk. See also Milk products calibration results for, 102t mid-infrared spectroscopy and, 131–135 Milk and dairy products, NIR analysis and, 99–109 Milk chocolate dark chocolate vs., 181–182, 182 particle size distributions for, 183 Milk crumb, milk chocolate manufacture and, 178 MilkoScanTM FT 120, 132 Milk powder automatic sampler for, 103 calibration results for, 102t in a lab environment, 104t
laser diffraction and rehydration of, 186 NIR analysis and, 101–103 Milk products for chocolate production, 178 homogenization of, 184–185, 185 laser diffraction and particle size of, 183–184 Milk study, microwave spectrosocpy and, 24 Minerals in food products, MiniPal 4 and, 22, 22 Miniature MIR spectrometers, 138 Miniaturization of biosensors, 285 flow-type biosensors and, 287–288 Miniaturized visible/near infrared (VIS/NIR) spectrometer, 27 Minimum-distance classifier, Mahalanobis metric in, 255 MiniPal 4, determining minerals in food products and, 22, 22 MIR spectroscopy. See Mid-infrared (MIR) spectroscopy MLR. See Multiple Linear Regression MM710 near infrared analyzer, 17 Modular chemosensory systems, 243 Modular sensor system, 263 Mohr titration method, butter and, 104 Moisture calibration, of sugar-containing samples after recalibration with moving sample, 85 Moisture content, 35 Moisture measurement, capacitance sensors and, 327–328 Molecular diffusion, 294 Morphology G2 circularity, image analysis and, 194 image analysis of tea leaves with, 195, 195 Morrizumi, T., 245, 249 Moser, A., 298 MOSES. See Modular sensor system MOSFET. See Metal oxide semiconductors field effect transistors MOS sensors. See Metal-oxide sensors Motech GmbH (Germany), 243
354 MRI. See Magnetic resonance imaging Muik, B., 149 Multiple electrode pairs applications, 328–329, 331, 333 electrical capacitance tomography, 328–329, 331 high-speed check weighing, 331, 333 Multiple Linear Regression, 88 Multiple scattering, 169 Multivariate analyses, Mahalanobis distance and, 255 Multivariate analysis of variance, apple quality evaluation and, 273 Multivariate calibration method, 157 theoretical basics of, 86–89, 91 Multivariate discriminant analyses, 251 Multivariate factor analysis, 251 Multivariate qualitative Raman spectroscopy, food quality assessments and, 153–157 Multivariate quantitative Raman spectroscopy, food quality assessments and, 157–158 Munkevik, P., 21 N NADH. See Nicotinamide adenine dinucleotide Nakai, S., 147 Nakamoto, T., 245, 249 Nanoscale sensors, 243 Navel oranges, Halbach magnet and measurement of spin-spin relaxation rates of, 232, 234 Navier-Stokes equations, 293, 296, 306, 311 hydrodynamic flow and, 290 NDE technology. See Nondestructive evaluation (NDE) technology Near infrared analyzer MM710, 17 Near infrared calibrations, of sweets on moisture: sugar free and sugar containing, 85 Near infrared (NIR) irradiation, 145
Index Near infrared (NIR) spectroscopy, 9, 16, 28, 34, 36 advantages and disadvantages of spectrometer technologies, 75–76 advantages of, 69–71 applications of, in food industry, 96–116 beverages, 114–116 chocolate, 113–114 mayonnaise, edible oil, and olives, 109–113 meat and meat products, 97–99 milk and dairy products, 99–109 building calibration models, 91–94 calibration model updating, 94 cross validation, 92 external validation, 91–92 how many samples in a calibration data set?, 93–94 results and parameters of validation, 93 calibration development and, 82–96 NIR calibrations and reference analysis, 82–84, 86 theoretical basics of multivariate calibration, 86–89, 91 categories for implementation of NIR instruments, 81–82 description of, 69 direct calibration transfer, 95–96 discovery of potential for, 67 history of, 68–69 instruments, 71–76 detector array dispersive spectrophotometers, 73–74 filter-based photometers, 72 Fourier transform instruments, 74 scanning dispersive grating spectrophotometers, 72–73 measurement modes and sampling techniques, 76–82 diffuse reflection, 79–81 transflection, 79 transmission, 76–78 sample areas measured with, by static and moving samples, 86 spectral transfer, 94–95 use of, in the food industry, 67–116
Index Near infrared (NIR) technology, advantages with, 116 Near infrared process analyzer, determination of moisture in skimmed milk powder with use of, 18 Near Infrared Reflectance and Transmission (NIT) instruments, 78 Near-line analysis, 4 Neikov, A., 302 Neotronics (USA, UK), 243 Nestle, 176 Net magnetization moment, 213 Newman, D. J., 242 Newton, Sir Isaac, 68 Nib, cocoa bean, 177 Nicotinamide adenine dinucleotide, 23 NIR spectroscopy. See Near infrared (NIR) spectroscopy NMR. See Nuclear magnetic resonance NMR magnet, unilateral, watermelon on top of radio frequency coil and, 231 NMR MOUSE (Mobile Universal Surface Explorer), 229–230, 231 NMR spectrometers, portable, 233 NMR spectroscopy. See Nuclear magnetic resonance (NMR) spectroscopy Noda, T., 21 Noncontact, ultrasound applications and, 26 Noncontact measurements, 50 Nondestructive evaluation (NDE) technology, 45 Nondestructive food control, requirements summary related to investment in, 13–14 Nondestructive instrumentation, successful use of, 6–7 Nondestructive sensors for product development and food research, 25–28 for production control, 15–25 at-line analysis, 16 off-line analytics, 15–16 online analytics, 16–25
355
Nondestructive testing need for, 1–5 final products, 2, 4 process control, 2, 3–4 product development, 2, 4–5 raw material, 2–3 research, 2, 5 Nondestructive testing instrumentation, cost factors and, 13 Nonparallelism, 57 Nonuniform geometry, 294 Nonuniform zeta potential, 294 Nordic Sensors, 239 Nordic Sensor Technologies (Sweden), 243 Norris, Karl, 68 N-type oxides, 244 Nuclear magnetic resonance, 16, 54 Nuclear magnetic resonance (NMR) spectroscopy advances in nondestructive testing with, 211–235 barriers to more widespread use of, 212 Nuclear magnetic resonance systems pulse sequence advances, 233–235 recent advances in, 227–235 hardware: magnets, 228–233 hardware: spectrometers, 233 Nuclear magnetization, 213 Nuclear spin angular momentum, 212 Nunes, A. C., 24 O Oblique incidence, 57 OD. See oven drying Odors, 237 advances in sensor technologies and analyses of, 238 electronic nose and monitoring of, 241 Off-line analysis, 4 Off-line analytics, 15–16 Oil emulsions, high concentration, 187 Oils Raman quality measurements of, 154 Raman spectroscopy and, 148–150 Olfactory system, human as basis of sensory panels, 241 food consumption and, 237–238
OligoSense (Belgium), 243 Olive oil authenticity and quality determination in, MIR spectroscopy and, 137 calibration results for, 112t classification of, 112 NIR analysis and, 111–113 processed, mid-infrared spectra of, 122 samples in 8-mm disposable vials, transmission measurement of, 77 Olive paste, calibration results for, 112t Olives FT-Raman spectroscopy and analysis of, 156–157 NIR analysis and, 111–113 Ollis, D. F., 298 Olson and Price meltability models, 134 One-dimensional imaging, applications with, 56 One-dimensional phase-encode NMR pulse sequences, for flow measurement, 234, 234 One-way MANOVA, 253 Online analysis, 3–4, 82 Online analytics, 16–25 Online image analysis, examples of, 197 Online image analysis of particulate materials, 197–209 applications with, 208–209 detection of fibrous impurities in particle suspensions, 205, 207–208 in-process measurement of size and number of dark particles in a white powder, 202–203 measurement principles, 198, 201–202 measuring size and number of big particles in concentrated submicron dispersions, 204–205 trend over time, 204–205 offline test results, 203–204 in-process installation, 204 online trend display, 206 raw image with fiber in product suspension and analyzed mage, 207 size and count of all particles, 208 size and count of fibers only, 208
Online monitoring, 138 Online near infrared analyzer Corona, 3 Online particle size analyzers, advantages with, 191 Online particle sizing techniques, 190–193 Operational qualification, 9 Optical measuring modes diffuse reflection, 79–81 transflection, 79 transmission, 76–78 Organoleptic quality, cheese, determination of, 133 Organoleptic testing, of olive oil, 112, 113 Osmetech, 239 Oven drying, 35 calibration line and sample determination based on, 39 true values by Karl Fischer titration, predicted values and, for wheat semolina samples, 37, 37t Over-fitting, avoiding, 256 Oxide sensors, 239 Oyster quality, raw (case study) detection of spoilage and discrimination of, 274 discrimination of, by two types of handheld electronic nose systems, 275 P PANalytical (Netherlands), 22 Paradkar, M., 158 Park, B., 21 Partial differential equations, 303 Partial least squares analysis, 251, 261 Partial least squares regression, 91, 121, 128, 134 Particles defined, 165 measuring properties of, 166 Particle shapes image analysis and, 193–194 three, with same circular equivalent (CE) diameter, 193
Index Particle size cocoa bean, 177 coffee flavor and, 188–190 of coffee produced by grinder at varying speeds, 189 concepts related to, 165–167 of flavor emulsions, 187–188 image analysis and, 193–195 importance of, 165 laser diffraction and determination of, 173 of milk products, assessing, 183–184 of pre-ground espresso and pre-ground filter coffee, 190 scatter plot of shape vs., 202 Particle size distributions for full fat, semi-skimmed, and skimmed milk, 183, 184 for milk chocolate and dark chocolate, 183 for UK and American chocolate bars, 180 Particle size measurement, chocolate and challenges with, 179 Particle sizing, 23–24 in food and beverage industry, 165–195 online techniques for, 190–193 Particle suspensions, detection of fibrous impurities in, 197, 205, 207–208 Pasta drying, NMR imaging and, 26 PAT. See Process Analytical Technology Pathogenic microorganisms, MIR spectroscopy and detection and identification of, 137 PC. See Personal computer PCA. See Principal component analysis PCBs. See Polychlorinated biphenyls PCR. See Principal component regression PCs. See Principal components PDE. See Partial differential equations PDS. See Piecewise Direct Standardization Peak height, 313, 314 Performance qualification, 9–10 Perkin Elmer, 240 Permanent magnets, advances in design of, 228 Perring, L., 21
Personal computer, 67, 124 Petri dishes, 78, 84, 98 cheese analyses in, 107 mayonnaise samples in, 110 polysterene, 109 PGSE. See Pulsed gradient spin echo PH. See Peak height Pharmaceutical industry, food industry and, 14 Phase-encoding method, dynamic MRI and, 222, 223–224 pH meter, 22 Photoluminescence, 21 Photon correlation spectroscopy, 168 Physical parameters, testing and applications with, 22–23 Physiochemical transducer, 283 Piecewise Direct Standardization, 94 Piezoelectric crystal sensors, 245 Pillonel, L., 134 Piot, O., 152 Pitt, G. D., 145 Pizza, controlling completeness of, 21 Plasticizers detection of, in packaging material, 269 electronic nose systems comparison, in discriminating packaging material based on level of, 270 Plexiglas detector cell, flow injection amperometric biosensor, 310 PLSR. See Partial least squares regression Poisson-Boltzmann equation, 293 potential in EDL described by, 291 Polyaniline, 245 Polychlorinated biphenyls, 283 Polyethylene cards, 126 Polypyrrole, 245 Polythiophene, 245 Porcine adipose tissues CVA and discrimination of, 156 FT-Raman spectra of, 149t Portable NMR spectrometers, 233 Position-displacement conditional probability density, 227 Posterior Probability Rule, 259 Potatoes, MRI and perceived textural properties of, 26
Potato starch, ED-XRF and determining phosphorous content in, 21 Potentiostat, flow injection amperometric biosensor, 310 Poultry, MIR spectroscopy applications and, 129, 131 Predicted values, 34, 35 Primary methods, 33 secondary methods calibrated against, 43 Principal calibration phase, steps related to, 158 Principal component, 89 Principal component analysis, 89, 121, 128, 134, 153, 240, 251–252, 256 Principal component regression, 89, 158 Printed packaging material (case study), detection of retained solvent levels in, 269 PROC DISCRIM, 253 Process Analytical Technology, 138 defined, 120 Process Analytical Technology (PAT) initiative, 14–15 ProcesScan FT process analyzer, 19 Process control, 2, 3–4 sensor test results and, 22–23 Process Control Technologies, low-resolution NMR systems manufactured by, 228 Process Interface Solution, 191 Product development, 2, 4–5 nondestructive sensors for, 25–28 Production, automation of, increased use of, 5 Production control, nondestructive sensors for, 15–25 Product monitoring, mid-infrared spectroscopy and, 130t Product safety, 120 Progression, Inc., low-resolution NMR systems manufactured by, 228 Protein analysis, raman spectrosocpy and, 146–147 Proteolysis, 133 P-type oxides, 244 Pulsed gradient spin echo, 225, 225 Pulsed ultrasonic methods, 47
Pulse-echo, alignment of transducers and ultrasonic path for, 48–49 Pulse-free syringe pump, flow injection amperometric biosensor, 310 Pulse sequence/experimental design, advances in, 233–235 Punched hole sieves, 167 Pycnometer, 22 Q QCM sensors. See Quartz crystal microbalance sensors QDA. See Quantitative description analysis QMB sensors. See Quartz microbalance sensors QMB6, discrimination of packaging material based on level of plasticizers and, 270 Qualion Ltd., high-resolution NMR based sensors and, 228 Quality assurance, food, MIR spectroscopy and, 137 Quality control, in cheese production, 106 Quality improvement, as driver for nondestructive testing, 13 Quantitative description analysis, raw oyster quality and, 274 Quantum Magnetics, portable NMR systems for measuring spin-spin relaxation times and, 233 Quartz crystal microbalance sensors, 239, 243, 245–246 Quartz cuvettes, 77 Quartz microbalance-based sensing system, cheese ripening process monitored with, 268 Quasi-elastic light scattering, 168 R Raman microspectroscopy, in quality assessment of grains, 152 Raman quality measurements, of fats and oils, 154 Raman-scattering signals, collecting, 145 “Raman shift,” 143 Raman spectrometers, classification of, 144–146
Index Raman spectroscopy advantages with, 145 applications of, for food quality measurement, 143–159 basic principles of, 143–144 carbohydrates/carbohydrate-based foods and, 150–151 contemporary and special applications of, for food quality measurements, 151–153 fats and oils and, 148–150 future applications with, 159 protein analysis and, 146–147 selection rules applied to, 144 Rapid near-line analysis, 5 Ratioing beam sample data, 124 Raw material, 2–3 Rayleigh scattering filters, 145 Reaction kinetics, 297–302 electrode kinetics, 299–300 enzyme kinetics, 297–299 kinetics of interaction between target and bioreceptor, 301–302 Read gradient, 219 Recovery time (REC), defined, 313 Red wines, MIR and authentication of, 136 Reference laboratory performance, 9 Reference methods calibration of indirect methods and influence of, 33–43 calibration of sensor and, 17 Reflectance, alignment of transducers and ultrasonic path for, 48–49 Reflectance coefficient, 51 Reflectance measurements, 50–52 Reflectance methods, applications of, 51 Reflectance mode, for analyzing cheeses, 107 Reflection probe, 78 Refractometers, 22 Refractometry, 9, 16 Regression analysis, 259 Regression coefficient, 34 Reh, C., 17, 125 Reid, L. M., 135
Relaxation, typical T1 relaxation curve, 216
Relaxation time, magnetic resonance and, 215
Repeatability and reproducibility, poor, electronic nose sensors and, 247, 250
Requalification of instruments
  degree of, 10
  types of changes related to, 9
RES. See Response time
Resa, P., 26
Research, 2, 5
Resonance Systems, NMR spectrometers through, 233
Resonance ultrasonic methods, 47
Resonator, alignment of transducers and ultrasonic path for, 48–49
Response surface analysis, electronic nose sensors and, 250
Response time, 313
Resubstitution method, 260
Reynolds number, flow of microfluidic channel characterized by, 288
Rheology, 58
Ring structures, detecting, Raman spectroscopy and, 144
Ripoche, A., 131
RMSECV. See Root Mean Square Error of Cross Validation
RMSEP. See Root Mean Square Error of Prediction
RNA, hybridization and, 301–302
Roberts, C. A., 96
Root Mean Square Error of Cross Validation, 92, 93
Root Mean Square Error of Prediction, 92, 93
Roussel, S., 136, 248
RST Rostock (Germany), 243
R² value, 93
Rudnitskaya, A., 135

S
Sadana, A., 302
Sadeghi-Jorachi, H., 148
Safety, product, 120
Saggin, R., 54, 55
Salami, 99
  NIR reflectance spectra of, 87
Salmonella enterica serotypes, FTIR spectroscopy applied to classification of intact cells and LPS of, 137
Salts, ultrasound and phase transitions in, 53
Sample dispersion, causes of, in electrokinetically driven microdevice systems, 294
Sample overloading, 294
Sample presentation methods, 124–127
  attenuated total reflectance, 126–127
  diffuse reflectance, 127
  transmission windows and cells, 125–126
Sample presentation system, laser diffraction and, 173
Sample volume, 12
Sampling procedures, 9
Sampling techniques, NIR methods and, 83–84
Sausages, 99
  calibration results for, 100t
  cross correlation of water and fat content in, 98
SAW mode. See Surface acoustic wave mode
SAW sensors. See Surface acoustic wave-based sensors
Scanning dispersive grating instruments, advantages and disadvantages of, 75
Scanning dispersive grating spectrophotometers, 72–73
  principle of, 73
Scattering theory, 57
Schaak, R. E., 248
Schaller, E., 263
Sealed transmission cells, 125
Secondary (or indirect) methods, 33, 35
  calibration of, against primary or direct method, 43
Semi-conducting polymers, 239
Semipermanent transmission cells, 125
Semi-skimmed milk, size distributions recorded for, 183–184, 184
Sensor drift, 248, 250
Sensors, food industry changes and consequences related to use of, 4–5
Sensor technologies, for odor analyses, 238. See also Electronic nose technology
Separation, by sieving, 167–168
SFC. See Solid fat content
Shear/transverse (T) waves, 45
Shear wave methods, for food texture measurement, 58–59
Shear wave reflectance, 51
Shelf-life, NMR and, 227
Sieve analysis, 197
Sieves, types of, 167
Sieving, 167–168
  automated image analysis and, 24
Sigfusson, H., 55
Signal processing system, biosensor, 284
Silicon detector, cheese analysis and, 106, 107
SIMCA. See Soft Independent Modeling of Class Analogy
Simon, J. E., 263
Singh, A. P., 54
Singh, P. C., 22
Single-electrode pairs
  applications using, 325–328
  missing biscuit detection and, 325–327
Single-shot measurements of relaxation times or diffusion, pulse sequence/experimental design advances and, 234–235
Siragus, F. R., 153
Sivakesava, S., 136
Sizing techniques, 167–195
  image analysis, 193–195
  light scattering, 168–176
    dynamic, 168–172
    laser diffraction, 172–176
  sieving, 167–168
Skimmed milk, size distributions recorded for, 183–184, 184
Skimmed milk powder, determination of moisture in, using a near infrared process analyzer, 18
"Slaves," 95
Slice selection, 219
Smell, characterizing food product in terms of, 237
"Smellprint," 240
Smith Detection Systems, 239
SNF. See Solid-non-fat
Sniff tests, human sensory panel-based, 269
Soft Independent Modeling of Class Analogy, 251, 261
Sokolov, S., 302
Solid fat content, 54
Solid-non-fat, effect of homogenization of milk on measurement of, 10
Solvent levels (case study), retained, detection of, in printed packaging material, 269
Soxhlet extraction, 84
  oil content in olives determined by, 111
  quantitative determination of fat content from mayonnaise by, 109
Spectra, simple, example of factorization of, into corresponding loadings and scores, 90
Spectral transfer, 94–95
Spectrometer technologies, advantages and disadvantages of, 75–76
Spectroscopical methods, prediction of texture by, 23
Spectroscopy, defined, 119
Sphere, with same volume of given cylinder, 166–167, 167
SpinCore Technologies, Inc., general purpose NMR spectrometers through, 233
Spin echo pulse sequence, 221
  in MRI, 219
  sample image of internal browning in an apple and, 220
Spin-lattice relaxation, 215
Spin-spin relaxation, magnetic resonance and, 215, 216
Spin-spin relaxation rates, Halbach magnet for measurement of, for whole navel oranges, 232, 234
Spin-spin relaxation times, portable NMR systems for measuring, 233
Spoilage microorganisms, MIR spectroscopy and detection and identification of, 137
Stabilizers, flavor emulsions and, 187
Staff, central importance of, 7
Starch, FT-Raman spectroscopy and gelatinization and retrogradation processes of, 150
Static headspace analysis, 241
Stationary MRI, 217–221
Statistical analysis, electronic nose technology, 250–262
  artificial neural networks, 262
  discriminant analyses, 252–262
    variable selection procedure, 260–262
  multivariate factor analyses, 251
  principal components analysis, 251–252
Stepwise selection procedure, 260
Stokes-Einstein equation, concentration dependence of intensity particle size distribution for emulsion sample, using solvent viscosity in, 172
Strassburger, K. J., 241
Submicron dispersions, concentrated, measurement of size and number of big particles in, 197, 204–205
Sucrose solutions, speed of sound in, as a function of temperature, 56
Sugar
  in chocolate production, 177–178
  ultrasound and phase transitions in, 53
Sun, D. W., 20, 21
Superconducting magnets, 228
Support vector machines, 91, 263
Surface acoustic wave-based sensors, 246–247
Surface acoustic wave mode, 245
SVM. See Support vector machines
Swiss Gruyère cheese, MIR spectroscopy and, 135
T
Taguchi, 239
Tan, J., 20
Tanaka, F., 24
Tapp, H. S., 125
Target molecules, biosensor technology, and interaction between biorecognition molecules and, 301
Taste, 283
Tea leaves, image analysis of, 194, 195
Tea stalks, image analysis of, 194, 195
Tecmag, Inc., portable NMR spectrometer manufactured by, 232, 233
10-port injection, flow injection amperometric biosensor, 310
Tesla permanent magnet, for in-line magnetic resonance imaging of structural defects in food products, 229
Tesla permanent magnet systems, applications with, 229
Test Set, 93
Texture analysis, of meat, 20
Textures, 283
  spectroscopical methods and prediction of, 23
Theobroma cacao tree, 177
Thermal processing, protein structure changes and, 147
3-CCD color digital cameras, 20
Three-dimensional fast spin echo data set, four slices from, on a small lime using permanent magnet designed for in-line magnetic resonance imaging, 230
Three-dimensional imaging with single-sided sensor, pulse sequence/experimental design advances and, 234
Through transmission, alignment of transducers and ultrasonic path for, 48–49
Thybo, A. K., 25
Time coding, 6
Time-of-flight method, dynamic MRI and, 222
Tin oxide-based sensors, 263
Tomoflow measurement principle, 334
Tomographic analysis, 60
Tomographic flow measurement, of wheat, 337
Tomographic two-phase flow measurement system, experimental, 336
Total fat content, nondestructive compositional analysis and, 18
Total mass balance, 296
Toxins, 283
Traceability, 120, 138
  mid-infrared spectroscopy and, 130t
  of production, 6
Transducer, biosensor, 284
Transflection, 79
Transflection measurements, 79
  principle of, 79
Transflection probes, 78, 79
Transmission measurements, 76–78
  principle of, 77
Transmission probe, 78
Transmission windows and cells, 125–126
Transverse relaxation, magnetic resonance and, 215, 216
Triangular-shaped magnet systems, 232
Triple-bonded structures, detecting, Raman spectroscopy and, 144
True values, 34
  by Karl Fischer titration and oven drying and predicted values for wheat semolina samples, 37, 37t
Tryptophan, Raman bands and, 147
T2 relaxation, first order relaxation process and, 216
T-waves, 45, 46, 58
Twin-plane guarded multielectrode capacitance sensor, 335
Two-phase flow measurement, and use of electrical capacitance tomography, 333–335, 337
Tyrosine, Raman bands and, 147

U
Ultrasonic Doppler velocimetry (UDV), advantages with, 60
Ultrasonic measurement of food composition, 52–55
Ultrasonic measurement of food structure, 55–58
  macroscale structure, 55–57
  microscale structure, 57–58
Ultrasonic particle sizing, 57
Ultrasonic propagation, 47
Ultrasonic sensors, development of, and two major strengths with, 61
Ultrasonic velocimetry, uses for, 53
Ultrasonic velocity, lactic acid fermentation and, 26
Ultrasonic waves, employment of, in ultrasonic studies, 45
Ultrasound, 45–62
  advantages with, 62
  applications with, 52–61
    ultrasonic measurement of food composition, 52–55
  measurement methods, 47, 49–52
    noncontact measurements, 50
    reflectance measurements, 50–52
  physical changes during processing and, 26
  ultrasonic measurement of food structure, 55–58
    measurement of food texture, 58–61
Ultrasound applications, noncontact and, 26
Ultrasound spectroscopy, 25
Ultrasonic Doppler velocimetry measurement system, alignment of transducers and ultrasonic path for, 48–49
Unbiased Mahalanobis distance, 256, 257
Unilateral geometry, 229
Unilateral magnet, 229
United Kingdom chocolate products, 179–180
  chocolate bars, particle size distributions for, 180
Unknown samples, Mahalanobis distance and identification of, 255
U.S. Air Force, polymer development technology developed by, 239
UV resonance Raman spectrometers, 145
UV resonance Raman spectroscopy
  identification of microorganisms in pure cultures and, 152–153
  quaternary structure transition due to aromatic amino acid residues detected by, 147

V
Vacuum drying, 35
Validation, results and parameters of, 93
van Deventer, D., 269
van de Voort, F. R., 129
van Dijk, C., 27
Variable focusing potential, sample transport as function of, showing sample concentration during loading and after injection, 308
Variable selection procedures, types of, 260
Vegetables
  image analysis and, 20
  spectroscopic techniques and assessing quality of, 27–28
Velocity profiles, 227
  pulse sequence/experimental design advances and, 234
Verification, of laser diffraction, 175
Vial holders, 78
Vials, 77
Vibrational frequencies, selected parameters pertinent to, 144
Video image analysis, color changes measurement and, 27
Virginia Tech, 269
Viscometers, 22–23
Visible excitation Raman spectroscopy, 144
Vision systems, 20–21
Volatile compounds, smell of food and, 237

W
Wafer, missing, prototype detector system, 326
Warwick University (England), 239
Water, intensity size distributions for various concentrations of emulsion sample using viscosity of, 172
Water content, 16
  determination example, 35–37, 40
  Karl Fischer titration and, 36
  for lactoserum, 37
Water distribution in product, MRI and measurement of, 25
Watermelon, on top of a unilateral NMR magnet and radio frequency coil, 231
Wavelength dispersive X-ray fluorescence, 21–22
Wave propagation, description of, 47
Weighing parameters, response surfaces of objective function for two different settings of, 314
Wet sieving, 168
Wheat
  G/C against moisture content for, 329
  tomographic flow measurement of, 337
Whey powder, calibration results for, 102t
White powder, in-process measurement of size and number of dark particles in, 197, 202–204
Wiedemann, S. C., 249
Wilks' Lambda value, calculation of, 257
Williams, Phil, 68
Williams, R. W., 147
Wilson, R. H., 125
Wold, J. P., 149, 157
Woven wire sieves, 167
X
Xanthan gum, 187
Xing, H., 26
XPT-C systems, 198
  concentrated submicron dispersions and, 204
  detection of fibrous impurities in particle suspensions with, 205
  schematic of, 200
XPT Particle Analysis instruments, 198
XPT-P probe, in-process installation, 204
XPT-P system, 198
  schematic of, 199

Y
Yang, R. J., 309
Yarrowia lipolytica, 133
Yogurt
  calibration results for, 109t
  NIR analysis and, 108–109
  online monitoring of fermentation of, 268
Young, H., 268
Yu, C. X., 137

Z
Zeroth moment, 224
Zhao, T. S., 294
zNose™, 239, 247
Zude, M., 27