EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING -13
COMPUTER-AIDED CHEMICAL ENGINEERING
Advisory Editor: R. Gani

Volume 1: Distillation Design in Practice (L.M. Rose)
Volume 2: The Art of Chemical Process Design (G.L. Wells and L.M. Rose)
Volume 3: Computer Programming Examples for Chemical Engineers (G. Ross)
Volume 4: Analysis and Synthesis of Chemical Process Systems (K. Hartmann and K. Kaplick)
Volume 5: Studies in Computer-Aided Modelling, Design and Operation. Part A: Unit Operations (I. Pallai and Z. Fonyo, Editors); Part B: Systems (I. Pallai and G.E. Veress, Editors)
Volume 6: Neural Networks for Chemical Engineers (A.B. Bulsari, Editor)
Volume 7: Material and Energy Balancing in the Process Industries - From Microscopic Balances to Large Plants (V.V. Veverka and F. Madron)
Volume 8: European Symposium on Computer Aided Process Engineering-10 (S. Pierucci, Editor)
Volume 9: European Symposium on Computer Aided Process Engineering-11 (R. Gani and S.B. Jørgensen, Editors)
Volume 10: European Symposium on Computer Aided Process Engineering-12 (J. Grievink and J. van Schijndel, Editors)
Volume 11: Software Architectures and Tools for Computer Aided Process Engineering (B. Braunschweig and R. Gani, Editors)
Volume 12: Computer Aided Molecular Design: Theory and Practice (L.E.K. Achenie, R. Gani and V. Venkatasubramanian, Editors)
Volume 13: Integrated Design and Simulation of Chemical Processes (A.C. Dimian)
Volume 14: European Symposium on Computer Aided Process Engineering-13 (A. Kraslawski and I. Turunen, Editors)
COMPUTER-AIDED CHEMICAL ENGINEERING, 14
EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING - 13
36th European Symposium of the Working Party on Computer Aided Process Engineering
ESCAPE-13, 1-4 June 2003, Lappeenranta, Finland
Edited by
Andrzej Kraslawski
Ilkka Turunen
Lappeenranta University of Technology, Lappeenranta, Finland
2003
ELSEVIER
Amsterdam - Boston - London - New York - Oxford - Paris - San Diego - San Francisco - Singapore - Sydney - Tokyo
ELSEVIER SCIENCE B.V.
Sara Burgerhartstraat 25
P.O. Box 211, 1000 AE Amsterdam, The Netherlands

© 2003 Elsevier Science B.V. All rights reserved.

This work is protected under copyright by Elsevier Science, and the following terms and conditions apply to its use:

Photocopying
Single photocopies of single chapters may be made for personal use as allowed by national copyright laws. Permission of the Publisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit educational classroom use. Permissions may be sought directly from Elsevier Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier Science homepage (http://www.elsevier.com), by selecting 'Customer support' and then 'Obtaining Permissions'. In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; phone: (+1) (978) 7508400, fax: (+1) (978) 7504744, and in the UK through the Copyright Licensing Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) 207 631 5555; fax: (+44) 207 631 5500. Other countries may have a local reprographic rights agency for payments.

Derivative Works
Tables of contents may be reproduced for internal circulation, but permission of Elsevier Science is required for external resale or distribution of such material. Permission of the Publisher is required for all other derivative works, including compilations and translations.

Electronic Storage or Usage
Permission of the Publisher is required to store or use electronically any material contained in this work, including any chapter or part of a chapter. Except as outlined above, no part of this work may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher. Address permissions requests to: Elsevier Science Global Rights Department, at the fax and e-mail addresses noted above.

Notice
No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.
First edition 2003

Library of Congress Cataloging in Publication Data
A catalog record from the Library of Congress has been applied for.

British Library Cataloguing in Publication Data
A catalogue record from the British Library has been applied for.
ISBN: 0-444-51368-X
ISSN: 1570-7946 (Series)

∞ The paper used in this publication meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).

Printed in Hungary.
Preface
This book contains papers presented at the 13th European Symposium on Computer Aided Process Engineering (ESCAPE-13) held in Lappeenranta, Finland, from the 1st to 4th June 2003.

The symposia on Computer Aided Process Engineering (CAPE) have been promoted by the Working Party on CAPE of the European Federation of Chemical Engineering (EFCE) since 1968. The most recent symposia were organised in The Hague, The Netherlands, 2002 (ESCAPE-12), Kolding, Denmark, 2001 (ESCAPE-11), and Florence, Italy, 2000 (ESCAPE-10). The series of ESCAPE symposia helps to bring together scientists, students and engineers from academia and industry who are active in the research and application of CAPE.

The objective of ESCAPE-13 is to highlight the use of computers and information technology tools on five specific themes: 1. Process Design, 2. Process Control and Dynamics, 3. Modelling, Simulation and Optimisation, 4. Applications in the Pulp and Paper Industry, 5. Applications in Biotechnology. The main theme for ESCAPE-13 is Expanding the Application Field of CAPE Methods and Tools, that is, extending the CAPE approach, so far used mainly in the chemical industry, into other sectors of the process industry, and promoting CAPE applications aimed at the generation of new businesses and technologies.

This book includes 190 papers selected from 391 submitted abstracts. All papers were reviewed by the 33 members of the international scientific committee. The selection process involved review of abstracts, manuscripts and final acceptance of the revised manuscripts. We are very grateful to the members of the international scientific committee for their comments and recommendations.

This book is the fourth set of ESCAPE Symposium Proceedings included in the series on Computer-Aided Chemical Engineering. We hope that, like the previous Proceedings, it will contribute to progress in computer aided process and product engineering.
Andrzej Kraslawski Ilkka Turunen
International Scientific Committee

Andrzej Kraslawski (Finland, co-chairman)
Ilkka Turunen (Finland, co-chairman)

J. Aittamaa (Finland)
A. Barbosa-Povoa (Portugal)
L.T. Biegler (USA)
D. Bogle (United Kingdom)
B. Braunschweig (France)
K. Edelmann (Finland)
Z. Fonyo (Hungary)
R. Gani (Denmark)
U. Gren (Sweden)
P. Glavic (Slovenia)
J. Grievink (The Netherlands)
I.E. Grossmann (USA)
L. Hammarstrom (Finland)
G. Heyen (Belgium)
A. Isakson (Sweden)
S.B. Jørgensen (Denmark)
B. Kalitventzeff (Belgium)
S. Karrila (USA)
J-M. Le Lann (France)
K. Leiviska (Finland)
W. Marquardt (Germany)
J. Paloschi (United Kingdom)
C. Pantelides (United Kingdom)
T. Perris (United Kingdom)
S. Pierucci (Italy)
H. Pingen (The Netherlands)
L. Puigjaner (Spain)
Y. Qian (P.R. China)
H. Schmidt-Traub (Germany)
S. Skogestad (Norway)
P. Taskinen (Finland)
S. de Wolf (The Netherlands)
T. Westerlund (Finland)
National Organising Committee

Andrzej Kraslawski (Finland, co-chairman)
Ilkka Turunen (Finland, co-chairman)

J. Aittamaa, Helsinki University of Technology
M. Hurme, Helsinki University of Technology
M. Karlsson, Metso Corporation
K. Leiviska, Oulu University
P. Piiroinen, Danisco Sugar Oy
R. Ritala, Keskuslaboratorio Oy
P. Taskinen, Outokumpu Research Oy
A. Vuori, Kemira Oyj
T. Westerlund, Abo Akademi University
Contents

Keynote Paper

Bogle, I.D.L. Computer Aided Biochemical Process Engineering 1

Contributed Papers

Process Design

Abebe, S., Shang, Z., Kokossis, A. A Two-stage Optimisation Approach to the Design of Water-using Systems in Process Plants 11
Andersen, N.K., Coll, N., Jensen, N., Gani, R., Uerdingen, E., Fischer, U., Hungerbühler, K. Generation and Screening of Retrofit Alternatives Using a Systematic Indicator-Based Retrofit Analysis Method 17
Balendra, S., Bogle, I.D.L. A Comparison of Flowsheet Solving Strategies Using Interval Global Optimisation Methods 23
Bayer, B., von Wedel, L., Marquardt, W. An Integration of Design Data and Mathematical Models in Chemical Process Design 29
Berard, F., Azzaro-Pantel, C., Pibouleau, L., Domenech, S. A Production Planning Strategic Framework for Batch Plants 35
Bonfill, A., Canton, J., Bagajewicz, M., Espuna, A., Puigjaner, L. Managing Financial Risk in Scheduling of Batch Plants 41
Borissova, A., Fairweather, M., Goltz, G.E. A Network Model for the Design of Agile Plants 47
Borissova, A., Fairweather, M., Goltz, G.E. A Vision of Computer Aids for the Design of Agile Production Plants 53
Caballero, J.A., Reyes-Labarta, J.A., Grossmann, I.E. Synthesis of Integrated Distillation Systems 59
Cafaro, D.C., Cerda, J. A Continuous-Time Approach to Multiproduct Pipeline Scheduling 65
Chatzidoukas, C., Kiparissides, C., Perkins, J.D., Pistikopoulos, E.N. Optimal Grade Transition Campaign Scheduling in a Gas-Phase Polyolefin FBR Using Mixed Integer Dynamic Optimization 71
Chavali, S., Huismann, T., Lin, B., Miller, D.C., Camarda, K.V. Environmentally-Benign Transition Metal Catalyst Design using Optimization Techniques 77
Cisternas, L.A., Cueto, J.Y., Swaney, R.E. Complete Separation System Synthesis of Fractional Crystallization Processes 83
Dumont, M.-N., Heyen, G. Mathematical Modelling and Design of an Advanced Once-Through Heat Recovery Steam Generator 89
Duque, J., Barbosa-Povoa, A.P.F.D., Novais, A.Q. Synthesis and Optimisation of the Recovery Route for Residual Products 95
Eden, M.R., Jørgensen, S.B., Gani, R. A New Modeling Approach for Future Challenges in Process and Product Design 101
Emet, S., Westerlund, T. Solving an MINLP Problem Including Partial Differential Algebraic Constraints Using Branch and Bound and Cutting Plane Techniques 107
Farkas, T., Avramenko, Y., Kraslawski, A., Lelkes, Z., Nystrom, L. Selection of MINLP Model of Distillation Column Synthesis by Case-Based Reasoning 113
Fraga, E.S., Papageorgiou, L.G., Sharma, R. Discrete Model and Visualization Interface for Water Distribution Network Design 119
Galvez, E.D., Zavala, M.F., Magna, J.A., Cisternas, L.A. Optimal Design of Mineral Flotation Circuits 125
Gerogiorgis, D.I., Ydstie, B.E. An MINLP Model for the Conceptual Design of a Carbothermic Aluminium Reactor 131
Giovanoglou, A., Adjiman, C.S., Galindo, A., Jackson, G. Towards the Identification of Optimal Solvents for Long Chain Alkanes with the SAFT Equation of State 137
Godat, J., Marechal, F. Combined Optimisation and Process Integration Techniques for the Synthesis of Fuel Cells Systems 143
Guadix, A., Sørensen, E., Papageorgiou, L.G., Guadix, E.M. Optimal Design and Operation of Batch Ultrafiltration Systems 149
Heimann, F. Process Intensification through the Combined Use of Process Simulation and Miniplant Technology 155
Huang, W., Chung, P.W.H. A Constraint Approach for Rescheduling Batch Processing Plants Including Pipeless Plants 161
Irsic Bedenik, N., Pahor, B., Kravanja, Z. Integrated MINLP Synthesis of Overall Process Flowsheets by a Combined Synthesis / Analysis Approach 167
Kotoulas, C., Pladis, P., Papadopoulos, E., Kiparissides, C. Computer Aided Design of Styrene Batch Suspension Polymerization Reactors 173
Kovac Kralj, A., Glavic, P. Waste Heat Integration Between Processes III: Mixed Integer Nonlinear Programming Model 179
Kulay, L., Jimenez, L., Castells, F., Bañares-Alcantara, R., Silva, G.A. Integration of Process Modelling and Life Cycle Inventory. Case Study: i-Pentane Purification Process from Naphtha 185
Lee, S., Logsdon, J.S., Foral, M.J., Grossmann, I.E. Superstructure Optimization of the Olefin Separation Process 191
Lelkes, Z., Rev, E., Steger, C., Varga, V., Fonyo, Z., Horvath, L. Batch Extractive Distillation with Intermediate Boiling Entrainer 197
Lelkes, Z., Szitkai, Z., Farkas, T., Rev, E., Fonyo, Z. Short-cut Design of Batch Extractive Distillation using MINLP 203
Li, X.-N., Rong, B.-G., Kraslawski, A., Nystrom, L. A Conflict-Based Approach for Process Synthesis with Wastes Minimization 209
Maravelias, C.T., Grossmann, I.E. A New Continuous-Time State Task Network Formulation for Short Term Scheduling of Multipurpose Batch Plants 215
Masruroh, N.A., Li, B., Klemes, J. Life Cycle Analysis of a Solar Thermal System with Thermochemical Storage Process 221
Msiza, A.K., Fraser, D.M. Hybrid Synthesis Method for Mass Exchange Networks 227
Oliveira Francisco, A.P., Matos, H.A. Multiperiod Synthesis and Operational Planning of Utility Systems with Environmental Concerns 233
Orban-Mihalyko, E., Lakatos, B.G. Sizing Intermediate Storage with Stochastic Equipment Failures under General Operation Conditions 239
Papaeconomou, I., Jørgensen, S.B., Gani, R., Cordiner, J. Synthesis, Design and Operational Modelling of Batch Processes: An Integrated Approach 245
Pierucci, S., Bombardi, D., Concu, A., Lugli, G. Modelling, Design and Commissioning of a Sustainable Process for VOCs Recovery from Spray Paint Booths 251
Pinto, T., Barbosa-Povoa, A.P.F.D., Novais, A.Q. Comparison Between STN, m-STN and RTN for the Design of Multipurpose Batch Plants 257
Proios, P., Pistikopoulos, E.N. Generalized Modular Framework for the Representation of Petlyuk Distillation Columns 263
Rodriguez-Martinez, A., Lopez-Arevalo, I., Banares-Alcantara, R., Aldea, A. A Multi-Modelling Approach for the Retrofit of Processes 269
Rong, B.-G., Kraslawski, A., Turunen, I. Synthesis of Partially Thermally Coupled Column Configurations for Multicomponent Distillations 275
Shang, Z., Kokossis, A. A Multicriteria Process Synthesis Approach to the Design of Sustainable and Economic Utility Systems 281
Srinivasan, R., Chia, K.C., Heikkila, A.-M., Schabel, J. A Decision Support Database for Inherently Safer Design 287
Stalker, I.D., Fraga, E.S., von Wedel, L., Yang, A. Using Design Prototypes to Build an Ontology for Automated Process Design 293
Stalker, I.D., Stalker Firth, R.A., Fraga, E.S. Engineer Computer Interaction for Automated Process Design in COGents 299
Stikkelman, R.M., Herder, P.M., van der Wal, R., Schor, D. Developing a Methanol-Based Industrial Cluster 305
Sundqvist, S., Pajula, E., Ritala, R. Risk Premium and Robustness in Design Optimization of a Simplified TMP Plant 311
Syrjanen, T.L. Process Design as Part of a Concurrent Plant Design Project 317
Szitkai, Z., Farkas, T., Kravanja, Z., Lelkes, Z., Rev, E., Fonyo, Z. A New MINLP Model for Mass Exchange Network Synthesis 323
Weiten, M., Wozny, G. A Knowledge Based System for the Documentation of Research Concerning Physical and Chemical Processes - System Design and Case Studies for Application 329
Yuceer, M., Atasoy, I., Berber, R. A Semi Heuristic MINLP Algorithm for Production Scheduling 335
Zhao, Ch., Bhushan, M., Venkatasubramanian, V. Roles of Ontology in Automated Process Safety Analysis 341
Process Control and Dynamics

Abonyi, J., Arva, P., Nemeth, S., Vincze, Cs., Bodolai, B., Dobosne Horvath, Zs., Nagy, G., Nemeth, M. Operator Support System for Multi Product Processes - Application to Polyethylene Production 347
Alstad, V., Skogestad, S. Combination of Measurements as Controlled Variables for Self-Optimizing Control 353
Badell, M., Romero, J., Puigjaner, L. Integrating Budgeting Models into APS Systems in Batch Chemical Industries 359
Batzias, A.F., Batzias, F.A. A System for Support and Training of Personnel Working in the Electrochemical Treatment of Metallic Surfaces 365
Benqlilou, C., Bagajewicz, M.J., Espuña, A., Puigjaner, L. Sensor-Placement for Dynamic Processes 371
Berezowski, M., Dubaj, D. Chaotic Oscillations in a System of Two Parallel Reactors with Recirculation of Mass 377
Cao, Y., Saha, P. Control Structure Selection for Unstable Processes Using Hankel Singular Value 383
Cristea, M.V., Roman, R., Agachi, S.P. Neural Networks Based Model Predictive Control of the Drying Process 389
Cubillos, F.A., Lima, E.L. Real-Time Optimization Systems Based on Grey-Box Neural Models 395
Duarte, B.P.M., Saraiva, P.M. Change Point Detection for Quality Monitoring of Chemical Processes 401
Engelien, H.K., Skogestad, S. Selecting Appropriate Control Variables for a Heat Integrated Distillation System with Prefractionator 407
Espuña, A., Rodrigues, M.T., Gimeno, L., Puigjaner, L. A Holistic Framework for Supply Chain Management 413
Guillen, G., Mele, F.D., Bagajewicz, M., Espuna, A., Puigjaner, L. Management of Financial and Consumer Satisfaction Risks in Supply Chain Design 419
Hyllseth, M., Cameron, D., Havre, K. Operator Training and Operator Support using Multiphase Pipeline Models and Dynamic Process Simulation: Sub-Sea Production and On-Shore Processing 425
Kiss, A.A., Bildea, C.S., Dimian, A.C., Iedema, P.D. Unstable Behaviour of Plants with Recycle 431
Kwon, S.P., Kim, Y.H., Cho, J., Yoon, E.S. Development of an Intelligent Multivariable Filtering System based on the Rule-Based Method 437
Lee, G., Yoon, E.S. Multiple-Fault Diagnosis Using Dynamic PLS Built on Qualitative Relations 443
Li, H., Gani, R., Jørgensen, S.B. Integration of Design and Control for Energy Integrated Distillation 449
Li, X.X., Qian, Y., Wang, J. Process Monitoring Based on Wavelet Packet Principal Component Analysis 455
Li, X.X., Qian, Y., Wang, J., Qin, S.J. Information Criterion for Determination Time Window Length of Dynamic PCA for Process Monitoring 461
Madar, J., Szeifert, P., Nagy, L., Chovan, T., Abonyi, J. Tendency Model-based Improvement of the Slave Loop in Cascade Temperature Control of Batch Process Units 467
Maurya, M.R., Rengaswamy, R., Venkatasubramanian, V. Consistent Malfunction Diagnosis Inside Control Loops Using Signed Directed Graphs 473
Mele, F.D., Bagajewicz, M., Espuna, A., Puigjaner, L. Financial Risk Control in a Discrete Event Supply Chain 479
Meng, Q.F., Nougues, J.M., Bagajewicz, M.J., Puigjaner, L. Control Application Study Based on PROCEL 485
Mizsey, P., Emtir, M., Racz, L., Lengyel, A., Kraslawski, A., Fonyo, Z. Challenges in Controllability Investigations of Chemical Processes 491
Reinikainen, S.-P., Hoskuldsson, A. Analysis of Linear Dynamic Systems of Low Rank 497
Saxen, B., Nyberg, J. Data Based Classification of Roaster Bed Stability 503
Seferlis, P., Giannelos, N.F. A Two-Layered Optimisation-Based Control Strategy for Multi-Echelon Supply Chain Networks 509
Segovia-Hernandez, J.G., Hernandez, S., Femat, R., Jimenez, A. Dynamic Control of a Petlyuk Column via Proportional-Integral Action with Dynamic Estimation of Uncertainties 515
Segovia-Hernandez, J.G., Hernandez, S., Rico-Ramirez, V., Jimenez, A. Dynamic Study of Thermally Coupled Distillation Sequences Using Proportional-Integral Controllers 521
Vu, T.T.L., Hourigan, J.A., Sleigh, R.W., Ang, M.H., Tade, M.O. Metastable Control of Cooling Crystallisation 527
Yen, Ch.H., Tsai, P.-F., Jang, S.S. Regional Knowledge Analysis of Artificial Neural Network Models and a Robust Model Predictive Control Architecture 533
Modelling, Simulation and Optimisation

Ahola, J., Kangas, J., Maunula, T., Tanskanen, J. Optimisation of Automotive Catalytic Converter Warm-Up: Tackling by Guidance of Reactor Modelling 539
Alopaeus, V., Keskinen, K.I., Koskinen, J., Majander, J. Gas-Liquid and Liquid-Liquid System Modeling Using Population Balances for Local Mass Transfer 545
Arellano-Garcia, H., Martini, W., Wendt, M., Li, P., Wozny, G. Robust Optimization of a Reactive Semibatch Distillation Process under Uncertainty 551
Attarakih, M.M., Bart, H.-J., Faqir, N.M. Solution of the Population Balance Equation for Liquid-Liquid Extraction Columns using a Generalized Fixed-Pivot and Central Difference Schemes 557
Bardow, A., Marquardt, W. Identification of Multicomponent Mass Transfer by Means of an Incremental Approach 563
Barrett, W., Harten, P. Development of the US EPA's Metal Finishing Facility Pollution Prevention Tool 569
Bozga, G., Bumbac, G., Plesu, V., Muja, I., Popescu, C.D. Modelling and Simulation of Kinetics and Operation for the TAME Synthesis by Catalytic Distillation 575
Brad, R.B., Fairweather, M., Griffiths, J.F., Tomlin, A.S. Reduction of a Chemical Kinetic Scheme for Carbon Monoxide-Hydrogen Oxidation 581
Brauner, N., Shacham, M. A Procedure for Constructing Optimal Regression Models in Conjunction with a Web-based Stepwise Regression Library 587
Chatzidoukas, C., Perkins, J.D., Pistikopoulos, E.N., Kiparissides, C. Dynamic Simulation of the Borstar® Multistage Olefin Polymerization Process 593
Cheng, H.N., Qian, Y., Li, X.X., Li, H. Agent-Oriented Modelling and Integration of Process Operation Systems 599
Citir, C., Aktas, Z., Berber, R. Off-line Image Analysis for Froth Flotation of Coal 605
Coimbra, M.d.C., Sereno, C., Rodrigues, A. Moving Finite Element Method: Applications to Science and Engineering Problems 611
Dalai, N.M., Malik, R.K. Solution Multiplicity in Multicomponent Distillation. A Computational Study 617
Dave, D.J., Zhang, N. Multiobjective Optimisation of Fluid Catalytic Cracker Unit Using Genetic Algorithms 623
Demicoli, D., Stichlmair, J. Novel Operational Strategy for the Separation of Ternary Mixtures via Cyclic Operation of a Batch Distillation Column with Side Withdrawal 629
Dietzsch, L., Fischer, I., Machefer, S., Ladwig, H.-J. Modelling and Optimisation of a Semibatch Polymerisation Process 635
Elgue, S., Cabassud, M., Prat, L., Le Lann, J.M., Cezerac, J. A Global Approach for the Optimisation of Batch Reaction-Separation Processes 641
Gopinathan, N., Fairweather, M., Jia, X. Computational Modelling of Packed Bed Systems 647
Hadj-Kali, M., Gerbaud, V., Joulia, X., Boutin, A., Ungerer, P., Mijoule, C., Roques, J. Application of Molecular Simulation in the Gibbs Ensemble to Predict Liquid-Vapor Equilibrium Curves of Acetonitrile 653
Hallas, I.C., Sørensen, E. Simulation of Supported Liquid Membranes in Hollow Fibre Configuration 659
Haug-Warberg, T. On the Principles of Thermodynamic Modeling 665
Heinonen, J., Pettersson, F. Short-Term Scheduling in Batch Plants: A Generic Approach with Evolutionary Computation 671
Hinnela, J., Saxen, H. Model of Burden Distribution in Operating Blast Furnaces 677
Hugo, A., Ciumei, C., Buxton, A., Pistikopoulos, E.N. Environmental Impact Minimisation through Material Substitution: A Multi-Objective Optimisation Approach 683
Inglez de Souza, E.T., Maciel Filho, R., Victorino, I.R.S. Genetic Algorithms as an Optimisation Tool for Rotary Kiln Incineration Process 689
Kasiri, N., Hosseini, A.R., Moghadam, M. Dynamic Simulation of an Ammonia Synthesis Reactor 695
Katare, S., Caruthers, J., Delgass, W.N., Venkatasubramanian, V. Reaction Modeling Suite: A Rational, Intelligent and Automated Framework for Modeling Surface Reactions and Catalyst Design 701
Kim, Y.H., Ryu, M.J., Han, E., Kwon, S.-P., Yoon, E.S. Computer Aided Prediction of Thermal Hazard for Decomposition Processes 707
Kloker, M., Kenig, E., Gorak, A., Fraczek, K., Salacki, W., Orlikowski, W. Experimental and Theoretical Studies of the TAME Synthesis by Reactive Distillation 713
Koči, P., Marek, M., Kubicek, M. Oscillatory Behaviour in Mathematical Model of TWC with Microkinetics and Internal Diffusion 719
Kohout, M., Vanickova, T., Schreiber, I., Kubicek, M. Methods of Analysis of Complex Dynamics in Reaction-Diffusion-Convection Models 725
Korpi, M., Toivonen, H., Saxen, B. Modelling and Identification of the Feed Preparation Process of a Copper Flash Smelter 731
Koskinen, J., Pattikangas, T., Manninen, M., Alopaeus, V., Keskinen, K.I., Koskinen, K., Majander, J. CFD Modelling of Drag Reduction Effects in Pipe Flows 737
Kreis, P., Gorak, A. Modelling and Simulation of a Combined Membrane/Distillation Process 743
Lacks, D.J. Consequences of On-Line Optimization in Highly Nonlinear Chemical Processes 749
Lakner, R., Hangos, K.M., Cameron, I.T. Construction of Minimal Models for Control Purposes 755
Lievo, P., Almark, M., Purola, V.-M., Pyhalahti, A., Aittamaa, J. Miniplant - Effective Tool in Process Development and Design 761
Lim, Y.-I., Christensen, S., Jørgensen, S.B. A Generalized Adsorption Rate Model Based on the Limiting-Component Constraint in Ion-Exchange Chromatographic Separation for Multicomponent Systems 767
Miettinen, T., Laakkonen, M., Aittamaa, J. Comparison of Various Flow Visualisation Techniques in a Gas-Liquid Mixed Tank 773
Montastruc, L., Azzaro-Pantel, C., Davin, A., Pibouleau, L., Cabassud, M., Domenech, S. A Hybrid Optimization Technique for Improvement of P-Recovery in a Pellet Reactor 779
Mori, Y., Partanen, J., Louhi-Kultanen, M., Kallas, J. Modelling of Crystal Growth in Multicomponent Solutions 785
Mota, J.P.B. Towards the Atomistic Description of Equilibrium-Based Separation Processes. I. Isothermal Stirred-Tank Adsorber 791
Mota, J.P.B., Rodrigo, A.J.S., Esteves, I.A.A.C., Rostam-Abadi, M. Dynamic Modelling of an Adsorption Storage Tank using a Hybrid Approach Combining Computational Fluid Dynamics and Process Simulation 797
Mu, F., Venkatasubramanian, V. Online HAZOP Analysis for Abnormal Event Management of Batch Process 803
Mueller, C., Brink, A., Hupa, M. Analysis of Combustion Processes Using Computational Fluid Dynamics - A Tool and Its Application 809
Novakovic, K., Martin, E.B., Morris, A.J. Modelling of the Free Radical Polymerization of Styrene with Benzoyl Peroxide as Initiator 815
Oliveira, R. Combining First Principles Modelling and Artificial Neural Networks: a General Framework 821
Oreski, S., Zupan, J., Glavic, P. Classifying and Proposing Phase Equilibrium Methods with Trained Kohonen Neural Network 827
Paloschi, J.R. An Initialisation Algorithm to Solve Systems of Nonlinear Equations Arising from Process Simulation Problems 833
Peres, J., Oliveira, R., Feyo de Azevedo, S. Modelling Cells Reaction Kinetics with Artificial Neural Networks: A Comparison of Three Network Architectures 839
Perret, J., Thery, R., Hetreux, G., Le Lann, J.M. Object-Oriented Components for Dynamic Hybrid Simulation of a Reactive Distillation Process 845
Ponce-Ortega, J.M., Rico-Ramirez, V., Hernandez-Castro, S. Using the HSS Technique for Improving the Efficiency of the Stochastic Decomposition Algorithm 851
Pongracz, B., Szederkenyi, G., Hangos, K.M. The Effect of Algebraic Equations on the Stability of Process Systems Modelled by Differential Algebraic Equations 857
Pons, M. The CAPE-OPEN Interface Specification for Reactions Package 863
Poth, N., Brusis, D., Stichlmair, J. Rigorous Optimization of Reactive Distillation in GAMS with the Use of External Functions 869
Preisig, H.A., Westerweele, M. Effect of Time-Scale Assumptions on Process Models and Their Reconciliation 875
Repke, J.-U., Villain, O., Wozny, G. A Nonequilibrium Model for Three-Phase Distillation in a Packed Column: Modelling and Experiments 881
Roth, S., Loffler, H.-U., Wozny, G. Connecting Complex Simulations to the Internet - an Example from the Rolling Mill Industry 887
Rouzineau, D., Meyer, M., Prevost, M. Non Equilibrium Model and Experimental Validation for Reactive Distillation 893
Salgado, P.A.C., Afonso, P.A.F.N.A. Hierarchical Fuzzy Modelling by Rules Clustering. A Pilot Plant Reactor Application 899
Salmi, T., Warna, J., Mikkola, J.-P., Aumo, J., Ronnholm, M., Kuusisto, J. Residence Time Distributions from CFD in Monolith Reactors - Combination of Avant-Garde and Classical Modelling 905
Schneider, P.A., Sheehan, M.E., Brown, S.T. Modelling the Dynamics of Solids Transport in Flighted Rotary Dryers 911
Sequeira, S.E., Herrera, M., Graells, M., Puigjaner, L. On-Line Process Optimisation: Parameter Tuning for the Real Time Evolution (RTE) Approach 917
Shimizu, Y., Tanaka, Y., Kawada, A. Multi-Objective Optimization System MOON2 on the Internet 923
Singare, S., Bildea, C.S., Grievink, J. Reduced Order Dynamic Models of Reactive Absorption Processes 929
Skouras, S., Skogestad, S. Separation of Azeotropic Mixtures in Closed Batch Distillation Arrangements 935
Smolianski, A., Haario, H., Luukka, P. Numerical Bubble Dynamics 941
Soares, R. de P., Secchi, A.R. EMSO: A New Environment for Modelling, Simulation and Optimisation 947
Thullie, J., Kurpas, M. New Concept of Cold Feed Injection in RFR 953
Tiitinen, J. Numerical Modeling of an OK Rotor-Stator Mixing Device 959
Urbas, L., Gauss, B., Hausmanns, Ch., Wozny, G. Teaching Modelling of Chemical Processes in Higher Education using Multi-Media 965
van Wissen, M.E., Turk, A.L., Bildea, C.S., Verwater-Lukszo, Z. Modeling of a Batch Process Based upon Safety Constraints 971
Virkki-Hatakka, T., Rong, B.-G., Cziner, K., Hurme, M., Kraslawski, A., Turunen, I. Modeling at Different Stages of Process Life-Cycle 977
Yang, G., Louhi-Kultanen, M., Kallas, J. The CFD Simulation of Temperature Control in a Batch Mixing Tank 983
Zilinskas, J., Bogle, I.D.L. On the Generalization of a Random Interval Method 989
Applications in Pulp and Paper Industry

Alexandridis, A., Sarimveis, H., Bafas, G. Adaptive Control of Continuous Pulp Digesters Based on Radial Basis Function Neural Network Models 995
Brown, D., Marechal, F., Heyen, G., Paris, J. Application of Data Reconciliation to the Simulation of System Closure Options in a Paper Deinking Process 1001
Costa, A.O.S., Biscaia Jr., E.C., Lima, E.L. Mathematical Description of the Kraft Recovery Boiler Furnace 1007
de Vaal, P.L., Sandrock, C. Implementation of a Model Based Controller on a Batch Pulp Digester for Improved Control 1013
Ghaffari, Sh., Romagnoli, J.A. Steady State and Dynamic Behaviour of Kraft Recovery Boiler 1019
Harrison, R.P., Stuart, P.R. Processing of Thermo-Mechanical Pulping Data to Enhance PCA and PLS 1025
Jernstrom, P., Westerlund, T., Isaksson, J. A Decomposition Strategy for Solving Multi-Product, Multi-Purpose Scheduling Problems in the Paper Converting Industry 1031
Masudy, M. Utilization of Dynamic Simulation at Tembec Specialty Cellulose Mill 1037
Pettersson, F., Soderman, J. Synthesis of Heat Recovery Systems in Paper Machines with Varying Design Parameters 1043
Rolandi, P.A., Romagnoli, J.A. Smart Enterprise for Pulp and Paper: Digester Modeling and Validation 1049
Silva, C.M., Biscaia Jr., E.C. Multiobjective Optimization of a Continuous Pulp Digester 1055
Soderman, J., Pettersson, F. Searching for Enhanced Energy Systems with Process Integration in Pulp and Paper Industries 1061
Virta, M.T., Wang, H., Roberts, J.C. The Performance Optimisation and Control for the Wet End System of a Fluting and Liner Board Mill 1067
Applications in Biotechnology

Acuña, G., Cubillos, F., Molin, P., Ferret, E., Perez-Correa, R. On-line Estimation of Bed Water Content and Temperature in a SSC Bioreactor Using a Modular Neural Network Model 1073
Eusebio, M.F.J., Barreiros, A.M., Fortunato, R., Reis, M.A.M., Crespo, J.G., Mota, J.P.B. On-line Monitoring and Control of a Biological Denitrification Process for Drinking-Water Treatment 1079
Horner, D.J., Bansal, P.S. The Role of CAPE in the Development of Pharmaceutical Products 1085
Kristensen, N.R., Madsen, H., Jørgensen, S.B. Developing Phenomena Models from Experimental Data 1091
Levis, A.A., Papageorgiou, L.G. Multi-Site Capacity Planning for the Pharmaceutical Industry Using Mathematical Programming 1097
Li, Q., Hua, B. A Multiagent-based System Model of Supply Chain Management for Traditional Chinese Medicine Industry 1103
Lim, A.Ch., Farid, S., Washbrook, J., Titchener-Hooker, N.J. A Tool for Modelling the Impact of Regulatory Compliance Activities on the Biomanufacturing Industry 1109
Manca, D., Rovaglio, M., Colombo, I. Modeling the Polymer Coating in Microencapsulated Active Principles 1115
Marcoulaki, E.C., Batzias, F.A. Extractant Design for Enhanced Biofuel Production through Fermentation of Cellulosic Wastes 1121
Sarkar, D., Modak, J.M. Optimisation of Fed-Batch Bioreactors Using Genetic Algorithms: Two Control Variables 1127
van Winden, W.A., Verheijen, P.J.T., Heijnen, J.J. Efficient Modeling of C-Labeling Distributions in Microorganisms 1133
Wang, F.-Sh. Fuzzy Goal Attainment Problem of a Beer Fermentation Process Using Hybrid Differential Evolution 1139
Wongso, F., Hidajat, K., Ray, A.K. Application of Multiobjective Optimization in the Design of Chiral Drug Separators based on SMB Technology 1145

Author Index 1151
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Computer Aided Biochemical Process Engineering
I.D.L. Bogle
Dept of Chemical Engineering, University College London, Torrington Place, London, WC1E 7JE
[email protected]
Abstract
The growth of the biochemical industries is heating up in Europe after not meeting the initial expectations. CAPE tools have made some impact, and progress on computer aided synthesis and design of biochemical processes is demonstrated on a process for the production of a hormone. Systems thinking is being recognised by the life science community, and to gain genuinely optimal process solutions it is necessary to design right through from product and function to metabolism and manufacturing process. The opportunities for CAPE experts to contribute to the explosion of interest in the Life Sciences are strong if we think of the 'Process' in CAPE as any process involving physical or (bio-)chemical change.
1. Introduction
The biochemical process industries have been in the news, and often the headlines, for many years now. There has been a significant impact on Chemical Engineering research in scientific fundamentals but not such a great impact on process design. In the early nineties the European Commission (EC communication, 19 April 1991) was predicting that sales of biotechnology derived products would be between €26 and €41 billion by the year 2000, a three-fold increase on sales in 1985. A recent review of global biotechnology by Ernst and Young (2002) shows that revenues in 2000 in Europe were around €10 billion ($9,872 million) but had risen by 39% for 2001 to around €13 billion ($13,733 million). So while the sector has not delivered its full promise it is clearly a growing one. Globally the USA dominates the sector with product sales of $28.5 billion in 2001 ($25.7 billion in 2000). A key difference between the USA and Europe is that in the USA the revenues of public companies dominate, while in Europe private companies produce nearly half the revenues. For public companies the rest of the world currently contributes only 5.6% of revenues. The European biotechnology industry 'is now starting to play a central role on the global stage ... enabled by a dramatically increased flow of funds into the industry' (Ernst and Young). According to the Ernst and Young report 'the European biotechnology sector is characterized by a high proportion of small early stage companies', so one of the constraining factors is a strain on private equity resources. However, the number of companies that have raised more than 20 million euros has risen from 3 in 1998 to 23 in 2001. Also, resistance to the industry has been stronger in Europe and this has resulted in tighter controls on both manufacturing and research and development. Regulatory compliance is a key issue in biochemical processes because the regulatory authorities demand well defined operating procedures to be adhered to once approved for a particular product and process.
This has significance for the CAPE community, since to have a role we must be involved early in the development process, when there is still freedom to make design and operational decisions. Much of the (Bio-)Chemical Engineering design activity aids large scale manufacturing. The modeling effort for biochemical processes is significant because of the specific characteristics of the production (typically fermentation of cells) and separation (often specific chromatographic techniques) operations. However, as we will show in this paper, there has been significant success in design and operations optimization, and with the continuing improvement in the understanding of metabolic systems, and with progress being made elsewhere in facilitating first principles modeling, there is considerable scope for improvement and for take-up of contributions from our community to assist in the development of the industry.
2. Product and Process
One of the characteristics of the industry is the rapid generation of new products. This is set to increase, particularly for medicinal products, since systematic generation based on genome and proteome information is bound to flow from the mapping of the building blocks of living matter in the genes and proteins. Many databases are now Web accessible. The most important driver for CAPE based tools is to be able to provide manufacturing solutions very rapidly based on incomplete information. However, tools can also seek to provide guidance on the correct sort of data needed from the laboratory, and on the appropriate products for the market place and for manufacturability. The biotechnology sector covers a wide range of products including medicinal products, foodstuffs, specialty chemical products, and land remediation. There is a role for engineers to play in systematic identification of the product. The generation of appropriate medicinal treatment based on pharmacological knowledge and modeling once seemed fanciful, but with better understanding of functional relationships and with the introduction of gene therapy treatments, where treatment is matched to the specific genetic information of a patient, this may well become commonplace (Bailey). It would also be appropriate for engineers to be specifying systems in which the identification of the product is tied closely in with the function as well as the manufacturability of the product, generating the product and process in a seamless manner. The same may also be true for specialty chemical products produced biologically, and of course there is progress in engineering approaches to this problem which would be directly applicable (see for example Moggridge and Cussler). Foodstuffs manufacture should also be thought of in an integrated manner, considering the functionality required of the product - taste, mouthfeel, nutritional content - in the specification of product and process. Perhaps this last area is the most difficult because of the difficulty in characterizing much of the functionality in a form that is amenable to quantitative treatment.
Biological products are normally produced in fermentation processes. The product is expressed in some type of organism and can i) be excreted and collected, ii) remain inside the cell as a soluble product, in which case the cell must be broken open and the product extracted from the broth, or iii) remain as an insoluble product, where again the cell must be broken and particulate recovery methods are also required. There are many choices to be made in the production. Many host organisms are used to express the product:
common bakers' yeast, Escherichia coli (a common bacterium), Aspergillus niger (a filamentous fungus) and mammalian cells are commonly used. If the product is expressed naturally by the organism (such as ethanol by yeast) then there are many choices of strain of the organism to be made, each strain having different levels of expression of the product. But with the advent of genetic engineering it became possible to modify a host organism such that it can produce non-native products. So now the choice is of both host and strain, constrained by limitations of the genetic engineering procedures. It is also possible now to define or modify the expression system within the organisms to optimise production. The whole area of Metabolic Engineering is ripe for greater input from Chemical Engineers and from the CAPE community, and Stephanopoulos provides many valuable insights. The topic deserves a review paper of its own, but it is worthwhile touching on some of the contributions of our community to this problem. The metabolism is usually represented as a network of well defined chemical reactions, although this itself is a simplification as there are other links through the enzymatic catalysts present and in the consumption and production of ATP, the main energy transport agent. The CAPE community has much experience of the simulation and optimisation of networks. In the late 60s a general non-linear representation of integrated metabolic systems was first suggested (Savageau) using a power law approximation for the kinetics. Using this formulation it is possible to identify rate controlling steps and bottlenecks, and to optimise the production of individual metabolites. This could be effected practically by changing the environment around the cells or by genetic modification. The optimisation problem can be transformed into a linear programming problem (Voit; Regan et al.) or posed as an MILP problem (Hatzimanikatis et al.) where the network structure can also be modified, for example by deactivating paths. The extent to which uncertainty can be directly incorporated has also been tackled (Petkov and Maranas), addressing how likely it is to achieve a particular metabolic objective without exceeding physiological constraints that have been defined probabilistically. Clearly this is a very fertile area, but one where the work must be done in close collaboration with biological scientists, since there are many practical issues which limit the modifications that can be made to any metabolic system. It is also possible to assist in the exploration of more fundamental biological problems to which the metabolic pathway holds the key. One example is that pathway proximity gives a measure of how well connected each metabolite is, thus providing us with an objective criterion for determining the most important intermediates of metabolism, and this can be formulated as an LP problem (Simeonidis et al.). We can expect to see significant opportunities for our community in the area of aiding biological understanding in the future.
The ability to manufacture a new product also depends on the ability to purify the product. So the choices can also extend to expressing a product which can easily be separated and then, if necessary, chemically modified to produce the final useful product. One example of this will be discussed later in the paper. So in the manufacture of a new product there are many choices open to the engineers, and opportunities to provide CAPE tools to facilitate these thought processes.
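To illustrate the linear programming formulation mentioned above, the following is a minimal sketch (not the formulation of Regan et al. or Voit): production of a target metabolite is maximised subject to a steady-state stoichiometric balance Sv = 0 and flux bounds. The toy two-metabolite network and all numerical bounds are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v0 (substrate uptake -> A), v1 (A -> B), v2 (A -> byproduct),
# v3 (B -> excreted product). Rows of S balance internal metabolites A and B.
S = np.array([
    [1.0, -1.0, -1.0,  0.0],   # steady-state balance on A
    [0.0,  1.0,  0.0, -1.0],   # steady-state balance on B
])

c = np.array([0.0, 0.0, 0.0, -1.0])          # maximise v3: linprog minimises
bounds = [(0.0, 5.0)] + [(0.0, 10.0)] * 3    # assumed uptake limit and flux caps

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)              # carbon routed A -> B -> product
print("maximum product flux:", -res.fun)
```

Deactivating paths, as in the MILP formulation of Hatzimanikatis et al., would amount to adding binary on/off variables on selected fluxes.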
Some of these choices are shown in fig 1: choices about genetic and metabolic networks, about host organism and type of expression system, and about manufacturing process design,
along with the criteria that influence them: product function and effectiveness (often related to purity) as well as cost, safety and environmental impact (now also with genetic engineering implications). First principles modeling of the entire design process is decades away, so the integration of data - both the employment of existing data and the ability to highlight important data that, if collected, would significantly enhance the decision making process - is critical. Culturally, close interactions with experimentalists are also essential, since there is still a significant divide between the life science and the engineering communities, with considerable scepticism of the role of computational techniques. However, recently there has been a much wider recognition of the need for quantitative methods in biology (Chicurel), and so we can be confident of a more receptive response to engineering methods in the future.
Fig 1. Product and process decision making.
3. Process Synthesis
We know that it is important to take a systems view of process design. Ideally this includes all the production and separation aspects and allows simultaneous treatment of structural and continuous decisions. It should also permit the use of various criteria which must be juggled - economic, technical, environmental and so on. In the following sections I summarise some work we have done on synthesis and design of biochemical manufacturing processes tackling some of these aspects. We are still a long way from comprehensive solutions. In this section I discuss approaches to considering structural flowsheet choices, and in the following one choices where the flowsheet has been established and operating conditions are being optimised. The synthesis task here involves a wide range of alternative unit operations, and recycle is rare. The problem is often simplified by application of rough guidelines or heuristics, which can be used alone or to simplify the computational task. Leser et al. presented the use of a rule based expert system for biochemical process synthesis. In practice an enormous number of heuristics are in common use. Much can be achieved in the early stages of conceptual process design using simple models that encapsulate the key mechanism driving the separation processes. Critical choices between alternatives can be made without having to develop complete simulations. The number of possible configurations increases exponentially as the number of types of separators to be considered increases. The total number of configurations for most practical problems is so large that an exhaustive search is computationally not practical. Clearly, because of this explosion, it is necessary to use a computationally simple evaluation scheme combined with simple heuristics.
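The scale of this combinatorial explosion can be made concrete with the classical count of sharp-split separation sequences (a standard result in the process synthesis literature, not specific to this work); the choice of four candidate techniques below is an arbitrary illustration.

```python
from math import comb

def n_sequences(n_products: int, n_techniques: int = 1) -> int:
    """Number of sharp-split separation sequences for n_products components,
    where each of the n_products - 1 separators may use any of n_techniques
    candidate technologies."""
    n = n_products
    catalan = comb(2 * (n - 1), n - 1) // n   # distinct binary split orders
    return catalan * n_techniques ** (n - 1)

for n in (3, 5, 7):
    print(n, n_sequences(n), n_sequences(n, 4))
# 7 products with 4 candidate techniques already gives 132 * 4**6 = 540,672
```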
Jaksland et al. (1995) developed a property based approach to the synthesis of chemical processes which is based on utilising driving forces to characterise the effectiveness of alternative unit operations. Each separation process exploits specific property differences to facilitate purification of the various components in a stream. The key driving force, and corresponding key property, utilised by each technology is identified. Table 1 summarizes properties of several downstream purification operations used in biochemical processes (proposed values of feasibility indices can be found in Steffens et al. 2000a). The approach relies on estimates of the physical properties of the components in the system. While the possibility of predicting all the properties of biochemical systems is still a long way off, recent developments in the field for large molecules (polymers) and electrolytic solutions provide encouragement. The extensive UNIFAC group and parameter database of Hansen et al. (1991) was applied to describe activity coefficients of amino acids and peptides using the UNIFAC model. It was demonstrated that the group definition is not appropriate for peptides and therefore proteins. There is considerable research activity going on, but it is expected that for the time being the synthesis procedure will be based on measured properties for the system in question, using data from the most similar system where information is not available. It will of course be necessary to build up a database of data for relevant products and processes, but it is hoped that the synthesis procedure will help to guide the experimentation process.

Table 1. Separation processes and key properties (x is the particle or molecular diameter that can be handled).

Unit operation                Physical property        Phase   Notes                 Solids
Centrifugation                Density                  S/L     0.1 < x < 100 μm      Y
Sedimentation                 Density                  S/L     x > 1 μm              Y
Conventional filtration       Particle size            S/L     x > 10 μm             Y
Rotary drum filtration        Particle size            S/L     x > 10 μm             Y
Microfiltration               Particle size            S/L     0.05 < x < 5 μm       Y
Microfiltration               Molecular size           L/L     0.05 < x < 5 μm       Y
Ultrafiltration               Molecular size           L/L     0.001 < x < 0.2 μm    Y
Diafiltration                 Molecular size           L/L     0.001 < x < 0.2 μm    Y
Precipitation                 Solubility               L/L     0.051 < x < 10 μm     Y
Two liquid phase separation   Partition coefficient    L/L                           Y
Ion exchange                  Charge density           L/L                           N
Affinity chromatography       Biospecific attraction   L/L                           N
Gel chromatography            Log(mol. wt.)            L/L                           N
Hydrophobic int. chromatog.   Hydrophobicity           L/L                           N
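The screening idea behind Table 1 can be sketched as follows: for a given product/contaminant pair, compute the relative difference in each technology's key property and keep the technologies whose driving force exceeds a feasibility threshold. This is a hedged illustration only; the property values are made up and the 0.2 threshold is an assumption (the actual feasibility indices are those of Steffens et al. 2000a).

```python
# Key property exploited by each candidate technology (subset of Table 1).
KEY_PROPERTY = {
    "centrifugation": "density",
    "ultrafiltration": "molecular_size",
    "ion_exchange": "charge_density",
    "hydrophobic_chromatography": "hydrophobicity",
}

def driving_force(p1: float, p2: float) -> float:
    """Relative difference in a key property between two components."""
    return abs(p1 - p2) / max(abs(p1), abs(p2))

def rank_separations(product: dict, contaminant: dict, threshold: float = 0.2):
    """Technologies whose key-property difference exceeds the threshold,
    largest driving force first."""
    ranked = [(driving_force(product[p], contaminant[p]), tech)
              for tech, p in KEY_PROPERTY.items()
              if p in product and p in contaminant]
    return sorted([r for r in ranked if r[0] >= threshold], reverse=True)

# Illustrative (made-up) properties for a protein product vs cell debris.
protein = {"density": 1.3, "molecular_size": 5e-9,
           "charge_density": -0.8, "hydrophobicity": 0.4}
debris = {"density": 1.1, "molecular_size": 5e-7,
          "charge_density": -0.1, "hydrophobicity": 0.3}
print(rank_separations(protein, debris))
# -> molecular size and charge dominate: ultrafiltration or ion exchange
```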
The synthesis methodology that we have produced (Steffens et al. 2000a) generates and searches the search tree with an implicit enumeration method implemented in the Jacaranda software (Fraga). The screening methodology uses the simple characterizations of the separations. Complex models can be included because the search procedure imposes no requirements on the models' convexity and continuity. Stream characteristics and unit design parameters are discretized, hence all searching is done on discrete variables, leading to a discrete optimization problem. The algorithm was used to generate alternatives for the manufacture of an animal growth hormone, bovine somatotropin or BST. This product is expressed as an insoluble
granule in E. coli, and so lessons can be learned for a range of similar expression systems (including one method for the production of synthetic insulin). Figure 2 shows an existing industrial flowsheet for the manufacture of the product, and figure 3 shows the optimal flowsheet as generated by the synthesis software. The basic structure of the two flowsheets is the same, but Jacaranda suggested that the cell concentration step use microfiltration instead of a centrifuge, which is certainly a feasible option. Results for an intracellular product also produced flowsheets similar to those used in practice (Steffens et al. 2000a).
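The search strategy can be illustrated with a small sketch of implicit enumeration over discretised alternatives: a depth-first walk assigns a unit to each processing task and prunes any branch whose accumulated cost already exceeds the best complete flowsheet found. This is not Jacaranda itself; the task list and unit costs are made-up placeholders.

```python
from typing import Dict, List, Tuple

# Candidate units per task, with assumed (illustrative) costs.
OPTIONS: Dict[str, List[Tuple[str, float]]] = {
    "cell_harvest":    [("centrifuge", 120.0), ("microfiltration", 95.0)],
    "cell_disruption": [("homogeniser", 80.0)],
    "debris_removal":  [("centrifuge", 110.0), ("microfiltration", 100.0)],
    "purification":    [("ion_exchange", 150.0), ("hic_chromatography", 170.0)],
}
TASKS = list(OPTIONS)

def best_flowsheet():
    best = [float("inf"), None]          # [cost, structure]
    def dfs(i: int, chosen: tuple, cost: float) -> None:
        if cost >= best[0]:              # prune: cannot beat the incumbent
            return
        if i == len(TASKS):              # complete flowsheet found
            best[0], best[1] = cost, chosen
            return
        for unit, unit_cost in OPTIONS[TASKS[i]]:
            dfs(i + 1, chosen + (unit,), cost + unit_cost)
    dfs(0, (), 0.0)
    return best

print(best_flowsheet())   # cheapest structure under these toy costs
```

Returning a ranked list of solutions, as Jacaranda does, would amount to keeping the k best completed branches rather than a single incumbent.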
Fig 2. Industrial flowsheet for the purification of BST (dotted area shows final polishing operations). [Flowsheet diagram: fermentation, centrifugation, homogenisation, renaturation, and concentration and diafiltration, followed by final polishing by microfiltration, hydrophobic interaction chromatography and anion exchange.]

A powerful advantage of the synthesis methodology is the ability to revise and improve the optimization criteria (Fraga). The synthesis procedure was also posed as a multicriteria optimization problem balancing economic and environmental criteria (Steffens et al. 1999). Here we were able to consider a range of environmental criteria including the life cycle cost (LCA), the critical water mass (CWTM), and the sustainable process index, which gives the land mass area required to embed a process sustainably into the environment. We considered the manufacture of penicillin, for which a number of processes have been patented. Jacaranda provides a ranked list of best solutions (as many as requested), a very useful feature for process synthesis since some options which may appear to be solutions could have obvious practical problems. Again most of the best solutions were very similar to the industrial flowsheets, which include rotary drum filtration, solvent extraction, crystallization, filtration and drying. However, when LCA or CWTM environmental criteria are used the solvent extraction process is replaced by cross flow filtration and ion exchange, with other potential small changes. This is an expensive option but has clear environmental advantages.
The synthesis problem can easily be modified by the user to meet the business priorities of a company and quickly generate alternatives for further study.

Fig 3. Cost optimal flowsheet for BST manufacture generated by Jacaranda. [Flowsheet diagram: fermenter, microfiltration (concentration), high pressure homogenisation, centrifugation, concentration/diafiltration, denaturing and refolding tank (guanidine HCl), hydrophobic interaction chromatography, diafiltration, ultrafiltration (concentration), anion exchange (pH > 8) and product tank; rejected streams carry water plus soluble medium components, cell debris, cell contents and guanidine HCl.]

Up to this point the system boundary has excluded the production (cellular fermentation) operation. However, if we extend the systems thinking to include this stage, further improvements can be made to biochemical processes by changing the product expressed in the fermentation stage into a product that is easily purified. This is done, mostly at bench scale, by adding amino acids to the protein as a tail which will easily bind to a chromatographic adsorption resin. There are twenty amino acids to choose from and any number can be chosen, so again there is a potentially large combinatorial search. We chose to limit the search to a maximum of fifteen amino acids and to base the search on the use of two properties used in adsorption: charge and hydrophobicity (Steffens et al., 2000b). There are approximate methods for predicting these properties. Our procedure searches for the best solution including both production and purification processes, assuming that genetic engineering and final tail cleavage costs are the same in all cases. The best three solutions (for BST) were indistinguishable in cost terms, but all required the addition of between thirteen and fifteen amino acids and would result in more than halving the manufacturing costs by replacing the section in the dotted box in figure three by one cation exchange column and a cleavage step. These results have not been practically tested but aim to show the potential of using systematic synthesis techniques to suggest ways of searching the huge number of alternatives on offer to the developers of manufacturing processes for new products.
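A hedged sketch of the tag-design idea follows: candidate amino-acid tails are enumerated and scored on the two adsorption-relevant properties named above. The residue subset, the property values (approximate charge near neutral pH and hydropathy) and the scoring rule are illustrative assumptions, not the cost model of Steffens et al. (2000b).

```python
from itertools import product

# Approximate residue properties: (charge near pH 7, hydropathy).
RESIDUES = {
    "D": (-1.0, -3.5), "E": (-1.0, -3.5),   # acidic
    "K": (+1.0, -3.9), "R": (+1.0, -4.5),   # basic
    "F": ( 0.0, +2.8), "L": ( 0.0, +3.8),   # hydrophobic
}

def tag_score(tag) -> float:
    """Assumed rule: a strong net charge favours ion-exchange capture."""
    return abs(sum(RESIDUES[a][0] for a in tag))

def best_tags(max_len: int = 6, top: int = 3):
    scored = [("".join(t), tag_score(t))
              for n in range(1, max_len + 1)
              for t in product(RESIDUES, repeat=n)]
    return sorted(scored, key=lambda s: -s[1])[:top]

print(best_tags())   # all-acidic or all-basic tails of maximal length win
```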
4. Process Design
A number of tools have been used for process design once the flowsheet has been established. Bio-Pro Designer (Petrides) has been proposed for this and enables users to put together simple flowsheets for basic sizing and material balancing. However, for incorporating more complex models of unit operations we have favoured the use of modelling tools where mechanistic models of separation tasks can be incorporated. Other workers have developed approaches for incorporating business decision making explicitly into the design process, developing windows of operation which can be explored (Zhou et al.). We developed a full gPROMS simulation and gOPT optimisation of the BST manufacturing process. Obtaining the greatest inclusion body (IB) recovery and the highest product purity are conflicting demands: operating the centrifuge to yield maximal recovery will result in poor purity, and these trade-offs are described in more detail in Bogle and Graf. Of course only one of the objectives can be optimised at a time, but it is of particular interest what levels of recovery, purity, and productivity can be achieved when the entire process is considered simultaneously. The trade-offs in the multi-criteria optimisation problem can then be explored. Optimisation variables for the process are given in table 2. For the optimisation the fermentation time and the initial fermentation volume have been set. The results of the optimisations are given in table 3. The computational optimisation times for each run were typically 3-4 min on an RS6000.

Table 2. Overview of the optimisation variables used.

Process unit           Optimisation variable          Initial guess   Lower bound   Upper bound
Fermenter              Volumetric throughput (L/h)    1300            1200          10000
Harvester centrifuge   Settling area (m²)             155000          10000         250000
Homogeniser            Number of passes (-)           3               1             3
Centrifuge-mixer       Dilution rate (-)              3.1             1.1           4
Separator centrifuge   Volumetric throughput (L/h)    1400            1200          10000
Separator centrifuge   Settling area (m²)             105785          10000         250000
Since a desirable target is to maximise the IB recovery, product purity, and productivity, a solution has to be found which offers an acceptable compromise. The optimisation strategy was changed to include structural decisions, and a batchwise recycling of the sediment of the last centrifuge was proposed. The results of the suggested procedure are listed in the final column of table 3. The results show that high levels of all three objectives are achievable: after the second batch, a high purity of 94.6% is reached with an acceptable IB recovery of 83.6% and a productivity of 0.95 kg BST/h, which guarantees efficiency of the plant.
Table 3: Results for the maximisation of the IB recovery, purity, productivity, and for an solution where the centrifuge outlet is recycled and reprocessed.
                              Maximum     Maximum   Maximum        Overall opt.
                              recovery    purity    productivity   (before/after recycle)
Overall IB recovery (%)       95          12.7      93.5           90.7/83.6
Purity (%)                    73.8        92.1      55.7           80.9/94.6
Productivity (kg BST/h)       1.021       0.163     1.16           1.08/0.95
Overall process time (h)      37.3        30.6      32.7           33.6/35.3
Final process volume (l)      1304        366       815            1003/370
Recovered mass of BST (kg)    38.1        5.1       37.9           36.2/33.5
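A simple way to make the trade-off between the three objectives explicit is a non-dominated (Pareto) filter over the solutions in Table 3. The short check below is illustrative only, using the reported values:

```python
# Pareto (non-dominated) check over (IB recovery %, purity %,
# productivity kg BST/h) for the four solutions reported in Table 3.
solutions = {
    "max recovery":      (95.0, 73.8, 1.021),
    "max purity":        (12.7, 92.1, 0.163),
    "max productivity":  (93.5, 55.7, 1.16),
    "recycle (batch 2)": (83.6, 94.6, 0.95),
}

def dominates(a, b):
    """a dominates b if at least as good everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pareto = [name for name, v in solutions.items()
          if not any(dominates(w, v) for w in solutions.values() if w != v)]
print(pareto)  # the recycle compromise dominates the pure-purity run
               # while remaining non-dominated itself
```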
5. New Challenges for Computer Aided Process Engineering
The increasing emphasis on quantitative biology means there will be an increasing role for CAPE specialists. There will be opportunities in the design of biochemical products and processes where the complexity is great. The need for a systems view is becoming clear to all, and will increase as medicinal products become 'multicomponent'; these can have much greater activity than single active ingredients (Bailey, 1999). Our systems view will include function, although for medicinal products this challenge is a tough one, as it requires much more accurate models of living systems. The links to experimentalists and to other disciplines, and our ability to help guide useful data collection, become ever more important. I believe that we have a significant role to play in wider biological problems; that is, if we consider the 'Process' in CAPE to be not just about large scale manufacturing processes but an entry to any process where physical, chemical and/or biological processes are taking place and the complexity is such that systems approaches are needed. An example project that I am involved in is the modelling of networks from gene through to organ (the liver), building on modelling work being done in a range of Life Science Departments at UCL (http://www.ucl.ac.uk/complex/research/DTI.html). There are huge opportunities for our community to work with a wide range of disciplines and to help tackle some big challenges, not just in the discovery and manufacture of biochemical products but also in some of the biggest problems in this century of biology.
6. References
Bailey, J.E. (1999) Lessons from metabolic engineering for functional genomics and drug discovery. Nature Biotechnology, 17, 616-618.
Bogle, I.D.L., Graf, H. (1997) Simulation as a tool for at-line prediction and control of biochemical processes. In Proc. ECCE1, Florence, May 1997, 1933-1936.
Chicurel, M. (2000) Mathematical biology: Life is a game of numbers. Nature, 408, 900-901.
Ernst & Young (2002) Beyond Borders, The Global Biotechnology Report.
Fraga, E.S. (1998) The generation and use of partial solutions in process synthesis. Chem. Eng. Res. Des., 76(A1), 45-54.
Hansen, H.K., Rasmussen, P., Fredenslund, A., Schiller, M., Gmehling, J. (1991) Vapour liquid equilibria by UNIFAC group contribution. 5. Revision and extension. Ind. Eng. Chem. Res., 30, 2352.
Hatzimanikatis, V., Floudas, C.A., Bailey, J.E. (1996) Analysis and design of metabolic reaction networks via mixed integer linear optimisation. AIChE J., 42, 1277-1292.
Jaksland, C.A., Gani, R., Lien, K.M. (1995) Separation process design and synthesis based on thermodynamic insights. Chem. Eng. Sci., 50(3), 511-530.
Leser, E.W., Lienqueo, M.E., Asenjo, J.A. (1996) Implementation in an expert system of a selection and synthesis of multistep protein separation processes for recombinant proteins. Ann. NY Acad. Sci., 782, 441-455.
Moggridge, G.D., Cussler, E.L. (2000) An introduction to chemical product design. Chem. Eng. Res. Des., 78(A1), 5-11.
Petkov, S.B., Maranas, C.D. (1997) Quantitative assessment of uncertainty in the optimisation of metabolic pathways. Biotech. Bioeng., 56/2, 145-161.
Petrides, D.P. (1994) Biopro Designer - an advanced computing environment for modelling and design of integrated biochemical processes. Comput. Chem. Engng., 18, S621-S625.
Regan, L., Bogle, I.D.L., Dunnill, P. (1993) Simulation and optimisation of metabolic pathways. Comput. Chem. Engng., 17, 627-637.
Savageau, M.A. (1969) Biochemical systems analysis. 1. Some mathematical properties of the rate law for the component enzymatic reactions. J. Theor. Biol., 25, 370-379.
Simeonidis, E., Rison, S.C.G., Thornton, J.M., Bogle, I.D.L., Papageorgiou, L.G. (2003) Analysis of biochemical networks using a pathway distance metric through linear programming. Submitted to Metabolic Engineering.
Steffens, M., Fraga, E.S., Bogle, I.D.L. (1999) Multicriteria process synthesis for generating sustainable and economic bioprocesses. Comput. Chem. Engng., 23, 1455-1467.
Steffens, M.A., Fraga, E.S., Bogle, I.D.L. (2000a) Synthesis of downstream purification processes using physical properties data. Biotech. Bioeng., 68/2, 218-230.
Steffens, M.A., Fraga, E.S., Bogle, I.D.L. (2000b) Synthesis of purification tags for optimal downstream processing. Comput. Chem. Engng., 24, 717-720.
Stephanopoulos, G. (2002) Metabolic engineering: Perspective of a chemical engineer. AIChE J., 48/5, 920-926.
Voit, E.O. (1992) Optimization of integrated biochemical systems. Biotech. Bioeng., 40, 572-582.
Zhou, Y.H., Titchener-Hooker, N.J. (1999) Visualizing integrated bioprocess designs through "windows of operation". Biotechnol. Bioeng., 65(5), 550-557.
7. Acknowledgements
I would like to acknowledge the many colleagues in Chemical Engineering at UCL whose work over many years made this paper possible: Eric Fraga, Lazaros Papageorgiou, Marc Steffens, Vangelis Simeonides, David Johnson, Sabine Agena, and Holger Graf; and the members of CoMPLEX (Centre for Mathematics and Physics in the Life Sciences and Experimental Biology) at UCL.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
A Two-stage Optimisation Approach to the Design of Water-using Systems in Process Plants
Solomon Abebe^, Zhigang Shang^ and Antonis Kokossis^
^Department of Process & Systems Engineering, Cranfield University, Cranfield, MK43 0AL, UK
^Department of Chemical Engineering & Process Engineering, University of Surrey, Guildford, Surrey GU2 5XH, UK
Abstract
The minimisation of water and wastewater flows is an objective of increasing value in industry. The paper presents a systematic optimisation approach for the design of water-using systems in process plants with minimum water usage and capital investment. The proposed approach combines the benefits of water-pinch concepts and mathematical programming techniques. It follows a two-stage procedure, namely a targeting stage and a network design stage. Both stages rely on the optimisation of a superstructure model with all possible connections between (i) freshwater sources and operations, and (ii) different water-using operations. The approach handles the inherent combinatorial nature of the design problem and solves it efficiently.
1. Introduction
The design and analysis of water-using systems has received much attention from the research community since the 1970s. As the social awareness of the environmental problems caused by extensive industrialisation increases, the need for decision support tools to aid in minimising pollution has been accentuated. Wastewater is generated by processes as a byproduct of reaction, and when water comes into contact with process materials in mass transfer and extraction operations, direct-contact heat transfer and steam ejectors. Wastewater is also generated by utility systems, from boiler feedwater treatment processes, boiler blowdown, cooling tower blowdown, etc. (Nemerow, 1971). A number of approaches have been reported for the optimal design of water-using systems using optimisation techniques. One of the earliest works on the application of optimisation techniques to regional water-using systems is that of Bishop and Hendricks (1971). The authors approached the design task as a transshipment problem and used linear programming techniques to obtain solutions. Takama et al. (1980) used mathematical programming to solve a refinery example. These authors addressed the problem of water management as a combination of water allocation among processes and wastewater distribution to cleanup units. More recently, Alva-Argaez et al. (1999) and Huang et al. (1999) presented MINLP or NLP models. Bagajewicz (2000) presented necessary conditions of optimality for water-using systems. These conditions are later used in an algorithmic design procedure.
A targeting approach known as water pinch technology was developed by Wang and Smith (1994) for water and wastewater minimisation in the process industries. This technology is conceptually based and represents a systematic tool for the design of water networks. It can effectively identify optimal water reuse solutions. The method is based on two assumptions:
• a constant pollutant load is picked up in each process;
• maximum inlet and outlet concentrations apply in each process.
Wang and Smith (1994) introduced a water use representation for a water-using operation, which consists of a concentration versus mass load diagram. Based on this representation, every water-using operation can be described with the concept of a limiting water profile. The water profile with maximum inlet and outlet concentrations is defined as the limiting water profile. Once the limiting water profile for every water-using operation has been obtained, a limiting composite curve can be constructed by combining all the individual profiles into a single composite curve, as shown in Figure 1 for four processes. The water supply line touches the composite curve at the pinch point and determines the minimum overall water consumption.
Figure 1. Limiting composite curve construction.
Here we present a two-stage optimisation strategy for the design of water-using systems which makes use of the insights provided by water pinch analysis and combines them with mathematical programming tools. Both capital cost and freshwater consumption are minimised.
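For the single-contaminant case the pinch target itself is easy to compute from the limiting data. The sketch below is a minimal illustration, assuming freshwater enters at 0 ppm; it simply takes the largest ratio of cumulative limiting load to concentration over the candidate pinch levels:

```python
# Minimal single-contaminant freshwater targeting from limiting data,
# assuming freshwater at 0 ppm. The supply line f*C must stay below the
# limiting composite curve, so the target is the maximum over candidate
# pinch concentrations of (cumulative limiting load)/C.
def freshwater_target(limiting_data):
    """limiting_data: iterable of (mass_load_kg_h, c_in_ppm, c_out_ppm)."""
    levels = sorted({c for _, ci, co in limiting_data for c in (ci, co)})
    target = 0.0
    for C in levels:
        if C == 0:
            continue
        # limiting load picked up below concentration C (pro rata per unit)
        load = sum(m * (min(C, co) - min(C, ci)) / (co - ci)
                   for m, ci, co in limiting_data)
        target = max(target, 1000.0 * load / C)  # kg/h over ppm -> te/h
    return target

# e.g. a single operation picking up 2 kg/h between 25 and 80 ppm needs
# 1000*2/80 = 25 te/h of freshwater:
print(freshwater_target([(2.0, 25.0, 80.0)]))
```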
2. The Two-stage Optimisation Approach
Pinch technology is efficient when the number of operations is small and a single contaminant is involved. However, when the number of operations escalates, the pinch method becomes impractical. In this section a new automated methodology for the design of water-using systems is described which makes use of the insights provided by water pinch analysis and combines them with mathematical programming tools. The methodology tackles not only problems with many operations but also problems with multiple contaminants and problems with multiple freshwater
sources. The methodology overcomes the limitations that are encountered when using the pinch method. The design of a water-using system involves water-related elements in the form of a number of freshwater sources available to satisfy the demand, and water-using operations described by loads of contaminants and concentration levels. The design task is to find the network configuration that minimises the overall demand for freshwater (and consequently reduces wastewater volume) compatible with minimum total annual cost. The investment cost of the network includes piping costs and the cost of mass exchanger units. The approximate length of the pipe can be specified for each possible connection together with the materials of construction. The cost of mass exchange units assumes thermodynamic parameters and equilibrium relationships between the process streams and the water streams for the key contaminants, as well as the corresponding design equations and cost functions. The proposed approach follows a two-stage procedure, namely a targeting stage and a network design stage. Both stages rely on the optimisation of a superstructure model with all possible connections between (i) freshwater sources and operations, and (ii) different water-using operations. An example of a superstructure with three freshwater sources and three water-using operations is shown in Figure 2. An important feature of the superstructure is that the process streams in the mass exchange operations are considered implicitly through the construction of the limiting water profiles.
Figure 2. Superstructure representation.
Discrete options in the superstructure account for the connections between sources and sinks of water and determine the units existing in the water network. The superstructure is developed so that:
• each freshwater stream entering the network is split amongst the water-using operations;
• each operation is preceded by a mixer fed by streams from the freshwater splitters and re-use streams emanating from the outlets of all other operations;
• each operation is followed by a splitter that feeds the final mixer and the other operations in the system.
The objective of the targeting stage is to determine the minimum freshwater cost in a water-using system. The problem is formulated as an NLP model. Once the target is obtained, there are often many network alternatives with the same minimum target. The objective of the network design stage is to obtain, from among these networks, one that has minimum capital cost, since this will usually correspond to an optimal solution. At this stage it is assumed that the minimum freshwater cost and freshwater flowrates
from different sources have been determined at the first stage. An MINLP model is formulated to select the most cost-effective water-using system by minimising the capital cost. The capital cost in the objective function consists of the piping costs and the cost of mass exchanger units, specified as described above. As illustrated by the case studies, both the NLP model and the MINLP model can be linearised by including constraints representing necessary conditions of optimality for water-using systems.
3. Case Study
The capabilities of the proposed approach have been illustrated with a case study. In this case ten water-using operations with a single contaminant are considered. The objective is to design a suitable network with the minimum capital cost and minimum freshwater consumption. Table 1 shows the limiting data for the ten-process problem. There is one fixed source of freshwater and ten fixed operations. Each water user is described by the maximum allowed inlet and outlet concentrations and the individual mass load of contaminant, or by the limiting flow rate.

Table 1. Limiting data for a ten-process problem (Bagajewicz, 2000).

Process   Mass load of contaminant (kg/hr)   Cin,max (ppm)   Cout,max (ppm)   Minimum freshwater without reuse (te/hr)
1         2.00                               25              80               25.00
2         2.88                               25              90               32.00
3         4.00                               25              200              20.00
4         3.00                               50              100              30.00
5         30.00                              50              800              37.50
6         5.00                               400             800              6.25
7         2.00                               400             600              3.33
8         1.00                               0               100              10.00
9         10.00                              50              300              66.67
10        6.50                               150             300              21.67
Total minimum flow rate (te/hr)                                               252.42
The problem has been solved in the following two stages.
Stage 1: targeting the minimum freshwater flow rate. The problem has been formulated as an LP model using the General Algebraic Modeling System (GAMS). The total minimum freshwater flow rate was found to be 165.953 te/hr, a saving in freshwater consumption of over 34%. The resulting network design is given in Figure 3. As can be seen, the network contains some impractical interconnections.
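The GAMS source is not reproduced here; the fragment below is a rough, hedged sketch of one way such a targeting model can be linearised, assuming outlet concentrations are fixed at their maximum values (one of the necessary conditions of optimality cited above), which removes the bilinear mixing terms. Function and variable names are ours, not taken from the paper.

```python
# Hedged sketch of a linearised Stage-1 targeting model: with outlet
# concentrations fixed at their maxima, freshwater minimisation becomes
# an LP in the freshwater, wastewater and reuse flows (self-reuse excluded).
import numpy as np
from scipy.optimize import linprog

def target_freshwater(load, cin_max, cout_max):
    """load in kg/hr, concentrations in ppm, flows in te/hr."""
    n = len(load)
    nv = 2 * n + n * n      # FW (n), WW (n), reuse R[j -> i] (n*n)
    fw = lambda i: i
    ww = lambda i: n + i
    r = lambda j, i: 2 * n + j * n + i

    A_eq, b_eq, A_ub, b_ub = [], [], [], []
    for i in range(n):
        # Water balance: FW_i + sum_j R_ji - WW_i - sum_k R_ik = 0
        row = np.zeros(nv)
        row[fw(i)], row[ww(i)] = 1.0, -1.0
        for j in range(n):
            if j != i:
                row[r(j, i)] += 1.0
                row[r(i, j)] -= 1.0
        A_eq.append(row); b_eq.append(0.0)

        # Contaminant pickup at the fixed outlet concentration:
        # cout_i*(FW_i + sum_j R_ji) - sum_j R_ji*cout_j = 1000*load_i
        row = np.zeros(nv)
        row[fw(i)] = cout_max[i]
        for j in range(n):
            if j != i:
                row[r(j, i)] = cout_max[i] - cout_max[j]
        A_eq.append(row); b_eq.append(1000.0 * load[i])

        # Inlet limit: sum_j R_ji*cout_j <= cin_i*(FW_i + sum_j R_ji)
        row = np.zeros(nv)
        row[fw(i)] = -cin_max[i]
        for j in range(n):
            if j != i:
                row[r(j, i)] = cout_max[j] - cin_max[i]
        A_ub.append(row); b_ub.append(0.0)

    c = np.zeros(nv); c[:n] = 1.0            # minimise total freshwater
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nv)
    return res.fun
```

Setting all reuse flows to zero recovers the no-reuse targeting of each operation, so the LP optimum can only improve on it.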
Figure 3. Network design with the minimum freshwater consumption.
Figure 4. Network design with the minimum capital cost and freshwater consumption.
Stage 2: minimising the capital cost. The optimisation for freshwater saving in Stage 1 resulted in a feasible network, but the network requires many interconnections, some of which may be impractical. Once the target is obtained, the most cost-effective design can be sought by minimising the total capital cost and including the minimum freshwater usage as a constraint. An MINLP model has therefore been formulated to select the most cost-effective water-using system by minimising the capital cost. Piping costs are calculated on the basis of the length of the pipe runs, flow velocity, area of the pipe, distance between operations, distance between operations and the source, and distance between operations and the wastewater treatment plant. Figure 4 shows the resulting network with the minimum capital cost and freshwater consumption. It can be seen that the number of connections in the network of Figure 4 is lower than in the network of Figure 3. The network in Figure 4 also avoids irrelevant and uneconomical connections, replacing them with economical ones. The final network design shows a reasonable flowrate to be discharged from the operations, which makes it more economical.
4. Conclusions
A methodology for designing a water system that is capable of determining the minimum freshwater usage and minimum total capital cost of the network has been developed. The approach is able to solve problems with multiple freshwater sources and multiple contaminants. The approach can be fully automated, and the results from the case study suggest that there are a number of different network configurations that can satisfy the minimum freshwater usage target. The best design should be the network which incurs the minimum capital cost and consumes the minimum freshwater.
5. References
Alva-Argaez, A., Vallianatos, A. and Kokossis, A.C., 1999, A multi-contaminant transshipment model for mass exchange networks and wastewater minimisation problems, Computers & Chemical Engineering, 23.
Bagajewicz, M.J., 2000, A review of recent design procedures for water networks in refineries and process plants, Computers & Chemical Engineering, 24.
Bishop, A.B. and Hendricks, D.W., 1971, Water reuse systems analysis, J. Sanitary Engng. Div. ASCE, Feb.
Huang, C.H., Chang, C.T., Ling, H.C. and Chang, C.C., 1999, A mathematical programming model for water usage and treatment network design, Industrial & Engineering Chemistry Research, 38(5).
Nemerow, N.L., 1971, Liquid Waste of Industry: Theories, Practices and Treatment, Addison-Wesley Publishing Company, U.S.A.
Takama, N., Kuriyama, T., Shiroko, K. and Umeda, T., 1980, Optimal water allocation in a petroleum refinery, Computers & Chemical Engineering, 4.
Wang, Y.P. and Smith, R., 1994, Wastewater minimization, Chemical Engineering Science, 49(7).
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Generation and Screening of Retrofit Alternatives Using a Systematic Indicator-Based Retrofit Analysis Method
Niels Kau Andersen, Nuria Coll, Niels Jensen, Rafiqul Gani
CAPEC, Department of Chemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark
Eric Uerdingen, Ulrich Fischer, Konrad Hungerbühler
Safety & Environmental Technology Group, Laboratory of Technical Chemistry, Swiss Federal Institute of Technology Zurich, CH-8092 Zurich, Switzerland
Abstract
In this work a new, simple yet effective, systematic method to perform retrofit design and analysis is presented. Based on a path flow analysis approach, a set of indicators is calculated in order to pinpoint unnecessary energy and material waste costs and to identify potentially improved retrofit alternatives. The method requires only steady state design data and a database with properties of compounds, including environmental impact factor related data and safety factor related data. Given a simulation report corresponding to an actual process (or base case design), the method identifies the material and energy path flows and calculates a set of indicators. The most important design variables are then identified through a sensitivity analysis in terms of their effect on the indicator values, which also defines the limiting values of these variables. The generated retrofit (design) alternatives are classified in terms of reduction targets in cost of operation, raw materials, energy consumption, environmental impact and safety. The well-known HDA process is used as a case study to highlight the main features of the retrofit analysis method.
1. Introduction
The increasing concern and focus on pollution prevention, and the never-ending quest for higher profits in order to be competitive on the market, make process optimisation a necessity. Although the use of advanced mathematical methods is an obvious solution approach, applications of such methods prove difficult and time consuming for large and complex processes. Also, these approaches depend on the availability of reliable mathematical models of the process. An alternative approach is to divide the main problem into sub-problems that are easy to solve without sacrificing the accuracy of the calculations, and where the final step involves solution of a well-defined process optimisation problem. Using available data, the indicators can identify the important design variables and the potential for improvement. The indicators, by definition, also provide estimates of improvements in terms of consumption of resources, environmental impact and safety, since they involve the same set of variables. The final results consist of a list of feasible retrofit (design) alternatives that are guaranteed to provide improvements in comparison to the reference
design. These results can also be used to derive a simpler mathematical programming problem with respect to the identified design variables, using the generated retrofit alternatives to determine a superstructure with a well-defined solution space. The objective of this paper is to present the indicator-based retrofit analysis method, starting from the analysis of the process (plant) data and ending with the generation and evaluation of the retrofit alternatives. Although the last step of deriving and solving the mathematical optimisation problem is not part of the objectives of this paper, the improvements achieved through the application of the method are highlighted through a case study. In the evaluation step, the sustainability metrics defined by the IChemE Sustainable Development Working Group (Tallis, 2002), the WAR indicators (Cabezas et al., 1999) and the inherent safety indicators (Heikkila, 1999) are considered.
2. Retrofit Analysis Method
The retrofit analysis method consists of six main steps, as outlined below. A brief description of steps 2 and 3 is given afterwards. A more detailed description of the methodology can be obtained from the corresponding author ([email protected]).
1. Obtain process (plant) steady state data.
2. Transform the process flowsheet to a process graph, and decompose this graph to identify the path flow diagrams.
3. Calculate the set of indicators (five for mass and two for energy).
4. Perform sensitivity analysis on the indicators with respect to changes in design variables (to identify the important design variables).
5. Generate retrofit (design) alternatives by changing values of the identified design variables.
6. Evaluate the generated alternatives in terms of improvements with respect to resources (mass & energy), sustainability metrics, environmental impact factors and inherent safety factors.
Step 2 (Decomposition of Flows): A process flowsheet consists of unit operations and streams combining these. Uerdingen (2002) applied graph theory to convert the flowsheet into a process graph where the process units become the vertices and the process streams become the edges that connect the vertices. This helps to identify the flow paths in terms of mass (of each component present in the system) and energy, and to classify them as supply flows, demand flows or incident flows. A supply flow is either a flow that enters the process or a flow generated in the process. Demand flows are either outlets or flows that are consumed in a reactor. Incident flows are flows that run between the unit operations. An open path is a flow of a component that enters the process in a supply and leaves in a demand. The path flows are either open, if no recycle occurs, or closed, if recycles are present in the flowsheet. For each recycle, a cycle path is defined for each component in the recycle. The energy demand can exist in two ways: energy leaving with a mass demand flow, or heat transfer.
Step 3 involves calculating the set of mass and energy indicators; the most important of these are briefly described below. These indicators, by definition, are implicitly related to the process and operation through variables that indicate how the design/operation can be improved by either decreasing or increasing their values.
Material-value added (MVA): For a given open path it is desirable to calculate the value generated from start to end point. This is done by calculating the difference between the value of the component path flows outside the process boundaries and the costs in raw material consumption or feed cost. Negative values of MVA indicate value losses and show that there is potential for improving the economic efficiency.
Energy and waste cost (EWC): The EWC indicator consists of two parts: EC considers the energy costs and WC the process waste costs associated with a given path, by allocating the utility consumption and waste treatment costs. The results indicate the maximum theoretical saving potential for a given path. High EWC values indicate high energy consumption and waste costs that could be reduced by decreasing the path flow or the duties. For component c in path flow j the equation is:
EWC_j^(c) = m_j^(c) · [ Σ(u=1..U) A_u · (PE_u · Q_u) / Σ(uk=1..UK) m_uk + (PE_m / ρ_m) · WA_j^(c) ]        (1)
A_u is an allocation factor, ρ_m is the density and WA_j is the waste allocation factor. PE_u represents the unit price of the utility Q_u. Subscript u is the sub-operations index, uk is the index of all component path flows in u, and j is the path flow index.
Reaction quality (RQ): This indicator measures the effect a component path flow may have on the reactions that occur in its path. If the RQ value is positive, the path flow has a positive effect on the overall plant productivity. Negative values indicate an undesirably located component path flow in the process.
Accumulation factor (AF): AF is a way of measuring the accumulative behavior of individual components in recycles:
AF^(c) = Σ(a=1..EN) m_a^(c) / Σ(op=1..OP) m_op^(c)        (2)
where m_a^(c) is the cycle flow rate. High values of this indicator show that there is a build-up of a component within the system, which could be caused by poor separations or low conversion in the reactive unit.
Total value added (TVA): This indicator describes the economic influence a component path flow may have on the variable process costs. Negative TVA values indicate improvement potentials in the process. Still, if a path flow has a high EWC value that is compensated by a high MVA value, giving a positive TVA value, it may still be possible to reduce the energy cost.
Energy accumulation factor (EAF): The energy accumulation factor (EAF) describes the accumulative behavior of energy in an energy cycle path flow. Since it is of interest to recycle or recover energy, these factors should be as large as possible in order to save energy. The energy accumulation factor can be calculated as:
EAF^(ec) = Σ(a=1..EN) Σ(i=1..I) eh_i^(ec) / Σ(op=1..OP) eh_op        (3)
where I is the total number of vertices encountered in the energy cycle path flow and i is the index of these; ec is the index of the cycle energy path flows; eh_i is the amount of energy recycled in the particular recycle.
Steps 4-6: Once the indicators have been calculated, they are included in a sensitivity analysis in order to pinpoint areas of the process with good improvement potential and to identify the corresponding set of design variables that may be changed in order to achieve the improvement. The generated retrofit alternatives are evaluated in terms of various performance indices (mass/energy consumption, sustainability metrics, environmental impact factors and safety factors). A process simulation needs to be performed for each alternative. Since the generated retrofit alternatives are usually relatively small changes to the original design, and a reference flowsheet simulation already exists, these additional simulations are not difficult to make.
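As an illustration of the Step 2 decomposition, the sketch below treats a small flowsheet as a directed graph and enumerates the open paths and cycles for one component. The toy reactor-separator flowsheet and the use of the networkx library are assumptions made for this example; they are not the implementation behind the method.

```python
# Step-2-style decomposition sketch: units are vertices, streams are
# edges; open paths run from supply to demand, cycles are recycles.
import networkx as nx

def path_flows(G, supplies, demands):
    """G: nx.DiGraph of unit operations; returns (open_paths, cycles)."""
    open_paths = [p for s in supplies for d in demands
                  for p in nx.all_simple_paths(G, s, d)]
    cycles = list(nx.simple_cycles(G))
    return open_paths, cycles

# Illustrative reactor-separator-recycle structure:
G = nx.DiGraph()
G.add_edges_from([("feed", "reactor"), ("reactor", "separator"),
                  ("separator", "product"), ("separator", "purge"),
                  ("separator", "reactor")])   # recycle edge
opens, cycles = path_flows(G, supplies=["feed"], demands=["product", "purge"])
print(opens)   # e.g. feed -> reactor -> separator -> product
print(cycles)  # one cycle: the reactor-separator recycle
```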
4. Case Study
Hydrodealkylation of toluene is a process where toluene is converted into benzene by reaction with hydrogen, forming diphenyl as a byproduct. This is a well-known process, which has been studied in numerous publications. The reference design and flowsheet considered in this paper are taken from Seider et al. (1999), where further details can be found. The necessary steady state simulations have been performed with a commercial process simulator, from which the results have been transferred to software developed for this work to calculate the indicator values. The steady state simulation results are also transferred to ICAS (ICAS Documentation, 2002) to determine the environmental impact factors, the sustainability metrics and the safety factors. Table 1 shows the most important indicator values for the base case design. A detailed calculation results document can be obtained from the corresponding author.

Table 1: Indicator values for base case design.
Name   Description             AF     RQ     EWC     MVA       TVA
C1     CH4 in gas recycle      4.88   0.00   484.5   -         -484.5
O2     Diphenyl in stream 26   -      0.00   266.8   -13.4     -280.2
O5     CH4 in stream 27        -      0.00   0.43    -481.9    -482.3
O7     CH4 in purge            -      0.00   7.78    -8748.0   -8755.0
O9     H2 in purge             -      1.06   33.98   -2948.0   -2982.0
The open path flows O5 and O7 score a high negative MVA value because the production of methane from the raw materials is very high compared to the fuel credit
given for these paths. Open path flow O9 shows a high negative TVA value. This is because the hydrogen in this path is lost in the purge, and the purchase price of hydrogen is high compared to the fuel credit given by incineration. The methane gas cycle C1 shows an AF value above one, which indicates an unfavorable build-up of methane in the system. The low TVA value is caused by the high EWC value, which is in turn a result of the high flow rate. A sensitivity analysis was performed in order to identify the retrofit alternatives with the largest impact towards an improvement of the process. A component splitter was introduced after the flash operation in order to separate hydrogen and methane before the recycle, which would lead to lower EWC values for the gas cycle paths (see the C1 value in Table 2). In addition, valuable hydrogen will not be lost in the purge, thus decreasing the negative TVA value of O9. Note that the slight increase in the MVA value for O7 is compensated by the decrease in the MVA values for the other path flows. As hydrogen is fed in excess compared to toluene, reducing the hydrogen feed rate can reduce the amount of hydrogen lost in the purge in O9 (see the TVA value) as well as the raw material cost, which affects the benefit of the process. Finally, the temperature of the inlet stream to the reactor is optimized (increased) in order to reduce the EWC values of the large streams passing unit E-100 (the pre-heater for the reactor). These suggested changes were implemented, and Table 2 shows the improvements in the process, and thus in the indicator values, for the new process design.

Table 2: Indicator values for improved process design.
Name   Description             AF     RQ     EWC     MVA       TVA
C1     CH4 in gas recycle      0.00   0.00   0.00    -         -484.5
O2     Diphenyl in stream 26   -      0.00   267.2   -12.9     -280.2
O5     CH4 in stream 27        -      0.00   0.11    -159.0    -482.3
O7     CH4 in purge            -      0.00   6.45    -9085.0   -8755.0
O9     H2 in purge             -      1.04   0.06    -7.5      -2982.0
The sustainability metrics which show any change (compared to the original design) are summarized in Table 3. As can be seen, the new design gives an improvement, especially for the energy consumption. Finally, results from the WAR algorithm are listed in Table 4, showing a slight improvement in the GWP and PEI values (all other factors are unchanged). The total inherent safety index (ISI) for the base case process was calculated to be 31. This did not change with the new process design (as it is closely related to the production rate and identity of the product, in this case benzene), so no safety or environmental aggravation was encountered by changing the design to a more energy efficient process using the retrofit alternatives. Besides this HDA case study, the method has been successfully applied to a range of other processes, including cyclohexane, styrene and monochlorobenzene plants.
Table 3: Results from sustainability metrics.

Metric                                          Base case design    Improved design
Total net primary energy usage per kg product   78.58E+06 kJ/kg     55.24E+06 kJ/kg
Hazardous raw material per kg product           1.22 kg/kg          1.19 kg/kg
Net water consumed per unit mass of product     184.6 kg/kg         171.3 kg/kg
Table 4: Indicators for the WAR algorithm.

Indicator                                Base case design   Improved design
Global Warming Potential (GWP)           123.1              117.6
Potential Environmental Impact (PEI)     573E+03            570E+03
5. Conclusions
A systematic and easy-to-apply computer-aided technique for retrofit design has been presented, and its application has been highlighted through a well-known case study. An important feature of this technique is that it does not perform any complex calculations but provides feasible retrofit (design) alternatives that show improvements when compared to the reference design. This has been possible because only design variables that are sensitive to the indicators are changed in order to generate the retrofit alternatives. At the same time, since the indicators, by definition, are functions of the same set of variables as the sustainability metrics, environmental impact factors and inherent safety factors, any generated retrofit alternative that improves the indicator values also improves these factors. For the highlighted case study, it appears that rather than a trade-off between competing factors, the generated retrofit alternatives either improve them or are neutral to them. This means that the process optimisation becomes easier and multiple objectives can be satisfied without resorting to trade-offs between them. Current work involves developing more case studies so that the question of multiple objectives can be more thoroughly investigated.
6. References
Cabezas, H., Bare, J., Mallick, S., 1999, Pollution prevention with chemical process simulators: the generalized waste reduction (WAR) algorithm, Computers & Chemical Engineering, 23(4-5), 623-634.
Heikkila, A.-M., 1999, Inherent Safety in Process Plant Design - An Index-Based Approach, PhD Thesis, VTT Automation, Espoo, Finland.
ICAS Documentation, 2002, CAPEC Internal Report, PEC02-23, Technical University of Denmark, Denmark.
Seider, W.D., Seader, J.D., Lewin, D.R., 1999, Process Design Principles, Chapter 3, John Wiley & Sons, Inc., USA.
Uerdingen, E., 2002, Retrofit Design of Continuous Chemical Processes for the Improvement of Production Cost-Efficiency, PhD Thesis, ETH Zurich, Switzerland.
Tallis, B., 2002, Sustainable Development Progress Metrics, IChemE Sustainable Development Working Group, IChemE, Rugby, UK.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
A Comparison of Flowsheet Solving Strategies Using Interval Global Optimisation Methods
S. Balendra and I.D.L. Bogle*
Centre for Process Systems Engineering, Department of Chemical Engineering, UCL (University College London), Torrington Place, London WC1E 7JE, U.K.
Abstract
This paper investigates an approach to process optimisation that allows the construction of flowsheets in modular systems which can then be optimised using interval global optimisation methods based on branch and bound. The modular flowsheets are constructed with generic unit modules that can provide interval bounds, linear bounds and derivative bounds using extended arithmetic types. Modular flowsheets are convenient, and modular flowsheeting systems are very popular. The simultaneous modular approach, along with equation-based methods, has been used to optimise three case studies. The modular approach has advantages, but when optimising there is a price to pay for this: more bounding steps are required, resulting in higher computation times, although in the case of the water network problem, the most nonlinear of the problems, the price was small.
1. Introduction
New methods for global optimization are continually generating interest in chemical engineering. Recent advances in deterministic global optimization methods for the enclosure of all solutions of nonlinearly constrained problems allow us to consider a wide range of chemical engineering optimisation problems (Floudas, 1999). Flowsheet optimisation can provide significantly better designs at modest cost during the early stages of design. Modular flowsheeting systems dominate the market and are used by much of the design community. Two important distinctions are identified in formulating flowsheeting problems. In the equation-oriented formulation the flowsheet is treated as a set of mass/energy balance and constituent equations that are solved simultaneously. The alternative sequential modular approach views the flowsheet as interconnected black boxes. Local modular optimisation methods have been investigated by Schmid & Biegler (1994). Both approaches have their advantages; however, the modular approach matches more closely the natural structure of the flowsheet. Optimisation can be achieved by the application of global optimisation algorithms to modular flowsheets built from generic models (Byrne & Bogle, 2000) using methods based on interval analysis.
* Corresponding author: [email protected]; +44 (0)20 7679 3803
1.1. Interval analysis and automatic differentiation
Interval analysis was first introduced by Moore (1966). An interval consists of a lower and an upper bound. Interval arithmetic ensures that the interval result of an operation on two intervals contains all achievable real values. This is useful as continuous real variables can be divided into discrete interval sections. If mathematical operations are carried out on these intervals, according to the rules of interval arithmetic, the result will contain the range of all possible values. As a result, the global optimum can be bounded. Hansen (1992) provides a detailed explanation of some interval methods and their application to global optimization. Certain interval methods make use of interval gradients, which require derivatives. Automatic Differentiation (AD) is a method for simultaneously computing partial derivatives using a multicomponent object, called a gradient, whose algebraic properties incorporate the chain rule of differentiation. The rules, together with the gradient type, form an extended type that we call AD. This type can be used with intervals.
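The inclusion property is easy to demonstrate. The fragment below sketches a bare-bones interval type; it is only an illustration of the arithmetic rules, not the extended types used in this work.

```python
# Minimal interval type illustrating the inclusion property: operating on
# [lo, hi] pairs with these rules yields an interval guaranteed to contain
# every achievable real result.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # product range is the min/max over all endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

x = Interval(-1.0, 2.0)
print(x * x)   # Interval(-2.0, 4.0): wider than the true range [0, 4]
               # of x**2 -- the dependency problem noted in the results
```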
2. Flowsheet Optimisation Methodology
The sequential modular approach can be formulated in a way suitable for interval methods. Modules are connected in the same way but must be modified to handle interval arithmetic. A generic module requires point values or intervals for all the input streams and unit parameters and calculates the conditions for the output streams as values or intervals respectively. Modular flowsheets are constructed with generic unit modules that can provide the interval bounds, linear bounds, derivatives and derivative bounds using extended arithmetic types. Using interval analysis and automatic differentiation as the arithmetic types, lower bounding information is used for optimisation in a branch and bound framework. The interval global optimisation problem is reformulated as a bound-constrained linear relaxation problem. Various classes of linear underestimators, derived from the natural extension and the mean value forms of interval analysis, can be applied to any once-differentiable function (Byrne & Bogle, 1999). These underestimators are combined with the interval bounded linear program to create a rigorous algorithm for constrained global optimisation. These reformulations are used for nonconvex problems.
Fig 1. A generic model allowing multiple types.
A generic model, illustrated in Fig 1, is a model that specifies the transformations that need to be applied to some underlying type, T, to obtain the output. The model needs to
describe the operations, and then the appropriate rules are applied to data of type T. For example, a module which adds its inputs should use the interval arithmetic rules if the underlying type is an interval (T = interval) and the rules of AD if the underlying type is an AD type. This formulation also includes models using traditional arithmetic.
2.1. Optimisation of extended-type flowsheets
Flowsheets can be built up from generic units and then evaluated using the operations for an extended type, T, which provides the information necessary for the global optimisation algorithm. The variable type being used is intervals. Each unit calculates the output streams (where the input and output streams are usually vectors) and the cost associated with the unit as intervals. The summation of the costs provides the objective function. The design constraints are added outside the module to the optimisation problem. Interval gradient types have been used, and the global optimisation algorithm uses the NE and MV underestimation scheme (Byrne & Bogle, 1999) to construct a linear relaxation of the objective function and constraints in terms of the optimisation variables. The solution of this linear relaxation provides the necessary bounds on the optimisation problem.
2.2. Solving recycle problems
The recycle stream is torn with a tear block. Then a range of inputs to the flowsheet is selected by the interval optimisation algorithm and the flowsheet is calculated through the interval units to produce a set of outputs which bound all the possible outputs from the flowsheet with the given parameters. Given a guess for the values in the torn stream, X_T, the resulting calculated torn stream, X_C, will also enclose all the possible values of the recycle stream with X_T as an input. The simultaneous modular approach has been applied. This corresponds to using a recycle module where the parameters are independent variables in the optimisation and the residuals are the residuals of the constraints X_T - X_C = 0. In this case, the recycle is converged as the optimisation converges.
2.3. Equation based flowsheets
Interval global optimisation can also be used with equation based systems. Information about the structure of the problem facilitates efficient solution of the constrained NLPs using interval analysis. This is achieved by reformulating the lower bounding procedure as a convex programming problem, which allows the inclusion of convex constraints in the lower bounding problem. Bounding procedures here have been taken from Zamora & Grossmann (1998).
2.4. A comparison of the approaches
It is interesting to compare the performance of interval methods on modular and equation based formulations of flowsheeting problems. Efficiency and convenience both need to be analysed when making comparisons. The equation based formulation will be more computationally efficient, so it is important to know what the price is for having the convenience of a modular formulation. The following sections will demonstrate modular and equation based optimisation for three case studies.
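The overall algorithm can be sketched compactly. The toy version below is one-dimensional and unconstrained and uses a crude term-wise natural interval extension as the lower bound, in place of the NE/MV linear underestimators and constraint handling of the actual method:

```python
# Toy 1-D interval branch and bound: bound each box from below with an
# interval extension, fathom boxes whose lower bound cannot beat the
# incumbent, and bisect the rest.
def interval_bb(f, lower_bound, lo, hi, tol=1e-6):
    best = min(f(lo), f(hi))             # incumbent upper bound
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if lower_bound(a, b) > best - tol:
            continue                     # fathom: box cannot improve
        m = 0.5 * (a + b)
        best = min(best, f(m))           # update incumbent at midpoint
        boxes += [(a, m), (m, b)]        # branch
    return best

f = lambda x: x**4 - 3*x**2 + x          # nonconvex test function

def lower_bound(a, b):
    # crude term-wise natural interval extension of f on [a, b]
    x4_lo = 0.0 if a <= 0.0 <= b else min(a**4, b**4)
    x2_hi = max(a**2, b**2)
    return x4_lo - 3.0 * x2_hi + a       # x-term bounded below by a

print(interval_bb(f, lower_bound, -2.0, 2.0))  # approx. global minimum
```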
3. Case Study I: Haverly Pooling Problem
This problem is to blend four feeds into two products to minimize the total cost. Each feed has a composition x_A and a cost c [£/(kmol h)]. Each product has a cost, a required composition, and a flowrate, F. The feeds are mixed to produce products that satisfy the quality requirements, using mixer and splitter units to represent the blending tanks. A diagram and details of the problem can be found in Quesada & Grossmann (1995). This is a small-scale blending problem and is non-convex. The procedure presented here could be used to obtain the global optimum of large-scale non-convex blending problems by reusing a very small number of generic units (mixers, splitters, and feed and product units). This problem has a number of local minimizers, and a global minimizer at F = [0, 100, 0, 0, 0, 100, 100, 200]^T and x_A = 0.01, with f(x) = -400. This problem requires feeds, products, splitters and mixers. The flowrate is determined by a single input parameter and the cost by multiplying the unit cost by the flowrate. A mixer has two inputs and one output calculated by 'adding' the inputs together; in this case the cost is zero. Splitters divide one input into two outputs based on the input parameter (split fraction), and again the cost is zero. The quality constraints placed on the two product streams become residuals in the product modules, which have one input and no outputs. The equation-based formulation is given in Quesada & Grossmann (1995).
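For reference, the problem can be written compactly in the classic 'p-formulation', with the pool quality as an explicit variable. The sketch below uses the standard Haverly data and a local NLP solver purely for illustration; a local solver may stop at one of the local minima, which is precisely why the interval method is needed to certify the global optimum of f(x) = -400.

```python
# Haverly pooling in the "p-formulation": pool quality p is an explicit
# variable, which makes the sulfur balance bilinear (non-convex).
# Data: feeds A (3% S, $6), B (1% S, $16) feed the pool; C (2% S, $10)
# is blended directly; product x ($9, <=100, <=2.5% S), y ($15, <=200,
# <=1.5% S). A local solver is used here only for illustration.
import numpy as np
from scipy.optimize import minimize

# v = [fA, fB, fC, x1, y1, x2, y2, p]: pool feeds, direct feed, pool and
# direct contributions to each product, and pool sulfur quality p.
def cost(v):
    fA, fB, fC, x1, y1, x2, y2, p = v
    return 6*fA + 16*fB + 10*fC - 9*(x1 + x2) - 15*(y1 + y2)

cons = [
    {'type': 'eq',   'fun': lambda v: v[0] + v[1] - v[3] - v[4]},            # pool balance
    {'type': 'eq',   'fun': lambda v: v[2] - v[5] - v[6]},                   # direct split
    {'type': 'eq',   'fun': lambda v: 3*v[0] + v[1] - v[7]*(v[3] + v[4])},   # sulfur (bilinear)
    {'type': 'ineq', 'fun': lambda v: 2.5*(v[3]+v[5]) - v[7]*v[3] - 2*v[5]}, # x quality
    {'type': 'ineq', 'fun': lambda v: 1.5*(v[4]+v[6]) - v[7]*v[4] - 2*v[6]}, # y quality
    {'type': 'ineq', 'fun': lambda v: 100 - (v[3] + v[5])},                  # x demand
    {'type': 'ineq', 'fun': lambda v: 200 - (v[4] + v[6])},                  # y demand
]
res = minimize(cost, x0=np.array([50, 50, 50, 30, 70, 30, 70, 2.0]),
               bounds=[(0, 300)]*7 + [(1, 3)], constraints=cons)
print(res.fun)   # known global optimum: -400 (a local run may stop short)
```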
4. Case Study II: Recycle Flowsheet Problem
This case study is a reactor-separator-recycle system to produce monochlorobenzene. The operating parameters and sizes for one of the synthesis alternatives are optimised using the detailed models and the costing information provided. Each unit has a capital cost, Cc, and an operating cost, Co, which is incorporated into the objective function through a pay-out time of 2.5 years. The principal units are a CSTR and two separation columns. The models have been reformulated in terms of component flowrates, F_s,j. The reactor is a continuous stirred tank reactor (CSTR) which models the reaction between chlorine and benzene (A) to produce monochlorobenzene (B) and dichlorobenzene (C) at constant temperature. The maximum (global) profit is $2081/day. This flowsheet contains the same units as case study I plus a reactor. The reactor model has a single input stream, F_2,1, and three parameters, A_1, B_1, and the extent of reaction, v. The equation-based formulation is given in Floudas (1995).
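The simultaneous-modular treatment of the tear stream described in Section 2.2 can be illustrated on a toy version of such a reactor-separator-recycle loop. Everything below (the single-component flowsheet, numbers and the local solver) is invented for illustration; it is not the monochlorobenzene model.

```python
# Simultaneous-modular recycle sketch: the tear-stream value x_T is an
# extra optimisation variable and the residual x_T - x_C(x_T) = 0 is
# imposed as a constraint, so the recycle converges with the optimiser.
from scipy.optimize import minimize

feed = 10.0
def flowsheet(x_t, conv):
    """Returns (calculated tear flow x_C, product flow)."""
    reactor_out = (feed + x_t) * (1.0 - conv)   # unreacted material
    recycle = 0.9 * reactor_out                 # separator recycles 90%
    product = (feed + x_t) * conv
    return recycle, product

obj = lambda v: -(flowsheet(v[0], v[1])[1] - 5.0 * v[1]**2)  # profit - cost
res = minimize(obj, x0=[5.0, 0.5],
               bounds=[(0.0, 100.0), (0.05, 0.95)],
               constraints=[{'type': 'eq',
                             'fun': lambda v: v[0] - flowsheet(v[0], v[1])[0]}])
print(res.x)   # converged tear flow and conversion
```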
5. Case Study III: Alperovits Eight-Link Looped Water Network
Fig 2 below illustrates a looped water network design configuration of pipe junctions (nodes), a pressure source (reservoir, pump) and fixed pipe lengths (connecting the nodes). The global optimisation task is to reduce the cost of the network while satisfying the anticipated water pressure head at the nodes. A variety of local optimisation schemes have been developed, and very few global approaches have been considered (Eiger et al., 1994; Sherali & Subramanian, 2001). This is the first attempt using interval analysis.
Fig 2. Water network problem.
The optimal design problem is to determine the minimal-cost combination of pipeline diameters. Each diameter has a corresponding cost per unit length c_k. Each segment k of pipe link i has a particular diameter and a corresponding length X_ki (Σ_k X_ki = 1000 m), and the objective function is Cost = Σ c_k X_ki.
The global optimum has a cost of $403,657 with the following values of X_ki for the pipe links: 1 (1000 m), 2 (795:205 m), 3 (1000 m), 4 (1000 m), 5 (310:690 m), 6 (11:989 m), 7 (99:901 m), 8 (1000 m). For this looped network we have modules for the pipes and the nodes. The objective function is the sum of the costs of the pipe modules. Due to the difference in head between two nodes, the head constraints are taken as the residuals. The node module can be thought of as a mixer module as in case study I, with an extra stream leaving representing the demand. The equation-based formulation is given in Sherali & Subramanian (2001).
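The split-pipe structure of this optimum (e.g. the 795:205 m division of link 2) reflects the underlying segment-length formulation: for fixed flows, each candidate diameter has a known cost and hydraulic gradient per metre, so choosing segment lengths within a link is linear. The numbers in the sketch below are invented for illustration.

```python
# Segment-length LP for one 1000 m link with two candidate diameters:
# cost per metre c and head loss per metre j are fixed for each diameter,
# so the optimal split subject to a head-loss budget is a tiny LP.
from scipy.optimize import linprog

c = [60.0, 90.0]          # cost per metre of the two candidate diameters
j = [0.020, 0.005]        # head loss per metre (smaller pipe loses more)
L, h_max = 1000.0, 12.0   # link length and allowed head loss over the link

res = linprog(c,
              A_ub=[j],  b_ub=[h_max],        # total head-loss budget
              A_eq=[[1.0, 1.0]], b_eq=[L],    # segment lengths fill the link
              bounds=[(0, L), (0, L)])
print(res.x, res.fun)     # split of the link between the two diameters
```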
6. Results - Number of Iterations and CPU Time for the Case Studies

Table 1 - Numerical results.

Case study    Modular based            Equation based
              CPU/sec    Iterations    CPU/sec    Iterations
I             18.4       13            2.09       3
II            289.2      26            182        15
III           23.56      9             20.94      9
The results in Table 1 were generated using Matlab 6.1 on a Pentium III 1 GHz. As expected, the CPU time for the modular approach is longer, since there is an inevitable overhead associated with the modular approach. From the results it can also be seen that the number of iterations (interval divisions) is often higher for modular based systems than for equation based systems. This is because more relationships are introduced through the interconnections between modules, resulting in more bounding procedures being required for modular systems. It is also a result of the well-known dependency problem (Hansen, 1992), which causes extra bounding when a variable that occurs more than once in an expression is treated as if each occurrence were a different variable.
7. Conclusions
The simultaneous modular approach, along with equation-based methods, has been used to optimise three case studies. The modular approach has advantages, but when optimising there is a price to pay for this: more bounding steps are required, resulting in higher computation times, although in the case of the water problem, the most nonlinear of the problems, the price was small. Our future work will explore the extent to which this can be alleviated by careful implementation of modules to account for interval arithmetic, and in particular to avoid the dependency problem. Also, in appropriate modules we will introduce an interval Newton method to reduce the number of bounding procedures.
8. References
Byrne, R.P. and Bogle, I.D.L., 1999, Global optimisation of constrained non-convex programs using reformulations and interval analysis, Comput. Chem. Engng., 23, 1341-1350.
Byrne, R.P. and Bogle, I.D.L., 2000, Global optimization of modular process flowsheets, Ind. Eng. Chem. Res., 39(11), 4296-4301.
Eiger, G., Shamir, U. and Ben-Tal, A., 1994, Optimal design of water distribution networks, Water Resources Research, 30(9), 2637-2646.
Floudas, C.A., 1995, Non-linear and Mixed Integer Optimisation: Fundamentals and Applications, Oxford University Press, New York.
Floudas, C.A., 1999, Recent advances in global optimization for process synthesis, design and control: enclosure of all solutions, Comput. Chem. Engng., 23, S963-S973.
Grossmann, I.E., 1996, Global Optimization for Engineering Design, Kluwer Academic Publishers, Dordrecht, The Netherlands.
Hansen, E., 1992, Global Optimization Using Interval Analysis, Marcel Dekker, New York.
Moore, R.E., 1966, Interval Analysis, Prentice Hall, Englewood Cliffs, New Jersey.
Schmid, C. and Biegler, L.T., 1994, A simultaneous approach for flowsheet optimisation with existing modelling procedures, Trans. IChemE, 72, 382-388.
Sherali, H.D. and Subramanian, S., 2001, Effective relaxations and partitioning schemes for solving water distribution network design problems to global optimality, Journal of Global Optimisation, 19, 1-26.
Quesada, I. and Grossmann, I.E., 1995, A global optimization algorithm for linear fractional and bilinear programs, Journal of Global Optimization, 6, 39-76.
Zamora, J.M. and Grossmann, I.E., 1998, Continuous global optimisation of structured process systems models, Comput. Chem. Engng., 22/12, 1749-1770.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
An Integration of Design Data and Mathematical Models in Chemical Process Design
B. Bayer, L. von Wedel, W. Marquardt*
Lehrstuhl für Prozesstechnik, RWTH Aachen, D-52056 Aachen, Germany
Abstract
Model-based methods, provided by process modeling environments, nowadays support a variety of activities throughout the lifecycle of designing and running chemical processes. On the other hand, computer-aided engineering (CAE) systems have become popular for maintaining the information relevant to describe the function of the process as well as equipment data. This contribution proposes an integration of these two specialized types of systems. First, such an integration makes it possible to understand the design context in which models have been used. Further, an export functionality of the CAE system allows the modeler to generate a skeleton model that can be modified or enriched, instead of starting modeling from scratch for each model-based task.
1. Introduction
In chemical process industries there is a growing demand for an improvement of the design process in order to obtain better plants in shorter development cycles. Existing software tools support only isolated parts of the design process. A sustainable improvement of this software support can only be achieved by an integration of tools into an environment combined with the support of work processes (Marquardt and Nagl, 1998). Design data capturing information about the chemical process and the plant, as well as mathematical models used for analysis or optimization purposes, are major types of information within chemical process design. Design data characterizing a chemical process (such as flowsheets and equipment data sheets) are captured within computer-aided engineering (CAE) tools and project databases. Process modeling environments allow developing and solving mathematical models for process analysis. These models are stored in files by the different users, who organize them according to personal preferences. Obviously, there are strong dependencies between both sets of information: flowsheet topology and phenomena are reflected by the mathematical model on the one hand; on the other hand, findings based on simulation experiments form the basis for design decisions about the process structure and must thus be re-integrated into the design database. Thus, the integration of tools managing process design data and of tools used for model-based analysis is an important contribution towards the integration of model-based synthesis and analysis steps during chemical process design.
* Correspondence should be addressed to W. Marquardt, [email protected]
1.1. Requirements
An integration solution to improve upon the use of design data and mathematical models must satisfy several important requirements:
• It must be possible to create a new mathematical model on the basis of the process as it is represented by a process flowsheet. Here, different parts of the model correspond to different parts of the flowsheet. During his/her work, the engineer must be able to switch back and forth between process steps in a flowsheet and corresponding mathematical models.
• Based on the design data, a mathematical model cannot be generated completely. Therefore, it must be possible for the user to further extend, modify, and specify the model, ideally with state-of-the-art process modeling tools such as AspenPlus or gPROMS.
• If the process flowsheet is modified by the engineer and simulations are going to be performed for this new design alternative, it must be possible to reuse parts of the mathematical models which have already been specified for the unmodified parts of the process, to avoid the user repeating work already done.
• The simulation results must be integrated back into the flowsheet, at least partially, so that they are available for further design activities. It is further possible that the user recognizes during modeling and simulation that a modification of the process or its structure is necessary, and modifies the model accordingly. Such changes must then be imported back into the flowsheet in order to ensure consistency between the design data and the mathematical model.
• During process design, mathematical models on different levels of granularity with varying degrees of detail are employed, depending on the design context. Mathematical models might represent mass balances with linear descriptions of unit operations for costing considerations, rigorous kinetics and phase equilibria for operating point optimization, or dynamic phenomena for control system design. These different types of models, which are often realized using different process or generic modeling tools, must be maintained.
1.2. State of the art
Currently, the functionality described above is mainly performed manually by engineers concerned with process design and model development. Certainly, there is a risk that work in model development is duplicated, that inconsistent models are employed for different purposes, or that the models used during process design cannot be found in later stages of the project. In commercial tools, the functionality stated as required above is realized partially. As an example, the Aspen Engineering Suite (AES; AspenTech, 2002) realizes an export of a steady-state model in AspenPlus to a dynamic model in AspenDynamics. However, such a point-to-point integration, although useful, is not extensible towards more general model development problems involving different granularity and abstraction levels. Whereas the AES approach considers the AspenPlus model as the core information from which other model representations are derived, we consider the flowsheet of a process as the relevant piece of information that is used as a starting point for model development (Bayer et al., 2001b). This permits an extensible architecture which can
handle the derivation of models from a process flowsheet in a number of different contexts and between tools of different vendors. In recent years, integrated design environments like Aspen Zyqad have been developed. Such tools are tightly linked to the products of a single vendor and do not permit the use of the different process modeling tools best suited for the different tasks within process design.
2. Tools for Managing Design Data and Mathematical Models In this work, Comos PT (innotec, 2002), a tool holding design data, has been integrated with the model repository ROME (von Wedel, Marquardt, 2002), which is capable of storing and organizing the models that are created and used during process design activities. 2.1. Comos PT Comos PT is a commercial CAE system which was originally developed for the support of data management during detailed engineering and construction. It provides a basis for the consistent storage of data. The system has a layered architecture whose core is an object server. Within this server, an object-oriented data model is specified that can be extended and changed in a flexible manner. Further, the server offers standard integration mechanisms like COM and OLE. As a chemical engineering system, Comos PT offers flowsheets in its user interface, where the objects placed on a flowsheet are directly connected to the objects within the database. Comos PT has been extended with parts of the conceptual information model CLiP (Bayer et al., 2001a) as a basis for the support of conceptual process design. Thus, Comos PT is responsible for manipulating and storing design data. The major objects of that extension are process steps, representing the different reactions, physical procedures and unit operations of a chemical process, and the phase systems, representing the material flowing between these process steps. Further, the chemical plant with its equipment and pipes can be specified within that CLiP implementation. 2.2. ROME
Instead of integrating process modeling tools on an individual basis, the design data management system based on Comos PT is hooked up to the model repository ROME (von Wedel, Marquardt, 2002), which takes care of storing and maintaining process models from a variety of sources, such as gPROMS, AspenPlus, or CAPE-OPEN compliant software components. ROME abstracts these incompatible model definitions into a neutral representation and thereby permits access to heterogeneous models through a unified interface. A number of advantages are realized through this neutral representation, among them being able to document, browse, and aggregate models independently of the modeling tools used for their development. The long-term goal of the project is the ability to gradually build up a library of mathematical process models relevant for the business domain of a company, regardless of the tools being employed in process modeling. ROME is implemented using an object-oriented database (Versant, 2000) which can be accessed via CORBA middleware (Object Management Group, 2002) to ensure that the model repository is easily integrated with in-house or commercial software in a chemical engineering company's intranet, for example. Functionality such as importing,
browsing, exporting, or aggregating models can be performed by means of a Windows-based graphical user interface that is part of the ModKit modeling tool (Hackenberg, 2002).
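To make the idea of a neutral model representation concrete, the following sketch (illustrative Python; the class layout and method names are assumptions, not the actual ROME schema or API) shows how tool-specific model definitions could be stored and browsed behind one interface:

# A minimal sketch (illustrative Python; not the actual ROME schema) of
# a neutral model representation wrapping heterogeneous tool formats.
from dataclasses import dataclass

@dataclass
class NeutralModel:
    name: str
    documentation: str = ""
    native_format: str = "unknown"     # e.g. "gPROMS", "AspenPlus"
    native_source: str = ""            # tool-specific model definition

class Repository:
    """Stores neutral models and answers queries independently of the tool."""
    def __init__(self):
        self._models = {}

    def import_model(self, model):
        self._models[model.name] = model

    def browse(self, fmt=None):
        return [m for m in self._models.values()
                if fmt is None or m.native_format == fmt]

repo = Repository()
repo.import_model(NeutralModel("flash_unit", "steady-state flash",
                               native_format="AspenPlus"))
assert repo.browse("AspenPlus")[0].name == "flash_unit"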
3. Relations Between Design Data and Models The integration approach outlined in this contribution relies on the information model CLiP capturing the relationships between the concepts handled in the two tools to be integrated, i.e. the relations between process steps and phase systems on the left-hand side and mathematical models on the right-hand side (cf. Fig. 1).
[Figure 1: UML-style class diagram. The design-data classes ProcessStep, PhaseSystem, ProcessPort and ProcessState (from DesignData) are related through modeled-by associations to the classes Model, Connector and Coupling (from Models).]
Figure 1: Conceptual data model showing relations between process steps and models.
In CLiP, the core element to describe the function of a process is a ProcessStep. Examples for process steps are unit operations such as reaction, separation, or mixing (cf. the left-hand side of Fig. 1). A PhaseSystem represents the processing material and defines chemical components as well as the aggregate state. A ProcessPort represents a distinct point within the process with an explicit state of a process step or a phase system (cf. the ProcessState class in Fig. 1). The relations between process ports and states describe the structure of the process. A mathematical Model (cf. the right-hand side of Fig. 1), maintained in ROME, can be specified by equations and variables or by referring to a tool-specific model, such as an AspenPlus input file. A Connector represents a possible connection point of a model, and a Coupling denotes connections among connectors. The connector and coupling concepts permit reflecting the structure of a process in the mathematical model. The behavior of process steps and phase systems can be represented by mathematical models (see the modeled-by relations in Fig. 1). In order to map the process topology to the model, process ports and connectors have to be related, as well as process states and couplings, by corresponding modeled-by relations. It must be considered that the relations between process design and model concepts are rather complex. Depending on the design context, a process step (such as separation) can be represented by different mathematical models (e.g. a linear splitter or a rigorous distillation column) and a certain mathematical model (such as a first-order transfer block) can be used to
represent different process steps (e.g. a tank or a first-order reaction mechanism). Thus, the modeled-by relations must be maintained with many-to-many multiplicities.
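To make the many-to-many multiplicity concrete, the following sketch (illustrative Python; class and relation names are assumptions, not the CLiP implementation) maintains the modeled-by links in both directions:

# A minimal sketch (illustrative Python; names are assumptions) of
# modeled-by relations with many-to-many multiplicity.
from collections import defaultdict

class ModeledByRegistry:
    """Maintains modeled-by links between process steps and models."""
    def __init__(self):
        self._models_of = defaultdict(set)   # process step -> models
        self._steps_of = defaultdict(set)    # model -> process steps

    def relate(self, process_step, model):
        # One step may be modeled by many models, and one model may
        # represent many steps (e.g. a first-order transfer block).
        self._models_of[process_step].add(model)
        self._steps_of[model].add(process_step)

    def models_of(self, process_step):
        return self._models_of[process_step]

    def steps_of(self, model):
        return self._steps_of[model]

registry = ModeledByRegistry()
registry.relate("separation", "linear_splitter")
registry.relate("separation", "rigorous_column")
registry.relate("tank", "first_order_block")
registry.relate("first_order_reaction", "first_order_block")
assert registry.steps_of("first_order_block") == {"tank", "first_order_reaction"}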
4. Integration: Implementation and Functionality In this section the integration of Comos PT and ROME is described. In order to create a mathematical model from a design alternative specified in Comos PT, the user selects models for each phase system and process step within Comos PT. In the course of this work, we have implemented support that provides defaults for this selection for all types of process steps. These selections are grouped into alternative configurations which correspond to a certain model-based application. The first step in applying some model-based application is to specify a suitable model for each process step. Basically, this step establishes the modeled-by relations (cf. Fig. 1) between process steps and phase systems on the one hand and models on the other hand. Mapping the process topology to the model to be generated is taken care of by the automatic model generation step. In order to use linear models (e.g. for basic cost estimation), the integrated system provides a configuration 'linear process model', which consists of defaults for linear models for each process step. Another configuration is provided for rigorous steady-state simulations with AspenPlus. Here, all process steps are mapped to corresponding blocks provided by the AspenPlus flowsheeting tool. In addition, the user can create custom configurations and specify arbitrary models to be used, e.g. when custom models for dynamic simulation are to be employed. These models can be selected among the set of models maintained by the model repository ROME. Once mappings for all process steps have been specified in Comos PT, through the defaults of a certain configuration or through user selections, the model can be exported to the model repository ROME. The model is first exported to an intermediate format represented in XML and is subsequently imported into ROME. From the model repository the user can then export the model to different target formats, among them modeling languages like gPROMS, ACM, and Modelica. It should be noted that the flowsheet topology is preserved through the series of steps executed. Further support realized in Comos PT allows the user to switch between configurations or to create new ones. In order to customize a configuration, the user can override the default mappings, for example to employ a tailored model for a certain process step. The selection of a model for a process step is simplified by a dialogue from which the user can select among the models already available in ROME. The implementation of the conceptual data model shown in Fig. 1 is performed in a number of tables in MS Excel storing the mappings between process steps and models (organized in rows) for the different configurations (organized in columns). Each design project is organized in a separate Excel sheet. Further sheets contain the default configurations. An extension to Comos PT, developed in Visual Basic for Applications (VBA), generates an XML file referring to the selected model configuration stored within the Excel tables. The repository services for model import and model selection can be accessed from Comos PT via the ROME API (von Wedel, Marquardt, 2002).
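As an illustration of the export step, the following sketch (illustrative Python; the element names are assumptions and do not reproduce the actual Comos PT/ROME exchange schema, which in the implementation is generated by a VBA extension) writes the step-to-model mappings of one configuration to XML:

# A minimal sketch (illustrative Python; element names are illustrative,
# not the actual exchange schema) of exporting a model configuration to
# an intermediate XML format.
import xml.etree.ElementTree as ET

def export_configuration(config_name, step_to_model, path):
    """Write the process-step -> model mappings of one configuration."""
    root = ET.Element("modelConfiguration", name=config_name)
    for step, model in step_to_model.items():
        ET.SubElement(root, "mapping", processStep=step, model=model)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

# Example: a default configuration for rigorous steady-state simulation,
# mapping process steps to AspenPlus blocks.
export_configuration(
    "aspenplus_steady_state",
    {"reaction": "RCSTR", "separation": "RadFrac", "mixing": "Mixer"},
    "config.xml",
)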
5. Conclusions and Future Work Based on the conceptual data model CLiP, the integration of the design data management tool Comos PT and the model repository ROME has been presented. The
integration improves the use of different model-based analysis techniques along the process design lifecycle. Our realization further permits generating parts of a mathematical model and switching between the flowsheet and the corresponding models, so that decisions made on the design level can be quickly verified with suitable model-based techniques. As opposed to proprietary solutions, our approach is open and permits the use of process modeling tools stemming from different vendors. Currently, our approach implements only plain export functionality in order to simplify the transition from a flowsheet to a mathematical model. The use of data integration technology based on the explicit management of relations between heterogeneous documents and integration rules (Becker, Bayer, Nagl, 2003) has shown promising results when it comes to re-integrating changes of the mathematical model back into the flowsheet. This technology permits incremental updates of mathematical models according to modifications in the flowsheet and vice versa. Such technology should be applied in future extensions in order to improve the capabilities of the implementation and to further simplify the handling of process data and models.
6. References
Bayer, B., Becker, S., Nagl, M., 2003, Integration Tools for Supporting Incremental Modifications within Design Processes in Chemical Engineering, 8th International Symposium on Process Systems Engineering, accepted for publication.
Bayer, B., Weidenhaupt, K., Jarke, M., Marquardt, W., 2001a, A Flowsheet Centered Architecture for Conceptual Design, In: R. Gani, S.B. Jørgensen (Eds.): "European Symposium on Computer Aided Process Engineering - 11", Elsevier, 345-350.
Bayer, B., Krobb, C., Marquardt, W., 2001b, A Data Model for Design Data in Chemical Engineering - Information Models, Technical report LPT-2001-15, Lehrstuhl für Prozesstechnik, RWTH Aachen, available online at www.lfpt.rwth-aachen.de.
Hackenberg, J., 2002, PhD Thesis, RWTH Aachen, in preparation.
innotec GmbH, 2002, innotec online, http://www.innotec.de.
Marquardt, W., Nagl, M., 1998, Tool Integration via Interface Standardization?, DECHEMA-Monographie 135, 95-126.
Object Management Group, 2002, The Common Object Request Broker: Architecture and Specification, release 2.6.1.
Versant Corp., 2000, Versant Database Fundamentals Manual, Fremont, California.
von Wedel, L., Marquardt, W., 2002, ROME: A repository to support the integration of models over the lifecycle of model-based engineering processes, In: S. Pierucci (Ed.): "European Symposium on Computer Aided Process Engineering - 10", Elsevier, 535-540.
7. Acknowledgements This work has been funded by the DFG, Deutsche Forschungsgemeinschaft, in the CRC 476 IMPROVE. The authors thank A. Meyers for the support during implementation.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
A Production Planning Strategic Framework for Batch Plants Frederic Berard, Catherine Azzaro-Pantel, Luc Pibouleau, Serge Domenech Laboratoire de Génie Chimique - UMR 5503 CNRS/INPT/ENSIACET, BP 1301 - 5, rue Paulin Talabot - 31106 Toulouse Cedex 1 - France, [email protected]
Abstract Multipurpose batch plant planning is identified as a combinatorial optimization problem. The overall algorithmic framework proposed in this study is based on a two-stage approach using an object-oriented discrete-event simulation model (AD-HOC+) in combination with a Genetic Algorithm, since this kind of technique is particularly attractive for treating problems of NP-hard nature. The problem formulation and the results corresponding to a finite-horizon production are presented. This strategy deals with a production organized in only one campaign of a given time horizon H. The objective is to maximize product quantities during the campaign.
1. Introduction Multipurpose flexible batch plants are widely used to produce high-value-added fine chemicals in small quantities. Their production involves a multi-step synthesis and is subject to strict constraints which are due, on the one hand, to market instability (short life cycles, rapid demand fluctuations) and, on the other hand, to rigorous regulations concerning product quality and environmental aspects. In this context, planning is a very complex problem integrated in supply chain management. The common approach to solving the planning problem of multipurpose plants is to use an MINLP (Mixed Integer Non Linear Programming) formulation (Papageorgiou and Pantelides, 1996). The main limitations of MINLP approaches are attributed to a combinatorial explosion for the treatment of large problems. To overcome this difficulty, a stochastic global optimization technique based on Genetic Algorithms (GAs) was developed previously (Berard et al., 1999a, 1999b). The optimization tool embeds a Discrete-Event Simulation (DES) model (AD-HOC+) using object-oriented concepts and describing the global dynamic behaviour of the production system for evaluation of the objective function. The production planning optimization problem was then solved through different strategies corresponding to various plant scenarios, i.e., cyclic production, finite-horizon production and "just-in-time" production. In each case, a set of production management parameters and a performance criterion computed by the DES simulator are to be defined. More precisely, the recipe priority order of final or intermediate products, determining the batch to be treated first at a given time, leads to a marked combinatorial aspect. In this paper, the problem formulation and the results corresponding to a finite-horizon production are presented. This strategy deals with a production organized in only one campaign of a given time horizon H. The objective is to maximize product quantities
produced in the facility during the campaign. An example is presented to illustrate how the proposed methodology can enhance managerial decision making.
2. A Finite-Horizon Production Strategy for Production Planning This part deals with the optimal production strategy within one campaign. The constraint is to complete production by the end of the imposed time horizon. The campaign is composed of a subset of final products FPi. For a given final product FPi, the produced quantity Qi is divided into a number ni of batches of size Si. The production ratios are fixed with reference to Q1. The production parameters chosen to manage the production are the following:
- Qi: quantity manufactured within the finite-horizon campaign for product FPi;
- ni: number of batches of size Si for each FPi;
- pi: priority order between product recipes for the selection of the batch to load first at a given time;
- h1, h2: heuristic rules for conflict management; h1 is used to choose the batch to load first if several batches have the same priority. Possible rules are: 1 for the batch which enters the facility first and 2 for the batch which enters an operation waiting queue first. h2 relates to the choice of an equipment item for a given operation. The implemented rules are: 1 for the equipment item which has been available for the longest time; 2 for the one with the smallest workload; 3 for the one with the smallest waiting queue and, finally, 4 for the one for which the sum of all waiting operating times is minimal.
Due to the large combinatorial aspect induced by this kind of optimization problem, classical mathematical methods seem inadequate. Consequently, a stochastic method, and, more particularly, a Genetic Algorithm (which has proven its efficiency for similar problems) has been used in this study (Goldberg 1994, Cartwright and Long 1993, Azzaro-Pantel et al. 1998). Let us recall that a GA computes a set of individuals (the population) and a set of biologically inspired operators that can generate new individuals from parents. According to the evolutionary theory, only the most suited elements of a population can survive and generate offspring, thus transmitting their biological heredity to new generations. The heredity is enclosed in the chromosomes of the individuals, which are subject to mutation and crossover mechanisms. The offspring generated by the genetic manipulation is evaluated via a fitness value measuring the quality of the solution represented by the chromosome. A selection process then retains the best individuals in the whole population for the next generation. The cycle is repeated, generally until a predefined maximum number of generations is reached. The key points of the GA will be presented in the following sections.
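The generic cycle just described can be sketched as follows (illustrative Python; the truncation-based parent selection is a simplification of the roulette-wheel selection used in the study, and the fitness callback stands in for the AD-HOC+ simulator):

# A minimal sketch (illustrative Python) of the GA cycle; in the study
# the fitness evaluation is delegated to the AD-HOC+ simulator.
import random

def genetic_algorithm(fitness, random_individual, mutate, crossover,
                      pop_size=80, p_cross=0.60, p_mut=0.12, generations=250):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]   # truncation stand-in for roulette wheel
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            child = crossover(a, b) if random.random() < p_cross else a[:]
            if random.random() < p_mut:
                child = mutate(child)
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

# Toy usage: maximise the sum of a 5-gene integer chromosome.
best = genetic_algorithm(
    fitness=sum,
    random_individual=lambda: [random.randint(0, 5) for _ in range(5)],
    mutate=lambda c: c[:-1] + [random.randint(0, 5)],
    crossover=lambda a, b: a[:2] + b[2:],
)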
3. Problem Formulation The problem can be formulated as follows. Given: i. a set of NP final products FPi, of ni batches of identical size Si released at the beginning of the campaign, ii. the respective ratios Ri = Qi/Q1 of each product, which are fixed relative to a referential final product FP1,
the objective is to maximize the global production level Q within the given time horizon. The optimization variables are the aforementioned parameters.
3.1. Parameter encoding
Parameter encoding is generally considered a key factor in GA success. Special attention has thus been given to the encoding of production plans (see the example of a genotype in Figure 1). The first string of the chromosome consists of an integer list between zero and an upper limit which depends on the required degree of precision. A similar coding has been adopted for the batch numbers. As expected, the encoding of priority levels between batches plays a crucial role and induces a major part of the involved combinatorics. Batch position in priority order is the first choice criterion between several batches which are likely to be loaded simultaneously. When considering the problem of production capacity maximization treated here, the number of genes encoding priorities is equal to the number of recipes considered in a given batch plant. Each gene corresponds to a recipe. The integer value of a gene corresponds to the recipe priority level. It must be noted that if two recipes have the same priority, the corresponding batches will be dissociated according to a loading heuristic rule. For instance, let us consider 3 final products (FP1, FP2, FP3) and 2 intermediate products (IP1 and IP2). The highest priority (1) is given to batches FP1 and IP1, an intermediate one (2) is allocated to IP2 and FP2 and, finally, the lowest one (3) to FP3. The corresponding coding is illustrated in Figure 1.
[Figure 1 shows the genotype of an individual: one gene for the production objective Q1, genes for the numbers of batches nFP1, nFP2 and nFP3, one priority gene per recipe (FP1, FP2, FP3, IP1, IP2), and genes for the heuristic rules h1 and h2.]
Figure 1. Example of a genotype encoding.
Note also that this individual may also be represented by the code [13413]. Consequently, the same individual will have several codes, which violates the unicity principle of individual coding. A double-elimination procedure has therefore been introduced to verify the encoding validity of an individual generated during the initial population creation phase or resulting from the random application of mutation or crossover operators. Finally, the heuristic rules used for conflict management are coded on the last genes of the chromosome. Each integer value corresponds to the number of the rule to be applied among the possible values.
3.2. Illustration
For lack of space, the example which illustrates the approach will not be presented in detail. The plant is characterized by several interconnected processes involving 16 equipment items for 5 different unit operations. Associated with each process are recipes that describe how the raw materials (7) are transformed into output products (3 FPs and 2 IPs). In addition, the facility is characterized by resources that are shared among processes.
3.3. Search space
The search interval for all parameters is presented in Table 1.
Table 1: Search space for parameters.

Parameter                        Integer coding             Search space
Qi: quantity of FPi              gene1: [0; 2000]           Qi = (10 x gene1) + 25000, i.e. Qi in [25000, 45000] ∩ N
ni: number of batches            gene2 to gene4: [0; 7]     ni = genek + 7, i.e. ni in [7, 14] ∩ N
pi: priority between recipes     [0; 5]                     541 possible orders
h1: batch to load                [0; 2]                     cf. Section 2
h2: equipment to use             [1; 4]                     cf. Section 2
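The decoding implied by Table 1 can be sketched as follows (illustrative Python; the exact gene layout and the offset applied to h1 are assumptions):

# A minimal sketch (illustrative Python; gene layout per Table 1, other
# details are assumptions) decoding a chromosome into parameters.
def decode(chromosome):
    """chromosome = [q, n1..n3 (3 genes), priorities (5 genes), h1, h2]"""
    q_gene = chromosome[0]
    n_genes = chromosome[1:4]
    p_genes = chromosome[4:9]
    h_genes = chromosome[9:11]
    return {
        "Q1": 10 * q_gene + 25000,     # Q1 in [25000, 45000]
        "n": [g + 7 for g in n_genes], # batch numbers in [7, 14]
        "priority": dict(zip(["FP1", "FP2", "FP3", "IP1", "IP2"], p_genes)),
        "h1": h_genes[0] + 1,          # assumed offset onto loading rules
        "h2": h_genes[1],              # equipment rule in [1; 4]
    }

params = decode([1246, 3, 6, 4, 1, 2, 3, 1, 2, 0, 4])
assert params["Q1"] == 37460 and params["n"] == [10, 13, 11]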
The treated example illustrates the combinatorial aspect of the production management optimization problem and justifies the implementation of a Genetic Algorithm. The order of magnitude of the associated search space is 4.43 x 10^9.
3.4. Genetic operators (Goldberg 1994)
The genetic crossover operator adopted here is the k-point crossover. The mutation operator is based on a random selection of a procedure to guarantee the diversity of the explored solutions (random gene replacement, gene permutation, ...). The classical Goldberg biased roulette wheel is used for selection.
3.5. Evaluation function
The performance criterion is founded on profit maximization within the campaign and can be expressed as follows:

P = Σ_{i=FP1..FP3} Bi·ni·Si = Σ_{i=FP1..FP3} Bi·Qi

The values of Bi are relative to the profit per product (in €). The evaluation function has to take into account the constraint Da ≤ H, meaning that the completion time Da of all products must not exceed the time horizon H. For this purpose, the following expression has been considered: F = P if Da ≤ H and F = P − ρ·(Da − H)² if Da > H. The value of ρ results from successive simulation runs in order to guarantee that P, Da and H have the same order of magnitude.
3.6. GA parameter setting
A design of experiments was carried out to select the GA parameters. Finally, the study was conducted with the following choice of parameters (see Table 2).

Table 2: GA parameter set.

GA parameter              Value
Population size           80 individuals
Crossover probability     0.60
Mutation probability      0.12
Number of generations     250
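The penalised evaluation function of Section 3.5 can be sketched as follows (illustrative Python; the value of the penalty coefficient rho is an assumption, since in the study it was tuned by successive simulation runs):

# A minimal sketch (illustrative Python) of the penalised evaluation
# function F; rho is a tuning constant (its value here is an assumption).
def evaluation(profit, completion_time, horizon, rho=1e2):
    """F = P if Da <= H, else F = P - rho*(Da - H)**2."""
    overshoot = max(0.0, completion_time - horizon)
    return profit - rho * overshoot ** 2

assert evaluation(7.3e6, 600, 617) == 7.3e6    # finishes within horizon
assert evaluation(7.3e6, 650, 617) < 7.3e6     # overshoot is penalised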
4. Typical results The five best solutions obtained are presented in Table 3.
Table 3: Best solutions obtained by the GA.

Sol.   FP1: S1 (t) x n1   FP2: S2 (t) x n2   FP3: S3 (t) x n3   h1   h2   Net revenue (10^6 €)
1      3508 x 10          1798 x 13          1063 x 11          1    4    7.307
2      4385 x 8           1798 x 13          1063 x 11          1    4    7.307
3      3508 x 10          1798 x 13          835 x 14           1    4    7.307
4      3508 x 10          1798 x 13          1169 x 10          1    4    7.307
5      4385 x 8           1798 x 13          1169 x 10          1    4    7.307

The priority orders between recipes found by the GA assign the highest priority levels to the shared intermediate products and to FP1; for the best solution, priority 1 is given to FP1, FP3, IP1 and IP2 and priority 2 to FP2 (cf. Table 4).
The results obtained point out that: i. In all the runs, Q1 is produced in 8 to 10 batches, Q2 is always produced in 13 batches, whereas Q3 varies from 11 to 14 batches. Since the search space for the number of batches lies between 7 and 14 lots, the best solutions are always obtained with a high number of batches. ii. Concerning priority orders, the shared intermediate products have a higher priority than final products for all solutions and generally have identical priority levels. Final product FP1 comes just after and has a higher priority index than the other FPs. iii. The heuristic rule relative to equipment choice which selects the lowest queue length is always chosen (h2 = 4). The batch which enters the workshop first is favoured for h1. The best solution #1, obtained at the 116th generation, is detailed in Table 4. Table 5 presents the comparison between the results obtained by optimization and those obtained by a trial-and-error procedure.
[Figure 2 plots the fitness of the best individual and the average fitness of the population against the generation number (0 to 200).]
Figure 2: Typical evolution of the GA.
Note that this procedure, based on successive runs of AD-HOC+, was first used to find a satisfying solution which leads to a high value of net revenue with respect to the time-horizon constraint, and which is then used as a reference to evaluate the performance of the GA. It can be seen that a substantial improvement is obtained (5.66%).
Table 4: Best solution obtained by the GA.

Production level             Q1 = 35080 t
Net revenue/month            P = 47 936 300 F
Production plan              10 batches of 3508 t (35080 t of FP1)
                             13 batches of 1798 t (23374 t of FP2)
                             11 batches of 1063 t (11693 t of FP3)
Priorities between recipes   1) FP1, FP3, IP1 and IP2; 2) FP2
Batch to load first          1 - the batch that entered the facility first
Selected equipment           4 - the item for which the sum of waiting operating times is minimal
Table 5: Comparison between optimization and empirical results.

Net revenue, empirical method   6 916 672 €
Net revenue, GA-DES             7 307 842 €
Comparison vs. empirical        +5.66%, i.e. 391 169 €
5. Conclusion The production strategy is particularly interesting for capacity analysis of a facility within a given time horizon. The proposed methodology is well-suited for proposing scenarios under a fluctuating demand, which is typically the case for batch plants. Besides, the approach can be used for a flexibility study, in order to evaluate the performance of various production planning scenarios.
6. References
Azzaro-Pantel, C., Bernal Haro, L., Baudet, P., Domenech, S., Pibouleau, L., 1998, A two-stage methodology for short-term batch plant scheduling: Discrete-Event Simulation and Genetic Algorithm, Computers & Chem. Eng., 22 (10), 1461-1482.
Berard, P., Azzaro-Pantel, C., Domenech, S., Pibouleau, L., 1999a, A General Framework for Simultaneous Optimization of Scheduling and Equipment Operating Conditions: A Case Study for Fine Chemistry Batch Plants, ECCE 2, Montpellier, France, 5-7 October.
Berard, F., Azzaro-Pantel, C., Pibouleau, L., Domenech, S., Navarre, D., Pantel, M., 1999b, Towards an Incremental Development of Discrete-Event Simulators for Batch Plants: Use of Object-Oriented Concepts, Computers & Chem. Eng. Supplement, S565-S568.
Cartwright, H.M., Long, R.A., 1993, Simultaneous Optimization of Chemical Flowshop Sequencing and Topology Using Genetic Algorithms, Ind. Eng. Chem. Res., 32, 2706-2713.
Goldberg, D.E., 1994, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley.
Papageorgiou, L., Pantelides, C., 1996, Optimal campaign planning/scheduling of multipurpose batch/semicontinuous plants. 1. Mathematical formulation, Ind. Eng. Chem. Res., 35, 488-509.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Managing Financial Risk in Scheduling of Batch Plants Anna Bonfill(+), Jordi Cantón(+), Miguel Bagajewicz(++), Antonio Espuña(+) and Luis Puigjaner(+)(#) (+) DEQ-ETSEIB-UPC, Av. Diagonal 647, P. G, 2° P., 08028 Barcelona, Spain, e-mail [email protected]; (++) University of Oklahoma, School of Chemical Engineering and Materials Science, 100 E. Boyd St., T-335, Norman, OK 73019; on sabbatical leave at ETSEIB; (#) Corresponding Author
Abstract The short-term scheduling problem of a multiproduct batch plant with uncertain product demands is addressed in this article. The problem is modelled using a two-stage stochastic approach maximising the expected profit and integrating inventory costs and penalties for production shortfalls. Additionally, risk is controlled using a recently developed methodology.
1. Introduction The scheduling problem in the chemical industry has been extensively studied, and alternative methodologies and problem statements have been proposed in the literature to address the combinatorial character of this problem (Shah, 1998). However, most of the formulations presented are based on nominal parameter values without considering the uncertain requirements arising after the operations are planned and scheduled. The uncertainty of a real environment entails a risk that initially translates into a cost but may lead to an infeasible situation. The aim of the present work is to provide a tool to support the decision making of developing a scheduling policy in an uncertain environment while controlling the variability over the possible scenarios. For this purpose, a deterministic MILP formulation is first presented and used as a starting point for modelling a two-stage stochastic optimisation approach. To control the variability over the different scenarios, the concept of risk is introduced. The simultaneous reduction of risk and maximisation of profit results in a multiobjective stochastic optimisation problem. Two alternative risk definitions, one related to financial risk introduced by Barbaro and Bagajewicz (2002a,b) and another considering the downside risk defined by Eppen et al. (1989), are considered and the latter is used as an objective. The outcome is a set of parametric solutions corresponding to different levels of risk.
2. Model The scheduling problem of a multiproduct batch plant under uncertain product demand is addressed with the aim to maximise the expected profit. The scheduling policy involves the number of batches to be produced for each product, the detailed sequence and the starting and final times of each operation performed.
To derive the proposed models, one production line with fixed assigned equipment units and a zero-wait transfer policy is assumed. The scheduling is addressed within a time horizon of one month. Finally, inventory costs and penalties for production shortfalls, proportional to the amount of underproduction, are adopted for each product.
2.1. Deterministic scheduling model
The deterministic scheduling model is formulated as a MILP problem based on a batch-slot concept. With this formulation, the time horizon is viewed as a sequence of batches, each of which will be assigned to one particular product. The proposed mathematical model is shown in equations (1) to (12). It maximises the profit (sales - inventory costs - production shortfalls) using X(b,p) as sequencing decision variable (see Nomenclature). A small penalty term on the sum of the initial times of all batches is added to force the schedule to finish with the smallest possible makespan.

Max P = Σ_p [V(p)·Pr(p) − I(p)·CI(p) − F(p)·Pe(p)] − α·Σ_{o,b} Tin(o,b)    (1)

subject to:

H ≥ Tfn(o,b)                      ∀o,b    (2)
T(o,b) = Σ_p X(b,p)·TOP(o,p)      ∀o,b    (3)
Tfn(o,b) = Tin(o,b) + T(o,b)      ∀o,b    (4)
Tin(o,b+1) ≥ Tfn(o,b)             ∀o,b    (5)
Tfn(o,b) = Tin(o+1,b)             ∀o,b    (6)
Σ_p X(b,p) ≤ 1                    ∀b      (7)
n(p) = Σ_b X(b,p)                 ∀p      (8)
Σ_p n(p) ≤ L                              (9)
V(p) = min{D(p), n(p)·BS(p)}      ∀p      (10)
I(p) = n(p)·BS(p) − V(p)          ∀p      (11)
F(p) = D(p) − V(p)                ∀p      (12)
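A compact sketch of this batch-slot MILP (illustrative Python; the use of the PuLP library is an assumption, data values are placeholders, and the min of eq. (10) is linearised as two upper bounds, which is valid because V(p) is maximised):

# A minimal sketch (illustrative Python with PuLP; placeholder data) of
# the batch-slot MILP of equations (1)-(12). Here L equals the number
# of slots, so eq. (9) is implied by eq. (7).
import pulp

products, slots, ops = ["P1", "P2"], range(4), range(2)
D = {"P1": 300.0, "P2": 200.0}; BS = {"P1": 100.0, "P2": 100.0}
Pr = {"P1": 5.0, "P2": 4.0}; CI = {"P1": 0.5, "P2": 0.5}; Pe = {"P1": 2.0, "P2": 2.0}
TOP = {(o, p): 1.0 + o for o in ops for p in products}
H, alpha = 50.0, 1e-4

m = pulp.LpProblem("batch_slot", pulp.LpMaximize)
X = pulp.LpVariable.dicts("X", (slots, products), cat=pulp.LpBinary)
Tin = pulp.LpVariable.dicts("Tin", (ops, slots), lowBound=0)
V = pulp.LpVariable.dicts("V", products, lowBound=0)
n = {p: pulp.lpSum(X[b][p] for b in slots) for p in products}            # eq. (8)
T = {(o, b): pulp.lpSum(X[b][p] * TOP[o, p] for p in products)
     for o in ops for b in slots}                                        # eq. (3)
Tfn = {(o, b): Tin[o][b] + T[o, b] for o in ops for b in slots}          # eq. (4)

m += pulp.lpSum(V[p] * Pr[p] - (n[p] * BS[p] - V[p]) * CI[p]
                - (D[p] - V[p]) * Pe[p] for p in products) \
     - alpha * pulp.lpSum(Tin[o][b] for o in ops for b in slots)         # eq. (1)
for b in slots:
    m += pulp.lpSum(X[b][p] for p in products) <= 1                      # eq. (7)
    for o in ops:
        m += Tfn[o, b] <= H                                              # eq. (2)
        if b + 1 in slots:
            m += Tin[o][b + 1] >= Tfn[o, b]                              # eq. (5)
        if o + 1 in ops:
            m += Tin[o + 1][b] == Tfn[o, b]                              # eq. (6), zero wait
for p in products:
    m += V[p] <= D[p]                                                    # eq. (10), linearised
    m += V[p] <= n[p] * BS[p]
m.solve(pulp.PULP_CBC_CMD(msg=False))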
2.2. Stochastic model
From the formulation presented in the previous subsection and to derive the two-stage stochastic program, the uncertain product demands are modelled by probability distributions and a set of scenarios is generated using sampling techniques. The number of batches to be produced and the corresponding sequence of products are considered as first-stage decisions, since it is assumed that they have to be taken at the scheduling stage, before the realisation of the uncertainty. The sales, inventory and production shortfalls are then recomputed at the second stage for each scenario generated. Therefore, equations (10) - (12) are rewritten to take into account the different scenarios s. The maximisation of the expected profit is then defined as:

EP = Σ_s Prob(s)·P(s) − α·Σ_{o,b} Tin(o,b)
   = Σ_s Prob(s)·Σ_p [V(p,s)·Pr(p) − I(p,s)·CI(p) − F(p,s)·Pe(p)] − α·Σ_{o,b} Tin(o,b)    (13)
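As an illustration of the second-stage recourse, the following sketch (illustrative Python; demand parameters are placeholders) recomputes equations (10)-(12) per scenario and averages the profit as in eq. (13):

# A minimal sketch (illustrative Python; placeholder data) of Monte
# Carlo scenario generation and second-stage recourse, for fixed
# first-stage batch numbers n(p).
import random

def second_stage_profit(n, bs, pr, ci, pe, demand):
    """Recompute sales, inventory and shortfall for one scenario."""
    profit = 0.0
    for p in n:
        produced = n[p] * bs[p]
        sales = min(demand[p], produced)       # eq. (10)
        inventory = produced - sales           # eq. (11)
        shortfall = demand[p] - sales          # eq. (12)
        profit += sales * pr[p] - inventory * ci[p] - shortfall * pe[p]
    return profit

def expected_profit(n, bs, pr, ci, pe, mu, sigma, n_scen=500):
    scenarios = [{p: max(0.0, random.gauss(mu[p], sigma[p])) for p in n}
                 for _ in range(n_scen)]
    return sum(second_stage_profit(n, bs, pr, ci, pe, d)
               for d in scenarios) / n_scen

ep = expected_profit({"P1": 10}, {"P1": 100.0}, {"P1": 5.0}, {"P1": 0.5},
                     {"P1": 2.0}, {"P1": 900.0}, {"P1": 100.0})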
2.3. Risk management
The concept of financial risk defined by Barbaro and Bagajewicz (2002a) and its alternative of downside risk (Eppen et al., 1989) have been incorporated into the stochastic model to support the decision making. Downside risk is finally used.
2.3.1. Financial risk
Managing financial risk involves the maximisation of:

EP = Σ_s Prob(s)·P(s) − α·Σ_{o,b} Tin(o,b) − Σ_i Σ_s Prob(s)·ρ(i)·z(s,i)    (14)

P(s) ≥ Ω(i) − U·z(s,i)    ∀s,i    (15)

2.3.2. Downside risk
The utilisation of the alternative downside risk concept requires the redefinition of the objective function and constraints as follows:

EP = μ·Σ_s Prob(s)·P(s) − α·Σ_{o,b} Tin(o,b) − Σ_s Prob(s)·δ(s)    (16)

δ(s) ≥ Ω − P(s)    ∀s    (17)
δ(s) ≥ 0           ∀s    (18)
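The downside-risk measure of equations (17)-(18) reduces to a probability-weighted expected shortfall below the profit target, as the following sketch (illustrative Python) shows:

# A minimal sketch (illustrative Python) of the downside-risk measure:
# the probability-weighted shortfall below a profit target Omega.
def downside_risk(profits, probs, omega):
    """DRisk = sum_s Prob(s) * max(0, Omega - P(s))."""
    return sum(p * max(0.0, omega - pr) for pr, p in zip(profits, probs))

# Example with three equiprobable scenarios and a target of 65,000.
drisk = downside_risk([60_000, 70_000, 80_000], [1 / 3] * 3, 65_000)
assert abs(drisk - 5_000 / 3) < 1e-6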
3. Results and Discussions The proposed methodology to incorporate risk management in the framework of two-stage optimisation for scheduling of multiproduct batch plants under uncertainty has been applied to a case study involving four production stages and five different products, adapted from Petkov and Maranas (1997). A total number of 500 independent scenarios were simulated using the Monte Carlo sampling technique from the given normal demand probability distributions. To assess the importance of the stochastic formulation, the deterministic model using the nominal demand values was first solved and compared with its stochastic counterpart. The results obtained are detailed in Table 1. The schedules are not shown for space reasons, but they are significantly different. In addition, it is noted that the makespan of the deterministic model is shorter, because the model does not generate inventory to hedge against adverse scenarios, as the stochastic one does. One important thing to notice is that the deterministic model predicts a solution that poorly represents the uncertain environment, i.e. the schedule obtained with the nominal parameters may be unrealistic when another demand is ordered. Indeed, although the profit of the deterministically generated schedule is higher than the expected value of the stochastically generated one, when the first one is used to face the uncertainty the expected value of profit drops 16%. The stochastic schedule performs better, as is also reflected in the shift to the left of the risk curve (Figure 1). This is one indication of the danger of using deterministic models in the belief that stochastic models will only "refine" the solution.
Table 1. Results of Deterministic and Stochastic Models.

Deterministic schedule:
Products     P1     P2     P3     P4     P5
n(p)         10     10     15     15     8
E[Sales]     726    941    7099   3086   1884
E[Inv.]      74     59     401    214    116
E[Penalt.]   56     123    25     468    168
Profit: 79875 (value in nominal scenario); E[Profit]: 67165; Makespan: 617

Stochastic schedule:
Products     P1     P2     P3     P4     P5
n(p)         10     10     16     16     8
E[Sales]     726    941    7313   3165   1884
E[Inv.]      74     59     687    355    116
E[Penalt.]   25     56     254    89     123
Profit: 76275 (value in nominal scenario); E[Profit]: 68539; Makespan: 662
The management of risk has been performed next, considering several profit targets (one at a time) along with various weight values reflecting different levels of risk exposure. At each profit target, a schedule with its corresponding risk curve is obtained. Risk values of schedules obtained by managing downside risk with a weight value μ = 0.001 are reported in Table 2. As expected, the expected profit value of the schedules approaches the maximum value obtained with the stochastic model (SP) as the profit target is incremented. From a comparison with the stochastic schedule it can be observed that reductions of downside risk of 30% and 16% are attained at target profits Ω = 60,000 and 65,000, respectively. Selected risk curves are depicted in Figure 2.
[Figure 1. Risk curves from the stochastic and deterministic schedules, plotted against profit (30000 to 70000).]
[Figure 2. Selected downside risk curves for schedules with different profit targets, plotted against profit (30000 to 90000).]
Table 2. Risk values of selected schedules when optimising at different profit targets Ω (SP: stochastic solution).

Schedule   Target Ω   E[Profit]   FRisk (%)   DRisk   FRisk (%) (SP)   DRisk (SP)
C          50,000     64,310      2           102     4.2              227
D          55,000     64,442      7           275     8.2              516
E          60,000     67,160      13          770     15.4             1,085
F          65,000     67,160      30.6        1,787   27.6             2,140
G          70,000     67,165      60.6        4,040   49.6             4,048
H          75,000     68,539      74.6        7,170   74.6             7,170
4. Conclusions The treatment of uncertainties in batch process planning and scheduling has been addressed as a two-stage stochastic optimisation problem with the incorporation of financial risk management, thus resulting in a multiobjective stochastic system. The proposed methodology has been applied to the scheduling of a multiproduct batch plant, and uncertainty in product demands has first been considered and modelled by probability distributions. A comparison between the deterministic and the stochastic formulations has been performed to assess the importance of the stochastic approach. The schedule obtained with the nominal parameter values will perform poorly in most of the probable scenarios. However, with the stochastic modelling a significant improvement is attained and schedules with a good performance over all the scenarios are obtained. The management of risk has been introduced next. From the results obtained it can be noticed that risk can only be managed at the expense of the expected profit. In addition, despite the flexibility introduced by the uncertain parameters and the inventory, different cost-dependent alternatives are required to be able to manage risk in a meaningful way. A variety of alternative schedules are obtained reflecting different risk exposure policies.
5. Acknowledgements Financial support received from the Spanish "Ministerio de Educación, Cultura y Deporte" (FPU research grant) and "Generalitat de Catalunya" (project GICASA-D), and from the European Community (project G1RD-CT-2000-00318 and project GRD1-2000-25172), is gratefully appreciated. The support of the Ministry of Education of Spain for the sabbatical stay of Dr. Bagajewicz at UPC is also acknowledged.
6. Nomenclature

Sets:
b: batches; i: profit targets; o: operations; p: products; s: scenarios.

Parameters:
BS(p): batch size; CI(p): inventory cost; D(p): demand; H: horizon time; L: maximum number of batches; Pe(p): production shortfall cost; Pr(p): sales benefit; Prob(s): probability; TOP(o,p): operation time; U: big value; Ω(i): profit target; ε(i): financial risk upper bound; ρ(i): weight value; δ(s): downside risk; μ: weight value; α: low (small) value.

Variables:
EP: expected profit value; F(p)/F(p,s): shortfall; I(p)/I(p,s): inventory; MS: makespan; n(p): number of batches; P(s): profit value; T(o,b): processing time; Tfn(o,b): final time; Tin(o,b): initial time; V(p)/V(p,s): sales of product p; X(b,p): batch b assigned to product p (binary variable); z(s,i): binary variable.
7. References
Barbaro, A. and Bagajewicz, M.J., 2002a, Managing Financial Risk in Planning under Uncertainty - Part I: Theory, submitted.
Barbaro, A. and Bagajewicz, M.J., 2002b, Managing Financial Risk in Planning under Uncertainty - Part II: Applications and Computational Issues, submitted.
Eppen, G.D., Martin, R.K. and Schrage, L., 1989, A Scenario Approach to Capacity Planning, Operations Research, 37, 517-527.
Petkov, S.B. and Maranas, C.D., 1997, Multiperiod Planning and Scheduling of Multiproduct Batch Plants under Demand Uncertainty, Ind. Eng. Chem. Res., 36, 4864-4881.
Shah, N., 1998, Single- and Multisite Planning and Scheduling: Current Status and Future Challenges, Foundations of Computer-Aided Process Operations, AIChE Symposium Series, 340.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
A Network Model for the Design of Agile Plants A. Borissova, M. Fairweather and G.E. Goltz Leeds Institute of Particle Science and Engineering, University of Leeds, Leeds LS2 9JT, UK
Abstract The network model provides the formalisation and theoretical basis for the generation of options for the design of agile production plants, and the subsequent selection of the most appropriate option when reconfiguring a plant to meet changing product and process demands. The model is developed by applying the concepts of modularisation, standardisation and reconfigurability of plants, and provides the foundation for the development of tools to aid in the analysis of the design of a reconfigurable plant, and to assist in the decision making process. The model represents a plant as a set of nodes and links, with each element of the network, its functionality and the interactions/operations within the network being described in mathematical terms. Plant reconfigurability is described as a transition from one state to another of a plant characterised by specific functionality and level of modularisation. Modularisation and reconfigurability analyses as part of the methodology for the design of agile plants use the network model to simulate the options for plant design.
1. Introduction The life-cycle of a speciality chemicals product is frequently much shorter than the life-cycle of the plant used to produce it. Pisano (1997) and Burgess et al. (2002) show that this requires a redefinition of the business environment. The introduction of a new product, and fitting the new process into a plant, often requires compromises that reduce the yield and simultaneously increase waste and its associated costs. One approach is to develop multi-product reconfigurable modular plants. These are built in modules off-site and assembled on-site in such a way that the modules can be removed, exchanged or re-built as the process requirements change. The design of agile plants seeks to map the equipment performance onto process requirements in terms of ranges rather than points. Standardisation and a systems approach are necessary at all stages, from the draft design to the operating plant. The main elements of the systems approach to process and plant design (Gilles, 1998) are the process and plant structures, process and plant integration, plant properties, modularity and multi-functionality, and systematic process and plant modelling. Process and plant design is a combinatorial task requiring the application of sophisticated methods for systems analyses and syntheses. A multi-level presentation of the process and plant structure is given in Table 1. Gilles (1998) considers the lower four levels of the process structure. The first level, considered in this paper, corresponds to the modular concept of the agile plant and represents an addition to Gilles' presentation.
Table 1. Process and Plant Structure Levels.

Level                  Components   Interconnections                                     Aggregates
Module (agile plant)   Modules      Connections/adaptors                                 Plant
Device                 Devices      Valves, pumps, compressors, heat exchangers, etc.   Modules
Phase                  Phases       Phase boundaries, membranes, valves, pumps, etc.    Devices
Bulk property          Continuum    Reactive, diffusive, convective flows               Phases
Molecular              Molecules    Molecular interaction forces                        Bulk properties/Continuum
Through the application of standardized structures and a systems approach to each level, a formal representation (abstraction) of the system can be created, thus allowing the system analyses to be performed on that representation rather than on the real system itself. At the stage of system design, this is the quickest way of simulating and testing the system. The development of a mathematical model of a reconfigurable process plant, representing the structured character and properties of a plant as a system, is the aim of this work. The model describes the plant as consisting of simple entities, or assemblies of simpler entities, that interact with each other in a specific manner. The models used to simulate plant operation should be flexible enough to consider nontrivial decisions, such as those relating to safety, which must be taken in the design of an agile plant. For the case of reconfigurable plant design, safety presents additional problems.
2. Network Model of a Process Plant Network models are widely used to model the behaviour of different complex systems, e.g. Cisternas and Swaney (1998), and Farkas and Li (2001). Bulsari (1995) and Lewin (1998) have applied network models to chemical engineering problems. The systems approach to a process plant leads to the concept of the network model of plant structure and operation, and it has been applied to modelling dedicated plants. This work is the first attempt to apply the network approach to the modelling of a reconfigurable plant. A process plant can be represented as a network of units (nodes, modules, clusters of modules) and links (connections) between them. This is shown in Figure 1, where the blocks represent the nodes (modules, or a set of closely-coupled modules) and the lines the connections.
Figure 1. Agile Plant as a Network.
3. Modules as Nodes in the Plant Network The module is a unit, preferably of standard size, with fastening, fitting and termination points, and is characterised by its process functionality, capacity range, and the utilities and process connections required for its operation. It can be transported and replaced by another module to achieve plant reconfigurability. The module functionality is defined by its "task". For instance, a module may contain a reactor with its peripheral units. The capacity of a module will not only depend on the equipment within the module, but also on the process streams and their properties. Modules can be classified by their functionality, size, etc. The modular concept in reconfigurable plant design requires consideration of many factors, including the infrastructure environment, business criteria and constraints. Thus the design of a reconfigurable plant is a multi-objective task and a systems approach is essential for identifying viable options. Applying the building block (or Lego™) concept, a plant could be divided into substructures, including different numbers of modules. Eventually all modules could be divided into their basic components. The smallest unit that cannot be sub-divided is the plant item. Thus the smallest module contains just one plant item. On the basis of this hierarchical structure of the plant, modules are characterised by their level of integration (rank of modularisation). The two extreme ranks correspond to the following cases: where the whole plant is one module (1st rank of modularisation); and where each unit of the plant is housed in a separate module, i.e. extreme modularisation. The rank of the module is considered at two levels - external and internal. The external rank characterises the level of decomposition of the whole plant. It is the number of modules in the plant, and thus also represents the rank of the plant. It has the same value for each module in the plant. The internal rank of a module characterises the number of units in that module. It is a specific number for each module.
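These notions of external and internal rank, and of the plant as a network of modules and connections, can be sketched as follows (illustrative Python; class and attribute names are assumptions):

# A minimal sketch (illustrative Python; names are assumptions) of a
# plant as a network of modules (nodes) and connections (links).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Module:
    name: str
    functions: frozenset      # module functionality, e.g. {"R"} for reaction
    units: int = 1            # internal rank u: number of units housed

@dataclass
class Plant:
    modules: list = field(default_factory=list)
    connections: list = field(default_factory=list)   # (module_i, module_j, kind)

    @property
    def rank(self):
        """External rank r = number of modules in the plant."""
        return len(self.modules)

    @property
    def functionality(self):
        """Union of the module functionalities."""
        if not self.modules:
            return frozenset()
        return frozenset().union(*(m.functions for m in self.modules))

plant = Plant()
plant.modules += [Module("M1", frozenset({"R"}), units=3),
                  Module("M2", frozenset({"S"}))]
plant.connections.append(("M1", "M2", "material"))
assert plant.rank == 2 and plant.functionality == {"R", "S"}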
4. Connections as Links in the Plant Network All the modules installed in the plant need to be connected to create the plant network. The concept of reconfigurability sets special requirements for the connections between the nodes of the network. There are two kinds of connections in the plant network, depending on whether they are inside or outside the modules, i.e. internal and external connections. For reconfigurability purposes, however, every internal connection could become an external one in a decomposition of the module, and vice versa. This makes the standardization of all connections highly desirable. There are three main kinds of flows in the plant network: material, utility and information flows. Material flows transfer materials within and between the modules. Utility flows are steam, cooling water, compressed air, nitrogen, electricity, etc. Information flows are the links between the modules and the computers used to monitor and control the processes in the plant. For the purposes of reconfigurable plant design, the connections between modules reflect the range of properties of the material flows, and should allow for changes of the properties of both material and utility flows. To minimize the lead-time to reconfigure the plant, connections should be standardized in the range necessary for the family of products and processes considered, and for their scales of production.
5. Mathematical Formulation of the Network Model The network model is a formalised representation of a modular plant as a network of modules and connections. Each element of the network (module, connection, plant), its functionality and the concepts the network is based on are described mathematically. The functionality of the whole network and of each module is described as a set of operations (functions, tasks) performed in the plant and by the module. The functionality of each module depends on the functionalities of the units included in it. Let the full range (set) of plant functions (operations) be denoted by F, and the values of the F elements by Fi (for i = 1, ..., Nf), where Nf is the total number of functions, hence

F = ∪i Fi    (1)

The following notation is used for the F elements: R for reaction, S for separation, etc. The Boolean symbols ⊆ and ∪ are used to denote the sub-set and union (summation) operations. The functionality of a module FM is a sub-set of functions of the general set F, i.e. FMi ⊆ F (i = 1, ..., r), where r is the number of modules (rank of the plant), which is defined by both the product and process envelopes. If N modules have the same functionality FM, then the functionality of the network of these modules is F = FM. Thus the functionality of a network of three crystallisers (CR) can be represented by the relation F = CR ∪ CR ∪ CR. In terms of tasks, however, their number is 3. Modules are represented by the expression M(r, u, nt), where r is the external rank of the module (number of modules in the plant; rank of the plant), u is the internal rank of the module (number of units in the module, used as a characteristic of the complexity of the module), and nt is the number of elementary tasks performed by the module. Tasks are denoted by T(r, F), where r is the rank of a task and F is the set of functions (or sub-tasks). The task is considered in the context of both a module and a plant. The task of a module is the set of functions performed by the equipment units in the module. Equipment tasks are derived from the process definition. A set of equipment tasks may correspond to one or more process tasks, and vice versa. The task of a plant is the full set of functions leading to the general objective (set of process tasks) of the plant. The rank of the task is the number of the functions performed. A plant is denoted by P(r, F), where r is the rank of the plant and F the full set of tasks performed by the plant. The following notation is used to represent the connections in the plant:

C_int(i, j; u_ij, f_ij) - internal connection between the i-th and j-th units inside a module;
C_ext(i, j; u_ij, f_ij) - external connection between the i-th module M_i(r, u_i) and the j-th module M_j(r, u_j) in the plant,

where r is the external rank of the modules, u_ij the quantitative characteristic of the connection (i.e. flow rate), and f_ij the qualitative characteristic of the connection (type of connection).
p/•,,«,i ^
_!:"?.
_
HJ [U [A]---[±] Plant 2
,_i"z=i_
'"2'"?,
[±] /57---Z57Z37 Figure 2. Reconfiguring Plant 1 to Plant 2.
6. Reconfigurability Reconfiguring a process plant is an operation of changing the rank and/or functionality of a plant (modules in the plant). The rank of the plant/modules is the number of modules/equipment units, and the rank of a task is the number of functions performed. Equation (2) is the mathematical expression of plant reconfigurability: p'^i^p'^i
(2)
Fl = / l + / 2 + / 3 + •••+/„,
(3)
Fl =/l" +f2 +f3 +
(4)
•••+K
Equations (3) and (4) consider the functionality at the elementary task level. Figure 2 illustrates the case when a plant is reconfigured from state 1 to state 2. The functionality of the first state Fj represented by nj tasks is replaced by the second one F2 performing ^2 tasks. The following cases are possible depending on whether the functionality of the plant is maintained. Case A corresponds to reconfigurability aimed at improving the efficiency of the plant, where the set of tasks is not changed. Case B corresponds to reconfigurability leading to a different set of tasks (i.e. another product). Case A - F; = F2: If r/ ^ r2, typically this means an increase or decrease in the number of modules without changing the functionality. If rj = ri the external rank of the plant does not change, i.e. the number of modules is the same, but some may be replaced by others, e.g. by improved equipment units in new modules. Case B - Fj ^ F2. If r; ^ r2, this is the case where both functionality and plant rank change. Plant reconfiguration is performed through changing the number of modules, leading to a different set of tasks from the existing one (e.g. new product). If ry = r2, the plant functionality may be changed without changing its rank, e.g. by using the same equipment to produce a different product, and applying a different set of elementary tasks.
52
7. Mathematical Operations with Modules and Network Analysis A high-level mathematical language, based on the main arithmetic operations and their physical interpretation in terms of the process plant, includes the following operations: addition, subtraction, multiplication, division, substitution and removal. These mathematical operations correspond to reconfiguring of the plant modular system so that the external rank r and/or the internal rank (number of units) u and functions of the modules (the number n and the functionality F), change. Substitution of modules can be of tv^o kinds: with a new or with an existing module. It is an operation of consecutive removal of one and addition of another module. Two types of network analyses have been proposed, namely modularisation analysis and reconfigurability analysis. Modularisation analysis includes procedures for: determination of the external and internal rank of the modules; determination of the functionality of the modules; setting of criteria for evaluation of the modular plant; visualisation of the modular plant; modular plant option generation and selection. The reconfigurability analysis includes procedures for: setting of the new plant requirements; selection of criteria for the new modular plant estimation; selection of the strategy for reconfiguring (degree of reconfigurability); virtual reconfiguring of the existing modular plant; option evaluation according to the chosen criteria; best option selection; virtual construction of the new modular plant; verification by simulation of the new plant operation. The mathematical operations with modules are used to represent the reconfigurability concept. Not all of the mathematical operations are applicable in practice. For example, division of a 1-unit module is not possible.
8. Conclusions The network model is a formalised, mathematical description of a modular plant which embodies the concepts of modularisation and reconfigurability. It can be used in the generation of options for the design of agile plants, and the subsequent selection of the most appropriate option when reconfiguring a plant to meet changing product and process demands. Modularisation and reconfigurability analyses used in the design of agile plants employ the network model to simulate the options for plant design.
9. References Bulsari, A.B., 1995, Neural Networks for Chemical Engineers, Elsevier, Amsterdam. Burgess, T.F., Hwarng, H.B. Shaw, N.E. and de Mattos, C , 2002, Eur. Man. J., 20, 197. Cisternas, A.L. and Swaney, R.E., 1998, Ind. Eng. Chem. Res., 37, 2761. Farkas, I. and Li, P., 2002, Proc. 4* Int. Conf Cognitive Modelling, Fairfax, Virginia. Lewin, D., 1998, Comp. and Chem. Eng., 22, 1387. Gilles, E.D., 1998, Chem. Eng. Technol., 21, 2. Pisano, G.P., 1997, The Development Factory: Unlocking the Potential of Process Innovation. Harvard Business School, Boston.
10. Acknowledgement The authors wish to thank the EPSRC for their financial support of the work described.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
53
A Vision of Computer Aids for the Design of Agile Production Plants A. Borissova, M. Fairweather and G.E. Goltz Leeds Institute of Particle Science and Engineering, University of Leeds, Leeds LS2 9JT, UK
Abstract
Chemical engineers may have to determine the functional requirements for a plant to produce a range of products, rather than just one. This gives rise to the need for process design tools that accept ranges rather than point values. In addition, to reconfigure a plant quickly and reliably, a good information model and agreed standards are essential. Existing solid modelling tools can then be used to visualise the changes and ensure that any clashes are detected and resolved before the physical changes are implemented on the plant. This paper describes a combinatorial approach to process, equipment and plant design that is capable of encompassing all these requirements, and contrasts this method with traditional approaches. It is shown that traditional design methods may miss options that are identified using the combinatorial approach. Options identified by the latter approach may also lead to novel types of processes and equipment. Application of the new methodology is described in terms of scanning the multidimensional space describing the process, equipment and plant attributes. The new approach is particularly appropriate for the design of agile plants where decisions have to be made as to how best to reconfigure an existing facility to manufacture a new product.
1. Introduction
According to Pisano (1997) the business environment in which the pharmaceutical, agrochemical and speciality chemical industries operate places a high premium on time to market due to the limited life cycle of their products. Thus, "learning before doing" rather than "learning by doing" becomes important. Business decisions are subject to considerable uncertainty, and computer-based tools can assist in obtaining agreement within a business, even though the final decision is based on "soft" information. Shaw et al. (2001) report on the needs of the UK speciality chemicals industry and the decision processes for the type of plant and its likely capacity. Agile plants that can easily be reconfigured to cope with changing products are likely to be favoured over dedicated plants. The challenge is to design plants that are efficient and either have a wide operational envelope or can be changed over rapidly. Modular plants, where whole modules can be replaced, or pipeless plants may be appropriate. Alternatively, learning from robotic, combinatorial chemistry may be translated to production by replicating rather than scaling up. Computer aids to help make the right decisions are needed.
Traditional methods of process and plant design are often semi-empirical and depend primarily on the ingenuity and experience of engineers. Clark et al. (2000) have indicated how previous designs can be linked to simulation codes. Eggersmann et al. (2002) addressed the issue of the management and re-use of design models. Experience has shown that re-use of P&IDs often leads to over-instrumentation and to the exclusion of the best modern technology. Rudd and Watson (1968) and Douglas (1988) have shown the value of techniques such as process synthesis, while Linnhoff et al. (1985) have demonstrated the benefits of pinch and network technology in obtaining an optimal design for the processes. The advent of modern computers has permitted equipment design based on computational fluid dynamics and molecular modelling. Traditionally, process and plant design has been considered practical in nature, with little fundamental content. Designs have been based on empirical approaches such as the minimum number of units, maximal utilisation of raw materials and energy, and dedicated equipment units that decrease the freedom of the design. The generation of options is, however, a key activity in the design of a plant. Rudd and Watson (1968) describe the creation of plausible alternatives as consisting of three steps: definition of the primitive problem, gathering of cognate facts, and creating specific problems. Douglas (1988) and Biegler et al. (1997) proposed methodologies for option generation. At the equipment level it is well known that engineers tend to favour equipment with which they are familiar, as this increases confidence. At the plant level, the layout and detailed design are frequently restricted to a specific process, even if the design remit states that processes for other products may be needed in the future. In the case of both equipment and plant design, the use of point values as input parameters, rather than ranges of the values of the design parameters, has restricted creativity.
2. Combinatorial Approach to Process and Plant Design
The combinatorial approach to process and plant design is based on diversity, and this has already been applied in chemistry and materials science, for instance by Engstrom and Weinberg (2000) and by Gordon and Kerwin (1998), to produce millions of potential new chemicals, drugs, sensors and superconductors. Only a few applications in chemical engineering, e.g. solvent selection by Kim and Diwekar (2002), have been identified. The combinatorial approach has been applied to production planning, the selection of manufacturing sites and to scheduling by, for example, Verwater-Lukszo and Otten (1994), Shah and Pantelides (1996) and Pekny (2002). Physico-chemical data are usually treated as point values in design. This ignores the uncertainty in the data, and the fact that different products may have to be produced in the future. The problem is exacerbated by the complexity of physico-chemical processes involving multi-phase, multi-particulate and multi-component systems, with multi-level interactions within them. The formal design of a complex processing plant, involving several hundred variables with complex interactions, is a formidable, if not an impossible, task. Complexity can be reduced by dividing the process into more manageable steps, identifying key variables and concentrating work where effort will give the greatest benefit. This will not, however, necessarily lead to the optimal design for the whole process.
Figure 1. Option generation, from the vision to the real process/plant.
Figure 2. Scanning variable space.
Combinatorial design is defined as a theoretical approach to the design of a process and plant. It creates the full set of options for the process and plant without considering the limitations of existing practice. It is applied as a first stage in the design procedure to create the initial process and plant design, or combinatorial, environment. It is not limited either by technical or by business requirements or restrictions. Combinatorial process and plant design is a methodology for the rapid generation of all feasible options for the desired process and plant, and for creating predictive models of their performance and operation. The combinatorial environment is used as a basis for a subsequent Option Generation and Selection methodology for process and plant design that considers the limitations of existing practice (e.g. process parameters and phases, costs, materials of construction, dimensions, etc.). Combinatorial design can also be applied in the research phase of process and plant design as a tool for the generation of ideas and new design concepts. Supplied with the necessary computer software for the quick generation and evaluation of all possible options, combinatorial design will then be an efficient tool for the design of an agile plant. A General Set of Options (GSO) for the process and equipment design is created mechanistically on the basis of the design variables (see below) in defined ranges that include process routes and equipment diversity. This set of options represents the design envelope. The GSO is an analogue of the "library" used in combinatorial materials science by Gordon and Kerwin (1998), defined as "the array or set of materials to be investigated". The magnitude of this set depends on the complexity of the process and on the ranges and steps (precision or granularity) used in defining the design parameters. The initial vision of a process/plant is often in the form of a concept of the future process or plant. Figure 1 represents the development of the process/plant from the conceptual model to the real options. The concept, represented by the top plane, may be the result of technology push or market pull. When the concept is projected onto the requirements plane (denoted by α) the image changes. Even if all the requirements are fulfilled, it might still not have the same shape. Conceptual design should answer the
question "how does the process or plant concept project onto the requirements plane". The required plant performance envelope, including ranges of equipment characteristics, is defined in the α plane. Further projection leads to the creation of the real plant configurations/options in the β plane. Different options in the β plane are possible, some of which may not be feasible. The combinatorial approach requires formalisation. The creation of options is based on changing Design Variables, which are groups of process, plant and equipment characteristics that may be discrete (e.g. type of plant, solvent, etc.) or continuous (e.g. temperature, pressure, etc.). Combinatorial design creates the GSO covering the different values of the Design Variables. Continuous Design Variables are split into discrete intervals. The size of the intervals represents the granularity of information and depends on the level at which the process and plant are considered. The full set of options is then obtained mechanistically by scanning the multi-dimensional space of the Design Variables, as sketched in the code below. Figure 2 illustrates the method of combinatorial design. The hexagon represents the initial design, which is satisfied by the seven options shown as stars. An agile plant, however, may also need to accommodate different levels of the Design Variables, depicted by the pentagon. The possible options are shown as 16-pointed stars. Only three options overlap. The bigger the initial space of the Design Variables, the greater will be the agility of the process design. The tools for combinatorial design are based on computer simulations involving mathematical models, databases and the management of information. Together with the emergence of cost-effective and time-efficient computing systems, computational simulations of industrial processes are now a realistic alternative to either the traditional experimental or analytical methods of investigation. A new paradigm in process engineering is therefore emerging, with Edgar (2000) suggesting that computer simulations should be performed even at the discovery phase.
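A minimal sketch of this mechanistic scanning, assuming illustrative Design Variables and granularities (none of the names or levels below come from the paper):

```python
import itertools

def discretize(lo, hi, step):
    """Split a continuous Design Variable into discrete levels (granularity = step)."""
    n = int(round((hi - lo) / step))
    return [lo + k * step for k in range(n + 1)]

# Illustrative Design Variables: discrete (plant type, solvent) and continuous (T, P)
design_variables = {
    "plant_type": ["dedicated", "modular", "pipeless"],
    "solvent": ["water", "toluene"],
    "temperature_C": discretize(20, 100, 20),   # granularity: 20 C
    "pressure_bar": discretize(1, 5, 1),        # granularity: 1 bar
}

# The General Set of Options (GSO): every combination of variable levels
names = list(design_variables)
gso = [dict(zip(names, combo))
       for combo in itertools.product(*design_variables.values())]
print(len(gso))  # 3 * 2 * 5 * 5 = 150 options before feasibility screening

# Option generation and selection then filters the GSO against practical
# restrictions, e.g. an (invented) solvent/temperature incompatibility:
feasible = [o for o in gso
            if not (o["solvent"] == "water" and o["temperature_C"] > 80)]
```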
3. Combinatorial Plant Design
Combinatorial plant design defines the plant Design Variables; examples are type of plant, plant layout and connections. While it is generally true that no two plants are identical, and therefore the number of plant design options approaches infinity, it is possible to distinguish broad classes of plants. No classification can be guaranteed to be complete, but some suggested types of plant are given below:
1. Dedicated plants: designed to produce a specific product or products throughout the life of the plant. Typically these are plants for bulk chemicals.
2. Versatile plants: designed with fixed plant items, but with the expectation that the piping connecting the plant items will be modified.
3. Re-configurable plants: designed with interchangeable plant items.
4. Modular plants: designed as interchangeable modules containing groups of equipment with pre-defined interfaces to utilities, control and process connection points.
5. Pipeless plants: where vessels are moved between processing stations rather than moving material between vessels.
Agile plants can accommodate the change from one product to another quickly in response to changing market needs. Agility generally increases from type 1 to type 5. Taking the design of modules as an example, it becomes clear that this is a combinatorial task. Key variables are the functionality, dimensions and the equipment layout within the module, as well as the connections between process modules and to utility modules. Reconfiguring an existing plant for a new product or process is also a combinatorial task. Such a systematic approach aids the decision on the functionality required of each module. Discussion of the standardisation of interchangeable modules (dimensions of the module and the size, type and location of all the external connections) is, however, beyond the scope of this paper.
4. Combinatorial Equipment Design One of the challenges for the design of process equipment is to create the conditions for an agile plant. A way of meeting these requirements is to apply the combinatorial approach to improve the basis for option selection through exploration of a wide range of both existing and novel (unknown) equipment units. Equipment Design Variables considered by combinatorial equipment design include: types of equipment, geometries, materials of construction, etc. The requirements for the type of equipment are derived from the process duty. They can, however, be met in a variety of ways depending on the physico-chemical properties, the driving forces and the fluid dynamics. At the conceptual level the requirement may be to remove heat. At the next level many options may exist. Thus, heat can be removed by direct contact, by making use of the latent heat of vaporisation or by indirect heat transfer. Each of these more specific duties leads to a range of equipment with their associated properties. No list of equipment will ever be exhaustive as new devices are developed to meet particular needs. Multi-functional equipment in particular may extend the range of applications and thus improve agility.
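The duty-to-equipment expansion described above can be pictured as a small tree search; the duty tree below is a hypothetical fragment for the heat-removal example, not an exhaustive catalogue:

```python
# Hypothetical fragment of a duty tree: conceptual duty -> mechanisms -> equipment
duty_tree = {
    "remove heat": {
        "direct contact": ["quench column", "spray tower"],
        "latent heat of vaporisation": ["flash drum", "evaporative cooler"],
        "indirect heat transfer": ["shell-and-tube exchanger", "plate exchanger"],
    }
}

def equipment_options(duty, tree):
    """Flatten the tree: every equipment class able to meet the conceptual duty."""
    return [(mechanism, unit)
            for mechanism, units in tree.get(duty, {}).items()
            for unit in units]

for mechanism, unit in equipment_options("remove heat", duty_tree):
    print(f"{mechanism:32s} -> {unit}")
```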
5. Challenges for the Future Apart from developing the tools for combinatorial design, including option generation and selection and a network model to aid both the initial design and subsequent modification of reconfigurable plants, other challenges remain. These include efficient and transparent inter-operability between different software packages, and tools to obtain the maximum information from experimental work at an early stage. Data also need collecting, assessing and presenting in a form that is easily assimilated. Many products are large molecules, so that they are solids under normal conditions. Solids formation is still less well understood than processes involving fluids and there is a lack of reliable models. Furthermore, experimental physico-chemical data that underpin almost all chemical engineering calculations are rarely available. Good, validated estimation methods are needed. Many of the products are biologically active. Predictive models of the toxicity of products, intermediates and by-products in the pure state and as mixtures would be useful at an early stage, and may help to decide between competing process options.
58
If robustly designed laboratory equipment can easily be replicated, then it may be possible to by-pass the scale-up step that has traditionally caused problems.
6. Conclusions
The theoretical basis for the development of tools to aid engineers in generating and selecting options for process, equipment and plant design has been described in terms of combinatorial design. The methodology presented minimises the risk of overlooking attractive options. It can be implemented as a computer-aided option generation and selection tool that considers all options at the appropriate level of detail. A knowledge-based information model should be employed to identify all possible options available for reconfiguring an existing plant to manufacture a new product with the fewest, quickest or cheapest changes. This uses the concept of scanning the multi-dimensional space of the Design Variables to create the options. In addition, the ideas of reconfigurable modular plants can be extended using both theoretical and practical approaches. Application of the concepts described is not restricted to design, but can also be used defensively to assess the risk of a competitor developing a more cost-effective process. Some challenges to the CAPE community have been identified.
7. References
Biegler, L.T., Grossmann, I.E. and Westerberg, A.W., 1997, Systematic Methods of Chemical Process Design, Prentice Hall, Upper Saddle River.
Clark, G., Rossiter, D. and Chung, P.W.H., 2000, Trans. IChemE Part A, 78, 823.
Douglas, J.M., 1988, Conceptual Design of Chemical Processes, McGraw-Hill, New York.
Edgar, T.F., 2000, Chem. Eng. Prog., 96, 51.
Eggersmann, M., von Wedel, L. and Marquardt, W., 2002, Chem. Ing. Tech., 74, 1068.
Engstrom, J.R. and Weinberg, W.H., 2000, AIChE J., 46, 2.
Gordon, E.M. and Kerwin, J.F., 1998, Combinatorial Chemistry and Molecular Diversity in Drug Discovery, John Wiley and Sons, New York.
Kim, K.-J. and Diwekar, U.M., 2002, Ind. Eng. Chem. Res., 41, 1285.
Linnhoff, B., Townsend, D.W., Boland, D., Hewitt, G.F., Thomas, B.E.A., Guy, A.R. and Marsland, R.H., 1985, User Guide on Process Integration for the Efficient Use of Energy, The Institution of Chemical Engineers, Rugby.
Pekny, J.F., 2002, Comp. and Chem. Engng., 26, 239.
Pisano, G.P., 1997, The Development Factory: Unlocking the Potential of Process Innovation, Harvard Business School, Boston.
Rudd, D.F. and Watson, C.C., 1968, Strategy of Process Engineering, John Wiley and Sons, New York.
Shah, N. and Pantelides, C.C., 1996, Ind. Eng. Chem. Res., 31, 1325.
Shaw, N.E., Burgess, T.F., Hwarng, H.B. and de Mattos, C., 2001, Int. J. Operations and Production Management, 21, 1133.
Verwater-Lukszo, Z. and Otten, G., 1994, J. Proc. Cont., 4, 291.
Synthesis of Integrated Distillation Systems
Jose A. Caballero, Juan A. Reyes-Labarta and Ignacio E. Grossmann
Department of Chemical Engineering, University of Alicante, Ap. Correos 99, E-03080 Alicante, Spain. Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
Abstract
The paper presents a novel superstructure-based optimization procedure for designing a sequence of distillation columns to separate a non-azeotropic n-component mixture. The separation can be performed using configurations ranging from conventional to fully thermally coupled distillation sequences, going through all the intermediate possibilities of partial thermal integration. A two-stage procedure is proposed: in the first stage a feasible sequence of tasks is selected, with an approximation to the total annualized cost that, under some assumptions, produces a lower bound to the total annualized cost; in the second stage the configuration in actual columns is selected among all the thermodynamically equivalent configurations. A case study is presented in order to illustrate the procedure.
1. Introduction
The fully thermally coupled (FC) configuration has been known for over fifty years (Wright, 1949). Different theoretical studies have shown that thermally coupled configurations can save between 10 and 30% in total and operating costs (see, for example, Glinos and Malone, 1988, or Triantafillou and Smith, 1992). Petlyuk (Petlyuk et al., 1965) was the first to analyze the FC system and found it thermodynamically attractive due to the reversible mixing of feeds with internal column streams. Consider a mixture consisting of three components A, B and C, where A is the lightest and C the heaviest. For some mixtures, for instance when B is the major component and the split between A and B is as easy as the split between B and C, this configuration (Figure 1) has an inherent thermal inefficiency (Schultz et al., 2002).
Figure 1. Concentration profile for component B in a sequence AB/C, A/B (molar fraction of B vs. trays, bottom to top). Remixing of B occurs in column 1.
In the first column, the concentration of B reaches a maximum at a tray near the bottom. On trays below this point, the amount of C continues increasing, diluting B. Energy has been used to separate B to a maximum purity but, because B is not removed at this point, it undergoes remixing and is diluted to the concentration at which it is removed in the bottoms. This remixing is inherent to any separation that involves an intermediate-boiling component, and the result can be generalized to an N-component mixture. Halvorsen and Skogestad (2001) proved that the minimum energy consumption for a sequence of columns is always obtained with the FC configuration. Petlyuk and co-workers generalized the FC system to n components. They defined an FC system as one with n(n-1) sections for an n-component mixture, only one condenser and one reboiler for the entire system, all the components (except the most and least volatile) distributing between the top and the bottom of each column, and all of the products delivered by a main column that is composed of (n-1) binary separation columns built on top of each other. This definition has later been extended and generalized (Agrawal, 1996). In an FC system it is not necessary that all the products are produced by a main column; the number of column sections ranges from (4n-6), the minimum number of column sections for an FC system, to n(n-1), and the condenser and reboiler do not necessarily belong to the same column. Although the total energy consumption is always lower in FC systems than in any other configuration, there are some drawbacks. The number of total sections increases with the reduction in the number of heat exchangers, increasing the total number of trays and hence the total cost (in some cases compensated by the integration in a single shell). The energy must be supplied under the worst conditions, added at the highest temperature in the reboiler and removed at the lowest temperature, preventing in most cases the use of utilities like medium- or low-pressure steam. Operation is more difficult due to the large number of interconnections among columns. As the knowledge of these systems increases and operational problems are solved, interest in thermally coupled systems is being renewed. However, it is clear that between conventional systems (each column with a condenser and a reboiler) and fully thermally coupled systems there is a large number of possibilities, and the optimum probably lies among them. Here we present a mathematical programming based procedure for screening among these possibilities. It is a two-stage procedure: in the first, a sequence of tasks is selected; in the second, the best configuration of columns among those thermodynamically equivalent is selected. The problem addressed in this paper can be stated as follows. Given a number of components that do not form azeotropes, to be separated into a predefined set of products, the objective is to find an appropriate and cost-effective separation scheme. This scheme includes conventional columns and partially linked distillation systems, with any number of heat exchangers between 2 and 2(n-1), that can produce prefractionators, sloppy splits, side columns, etc. Without loss of generality, the products are listed in decreasing order of volatility.
2. Integrated Distillation Model
Due to the nature of the problem there is not a one-to-one match between separation tasks and the columns that perform a given separation. Moreover, a given feasible sequence of separation tasks can be performed by different sequences of thermodynamically equivalent columns. In this paper we present a task-based superstructure with characteristics intermediate between the pure State Task Network (STN) (Yeomans and Grossmann, 1999), in which all the separation tasks are explicitly enumerated, and a superstructure in which the equipment is determined beforehand. Figure 2 shows the superstructure for a mixture of 5 components. Although the picture by itself is not new (for example, it corresponds to the classical Sargent and Gaminibandara (1976) structure), some important aspects must be remarked that matter at the modelling level. First, individual separation tasks are not considered explicitly. For instance, for the mixture ABCD we consider the most general approach, "separate A from D", in which intermediate-boiling compounds may or may not distribute. However, at the level of the relations among tasks it is important which separation has taken place. Second, if a group of compounds (a state) does not exist in the final solution, it is treated as a simple bypass in the superstructure (see Figure 2). And third, heat exchangers are not explicitly included in the superstructure, but are considered at the modelling level. In particular, they appear in the final cost; the flow transfer among pseudo-columns (tasks) is very different depending on whether or not there is a heat exchanger; and the presence of a heat exchanger associated with a final product is related to the minimum and maximum number of separation sections. This approach has proved very robust and computationally efficient.
Figure 2. Superstructure for a mixture of 5 components (XX: state does not exist, i.e. bypass). Note that if a state (group of components) does not exist it becomes a bypass. State ABCDE, for instance, can produce the following separation tasks: A/BCDE; AB/BCDE; AB/CDE; ABC/BCDE; ABC/CDE; ABC/DE; ABCD/BCDE; ABCD/CDE; ABCD/DE; ABCD/E.
Very important, and related to the second point of the previous paragraph, is the question of how to generate feasible sequences; the enumeration pattern is sketched in the code below. Logical connectivity relationships are not enough. It is also convenient to forbid that a given separation be produced twice, or that a given state be produced simultaneously by two rectifying or by two stripping sections. Although some authors (Rong and Kraslawski, 2001) considered these possibilities, they are always suboptimal from the point of view of energy, and they increase the total number of tasks. They could be justified if different concentrations of the same split are considered or if some kind of multi-effect heat integration is of interest (in our approach, these constraints can easily be relaxed to allow these possibilities if desired). Other configurations, like that proposed by Kaibel (1987), need more energy than other fully or partially thermally coupled arrangements; however, due to the reduction in the number of sections they could be interesting in some cases. A set of logical relationships among tasks in terms of binary variables can be included in order to fulfil all these conditions (Caballero and Grossmann, 2001). Operating conditions inside the columns were calculated with the Underwood and Fenske equations; however, any other set of equations, including rigorous tray-by-tray calculations, could be implemented. The conceptual procedure would remain the same and only the numerical performance would be affected. The objective considered is the total annualized cost (operating cost plus annualized capital cost of columns and heat exchangers), estimated with the procedure proposed by Douglas (1988).
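The ten tasks listed for state ABCDE follow a simple combinatorial pattern: the top product is a prefix of the volatility-ordered component list, the bottom a suffix, and the components in between may distribute. A sketch of this enumeration (our own helper, not the authors' code):

```python
def separation_tasks(state):
    """All general 'separate lightest from heaviest' tasks of a state.

    Components are assumed listed in decreasing volatility. The top section
    takes a prefix state[:i]; the bottom takes a suffix state[j:] with j <= i,
    so components state[j:i] distribute to both (sloppy split); j >= 1 keeps
    the lightest out of the bottom and i <= n-1 keeps the heaviest out of
    the top.
    """
    n = len(state)
    return [(state[:i], state[j:])
            for i in range(1, n)        # top ends at component i
            for j in range(1, i + 1)]   # bottom starts at component j <= i

for top, bottom in separation_tasks("ABCDE"):
    print(f"{top}/{bottom}")
# A/BCDE, AB/BCDE, AB/CDE, ABC/BCDE, ABC/CDE, ABC/DE,
# ABCD/BCDE, ABCD/CDE, ABCD/DE, ABCD/E  -- the ten tasks listed above
```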
The conceptual model is conveniently represented using a disjunctive formulation as follows:

$$\min \quad \text{total annualized cost} = \text{annualized capital cost} + \text{operating cost}$$

subject to, for each state $s$ in the superstructure:

$$\begin{bmatrix} Z_s \\ \text{Underwood equations} \\ \text{Fenske equations} \\ \text{calculation of column area} \\ \text{capital cost} = f(D, P, N_{trays}) \\ \text{transfer of liquid and vapor among columns} \end{bmatrix} \vee \begin{bmatrix} \neg Z_s \\ \text{bypass of flows} \\ \text{capital cost} = 0 \end{bmatrix}$$

$$\begin{bmatrix} W_s \\ \text{calculation of } Q_{reb} \text{ or } Q_{cond} \\ \text{calculation of utilities cost} \\ \text{heat exchanger cost} = f(\text{Area}, U, \Delta T) \end{bmatrix} \vee \begin{bmatrix} \neg W_s \\ Q_{reb} = Q_{cond} = 0 \\ \text{distillate or bottoms liquid at bubble point} \\ \text{costs} = 0 \end{bmatrix}$$

$$\Omega(Z_s, W_s) = \text{True}$$
Here $s$ is an index referring to the states in the superstructure, $Z_s$ is a Boolean variable that takes the value True if a given state exists, and $W_s$ is a Boolean variable that takes the value True if a heat exchanger associated with that state exists. The last equation refers to the logical relationships among states (tasks) and heat exchangers needed to assure feasible separations. Note that the previous model is linear except for the Underwood equations and the equations related to cost estimation. The model has proven to be reliable and computationally efficient. Due to the task-based approach, the total cost of columns is calculated assuming that each task can be considered a pseudo-column (each pseudo-column with its own diameter); this approach produces a lower bound to the total capital cost. The actual capital cost depends on the final configuration in actual columns. If the capital cost is not the dominant factor (and even, in some cases, if it is), for preliminary design the previous approach is good enough. With the optimal sequence of tasks it is possible to obtain the best rearrangement of these tasks in actual columns (Caballero and Grossmann, 2002). In this second stage it is possible to include control and operational constraints. Although this second part could be integrated in the model presented above, the large number of thermodynamically equivalent configurations that some sequences of tasks can produce increases the dimensionality of the problem, which does not justify it except in some special cases.
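The shortcut relations embedded in the $Z_s$ disjunction can be illustrated for a single split as follows (a minimal sketch assuming constant relative volatilities and keys chosen as the two most volatile components; not the authors' implementation):

```python
from math import log

def fenske_min_stages(xD_lk, xD_hk, xB_lk, xB_hk, alpha_lk_hk):
    """Fenske equation: minimum number of ideal stages for a light/heavy key split."""
    return log((xD_lk / xD_hk) * (xB_hk / xB_lk)) / log(alpha_lk_hk)

def underwood_min_vapor(alpha, z, q, xD, D, iters=200):
    """Underwood equations for the split between the two most volatile components.

    Finds the root theta of sum(alpha_i*z_i/(alpha_i - theta)) = 1 - q lying
    between the two largest volatilities, then returns the minimum vapor flow
    Vmin = sum(alpha_i*xD_i*D/(alpha_i - theta))."""
    def f(theta):
        return sum(a * zi / (a - theta) for a, zi in zip(alpha, z)) - (1.0 - q)
    eps = 1e-9
    lo, hi = sorted(alpha)[-2] + eps, max(alpha) - eps   # f(lo) < 0 < f(hi)
    for _ in range(iters):                               # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    theta = 0.5 * (lo + hi)
    return sum(a * x * D / (a - theta) for a, x in zip(alpha, xD))

# 98% recoveries of both keys (cf. Table 1) and an assumed alpha of 2.4:
print(fenske_min_stages(0.98, 0.02, 0.02, 0.98, 2.4))   # about 8.9 stages
# Ternary feed, saturated liquid (q = 1), distillate mostly the lightest cut:
print(underwood_min_vapor([4.0, 2.0, 1.0], [0.3, 0.3, 0.4],
                          1.0, [0.95, 0.05, 0.0], 30.0))
```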
3. Case Study
As an example we present the separation of a mixture of 5 hydrocarbons; the basic data are given in Table 1.
Table 1. Basic data for the example.
Component (feed molar fraction): n-Hexane 0.2; n-Heptane 0.1; n-Octane 0.2; n-Nonane 0.2; n-Decane 0.3.
Total flow: 200 kmol/h. Pressure: 2 atm.
U = 800 W/m² K. Steam cost = 5.09 $/GJ. Cold water cost = 0.19 $/GJ. Recovery: 98% of heavy key and light key in each separation.
The best sequence of tasks is presented in Figure 3a. The solution involves three reboilers and three condensers, with a total of 12 column sections. If we do not take into account the possibility of divided-wall columns, the minimum number of columns in which the different separation tasks can be performed is equal to n-1, in this case four columns. Taking into account that each thermal link (a link between states without a heat exchanger) produces two different possibilities of rearrangement of these separation tasks (Caballero and Grossmann, 2002), for our solution there are 16 thermodynamically equivalent configurations. The best solution obtained is shown in Figure 3b. The total annualized cost of the best configuration, obtained assuming that each separation task is a pseudo-column, was $1,457,487, which is a lower bound to the total cost. When we calculate the total annualized cost of the actual configuration (Figure 3b) this cost increases to $1,571,159, around 8% higher.
Figure 3. a) Best sequence of tasks obtained; b) best arrangement in actual columns.
It is possible to start an iterative procedure in which the previous result is an upper bound and to study the possibility of obtaining other configurations. Although it has not been presented here, it is also possible to consider the integration of some columns in a single shell, which is likely to produce some reduction in the investment cost. Note that each such integration reduces the number of thermodynamically equivalent configurations by 4. Note also that there is a good number of configurations with similar performance. The proposed procedure is flexible and allows other configurations to be studied easily. Some final remarks: in the solution presented, the vapor always flows from columns at higher pressure to columns at lower pressure in order to ease the operational problems associated with vapor transfer between columns. However, if the number of interconnections among columns makes control difficult, it is possible to implement some of the alternatives proposed by Agrawal (2000, 2001) for reducing the flow transfer among
columns, or, if possible, to try some integration of columns. These last points can be implemented at either of the two levels depending on their influence on the total cost.
4. References
Agrawal, R., 1996, Synthesis of distillation column configurations for multicomponent distillation, Ind. Eng. Chem. Res., 35, 1059-1071.
Agrawal, R., 2000, Thermally coupled distillation with reduced number of intercolumn vapor transfers, AIChE J., 46(11), 2198-2210.
Agrawal, R., 2001, Multicomponent distillation columns with partitions and multiple reboilers and condensers, Ind. Eng. Chem. Res., 40, 4258-4266.
Caballero, J.A. and Grossmann, I.E., 2001, Generalized disjunctive programming model for the optimal synthesis of thermally linked distillation columns, Ind. Eng. Chem. Res., 40(10), 2260-2274.
Caballero, J.A. and Grossmann, I.E., 2002, Logic based methods for generating and optimizing thermally coupled distillation systems, Proceedings ESCAPE-12, J. Grievink and J. van Schijndel (Editors), 169-174.
Douglas, J.M., 1988, Conceptual Design of Chemical Processes, McGraw-Hill Chemical Engineering Series.
Glinos, K. and Malone, P., 1988, Optimality regions for complex column alternatives in distillation systems, Trans IChemE, Part A, Chem. Eng. Res. Des., 66, 229.
Halvorsen, I.J., 2001, Minimum energy requirements in complex distillation arrangements, Ph.D. Thesis, under supervision of S. Skogestad, Norwegian University of Science and Technology.
Kaibel, G., 1987, Distillation columns with vertical partitions, Chem. Eng. Tech., 10, 92.
Petlyuk, F.B., Platonov, V.M. and Slavinskii, D.M., 1965, Thermodynamically optimal method of separating multicomponent mixtures, Int. Chem. Eng., 5, 555.
Rong, B. and Kraslawski, A., 2001, Proceedings ESCAPE-12, J. Grievink and J. van Schijndel (Editors), 319-324.
Sargent, R.W.H. and Gaminibandara, K., 1976, Introduction: approaches to chemical process synthesis, in Optimization in Action (Dixon, L.C., ed.), Academic Press, London.
Schultz, M.A., Steward, D.G., Harris, J.M., Rosenblum, S.P., Shakur, M.S. and O'Brien, D., 2002, Reduce cost with dividing-wall columns, Chem. Eng. Prog., May, 64-70.
Triantafillou, C. and Smith, R., 1992, The design and optimization of fully thermally coupled distillation columns, Trans IChemE, Part A, Chem. Eng. Res. Des., 70, 118.
Wright, R.O., 1949, US Patent 2,471,134.
Yeomans, H. and Grossmann, I.E., 1999, A systematic modeling framework for superstructure optimization in process synthesis, Comp. Chem. Eng., 23, 709.
5. Acknowledgements Financial support provided by the "Ministerio de Ciencia y Tecnologia", under project PPQ2002-01734 is gratefully acknowledged.
A Continuous-Time Approach to Multiproduct Pipeline Scheduling Diego C. Cafaro and Jaime Cerda INTEC (UNL - CONICET) Guemes 3450 - 3000 Santa Fe - ARGENTINA E-mail: [email protected]
Abstract
Product distribution planning is a critical stage in oil refinery operation. Pipelines provide the most reliable mode for delivering huge amounts of petroleum products over long distances at low operating cost. In this paper, the short-term scheduling of a multiproduct pipeline receiving a number of liquid products from a single refinery source to distribute them among several depots has been studied. The pipeline operation usually involves accomplishing a sequence of product pumping runs of suitable length in order to meet customer demands at the promised dates while satisfying all operational constraints. This work introduces a novel MILP mathematical formulation that uses neither time discretization nor division of the pipeline into a number of single-product packs. In this way, a more rigorous problem representation ensuring the optimality of the proposed schedule has been developed. Moreover, a severe reduction in binary variables and CPU time with regard to previous approaches was also achieved. A real-world case study involving a single pipeline, four oil products and five distribution depots was successfully solved.
1. Introduction
Product distribution planning has become an important industrial issue, especially for oil refineries continually delivering large amounts of products to several destinations. Pipelines represent the most reliable and cost-effective way of transporting such volumes of oil derivatives over large distances. This paper is concerned with a distribution system consisting of a single multiproduct pipeline receiving different products from an oil refinery to distribute them among several depots connected to local consumer markets. Because of liquid incompressibility, transfers of material from the pipeline to depots for fulfilling consumer demands necessarily occur simultaneously with the injection of new runs of products into the pipeline. By tracking the runs as they move along the pipeline, variations in their sizes and coordinates with time due to the injection of new runs and the allocation of products to depots can be determined. A few papers on the scheduling of a single pipeline transporting products from an oil refinery to multiple destinations have recently been published. Sasikumar et al. (1997) presented a knowledge-based heuristic search system that generates good monthly pumping schedules for large-scale problems. In turn, Rejowski and Pinto (2001) developed a pair of large-size MILP discrete-time scheduling models by first dividing the pipeline into a number of single-product packs of equal and unequal sizes, respectively. This work presents a novel MILP continuous-time scheduling approach that accounts for pumping run sequencing constraints, forbidden sequences, mass balances, tank loading and
unloading operations and permissible levels, feasibility conditions for transferring material from pipeline runs to depots, product demands and due dates. The problem objective is to minimize pumping, inventory and transition costs while satisfying all problem constraints. The latter cost accounts for material losses and interface reprocessing costs at the depots due to product contamination between consecutive runs.
2. Problem Definition
Given: (a) the multiproduct pipeline structure (the number of depots and the distance between every depot and the oil refinery); (b) the available tanks at every depot (capacity and assigned product); (c) the product demands to be satisfied at every depot at the end of the horizon; (d) the sequence of runs inside the pipeline and their initial volumes at the horizon starting time; (e) the scheduled product output at the refinery during the time horizon (product, amount and production time interval); (f) the initial inventory levels in refinery and depot tanks; (g) the maximum injection rate into the pipeline, the supply rate from the pipeline to depots and the delivery rate from depots to local markets; and (h) the length of the time horizon. The problem goal is to establish the optimal sequence of pumping runs, the run lengths and the product assigned to each one in order to: (1) satisfy every product demand at each depot in a timely fashion; (2) keep the inventory level in refinery and depot tanks within the permissible range at all times; and (3) minimize the sum of pumping, transition and inventory carrying costs. To do so, the variations in sizes and coordinates of new/old runs as they move along the pipeline, as well as the evolution of inventory levels in refinery and depot tanks with time, are to be tracked.
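The run-tracking bookkeeping that underlies the formulation can be pictured with a toy tracker (our own illustration with hypothetical numbers; feasibility rules such as outlet reachability and interface volumes are ignored): the pipeline is a volume-ordered list of runs, injecting Q units of a new run pushes the old runs forward, and, by liquid incompressibility, exactly Q units must simultaneously leave through depot outlets.

```python
def inject_run(runs, product, Q, deliveries):
    """runs: list of [product, volume], refinery end first.
    deliveries: {run_index: volume taken out to depots during this pumping run}.
    Liquid incompressibility: total delivered must equal injected volume Q."""
    assert abs(sum(deliveries.values()) - Q) < 1e-9, "pipeline is always full"
    for idx, vol in deliveries.items():
        runs[idx][1] -= vol          # material leaves through a depot outlet
    runs.insert(0, [product, Q])     # new run enters at the refinery end
    return [r for r in runs if r[1] > 0]

def coordinates(runs):
    """Volumetric coordinates of each run, measured from the refinery end."""
    coords, upper = [], 0.0
    for product, vol in runs:
        coords.append((product, upper, upper + vol))
        upper += vol
    return coords

# Initial sequence of old runs (product, 10^3 m^3), refinery end first:
runs = [["P1", 75], ["P2", 25], ["P1", 125], ["P2", 175], ["P1", 75]]
runs = inject_run(runs, "P2", 50, {4: 30, 3: 20})  # far runs feed far depots
print(coordinates(runs))
```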
3. Mathematical Formulation
3.1. Run sequencing constraints
Every new pumping run $i \in I^{new}$ should never start before the previous run and the subsequent changeover operation are completed:
$$C_i - L_i \ge C_{i-1} + \tau_{s,s'}\,(y_{i-1,s} + y_{i,s'} - 1) \qquad \forall i \in I^{new},\; s, s' \in S \quad (1)$$
$$L_i \le C_i \le h_{max} \qquad \forall i \in I^{new} \quad (2)$$
3.2. Relationship between the volume and the length of a new pumping run
$$vb_{min}\,L_i \le Q_i \le vb_{max}\,L_i \qquad \forall i \in I^{new} \quad (3)$$
$$Q_i \le \Big(\sum_{s \in S} y_{i,s}\Big)\,Q_{max}, \qquad L_i \le \Big(\sum_{s \in S} y_{i,s}\Big)\,h_{max} \qquad \forall i \in I^{new} \quad (4)$$
3.3. Interface size between consecutive runs
$$WIF_i \ge IF_{s,s'}\,(y_{i-1,s} + y_{i,s'} - 1) \qquad \forall i \in I,\; s, s' \in S \quad (5)$$
3.4. Upper and lower pipeline coordinates for run $i \in I$ at time $C_{i'}$
Let $F_i^{(i')}$ and $F_{i+1}^{(i')}$ be the upper and lower coordinates of run $i$ at the completion time $C_{i'}$ of a later run $i' \in I^{new}$, respectively. Then,
$$F_{i+1}^{(i')} + W_i^{(i')} = F_i^{(i')} \qquad \forall i \in I,\; i' \in I^{new},\; i' \ge i \quad (6)$$
3.5. Volume transferred from run $i' \in I^{new}$ to depots while pumping run $i'$
$$Q_{i'} = W_{i'}^{(i')} + \sum_{j \in J} D_{i',j}^{(i')}, \qquad F_{i'}^{(i')} = Q_{i'} \qquad \forall i' \in I^{new} \quad (7)$$
3.6. Volume transferred from run $i \in I$ to depots while pumping a later run $i' \in I^{new}$
The volume of run $i \in I$ at time $C_{i'}$ is the difference between the size of run $i$ at time $C_{i'-1}$ and the volume transferred to depots during the later run $i' \in I^{new}$ ($i' > i$):
$$W_i^{(i')} = W_i^{(i'-1)} - \sum_{j \in J} D_{i,j}^{(i')} \qquad \forall i \in I,\; i' \in I^{new},\; i' > i \quad (8)$$
where
$$\sum_{j \in J} D_{i,j}^{(i')} \le W_i^{(i'-1)} - WIF_i \qquad \forall i \in I,\; i' \in I^{new},\; i' > i \quad (9)$$
3.7. Feasibility conditions for transferring material from pipeline runs to depots
The transfer of material from run $i \in I$ to depot $j \in J$ is feasible only if the pipeline outlet to depot $j$ is reachable from run $i$. Fulfilment of such feasibility conditions while pumping a later run $i' \in I^{new}$ requires that: (a) the lower coordinate of run $i$ at time $C_{i'-1}$ be less than the depot coordinate $P_j$, and (b) the upper coordinate of run $i$ at time $C_{i'}$, decreased by the interface volume $WIF_i$, be greater than $P_j$:
$$D_{i,j}^{(i')} \le D^{max}\,x_{i,j}^{(i')}, \qquad F_i^{(i')} - WIF_i \ge P_j\,x_{i,j}^{(i')} \qquad \forall i \in I,\; i' \in I^{new},\; i' \ge i,\; j \in J \quad (10)$$
3.8. Overall balance around the pipeline during a new pumping run $i' \in I^{new}$
The overall volume transferred from runs $i \in I$ to depots $j \in J$ while pumping a new run $i' \in I^{new}$ should be equal to the volume injected into the pipeline during run $i'$:
$$\sum_{i \in I} \sum_{j \in J} D_{i,j}^{(i')} = Q_{i'} \qquad \forall i' \in I^{new} \quad (11)$$
3.9. Product allocation constraint
A pumping run flowing inside the pipeline contains just a single refinery product:
$$\sum_{s \in S} y_{i,s} \le 1 \qquad \forall i \in I^{new} \quad (12)$$
3.10. Fulfilment of market demands
The total volume of product $s$ transferred from depots $j \in J_s$ to the local market should be high enough to meet its overall demand $qd_{s,j}$:
$$qm_{s,j}^{(i')} \le (C_{i'} - C_{i'-1})\,v_m \qquad \forall s \in S,\; j \in J_s,\; i' \in I^{new} \quad (13)$$
$$\sum_{i' \in I^{new}} qm_{s,j}^{(i')} = qd_{s,j} \qquad \forall s \in S,\; j \in J_s \quad (14)$$
3.11. Control of inventory in refinery tanks
Assuming that the pipeline injection rate is lower than the processing rate of any production run $o$, the worst condition for running out of product $s$ in the assigned tank occurs at the completion time of a pumping run $i \in I^{new}$ containing product $s$; in turn, the worst condition for overloading the refinery tank devoted to product $s$ arises at the start of a new pumping run containing $s$. The binary variable $zu_{i,o}$ ($zl_{i,o}$) denotes that run $i$ ends after production run $o$ has started (begins after run $o$ has ended):
$$a_o\,zu_{i,o} \le C_i \qquad \forall i \in I^{new},\; o \in O \quad (15)$$
$$b_o\,zl_{i,o} \le C_i - L_i, \qquad zl_{i,o} \le zu_{i,o} \qquad \forall i \in I^{new},\; o \in O \quad (16)$$
Big-M constraints (17)-(18) bound the amount $q_{i,o}$ of product made available by production run $o$ before run $i$ starts, both by the size $B_o$ of the production run and by the amount processed at rate $vp_o$ up to time $C_i - L_i$:
$$q_{i,o} \le B_o\,zu_{i,o}, \qquad q_{i,o} \le vp_o\,[(C_i - L_i) - a_o] + M\,zl_{i,o} \qquad \forall i \in I^{new},\; o \in O \quad (17){-}(18)$$
The volume of product $s$ injected while pumping run $i$ is driven by the allocation variables:
$$Q_{i,s} \le M\,y_{i,s} \quad \forall i \in I^{new},\; s \in S; \qquad \sum_{s \in S} Q_{i,s} = Q_i \quad \forall i \in I^{new} \quad (19)$$
The refinery stock of product $s$ must then remain within its permissible range:
$$IR_s^0 + \sum_{o \in O} q_{i,o} - \sum_{i' \in I^{new},\, i' \le i} Q_{i',s} \ge IR_{min,s} \qquad \forall s \in S,\; i \in I^{new} \quad (20)$$
$$IR_s^0 + \sum_{o \in O} q_{i,o} - \sum_{i' \in I^{new},\, i' < i} Q_{i',s} \le IR_{max,s} \qquad \forall s \in S,\; i \in I^{new} \quad (21)$$
3.12. Control of inventory in depot tanks
Let $DS_{i,s,j}^{(i')}$ be the amount of product $s$ transferred to depot $j \in J_s$ from run $i$ during the time interval $(C_{i'} - L_{i'},\, C_{i'})$. Therefore, $DS_{i,s,j}^{(i')} = 0$ if $y_{i,s} = 0$ and $DS_{i,s,j}^{(i')} = D_{i,j}^{(i')}$ whenever $y_{i,s} = 1$. For new pumping runs $i \in I^{new}$ the following constraints should be satisfied:
$$DS_{i,s,j}^{(i')} \le D^{max}\,y_{i,s}, \qquad \sum_{s \in S} DS_{i,s,j}^{(i')} = D_{i,j}^{(i')} \qquad \forall i \in I^{new},\; s \in S,\; j \in J_s,\; i' \in I^{new} \quad (22)$$
For old runs $i \in I^{old}$, whose product $s$ is known,
$$DS_{i,s,j}^{(i')} = D_{i,j}^{(i')} \qquad \forall i \in I^{old},\; j \in J_s,\; i' \in I^{new} \quad (23)$$
The inventory of product $s$ in depot $j \in J_s$ at time $C_{i'}$, $ID_{s,j}^{(i')}$, is obtained from the one available at time $C_{i'-1}$ by adding the amount supplied by runs $i \in I$ containing product $s$ and subtracting the deliveries of $s$ to local markets during run $i'$:
$$ID_{s,j}^{(i')} = ID_{s,j}^{(i'-1)} + \sum_{i \in I} DS_{i,s,j}^{(i')} - qm_{s,j}^{(i')}, \qquad ID_{min,s,j} \le ID_{s,j}^{(i')} \le ID_{max,s,j} \qquad \forall s \in S,\; j \in J_s,\; i' \in I^{new} \quad (24)$$
3.13. Initial conditions
Old runs $i \in I^{old}$ have been chronologically arranged by decreasing $F_i^0$, where $F_i^0$ stands for the upper coordinate of run $i \in I^{old}$ at time $t = 0$. Moreover, the initial volumes of the old runs ($W_i^0$, $i \in I^{old}$) and the product to which each one was assigned are all problem data. Then,
$$W_i^{(i'-1)} = W_i^{0} \qquad \forall i \in I^{old},\; i' = \mathrm{first}(I^{new}) \quad (25)$$
3.14. Problem objective function The problem goal is to minimize the total operating cost including pumping costs, the cost of reprocessing interface volumes and the cost of carrying product inventory in refinery and depot tanks.
$$\min\; z = \sum_{i' \in I^{new}} \sum_{s \in S} cp_s\,Q_{i',s} \;+\; \sum_{i \in I} cif_i\,WIF_i \;+\; \sum_{s \in S} \sum_{j \in J_s} \frac{ci_{s,j}}{card(I^{new})} \sum_{i' \in I^{new}} ID_{s,j}^{(i')} \quad (26)$$
where $cp_s$, $cif_i$ and $ci_{s,j}$ stand for the unit pumping, interface reprocessing and inventory carrying costs, respectively, and the last term charges the average depot inventory over the $card(I^{new})$ pumping intervals.
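To show the continuous-time flavour of constraints (1)-(4) and (12), here is a toy written with the open-source PuLP modeller (bounds and sizes are illustrative assumptions; this is a fragment, not the full model):

```python
import pulp

S = ["P1", "P2", "P3", "P4"]            # candidate products
I_new = list(range(4))                  # candidate new pumping runs
hmax, vb_min, vb_max, Qmax = 120.0, 2.0, 6.0, 500.0   # illustrative bounds

m = pulp.LpProblem("pipeline_toy", pulp.LpMinimize)
y = pulp.LpVariable.dicts("y", (I_new, S), cat="Binary")   # product allocation
L = pulp.LpVariable.dicts("L", I_new, lowBound=0)          # run lengths [h]
C = pulp.LpVariable.dicts("C", I_new, lowBound=0)          # completion times [h]
Q = pulp.LpVariable.dicts("Q", I_new, lowBound=0)          # injected volumes

m += pulp.lpSum(Q[i] for i in I_new)    # placeholder objective (real one: Eq. 26)

for i in I_new:
    m += pulp.lpSum(y[i][s] for s in S) <= 1            # Eq. (12): one product per run
    m += C[i] <= hmax                                   # Eq. (2)
    m += C[i] >= L[i]                                   # Eq. (2)
    m += Q[i] >= vb_min * L[i]                          # Eq. (3): pumping-rate bounds
    m += Q[i] <= vb_max * L[i]                          #   couple volume and duration
    m += Q[i] <= Qmax * pulp.lpSum(y[i][s] for s in S)  # Eq. (4): idle runs inject nothing
    if i > 0:
        m += C[i] - L[i] >= C[i - 1]                    # Eq. (1) without changeover terms

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status])
```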
4. Results and Discussion
The proposed MILP approach will be illustrated by solving a large-scale multiproduct pipeline scheduling problem first introduced by Rejowski and Pinto (2001). It consists of an oil refinery that must distribute four products among five depots through one pipeline. Problem data are included in Table 1. Pumping and inventory costs as well as the interface volumes can be found in Rejowski and Pinto (2001). There is initially a sequence of five old runs inside the pipeline containing products (P1, P2, P1, P2, P1) with volumes, in 10^3 m^3, of (75, 25, 125, 175, 75), respectively. The optimal solution was found in 25 s on a Pentium III PC (933 MHz) with ILOG/CPLEX. This represents a three order-of-magnitude time saving with regard to the model of Rejowski and Pinto (2001). Figure 1 shows the optimal sequence of new pumping runs as well as the evolution of sizes and coordinates of new/old product campaigns as they move along the pipeline. Four new runs involving products (P2, P3, P2, P4) have been performed.
Figure 1. Optimal sequence of pumping runs (run time intervals [h] and volumes [10^3 m^3]).
5. Conclusions A new continuous approach to the scheduling of a single multiproduct pipeline has been presented. By adopting a continuous representation in both time and volume, a more rigorous problem representation and a severe reduction in binary variables and CPU time have simultaneously been achieved.
Table 1. Problem data (in 10^3 m^3).

Inventory levels:
Product  Level    Refinery   D1    D2    D3    D4    D5
P1       Min      270        90    90    90    90    90
P1       Max      1200       400   400   400   400   400
P1       Initial  500        190   230   200   240   190
P2       Min      270        90    90    90    90    90
P2       Max      1200       400   400   400   400   400
P2       Initial  520        180   210   180   180   180
P3       Min      50         10    10    10    10    10
P3       Max      350        70    70    70    70    70
P3       Initial  210        50    65    60    60    60
P4       Min      270        90    90    90    90    90
P4       Max      1200       400   400   400   400   400
P4       Initial  515        120   140   190   190   170

Location from refinery [10^3 m^3]: D1 = 100, D2 = 200, D3 = 300, D4 = 400, D5 = 475.

Demands and pumping costs:
Product  Item                   D1    D2    D3    D4    D5
P1       Demand                 100   110   120   120   150
P1       Pumping cost [$/m^3]   3.5   4.5   5.5   6.0   6.9
P2       Demand                 70    90    100   80    100
P2       Pumping cost [$/m^3]   3.6   4.6   5.6   6.2   7.3
P3       Demand                 60    40    40    0     20
P3       Pumping cost [$/m^3]   4.8   5.7   6.8   7.9   8.9
P4       Demand                 60    50    50    50    50
P4       Pumping cost [$/m^3]   3.7   4.7   5.7   6.1   7.0
6. Nomenclature
(a) Sets
I^old: set of old pumping runs inside the pipeline at the start of the time horizon
I^new: set of new pumping runs to be potentially executed during the time horizon
S: set of derivative oil products
J: set of depots along the pipeline
O: set of scheduled production runs in the refinery during the time horizon
(b) Parameters
hmax: horizon length
P_j: volumetric coordinate of depot j along the pipeline
vb: pumping rate
qd_{s,j}: overall demand of product s to be satisfied by depot j
v_m: maximum supply rate to the local market
IF_{s,s'}: volume of the interface between runs containing products s and s'
B_o: size of the refinery production run o
a_o, b_o: starting/finishing time of the refinery production run o
IR_s^0: initial inventory of product s at the refinery
ID_{s,j}^0: initial inventory of product s at depot j
(c) Variables
y_{i,s}: binary; product s is contained in run i whenever y_{i,s} = 1
x_{i,j}^{(i')}: binary; a portion of run i is transferred to depot j while pumping run i'
zu_{i,o}: binary; run i ends after the refinery production run o has started
zl_{i,o}: binary; run i begins after the refinery production run o has ended
C_i, L_i: completion time/initial length of the new pumping run i in I^new
F_i: upper coordinate of run i along the pipeline at time C_{i'}
W_i^{(i')}: volume of run i at time C_{i'}
Q_{i'}: volume of product injected into the pipeline while pumping the new run i' in I^new
Q_{i,s}: volume of product s injected into the pipeline while pumping the new run i
D_{i,j}^{(i')}: volume of run i transferred from the pipeline to depot j while pumping run i'
DS_{i,s,j}^{(i')}: volume of product s transferred from run i to depot j while pumping run i'
7. References
ILOG, 1999, OPL Studio 2.1 User's Manual, ILOG S.A., France.
Rejowski, R. and Pinto, J.M., 2001, Paper P10, Proceedings of the 2nd Pan American Workshop on Process Systems Engineering, Guaruja-Sao Paulo, Brazil.
Sasikumar, M., Prakash, P., Patil, S.M. and Ramani, S., 1997, Knowledge-Based Systems, 10, 169.
Optimal Grade Transition Campaign Scheduling in a Gas-Phase Polyolefin FBR Using Mixed Integer Dynamic Optimization
C. Chatzidoukas, C. Kiparissides, J.D. Perkins and E.N. Pistikopoulos
Department of Chemical Engineering and Chemical Process Engineering Research Institute, Aristotle University of Thessaloniki, PO Box 472, 54006 University City, Thessaloniki, Greece
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London SW7 2BY, UK
Abstract
Transitions between different polymer grades are a frequent operating profile for polymerization processes under the current market requirements for products with diverse specifications. A broader view of the plant operating profile, not focused only on a single changeover between two polymer grades, raises the problem of the optimal sequence of transitions between a certain number of grades. An integrated approach to the optimal production scheduling, in parallel with the optimal transition profiles, is carried out using Mixed Integer Dynamic Optimization (MIDO) techniques. A remarkable improvement in process economics is observed in terms of the off-spec production and the overall transition time.
1. Introduction
Polymerization processes have taken on the character of continuous multiproduct plants in response to the current demand for polymers. Specifically, the variability observed in polymer market demand, in terms of product quality specifications, calls for frequent grade transition policies on polymerization plants, with inevitable consequences for process economics due to the regular "necessary" disturbances from steady-state operating conditions. The issue of how to operate such processes as continuous multiproduct plants, in a global polymer industry environment with intense competitive pressures, therefore becomes pressing. The products produced during a transition are off-spec, since they do not meet the specifications of either the initial or the final product, and consequently must normally be sent to waste treatment facilities. This problem, combined with the usually long residence time, and therefore long transition time, of continuous polymerization reactors, results in an exceptionally large amount of off-spec product and consequently in a serious treatment and product loss problem. In order to develop an economically viable operating profile for the process, under the sequential
* To whom correspondence should be addressed. Tel.: +44 (0)20 7594 6620; Fax: +44 (0)20 7594 6606; E-mail: [email protected]
production mode of different grades, an integrated approach to a multilevel process synthesis problem is required, taking into consideration process design, process control, optimisation of transient operation and production scheduling as interacting subproblems. In the present study a unified approach is attempted, considering process control, process operation and production planning issues. The problem is examined in relation to a Ziegler-Natta catalytic gas-phase ethylene-1-butene copolymerization fluidised bed reactor (FBR) operating in a grade transition campaign. The process design is fully defined a priori, since the polymerization system employed in this work is a unit of an industrial polymer process. A comprehensive kinetic mechanism describing catalytic olefin polymerization, in conjunction with a detailed model of the FBR and a heat exchanger, has been developed to simulate the process dynamics and the polymer molecular properties (Chatzidoukas et al., 2002). This model provides the platform for the study of process control, dynamic optimisation of the transient operation between a number of grades, and the optimal sequence of transitions. A mixed integer dynamic optimization (MIDO) algorithm enables the above issues to be dealt with simultaneously, avoiding the exhaustive calculation of the dynamic optimal profiles for all possible binary transitions, which might be prohibitive from a computational point of view when a large number of polymer grades is considered.
2. Problem Definition
Even though a year is a typical time scale for the operating life of an industrial polymer plant, with the observed fluctuations in market demand a production schedule on this basis would be hazardous. It is therefore expected that over an annual period polymer plants run several production campaigns, and that production planning for each one would be more efficient. A short-term campaign involving a single batch of four polymer grades (A, B, C, D) has been selected as a representative case study in order to illustrate the concepts of our approach and the dimensions of the problem. Each of the four polymer grades is produced once, without adopting a cyclic mode for the process operation. Therefore the process starts from an initial operating point and does not return to that point at the end of the campaign; hence, the timely satisfaction of the customer orders should be settled on this basis. Furthermore, the campaign is studied separately from a previous and a next one, in the sense that the production sequence is determined independently, neglecting how the final grade of this campaign might affect the sequence of a next campaign as a starting point. Similarly, the starting point (Init) of the current campaign is considered as a given final grade of the previous one. Since it is expected that the Init point will affect the sequence of transitions, its polymer properties have been selected on purpose to lie in between the polymer properties of the four desired polymer grades. Furthermore, the polymer properties of the four grades have been chosen in such a way that a monotonous change (either increase or decrease) when moving from one grade to another during the campaign is impossible for the three polymer properties simultaneously. This renders the problem more complicated, reducing the possibility of applying heuristic rules for the selection of the transition sequence.
A simplifying assumption, particularly for the formulation of the performance criterion, is that between transitions the process runs at steady state under perfect control, eliminating any disturbance and hence preventing any deviation from on-spec production. With this assumption, the production periods between the transition periods need not be considered in this study and the performance index accounts only for the transition periods. Melt index, polymer density and polydispersity are the molecular polymer properties (PP: MI, ρ, PD) identifying each polymer grade. The four operating points corresponding to the four polymer grades have been found by steady-state maximization of monomer conversion with respect to all the available model inputs, so that the process operates with the maximum monomer conversion when running in a production period. In the framework of the integrated approach attempted in this study, selection and tuning of the regulatory and supervisory closed-loop feedback controllers is required. From Figure 1, showing a schematic representation of a gas-phase catalytic olefin polymerization FBR, one can identify nine possible manipulated variables: the monomer and comonomer mass flow rates (Fmon1, Fmon2) in the make-up stream; the hydrogen, nitrogen and catalyst mass flow rates (FH2, FN2, Fcat); the mass flow rate of the bleed stream (Fbleed); the mass flow rates of the recycle and product removal streams (Frec, Fout); and the mass flow rate of the coolant water stream to the heat exchanger (Fwater). In practice, instead of manipulating the comonomer mass flow rate Fmon2, the ratio of the comonomer to the monomer inflow rate in the make-up stream (Ratio = Fmon2/Fmon1) is selected as a manipulated variable. The structure derived from a relative gain array (RGA) analysis is applied over the range of all the transitions of the campaign and is responsible for holding the reactor's bed level (h), temperature (T), pressure (P) and production rate (Rp) under control in a multiple input-multiple output configuration of PI feedback controllers. Table 1 describes the pairings of the control scheme, defining also the manipulated variables. The last two manipulated variables are used by the optimizer to track the polymer properties during a transition to the desired values corresponding to each grade.
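The RGA pairing step can be reproduced in a few lines; the steady-state gain matrix below is a made-up placeholder, whereas the paper's gains come from the FBR model:

```python
import numpy as np

def rga(G):
    """Relative gain array: element-wise product of G and inverse(G) transposed."""
    return G * np.linalg.inv(G).T

# Hypothetical 4x4 steady-state gain matrix: rows h, T, P, Rp;
# columns Fout, Fwater, FN2, Fmon1 (illustrative numbers only)
G = np.array([[-1.2, 0.1, 0.0, 0.3],
              [0.2, -0.9, 0.1, 0.4],
              [0.1, 0.0, 1.5, 0.2],
              [-0.3, 0.1, 0.0, 1.1]])
Lam = rga(G)
print(np.round(Lam, 2))   # pair each controlled variable with the input whose
                          # relative gain is closest to 1
print(Lam.sum(axis=0))    # rows and columns of an RGA each sum to 1
```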
3. Mathematical Representation
The total transition time and the total off-spec production during the campaign are the criteria that should be incorporated in the objective function for the evaluation of the candidate alternatives. Binary variables are used to model the potential assignment of different grades to time slots in the total horizon. The time horizon for the campaign is divided into 20 time slots (5 intervals for each transition). Two integer variables are employed for each polymer grade, one showing when this grade is a starting point (Y1X) of a binary transition and one showing when this grade is the desired final point (Y2X) of each transition. Since the time topology of the Init operating point is known and constant for all the potential campaigns, and since it cannot be a desired grade, only one binary variable is ascribed to it, and this variable is known during all the time slots. Hence a total of eight 0-1 variables are required to describe the timely distribution of the 4 desired grades. The mathematical formulation of the combined operational and scheduling problem can be stated as:
Figure 1. Gas-phase polymerization FBR unit (fluidized bed with cyclone, compressor, recycle, bleed and product removal streams, heat exchanger, and make-up, hydrogen, nitrogen and catalyst/cocatalyst feeds).

Table 1. Best pairings of controlled and manipulated variables.

Controlled              Manipulated
Bed height (h)          Product withdrawal (Fout)
Temperature (T)         Coolant feed rate (Fwater)
Pressure (P)            Nitrogen feed rate (FN2)
Production rate (Rp)    Monomer make-up feed rate (Fmon1)
Density (ρ)             Comonomer ratio (Ratio)
Melt index (MI)         Hydrogen feed rate (FH2)
$$ \min_{u(t),\,Y_{1X}(t),\,Y_{2X}(t)} \; Obj = w_1 \int_0^{t_f} \sum_{i=1}^{3} \Delta PP_i^{\,2}(t)\, dt \;+\; w_2\, \textit{offspec} \tag{1} $$

where the normalized squared transition deviation of polymer property $PP_i$ from its desired value is defined as:

$$ \Delta PP_i^{\,2}(t) = \frac{\left[\, PP_i(t) - \sum_{X} Y_{2X}\, \text{PP}_{X,i} \,\right]^2}{\left[\, Y_{1,\mathrm{Init}}\, \text{PP}_{\mathrm{Init},i} + \sum_{X} \left( Y_{1X} - Y_{2X} \right) \text{PP}_{X,i} \,\right]^2}, \qquad X \in \{A, B, C, D\} $$
subject to:

$$ f(\dot{x}(t), x(t), u(t), t) = 0 \tag{2} $$
$$ y(t) = h(x(t), u(t), t) \tag{3} $$
$$ x(t_0) = x_0 \tag{4} $$
$$ 0 \le g(x(t), u(t), y(t), Y_{1X}(t), Y_{2X}(t), t) \tag{5} $$
where x, u and y are the state, control and output vectors. A number of inequality constraints, represented by Eqn. (5), stem from the definition of the problem and the need for feasible process operation. Specifically, end-point constraints have been imposed on selected process variables to guarantee that each transition ends up at the desired steady-state optimal operating point. Finally, constraints on the binary variables were also imposed to ensure the production of all the polymer grades in a sequential transition mode.
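As a minimal illustration of what these binary-variable constraints enforce, the Python sketch below enumerates the candidate production sequences that the Y1X/Y2X variables encode: every grade produced exactly once, in a sequential transition mode starting from the known Init point. The function name and representation are ours, not the paper's.

```python
from itertools import permutations

GRADES = ["A", "B", "C", "D"]

def campaign_sequences(init="Init"):
    """Enumerate candidate production sequences for the campaign.

    Each grade must be produced exactly once, in sequential transition
    mode starting from the known Init point; this mirrors the role of
    the Y1X/Y2X binaries, where in each transition Y1X marks the
    starting grade and Y2X the target grade."""
    for order in permutations(GRADES):
        starts = (init,) + order[:-1]     # Init -> g1 -> g2 -> g3 -> g4
        yield list(zip(starts, order))    # transitions as (start, target)

# 4! = 24 candidate sequences; the MIDO algorithm searches this space
# implicitly through the binary variables rather than by enumeration.
print(sum(1 for _ in campaign_sequences()))  # -> 24
```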
4. Solution Algorithm - Results
The combined structural and operational nature of the problem, where both continuous and discrete decisions are involved, is addressed using a Mixed Integer Dynamic Optimization (MIDO) algorithm (Mohideen et al., 1996; Allgor and Barton, 1999; Bansal et al., 2002). Under this approach, the problem is iteratively decomposed into two interacting levels (subproblems), consistent with the hierarchical framework applied to scheduling problems (Mah, 1990; Rowe, 1997): an upper level (Primal problem), where the operating profile is determined by dynamic optimisation techniques, and a lower level (Master problem), where candidate production sequences are developed. The dual information and the value of the objective function transferred from the Primal to the Master problem, which is solved as a mixed integer linear programming (MILP) problem, are employed to update the candidate production structure until the solutions of the two subproblems converge within a tolerance. The flow rate of the hydrogen feed stream, the ratio of comonomer to monomer flow rate in the make-up feed stream and the binary variables constitute the set of time-varying controls, while the controller parameters as well as the length of each time interval are time-invariant decisions. The commercial modelling and simulation package gPROMS® in conjunction with the gOPT® optimization interface (Process Systems Enterprise Ltd, London) is used for the integration of the DAE system and the dynamic optimization of the grade transition problem. Additionally, the GAMS/CPLEX solver is used for the solution of the MILP problems resulting from the Master problem. Four iterations between the Primal and Master subproblems were adequate for the MIDO algorithm to locate the optimal solution. Table 2 presents the sequence Init→C→A→B→D as the optimal production schedule; it also shows the remaining three production sequences derived during the four iterations of the algorithm. A comparison between them in terms of time horizon, objective function and total amount of off-spec product reveals the superiority of the optimal sequence, which results in a 16% reduction of the off-spec product compared to the worst scenario. Figures 2-4 display the optimal profiles of PD, MI and ρ during the transition campaign. It is noticed that the MIDO algorithm advocates as optimal production planning a sequence with a monotonic change in polymer density and polydispersity, whereas a simultaneously monotonic change is impossible for MI.

Table 2. Comparison of the proposed sequences.

Sequence          Time horizon   Objective function   Off-spec product
Init→C→A→B→D      27.16 hr       166.28               132 tn
Init→D→A→C→B      30.63 hr       221.038              181 tn
Init→D→B→A→C      34.13 hr       185.963              153 tn
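The decomposition described above can be summarised by the following skeleton; this is a sketch only, where solve_primal and solve_master stand in for the gPROMS/gOPT dynamic optimisation and the GAMS/CPLEX MILP respectively, and the convergence test on the bounds is the tolerance check mentioned in the text.

```python
def mido_loop(solve_primal, solve_master, first_sequence, tol=1e-3, max_iter=10):
    """Primal/Master iteration of a MIDO algorithm (illustrative).

    solve_primal(sequence) -> (objective, duals): dynamic optimisation of
        the transition profiles for a fixed sequence (upper bound).
    solve_master(cuts) -> (lower_bound, sequence): MILP proposing the next
        candidate sequence from the accumulated dual information."""
    cuts, best_ub, sequence = [], float("inf"), first_sequence
    for _ in range(max_iter):
        ub, duals = solve_primal(sequence)      # upper bound for this sequence
        best_ub = min(best_ub, ub)
        cuts.append((sequence, ub, duals))      # information passed to the Master
        lb, sequence = solve_master(cuts)       # lower bound and next candidate
        if best_ub - lb <= tol:                 # bounds converged within tolerance
            break
    return best_ub
```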
Figure 2: Optimal PD profile under the optimal production planning.
Figure 3: Optimal MI profile under the optimal production planning.

Figure 4: Optimal density profile under the optimal production planning.
5. Conclusions
The production sequence in a gas-phase olefin polymerization plant running a grade transition campaign between four polymer grades has been studied in parallel with the optimal transition profiles used to switch the process from one grade to another. Both the optimal production schedule and the optimal operating profiles for the transitions between the polymer grades have been found using a Mixed Integer Dynamic Optimization algorithm. The reduction of off-spec production and of the total transition time during the campaign highlights the economic benefits for the polymerization plant resulting from the integrated approach to the problem.
6. References
Allgor, R.J., Barton, P.I., 1999, Computers Chem. Eng., 23, 567.
Bansal, V., Perkins, J.D. and Pistikopoulos, E.N., 2002, Ind. Eng. Chem. Res., 41, 760.
Chatzidoukas, C., Perkins, J.D., Pistikopoulos, E.N. and Kiparissides, C., 2002, submitted to Chemical Engineering Science.
Mah, R.S.H., 1990, Chemical Process Structures and Information Flows, Butterworths Series in Chemical Engineering, Chap. 6.
Mohideen, M.J., Perkins, J.D., Pistikopoulos, E.N., 1996, AIChE J., 42, 2251.
Rowe, A.D., 1997, PhD Thesis, Imperial College, University of London.
7. Acknowledgements
The authors gratefully acknowledge the financial support provided for this work by DGXII of the EU under the GROWTH Project "PolyPROMS" G1RD-CT-2000-00422.
Environmentally-Benign Transition Metal Catalyst Design using Optimization Techniques
Sunitha Chavali(1), Terri Huismann(1), Bao Lin(2), David C. Miller(2) and Kyle V. Camarda(1)
(1) Department of Chemical and Petroleum Engineering, The University of Kansas, 1530 W. 15th Street, 4006 Learned Hall, Lawrence, KS 66045, USA
(2) Department of Chemical Engineering, Rose-Hulman Institute of Technology, 5500 Wabash Avenue, Terre Haute, IN 47803, USA
Abstract
Transition metal catalysts play a crucial role in many industrial applications, including the manufacture of lubricants, smoke suppressants, corrosion inhibitors and pigments. The development of novel catalysts is commonly performed using a trial-and-error approach which is costly and time-consuming. The application of computer-aided molecular design (CAMD) to this problem has the potential to greatly decrease the time and effort required to improve current catalytic materials in terms of their efficacy and biological effects. This work applies an optimization approach to design environmentally benign homogeneous catalysts, specifically those which contain transition metal centers. Two main tasks must be achieved in order to perform the molecular design of a novel catalyst: biological and chemical properties must be estimated directly from the molecular structure, and the resulting optimization problem must be solved in a reasonable time. In this work, connectivity indices are used for the first time to predict the physical properties of a homogeneous catalyst. The existence of multiple oxidation states for transition metals requires a reformulation of the original equations for these indices. Once connectivity index descriptors have been defined for transition metal catalysts, structure-property correlations are developed based on regression analysis using literature data for various properties of interest. These structure-property correlations are then used within an optimization framework to design novel homogeneous catalyst structures for use in a given application. The use of connectivity indices, which define the topology of the molecule within the formulation, guarantees that a complete molecular structure is obtained when the global optimum is found. The problem is then reformulated to create a mixed-integer linear program. To solve the resulting optimization problem, two methods are used: Tabu search (a stochastic method) and outer approximation (a deterministic approach). The solution methods are compared using an example involving the design of an environmentally-benign homogeneous catalyst containing molybdenum.
1. Introduction
Transition metal catalysts play a crucial role in many industrial applications, including the manufacture of lubricants, smoke suppressants, corrosion inhibitors and pigments. The development of novel catalysts is commonly performed using a trial-and-error approach which is costly and time-consuming. The application of computer-aided molecular design
(CAMD) to this problem has the potential to greatly decrease the time and effort required to improve current catalytic materials in terms of their efficacy and biological effects. This work applies an optimization approach to design environmentally benign homogeneous catalysts, specifically those which contain transition metal centers. The use of optimization techniques coupled with molecular design, along with property estimation methods, allows the determination of candidate molecules matching a set of target properties. For example, it has been reported (Hairston, 1998) that a computational algorithm has been successfully implemented to design a new pharmaceutical which fights cancer. This work employs connectivity indices, which are numerical values describing the electronic structure of a molecule, to characterize the molecule and to correlate its internal structure with physical properties of interest. Kier and Hall (1976) report correlations between connectivity indices and many key properties of organic compounds, such as density, solubility and toxicity. The correlations used to compute the physical properties are combined with structural constraints and reformulated into an MINLP, which is then solved via various methods to generate a list of near-optimal molecular structures. Raman and Maranas (1998) first employed connectivity indices within an optimization framework, and Camarda and Maranas (1999) used connectivity indices to design polymers with prespecified values of specific properties. An application of connectivity indices to the computational molecular design of pharmaceuticals was described by Siddhaye et al. (2000). In earlier molecular design work, group contribution methods were used to estimate the values of physical properties, as in Gani et al. (1989), Venkatasubramanian et al. (1994), and Maranas (1996). The connectivity indices, however, have the advantage that they take into account the internal molecular structure of a compound. The property predictions generated from these indices are thus more accurate than those from group contributions; furthermore, when a molecular design problem is solved using these indices, a complete molecular structure results, and no secondary problem must be solved to recover the final molecular structure.
2. Property Prediction via Connectivity Indices
The basis for many computational property estimation algorithms is a decomposition of a molecule into smaller units. Topological indices are defined over a set of basic groups, where a basic group is defined as a single non-hydrogen atom in a given valency state bonded to some number of hydrogen atoms. Table 1 gives the basic groups used in this work, along with the atomic connectivity indices for each type of group. In this table, the δ values are the simple atomic connectivity indices for each basic group, and refer to the number of bonds which can be formed by a group with other groups. The δᵛ values are atomic valence connectivity indices, which describe the electronic structure of each basic group, including lone-pair electrons and electronegativity. For basic groups involving carbon, oxygen, and halogen atoms, the definitions of these indices are from the work of Bicerano (1996). However, these indices assume the non-hydrogen atom can only have one valency state. For transition metals, which can assume multiple valency states, the definition of δᵛ must be extended. We have defined δᵛ based on the number of electrons participating
in the bonding, instead of those present in the outer shell. The resulting values of δᵛ for the molybdenum groups are listed in Table 1, along with values for other groups from Bicerano (1996). Note that atomic connectivity indices can be defined for any basic group, and the small table of groups used here is merely for illustrative purposes.

Table 1: Basic Groups and their Atomic Connectivity Indices.

Group               δ    δᵛ
-CH3                1    1
-CH2-               2    2
-CH<                3    3
-Cl                 1    0.778
-OH                 1    5
-O-                 2    6
>Mo< (five bonds)   5    0.139
>Mo< (six bonds)    6    0.171
Once a molecule is decomposed into its basic groups, and the atomic connectivity indices for those groups are known, molecular connectivity indices can be computed for the entire molecule. The zeroth-, first- and second-order molecular connectivity indices ⁰χ, ¹χ and ²χ are sums over each basic group, each bond and each triplet respectively, and are related to the atomic indices in the following manner:

$$ {}^{0}\chi = \sum_{i \in G} \frac{1}{\sqrt{\delta_i}}, \qquad {}^{1}\chi = \sum_{(i,j) \in B} \frac{1}{\sqrt{\delta_i\,\delta_j}}, \qquad {}^{2}\chi = \sum_{(i,j,l) \in T} \frac{1}{\sqrt{\delta_i\,\delta_j\,\delta_l}} $$
where G is the set of all basic groups in the molecule, B is the set of all bonds, and T is the set of all triplets. The valence molecular connectivity indices are computed analogously using the valence atomic connectivity indices δᵛ. Once the equations defining the molecular connectivity indices are in place, these indices can be used in empirical correlations to predict the physical properties of novel transition-metal catalysts. For example, the correlation derived in this work for density is:

$$ \rho = 35.81 - 44.06\,{}^{0}\chi - 0.2227\,{}^{0}\chi^{v} - 5.748\,{}^{1}\chi + 0.0522\,{}^{1}\chi^{v} + 31.38\,{}^{2}\chi - 0.037\,{}^{2}\chi^{v} + 15.91\,({}^{0}\chi)^2 + 0.0236\,({}^{0}\chi^{v})^2 - 4.203\,({}^{1}\chi)^2 - 0.0022\,({}^{1}\chi^{v})^2 + 0.1592\,({}^{2}\chi)^2 + 0.0006\,({}^{2}\chi^{v})^2 - 37.18\,(\cdots) $$
Since connectivity indices are defined in a very general way, they are capable of describing any molecule, and thus correlations based on them tend to be widely applicable and fairly accurate over a wide range of compounds. Using such correlations, an optimization problem has been formulated which has as its optimal solution a molecule which most closely matches a set of target property values for a molybdenum catalyst.
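The sketch below shows how the molecular indices above can be evaluated for a fixed structure, using the δ (or δᵛ) values of Table 1. The worked example uses our reading of the Figure 2 candidate, Cl-CH2-Mo(OH)3-Cl, so both the structure encoding and the function name are illustrative assumptions.

```python
import math

def connectivity_indices(delta, bonds):
    """Zeroth-, first- and second-order molecular connectivity indices.

    delta: {group index: atomic index (delta or delta^v from Table 1)}
    bonds: list of (i, j) pairs; triplets are recovered as two bonds
    sharing a common centre group, which is the set T of the text."""
    chi0 = sum(1.0 / math.sqrt(d) for d in delta.values())
    chi1 = sum(1.0 / math.sqrt(delta[i] * delta[j]) for i, j in bonds)
    adj = {i: set() for i in delta}
    for i, j in bonds:
        adj[i].add(j)
        adj[j].add(i)
    chi2 = 0.0
    for j in delta:                               # centre of each triplet
        nbrs = sorted(adj[j])
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                chi2 += 1.0 / math.sqrt(delta[nbrs[a]] * delta[j] * delta[nbrs[b]])
    return chi0, chi1, chi2

# Cl-CH2-Mo(OH)3-Cl: simple delta values from Table 1 (Mo with five bonds).
delta = {0: 1, 1: 2, 2: 5, 3: 1, 4: 1, 5: 1, 6: 1}
bonds = [(0, 1), (1, 2), (2, 3), (2, 4), (2, 5), (2, 6)]
print(connectivity_indices(delta, bonds))
```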
3. Problem Formulation The optimization problem which determines the best molecule for a given application uses an objective function which minimizes the difference between the target property values and the estimated values of the candidate molecule. This can be written as
$$ \min \; Z = \sum_{m \in R} \left| \frac{P_m - P_m^{\,target}}{P_m^{\,scale}} \right| $$
where R is the set of all targeted properties, P_m is the estimated value of property m, P_m^scale is a scale factor used to weight the importance of one property relative to another, and P_m^target is the target value for property m. The molecule is represented mathematically using two sets of binary variables: a partitioned adjacency matrix with elements a(i,j,k), which are one if basic groups i and j are bonded with a k-th multiplicity bond, and zero otherwise. In the example presented here, the basic groups can only form single bonds, and thus the index k will be dropped. This matrix is partitioned such that specific rows are preassigned to different basic groups, so that it can be determined a priori which δ_i and δ_i^v values should be used for each basic group i in the molecule. Since we do not know how many of each type of group will occur in the final optimal molecule, the partitioned adjacency matrix will have many rows which do not correspond to a basic group. The binary variable w_i is set to one if the i-th group in the adjacency matrix exists in the molecule, and is zero otherwise. In order to store the existence of a triplet in the molecule (to compute ²χ), a new binary variable y(i,j,l) is defined: an element y(i,j,l) is equal to one if group i is bonded to group j, and group j is bonded to group l. These three sets of variables provide sufficient information to compute the connectivity indices, and thus to estimate molecular properties, as sketched below. These data structures are then included within the equations for the connectivity indices to allow the structure of the molecule to be used to estimate physical properties. Along with these definitions, property correlations using the connectivity indices must also be included in the overall formulation. Finally, structural feasibility constraints are needed to ensure that a chemically feasible molecule is derived. In order to guarantee that all the groups in the molecule are bonded together as a single unit, we include the constraints of a network flow problem in the formulation. A feasible solution to a network flow problem across the bonds of a molecule is a necessary and sufficient condition for connectedness, and the constraints required are linear and introduce no new integer variables. Other constraints include bounds on the variables and property values. The problem written in this form is an MINLP, which then must be solved to obtain the desired structures.
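To make the role of the y(i,j,l) variables concrete, the snippet below evaluates them for a fixed 0-1 adjacency matrix; in the actual formulation the product of binaries is written as linear constraints rather than evaluated directly, and all names here are illustrative.

```python
import itertools

def triplet_indicators(a, n):
    """y(i, j, l) = 1 iff a(i, j) = 1 and a(j, l) = 1, for distinct i, j, l.

    a is the adjacency matrix with the multiplicity index k dropped, as
    in the single-bond example of the text.  In the MILP the product
    below is linearised with standard inequality constraints."""
    y = {}
    for i, j, l in itertools.product(range(n), repeat=3):
        if len({i, j, l}) == 3:
            y[i, j, l] = a[i][j] * a[j][l]
    return y
```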
4. Solution Methods
In this work, two types of solution methods have been tested: the deterministic method known as outer approximation (Duran and Grossmann, 1986), and the stochastic algorithm Tabu search (Glover, 1986, 1997). While outer approximation guarantees that the global optimum will be found within a finite number of steps for a convex MINLP, the formulation
as listed here is nonconvex. Linear reformulations of the equations for y and of the objective function have been implemented, which leave the property constraints as the only nonlinear equations. The Tabu search algorithm is a meta-heuristic approach which guides a local search procedure and is capable of escaping local minima. Many issues must be addressed when applying Tabu search to molecular design problems. Tabu search avoids local minima by storing a memory list of previous solutions, and the length of this list must be set. Furthermore, strategies for determining when a more thorough search of a local region is needed must also be devised. A discussion of these issues is given in Lin (2002). Other applications of Tabu search within chemical engineering are described in Lin and Miller (2000, 2001).
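A minimal Tabu search loop of the kind described above looks as follows; neighbour generation, list length and intensification rules are exactly the tuning issues the text refers to, and all names here are illustrative.

```python
def tabu_search(initial, neighbours, objective, tabu_len=20, iters=500):
    """Guided local search with a fixed-length memory of visited solutions.

    neighbours(x) returns candidate moves from x; solutions on the tabu
    list are excluded, which lets the search climb out of local minima."""
    current = best = initial
    tabu = [hash(str(initial))]
    for _ in range(iters):
        moves = [x for x in neighbours(current) if hash(str(x)) not in tabu]
        if not moves:
            break
        current = min(moves, key=objective)   # best admissible move
        tabu.append(hash(str(current)))
        tabu = tabu[-tabu_len:]               # forget the oldest entries
        if objective(current) < objective(best):
            best = current
    return best
```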
5. Example
The example presented here produces a potential molecular structure for a homogeneous molybdenum catalyst for epoxidation reactions. The possible basic groups in the molecule are those listed in Table 1, and the maximum number of basic groups allowed in the molecule is 15. A target value was set for the density, and all structural feasibility constraints were employed. The problem was solved using outer approximation, accessed through the GAMS modeling language on a Sun Ultra 10 workstation. A resource limit of 20 minutes was set, and no guaranteed optimal solution was found. The best integer solution found is shown in Figure 1. The value of the density for this molecule is 4382 kg/m³, which is far from the target value of 4173 kg/m³.

Figure 1: Candidate catalyst molecule found using DICOPT.

When Tabu search is applied to this example, near-optimal structures are found in a much shorter time. 100 runs of the code were made, each of 90 seconds duration. The best structure found in most of the runs (80%) is presented in Figure 2. This structure has a density of 4172 kg/m³, which deviates only slightly from the target. Note that many near-optimal structures were also found, which can be combined into a list from which a catalyst designer could choose a candidate for synthesis and experimental testing.
Figure 2: Candidate catalyst molecule found by Tabu search.
6. Conclusions
This work has focused on the use of optimization techniques within a molecular design application to derive novel catalyst structures. The use of connectivity indices to relate internal molecular structure to physical properties of interest provides an efficient way both to estimate property values and to recover a complete description of the new molecule after an optimization problem is solved. The optimization problem has been formulated as an MINLP, and the fact that it can be solved at modest computational expense (using Tabu search) raises the possibility that the synthesis route for such a molecule could be derived and evaluated along with the physical properties of that molecule. Further work will include such synthesis analysis, as well as the inclusion of a much larger set of physical properties and basic groups from which to build molecules, and will work toward the design of mixtures and the prediction of mixture properties via connectivity indices.
7. References
Bicerano, J., 1996, Prediction of Polymer Properties, Marcel Dekker, New York.
Camarda, K.V. and Maranas, C.D., 1999, Ind. Eng. Chem. Res., 38, 1884.
Duran, M.A. and Grossmann, I.E., 1986, Math. Prog., 36, 307.
Gani, R., Tzouvars, N., Rasmussen, P. and Fredenslund, A., 1989, Fluid Phase Equil., 47, 133.
Glover, F., 1986, Comp. and Op. Res., 5, 533.
Glover, F. and Laguna, M., 1997, Tabu Search, Kluwer Academic Publishers, Boston.
Hairston, D.W., 1998, Chem. Eng., Sept., 30.
Kier, L.B. and Hall, L.H., 1976, Molecular Connectivity in Chemistry and Drug Research, Academic Press, New York.
Lin, B., Miller, D.C., 2000, AIChE Annual Meeting, Los Angeles, CA.
Lin, B., Miller, D.C., 2001, AIChE Annual Meeting, Reno, NV.
Lin, B., 2002, Ph.D. Thesis, Michigan Technological University.
Maranas, C.D., 1996, Ind. Eng. Chem. Res., 35, 3403.
Raman, V.S. and Maranas, C.D., 1998, Comput. Chem. Eng., 22, 747.
Siddhaye, S., Camarda, K.V., Topp, E. and Southard, M.Z., 2000, Comp. Chem. Eng., 24, 701.
Venkatasubramanian, V., Chan, K. and Caruthers, J.M., 1994, Comp. Chem. Eng., 18, 833.
Complete Separation System Synthesis of Fractional Crystallization Processes
L.A. Cisternas(1), J.Y. Cueto(1) and R.E. Swaney(2)
(1) Dept. of Chemical Engineering, Universidad de Antofagasta, Antofagasta, Chile
(2) Dept. of Chemical Eng., University of Wisconsin-Madison, Madison, WI, USA
Abstract
A methodology is presented for the synthesis of fractional crystallization processes. The methodology is based on the construction of four networks. The first network is based on the identification of feasible thermodynamic states; in this network the nodes correspond to multiple saturation points, solute intermediates, process feeds and end products. The second network is used to represent the variety of tasks that can be performed at each multiple saturation point. These tasks include cooling crystallization, evaporative crystallization, reactive crystallization, dissolution, and leaching. Heat integration is included using a heat exchanger network, which can be regarded as a transhipment problem. The last network is used to represent filtration and cake washing alternatives. The cake wash and task networks are modelled using disjunctive programming and then converted into a mixed integer program. The method is illustrated through the design of a salt separation example.
1. Introduction
There are two major approaches to the synthesis of crystallization-based separations. In one approach, the phase equilibrium diagram is used for the identification of separation schemes (for example, Cisternas and Rudd, 1993; Berry et al., 1997). While these procedures are easy to understand, they are relatively simple to implement only for simple cases. For more complex systems, such as multicomponent systems and multiple temperatures of operation, the procedure is difficult to implement because the graphical representation is complex and because there are many alternatives to study. The second strategy is based on simultaneous optimization using mathematical programming, based on a network flow model between feasible thermodynamic states (Cisternas and Swaney, 1998; Cisternas, 1999; Cisternas et al., 2001; Cisternas et al., 2003). In crystallization and leaching operations, filtration, washing and drying are often required downstream to obtain the product specifications. For example, the filter cake must usually be washed to remove residual mother liquor, either because the solute is valuable or because the cake is required in a semiclean or pure form. These issues have been discussed by Chang and Ng (1998), who utilized heuristics for design purposes. The objective of this study is to address these issues using mathematical programming. This work constitutes part of our overall effort on the synthesis of fractional crystallization processes. Drying is not included in this method because, as it normally does not involve a recycle stream, the dryer can be considered as a stand-alone operation.
2. Model Development
2.1. Networks for fractional crystallization
The model proposed in this paper is composed of four networks: (1) the thermodynamic state network, (2) the task network, (3) the heat integration network, and (4) the cake wash network. The first three networks have been described in our previous works; therefore, emphasis here is given to the cake wash network. The first network is based on the detection of feasible thermodynamic states. Using equilibrium data for a candidate set of potential operating point temperatures, a thermodynamic state network flow model is created to represent the set of potential separation flowsheet structures that can result. This representation was presented by Cisternas and Swaney (1998) for two-solute systems, by Cisternas (1999) for multicomponent systems, and by Cisternas et al. (2003) for metathetical salts. Figure 1 shows the thermodynamic state network representation for a two-solute system at two temperatures. The structure contains feeds, two multiple saturation points, and products. The second network, also shown in Figure 1, is the task network (Cisternas et al., 2001). Each multiple saturation state can be used for different tasks depending on the conditions/characteristics of the input and output streams. For example, if solvent is added to an equilibrium state, the task can be: (1) a leaching step, if the feed is solid; (2) a cooling crystallization step, if the feed is a solution at a higher temperature; or (3) a reactive crystallization step, if the feed is a crystalline material that decomposes at this temperature or in the solution fed to this state (for example, the decomposition of carnallite to form potassium chloride). The third network, a heat exchange network, can be regarded as a transhipment problem, as in Papoulias and Grossmann (1983); this transhipment problem can be formulated as a linear programming problem. In this representation, hot streams and cold streams correspond to the arcs in the thermodynamic state network. The fourth network is the cake wash network. Cake washing can be accomplished by two methods: (a) the cake may be washed prior to removal from the filter by flushing it with washing liquor, which can be done with both batch and continuous filters; (b) the cake may be removed from the filter and then washed in a mixer, the wash suspension obtained being then separated with the filter. Figure 2 shows both alternatives for removing the residual mother liquor of concentration y_{e-1}. Figure 2 shows only one stage, but washing may be performed in one or several stages on either batch or continuous filters. In this work countercurrent washing is not considered; as a result, the first stage provides the most concentrated solution and the last stage the least. If operation states are near-equilibrium states, then the mother liquor concentration in the cake is substantially that of a saturated solution at the final temperature in the process.
2.2. Mathematical formulation
Having derived the networks for the separation problem, a mathematical programming formulation is presented for each network to select the optimum flowsheet alternative of the separation sequence.
Figure 1. Thermodynamic state network and task network.
Figure 2. Cake wash network for stage e.

The mathematical formulation for the thermodynamic state network is the same as that developed by Cisternas (1999) and Cisternas et al. (2003); a brief description is given here. First, the set of thermodynamic state nodes is defined as S = {s: all nodes in the system}. This includes feeds, products, multiple saturation points or operation points, and intermediate solute products. The components, solutes and solvents, are denoted by the set I = {i}. The arcs, which denote streams between nodes, are denoted by L = {l}. Each stream l is associated with the positive variable mass flow rate w_l and the parameter x_il giving the fixed composition of each component in the stream. The constraints that apply are: (a) mass balances for each component around multiple saturation and intermediate product nodes,

$$ \sum_{l \in S^{in}(s)} w_l\,x_{il} \;-\; \sum_{l \in S^{out}(s)} w_l\,x_{il} \;-\; \sum_{l \in Lq \cap S^{out}(s)} h_l\,w_l\,xy_{il} \;=\; 0, \qquad s \in S,\; i \in I \tag{1} $$
where Lq is the subset of L of solid product streams, h_l is the mass ratio of residual liquid retained in the cake pores to the solid product l, and xy_il is the concentration of the mother liquor in equilibrium with solid product l. Also, S^in(s) and S^out(s) are the sets of input and output streams of node s; and (b) specifications for the feed flow rates,

$$ \sum_{l \in F(s)} w_l\,x_{il} = C_{si}, \qquad s \in S_F,\; i \in I_F(s) $$

where C_si is the desired flow rate of species i in feed s.
The heat integration network follows the approach presented by Papoulias and Grossmann (1983). First, it is considered that there is a set K = {k} of temperature intervals that are based on the inlet temperatures of the process streams, the highest and lowest stream temperatures, and the inlet temperatures of the intermediate utilities that fall within the range of temperatures of the process streams. The only constraints that apply are heat balances around each temperature interval k:

$$ R_k - R_{k-1} - \sum_{m \in V_k} Q_m^{S} + \sum_{n \in U_k} Q_n^{W} \;=\; \sum_{l \in H_k} w_l\,(C_p \Delta T)_{lk} \;-\; \sum_{l \in C_k} w_l\,(C_p \Delta T)_{lk}, \qquad k \in K \tag{2} $$
where Q_m^S, Q_n^W and R_k are positive variables that represent the heat load of hot utility m, the heat load of cold utility n, and the heat residual exiting interval k, respectively. (C_pΔT)_lk are known parameters that represent the heat content per unit mass of hot stream l ∈ H_k and cold stream l ∈ C_k in interval k. H_k, C_k, V_k and U_k are the hot stream, cold stream, hot utility and cold utility sets, respectively, in interval k. A task network is constructed for each multiple saturation point node s. The mathematical formulation, which is close to that in Cisternas et al. (2001), includes mass and energy balances, logical relations to select the task based on input/output stream properties, and cost evaluations. The formulation uses disjunctive programming. A cake wash network is constructed for each solid product stream l ∈ Lq. Let E(l) = {e} define the set of washing/reslurry stages for the solid product stream l ∈ Lq. The variables are defined as follows: y_{l,e,i} is the concentration of species i in the residual mother liquor of solid stream l at the output of wash/reslurry stage e; z_{l,e,i} and r_{l,e,i} are the input and output concentrations in the washing liquid for solid stream l at stage e; ypw_{l,e,i}, ypr_{l,e,i}, ymw_{l,e,i}, ymr_{l,e,i}, zw_{l,e,i}, zr_{l,e,i}, rw_{l,e,i} and rr_{l,e,i} are the concentrations of the internal streams in stage e (see Figure 2). The wash efficiency parameter Ew_{l,e,i} for species i in solid stream l at stage e can be defined as Ew_{l,e,i} = (ymw_{l,e,i} − ypw_{l,e,i})/(y_{l,e−1,i} − ypw_{l,e,i}), for l ∈ Lq, e ∈ E(l), i ∈ I. The first two constraints in Eq. (3) below are the efficiency constraints for the wash and reslurry/filter steps. Note that the efficiency for perfect mixing in the wash mixer is equal to 1. The last two constraints in Eq. (3) are the mass balances for species i at stage e of washing solid stream l.
leLq.eE £(/), / e /
nw_{l,e} and nr_{l,e} are parameters that represent the mass ratio of wash liquid to residual liquor in the cake used in the wash and reslurry/filter steps, respectively. These ratios are referred to as the wash ratio or number of displacements. This network requires the use of discrete variables, yw_{l,e} and yr_{l,e}, to represent the choices of wash, reslurry/filter, or neither, for each solid product stream l ∈ Lq at stage e. The corresponding logical relations are:
-^y^i,e
-^yn,e
yn,e
yi,e,i =
y^^i,e,i
yp^i,e,i
= yi,e-i,i
yi,e,i = yi,e,i y^\e,i
y^fiei
ypn,e.i = yi,e-u ytnWi^.
ypn,e.i=^
yp^i,e,i
=
-'yn.e
yi,e,i =
ymr,^.=0 ^^l,e,i
-^y^i,e
ypn,e,i = 0
= 0
y^^i,e,i
- ^
yp^i,e,i
= 0
=0
V \e,i
=0
V
(4)
^^l,e,i=^
^^l,e,i = 0 ^n,ej = Zi,e,i
^^le,i=^
Q^l,e=^^lA
W°
Qfie =nnehi w'
Qn,e=o Cke=Cfw Cvi^ = Cvw
Qw,^=0
Cf,.=Cfr QWf^
Cv.
=CvrQr.
^^l,ej ~ ^l,e,i
Gw,, =0 Qn,e=o Cfl,e=^
This logical relation is rewritten as mixed-integer linear equations (sketched below). The concentration at the last stage f must satisfy the impurity level IL_{l,i}, that is, y_{l,f,i}·h_l ≤ IL_{l,i} for l ∈ Lq, i ∈ I. The objective function is to minimize the venture cost. The following equation can be used as an objective function:

$$ \min \; \sum_{s \in S_M} \sum_{t \in T(s)} \left( FC_{ts} + VC_{ts} + c_c\, Q_{ts}^{c} + c_f\, Q_{ts}^{f} \right) + \sum_{m \in V} c_m\, Q_m^{S} + \sum_{n \in U} c_n\, Q_n^{W} + \sum_{l \in Lq} \sum_{e} \left( Cf_{l,e} + Cv_{l,e} \right) \tag{5} $$
Eq. (5) represents the total cost, given by the investment and utility costs. In this way, the objective function in Eq. (5), subject to the constraints in Equations (1) to (4), defines a mixed integer linear programming (MILP) problem, whose numerical solution can be obtained with standard algorithms. In Eq. (5), Q_ts^c, Q_ts^f, VC_ts and FC_ts are the heat loads of crystallization or dissolution, the heat loads of evaporation, and the variable and fixed costs for the equipment associated with task t of multiple saturation point s.
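As a sketch of how a disjunct of Eq. (4) can be turned into mixed-integer linear constraints, the function below checks a candidate point against a big-M form of the wash / reslurry / no-treatment choice for one species at one stage. The paper's actual reformulation may differ (e.g. a convex-hull form), and all names and the value of M are illustrative.

```python
def wash_disjunction_feasible(yw, yr, y_prev, ymw, ymr, y_out, M=1e3, tol=1e-6):
    """Big-M check of the wash / reslurry / pass-through disjunction.

    yw, yr: 0-1 treatment choices; y_prev, ymw, ymr, y_out: stage
    concentrations (inlet residual liquor, washed, reslurried, outlet)."""
    cons = [
        y_out - ymw - M * (1 - yw),      # yw = 1 forces y_out = ymw
        ymw - y_out - M * (1 - yw),
        y_out - ymr - M * (1 - yr),      # yr = 1 forces y_out = ymr
        ymr - y_out - M * (1 - yr),
        y_out - y_prev - M * (yw + yr),  # no treatment: concentration unchanged
        y_prev - y_out - M * (yw + yr),
        yw + yr - 1,                     # at most one treatment per stage
    ]
    return all(c <= tol for c in cons)
```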
3. Illustrative Example
This example considers the production of potassium chloride from 100,000 ton/year of sylvinite (47.7% KCl, 52.3% NaCl). Data are given in Cisternas et al. (2001). The solution found is shown in Figure 3. The problem formulation, with 293 equations and 239 variables (27 binary variables), was solved using OSL2 (GAMS). The optimal solution
divides the feed into two parts. A sensitivity analysis shows that the product impurity level and the level of residual liquid retained in the cake can affect the solution and cost by 20%.
Figure 3. Solution for the example (leaching at 100°C, wash unit, reslurry step, KCl cake product).
4. Conclusions
The objective of this paper has been to present a method for determining the desired process flowsheet for fractional crystallization processes including cake washing. To achieve this goal, a systematic model was introduced consisting of four networks: the thermodynamic state network, the heat integration network, the task network, and the cake wash network. Once the representation is specified, the problem is modelled as an MILP problem. From the example, we can conclude that the model can be useful in the design and study of fractional crystallization processes. Results from the example indicate that the product impurity level and the level of residual liquid retained in the cake can affect the optimal solution.
5. References
Berry, D.A., Dye, S.R., Ng, K.M., 1997, AIChE J., 43, 91.
Chang, W.C., Ng, K.M., 1998, AIChE J., 44, 2240.
Cisternas, L.A., Rudd, D.F., 1993, Ind. Eng. Chem. Res., 32, 1993.
Cisternas, L.A., Swaney, R.E., 1998, Ind. Eng. Chem. Res., 37, 2761.
Cisternas, L.A., 1999, AIChE J., 45, 1477.
Cisternas, L.A., Guerrero, C.P., Swaney, R.E., 2001, Comp. & Chem. Engng., 25, 595.
Cisternas, L.A., Torres, M.A., Godoy, M.J., Swaney, R.E., 2003, AIChE J., in press.
Papoulias, S.A., Grossmann, I.E., 1983, Comp. & Chem. Engng., 7, 707.
Turkay, M., Grossmann, I.E., 1996, Ind. Eng. Chem. Res., 35, 2611.
6. Acknowledgment The authors wish to thank CONICYT for financial support (Fondecyt project 1020892).
Mathematical Modelling and Design of an Advanced Once-Through Heat Recovery Steam Generator Marie-Noelle Dumont, Georges Heyen LASSC, University of Liege, Sart Tilman B6A, B-4000 Liege (Belgium) Tel:+32 4 366 35 23 Fax: +32 4 366 35 25 E-mail: [email protected]
Abstract
The once-through heat recovery steam generator design is ideally matched to very high temperatures and pressures, well into the supercritical range. Moreover, this type of boiler is structurally simpler than a conventional one, since no drum is required. A specific mathematical model has been developed, and a thermodynamic model has been implemented to suit very high pressures (up to 240 bar) and sub- and supercritical steam properties. We illustrate the use of the model with a 180 bar once-through boiler (OTB).
1. Introduction
Nowadays, combined cycle (CC) power plants have become a good choice for producing energy, because of their high efficiency and the use of low-carbon-content fuels (e.g. natural gas), which limits greenhouse gas production to a minimum. CC plants couple a Brayton cycle with a Rankine cycle: the hot exhaust gas available at the outlet of the gas turbine (Brayton cycle) is used to produce high-pressure steam for the Rankine cycle. The element where the steam heating takes place is the heat recovery steam generator (HRSG). High efficiency in CC plants (up to 58%) has been reached mainly for two reasons:
• improvements in gas turbine technology (i.e. higher inlet temperature);
• improvements in HRSG design.
We are interested in the second point. The introduction of several pressure levels with reheat in the steam cycle of the HRSG allows more energy to be recovered from the exhaust gas. Exergy losses decrease, due to a better matching of the gas curve with the water/steam curve in the heat exchange diagram (Dechamps, 1998). Going to supercritical pressure with the OTB technology is another way to better match those curves and thus improve the CC efficiency. New improvements are announced in the near future to reach efficiencies as high as 60%. In the present work we propose a mathematical model for the simulation and design of the once-through boiler. It is not possible to use the empirical equations employed for the simulation of each part of a traditional boiler; general equations have to be used for each tube of the boiler. Moreover, there is a more significant evolution of the water/steam flow pattern type, due to the complete water vaporization inside the tubes (in a conventional boiler, the circulation flow is adjusted to reach a vapor fraction between 20% and 40% in the tubes and the vapor is separated in the drum). Changes of flow pattern induce a modification in the evaluation of the internal heat transfer coefficient as well as in the pressure drop formulation. The right equation has to be selected dynamically according to the flow conditions prevailing in the tube.
The uniform distribution of water among parallel tubes of the same geometry subjected to equal heating is not ensured from the outset but depends on the pressure drop in the tubes. The disappearance of the drum introduces a different understanding of the boiler's behavior: the effects of the various two-phase flow patterns have to be handled mathematically, and the stability criterion has changed.
2. Mathematical Model
2.1. Heat transfer
2.1.1. Water
Mathematical models for traditional boilers are usually based on empirical equations corresponding to each part of the boiler: the economizer, the boiler and the superheater. Those three parts of the boiler are clearly separated, so it is not difficult to choose the right equation. In a once-through boiler this separation is not so clear: we first have to estimate the flow pattern in the tube and then choose the equation to be used. "Liquid single phase" and "vapor single phase" are easily located with temperature and pressure data. According to Gnielinski (1993), equation (1) applies for turbulent and hydrodynamically developed flow:

$$ Nu = \frac{(\xi/8)\,(Re - 1000)\,Pr}{1 + 12.7\,\sqrt{\xi/8}\,\left(Pr^{2/3} - 1\right)}, \qquad \xi = \left(1.82\,\log_{10}Re - 1.64\right)^{-2} \tag{1} $$
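For reference, Eq. (1) translates directly into a few lines of Python; the Reynolds-number guard reflects the correlation's turbulent-flow validity, and the function name is ours.

```python
import math

def nu_gnielinski(re, pr):
    """Single-phase Nusselt number, Eq. (1) (Gnielinski, 1993),
    for turbulent, hydrodynamically developed flow in a tube."""
    if re <= 2300:
        raise ValueError("Gnielinski correlation requires turbulent flow")
    xi = (1.82 * math.log10(re) - 1.64) ** -2          # friction factor
    return (xi / 8) * (re - 1000) * pr / (
        1 + 12.7 * math.sqrt(xi / 8) * (pr ** (2 / 3) - 1))

print(round(nu_gnielinski(5e4, 1.0), 1))   # e.g. Re = 5e4, Pr = 1 -> ~128
```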
During vaporization, different flow patterns can be observed, for which the rate of heat transfer also differs. In the stratified-wavy flow pattern, incomplete wetting has an effect on the heat transfer coefficient, and a reduction can appear for this type of flow pattern. Computing the conditions where a change in flow pattern occurs is therefore useful. A method to establish a flow pattern map in a horizontal tube for given pressure and flow conditions is clearly exposed by Steiner (1993), and has been used in this study. The different flow patterns in the vaporisation zone of the OTB are given in Figure 1. The heat transfer coefficient is estimated from numerous data; it is a combination of a convective heat transfer coefficient and a nucleate boiling heat transfer coefficient.

Figure 1: Flow pattern map in the boiling zone (flow pattern diagram for horizontal flow, VDI (1993); stratified, wavy, plug/slug, annular and mist regions versus the Martinelli parameter X, for flow in tubes at 5.06 and 5.166 t/h, once-through versus traditional HRSG).

Figure 2: Internal heat transfer coefficient.
$$ \alpha(z) = \left[ \alpha_{conv}(z)^3 + \alpha_{nb}(z)^3 \right]^{1/3} \tag{2} $$

$$ \frac{\alpha_{conv}(z)}{\alpha_{LO}} = \left\{ (1-x)^{0.01} \left[ (1-x)^{1.5} + 1.9\,x^{0.6} \left( \frac{\rho_L}{\rho_G} \right)^{0.35} \right]^{-2.2} + x^{0.01} \left[ \frac{\alpha_{GO}}{\alpha_{LO}} \left( 1 + 8\,(1-x)^{0.7} \left( \frac{\rho_L}{\rho_G} \right)^{0.67} \right) \right]^{-2} \right\}^{-0.5} \tag{3} $$
$$ B = f(\text{heat flow, pressure, roughness, geometry}) \tag{4} $$

α_LO is the heat transfer coefficient with the total mass velocity in the form of liquid, and α_GO is the heat transfer coefficient with the total mass velocity in the form of vapor. The internal coefficients computed for all the tubes of the OTB are presented in Figure 2.
2.1.2. Fumes
There is no difference between the equations used for a conventional heat recovery boiler and a once-through heat recovery boiler. The main part of the heat transfer coefficient is the convective part (low fume temperatures). The effect of turbulence has been introduced to reduce the heat transfer coefficient in the first few rows of the tube bundle. The main difficulty in evaluating the heat transfer coefficient on the fume side comes from the fins, which enhance the heat transfer but can also introduce other resistances to heat transfer, such as fouling on the surface of the fins or inadequate contact between the core tube and the fin base. There are two methods to evaluate the heat transfer coefficient:
• The first is based on a general equation for the Nusselt number in cross flow over pipes and on the efficiency of the fins. An apparent heat transfer coefficient is then computed with equation (6).
• The second is based on empirical correlations derived from experimental data. For more than four banks in staggered arrangement, equation (7) can be used.
It is not obvious to find the most appropriate correlation for a given fin geometry and tube bundle arrangement. The best option is to ask finned-tube manufacturers to provide their correlations for the heat transfer coefficient and fin efficiency corresponding to the required finned tube.
Figure 3: Heat exchange diagram.
Figure 4: Pressure drop and vapor fraction evolution.
$$ \alpha_{app} = \alpha \, \frac{A_b + \eta_f\,A_{fo}}{A} \tag{6} $$

$$ Nu_d = 0.38\,Re^{0.6} \left( \frac{A}{A_{po}} \right)^{-0.15} Pr^{1/3} \tag{7} $$
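A small sketch of the fin-efficiency weighting behind Eq. (6), as reconstructed above: only the finned fraction of the outer area is discounted by the fin efficiency. Area symbols follow the nomenclature; the function name is illustrative, and manufacturers' correlations should replace this in practice.

```python
def alpha_apparent(alpha, a_bare, a_fin, eta_fin):
    """Apparent fume-side coefficient: fin area weighted by fin efficiency,
    referred to the total outer area A = A_b + A_fo (cf. Eq. (6))."""
    a_total = a_bare + a_fin
    return alpha * (a_bare + eta_fin * a_fin) / a_total
```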
2.1.3. Overall heat transfer coefficient
Finally, the overall heat transfer coefficient is obtained from equation (8), and the global heat transferred by each tube is computed with equation (9). We call ΔT_sl the "semi-logarithmic temperature difference". It is the best compromise between the pure logarithmic temperature difference, which makes no sense here (only one tube), and the pure arithmetic temperature difference, which does not allow the evolution of the water properties along the tube to be followed. The heat exchange diagram of the OTB is presented in Figure 3.

$$ \frac{1}{\alpha} = \frac{1}{\alpha_{app}} + \frac{A\,e}{\lambda_w\,A_{wm}} + \frac{A}{\alpha_i\,A_i} \tag{8} $$

$$ Q = \alpha\,A\,\Delta T_{sl}, \qquad \Delta T_{sl} = \frac{T_{w2} - T_{w1}}{\ln \dfrac{T_{mf} - T_{w1}}{T_{mf} - T_{w2}}}, \quad T_{mf} = \frac{T_{mf,in} + T_{mf,out}}{2} \tag{9} $$
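The tube-by-tube computation implied by Eqs. (8)-(9) can be sketched as below: resistances are combined in series referred to the outer area, and the semi-logarithmic temperature difference uses the mean fume temperature, all following the reconstruction above. Variable names are ours, and e_wall (the wall thickness) is an assumption the nomenclature leaves implicit.

```python
import math

def overall_alpha(alpha_app, alpha_i, a_out, a_in, a_wall, e_wall, lam_wall):
    """Overall coefficient, Eq. (8): fume-side film, wall conduction and
    water-side film resistances in series, referred to the outer area."""
    r = 1 / alpha_app + a_out * e_wall / (lam_wall * a_wall) + a_out / (alpha_i * a_in)
    return 1 / r

def tube_duty(alpha, a_out, t_fume_mean, t_w1, t_w2):
    """Heat exchanged by one tube, Q = alpha * A * dT_sl, Eq. (9)."""
    dt_sl = (t_w2 - t_w1) / math.log((t_fume_mean - t_w1) / (t_fume_mean - t_w2))
    return alpha * a_out * dt_sl
```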
2.2. Pressure drop
2.2.1. Water

$$ \Delta P = f\,\frac{\rho V^2}{2}\,\frac{L}{d_i}, \qquad f = \frac{0.3164}{Re^{0.25}} \tag{10} $$
The coefficient f depends on the Reynolds number for flow within the tube. In laminar flow, the Hagen-Poiseuille law can be applied; in turbulent flow, the Blasius equation is used. The main difficulty is the evaluation of the water pressure drop during transition boiling. The pressure drop consists of three components: friction (ΔP_f), acceleration (ΔP_m) and static pressure (ΔP_g). In once-through horizontal-tube boilers ΔP_g = 0. The Lockhart-Martinelli formulation is used to estimate the friction term:

$$ \Delta P_{2\,phases} = \Phi_{lt}^{2}\,\Delta P_{liquid} \tag{11} $$
$$ \Phi_{lt}^{2} = 1 + \frac{C}{X} + \frac{1}{X^{2}}, \qquad X^{2} = \frac{(\Delta P / \Delta z)_{liquid}}{(\Delta P / \Delta z)_{vapor}} \tag{12} $$
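Eqs. (11)-(12) chain together as follows; the Lockhart-Martinelli constant C is not stated in the text, so C = 20 (turbulent liquid / turbulent vapor) is an assumption of this sketch.

```python
import math

def two_phase_friction(dp_liquid, dp_vapor, c=20.0):
    """Two-phase friction pressure drop via Eqs. (11)-(12)."""
    x = math.sqrt(dp_liquid / dp_vapor)   # Martinelli parameter X
    phi2 = 1 + c / x + 1 / x ** 2         # two-phase multiplier
    return phi2 * dp_liquid               # dp_2ph = phi_lt^2 * dp_liquid
```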
The acceleration term is defined by equation (13), where α is the volume fraction of vapor (void fraction). It is recommended to discretize the tube into several short sections to obtain more accurate results (Figure 4).
$$ \Delta P_m = G^{2} \left[ \left( \frac{x^{2}}{\alpha\,\rho_{vap}} + \frac{(1-x)^{2}}{(1-\alpha)\,\rho_{liq}} \right)_{out} - \left( \frac{x^{2}}{\alpha\,\rho_{vap}} + \frac{(1-x)^{2}}{(1-\alpha)\,\rho_{liq}} \right)_{in} \right] \tag{13} $$
2.2.2. Fumes
The pressure drop in a tube bundle is given by equation (14). In this case the number of rows (N_R) plays an important role in the pressure drop evaluation. The coefficient f is more difficult to compute from generalized correlations; the easiest way is, once more, to ask the finned-tube manufacturer for an accurate correlation.
$$ \Delta P = f\,\frac{\rho V^{2}}{2}\,N_R \tag{14} $$
3. Stability
Stability calculations are necessary for the control of the water distribution over parallel tubes of the same form subjected to equal heating in forced-circulation HRSGs, and particularly in OTBs. The stability can be described with the stability coefficient S:

$$ S = \frac{\text{relative change in pressure drop}}{\text{relative change in flow rate}} = \frac{d(\Delta P)/\Delta P}{dM/M}, \qquad S > 0 \text{ stable}, \quad S < 0 \text{ unstable} $$

HRSG manufacturers try to keep the stability coefficient in the range 0.7-2. In the OTB design, inlet restrictions have been installed to increase the single-phase friction in order to stabilize the boiler. Based on the π-criterion (Li Rizhu and Ju Huaiming, 2002), the design has been realized with π about 2. This number will have to be reduced in the near future, once the various flow instabilities have been identified.
Figure 5: Stability example.
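Numerically, the stability coefficient can be estimated from any pressure-drop characteristic ΔP(M) by central differences, as in this sketch (names are ours):

```python
def stability_coefficient(dp_of_m, m, dm=1e-3):
    """S = (relative change in pressure drop) / (relative change in flow);
    S > 0 indicates a stable operating point on the dP(M) curve."""
    dp = dp_of_m(m)
    ddp = (dp_of_m(m + dm) - dp_of_m(m - dm)) / (2 * dm)   # d(dP)/dM
    return (ddp / dp) * m

# A single-phase, Blasius-like curve dP ~ M**1.75 gives S = 1.75 > 0,
# i.e. stable at any flow rate; two-phase curves can turn S negative.
print(round(stability_coefficient(lambda m: m ** 1.75, 1.0), 2))
```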
4. An Example
Results have been obtained for an OTB of pilot-plant size (42 rows): water at 10.25 t/h, Tin = 44°C, Tout = 500°C; fumes at 72.5 t/h, Tin = 592°C, Tout = 197°C. The simulation model has been implemented in the VALI software (Belsim); the simulation of the OTB needs 42 modules, one for each row of tubes. Since VALI implements numerical procedures to solve large sets of non-linear equations, all model equations are solved simultaneously. The graphical user interface allows easy modification of the tube connections and the modelling of multiple-pass bundles.
5. Conclusions and Future Work
The mathematical model of the once-through boiler has been used to better understand the behaviour of the boiler. Further mathematical developments still have to be made to refine the stability criteria and improve the OTB design. Automatic generation of alternative bundle layouts in the graphical user interface is also foreseen.
6. Nomenclature
A: total area of outer surface (m²)
A_b: bare tube outside surface area (m²)
A_fo: fin outside surface area (m²)
A_i: inside surface area (m²)
A_po: free area of tube outer surface (m²)
A_wm: mean area of homogeneous tube wall (m²)
C_p: specific heat capacity at constant pressure (J/kg/K)
d_i: tube internal diameter (m)
ΔP: pressure drop (bar)
f: pressure drop coefficient
G: mass flux (kg/m²/s)
H: enthalpy flow (kW)
N_R: number of rows in the bundle
Nu: Nusselt number, Nu = α d / λ
P: pressure (bar)
Pr: Prandtl number, Pr = C_p η / λ
Q: exchanged heat (kW)
Re: Reynolds number
T: temperature (K)
V: fluid velocity (m/s)
x: vapor mass fraction
X_i: component flow rate (kg/s)
α: heat transfer coefficient (kW/m²/K)
α(z): local heat transfer coefficient
λ: thermal conductivity (W/m/K)
ρ: density (kg/m³)
η: dynamic viscosity (Pa·s)
η_f: fin efficiency
7. References
Dechamps, P.J., 1998, Advanced combined cycle alternatives with the latest gas turbines, ASME J. Engrg. Gas Turbines Power, 120, 350-357.
Gnielinski, V., 1993, VDI Heat Atlas, sections Ga, Gb, VDI-Verlag, Düsseldorf, Germany.
Li Rizhu, Ju Huaiming, 2002, Structural design and two-phase flow stability test for the steam generator, Nuclear Engineering and Design, 218, 179-187.
Steiner, D., 1993, VDI Heat Atlas, section Hbb, VDI-Verlag, Düsseldorf, Germany.
8. Acknowledgements This work was financially supported by CMI Utility boilers (Belgium).
Synthesis and Optimisation of the Recovery Route for Residual Products
Joaquim Duque(1), Ana Paula F.D. Barbosa-Póvoa(2)* and Augusto Q. Novais(1)
(1) DMS, INETI, Est. do Paço do Lumiar, 1649-038 Lisboa, Portugal
(2) CEG-IST, DEG, I.S.T., Av. Rovisco Pais, 1049-101 Lisboa, Portugal
Abstract
The present work describes an optimisation model for the management of the recovery of residual products originated at industrial plants. The mSTN (maximal State Task Network) representation is used as the modelling framework for the proposed general network superstructure, where all possible process transformations, storage, transports and auxiliary operations are accounted for. This is combined with the evaluation of a set of environmental impacts (EI), quantified by metrics (for air and water pollution, etc.) through the minimum environmental impact (MEIM) methodology and associated with waste generation at the utility production and transportation levels. The final model is described as an MILP which, once solved, suggests the optimal processing and transport routes, while optimising a given objective function and meeting the design and environmental constraints. A motivating example based on the recovery of the sludge obtained from aluminium surface finishing plants is explored. This aims at maximizing the quantity of sludge processed and reflects the trade-off between the costs of its disposal, processing, transport and storage, while accounting for the limits imposed on the associated environmental pollutants.
1. Introduction
Increased awareness of the effects of industrial activities on the environment is leading to the need to provide alternative ways of reducing negative environmental impacts (Pistikopoulos et al., 1994). In the process industry this problem is highly complex, and the potential environmental risks involved force process manufacturers to take special care not only over production impacts but also over waste disposal, steeping costs and soil occupation. Most of the works looking into these problems addressed the case of designing a plant so as to minimise the waste produced (Linninger et al., 1994; Stefanis et al., 1997). A further possible environmental solution, if viable, is the reuse of those waste materials as resources, after total or partial removal of the pollutant content. In this paper, we explore this problem and propose a model for the synthesis and optimisation of a general recovery route for residual products. The modelling of the general network route is based on the mSTN representation (Barbosa-Póvoa, 1994), where all the possible processing, storage and transport operations are considered. A metric for the diverse environmental effects involved is used, based on a generalisation of the MEI methodology as presented in Stefanis et al. (1997).
* Author to whom correspondence should be addressed, e-mail: [email protected], tel: +351 1 841 77 29
The model is generic in scope and leads both to the optimal network structure and to the associated operation. The former results from the synthesis of the processing steps, while the latter is described by the complete resource-time allocation (i.e. processing, transport and storage scheduling). A motivating example based on the sludge recovery from aluminium surface finishing plants is presented, with an objective function that maximizes the profit of the proposed network over a given time horizon. The maximization of the quantity of sludge processed is obtained and reflects the trade-off between the costs of its disposal before and after processing, while accounting for production and transport environmental impacts and guaranteeing the limits imposed on the environmental pollutants.
2. Problem Definition
The problem of reducing the environmental impact of pollutant products, as addressed in this work, can be defined as follows.
Given:
A recovery network superstructure (STN/flowsheet) characterized by:
• All the possible transformations, their process durations, suitable unit locations, capacities, utility consumptions, materials involved and wastes generated.
• All waste producers, their locations and the quantity of wastes produced, along with their pollutant content.
• All the reuses and landfill disposals, their locations, utility consumptions, capacities and, for the reuses, the wastes generated.
• All the possible transport routes, associated suitable transports and durations.
Cost data for:
• Equipment (processing, transport and storage units).
• Reuses and landfill disposal.
• Operations and utilities.
Operational data for:
• Type of operation (cyclic single-campaign mode or short-term operation).
• Time horizon/cyclic time.
Environmental data (see Table 1; Pistikopoulos et al., 1994):
• Maximum acceptable concentration limits (CTAM, CTWM).
• Long-term effect potentials (e.g. GWI, SODI).
Determine:
• The optimal network structure (processing operations, storage locations and transfer routes of materials).
• The optimal operating strategies (scheduling of operations, storage profiles and transfer occurrences).
So as to optimise an economic or environmental performance. The former can be defined as a maximum plant profit or a minimum capital expenditure accounting for the environmental impacts involved and their imposed limits; the latter can be the minimisation of the environmental impacts, where all operational and structural network restrictions as well as cost limits are considered. As mentioned before, the mSTN representation is used to model the general network superstructure. This is coupled with a generalization of the MEI methodology so as to account for waste generation at the utility production and transportation levels. For the transport tasks the environmental impact is calculated based on the fuel oil consumption, therefore at the utility level.
Due to the characteristics of the proposed model, where the recovery of pollutant products is addressed, the system boundary for the environmental impacts is defined at the raw materials level, including any type of utilities used. The model has the particularity of considering all possible concurrent transportations and transformations for the same operation (different instances within the superstructure), as well as all raw material producers and re-users. The pollutant amounts are added up over all the different types of waste. The limits on the total waste production and on the global environmental impacts are introduced in the form of waste and pollution vectors, added to the model as additional restrictions. These limits derive directly from the legal regulations for the pollutants considered. The model also allows limits to be imposed on the final product amounts required - associated with possible auxiliary operations/removals - as well as on the amount of pollutant materials (raw materials) that should be processed, due to their environmental impacts.

Table 1. Time-dependent environmental impact indicators.

CTAM (Critical Air Mass, [kg air/h]): CTAM = pollutant emission mass at interval t (kg pollutant/h) / standard limit value (kg pollutant/kg air)
CTWM (Critical Water Mass, [kg water/h]): CTWM = pollutant emission mass at interval t (kg pollutant/h) / standard limit value (kg pollutant/kg water)
SMD (Solid Mass Disposal, [kg/h]): SMD = mass of solids disposed at interval t (kg/h)
GWI (Global Warming Impact, CO2 [kg/h]): GWI = mass of pollutant at interval t (kg/h) × GWP (kg CO2/kg poll.)
POI (Photochemical Oxidation Impact, C2H4 [kg/h]): POI = mass of pollutant at interval t (kg poll./h) × POCP (kg ethylene/kg poll.)
SODI (Stratospheric Ozone Depletion Impact, [kg/h]): SODI = mass of pollutant at interval t (kg poll./h) × SODP (kg CFC-11/kg poll.)
3. Recovery Route Example In order to illustrate the use of the model proposed, a motivating example based on the optimisation of a recovery route for Al-rich sludge is presented. The anodization and lacquering of aluminium are processes that generate significant amounts of waste in the form of Al-rich sludge. As an economical alternative to disposal, this sludge can be treated and employed as coagulant and flocculant for the treatment of industrial and municipal effluents, or used as agricultural and landfill material. As the surface treatment plant location does not coincide, in general, with the locations for water treatment or landfill location, suitable transports are needed. Based on these characteristics the recovery route network associated with this problem can be described as follows: Given the raw materials differences on pollutant content, two different general types corresponding to state SI and S2 are considered. State SI sludge needs to be processed by task Tl for a two hours period, originating a high pollutant material (S3) that is nonstorable, in the proportions of 1 to 0.97, input and output mass units respectively, and producing a 3% mass units of waste (WTl). State S2 sludge is submitted to a non-aluminium pollutant removal task T2 during two hours, originating a storable intermediate state S4 with the proportions of 2 to 0.98 and originating a 2% waste (WT2), in mass units. This S4 intermediate state is suitable for use as coagulant and flocculant for the treatment of industrial and municipal effluents.
98 The intermediate materials S3 and S4 at respectively 0.6 and 0.4 (mass units) proportions, are then submitted to an aluminium removal task, T3, going on for four hours and originating the final product S5, with the proportions of 1 to 0.99 and originating a 1% waste, in mass units (WT3). This state is stable and has a low pollutant level allowing for its agricultural disposal or the use as a landfill material. Finally the rich aluminium sludge S4 is used for the treatment of industrial and municipal effluents (T4), at a different geographical location, thus requiring a transportation task (Trl) which takes 1 hour of duration. Task 4, leads to the final product S6 and lasts for two hours and has the proportions of 1 input to 0.98 output and originates a 2% waste (WT4) (in mass units). An 8000 tonnes consumption is guaranteed for each final product S5 an S6 to be synthesised from SI and S2, over a production time horizon of 1000 hours with a periodic operation of 10 hours. The STN and the superstructure for the recovery route example are depicted respectively in 0 and 0. The equipment characteristics are presented in Table 2 (raw materials and product storage are unlimited) while impact factors are in Table 3.
Fig. 1. STN Network recovery route.
UA
*h c5
*p
la
1c
Lc C2
V2
I
fl *t>
1b V4
Fig. 2. Recovery route superstructure.
KPciflW
V5
n
2a
C13M
V6
99 The example was solved using the GAMS/CPLEX (v7.0) software running in a Pentium III-E at 863.9 Mhz. The model is characterised by 2046 equations, 1307 variables, of which 144 are discrete, and takes an execution time of 0.110 CPU seconds. The final optimal plant structure is presented in Figure 3 with the corresponding operation depicted in Figure 4. The final recovery route (Figure 3) is characterised by 3 processing steps (in unit lb, Ic and 2a) an intermediate storage location (V4) and a transport route (transport 1, trl). Table 2. Unit characteristics. Units Unit la (la) Unit lb (lb) Unitlc(lc) Unit 2a (2a) Transport 1 Vessel4 (V4) c.u. - currency units
Suitab.
TleT2 TleT2 T3 T4 T5 S4
Capacity (tonne) Max. Min. 150 50 150 50 200 50 200 50 200 50 100 10
Fixed (10' c.u.) 20 20 30 30 0.5 1
Costs Variable (cu/.kg) 0.5 0.5 1 1 0.05 0.1
Table 3. Impact factors. Residues wl_Tl wl_T2 wl_T3 wl_T4 wl_T5 wl_Ul wl_U2 wl_U3 wl_U4
(CTAM) 10 0 1 0 0 0 5 8 2
(CTWM) 0 8 10 8 0 10 0 0 1
u
POCP 0 0 0 0 0 0.05 0 0 0
GWP 0 0 0.03 0.03 0 0 0.004 0.005 0.08
SMD 0.05 0 0 0 0 0 0 0 0
;12
^
I
'
SODP 0 0 0 0 0 0 0.003 0.01 0
L
I—o-J V4 [ 1 1
III Fig. 3. Optimal network recovery route structure. When comparing the options of disposal or processing the materials sludge it can be seen that a value added of 31280 c.u. is obtained against a cost of disposal of 32070 c.u. The recovery option translates a reduction of 95 % in pollutant material with a maximum environmental impact of (ton/hr) CTAM=1.19, CTWM=5.37, SMD=0.005, GWI=0.012 and in POI= SODI=0.
4. Conclusions A model for the synthesis and optimisation of a general recovery route for residual products is proposed. The modelling of the general network route is made through the
100 use of the mSTN representation. This is coupled with a metric for the various environmental effects involved, based on the generalization of the MEI methodology. The proposed model leads to both the optimal network structure, accounting for processing and storage locations of materials, as well as transport, and to the associated operation. The former resulting from the synthesis of the recovery steps (processing, storage and transport), while the latter is described by the complete resource time allocation (i.e. scheduling) where environmental impacts associated not only to the disposal of the materials but also to the utilities and transports utilisation are accounted for. In this way the model is able to suggest the optimal processing and transport route, while reflecting the trade-off between processing and transport costs and environmental worthiness of the modified residual products. It further allows the analysis of the tradeoff existing between the option of the disposal of materials, with a high negative effect to the environment, or their re-processing, while accounting for all the capital, operational, transportation and environment costs associated. As future developments the model is now being extended to account for the treatment of uncertainty in some model parameters. This being investigated on the availability of residual products as well as on the operational and structural levels of the recovery network.
^
h-2_R1a
I
49.98 Tl_R1a
o 2a I 81.63 I T5_Tr1 100 SO
Fig. 4. Scheduling for the recovery route network.
5. References Barbosa-Povoa, A.P.F.D., 1994, Detailed Design and Retrofit of Multipurpose Batch Plants, Ph. D. Thesis. Imperial College, University of London, U.K. Linninger, A.A., Shalim, A.A., Stephanopoulos, E., Han, C. and Stephanopoulos, G., 1994, Synthesis and Assessement of Batch Process for Pollution Prevention, AIchemE Symposium Series, Volume on pollution prevention via process and product modifications, 90 (303), 46-58. Pistikopoulos, E.N., Stefanis, S.K. and Livingston, A.G., 1994, A methodology for Minimum Environment Impact Analysis. AIchemE Symposium Series, Volume on pollution prevention via process and product modifications, 90 (303), 139-150. Stefanis, S.K., Livingston, A.G. and Pistikopoulos, E.N., 1997, Environment Impact Considerations in the optimal design and scheduling of batch processes. Computer Chem. Engng, 21, 10, 1073-1094.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
101
A New Modeling Approach for Future Challenges in Process and Product Design Mario Richard Eden, Sten Bay J0rgensen, Rafiqul Gani CAPEC, Computer Aided Process Engineering Center, Department of Chemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark
Abstract In this paper, a new technique for model reduction that is based on rearranging a part of the model representing the constitutive equations is presented. The rearrangement of the constitutive equations leads to the definition of a new set of pseudo-intensive variables, where the component compositions are replaced by reduction parameters in the process model. Since the number of components dominates the size of the traditional model equations, a significant reduction of the model size is obtained through this new technique. Some interesting properties of this new technique is that the model reduction does not introduce any approximations to the model, it does not change the physical location of the process variables and it provides a visualization of the process and operation that otherwise would not be possible. Furthermore by employing the recently introduced principle of reverse problem formulations, the solution of integrated process/product design problem becomes simpler and more flexible.
1. Introduction As the trend within the chemical engineering design community moves towards the development of integrated solution strategies for simultaneous consideration of process and product design issues, the complexity of the design problem increases significantly. Mathematical programming methods are well known, but may prove rather complex and time consuming for application to large and complex chemical, biochemical and/or pharmaceutical processes. Model analysis can provide the required insights that allows for decomposition of the overall problem into smaller (and simpler) sub-problems as well as extending the application range of the original models. In principle, the model equations representing a chemical process and/or product consist of balance equations, constraint equations and constitutive equations (Eden et al., 2003). The nonlinearity of the model, in many cases, is attributed to the relationships between the constitutive variables and the intensive variables. The model selected for the constitutive equations usually represents these relationships, therefore it would seem appropriate to investigate how to rearrange or represent the constitutive models.
2. Reverse Problem Formulation Concept By decoupling the constitutive equations from the balance and constraint equations the conventional process/product design problems may be reformulated as two reverse
102 problems. The first reverse problem is the reverse of a simulation problem, where the process model is solved in terms of the constitutive (synthesis/design) variables instead of the process variables, thus providing the synthesis/design targets. The second reverse problem (reverse property prediction) solves the constitutive equations to identify unit operations, operating conditions and/or products by matching the synthesis/design targets. An important feature of the reverse problem formulation is that as long as the design targets are matched, it is not necessary to resolve the balance and constraint equations (Eden et al., 2002).
I
Procffi^^ Model Balance and Constraint Equations (Mass, Energy, Momentum) Constitutive Equations (Phenomena model - Function of Intensive Variables)
/ !
Balance and Constraint Equations Identification of design targets by solution of decoupled model
i
REVERSE SIMULATION
D E ^ G N TARGETS
i
REVERSE PROPERTY PREDICTION
Constitutive Equations
Figure 1: Decoupling of constitutive equations for reverse problem formulation. The model type and complexity is implicitly related to the constitutive equations, hence decoupling the constitutive equations from the balance and constraint equations will in many cases remove or reduce the model complexity. Since the constitutive equations (property models) contain composition terms, it is beneficial to solve for the constitutive variables directly, thus removing the composition dependency from the problem.
3. Composition Free Design Methods By rearranging the constitutive equations in a systematic manner, the composition terms can be eliminated from the balance equations. The principal requirement of this new model reduction technique is the choice of a model for the constitutive equations where the constitutive variables, which are calculated through a set of reduction parameters, are linear functions of component compositions. The well-known cubic equations of state such as the Soave-Redlich-Kwong or the Peng-Robinson equations of state satisfy this property requirement. Customized constitutive models may also be generated for this purpose. Michelsen (1986) presented a composition free method for simple flash calculations, where the composition dependent terms of a cubic equation of state are lumped by solving directly for the constitutive variables. This method was extended by Gani & Pistikopoulos (2002) to composition free distillation design. A novel way of representing the constitutive (property) variables of a system is the concept of property clustering (Shelley & El-Halwagi, 2000), where the compositions are eliminated from the process model by characterizing the process streams using physical and chemical properties. The clusters are tailored to possess the two fundamental properties of interand intra-stream conservation allowing for consistent additive rules. Eden et al. (2002)
103 developed cluster based models for fundamental processing units such as mixers, splitters and reactors, through which most sections of process flowsheets can be modeled. The cluster based balance models are derived from the original composition based balance models by systematically removing the composition dependency using property relationships. The clustering approach utilizes property operators defined as:
¥j(PjMix) = 2:x,.VKj(Pjs)
x , = ^
^ i ^ = ^ ^
(^>
S=l
Using an Augmented Property index (AUP) for each stream s, defined as the summation of the dimensionless property operators, the property cluster for property j of stream s is defined: NP
j=l
Cis=—— J' AUR
(3)
The mixture cluster and AUP values can be calculated through the linear mixing rules given by Equations (4) - (5): Ns
Y
-ATTP
CjM.X=EPsqs ' P s - ^ 7 5 ^ s=l
(4)
^^^MIX
AUPM,X=2;'^SAUP,
(5)
s=l
In Equation (4) Ps represents the cluster "composition" of the mixture, i.e. a pseudointensive variable, which is related to the flow fractions (xs) through the AUP values. An inherent benefit of the property clustering approach is that due to the absence of component and unit specifics, any design strategies developed will be generic.
4. Case Study - Recycle Opportunities in Papermaking To illustrate the usefulness of constitutive or property based modeling, a case study of a papermaking facility is presented. Wood chips are chemically cooked in a Kraft digester using white liquor (containing sodium hydroxide and sodium sulfide as main active ingredients). The spent solution (black liquor) is converted back to white liquor via a recovery cycle (evaporation, burning, and causticization). The digested pulp is passed to a bleaching system to produce bleached pulp (fiber). The paper machine employs 100
104 ton/hr of the fibers. As a result of processing flaws and interruptions, a certain amount of partly and completely manufactured paper is rejected. These waste fibers are referred to as broke, which may be partially recycled for papermaking. The reject is passed through a hydro-pulper followed by a hydro-sieve with the net result of producing an underflow, which is burnt, and an overflow of broke, which goes to waste treatment.
Pulp Kraft Digester
^ '
Black L quor
* • • Bleachtng • * •
> » Fiber
Broke
Chemical Cy cle
Waste M
I
raper Product
P^per {Machine
•• »
Reject f
Hydro Sieve
Hydro Pulper
Figure 2: Schematic representation of pulp and paper process. The objective of this case study is to identify the potential for recycling the broke back to the paper machine, thus reducing the fresh fiber requirement and maximize the resource utilization. Three primary properties determine the performance of the paper machine and thus consequently the quality of the produced paper (Biermann, 1996): Objectionable Material (OM) - undesired species in the fibers (mass fraction) Absorption coefficient (k) - measure of absorptivity of light into paper (m^/g) Reflectivity (Roo) - defined as a reflectance compared to absolute standard (fraction) In order to convert property values from raw property data to cluster values, property operator mixing rules are required (Shelley & El-Halwagi 2002; Eden et al. 2002). The property relationships can be described using the Kubelka-Munk theory (Biermann 1996). According to Brandon (1981), the mixing rules for objectionable material (OM) and absorption coefficient (k) are linear, while a non-linear empirical mixing rule for reflectivity has been developed (Willets 1958). Table 1: Properties of fibers and constraints on paper machine feed. Property
Operator
Fibers
Broke
Paper machine
Reference
OM (mass fraction)
OM
0.000
0.115
0.00-0.02
0.01
k(m'/g)
k
0.00115-0.00125
0.001
Rso
(R.)^'^
1
Flowrate (ton/hr)
0.0012 0.0013 0.82
0.90
0.80-0.90
100
30
100-105
From these values it is apparent that the target for minimum resource consumption of fresh fibers is 70 ton/hr (100-30) if all the broke can be recycled to the paper machine. The problem is visualized by converting the property values to cluster values using Equations (1) - (3). The paper machine constraints are represented as a feasibility region, which is calculated by evaluating all possible parameter combinations of the
105 property values in the intervals given in Table 1. The resulting ternary diagram is shown in Figure 3, where the dotted line represents the feasibility region for the paper machine feed. The relationship between the cluster values and the corresponding AUP values ensures uniqueness when mapping the results back to the property domain.
Figure 3: Ternary problem representation.
Figure 4: Optimal feed identification.
Since the optimal flowrates of the fibers and the broke are not known, a reverse problem is solved to identify the clustering target corresponding to maximum recycle. In order to minimize the use of fresh fiber, the relative cluster arm for the fiber has to minimized, i.e. the optimum feed mixture will be located on the boundary of the feasibility region for the paper machine. The cluster target values to be matched by mixing the fibers and broke are identified graphically and represented as the intersection of the mixing line and the feasibility region in Figure 4. Using these results the stream fractions can be calculated from Equation (5). The resulting mixture is calculated to consist of 83 ton/hr of fiber and 17 ton/hr of broke. Hence direct recycle does NOT achieve the minimum fiber usage target of 70 ton/hr. Therefore the properties of the broke will have to be altered to match the maximum recycle target. Assuming that the feed mixture point is unchanged, and since the fractional contribution of the fibers and the intercepted broke are 70% and 30% respectively, the cluster "compositions" (Ps) can be calculated from Equation (4). Now the cluster values for the intercepted broke can be readily calculated from Equation (4), and the resulting point is shown on Figure 5. This reverse problem identifies the clustering target, which is converted to a set of property targets: Table 2: Properties of intercepted broke capable of matching maximum recycle target. Property OM (mass fraction) k(m'/g) Roo
Original Broke 0.115 0.0013 0.90
Intercepted Broke 0.067 0.011 0.879
Note that for each mixing point on the boundary of the feasibility region, a clustering target exists for the intercepted broke, so this technique is capable of identifying all the alternative product targets that will solve this particular problem. Solution of the second
106 reverse problem, i.e. identification of the processing steps required for performing the property interception described by Table 2, is not presented in this work. Most processes for altering or fme tuning paper properties are considered proprietary material, however the interception can be performed chemically and/or mechanically (Biermann 1996, Brandon 1981). C2
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.
Figure 4: Identification of property interception targets.
5. Conclusions Decoupling the constitutive equations from the balance and constraint equations, allows for a conventional forward design problem to be reformulated as two reverse problems. First the design targets (constitutive variables) are identified and subsequently the design targets are matched by solving the constitutive equations. By employing recent property clustering techniques a visualization of the constitutive (property) variables is enabled. A case study illustrating the benefits of these methods has been developed.
6. References Biermann, C.J., 1996, Handbook of Pulping and Papermaking, Academic Press. Brandon, C.E., 1981, Pulp and Paper Chemistry and Chemical Technology, 3rd Edition, Volume III, James P. Casey Ed., John Wiley & Sons, New York, NY. Eden, M.R., J0rgensen, S.B., Gani, R. and El-Halwagi, M.M., 2003, Chemical Engineering and Processing (accepted). Gani, R. and Pistikopoulos, E.N., 2002, Fluid Phase Equilibria, 194-197. Michelsen, M.L., 1986, Ind. Eng. Chem. Process. Des. Dev., 25. Shelley, M.D. and El-Halwagi, M.M., Comp. & Chem. Eng., 24 (2000). Willets, W.R., 1958, Paper Loading Materials, TAPPI Monograph Series, 19, Technical Association of the Pulp and Paper Industry, New York, NY.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
107
Solving an MINLP Problem Including Partial Differential Algebraic Constraints Using Branch and Bound and Cutting Plane Techniques Stefan Emet and Tapio Westerlund* Department of Mathematics at Abo Akademi University, Fanriksgatan 3 B, FrN-20500, Abo, Finland (email: [email protected]) * Process Design Laboratory at Abo Akademi University, Biskopsgatan 8, FIN-20500, Abo, Finland (email: [email protected])
Abstract In the present paper a chromatographic separation problem is modeled as a Mixed Integer Nonlinear Programming (MINLP) problem. The problem is solved using the Extended Cutting Plane (ECP) method and a Branch and Bound (BB) method. The dynamics of the chromatographic separation process is modeled as a boundary value problem which is solved, repeatedly within the optimization, using a relatively fast and numerically robust finite difference method. The stability and the robustness of the numerical method was of high priority, since the boundary problem was solved repeatedly throughout the optimization. The obtained results were promising. It was shown that, for different purity requirements the production planning can be done efficiently, such that all the output of a system can be utilized. Using an optimized production plan, it is thus possible to use existing complex systems, or to design new systems, more efficiently and also to reduce the energy costs or the costs in general.
1. Introduction The problem of efficiently separating products of a multicomponent mixture is a challenging task that is applied in many industries. The objective is to, within reasonable costs, separate the products as efficiently as possible retaining the preset purity requirements. The modeling of different chromatographic separation processes has been addressed in, for example, Saska ^f«/. (1991), Ching ^r a/. (1987) and Guiochon ^f a/. (1994). The optimization of separation processes has been adressed in the pertient literature, for example, in Strube et al (1997) and in DUnnebier and Klatt (1999). A chromatographic separation problem was recently modeled and solved as an MINLP problem by Karlsson (2001). Comparisons of solving the MINLP problem in Karlsson (2001) using the ECP method and the BB method was carried out by Emet (2002). In the present paper, the formulations in Karlsson (2001) and in Emet (2002) are further studied.
108
2. Formulation of the Model Figure 1 shows the different possibilities in a two-column system with two components. At the inlet of a column, it is possible to feed the mixture to be separated (e.g. molasses), the eluent (e.g. water), or the outflow from some other column. At the outlet of a column, one can collect the products or re-use the outcome for further separation. These decisions are modeled using binary variables, y]^,ykij and Xku, as illustrated in Figure 1. The times when these decisions are made are denoted by to, t i , . . . , ^T, where to = 0. The number of intervals is denoted by T and the length of the period by r = tr- The index i denotes which binary variables are valid during the time interval [ti-i,ti]. The indexes k and / denotes the column and the index j the component. The main questions are, thus, what decisions should be made and at which times in order to retain as much separated products as possible.
yu y'^i feed into column k. ykij collect product j from column xiik recycle the outflow from column / to column k.
Column ]
xm ym
Figure 1: A two-column system with two components. 2.1 Dynamic response model The concentration of the component j at the time t > 0 and at the height-position z within column k is denoted by Ckj{t, z). The height of a column is denoted by ZH, and hence 0 < z < ZH' The responses of the concentrations within each column were modeled with the following system of PDEs (Guiochon et ai, 1994):
(l + F ^ , dckj ) ^ + F . ^ f t , ( ^ c , , - ^ + c,,- dt j +u dz ' dt
= Dj
dz'^
(1)
where j = 1 , . . . , C, and k = 1 , . . . ,K. The estimates of the parameters ^j and Pji that were given in Karlsson (2001) were used here. The feed and the recycling decisions provide the following boundary conditions (at the inlet of column k): K
Ckj {t, 0) = yp it) • c f + ^
xik (t) • cij (t,
ZH)
(2)
1=1
The logical functions yp{t)
and xik{t) in (2) are modeled using the following stepwise-
109 linear functions: T i=l T
xik{t) = J2xiik'Si{t)
(4)
where the (Jj(^)-function is defined as
S(t) = l ^ *^
ifte[ti-uti]J
\ 0
=
1,...,T
otherwise.
(5)
The steady state condition of the system (Karlsson, 2001) can be modeled as a periodical boundary condition as follows: Ckj{0,z) =Ckj{r,z)
(6)
That is, the concentrations in a column should be equal at the start and at the end of a period. Conditions on the derivatives can also be formulated in a similar way (Emet, 2002) 2.2 Optimization model The objective function, to maximize the profit over the period, was formulated as:
^^^ ] 7 IZ I ] I ^^*^^ " Y^Pj^f^iJ I [
(7)
where w and pj are parameters that denote the prices of the input feeds, dki, and the collected products Skij • Note that the objective function in (7) is pseudo-convex. The volume of the input feed into column k within the interval i is modeled using the variable dki with the following "big-M" formulation: dki
(8)
where M i > max{dki}' The volumes of the collected products, $kij, are similarly modeled in the following way: Skij < rrikij
(9)
Skij < M2 ' Vkij
(10)
where M2 > max{skij}- The mfcij-variable is defined as the mass of product j within the mterval i at the outlet of column k. These can be formulated as equality constraint as follows rrikij = /
Ckj(t,ZH)dt
(11)
no The nonlinear equality constraints (11) were relaxed as inequalities: rrikij <
/
Ckj{t, ZH)dt + M3 • (1 - ykij) rti
rrikij > /
(12)
J
Ckj{t, ZH)dt - M3 • (1 - V y k u )
(13)
where M3 > max{cfej(t, ZH){U — U-i)}. The purity constraints were formulated as the fraction of the collected product in the total mass of all components: K
T
Z) Z) ^kij '^^^ > Rj
S
(14)
2 Qkij
k=l i = l
where Rj < 1 denotes the purity requirement of component j . The g^^j-variables in (14) are used for measuring the volume of all components within interval i if the component j is collected from column k: c ^
rrikij - Mykij
< Qkij
(15)
The purity constraints (14) were written as linear constraints as follows: K
T
K
T
Rj'YlYl, ^^'3 - IZ m ^**J' - ^ fe=i i = i
fe=i
(^^)
i=i
Linear constraints regarding the order of the timepoints and the binary constraints for the inlet and the outlet were formulated as: fori = 1 , . . . ,T
ti-l
(17)
K
ytl +E Xlik C
E
i=i
1=1 Vkij +
< 1
(18)
X^ Xkil < 1
(19)
K /=1
3. Numerical Methods An analysis of solving the boundary value problem using orthogonal collocation, neural networks and finite differences was conducted in Karlsson (2001). The finite difference method was in Karlsson (2001) reported to be the most robust one (when applied on the chromatographic separation problem), and was hence applied in the present paper. The periodical behavior of the solution was achieved by solving the PDEs iteratively until the changes in the concentrations of two successive iterations resided within a given tolerance
Ill (Emet, 2002). The optimization problem was solved with the ECP method described in Westerlund and Pom (2002). Comparisons were carried out using an implementation of the BB-method for MINLP problems by Leyffer (1999). Whereas the applied BB-method requires both gradient and Hessian information, the ECP-method only requires gradient information. The derivatives needed in each method were thus approximated using finite differences.
4. Numerical Results The profiles of the concentrations of a solution obtained with the BB method are illustrated in Figure 2. Corresponding values are presented in Table 1. The total number of times the system of PDEs has been solved when solving the MENLP problem is also given in Table 1. Note, that most of the CPU-time used in each method was spent on solving the PDEs and calculating the integrals. The purity requirements, (0.8,0.9), were well met using a one-column system. A solution to the two-column problem, obtained with the ECP-method, is illustrated in Figure 3. In the latter problem the purity requirement was (0.9,0.9), and hence recycling was needed. There were, however, severe problems in obtaining any solutions, to the two-column problem, with the BB-method because of the high number of function evaluations needed within each NLP subproblem (Emet, 2002).
Table 1: Results of a one-column system with the purity requirement (0.8,0.9). BB ECP -12.12 -12.28 purity (0.81,0.90) (0.82,0.90) # sub-prob. 124 (MILP) 42 (NLP) # PDE-solv. 11285 1385 1265.4 CPU [sec] 210.0
/•
Figure 2: Profiles of a one-column problem, by BB. Ijl2 (0.01.0.99)
I
^^/; (0.97.0.03)
I
recycle (0.82.0.18)
(0.98,0.02)
(0.12,0.81 2H
(0.8, 0.2)
(0.04,0.96)
(0.23,0.77)
y ^
•
SSH^M
65.6 I
(a) Column 1, recycle to col. 2.
(b) Column 2.
Figure 3: A solution to the two-column problem, f* = —12.4.
recycle
92 9 \ feed
112 3 \
112
5. Discussion A chromatographic separation problem was formulated as an MINLP problem and solved with the ECP method and the BB method for MINLP problems. The dynamics of the underlying separation process was formulated as a boundary value problem that was solved using finite differences. It was shown that, for a lower purity demand, all the outflow of a one-column system could be utilized as products. For a higher purity demand, a more complex system with two or more columns was needed in order to enable the recycling of unpure outflows for further separation. It was further observed that the advantage of the ECP-method was its need for relatively few function evaluations. The main drawbacks of the applied BB-method was the dependency on exact Hessian information. However, improvements in the solving of the boundary value problem, in the solving of the MINLP problem and also within the modeling of these are interesting future research challenges.
6. References Ching C.B., Hidajat K. and Ruthven D.M. (1987). Experimental study of a Simulated Counter-Current adsorption system-V. Comparison of resin and zeolite adsorbents for fructose-glucose separation at high concentration. Chem. Eng. Sci., 40, pp. 2547-2555. Diinnebier G. and Klatt K.-U. (1999). Optimal Operation of Simulated Moving Bed Chromatographic Processes. Computers Chem. Engng Suppl, 23, pp. S195-S198. Emet S. (2002). A Comparative Study of Solving a Chromatographic Separation Problem Using MINLP Methods. Ph.Lic. Thesis, Abo Akademi University. Guiochon G., Shirazi S.G., Katti A.M. (1994). Fundamentals of preparative and Nonlinear Chromatography. Academic Press, San Diego, CA. Karlsson S. (2001). Optimization of a Sequential-Simulated Moving-Bed Separation Process with Mathematical Programming Methods. Ph.D. Thesis, Abo Akademi University. Leyffer S. (1999). User manual for MINLP BB. Numerical Analysis Report, Dundee University. Saska M., Clarke S. J., Mei Di Wu, Khalid Iqbal (1991). Application of continuous chromatographic separation in the sugar industry. Int. Sugar JNL., 93, pp. 223-228. Strube J., Altenhoner U., Meurer M. and Schmidt-Traub H. (1997). Optimierung kontinuerlicher Simulated-Moving-Bed Chromatographie-Prozesse durch dynamische Simulation. Chemie Ingenieur Technik, 69, pp. 328-331. Westerlund T. and Pom R. (2002). Solving Pseudo-Convex Mixed Integer Optimization Problems by Cutting Plane Techniques. Optimization and Engineering, 3, pp. 253-280.
7. Acknowledgements Financial support from TEKES (Technology Development Center, Finland) is gratefully acknowledged.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B. V. All rights reserved.
113
Selection of MINLP Model of Distillation Column Synthesis by Case-Based Reasoning Tivadar Farkas*, Yuri Avramenko, Andrzej Kraslawski, Zoltan Lelkes*, Lars Nystrom Department of Chemical Technology, Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta, Finland, [email protected] *Department of Chemical Engineering, Budapest University of Technology and Economics, H-1521 Budapest, Hungary
Abstract A case-based library for distillation column and distillation sequences synthesis using MINLP has been developed. The retrieval algorithm, including inductive retrieval and nearest neighbor techniques, is presented. The retrieval method and the adaptation of the solution is tested by a heptane-toluene example.
1. Introduction Distillation is the most widespread separation method in chemical process industry. Since the equipment and utility costs are very high the synthesis of distillation columns and distillation sequences is important task. Whereas it is very difficult due to the complexity of structures and equilibrium models. One of the most popular methods of synthesis is hierarchical approach (Douglas, 1988). The other common method of synthesis is mixed integer nonlinear programming (MINLP). MINLP is also used to perform synthesis and system optimization simultaneously (Duran and Grossmann, 1986). The simultaneous design and optimization method has three steps: (a) build a superstructure; (b) generate the MINLP model of the superstructure; (c) fmd the optimal structure and operation with a proper tool. There are two main difficulties when using MINLP: a) Generating a working accurate MINLP model is a complicated task. Usually, every published paper reports a new MINLP model and superstructure according to the problem under consideration. Up to now only one automatic combinatorial method is reported to generate superstructure (Friedler et al., 1992). However it is hard to use for cascade systems. b) Most of the MINLP algorithms provide global optimum in case of convex objective function and searching space. Rigorous tray-by-tray and the equilibrium models in distillation column design usually contain strongly non-convex equations, therefore finding global optimum is not ensured. In such cases the result may depend on the starting point of calculations. In order to overcome these difficulties, the earlier experiences should be used when solving a new problem. Case-based reasoning (CBR) is an excellent tool for the reuse of experience. In the CBR the most similar case to an actual problem is retrieved from a case library, and the solution of this case is used to solve the actual problem. Finally the solution of the problem is stored in the case library for future use (Aamodt and Plaza, 1994; Watson, 1997). The objective of this paper is to present a case-based reasoning method, which for a new distillation problem - an ideal mixture of components that is to be separated into a number of products of specified compositions - provides proper MINLP model with
114 superstructure and gives an initial state for the design of distillation column or distillation sequence. The creation of the case library of the existing MINLP models and results were considered. The library contains 27 cases of separation of ideal mixtures for up to five components.
2. Case-Based Reasoning Case-based reasoning imitates a human reasoning and tries to solve new problems by reusing solutions that were applied to past similar problems. CBR deals with very specific data from the previous situations, and reuses results and experience to fit a new problem situation. The central notion of CBR is a case. The main role of a case is to describe and to remember a single event from past experience where a problem or problem situation was solved. A case is made up of two components: problem and solution. Typically, the problem description consists of a set of attributes and their values. Many cases are collected in a set to build a case library (case base). The library of cases must roughly cover the set of problems that may arise in the considered domain of application. The main phases of the CBR activities can be described typically as a cyclic process. During the first step, retrieval, a new problem (target case) is matched against problems of the previous cases (source cases) by calculating the similarity function, and the most similar problem and its stored solution are found. If the proposed solution does not meet the necessary requirements of actual situation, the next step, adaptation, is necessary and a new solution is created. The obtained solution and the new problem together build a new case that is incorporated in the case base during the learning step. In this way CBR system evolves as the capability of the system is improved by extending the stored experience. One of the most important steps in CBR is the way of calculation of the similarity between two cases during the retrieval phase.
3. Retrieving Method During the retrieval, the attributes of the target case and the source cases are compared to find the most similar case. There are two widely used retrieval techniques (Watson, 1997): nearest neighbor and inductive retrieval. The nearest neighbor retrieval simply calculates the differences of the attributes, multiplied by a weighting factor. In inductive retrieval a decision tree is produced, which classifies or indexes the cases. There are classification questions about the main attributes in the nodes of the tree, by answering these questions the most similar case is found. Due to variety of specifications of the cases, the two retrieval techniques are combined. First, using inductive method, a set of appropriate cases is retrieved, and then only the cases in the set are considered. Next, the cases in the set are ranked according to theirs similarity to the target case using nearest neighbor method. 3.1. Inductive retrieval There are the following classification attributes in the inductive retrieval: Sharp or non-sharp separation Heat integration: According to this classification there are three possibilities: structure without heat integration; structure with heat integration; thermally coupled structure. In single column configuration only non-heat integrated structure is possible. Number of products: Number of products can change from 2 to 5. This classification is considered because the single column configurations and models do not consist of mass balances for the connection of distillation columns, thus these models cannot be used for three or more products problems.
115 Number of feeds: Cases with 1, 2 or 3 feeds are considered. This attribute is required because of the dissimilarity between the MINLP models with single and multiple feeds. 3.2. Retrieval based on the nearest neighbor method The similarity between the target case and all the source cases is calculated using nearest neighbor method. The evaluation of the global similarity between the target and a source case is based on the computation of the local similarities. The local similarity deals with a single attribute, and takes the value from the interval [0;!]. Thus, from the local similarities the global similarity can be derived as: SIM{T,S)
= YJ ^i' ^^^i / S
(1)
^'
where w/ is the weight of importance of attribute /; sinii is the local similarity between the target (7) and the source case (5); k is the number of attributes. The weight of importance takes an integer value from 1 to 10 according to the actual requirements. Five attributes are used: Components similarity. It is a non-numeric attribute. The similarity of components is based on theirs structure. The similarity tree (Fig. 1), where the nodes represent the basic groups of chemical components, was created. To each component group a numeric similarity value was assigned. For two components the similarity value is the value of the closest common node. The more similar the components are, the higher is the similarity value between them. For the identical components the similarity value isl. components 0
alcohol 0,7 - methanol
hydrocarbon 0,6
/ paraffinic 0,8 - propane - n-butane - iso-butane - n-pentane - n-hexane - n-heptane - n-octane - n-nonane
\
nitrile 0,4 - acetonitrile
aromatic 0,5 - benzene - toluene - o-xylene - diphenyl
keton 0,3 - acetone
unsaturated 0,8 - methylacetylene - tams-2-butene - cis-2-butene
Figure 1. Similarity tree of components. The local similarity of the components {simc) is defined as the average of the similarity values between the components:
116 (2)
um^ =Y,^cj /n
where Xcj is the similarity value of the components from the similarity tree; n is the maximal number of components in the compared mixtures. Boiling point and molar weight of components. These attributes are numeric. The similarity is calculated utilizing simple distance approach: the shorter a distance between two attribute's values the bigger the similarity is. For the higher sensitivity not the original values are used, but normalized one from interval [0;1]. The local similarities for these attributes are defined as: sim^ = l - ^ A f ^ . / n
(3) (4)
/=i
/
where Ati,j is the difference of normalized boiling points; Amj is difference of normalized molar weights; n is the maximal number of components. Feed and product compositions. These are also numeric attributes that are vectors. Comparing vector attributes the distance vector is determined. ? = (^P^2v..,^)
5 = (5,,52,...,5„) r,5G[0;l]; (5)
where T is the attribute vector of the target case; S is the attribute vector of the source case. Because there is a number of product composition vectors, the difference vector and the distance are counted for every product pair. The method is the same in the case of multiple feeds cases. The local similarity of feed compositions {simf) and product compositions {simp) are defined as:
Sim
VJ
(6)
•-\% (7)
Sim
E^,where g is the maximal number of feeds; q is the number of products; e^ are the basis vectors in the J?" space (necessary for normalization). Other attributes can also be considered according to the actual requirements, and the weights are taken from 1 to 10.
117
4. Solution The model consists of a superstructure, the set of variables and parameters, the mass and enthalpy balances and other constraints, but in the original articles usually only the superstructure, the variables and the main equations are detailed. Due to these reasons instead of MINLP models descriptions the original articles have been included in the case base. In the articles usually only a flowsheet and some general data are reported as the optimum of a case. In the CBR program this flowsheet and its mathematical representation are the solution, and basing on these data initial point can be proposed for the MINLP iteration. The base of the mathematical representation of a flowsheet (Figure 2) is a mathematical graph (Figure 3). The vertexes of the graph are: the feed (Fl), the distillation columns (CI, C2,...), the heat-exchangers (condensers: Conl, ...; and reboilers: Rebl, ...), the mixers/splitters (MSI, MS2, ...) and the products (PI, P2, ...); the branches are the streams between the units. This graph can be represented in a matrix form (vertexvertex matrix). In this matrix if fly=l then there is a branch from vertex / to vertex 7, if fly=0, then there is no branch. The streams are signed (SI, S2, ...). There is a set of data describing a stream (temperature, flow rate, main component(s) or mole fractions). These connections are described by vertex-branches matrix, where the starting end ending vertexes of the signed streams are shown. Q55 32.167 MU/hr (L/D) = 9J2 97.1% A 0,732 F
- > $9,9% 8 49 TOTAL TRAYS
H— 0.545>F
n
tsH
Q= 33.907 MU/br
Figure 2. Example of flowsheet.
^^-Ki) Figure 3. Graph representation of flowsheet.
In the graph only simple columns are used with maximum three inputs and two outputs. There are reported as a solution three closest cases and according to actual requirements and engineering experiences the most useful model can be selected from among them. Due to the complexity of the distillation problems there is no adaptation of the found MINLP model. The solution of the closest case is proposed as initial point in the design task.
118
5. Example An example is used to test the method including the retrieval, the revision of the model and the solution of the chosen MINLP model. There is given a heptane-toluene mixture. The flowrate of the equimolar [0.5,0.5] feed is 100 kmol/h. The target is to separate the mixture into pure components with 95% purity requirement at the top and at the bottom. In inductive retrieval, a set of non-heat integrated sharp separation cases with one feed and two products was retrieved. The similarity values between our problem and the cases of the set were calculated using the nearest neighbor formulas. The most similar case was a benzene-toluene system (Yeomans and Grossmann, 2000). Using the MINLP model of this case our problem was solved: the optimal solution is a column with 67 equilibrium trays; the feed tray is the 27* from the bottom; the reflux ratio is 5.928; the column diameter is 3,108m.
6. Summary A case-based program has developed, which in the case of a new separation problem can help to generate a superstructure and an MINLP model for the design of distillation column or sequences. During retrieval the important design and operational parameters are compared. The method is tested by heptane-toluene example, which has been solved from the problem statement through the retrieval.
7. References Aamodt,
A., Plaza, E., 1994, Case-Based: Reasoning, Foundational Issues, Methodological Variations, and System Approaches. AI Communications. lOS Press, Vol. 7 : 1 , 39-59. Douglas, J.M., 1988, Conceptual design of chemical processes, McGraw-Hill Chemical Engineering Series; McGraw-Hill: New York. Duran, M.A. and Grossmann, I.E., 1986, A mixed-integer non-linear programming approach for process systems synthesis, AIChE J., 32(4), 592-606. Friedler, P.; Tarjan, K.; Huang, Y.W. and Fan, L.T., 1992, Graph-theoretic approach to process synthesis: axioms and theorems, Chem. Eng. Sci., 47(8), 1973-1988. Watson, I., 1997, Applying case-based reasoning: techniques for enterprise systems, Morgan Kaufman Publishers, INC. Yeomans, H. and Grossmann, I.E., 2000, Disjunctive programming models for the optimal design of distillation columns and separation sequences, Ind. Eng. Chem. Res., 39(6), 1637-1648.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
119
Discrete Model and Visualization Interface for Water Distribution Network Design Eric S Fraga, Lazaros G Papageorgiou & Rama Sharma Centre for Process Systems Engineering, Department of Chemical Engineering, UCL (University College London), Torrington Place, London WCIE 7JE, United Kingdom ABSTRACT The water distribution network design problem poses challenges for optimization due to the tightly constrained nature of the typical mathematical programming formulation. The optimization of existing water distribution networks has therefore often been tackled through the use of stochastic optimization procedures. However, even these suffer from the need to solve systems of nonlinear algebraic equations. This paper describes the implementation of a hybrid method which combines a fully discrete formulation and visualization procedure with mixed integer nonlinear programming (MINLP) solution methods. The discrete formulation is suitable for solution by stochastic and direct search optimization methods and provides a natural basis for visualization and, hence, user interaction. Visual hints allow a user to identify easily bottlenecks and the aspects of the design that most greatly affect the overall cost. The result is a tool which combines the global search capabilities of stochastic algorithms with the pattern recognition and tuning abilities of the user. The solutions obtained provide good initial points for subsequent optimization by rigorous MINLP solution methods.
1 INTRODUCTION The design of a water distribution network (Alperovits & Shamir, 1977) involves identifying the optimal pipe network, the head pressures of the individual supply and demand nodes, and theflowsbetween the nodes, including both the amount and the direction of flow. The objective is tofindthe minimum cost network which meets the demands specified. The water distribution network design problem poses challenges for optimization tools due to the tight nonlinear constraints imposed by the modelling of the relationship between node heads, waterflowin a pipe, and the pipe diameter. The objective function is often simple, consisting of a linear combination of pipe diameters and lengths. The optimization of existing water distribution networks has been tackled through a variety of methods including mathematical programming (Alperovits & Shamir, 1977; Coulter & Morgan, 1985) and stochastic procedures (Cunha & Sousa, 1999; Gupta et al, 1999; Savic & Walters, 1997). However, even the latter require an embedded solution
120 of systems of nonlinear algebraic equations leading to difficulties in initialization and handling convexity. This paper describes a fully discrete reformulation of the water distribution network problem. This formulation is suitable for directly evaluating the objective function and is particularly appropriate for the use of stochastic and direct search optimization methods. Furthermore, it provides a natural mapping to a graphical display, allowing the use of visualization. Visualization permits the user to interact, in an intuitive manner, with the optimization procedures and helps to gain insight into the characteristics of the problem. The solutions obtained are good initial solutions for subsequent rigorous modelling as a mixed integer nonlinear programme (MINLP). Although the majority of existing work is limited to the optimization of a given network, the new formulation can also generate new network layouts. The visualization and optimization methods have been developed to cater for both the optimization of a given network and the identification of optimal networks. The results presented in this paper, however, are limited to network optimization. 2 T H E PROBLEM STATEMENT The least-cost design problem can be stated as follows. Given the water network superstructure connectivity links for the nodes in the network, the pipe lengths, the demand at each node (assumed to be static) and the minimum flowrate and head requirements, determine the optimalflowrateand direction in each pipe, the head at each node and the diameter of each pipe so as to minimise the total cost of the pipes in the network. 3 T H E DISCRETE FORMULATION AND ITS VISUALIZATION The discrete formulation is based on the modelling of the nodes (both demand and supply) in the network as horizontal lines in a two dimensional discrete space. The position of each line is represented by {x,y). The y value specifies the head at the node. The length of a horizontal line in this discrete space represents the amount of water through the node, irrespective of the actual location along the jc-axis. Transportation of water from one node to another occurs when the lines corresponding to the two nodes overlap (in terms of the x co-ordinates) provided a connection between the two nodes is allowed. The definition of the network is actually a superstructure of allowable pipe connections, pipe diameters and distances between nodes. Given the set of x and y values for the nodes in the network, the allowable connections and the pipe diameter, d, (chosen from a discrete set) allocated to each possible connection, the evaluation of the objective function is based on identifying all the matches defined by the positions of the lines in the discrete space. This evaluation is deterministic and enables the identification of the network layout and the direct evaluation of the cost of the water distribution network. This objective function forms the basis of a discrete optimization problem in jc, y, and d. The discrete formulation provides a natural basis for visualization and, hence, user interaction. Figure 1 presents an annotated screenshot of the visual interface. The graphical interface employs visual hints to allow to user to identify bottlenecks and the aspects of the design that most greatly affect the overall cost. Specifically, the diameter of each
121 mm- %ifitrmi*ff^ m>t^.
*t^ '^mn^m^ ^•ja ism^mif jh^^
W g.q» » « B >^i
^
Indication of excess water
Figure 1: Water distribution network visualization interface for Alperovits & Shamir example. pipe is indicated by its width in the visual display. The violation of demand requirements for a node, or in fact the excess of water delivery to a node, is represented using a small gauge within the horizontal bar representing each node. Although not necessarily apparent in this manuscript, a red colour is used to indicate a shortfall in the demand for the node and a blue colour an excess. The interface allows the user, through use of the mouse or the keyboard, to manipulate the location of each horizontal bar and the diameter of each pipe. Furthermore, the implementation, based on the Model-View-Control design pattern (Gamma et al., 1995) and written in Java, allows the user to interact directiy and easily with the optimization procedures. The user can specify which features to manipulate (e.g. pipe diameters alone) and the specific optimizer to invoke. The choice of optimizer includes a variety of implementations of genetic algorithms and simulated annealing procedures. The user may also export a current configuration as an MINLP model which can be subsequently optimized rigorously, using the GAMS system (Brooke et al, 1998), as described in the next section. The result is a tool which combines the global search capabilities of stochastic algorithms with the pattern recognition and tuning abilities of the user. The solutions obtained provide good initial points for subsequent rigorous optimization with a mixed integer nonlinear model. 4 M I N L P OPTIMIZATION MODEL The objective function expresses the network cost, which is assumed to be linearly proportional to the pipe length and pipe diameter. It is assumed that pipe diameters
are available at discrete commercial sizes. The objective function is minimized subject to three main sets of mathematical constraints: continuity (flow balance) constraints, energy loss constraints, and bounds and logical constraints. The first set of constraints represents the mass conservation law at each node of the water network. The second set describes the energy (head) losses for each pipe in the network, relating the pressure drop (head loss) due to friction to the pipe flow rate and the diameter, roughness, material of construction, and length of the pipe. In this work, the commonly used Hazen-Williams empirical formula (Alperovits & Shamir, 1977; Cunha & Sousa, 1999; Goulter & Morgan, 1985) is used. The third set of constraints includes bounds on variables such as minimum head or flow rate requirements. This set also includes constraints to ensure that only one diameter can be selected for each pipe (stream), a more realistic representation than a split-pipe design. The above problem corresponds to an MINLP model due to the nonlinearity in the Hazen-Williams correlation. This model is solved using the DICOPT method in the GAMS system (Brooke et al., 1998). DICOPT invokes MILP and NLP solvers iteratively. In this work, we have used the CPLEX 6.5 MILP solver and four different NLP solvers.
5 ILLUSTRATIVE EXAMPLE
An example from the literature (Alperovits & Shamir, 1977) is presented. The results show the improved behaviour, particularly in terms of robustness and consistency, achieved through the combination of the stochastic optimization of a discrete model, user interaction, and rigorous MINLP solution. The problem consists of seven nodes and up to eight pipes. When the viewer is instantiated, the initial values for all the variables default to the mid-point between the lower and upper bounds. From this starting point, the user may immediately interact directly with the viewer to alter the configuration or may request the application of a stochastic optimization step. At any point, the current configuration may be exported as a GAMS input file and solved using the rigorous MINLP formulation.

Table 1: Sequence of operations for illustrative example (Alperovits & Shamir, 1977).

Step    Operation    Objective Function ($'000)    Infeasibility Measure
1       GA           397                           0.15
2       User         411                           0.13
3       User         463                           0.10
4       User         423                           0.13
A typical sequence of steps is presented below. For each step, the current objective value (in discrete space) and a measure of its infeasibility (shortfall in demands met, in m³/s) are obtained; these are collated in Table 1. Due to the highly constrained nature of the discrete formulation, an exactly feasible solution is unlikely to be achieved. However, the aim is not so much to solve the problem directly with the visualization tool as to provide good initial solutions for the rigorous optimization procedure.
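Since the rigorous model is only described in prose above, the short sketch below evaluates a Hazen-Williams head loss and the one-diameter-per-pipe selection logic of the MINLP formulation. It is an illustration only: the candidate diameters, the roughness coefficient and the SI form of the correlation are assumptions, not data from this example.

```python
# Illustrative check of the head-loss relation and the single-diameter
# logic; all numerical values here are assumed, not taken from the paper.
DIAMETERS = [0.0254 * n for n in (1, 2, 3, 4, 6, 8)]  # candidate sizes (m)

def head_loss(q, d, length, c_hw=130.0):
    """Hazen-Williams head loss (m) for flow q (m3/s) through a pipe of
    diameter d (m) and the given length (m); c_hw is the roughness."""
    return 10.67 * length * q ** 1.852 / (c_hw ** 1.852 * d ** 4.87)

def selected_head_loss(y, q, length):
    """y is a 0/1 selection vector over DIAMETERS; the logical constraint
    of the model demands exactly one active entry per pipe."""
    assert sum(y) == 1, "exactly one diameter must be selected per pipe"
    return head_loss(q, DIAMETERS[y.index(1)], length)

print(selected_head_loss([0, 0, 0, 1, 0, 0], q=0.02, length=1000.0))
```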
Figure 2: Initial and final configurations.
1. As the starting configuration is not particularly good, the first step is to apply a genetic algorithm to achieve a reasonable starting layout. An example initial layout obtained in this manner is shown in Figure 2 on the left.
2. We note that, in this initial configuration, node 3 has none of its requirements met and there is a very small pipe from node 2. We increase the diameter of this pipe four times, to diameter 5. The value of the objective function increases, indicating a more expensive network, but one which better meets the requirements specified, as can be seen from the measure of the infeasibility.
3. We now note that all the demand nodes have a shortfall but that the single supply node has an excess (i.e. the full amount available is not being used). We increase the diameter of the pipe connecting the supply node to node 2. This leads to an excess in node 2. Again, we look for small pipes, as increasing their size is less damaging to the cost than increasing that of larger pipes. We start at the bottom of the network, increasing the size of pipes 3→1 and 2→3.
4. Now there is a shortfall in node 2 and an excess in node 4, so we decrease pipe 2→4. The configuration at this stage is shown in Figure 2 on the right.
The final configuration obtained through these simple steps is then exported in the form of a GAMS input file, providing an initial starting configuration for the MINLP solution procedure. The results of the subsequent rigorous optimization are presented in Table 2. The results obtained using the initial configuration generated by the user with the visualization tool are better than those obtained otherwise. In particular, all four optimizers used are able to find a good solution with this initial configuration.
Table 2: Comparison of solutions (in $'000s) obtained with different combinations of NLP solvers and initial conditions.

Initial Configuration    CONOPT    CONOPT2    MINOS    MINOS5
None                     Failed    486        446      Failed
All flows = 100          456       441        452      Failed
Visualization            423       420        420      441
6 CONCLUSIONS
A hybrid procedure for solving the water distribution network optimization problem has been presented. This procedure combines a discrete formulation, an interactive visualization interface and a rigorous MINLP formulation. The discrete formulation provides a natural basis for the visualization tool. This tool can interact with the user and, together with embedded stochastic optimization procedures, alternative configurations can be generated easily. These configurations can provide good initial guesses for subsequent rigorous MINLP optimization. In this paper, we have concentrated on the optimization of pre-defined networks. The discrete formulation and the visualization tool also provide the basis for the design of new networks (the x variable described above). This is the topic of current research.
7 References
Alperovits, E. & Shamir, U. (1977). Design of optimal water distribution systems. Water Resources Research, 13 (6), 885-900.
Brooke, A., Kendrick, D., Meeraus, A., & Raman, R. (1998). GAMS: A user's guide. GAMS Development Corporation, Washington.
Cunha, M. & Sousa, J. (1999). Water distribution network design optimization: Simulated annealing approach. Journal of Water Resources Planning and Management, 125 (4), 215-221.
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1995). Design Patterns: Elements of Reusable Object-Oriented Software. Reading, Massachusetts: Addison-Wesley.
Goulter, I. C. & Morgan, D. R. (1985). An integrated approach to the layout and design of water distribution networks. Civil Engineering Systems, 2, 104-113.
Gupta, I., Gupta, A., & Khanna, P. (1999). Genetic algorithm for optimization of water distribution systems. Environmental Modelling & Software, 14, 437-446.
Savic, D. A. & Walters, G. A. (1997). Genetic algorithms for least-cost design of water distribution networks. Journal of Water Resources Planning and Management, 123 (2), 67-77.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Optimal Design of Mineral Flotation Circuits
E.D. Galvez, M.F. Zavala, J.A. Magna and L.A. Cisternas
(1) Dept. of Metallurgical Eng., Universidad Catolica del Norte, Antofagasta, Chile.
(2) Chemical Engineering Dept., Universidad de Antofagasta, Antofagasta, Chile.
Abstract
This study presents a procedure for the design or improvement of mineral flotation circuits, based on a mathematical programming model with disjunctive equations. The procedure is characterized by: 1. The development of hierarchized superstructures, such that the first level represents certain separation tasks, while the second level represents the circuits of equipment needed to carry out the tasks. 2. An MILP mathematical model which includes the selection of equipment, first principles, and operational conditions. 3. An objective function which is the maximization of profits. Examples are included to demonstrate the advantages of the procedure.
1. Introduction
In flotation processes the feed may be divided into different fractions by exploiting differences in surface properties. For example, a concentration plant may be formed by cells grouped into rougher, scavenger, and cleaner banks in order to divide the feed into concentrate and tailings. The behaviour of these processes depends on the configuration of the circuit and the physical and chemical nature of the slurry treated. Some attempts at automated methods for the design of these types of circuits have been described in the literature (Yingling, 1993; Mehrotra, 1988). Studies can be classified as those which consider revenues, operational and capital costs as an objective function (Schena et al., 1996) and those that consider a technical performance function. The latter group may be separated into those that use bank models and those that use cell models (Yingling, 1990). Among those that use bank models, two further groups may be distinguished: (a) those that use a first-order model for the flotation kinetics (Mehrotra and Kapur, 1974; Dey et al., 1989), and (b) those that use a model indirectly (Green, 1984, 1992; Reuter and Van Deventer, 1990). Due to the nonlinear nature of the equations of material balance, the models are required to include simplifications in their mathematical form, or require the development of a specific strategy for the solution of a mathematical model. The present study presents the formulation of a new procedure for the design or improvement of flotation circuits for minerals, characterized by: (a) superstructures developed in a hierarchical form, (b) a mathematical model which includes disjunctive expressions for the selection of the equipment, first principles, operational conditions, and logic expressions, and (c) an objective function representing the maximization of profit. The resultant model is an MILP model.
126
2. Model Development
2.1. Strategy
The design strategy includes two levels of hierarchized superstructures. The upper level includes a task superstructure, in which are included a rougher system (the task of which is feed processing to obtain the maximum separation), a cleaner system (the task of which is purification of the concentrate from the rougher to obtain the final concentrate), and a scavenger system (the task of which is the treatment of the tailings from the rougher to obtain the final tailings). Figure 1 shows the superstructure utilized, where the triangles represent stream mixers or splitters which permit the representation of a group of alternatives for mineral processing. At the second level, it is considered that each system is formed of three banks of flotation cells: rougher, cleaner, and scavenger banks. Thus, for example, in the scavenger system there exist the scavenger-rougher, scavenger-cleaner, and scavenger-scavenger banks. The superstructure for each system is analogous to the task superstructure, but with each system replaced by a bank of flotation cells. This analogy simplifies the mathematical representation.
2.2. Constraints
First it is necessary to define some sets, parameters, and variables in order to develop the model. Then, using these definitions, the equations may be developed which include mass balances, yields, operational conditions and logic constraints. The principal sets are: $S = \{s \mid s \text{ is a system}\}$, $L = \{\ell \mid \ell \text{ is a stream}\}$, $K = \{k \mid k \text{ is a species}\}$ and $M = \{m \mid m \text{ is a mixer or splitter}\}$. $LA$, $LC$ and $LT$ are subsets of $L$ which include the feeds, concentrates and tailings, respectively, of each system or bank. Also, $Lcc = \{(\ell_a, \ell_c) \mid \ell_a \in LA,\; \ell_c \in LC,\; \ell_c \text{ is the concentrate produced from } \ell_a\}$ and $Ltt = \{(\ell_a, \ell_t) \mid \ell_a \in LA,\; \ell_t \in LT,\; \ell_t \text{ is the tailing produced from } \ell_a\}$. In addition, $LL$, $LS1$ and $LS2$ are subsets of $L$ which represent the feed, output 1 and output 2 of each stream splitter. Since the superstructures are analogous, all sets are the same for each superstructure. Each stream $\ell$ of the task superstructure is associated with the variable $W_{\ell,k}$ that represents the mass flow of species $k$. Similarly, each stream $\ell$ in each system $s$ is associated with the variable $W1_{s,\ell,k}$ that represents the mass flow of species $k$. The material balances for the mixers and flow splitters in the task superstructure are:

$$\sum_{\ell \in M^{in}(m)} W_{\ell,k} \;-\; \sum_{\ell \in M^{out}(m)} W_{\ell,k} = 0 \qquad k \in K,\; m \in M \qquad (1)$$
where $M^{in}(m)$ and $M^{out}(m)$ are the sets of input and output streams of the mixer/splitter $m$. In general it is not common practice to divide the streams in flotation concentration plants. This study, however, considers the possible division of streams, with the division level restricted to a set of discrete values. This is:
Figure 1. Task superstructure with rougher, cleaner and scavenger systems. Each system has the same superstructure, but with rougher, cleaner and scavenger banks.
$$\bigvee_{j \in J} \begin{bmatrix} W_{\ell_1,k} = \phi_j\, W_{\ell,k} \\ W_{\ell_2,k} = (1-\phi_j)\, W_{\ell,k} \end{bmatrix} \qquad j \in J,\; k \in K,\; \ell \in LL,\; \ell_1 \in LS1,\; \ell_2 \in LS2 \qquad (2)$$
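As a small illustration of eq. (2), the sketch below evaluates the splitter for a one-hot choice among discrete division fractions; the three levels shown mirror the 100%, 50% and 0% case used later in the applications section, and the code is an evaluation aid rather than the MILP reformulation itself.

```python
# Discrete split levels of eq. (2): a one-hot vector z selects phi_j.
PHI = [1.0, 0.5, 0.0]  # assumed division fractions phi_j

def split_flows(w_feed, z):
    """w_feed[k]: feed mass flow of species k; returns the two outlet
    flow vectors (W_l1, W_l2) for the selected fraction."""
    assert sum(z) == 1, "exactly one division level may be active"
    phi = PHI[z.index(1)]
    return [phi * w for w in w_feed], [(1.0 - phi) * w for w in w_feed]

w1, w2 = split_flows([6.0, 3.0, 291.0], z=[0, 1, 0])  # 50/50 split
print(w1, w2)
```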
where $J = \{j \mid j \text{ is a fraction level of division}\}$, $\ell$ is the stream which is split into $\ell_1$ and $\ell_2$, and $\phi_j$ is fraction $j$ of the division. Eq. (2) is rewritten as mixed-integer linear equations. Equations similar to equations (1) and (2) hold for the mixers and stream splitters in each system superstructure, but $W1_{s,\ell,k}$ is used instead of $W_{\ell,k}$. In each flotation bank there is

$$W1_{s,\ell_c,k} = T_{s,\ell_a,k}\, W1_{s,\ell_a,k} \qquad (\ell_a,\ell_c) \in Lcc,\; k \in K,\; s \in S \qquad (3)$$

$$W1_{s,\ell_t,k} = \left(1 - T_{s,\ell_a,k}\right) W1_{s,\ell_a,k} \qquad (\ell_a,\ell_t) \in Ltt,\; k \in K,\; s \in S \qquad (4)$$
where $T_{s,\ell_a,k}$ is the ratio of the mass flow of concentrate $\ell_c$ to that of feed $\ell_a$, for species $k$ in system $s$. The ratio $T_{s,\ell_a,k}$ is related to the separation factor $f_{s,\ell_a,k}$ by $T_{s,\ell_a,k} = f_{s,\ell_a,k}/(1 + f_{s,\ell_a,k})$. The separation factor may be obtained from plant data, values from pilot plants, or theoretical or empirical models. For example, $f_{s,\ell_a,k} = (1 + k_{s,\ell_a,k}\,\tau_{s,\ell_a})^{N_{s,\ell_a}} - 1$, where $k_{s,\ell_a,k}$ is the flotation rate for species $k$, $N_{s,\ell_a}$ the number of cells and $\tau_{s,\ell_a}$ the retention time in the bank fed by $\ell_a$ in system $s$. Multiple values of $T_{s,\ell_a,k}$ may be implemented because $T_{s,\ell_a,k}$ depends, among other things, on the type of equipment, the number of cells and the residence time. To model a set $N$ of alternatives, each one with a characteristic value $T_{s,\ell_a,k,n}$, fixed costs $Cf_{s,\ell_a,n}$, and variable costs $Cv_{s,\ell_a,n}$, we use eq. (5). Disjunctive equation (5) is expressed as a set of mixed-integer linear equations.
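The bank model just described is easy to evaluate numerically. The sketch below computes the separation factor and the resulting concentrate/feed ratio; the retention time used in the example call is an assumed value, while the flotation rate and cell count come from Table 2 of the applications section.

```python
# Separation factor f = (1 + k*tau)**N - 1 and ratio T = f / (1 + f).
def separation_factor(k_rate, tau, n_cells):
    return (1.0 + k_rate * tau) ** n_cells - 1.0

def concentrate_ratio(k_rate, tau, n_cells):
    f = separation_factor(k_rate, tau, n_cells)
    return f / (1.0 + f)

# Rougher bank, chalcopyrite: k = 4.787 (Table 2), 15 cells; tau assumed.
print(concentrate_ratio(4.787, tau=0.5, n_cells=15))
```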
$$\bigvee_{n \in N} \begin{bmatrix} y_n \\ W1_{s,\ell_c,k} = T_{s,\ell_a,k,n}\, W1_{s,\ell_a,k} \\ W1_{s,\ell_t,k} = \left(1 - T_{s,\ell_a,k,n}\right) W1_{s,\ell_a,k} \\ c_{s,\ell_a} = Cf_{s,\ell_a,n} + Cv_{s,\ell_a,n} \sum_{k} W1_{s,\ell_a,k} \end{bmatrix} \qquad (\ell_a,\ell_c) \in Lcc,\; (\ell_a,\ell_t) \in Ltt,\; n \in N,\; k \in K,\; s \in S \qquad (5)$$
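One common way to realise disjunction (5) is with one-hot binaries y_n, so that the selected ratio and costs become sums weighted by y_n. The evaluation sketch below shows this bookkeeping; in the actual MILP the products of binaries and flows are of course linearised with standard disjunctive (big-M or convex-hull) constraints rather than computed directly, and the alternative data are invented.

```python
# Evaluating the bank alternatives of disjunction (5) for a chosen y.
def bank_selection(alternatives, w_feed, y):
    """alternatives: list of (T_n, fixed_cost_n, var_cost_n);
    w_feed: feed mass flow of one species; returns (conc, tail, cost)."""
    assert sum(y) == 1, "exactly one alternative may be selected"
    t_sel = sum(yn * alt[0] for yn, alt in zip(y, alternatives))
    cf = sum(yn * alt[1] for yn, alt in zip(y, alternatives))
    cv = sum(yn * alt[2] for yn, alt in zip(y, alternatives))
    return t_sel * w_feed, (1.0 - t_sel) * w_feed, cf + cv * w_feed

alts = [(0.90, 100.0, 2.0), (0.95, 150.0, 2.5), (0.98, 220.0, 3.1)]  # assumed
print(bank_selection(alts, w_feed=6.0, y=[0, 1, 0]))
```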
Feed specifications, the connection of input and output streams between the system and task superstructures, and logical constraints must also be included. Other operational conditions, such as streams with zero flow, and upper or lower bounds on the flow of each stream, are easily included in the model.
2.3. Objective function
The optimal selection of the circuit requires that an appropriate objective function be defined, upon which the values of the operational and structural variables may be determined. Since in the present case the income depends on the structure and operational conditions, a useful function is the difference between income and costs. Differing relations may be applied for calculation of the income depending on the type of product and its market. For base metals the net-smelter-return formula may be utilized (Schena et al., 1997):

$$\text{Income} = \sum_{k} g_k\, W_{\ell_o,k}\, \big[ p\,(q - Rfc) \big]\, H \;-\; \sum_{k} W_{\ell_o,k}\, \big[ p\, u\,(q - Rfc) + Trc \big]\, H \qquad (6)$$

where $\sum_k W_{\ell_o,k}$ is the mass flow of the concentrate, $p$ the fraction of metal paid, $\sum_k g_k W_{\ell_o,k} / \sum_k W_{\ell_o,k}$ the mineral grade of the concentrate, $g_k$ the mineral grade of each species $k$ present in the concentrate, $u$ the grade deduction, $Trc$ the treatment charge, and $Rfc$ the refinery charge. $H$ is the number of hours per year of plant operation, when the flows are in tons per hour. The grade deduction and the fraction of metal paid depend on the recovery efficiency of the smelter. Values of costs and prices of metals are published in specialized journals. Typical values used in the present study are
listed in Table 1. It should be noted that as the flows of the species with a high grade increase, so does the profit. However, this increase in flows brings with it an increase in flows of low grade value, which decreases the profits (second term of eq. 6). The annualized costs of the plant may be considered as the sum of operational costs and capital costs. The operational costs include energy, consumption of reagents, labour, and maintenance. The majority of these costs depend more on the feed flow into the plant than on its configuration, and thus are not considered in this analysis. Only the energy costs of mixing the slurry, generation of bubbles, and dispersion were considered. The capital costs include the cost of the banks and pumps, which can be expressed as a linear function of the mass flow rates. Thus, the annualized costs are a linear function of the mass flow rates. The objective function subject to constraints 1, 2, 3 and 4 (1, 2 and 5) represents a problem of mixed-integer linear programming (MILP). This is termed Model P1 (P2).
3. Applications
This section presents the application of models P1 (case 1) and P2 (case 2) to the design of a copper concentration plant, whose species are: k=1 (100% chalcopyrite), k=2 (90% silica, 10% chalcopyrite) and k=3 (100% silica). The principal data of the problem are given in Tables 1 and 2. Case 1 includes 15 cells per bank in all the stages. Three levels of flow division were considered (100%, 50% and 0%) for all the flow splitters in each of the superstructures. Figure 2 shows the circuit obtained together with the mass flows for each species. The problem, including a total of 548 equations, 307 variables and 58 binary variables, was solved using GAMS (OSL2 solver). To study the sensitivity of the solution to other levels of separation, five levels of flow division were considered (100%, 75%, 50%, 25% and 0%). The solution obtained was the same as in the case with three levels of separation. Case 2 includes three levels of cell numbers (10, 15 and 20) per bank in all the stages. The circuit obtained was similar to case 1 (see Figure 2), but without stream division and with 10 cells per bank in the rougher and cleaner, and 20 cells per bank in the cleaner-cleaner. A total of 15 cells was maintained only in the scavenger bank. The problem, having a total of 764 equations, 487 variables, and 84 binary variables, was solved using GAMS (OSL2 solver).

Table 1. Typical values for a copper concentration plant (minimum grade of concentrate 28%).

Parameter             Value
p                     0.975
u                     0.015
q, US$/ton metal      1764
Rfc, US$/ton metal    200
Trc, US$/ton conc.    85
H, hours/year         7200
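Using the Table 1 values, the net-smelter-return income of eq. (6) can be evaluated directly, as in the sketch below; the concentrate flows passed in the example call are placeholders rather than results from the paper.

```python
# Net-smelter-return income of eq. (6) with the Table 1 parameter values.
H, P_PAID, U_DED = 7200.0, 0.975, 0.015
Q_PRICE, RFC, TRC = 1764.0, 200.0, 85.0  # US$/ton metal, metal, concentrate

def nsr_income(w_conc, grades):
    """w_conc[k]: concentrate flow of species k (ton/h); grades[k]: metal
    grade of species k (mass fraction). Returns income in US$/year."""
    metal = sum(g * w for g, w in zip(grades, w_conc))
    total = sum(w_conc)
    return H * (metal * P_PAID * (Q_PRICE - RFC)
                - total * (P_PAID * U_DED * (Q_PRICE - RFC) + TRC))

print(nsr_income(w_conc=[5.0, 0.5, 0.2], grades=[0.385, 0.050, 0.0]))
```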
Table 2. Data for the example.

Feed flow for species k (ton/h):   k=1: 6       k=2: 3       k=3: 291
Grade of k (mass fraction):        k=1: 0.385   k=2: 0.050   k=3: 0.0

Flotation rate, species k          k=1      k=2      k=3
Rougher system, rougher banks      4.787    0.946    0.604
Cleaner system, rougher banks      2.265    0.156    0.08
Cleaner system, cleaner banks      2.265    0.156    0.04
Scavenger system, rougher banks    2.265    0.156    0.250
[Figure 2 appears here: the circuit solution, annotated with the mass flows (ton/h) of each species k in each stream for cases 1 and 2. The annotations also report utilities of 16.7 (case 1) and 17.2 (case 2) million US$, and sales of 18.7 (case 1) and 19.2 (case 2) million US$.]
Figure 2. Circuit solution of the application. Cases 1 and 2 have the same circuit, but different values for stream flows.
4. Conclusions
A procedure has been developed for the design and improvement of mineral concentration plants. The most important feature of the model is its linearity. The mass balances in stream splitters were represented by disjunctive equations, avoiding bilinear terms in the model. The results showed that the division of flows had little effect on the determination of the most efficient circuits. This result agrees with practice, since it is unusual to split streams in mineral concentration circuits. Modelling of the flotation banks was carried out using disjunctions with discrete values for the concentrate/feed ratio. The model has been useful in the study and design of circuits for mineral concentration. Future study will include extension of the model to examine the possibility of selecting between various types of equipment (columns and banks) and the incorporation of intermediate milling.
5. References
Dey, A., Kapur, P.C. and Mehrotra, S.P., Int. J. Miner. Process., 26 (1989) 73-93.
Green, J.C.A., Int. J. Miner. Process., 13 (1984) 83-103.
Green, J.C.A., Chem. Eng. Sci., 37 (1992) 1353-1359.
Mehrotra, S.P., Miner. Metall. Process., 5 (1988) 142-152.
Mehrotra, S.P. and Kapur, P.C., Separation Science, 9 (1974) 167.
Reuter, M.A. and Van Deventer, J.S.J., Int. J. Miner. Process., 28 (1990) 15-43.
Schena, G., Villeneuve, J. and Noel, Y., Int. J. Miner. Process., 46 (1996) 1-20.
Yingling, J.C., Int. J. Miner. Process., 29 (1990) 149-174.
Yingling, J.C., Int. J. Miner. Process., 38 (1993) 21-40.
6. Acknowledgements The authors wish to thank CONICYT for financial support (Fondecyt Project 1020892).
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
An MINLP Model for the Conceptual Design of a Carbothermic Aluminium Reactor Dimitrios I. Gerogiorgis and B. Erik Ydstie Dept. of Chem. Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Abstract Systematic design and optimization set the standard in today's petrochemical industries, but their impact is limited when considering most processes in metallurgical industries. The present study presents an MINLP model developed for design and optimization of the electrode heating system of a new multistage high-temperature carbothermic reactor. The focus here is the optimization of electrode placement and imposed voltage profile; the model and conclusions can effectively assist ongoing multiscale process modeling.
1. Introduction
Metallurgical industries are historically characterized by a high degree of complexity (high-temperature reactions, multiphase flows, complex physicochemical phenomena) and a predominantly experimental, iterative approach to novel process design, both resulting in a lack of formal, accurate models that could facilitate optimization endeavors. Thus, advances in this field can definitely improve profitability in the globalization era, exactly as petrochemical industries have benefited from MINLP problem formulations and routine use of optimization computer tools that yield significant investment savings. The present paper presents an MINLP model for a novel complex metallurgical process, illustrating its application for the effective design of the reactor electric heating system. Mixed Integer Nonlinear Programming (MINLP) theory and applications have pervaded the field of computer-aided design and optimization of traditional chemical processes. Exemplary processes studied include numerous petrochemical unit operations, such as heat-integrated distillation (Novak et al., 1996; Caballero and Grossmann, 1999; Yeomans and Grossmann, 1999), azeotropic distillation (Bauer and Stichlmair, 1998), reactive (Ciric and Gu, 1994) and cryogenic distillation (Boisset-Baticle et al., 1994), extraction and dehydration (Lelkes et al., 2000; Diaz et al., 2000; Alonso et al., 2001), supercritical fluid extraction (Kalampoukas and Dervakos, 1996), membrane separation systems (Qi and Henson, 2000), wastewater treatment (Galan and Grossmann, 1998), heat exchanger networks (Yee and Grossmann, 1990; Aggarwal and Floudas, 1992; Daichendt and Grossmann, 1994; Galli and Cerda, 1998; Bjork and Westerlund, 2002), mass exchanger networks (Papalexandri and Pistikopoulos, 1994) and reactor networks (Esparta et al., 1998; Stein et al., 1999; Rooney and Biegler, 2000; Pahor et al., 2001). Applications include power plants (Manninen and Zhu, 1998), reactive power planning (Chattopadhyay and Chakrabarti, 2002), pump systems (Westerlund et al., 1994), copolymer production (Mantzaris et al., 1999), the paper industry (Harjunkoski et al., 1999) and the leather industry (Graells et al., 1992), but not the design of metallurgical processes.
132
2. The Carbothermic Aluminium Reactor
Carbothermic reduction of aluminium oxide for the production of pure aluminium is an energy-efficient and environmentally benign reduction process with industrial potential. This method has been proposed as a feasible alternative to the prevalent Hall-Heroult electrolytic process; several reactor designs have been tested (Motzfeldt et al., 1989). Nevertheless, reaction complexity poses notable technical obstacles to implementation. The multistage, multielectrode high-temperature carbothermic reactor recently proposed (Johansen et al., 2000; Johansen and Aune, 2002) is a new idea addressing known issues so as to achieve the reactor scaleups that would permit the desired capital cost savings. Nevertheless, its structural and operational complexity entails many design challenges. The most interesting one is the distributed nature of the process, as the electric heating necessary for the endothermic reaction is achieved using independent electrode pairs, both in the first pre-reduction smelting stage as well as in the second reduction stage. The design challenge is to optimize electrode positions and the imposed voltage profile so as to achieve the reaction advance without unnecessary reactor space or energy use. Obviously, dense electrode placement and high voltage result in excessive superheating (a catastrophic effect causing aluminium evaporation, major yield reduction and losses), while sparse electrode placement and low voltage fail to achieve adequate slag heating (an equally undesirable situation resulting in limited conversion and low productivity). Submerged horizontal electrodes also act as obstacles to horizontal molten slag flow; flow and other concurrent multiphase phenomena (Al liquid and CO gas generation) introduce further modeling complications which are not considered in the present study. Figure 1 depicts the structure of the four-stage carbothermic aluminium reactor studied:
1. The first stage of the process is the pre-reduction zone where slag formation occurs. Carbon and aluminium oxide pellets are continuously fed to the electric arc smelter, where they melt and form a viscous molten slag contained under an inert atmosphere and oil cooling. The carbon monoxide and aluminium vapors (Al/Al2O) generated are fed to the third stage. The reaction of Al2O3 with an excess of C to form the Al4C3-rich slag is thus written as:
(T > 1900 °C)
(1)
2. The second stage is the high-temperature slag reduction zone: the first-stage molten slag flows slowly into the multielectrode submerged arc main reactor, where it is heated to a higher temperature, avoiding local surface superheating caused in open arc reactors. Liquid Al and CO gas are rapidly generated; AI4C3 injection from the third stage assists in shifting the chemical equilibrium towards Al and in avoiding active carbon depletion. The decomposition of the Al4C3-rich slag to form the Al-rich metal phase is written as: (AI4C3 + Al203)(slag) ^
(6A1 + Al4C3)(metal) + 3C0(g)
( T > 2 0 0 0 °C)
(2)
3. The third stage consists of a vapor recovery reactor (VRR), where Al and AI2O vapors react with C to form AI4C3 (unless gas Al species are recovered countercurrent to the incoming solid feed, metal loss has a catastrophic impact on the process economics). 4. The fourth (final) stage of the process is the purification zone: the liquid Al alloy produced floats and flows through an overflow weir to a tank, where dissolved C and entrained AI4C3 particles can be removed by proprietary technology to recover pure Al.
133
3. Carbothermic Reactor Modeling The present study employs a simple finite volume model of the carbothermic reactor: a relatively coarse one-dimensional structured discretization of the domain is considered and a steady state CSTR reactor model is used in each resulting rectangular volume to probe mass, heat and molar balances using temperature-dependent physical properties; the latter have been previously reported in a relevant study (Gerogiorgis et al., 2002). The coarse finite volume decomposition and drastic assumptions are necessary because of the lack of experimental data, and to ensure a computationally manageable model. The presence of electrode pairs (inert conductors) is modeled using binary variables; voltage and intensity profiles are considered piecewise constant along the reactor axis. The advance of the reversible reduction towards Al (liquid) and CO (gas) is considered governed by the overall reaction proposed in a recent kinetic study (Frank et al., 1989): (3)
Al203(s) + 3C(s) -^ 2A1(,) + 3CO,
(g)
The pseudo-first order kinetic model as reported by Frank et al. (1989) has been used, as the presence of metallic solvent (Cu/Sn) in their study is not affecting reaction kinetics. The assumption of instant thermodynamic equilibrium (extremely high temperatures) can be alternatively used to elucidate concentrations within each of the finite volumes; the use of a multiscale model is necessitated in that case (Gerogiorgis and Ydsfie, 2002). A number of simplifying approximations are used for the development of this model. The potential presence of suspended C/AI4C3 particles in the incoming slag is ignored; the generated liquid and gas streams are considered immiscible with the molten slag; species diffusion, horizontal backmixing and vertical recirculation effects are neglected. The Joule heating effect of each electrode is thus confined within the respective volume; the relative concentrations within each volume only depend on the temperature therein, considering only the production of Al (liquid) and CO (gas) without any phase changes. The latter product streams are taken to completely escape the slag at each finite volume. ••CO (Tc p i t o y iacoi«ri F I N I T E V O L U M E M O D E L F O R STAGE 2
«X%A!^^
- StUIDiriCATION FRONT -REAdOR COOLING JAOET
SIDF-FNTW HfdRODtS
CSTR# 1
Figure 1: The conceptual carbothermic aluminium reactor (Johansen andAune, 2002).
134
4. The MINLP Model for Electrode Heating System Design This section presents the formulation of the MINLP finite volume model: the goal here is to perform electrode placement as well as imposed voltage profile optimization for maximization of Al production under mass, heat and molar species balance constraints. The mathematical formulation is based on a CSTR series steady state process model; each finite volume is assumed a CSTR with perfect separation of reactants and products. Thus, the Al maximization objective function and the balance constraints are written as: ^
max
(4)
^ ^ j = 3,i,P ' M , P i=l
S.t.
F,-.,s=I^,s+Fi.P
(5)
Fi-i,s -Cpi-Ls -(Ti., - T , ) = F,s -Cpjs -(Ti - T J + F,p-Cp. p -(T; - T J - Q „ , +Qe, +QR,i (6) Fi-i,s -x j,i-,.s = Fi.s -x j,i,s + Fi.p -x j,i,p +Cj -R, - V Exj,i,s = 1. S x j i,p = 1, x,j.3,,,,s = 0 , X(j.,,,),,p = 0 (i = 1,2,.. N CSTRs) J i Ei = y r V i / L , X y i = Y < N
(7) (8) - (11) (12)-(13)
i
cJi=cJi(Ti) = a,+p,.ln(Ti)
(14)
QH,i=^i(Ei)',Qc,i=U-A.(T,-T,),Q^,=R,V°.Aff Rr=koexd
f-AG^^
•Cj.,,, AG°=aG+bG.T
(15) - (17) (18)-(19)
RTj
Cpi,s/P= S^j,i,s/p-Cpj,s/p,
Cpj,s/P = (ao,j + ai,jT + a2,jT' + a3,jT' + a.2,jT-')
(20) - (21)
j
2173 < T < 2573 (K), 0 < V < 100 (Volts), 0 < x < 1
(22) - (23)
Here, Fi,s and Fi p are the slag and product molar flows exiting finite volume i (CSTR), Xj, i are the molar species fractions, Ti are the temperatures, Cj, i are the concentrations, Vi is the imposed voltage on a pair, Ei is the field intensity, L is the lateral tip distance, Yi are binary variables indicating absence (yi = 0) or presence (yi = 1) of electrode pairs and parameter Y denotes the maximum number of electrode pairs allowed in the reactor. QH,! is the Joule heat production, QR^ is the reaction heat consumption using the kinetic model proposed by Frank et al. (1989), Cj are the stoichiometric coefficients in Eq. (3), Qci, U, A and Tc are cooling rate and parameters, V° is the finite volume of the CSTR, ai(T) is the slag electrical conductivity, and Cp i,s » Cp i,p are the specific heat capacities. The free energy of formation (AG°) for the reaction is published by Frank et al. (1989). Indices i and j refer to position and species (hAliOa, 2:Al4C3, 3:A1, 4:C0), respectively; subscripts S and P denote slag (AI2O3 + C) and product (Al + CO) streams, respectively;
135 N is the total number of isothermal CSTR reactors considered (a sensitivity variable). Temperatures T], concentrations Cjj and fractions Xj^ are the state variables considered. MINLP Model Results The most important results obtained using the MINLP model are presented in Figure 2. IMPOSED ELECTRODE VOLTAGE
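The building blocks of the heating model are straightforward to evaluate. The sketch below computes the Joule heat of eqs. (12) and (15) and the pseudo-first-order rate of eqs. (18)-(19) for one finite volume; every numerical parameter in it is a placeholder, since the paper takes the physical property and kinetic data from Frank et al. (1989) and Gerogiorgis et al. (2002).

```python
import math

R_GAS = 8.314  # J/(mol K)

def joule_heat(y_i, v_i, tip_gap, sigma_i, volume):
    """Q_H = sigma * E**2 * V with E = y*V/L (eqs. 12 and 15)."""
    e_field = y_i * v_i / tip_gap
    return sigma_i * e_field ** 2 * volume

def reaction_rate(temp, c_al2o3, k0=1.0e3, a_g=8.0e5, b_g=-300.0):
    """R = k0 * exp(-dG/(R*T)) * c_Al2O3, dG = a_G + b_G*T (eqs. 18-19);
    k0, a_g and b_g are invented magnitudes, not fitted constants."""
    d_g = a_g + b_g * temp
    return k0 * math.exp(-d_g / (R_GAS * temp)) * c_al2o3

print(joule_heat(1, 60.0, 0.5, 250.0, 1.0))  # one active electrode pair
print(reaction_rate(2373.0, 5.0))            # mid-range slag temperature
```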
Figure 2: MINLP model results for Y = 6 electrodes (black) and Y = 3 electrodes (gray).
5. Conclusions and Future Work
The MINLP model outlined for electrode voltage optimization in a carbothermic reactor captures macroscopic phenomena, giving insight for preliminary heating system design. Electrode positions, voltages and temperatures for maximization of liquid Al production are illustrated in Figure 2, for two different electrode number constraints (Y = 3, Y = 6). The electrode number is equal to the maximum, but heating is most important right at the reactor inlet; temperature increases as the slag flows and the limiting reactant (Al2O3) depletes. Production increases with power input (6.06 kmol/min for Y = 3; 8.80 kmol/min for Y = 6). More complicated objectives (simultaneous minimization of Al(g) losses and energy use) can also be addressed by augmenting the model with VL equilibrium equations that will account for all 6 species (Al2O3(l), Al4C3(l), Al(l), Al(g), Al2O(g), CO(g)) in all fluid streams.
The finite volume process model and the conclusions of this study can also be used for multiscale modeling of this complex chemical process (Gerogiorgis and Ydstie, 2002).
6. References
Aggarwal, A., Floudas, C.A., 1992, Comput. Chem. Eng. 16 (2), 89.
Alonso, A.I., Lassahn, A., Gruhn, G., 2001, Comput. Chem. Eng. 25 (2-3), 267.
Bauer, M.H., Stichlmair, J., 1998, Comput. Chem. Eng. 22 (9), 1271.
Bjork, K.M., Westerlund, T., 2002, Comput. Chem. Eng. 26 (11), 1581.
Boisset-Baticle, L., Latge, C., Pibouleau, L., 1994, Comput. Chem. Eng. 18 (S), S99.
Chattopadhyay, D., Chakrabarti, B.B., 2002, Int. J. Elec. Power 24 (3), 185.
Ciric, A.R., Gu, D.Y., 1994, AIChE J. 40 (9), 1479.
Daichendt, M.M., Grossmann, I.E., 1994, Comput. Chem. Eng. 18 (8), 679.
Diaz, S., Gros, H., Brignole, E.A., 2000, Comput. Chem. Eng. 24 (9-10), 2069.
Esparta, A.R.J., Obertopp, T., Gilles, E.D., 1998, Comput. Chem. Eng. 22 (S), S671.
Frank, R.A., Finn, C.W., Elliott, J.F., 1989, Met. Mat. Trans. B 20B (4), 161.
Frey, T., Bauer, M.H., Stichlmair, J., 1997, Comput. Chem. Eng. 21 (S), S217.
Galan, B., Grossmann, I.E., 1999, Ind. Eng. Chem. Res. 37, 4036.
Gerogiorgis, D.I., Ydstie, B.E., 2003, Proceedings of the Foundations Of Computer-Aided Process Operations Meeting (FOCAPO 2003), Coral Springs, FL, 581.
Gerogiorgis, D.I., Ydstie, B.E. and Seetharaman, S., 2002, Proceedings of the Computer Modeling of Minerals, Metals & Materials Processing Meeting (TMS 2002), Seattle, WA, 273.
Graells, M., Espuna, A., Puigjaner, L., 1992, Comput. Chem. Eng. 16 (S), S221.
Harjunkoski, I., Westerlund, T., Porn, R., 1999, Comput. Chem. Eng. 23 (10), 1545.
Johansen, K., Aune, J., 2002, U.S. Patent 6,440,193 (to Alcoa Inc. and Elkem ASA).
Johansen, K., Aune, J., Bruno, M., Schei, A., 2000, Proceedings of the Sixth International Conference on Molten Slags, Fluxes and Salts, Stockholm, Sweden (#192).
Kalampoukas, G., Dervakos, G.A., 1996, Comput. Chem. Eng. 20 (B), S1383.
Lelkes, Z., Szitkai, Z., Rev, E., et al., 2000, Comput. Chem. Eng. 24 (2-7), 1331.
Manninen, J., Zhu, X.X., 1998, Comput. Chem. Eng. 22 (S), S537.
Mantzaris, N.V., Kelley, A.S., Srienc, F., Daoutidis, P., 2001, AIChE J. 47 (3), 727.
Motzfeldt, K., Kvande, H., Schei, A., Grjotheim, K., 1989, Carbothermal Production of Aluminium - Chemistry and Technology, Al Verlag, Dusseldorf, Germany.
Novak, Z., Kravanja, Z., Grossmann, I.E., 1996, Comput. Chem. Eng. 20 (12), 1425.
Pahor, B., Kravanja, Z., Bedenik, N.I., 2001, Comput. Chem. Eng. 4-6, 765.
Papalexandri, K.P., Pistikopoulos, E.N., 1994, Comput. Chem. Eng. 18 (11-12), 1125.
Pettersson, F., Westerlund, T., 1997, Comput. Chem. Eng. 21 (5), 521.
Qi, R.H., Henson, M.A., 2000, Comput. Chem. Eng. 24 (12), 2719.
Rooney, W.C., Biegler, L.T., 2000, Comput. Chem. Eng. 24 (9-10), 2055.
Stein, E., Kienle, A., Esparta, A.R.J., et al., 1999, Comput. Chem. Eng. 23 (S), S903.
Westerlund, T., Pettersson, F., Grossmann, I.E., 1994, Comput. Chem. Eng. 18 (9), 845.
Yee, T.F., Grossmann, I.E., 1990, Comput. Chem. Eng. 14 (10), 1165.
Yeomans, H., Grossmann, I.E., 1999, Comput. Chem. Eng. 23 (9), 1135.
7. Acknowledgments The authors acknowledge the financial support of ALCOA Inc. for the present study (part of the carbothermic aluminium production project co-funded by the U.S. DOE). The first author (D.I.G.) gratefully acknowledges an Institute of International Education Fulbright fellowship as well as an Alexander S. Onassis Foundation doctoral fellowship.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Towards the Identification of Optimal Solvents for Long Chain Alkanes with the SAFT Equation of State
Apostolos Giovanoglou, Claire S. Adjiman, Amparo Galindo and George Jackson
Department of Chemical Engineering and Chemical Technology, Imperial College London, Exhibition Road, London SW7 2AZ, U.K.
Abstract
An optimisation framework is presented to enable the identification of immiscibility in binary mixtures of short and long-chain n-alkanes, or polymers. Using the SAFT-HS equation of state and introducing a general thermodynamic criterion implying liquid-liquid immiscibility, the largest short-chain n-alkane which is immiscible with a given long-chain n-alkane is identified. Possible applications of this study can be found in solvent and mixture design formulations in the oil and polymerisation industries, where short-chain alkanes are readily available on-site and can offer good performance.
1. Introduction
Fluid phase behaviour is of significant importance in process systems engineering. Understanding and predicting the phase behaviour of a process mixture is an invaluable tool for process design and operation optimisation. In the polymer industry, for instance, processing conditions should be such that polymerisation takes place in a single phase for the end product to be homogeneous. Moreover, in the case when molecular thermodynamic models are available, a tool that guarantees robust phase equilibrium calculations would also be useful as part of a material design framework. Building on the polymerisation example, one may be searching for a solvent that ensures the formation of a single phase inside the polymerisation reactor within the range of operating conditions, or may be seeking a solvent to separate the polymer from the non-reacted monomer in a subsequent product-purification stage. The complexity and dimensionality of the multi-component fluid phase equilibrium/stability problem mean that the development of tools able to account robustly and efficiently for fluid phase behaviour inside process and/or material design formulations is not a trivial task. Such issues have largely been treated either by exploiting prior knowledge of the mixture phase behaviour, or by assuming the number of stable equilibrium phases is known a priori. In this paper a methodology is presented to distinguish between two different types of phase behaviour which occur in binary mixtures of short and long-chain alkanes or polymers. Extensions of the methodology to the more general case of multi-component mixtures can find applications in case studies involving solvent selection and mixture
design in the oil and polymerisation industries. The choice of solvent in such cases can be narrowed down by noting that short-chain n-alkanes are readily available on-site and can usually offer good performance. The structure of this paper is as follows: in the next section there is a brief description of the problem considered; in section 3 a general thermodynamic criterion for identifying liquid-liquid immiscibility in binary mixtures is introduced; in section 4 the overall problem formulation is presented; and the results are shown in section 5.
(Footnotes: Also affiliated with the Centre for Process Systems Engineering at the same address. Corresponding author: Tel: +44 (0)20 7594 6638; Fax: +44 (0)20 7594 9929; Email: [email protected])
2. Problem Statement
The aim of this work is to develop a methodology to distinguish between two types of phase behaviour occurring in binary mixtures of n-alkanes of two different lengths: a shorter n-alkane, referred to as the short-chain n-alkane, and a longer n-alkane, referred to as the long-chain n-alkane. Given the size of the long-chain n-alkane, the goal is to identify the short-chain n-alkane with the largest molecular weight such that the mixture exhibits liquid-liquid (L-L) separation; this type of phase behaviour corresponds to type V in the classification of van Konynenburg and Scott (1980) (see figure 1). For molecular weights of the short-chain n-alkane greater than the one identified above, liquid-liquid separation is no longer observed; in the classification of van Konynenburg and Scott (1980) this phase behaviour is the so-called type I (see figure 1).
Figure 1: Pressure-temperature phase diagrams illustrating the change in phase behaviour from type V to type I in a binary mixture of n-alkanes as the molecular weight (w) of the shortest n-alkane (components 1 and 1', respectively) increases.
The main characteristics of type V phase behaviour compared to type I are the region of liquid-liquid immiscibility, the so-called cloud curve, close to the critical point of the more volatile component (the short-chain n-alkane), and the appearance of a three-phase line (L-L-V) close to the vapour pressure curve of the same component (see figure 1). Since demixing of the two components is the main difference between these two types of phase behaviour, a criterion to identify liquid-liquid immiscibility is developed. Such a criterion is introduced in the next section. A simplified version of the statistical associating fluid theory (SAFT-HS) is used to model the n-alkane molecules. This approach offers a good representation of the entire n-alkane series (Galindo et al., 1996), incorporating an intermolecular parameter which describes the n-alkane size. In this
way it is possible to study the behaviour of n-alkane mixtures as a function of the size of the short-chain n-alkane.
3. Identifying Liquid-Liquid Immiscibility in Binary Mixtures
A mixture reaches equilibrium at the global minimum in its free energy. This sometimes requires the existence of several phases, giving rise to liquid-liquid immiscibility. In the case of a binary mixture, and according to the Gibbs phase rule, the maximum number of liquid phases that can coexist in equilibrium is two. A general criterion to determine immiscibility can be derived directly from the classical thermodynamic rules related to the concept of stability limits. A system is said to be at its limit of stability when even a small perturbation in one of its state variables leads to a phase change (Beegle et al., 1974b). For a binary mixture at a specified temperature and pressure, assuming the absence of a vapour-like density branch at these conditions, the existence of stability limits is a necessary and sufficient condition for liquid-liquid equilibrium to occur over a range of total compositions. This range is bounded above and below by the compositions of the co-existing phases. When non-existence of a vapour-like density branch cannot be guaranteed, the above statement is no longer sufficient, since vapour-liquid equilibrium may be more stable than liquid-liquid equilibrium. For the rest of this section non-existence of a vapour-like density branch is assumed, corresponding to high-pressure fluid phase behaviour. Beegle et al. (1974a, 1974b), Modell and Reid (1983) and more recently Firoozabadi (1999) have worked on the derivation of simple and general criteria that can be used to identify stability limits. Starting from the maximum entropy or minimum free energy principle, and making a series of expansions around the state variables, inequalities are developed to identify whether or not a system is indeed in stable equilibrium. Using Legendre transform theory a representation of the derived expressions in different state-variable spaces can be obtained. In the case of a binary mixture it can be shown (Firoozabadi, 1999) that a single liquid phase is stable (or metastable) if and only if:
$$\left(\frac{\partial \mu_i}{\partial x_i}\right)_{T,P} > 0 \qquad \text{for } i = 1 \text{ or } 2 \qquad (1)$$
where $\mu_i$ and $x_i$ are the chemical potential and the mole fraction of component $i$, respectively, $T$ is the temperature, and $P$ is the pressure. The above partial derivative becomes negative if the single phase is unstable and the system attains the global minimum in its energy by separating into two liquid phases. The points at which the above partial derivative vanishes (cf. eq. 2) are the limits of stability, and the existence of such points is a signature of liquid-liquid co-existence (spinodal decomposition).
$$\left(\frac{\partial \mu_i}{\partial x_i}\right)_{T,P} = 0 \qquad (2)$$
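The criterion of eqs. (1)-(2) is easy to demonstrate numerically. The sketch below applies it not to SAFT-HS but to a deliberately simple one-parameter (regular-solution) chemical potential, for which the derivative changes sign inside the spinodal region whenever the interaction parameter exceeds its critical value of 2.

```python
import numpy as np

def dmu1_dx1(x1, chi, rt=1.0):
    # mu1 = mu1_ref + RT*ln(x1) + chi*RT*(1 - x1)**2, hence:
    return rt / x1 - 2.0 * chi * rt * (1.0 - x1)

x = np.linspace(1e-3, 1.0 - 1e-3, 2000)
g = dmu1_dx1(x, chi=2.5)                 # chi > 2 gives demixing here
limits = x[np.where(np.diff(np.sign(g)) != 0)]
print(limits)  # two stability limits found => liquid-liquid coexistence
```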
This is illustrated qualitatively in figure 2 for a model mixture of two spherical molecules of equal diameter. At temperature T1 and pressure P (plots a and c in figure
2) the mixture forms one homogeneous liquid phase. At the same pressure P but lower temperature T2 (plots b and d in figure 2), the two components become immiscible, as indicated by the existence of extreme points in the chemical potential of component 1, or by the zeros in its partial derivative with respect to composition. In the next section we will discuss how this criterion is used to distinguish between type I and type V phase behaviour in binary mixtures of short and long-chain n-alkanes.
Figure 2: Chemical potential and chemical potential partial derivative with respect to composition for a model mixture of two spherical molecules: at conditions T1, P where the two components are miscible (plots a and c), and at conditions T2 < T1, P where they are immiscible (plots b and d).
4. Problem Formulation
In order to derive a mathematical formulation for the problem considered, a thermodynamic model capable of describing the whole series of n-alkanes is required. In this work we use the SAFT-HS approach (Jackson et al., 1988; Chapman et al., 1988; Galindo et al., 1996), a simplified version of the statistical associating fluid theory (SAFT), which treats molecules as chains of hard-sphere segments with van der Waals interactions. More specifically, n-alkanes are described using a united-atom model where $m_i$ hard-sphere segments of equal diameter $\sigma_i$ are bonded tangentially to form a chain. Attractive interactions are described with an integrated interaction energy $\alpha_i$ associated with each segment. A simple empirical relationship between the number of carbon atoms in the alkyl chain $C_i$ and the number of spherical segments $m_i$ has been proposed by Jackson and Gubbins (1989) and Archer et al. (1996): $m_i = 1 + (C_i - 1)/3$. The equation can be used in a transferable way to characterise all members of the series in terms of their chain length. This representation gives a reasonable description of the critical point. In the context of this work this is especially important since the onset of liquid-liquid immiscibility first occurs close to the critical point of the short-chain n-alkane (van Konynenburg and Scott, 1980). Using the SAFT-HS equation and exploiting the immiscibility criterion presented in section 3, the problem can be formulated as a non-linear constrained optimisation program, where for a given long-chain n-alkane, the largest short-chain n-alkane that is immiscible with the long-chain component is identified. The temperature and pressure search area is confined to that close to the critical point of the short-chain n-alkane, as liquid-liquid immiscibility always appears in this area. The overall formulation is as follows:
$$\begin{aligned} \max_{T,P} \quad & m_1 \\ \text{s.t.} \quad & \left(\frac{\partial \mu_1}{\partial x_1}\right)_{T,P}\!\!(m_1, \rho, x_1, T) = 0 \\ & P = P_{C,1}(m_1) \\ & 0.9\,T_{C,1}(m_1) \le T \le T_{C,1}(m_1) \\ & 0 < m_1 < m_2 \\ & \rho > 0 \\ & 0 < x_1 < 1 \end{aligned} \qquad (3)$$
p>0 0 < x <1
where indices 1 and 2 denote the short and long-chain n-alkane, respectively, subscript C stands for critical, //, is the chemical potential, jc, the mole fraction, and m, the SAFTHS chain length parameter of component /, p is the density of the mixture, and P and T denote pressure and temperature, respectively. The partial derivative of the chemical potential with respect to composition at constant temperature and pressure was calculated analytically using the chain rule.
dx.* dp dx.
(4)
JT,P
dx,^ JT.
dp
t,X,
The critical properties of the short-chain n-alkane, which are used to fix the pressure and to bound the temperature are functions of mx only. They are calculated by applying the criticality conditions for component 1, i.e..
^1iTc.vPc.x) = 0,
and
ly^
(rc,„Pc,i) = 0
(5)
142
5. Results The optimisation problem was solved using gPROMS/gOPT (Process Systems Enterprise Ltd.). We have considered long-chain n-alkanes with chain length ranging from 75 up to 9,000 carbon atoms. The results are presented in figure 3. Our calculations suggest a near linear dependence for increasing sizes of the small n-alkane; this is in qualitative agreement with experimental findings (Rowlinson and Swinton, 1986). 800
700
600
SI
o 300 SI
m
200 100
0 1000
2000
3000 4000 5000 6000 long chain size (Cg)
7000
8000
9000
Figure 3: The length of the largest short chain n-alkane which is immiscible with a given long chain n-alkane. The length of the long chain alkane ranges from 75 up to 9,000 carbon atoms.
6. Conclusions An optimisation framework to identify liquid-liquid separation in binary mixtures of nalkanes is presented. A general thermodynamic criterion, based on the concept of stability limits, is introduced to identify the largest short-chain n-alkane which is immiscible in a given long-chain polymer-like n-alkane.
7. References Archer, A.L., Amos, M., Jackson, G. & McLure, I.A., 1996, Int. J. Thermoph., 17, 201. Beegle, B.L., Modell, M. and Reid, R.C., 1974a, AIChE J., 20, 1194. Beegle, B.L., Modell, M. and Reid, R.C., 1974b, AIChE J., 20, 1200. Chapman, W.G., Jackson, G. and Gubbins, K.E., 1988, Mol. Phys., 65, 1057. Firoozabadi, A., 1999, Thermodynamics of Hydrocarbon Reservoirs, McGraw-Hill. Galindo, A., Whitehead, P., Jackson, G. & Burgess, A., 1996, J. Phys. Chem., 100, 6781. gPROMS Version 1.8, www.psenterprise.com. Jackson, G., Chapman, W.G. and Gubbins, K.E., 1988, Mol. Phys., 65, 1. Jackson, G. and Gubbins, K.E., 1989, Pure Appl. Chem., 61, 1021. Modell, M. and Reid, R.C., 1983, Thermodynamics and its Applications, Prentice-Hall. Rowlinson, J.S. and Swinton, F.L., 1982, Liquids and Liquid Mixtures, Butterworths. Van Konynenburg, P.H. and Scott, R.L., 1980, Phil. Trans., A298,495.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Combined Optimisation and Process Integration Techniques for the Synthesis of Fuel Cells Systems
Julien Godat, François Maréchal
Laboratory of Industrial Energy Systems, Institute of Energy Sciences, Swiss Federal Institute of Technology, CH-1015 Lausanne
mailto:[email protected], tel +41 21 693 35 16, fax +41 21 693 35 02
Abstract
A method that combines process modelling and process integration techniques has been developed to tackle the design of complex integrated systems like fuel cell systems. This method uses the equations of the composite curves as constraints to model the ideal heat exchanger network and the corresponding utility system. When combined with process modelling techniques, this method makes it possible to synthesize optimized, high-efficiency integrated fuel cell systems that respect the technological design constraints.
1 Introduction
The physical principle behind a fuel cell system is the electrochemical conversion of a fuel into electricity. While the membrane system that allows the separate transfer of the ions and the electrons is the heart of the system, the system integration is of major importance with respect to the system performances (Hirschenhofer, 1998). The Ballard Proton Exchange Membrane Fuel Cell (PEMFC) system (Keitel, R.) has been used as a basis for this study. It includes a fuel processing system, the fuel cell and the post combustion sub-system. The fuel processing section converts the fuel into a suitable form for the electrochemical reaction, avoiding catalyst poisoning and featuring acceptable catalyzed reaction rates. The fuel processing is a steam reforming system operating at a pressure of 3 bar, followed by a shift reactor that converts the CO of the reformate into additional CO2 and H2, and a preferential oxidation (PROX) reactor that converts the remaining CO to avoid fuel cell catalyst poisoning. In our study, we considered a medium temperature (250°C) shift reactor that allows the shift reaction to be realized in one step. An H2 loss by oxidation has been considered in the PROX reactor due to the selectivity of the catalyst. The fuel cell operates at low temperature (less than 100°C). The hydrogen crossing the membrane produces electricity by the electrochemical conversion of the free energy of the hydrogen oxidation reaction. The fuel conversion being lower than 100%, the remaining fuel is recovered by post combustion to satisfy the energy requirement of the system. In order to reach the operating pressure of the system, the air is compressed by a turbo-compressor driven by the expansion of the flue gases resulting from the post combustion.
FIG. 1. Energy flow diagram of the system.
Coupling an electricity generator allows producing an additional amount of electricity, whose production will depend on the turbine inlet temperature TiT and on the gas turbine system integration. Process integration in fuel cell systems therefore concerns different levels. In terms of chemical conversion, the fuel processing performance affects the fuel cell and the post combustion efficiency, but also the combined heat and power production. A characteristic of the fuel cell system design is the heat exchange system, which cannot be considered as a conventional network of individual heat exchangers but as a system where heat exchanges take place simultaneously with the chemical and the electrochemical reactions in the same vessel. Faced with this multi-scale integration problem, our goal has been to develop a methodology that combines process modelling, process integration and optimisation techniques to design better integrated fuel cell systems.
2 Process Synthesis Methodology
The first step of the process synthesis methodology is the generation of a simulation model based on the Energy Flow Diagram (fig 1). Knowing the values of the decision variables (P), the energy flow model is used to compute the temperatures, pressures, flow rates and energy flows that define the system requirements. In the integration strategy, some of the flow rates ($\dot{m}_u$) have to be computed to close the energy balance and satisfy the heat cascade of the integrated system; we refer to these as the utility system. The temperatures, pressures and specific energy flows of the utility sub-systems are also computed by energy flow modelling when these are related to decision variables (e.g. the operating pressure or TiT). The flow rates will be computed by process integration using the Effect Modelling and Optimisation (EMO) approach (Marechal and Kalitventzeff, 1998). The "ideal but feasible at an acceptable cost" heat exchanger network model is formulated as a linear programming (LP) problem including the heat cascade as constraints and the process efficiency as an objective function. The results of the LP optimisation are then used to compute the objective function of the design and to define the optimal values of the decision variables (P) that maximize the efficiency. In our study, the objective function is the energy efficiency of the system, computed by (1), and the optimisation is stated as (2), where $\dot{m}_u(P)$ is the solution of the LP problem (3).
$$\eta_e(P, \dot{m}_u) = \frac{E_{FC}(P) + E_{GT}(P, \dot{m}_u) - \dot{m}_{O_2}(P)\, E_{O_2}}{\left(\dot{m}_{FC} + \dot{m}_{NG,add}(P, \dot{m}_u)\right) LHV_{NG}} \qquad (1)$$
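Read this way, eq. (1) is a one-line computation; the sketch below evaluates it with consistent units assumed (powers in kW, flows in kg/s, LHV in kJ/kg) and with the reconstructed denominator that charges both the process and the additional utility fuel. All figures in the example call are invented for illustration only.

```python
def electrical_efficiency(e_fc, e_gt, m_o2, e_o2_spec, m_fc, m_ng_add, lhv_ng):
    """eta_e of eq. (1): net electricity over total natural gas input."""
    net_power = e_fc + e_gt - m_o2 * e_o2_spec
    fuel_power = (m_fc + m_ng_add) * lhv_ng
    return net_power / fuel_power

# 300 kWh/ton of O2 corresponds to 1080 kJ/kg, used as e_o2_spec below.
print(electrical_efficiency(e_fc=200.0, e_gt=30.0, m_o2=0.001,
                            e_o2_spec=1080.0, m_fc=0.009,
                            m_ng_add=0.001, lhv_ng=50000.0))
```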
where $E_{FC}(P)$ and $E_{GT}(P, \dot{m}_u)$ are respectively the electricity production of the fuel cell system and the net production of the integrated gas turbine system for the decision variables $P$ and the utility flow rates $\dot{m}_u$. The natural gas flow rate entering the fuel processing section ($\dot{m}_{FC}$, with lower heating value $LHV_{NG}$) is a constant, while $\dot{m}_{NG,add}(P, \dot{m}_u)$, the additional natural gas in the utility system, results from the LP problem (3). $\dot{m}_{O_2}(P)$ is the pure oxygen flow used in the PROX reactor for the decision variables $P$ and $E_{O_2}$ is the specific energy consumption of the pure oxygen used in the system (300 kWh/ton of O2).

$$\max_{P} \; \eta_e\!\left(P, \dot{m}_u(P)\right) \qquad (2)$$
$$\min_{\dot{m}_u} \; F_{LP,obj}(\dot{m}_u, P) = \dot{m}_{NG,add}(P, \dot{m}_u)\, LHV_{NG} - E_{GT}(P, \dot{m}_u) \qquad (3)$$

subject to the heat cascade constraints (4a) and the mechanical power balance (4b).
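A toy version of the LP of eq. (3) can be written in a few lines: minimise the additional fuel subject to the cascaded heat surpluses of every temperature interval remaining non-negative. The interval data and the fuel's heat-release profile in the sketch are invented, and the single variable stands in for the full vector of utility flow rates.

```python
from scipy.optimize import linprog

net_surplus = [-50.0, 120.0, -200.0, 80.0]  # kW per temperature interval
fuel_release = [1.0, 1.0, 0.0, 0.0]         # kW per unit fuel, per interval

# Heat cascade: partial sums of (net_surplus + f*fuel_release) >= 0.
a_ub, b_ub, run_d, run_r = [], [], 0.0, 0.0
for d, r in zip(net_surplus, fuel_release):
    run_d += d
    run_r += r
    a_ub.append([-run_r])   # -cum(release)*f <= cum(surplus)
    b_ub.append(run_d)

res = linprog(c=[1.0], A_ub=a_ub, b_ub=b_ub, bounds=[(0, None)])
print(res.x)  # minimum additional fuel flow (65.0 for these data)
```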
In the adopted approach, the model are solved as a simulation problem with enough robustness solving only part of the problem by linear optimisation and analysing the sensitivity of the decision variables (P). This allows not only identifying the optimum but also to represent the shape of the search space. 2.1 Process energy flow model The Energy flow model concerns the calculation of the chemical and electro-chemical reactions as well as the heat exchange requirements, the separation units, compression and expansion. The model has been developed using an equation solver approach (Belsim s,a„ 2001). Although this choice is not critical, it allows an easy modification of the list of specifications during the model development without having to redesign the solving sequence as it would have been the case in the sequential approach. The definition of the hot and cold streams in the energy flow model is critical and should be considered carefully if we do not want to miss energy savings potentials. In the conventional system, the steam is produced in a separated heat exchanger before being mixed with the fuel at the inlet of the reformer (dotted line at the vaporisation temperature on the composite curve of figure 1). In order to recover the partial pressure effect of the steam injection, the energy requirement of the feed preheating has been considered as a liquid-vapour mixture of water and natural gas to be heated up (plain line on the same figure). This implies the use of a special heat exchange equipment. The chemical reactions are modelled assuming (where appropriate) equilibrated reactions. In order to account for the catalyst efficiency, the equilibrium is computed at a different temperature from the heat balance one. The conventional representation of the reformer is to consider the feeds preheating up to the present temperature and then to consider a cold stream representing the heat of reaction Qreac at the constant reforming temperature Treac. In order to better fit the reactor temperature profile, we considered the feed stream preheating to Treac, then the reaction takes place to reach equilibrium at Treac. The heat of reaction is considered into parts :
a cold stream from Tint (the reaction start-up temperature) to Treac, corresponding to the heating of the reaction products, and the balance at constant temperature Treac. This approach better approximates the real temperature profile in the reactor (dotted line of figure 1) and allows energy savings, for example by additional feed preheating. This definition of the heat requirement of the reforming reaction should be kept in mind when the heat exchange network has to be computed, especially if a pinch point appears in the corresponding temperature range. A similar approach has been used to model the energy requirement of the exothermic reactions (shift and preferential oxidation, combustion, as well as the fuel cell itself). For the system optimisation, the pressure, the steam to carbon ratio and the reforming temperature have been considered as decision variables, while the temperature of the shift reactor, as well as the conversion in the PROX reactor, have been considered as fixed. The fuel utilisation in the fuel cell has not been optimised in our study. The post combustion is modelled assuming the stoichiometric combustion of the fuel not converted in the fuel cell (this fuel contains the remaining methane and hydrogen). The resulting flue gases are then expanded in a turbine with an inlet temperature (TiT) considered as a decision variable. It should be noted that the heat requirement associated with the combustion assumes the possible air and fuel preheatings and the high temperature exchange of the combustion products with the reformer.
2.2 The utility system
In order to balance the heat requirement of the system, a utility system is used. According to the decision parameters, two situations may occur: 1) the heat of the post combustion is not sufficient to balance the heat requirement of the system and additional firing is needed, or 2) the system is balanced by the heat of the post combustion and actions have to be taken in order to use the energy excess. These two situations are represented by adding the two sub-systems shown on the right of figure 1 (post combustion sub-system). On the top, the sub-system represents the additional fuel added to the post combustion in order to balance the composite curve; this leads to an additional air requirement (that flows through the compressor, the fuel cell and the air preheating section). In the centre, an air excess is computed in order to maximise the energy conversion into net electricity using the Brayton cycle. The bottom of the figure represents the post combustion of the non-converted fuel. The model used for this representation has been presented in (Marechal and Kalitventzeff, 1998) and has been adapted to account for the effect of the oxygen partial pressure on the electricity production in the fuel cell. This model represents the derivative of the additional fuel and the air excess in the composite curve constraints and the objective function. The limit of this configuration would be a regenerative gas turbine operating at a pressure of 3 bar which, without the fuel cell system, reaches 26% electrical efficiency. The model used allows representation of the possible regenerative exchange, not only between the streams of the gas turbine system, but also with the streams of the process.
The difficulty of applying the heat cascade model lies in the fact that it can only be solved as an optimisation problem: the solution is defined by the activation of inequality constraints, i.e. a pinch point or a utility flowrate.
FIG. 2. System efficiency as a function of reforming temperature and steam to carbon ratio with and without net production of electricity with the gas turbine
3 Results
The visualisation of the objective function with respect to some of the decision parameters is shown in figure 2. Two situations are presented as a function of the reforming temperature and the steam to carbon ratio. The first (lower surface) represents the turbo-compressor solution, where the TiT is such that the expansion turbine only drives the compressor without net production of electricity (as is the case in the original design), while the second (upper surface) assumes a net electricity production by the gas turbine. The optimal efficiency computed for the system is 53%, obtained for a steam to carbon ratio of 4 and a reforming temperature of 700°C in the gas turbine case. For the turbo-compressor case, the optimum efficiency is 47%, with a reforming temperature of 750°C. These values have to be compared with an efficiency of 36% in the original design. It should be mentioned that the optimal decisions are quite different in the two situations, indicating that the investment and the choice of the catalyst will be affected by the configuration decision. More results are presented in (Godat and Marechal, 2002). Depending on the design assumptions or limits, the activation of different pinch points resulting from the LP optimisation strategy leads in practice to different configurations. Consider as an example the composite curves of figure 3, which have been computed for the same steam to carbon ratio and reforming temperature. On the right, we consider the system without electricity production by the gas turbine (turbo-compressor). In this situation, the liquid-vapour feed mixture preheating is necessary in order to reach a high system efficiency. Producing steam in a separate heat exchanger (dotted cold composite) would have activated a pinch point that would have increased the additional firing requirement. The mixture preheating leads to an efficiency increase of about 1% (from 44.1% to 45.2%). On the left of the figure, we present the results of the regenerative gas turbine integration allowing electricity generation by the gas turbine. In this case, the shape of the composite curve indicates that both separate production of steam (dotted cold composite) and mixture preheating may be envisaged without efficiency penalty.
[Figure 3 plot area. Legend: with vaporisation / mixture preheating / hot composite curve / hot mixture preheating. Panels: gas turbine net production (left) and turbo-compressor (right).]
FIG. 3. Composite curves of the system with and without net production with the gas turbine, for a) Tref = 800°C, S/C = 2.6 and b) Tref = 800°C, S/C = 2.6.
In this case, the system efficiency is 47.8%. This sensitivity analysis has been made because the objective function (the efficiency) is not the only concern. Other design parameters, for example the humidity in the fuel cell and, of course, the investment, also have to be analysed.
4 Conclusion
A model based on the combined use of modelling and process integration techniques has been developed to design optimal integrated fuel cell systems. The use of process integration techniques, solved as a linear programming problem, allows modelling the heat exchanges in the integrated system and determining the flowrates in the system even if the pinch point position changes. This approach is especially useful for designing integrated systems because it allows modelling the heat exchange system without defining its structure a priori. The proposed modelling method is the first step of a synthesis methodology that will integrate multiple objectives (i.e. the efficiency and the cost) in order to finally design the best system structure. Compared to simultaneous simulation and optimisation, the use of a two-level approach (solving the model at a lower level) allows not only the identification of the optimal decision parameters but also the characterisation of the optimal region for the system configurations.
5 References
Hirschenhofer J.H., Fuel Cell Handbook, Fourth Edition, (1998).
Godat J., Marechal F., Optimization of a fuel cell system using process integration techniques, Fuel Cell Conference, Amsterdam, 2002; submitted to the Journal of Power Sources, (2002).
Marechal F., Kalitventzeff B., Process integration: Selection of the optimal utility system, Computers and Chemical Engineering, Vol. 22 Suppl., pp. S149-S156, (1998).
Keitel, R., 1996, Application with Proton Exchange Membrane (PEM) Fuel Cells for a deregulated market place, ALSTOM BALLARD, Frankfurt, Germany.
Belsim - Vali III, v10 User Guide, Belsim s.a., Rue Georges Berotte 29A, B-4470 Saint-Georges-sur-Meuse (Belgium), http://www.belsim.com, (2001).
Optimal Design and Operation of Batch Ultrafiltration Systems
Antonio Guadix(a), Eva Sørensen(a)*, Lazaros G. Papageorgiou(a) and Emilia M. Guadix(b)
(a) Centre for Process Systems Engineering, Department of Chemical Engineering, UCL (University College London), Torrington Place, London WC1E 7JE, U.K.
(b) Departamento de Ingenieria Quimica, Universidad de Granada, 18071 Granada, Spain.
Abstract In this paper, an approach for the optimal design and operation of a batch ultrafiltration installation is presented. The approach is based on a dynamic model which takes into account both fouling and cleaning issues. An economic objective function, which includes capital and operating costs, is used and reasonable operating constraints are imposed. The overall problem is formulated as a dynamic optimisation model. A protein ultrafiltration plant involving the use of commercially available tubular ceramic membrane modules is studied. Optimal values for both design and operation variables such as the processing tank volume, the number of membrane modules, the feed and circulation pumps sizing, the work pressure profile and the timing of the operating and cleaning tasks are determined simultaneously.
1. Introduction
Batch membrane ultrafiltration is well suited to the processing of biological molecules since it operates at relatively low temperatures and pressures and involves no phase changes or chemical additives, thereby minimising the extent of denaturation, deactivation and degradation of highly labile biological products (Zeman and Zydney, 1996). In many biotechnology systems, the final product is a dilute solution of the desired molecule, and batch ultrafiltration can be used in the recovery process for product concentration, which can significantly improve the economics and effectiveness of other processing steps. Most of the research work performed on ultrafiltration optimisation has focussed on steady-state techniques, e.g. Liu and Wu (1998) and Belhocine et al. (1998). Unfortunately, these methods do not allow operability considerations to be taken into account. Dynamic optimisation methods, however, can be used to determine optimal values for both design and operation variables simultaneously. In recent years, this technique has successfully been applied to a number of processes (for instance, distillation (Furlonge et al., 1999), reaction (Kvamsdal et al., 1999) or heat exchange (Georgiadis et al., 1998)). In this paper, we consider the application of dynamic optimisation to the optimal design and operation of a batch ultrafiltration system, illustrating the method with a practical case study. To the best of the authors' knowledge, this is the first work in which a formal dynamic optimisation methodology is applied to batch ultrafiltration.
* Corresponding author. Tel: +44 20 7679 3802, E-mail: [email protected]
2. Problem Description A typical batch ultrafiltration plant is represented in Figure 1. It consists of a processing tank, a feed pump, a circulation pump and a membrane unit with a number of modules in parallel. A permeate is obtained from the membranes while the retentate is recirculated until the desired concentration in the processing tank is reached. Then, a cleaning procedure is performed and the system is ready for the next batch.
Figure 1. Scheme of a batch ultrafiltration plant.
3. Dynamic Model
The physical description of the process described above is based on material balances and equipment performance equations, which incorporate the following key assumptions:
• The membrane is fully retentive for the solute considered.
• The membrane geometry is tubular.
• Permeate flux is governed by the osmotic pressure model (Cheryan, 1998):

$$J = \frac{\Delta P - \pi}{R_M} \qquad (1)$$
where J is the permeate flux, ΔP is the transmembrane pressure, π is the osmotic pressure and R_M is the membrane resistance.
• Fouling occurs according to the cake filtration model (Cheryan, 1998):

$$\frac{dR_M}{dt} = \alpha \,\Delta P^{\beta} J C_R \qquad (2)$$
where C_R is the retentate concentration and α and β are parameters.
• The membrane is perfectly regenerated after each cleaning procedure.
• The duration of the cleaning process is a linear function of the filtration time.
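Equations (1)-(2) can be integrated directly once an osmotic pressure correlation is chosen. The sketch below is an illustration under assumed parameter values and a hypothetical π(C) correlation, not the parameter set of the paper:

```python
# Minimal integration of the dynamic model, eqs. (1)-(2): flux from the
# osmotic pressure model, fouling from the cake filtration model.
# All numerical parameters and pi(C) are illustrative assumptions.
from scipy.integrate import solve_ivp

A = 230.0                # membrane area, m^2 (order of the case study below)
V0, C0 = 5000.0, 5.0     # batch volume (L) and feed concentration (g/L)
RM0 = 40.0               # initial membrane resistance, kPa m^2 h / L (assumed)
alpha, beta = 5e-4, 0.5  # fouling parameters (assumed)

def osmotic_pressure(C):
    # Placeholder virial-type correlation pi(C) in kPa; real protein
    # osmotic-pressure data would be used here.
    return 0.5 * C + 0.02 * C**2

def rhs(t, y):
    V, RM = y
    C = C0 * V0 / V                          # fully retentive membrane
    dP = 258.0 - (258.0 - 210.0) * t / 3.0   # linear pressure policy, kPa
    J = (dP - osmotic_pressure(C)) / RM      # eq. (1): flux, L/(m^2 h)
    return [-J * A,                          # permeate withdrawal, L/h
            alpha * dP**beta * J * C]        # eq. (2): cake fouling

sol = solve_ivp(rhs, (0.0, 3.0), [V0, RM0])
V_end = sol.y[0, -1]
print(f"retentate concentration after 3 h: {C0 * V0 / V_end:.1f} g/L")
```

The retentate concentration follows from a solute mass balance, C = C0·V0/V, because the membrane is fully retentive.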
4. Optimisation Problem
Ultrafiltration installations often produce retentates that need further processing to be marketed (e.g. spray drying). Thus, objective functions based on profit are not appropriate in ultrafiltration processes, as revenue data are not generally available. Therefore, the objective function proposed in this study is the total hourly cost (to be minimised). Both capital and operating costs are taken into account. The former includes the costs of the equipment (tank, pumps, membranes), while the latter includes the electricity consumption of the pumps and the cleaning costs. It is assumed that:
• The capital costs are distributed over the equipment life.
• The cost of each cleaning procedure is a function of the membrane area and the final membrane resistance:

$$CC = a + b\, A\, R_{M,f} \qquad (3)$$
For batch mode, the plant operation is subject to a number of constraints:
• The plant has a minimum capacity requirement.
• Feed and product concentrations are fixed due to raw material specifications and quality requirements, respectively.
• The work pressure should remain below a maximum value recommended by the membrane manufacturer during the entire operation time, in order to avoid any irreversible damage to the membrane.
The optimal design and operation of an ultrafiltration plant can be formulated as a dynamic optimisation problem. With respect to the optimal design, the following parameters are determined:
• Processing tank volume,
• Feed and circulation pump powers,
• Number of membrane modules (N).
Simultaneously, the optimal operation parameters are also found:
• Timing of the operating and cleaning tasks,
• Work pressure profile.
The proposed algorithm to solve this problem is the following:
• Relax N to a continuous value. This is reasonable due to the large number of membrane modules.
• Solve the resulting dynamic optimisation problem and round N to the closest integer value.
Each dynamic optimisation problem is implemented in gPROMS (Process Systems Enterprise Ltd., 2001), which incorporates a control vector parameterisation approach. The work pressure profile is considered to be a linear function of time over the entire time horizon.
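A compact sketch of this relax-and-round step, with scipy standing in for gPROMS and a smooth hypothetical surrogate in place of the dynamic-optimisation objective (the real evaluation requires integrating the model above):

```python
# Relax-and-round sketch: optimise with N continuous, then fix N to the
# nearest integer and re-optimise the remaining variables. hourly_cost()
# is a hypothetical smooth surrogate, not the paper's plant model.
from scipy.optimize import minimize

def hourly_cost(x):
    N, P0, P1 = x                      # modules, start/end work pressure (kPa)
    membranes = 1000.0 * (0.24 * N)**0.90 / (5 * 8000.0)  # $/h over 5 years
    pumping = 1e-4 * (P0 + P1) * 30.0                     # electricity, $/h
    cleaning = 2.0e4 / N + 5e-5 * (P0 - P1)**2            # fouling-related
    return membranes + pumping + cleaning

bounds = [(100.0, 1500.0), (150.0, 1000.0), (150.0, 1000.0)]
relaxed = minimize(hourly_cost, x0=[800.0, 300.0, 250.0], bounds=bounds)
N_int = round(relaxed.x[0])            # fix N to the closest integer value
refined = minimize(lambda p: hourly_cost([N_int, p[0], p[1]]),
                   relaxed.x[1:], bounds=bounds[1:])
print(N_int, refined.x, refined.fun)
```

The linear work-pressure profile means the control vector parameterisation reduces to two scalars (start and end pressure), which keeps the outer problem small.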
5. Case Study and Results As an illustrative example, the optimal design and operation of a protein ultrafiltration plant will be studied. The membrane modules considered are those manufactured by US
Filter (US Filter, 2002). Each ceramic module integrates 19 channels, 1.02 m long with a diameter of 4 mm, for a total area of 0.24 m². The recommended maximum pressure and crossflow velocity are 1000 kPa and 3 m/s, respectively. It is assumed that the batch size is 5000 L and that the plant capacity is at least 1000 L/h. The feed concentration is 5 g/L and the product must have a concentration of 50 g/L. The cost (in US$) of the processing tank, pumps and membranes is calculated using equations 4, 5 and 6, respectively:

$$C_{Tank} = 120 \cdot V^{0.53} \qquad (4)$$

$$C_{Pump} = 2590 \cdot W^{0.79} \qquad (5)$$

$$C_{Membrane} = 1000 \cdot A^{0.90} \qquad (6)$$
where V is the volume in L, W is the power in kW and A is the area in m². The equipment lifetime is 20 years for the tank, 10 years for the pumps and 5 years for the membranes. The electricity cost is 0.07 $/kWh. Using the procedure described above, an optimal solution has been found. In Figure 2, the objective function, the total hourly cost, is plotted versus the number of modules. The relaxed optimum is found at 960.6 modules, where the total hourly cost is 33.1 $/h.
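As a quick consistency check, evaluating equations (4)-(6) at the optimal design reported below reproduces the capital-cost total quoted later in the text:

```python
# Equipment costs from eqs. (4)-(6) at the reported optimal design
# (V = 6250 L, pump powers 8.74 kW and 33.89 kW, A = 233.92 m^2).
tank      = 120.0 * 6250.0 ** 0.53
feed_pump = 2590.0 * 8.74 ** 0.79
circ_pump = 2590.0 * 33.89 ** 0.79
membranes = 1000.0 * 233.92 ** 0.90
print(round(tank + feed_pump + circ_pump + membranes))
# ~204,000 US$, in agreement with the $204,141 total quoted in the text
```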
Figure 2. Objective function.
Figure 3. Optimal design.
Figure 4. Optimal operation.
Figure 5. Retentate concentration.
For the optimal design (Figure 3), the processing tank volume is 6250 L, the feed pump power is 8.74 kW, the circulation pump power is 33.89 kW and the 961 modules involve a total membrane area of 233.92 m². For the optimal operation (Figure 4), each batch is 5.00 h long, comprising 2.97 h of filtration time followed by 2.03 h of cleaning. This cleaning time is usual in the food industry and involves rinsing, acid cleaning and basic cleaning stages. During the filtration time, the work pressure should be linearly decreased from 258 to 210 kPa. The evolution of the retentate concentration can be seen in Figure 5. The permeate flow rate (Figure 6) decreases due to the decrease in the work pressure and, more importantly, due to the presence of membrane fouling. This phenomenon is reflected in the increase in the membrane resistance R_M (Figure 7).
Figure 6. Permeate flow.
Figure 7. Membrane resistance.
The contribution to the total hourly cost of 33.1 $/h is 12% capital and 88% operating costs, respectively. A detailed breakdown of the total capital cost, in total $204,141, is represented in Figure 8, where the membrane cost is highlighted. Operating costs are $146.46 per batch (Figure 9), the cleaning cost being the most significant percentage.
Figure 8. Capital cost breakdown.
Figure 9. Operating cost breakdown.
6. Conclusion This work addresses the optimal design and operation of a batch protein ultrafiltration plant. The dynamic optimisation procedure adopted identifies simultaneously the optimal design parameters and operating policy of the installation. It should be emphasised that the approach can be directly applied to other ultrafiltration processes. This is the first work in which a formal dynamic optimisation methodology is applied to batch ultrafiltration.
7. References
Belhocine, D., Grib, H., Abdessmed, D., Comeau, Y., Nameri, N., 1998, J. Membrane Sci., 142, 159.
Cheryan, M., 1998, Ultrafiltration and Microfiltration Handbook, Technomic, Lancaster.
Furlonge, H.I., Pantelides, C.C., Sørensen, E., 1999, AIChE J., 45, 781.
Georgiadis, M.C., Rotstein, G.E., Macchietto, S., 1998, AIChE J., 44, 2099.
Kvamsdal, H.M., Svendsen, H.F., Hertzberg, T., Olsvik, O., 1999, Chem. Eng. Sci., 54, 2697.
Liu, C. and Wu, X., 1998, J. Biotechnol., 66, 195.
Process Systems Enterprise Ltd., 2001, gPROMS Advanced User Guide, London.
US Filter, 2002, Ultrafiltration Systems for Wastewater Treatment, Palm Desert.
Zeman, L.J. and Zydney, A.L., 1996, Microfiltration and Ultrafiltration: Principles and Applications, Marcel Dekker, New York.
Process Intensification through the Combined Use of Process Simulation and Miniplant Technology Dr. Frank Heimann, GCT/A - L540, BASF AG, 67056 Ludwigshafen, Germany
Abstract Various approaches to process intensification can be taken at various levels. At the lowest level, it may be possible to optimise basic physical, thermodynamic and chemical processes, for example, by changing geometries and surface structures, or using catalysts, etc. At the next level, possibilities include the use of cleverly designed plant items such as spiral heat exchangers or centrifugal reactors. Finally, at the highest or process level, improvements may involve carrying out several unit operations simultaneously in a single piece of equipment or making specific process modifications. This paper uses three examples (extractive distillation, reactive distillation column, steam injection) to demonstrate the breadth of possibilities for intensifying chemical processes. The processes were all developed with the help of process simulation and verified experimentally in laboratory-based miniplants. The information gained was essential to the successful design and operation of production-scale plant.
1. What is Process Intensification?
Often, when examples are given for process intensification, at first there are only examples in which several unit operations are combined in one piece of equipment. The fact that process intensification is more complex than this is clarified by the definition of Stankiewicz and Moulijn (2000): process intensification is any chemical engineering development that leads to a substantially smaller, cleaner and more energy-efficient technology! Intensification measures can thus be carried out at different levels, which differ in their degree of complexity. The bandwidth extends from the simplest level, the level of underlying physical and chemical processes (e.g. improving heat transfer by the choice of geometry), to the next, more complex level of equipment and machines (e.g. intensification by an optimum construction design), and on to the most complex level, the process level, in which several unit operations can be combined in one piece of equipment. The examples listed here cover all the levels of complexity. Before these examples are handled in detail, there will be a clarification of what characterises miniplant technology.
2. Characteristics of Miniplant Technology (Heimann, 1998)
Miniplants are complete systems for process development at laboratory scale, i.e. typical volumes lie in the range from 0.5 to max. 5 l. The throughputs are
correspondingly small, at approx. 100 g/h to max. 1-2 kg/h. It should be noted here that the miniplant does not represent a true-to-scale, miniaturised copy of a production plant. It is much more the case that the functions of the future production plant are simulated in a representative manner. Operation of a miniplant is generally fully automated using a process control system. All of the process steps are integrated in this. It is especially necessary to simulate all important material recycles (e.g. solvents or catalysts). Ultimately, the miniplant provides all the information necessary to scale the production process up from the miniplant. Another important aspect of miniplant technology is that the construction design of the equipment and machines is selected in such a way that operation can be carried out under defined process engineering conditions. In this way, general modelling and simulation performed simultaneously with testing is possible, and thus the foundation is created for an increase in the scale of the equipment. This will be clarified using column packings as an example.
[Figure 1 plot: number of theoretical stages per metre vs. F-factor (Pa^0.5), with curves at 50, 100, 400 and 1013 mbar.]
Figure 1. Separation efficiency with the chlorobenzene/ethylbenzene system.
The photo in Fig. 1 shows a packing at miniplant scale with 5 cm diameter. The separating efficiency of these miniplant column packings is measured with defined test mixtures. The thermodynamic properties of these test systems, e.g. the vapour/liquid phase equilibrium, are known exactly. The separating installations can be calibrated with the use of these test mixtures. The graph in Fig. 1 shows a separating efficiency measurement, in which the number of theoretical stages per metre is plotted against the vapour load in the form of the F-factor. When the column packings are used with actual material systems, reference can be made to this calibration. Miniplant columns with calibrated internal fittings then make it possible to scale the equipment up directly from the miniplant scale to the production scale. This offers the advantage of fast and cost-effective process development. Miniplant technology is thus an ideal tool to verify experimentally process concepts selected with a view to process intensification. This will be demonstrated using three examples.
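As a side note on the abscissa of Fig. 1: the F-factor is the standard vapour-load measure for packings (this definition is general distillation practice rather than something stated in the text),

$$F = w_G \sqrt{\rho_G} \qquad [\mathrm{Pa}^{0.5}]$$

where $w_G$ is the superficial vapour velocity and $\rho_G$ the vapour density, so that equal F-factors at the different column pressures shown correspond to comparable hydrodynamic loading.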
3. Examples
3.1. Example "Extractive rectification"
Extractive rectification is an example of process intensification at the process level. In a process chain consisting of reaction, precipitation and centrifuging, a mother liquor develops which contains an alcohol, a chlorinated hydrocarbon (abbreviated as CKW) and a by-product formed during the reaction. The mother liquor must be separated into the individual components in the simplest possible way by a central processing unit. This means that the two solvents, CKW and alcohol, must be recovered in a high degree of purity in order to recycle them into the process. At the same time, the by-product must be separated from the two solvents. First, a simple process concept was sought and developed using simulation calculations. The table below in Fig. 2 shows some thermodynamic data for the alcohol/CKW solvent system. The boiling temperature increases from the by-product, to the alcohol and then to the chlorinated hydrocarbon. At the same time, it must be noted that the alcohol and the chlorinated hydrocarbon form an azeotrope. This azeotrope prevents the two solvents from being separated in a single rectification column.
Azeotropes of the R-OH / CKW solvent system (at 1.013 bar):

comp. 1      comp. 2      x1 (kg/kg)   x2 (kg/kg)   T (°C)   type
H2O          by-product   0.07         0.93         68       hetero-az.
H2O          CKW          0.84         0.16         79       hetero-az.
H2O          R-OH         0.55         0.45         88       hetero-az.
R-OH         CKW          0.19         0.81         93       azeotrope
H2O          -            -            -            100      -
by-product   -            -            -            112      -
R-OH         -            -            -            118      -
CKW          -            -            -            121      -
Figure 2. Azeotropic compositions and boiling temperatures of the material system.
Extraction comes into play here. By adding another component, in this case water, three additional hetero-azeotropes form. Through the decomposition into an aqueous and an organic phase, an additional material separating effect is obtained, which can be used for processing the mixture. This becomes clear from the following diagram (see Fig. 3), which shows the rectification column developed on the basis of the process simulation calculations. According to the boiling temperatures of the three hetero-azeotropes in the column, three side streams can be drawn off and each sent to a phase separator, in which each azeotrope breaks down into an aqueous phase and an organic phase. The organic phases consist of the purified solvents and/or the concentrated by-product. The aqueous phases are each returned to the column. At the column bottom, a waste water stream then occurs which is disposed of and/or in part returned to the head of the column.
Figure 3. Extractive rectification column.
Extractive rectification offers another advantage. The chlorinated hydrocarbon can hydrolyse, i.e. split off hydrogen chloride, which leads to corrosion in the column. If aqueous sodium hydroxide solution is used instead of water as the extraction medium, it is possible to neutralise the hydrogen chloride which develops. The process concept was confirmed experimentally in a miniplant column and the foundations were laid for an increase in scale to production scale. Open questions tested in the miniplant related to the correct description of the vapour/liquid phase equilibria in the simulation, the fluid dynamic behaviour in the presence of the two liquid phases, and the corrosion problems. In the meantime, the production column has been put into operation successfully. The specifications required for the alcohol and the chlorinated hydrocarbon are achieved and no corrosion occurs.
3.2. Example "Reaction column"
In an equilibrium reaction, an educt is converted using aqueous hydrochloric acid as solvent. In this process, acetone develops in addition to the product. The disadvantage of this reaction is that it is an equilibrium reaction in which the equilibrium lies greatly on the side of the educt. What is advantageous is that the equilibrium state is reached quickly. By removal of acetone, the equilibrium can be moved towards the product side. The fact that no thermal stress can be put on the product also has to be considered with all process concepts selected for removal of the acetone. Among the alternative solutions tested, a reaction column is the most elegant possibility. Since the chemical equilibrium occurs very quickly in this example, it
mainly offers the advantage that, in parallel to the reaction, distillative separation can also be carried out. It was also possible to prove this process concept experimentally using miniplant technology. A bubble tray column, 30 mm in diameter, was used as the miniplant column. The advantage of the bubble tray column is that there is a hold-up on each tray, giving the advantage that the residence time in the column can be varied by variation of the feed flow into the column. In this way, the foundation was laid for the increase in scale, using thermodynamic and fluid dynamic simulation, to the production column. This has a diameter of 600 mm and was manufactured of glass due to the corrosive nature of the aqueous hydrochloric acid.
[Figure 4: miniplant column (30 mm diameter, 25 bubble trays, 4 min residence time) versus production column (bubble tray column of glass with PTFE bubble caps); tested: residence time, number of trays and energy consumption, optimum position of the feed tray.]
Figure 4. Increase in scale of the reaction column from miniplant to production scale.
In this example of process intensification at the level of equipment, too, the technical realisation was completed successfully. The column has been in operation for 2 years.
3.3. Example "Steam stripping"
The example explained in the following illustrates process intensification at the level of basic chemical and physical processes. Again, it concerns an equilibrium reaction, in which the equilibrium lies strongly on the educt side and a slightly volatile component has to be removed from the system to displace the equilibrium. However, the equilibrium does not establish itself spontaneously, so the use of a reaction column is not possible. A trade-off therefore arises between high yield and the duration of thermal stress. Different process technology alternatives for removal of the low-boiling fraction were computer-simulated. Direct discharge of steam offers the most favourable option, one which protects the product.
A problem in the reduction of scale from production to the miniplant arose in the simulation of the discharge of steam, since the volume and the cross-sectional area change by different orders of magnitude. This means it is not possible to keep both the steam load and the steam introduction duration constant during the reduction in scale from production to the miniplant. An elegant solution is to carry out separate experiments regarding the influence of thermal stress duration and the influence of fluid dynamic load. In addition, the question of simulating the discharge of steam at the miniplant scale was of central importance. In order to achieve the finest possible distribution of steam with the greatest steam bubble surface, it was planned to use special steam discharge valves, which were constructed on the basis of a fluid dynamic simulation in the miniplant (see Fig. 5). In this example, it was again possible to successfully verify the process concept experimentally using miniplant technology. This production plant is already operating successfully.
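The scaling conflict can be made explicit with a simple geometric-similarity argument (an illustration under the assumption of similar geometries): liquid volume and cross-section scale differently with the characteristic length $L$,

$$V \propto L^{3}, \qquad A \propto L^{2} \;\Rightarrow\; \frac{V}{A} \propto L,$$

so if the steam load (volume flow per cross-sectional area) is kept constant on scale-down, the specific steam input per liquid volume grows as $1/L$, and the introduction duration required for the same stripping effect changes accordingly; both quantities cannot be preserved simultaneously.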
[Figure 5: production plant — 3 valves, 80 mm diameter, with 216 holes of 3.5 mm each, on a 6.3 m³ reactor; miniplant — 1 pipe, 10 mm diameter, with 5 holes of 3 mm.]
Figure 5. Reduction in scale of the steam discharge from production to miniplant scale.
4. Closing Remarks
Worldwide competition and the necessity of protecting natural resources and minimising environmental stress will continue to play a central role in the development of new processes. The examples presented above show that cost-effective as well as sustainable solutions can be found using process intensification. Miniplant technology is an important tool for quickly and cost-effectively verifying, by experiment, solutions that have been proposed with a view to process intensification.
5. References
Heimann, F., 1998, CIT 9, 1192.
Stankiewicz, A. and Moulijn, J., 2000, Chem. Eng. Progress 1, 22.
A Constraint Approach for Rescheduling Batch Processing Plants Including Pipeless Plants
W. Huang and P.W.H. Chung
Department of Computer Science, Loughborough University, Loughborough, Leicestershire, LE11 3TU, UK
Abstract
In the process industries, batch plants are attracting attention because of their suitability for producing small-volume, high-value-added commodity chemicals. Pipeless plants have also been developed and built to increase plant flexibility. Unexpected events, such as the failure of a processing unit, sometimes happen during operations. To avoid risk and to utilise the remaining resources, it is important to reschedule the production operation quickly. The constraint model in BPS has been extended to include constraints for rescheduling. These additional constraints are described in this paper and a case study is used to demonstrate the feasibility of the approach.
1. Introduction
Efficient scheduling of batch plants is required since it harmonises the entire plant operation to achieve the production goals. The scheduling of batch plants is challenging, especially for pipeless batch plants, where the plant layout has to be considered as well. Many researchers addressing these issues use the Mixed Integer Linear Programming (MILP) approach, where an elaborate mathematical model is required to describe a problem. Kondili et al. (1993) suggested a general algorithm for short-term batch scheduling formulated in MILP using a discrete time representation. Pantelides et al. (1995) presented a systematic and rigorous approach for short-term scheduling of pipeless batch plants. However, as the complexity of a plant increases, scheduling problems become harder to formulate in MILP. The Constraint Satisfaction Technique (CST) has been used to solve problems ranging from design to scheduling. CST does not require elaborate mathematical formulae but requires a problem to be stated in terms of its constraints. Das et al. (1998) investigated a simple but typical production scheduling problem and found it possible to develop a CST-based scheduling solution within very modest computation time. Huang and Chung (1999) developed a constraint model to represent a common class of traditional batch plant scheduling problems, and a simple scheduling system, Batch Processing Scheduler (BPS), was produced based on this model. Das et al. (2000) compared the approach developed by Huang and Chung (1999) with established mathematical programming approaches and concluded that it is relatively easy to represent complicated batch plant scheduling problems using the constraint-based approach. Huang and Chung (2000) proposed a constraint model to represent scheduling problems in pipeless batch plants and improved the scheduling system BPS accordingly. Unexpected events, such as the failure of a processing unit, sometimes happen during plant operations. These events will make the original schedule invalid. To avoid risk and to utilise the remaining resources, it is important to reschedule the production operations quickly. However, few papers have reported on the investigation of rescheduling of chemical batch plants. Ko et al. (1999) proposed a rescheduling approach for pipeless
plants. Their system can overcome unexpected events by adjusting the starting times of reactions and re-determining the sequence of equipment to be processed. Although the paper took plant layout into account, transportation time was ignored, which means the generated schedule would not be feasible in practice. This paper reports on the rescheduling capability added as an extension to BPS. The extended constraint model can be applied to solve rescheduling problems for both traditional and pipeless batch plants, where the transfer time between stations is considered.
2. Constraint Model for Rescheduling
2.1. A typical pipeless plant process
The constraint model for rescheduling is described with the help of a typical pipeless plant process. The production process is shown in Fig. 1. Circles and rectangles denote states and jobs, respectively. The process starts with clean vessels being charged with A and B in appropriate amounts. The charged vessels are then taken to a blender, where the content is homogenised in a blending operation to form material AB. Following this, AB reacts with a third raw material C to form another intermediate material, Int. Three final products P1, P2 and P3 are formed by blending Int. with three different additives A1, A2 and A3, respectively. The corresponding products P1, P2 and P3 are discharged through a discharge station, and the empty dirty vessels must be cleaned before they can be used again. Finally, the clean vessels must move back to the start point to wait for A to be charged. The plant layout is shown in Fig. 2.
Fig. 1: Production processes of a pipeless batch plant.
2.2. Rescheduling constraints
Unexpected events can be formulated as additional constraints on the original problem. The goal of rescheduling a problem is to find a feasible solution such that what has been done before the failure is not changed and the broken-down resource is not used during its down time. In order to achieve this, the failed resource is allocated to a "breakdown" activity in BPS.
$$BA \leftarrow F_r,\quad ST(BA) = T_s \ \text{and} \ ET(BA) = T_e \qquad (1)$$
where BA and F_r represent a "breakdown" activity and the failed resource, respectively. T_s and T_e represent the start and end times of the failure period of the broken-down resource. The above formula shows that the failed resource is allocated to the "breakdown" activity to ensure that other jobs cannot use it. Its start and end times (ST and ET) are equal to T_s and T_e, respectively.

$$\text{If } ET(J_i) < ST(BA) \text{ then } ST^*(J_i) = ST(J_i),\ ET^*(J_i) = ET(J_i) \ \text{and} \ S_p^*(J_i) = S_p(J_i) \qquad (2)$$
This formula means that if, in the original solution, a job ended before T_s, which is equal to the start time of the "breakdown" activity introduced in rescheduling the problem, the start and end times of the job as well as the selected resource remain unchanged. ST*(J_i) and ET*(J_i) represent the job's corresponding start and end times, and S_p* represents the corresponding resource in the rescheduled solution. Essentially, the above constraints "freeze" the part of the original schedule that has already happened.
$$\text{If } ST(BA) < ET(J_i) < ET(BA) \text{ then either } ET(BA) < ST^*(J_i) = TV_r \text{, or } J_i \leftarrow S_q \qquad (3)$$
where J_i represents a job that required the failed resource and ended after T_s but before T_e in the original scheduling solution. This formula means that, in the rescheduled solution, either the job is delayed to start after the failure period or the job selects another suitable resource if there is one. The decision is made by the system according to constraint propagation. If the job is delayed, the exact start time TV_r is determined by the system as well. S_q represents another suitable resource for the job.
$$\text{If } ET(BA) < ET(J_i) \text{ then } \{ST^*(J_i) = ST(J_i) \text{ or } ST^*(J_i) = TV_r\} \ \text{and} \ \{S_p^*(J_i) = S_p(J_i) \text{ or } J_i \leftarrow S_q\} \qquad (4)$$
This formula means that if a job ended after T_e in the original solution, in the rescheduled solution either its start time remains unchanged or it is delayed, which is also possible. In the meantime, either the selected resource remains unchanged or the job selects another suitable one. These decisions are made by the system according to constraint propagation. After a problem is rescheduled, the makespan in the new solution is probably extended, because some jobs that required the failed resource during the failure period in the original solution are now delayed. In order to avoid an arbitrary increase of the makespan during rescheduling, an upper bound on the makespan needs to be introduced. The upper bound is set as the original makespan plus the largest possible delay among the delayed jobs. There is a possibility that a delayed job ended just after (i.e. one unit of time after) the start of the failure period in the original solution and its duration is the largest one among all jobs, so the largest possible delay is the failure duration plus the job's duration minus one.

$$M_s^* \le M_s + (T_e - T_s) + D_{max} - 1 \qquad (5)$$
where M_s* represents the current makespan for rescheduling and M_s represents the original makespan. D_max represents the maximum duration among the jobs and (T_e − T_s) is the duration of the breakdown activity, i.e. the failure period. The rescheduling optimisation criterion is either to minimise the makespan or to minimise the number of changes of the start times of all jobs compared with the original solution:
$$\text{If } ST^*(J_i) = ST(J_i) \text{ then } V_i = 0 \text{ else } V_i = 1 \qquad (6)$$

$$N_v = \sum_{i=1}^{n} V_i \qquad (7)$$

$$\min(N_v) \ \text{or} \ \min(M_s) \qquad (8)$$
where V_i is a binary variable that represents whether the start time of a job has changed. If the start time remains the same, then V_i is assigned zero; otherwise it is assigned one. N_v represents the total number of changes, i.e. the sum of the V_i. Min(N_v) means minimising the total number of changes, and Min(M_s) means minimising the makespan.
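Constraints (1)-(4) map naturally onto a modern constraint solver. The sketch below uses Google OR-Tools CP-SAT purely as an illustrative stand-in for BPS: completed jobs are frozen (constraint (2)), the failed resource is blocked by a breakdown interval (constraint (1)), and affected jobs are re-timed or re-allocated by the solver ((3)-(4)); vessel transfer times and the original-station freeze are omitted for brevity, and the job data are a small hypothetical excerpt.

```python
# Rescheduling sketch with a "breakdown" activity, following (1)-(4).
# CP-SAT is used here only as an illustrative stand-in for BPS.
from ortools.sat.python import cp_model

H = 200                                 # horizon, in 0.1 h time units
Ts, Te = 30, 38                         # failure window of Reactor 1
jobs = [  # (name, duration, candidate stations, original start) - assumed
    ("blend",    8, ["Blender1"],             10),
    ("reaction", 8, ["Reactor1", "Reactor2"], 28),
    ("blend2",   5, ["Blender1"],             40),
]

m = cp_model.CpModel()
starts, ends = {}, {}
on_station = {k: [] for k in ["Blender1", "Reactor1", "Reactor2"]}
for name, dur, stations, orig_start in jobs:
    s = m.NewIntVar(0, H, f"s_{name}")
    e = m.NewIntVar(0, H, f"e_{name}")
    starts[name], ends[name] = s, e
    lits = []
    for st in stations:                 # choose exactly one suitable station
        use = m.NewBoolVar(f"{name}@{st}")
        iv = m.NewOptionalIntervalVar(s, dur, e, use, f"iv_{name}_{st}")
        on_station[st].append(iv)
        lits.append(use)
    m.AddExactlyOne(lits)
    if orig_start + dur <= Ts:          # constraint (2): freeze finished jobs
        m.Add(s == orig_start)

# Constraint (1): the breakdown activity occupies the failed resource.
on_station["Reactor1"].append(m.NewIntervalVar(Ts, Te - Ts, Te, "breakdown"))
for ivs in on_station.values():
    m.AddNoOverlap(ivs)

makespan = m.NewIntVar(0, H, "makespan")
m.AddMaxEquality(makespan, list(ends.values()))
m.Minimize(makespan)                    # criterion Min(M_s) of (8)

solver = cp_model.CpSolver()
if solver.Solve(m) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in starts:
        print(name, solver.Value(starts[name]), solver.Value(ends[name]))
```

The alternative criterion Min(N_v) of (8) would simply replace the objective with the sum of reified "start changed" literals.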
3. Case Study
The process described above is used as a case study to demonstrate the feasibility of the proposed model. The production demands are shown in Table 1, and time in the schedule is defined in terms of unit time, which is 0.1 hour. Eight processing stations of six distinct types are considered; the relevant details are shown in Table 3. The plant layout in Fig. 2 is used. The tracks between stations are divided into segments: 8 vertical tracks (V0-V7) and 4 horizontal tracks (H0-H3). The moving time for a vessel to pass each track segment is 0.1 hour. Buffers (B1-B4) are placed at the cross points, where vessels can wait for the next track segment to become available. The information on the failed resource is provided by the user, as shown in Table 2. There are two rescheduling criteria in BPS: 1) minimise the total number of changes to the original schedule; 2) minimise the makespan. Chart 1 shows the optimal solution found given the criterion to minimise the makespan. It illustrates that time and resources are allocated properly considering the failure of Reactor 1. The failure period is shown by the shaded area. The chart also indicates that Reactor 1 was not allocated to any jobs during the failure period. Compared with the original solution obtained (not shown here), what happened before the breakdown of Reactor 1 was unaffected. BPS also successfully obtained correct solutions when the criterion to minimise the number of changes was used. Again, results are not shown here due to space limits.

Table 1: The production demands.

Products   Amounts (m³)   Delivery time (unit)   Suitable vessels
P1         20             180th                  V1, V2
P2         20             210th                  V3
P3         10             240th                  V1, V2

Table 2: Failed resource.

Failed Resource   Start of Failure (unit time)   End of Failure (unit time)
Reactor 1         30th                           38th

Table 3: Duration and suitable stations for each job.

Job No.   Jobs                     Duration (unit time)   Suitable Station
1         Charge A                 5                      Charger 1
2         Charge B                 5                      Charger 2
3         Blend A+B                8                      Blender 1, Blender 2
4         Reaction                 8                      Reactor 1, Reactor 2
5         Blend Additives + Int.   5                      Blender 1, Blender 2
6         Discharge                5                      Discharger
7         Cleaning                 5                      Cleaner
Fig. 2: The layout of a pipeless batch plant.
Chart 1: The optimal solution by minimizing the makespan.
4. Conclusion
In this paper, a constraint model for rescheduling batch processing plants, including pipeless plants, is proposed. This progress is achieved by adding a rescheduling capability as an extension to the Batch Processing Scheduler (BPS), in which unexpected events, such as the failure of a resource, are treated as additional constraints and the problem is rescheduled using the original solution as a guide. Essentially, the constraints "freeze" the part of the original schedule that has already happened and stop the failed resource from being used during the breakdown period. The feasibility of the model has been demonstrated by a case study.
5. References
Das, B.P., Shah, N. and Chung, P.W.H., 1998, "Off-line scheduling of a simple chemical batch process production plan using the ILOG scheduler", Computers and Chemical Engineering, Vol. 22, pp. S947-S950.
Das, B.P., Shah, N., Chung, P.W.H. and Huang, W., 2000, "Comparative Study of Time-Based and Activity-Based Production Scheduling", Hungarian Journal of Industrial Chemistry, Vol. 28, pp. 7-10, HU ISSN: 0133-0276.
Huang, W. and Chung, P.W.H., 1999, "Scheduling of Multistage Multiproduct Chemical Batch Plants using a Constraint-Based Approach", Computers and Chemical Engineering, Vol. 23, pp. S522-S514, ISSN: 0098-1354.
Huang, W. and Chung, P.W.H., 2000, "Scheduling of Pipeless Batch Plants Using Constraint Satisfaction Techniques", Computers and Chemical Engineering, Vol. 24, No. 2-7, pp. 377-383, ISSN: 0098-1354.
Ko, Daeho, Seonghoon Na, Il Moon and Min Oh, 1999, "Development of a Rescheduling System for the Optimal Operation of Pipeless Plants", Computers and Chemical Engineering, Suppl., pp. S523-S526.
Kondili, E., Pantelides, C.C. and Sargent, R.W.H., 1993, "A General Algorithm for Short-Term Scheduling of Batch Operations - I. MILP Formulation", Computers Chem. Engng, Vol. 17, No. 2, pp. 211-227.
Pantelides, C.C., Realff, M.J. and Shah, N., 1995, "Short-Term Scheduling of Pipeless Batch Plants", Chemical Engineering Research and Design, Vol. 73, No. A4, pp. 431-444.
Integrated MINLP Synthesis of Overall Process Flowsheets by a Combined Synthesis / Analysis Approach
N. Irsic Bedenik(1), B. Pahor(2) and Z. Kravanja(1)
(1) Faculty of Chemistry and Chemical Engineering, University of Maribor, P.O. Box 219, SI-2000 Maribor, Slovenia, e-mail: [email protected] and [email protected]
(2) Sika, Ltd., Prevale 13, SI-1236 Trzin, Slovenia
Abstract
This paper describes an integrated MINLP synthesis of overall process schemes using a combined synthesis / analysis approach. The synthesis is carried out by a multilevel-hierarchical MINLP optimization of a flexible superstructure, whilst the analysis is performed in an economic attainable region (EAR). The role of the MINLP synthesis step is to obtain a feasible and optimal process structure, and the role of the subsequent EAR analysis step is to verify the MINLP solution and to propose, in a feedback loop, any profitable superstructure modifications for the next MINLP. The main objective of the integrated synthesis is to exploit the interactions between the reactor network, the separator network and the remaining part of the heat/energy integrated process scheme.
1. Introduction
The concept that a reaction/separation network is the kernel of a chemical plant, which the other process units have to support, gives rise to the hierarchical strategy of process synthesis (Douglas, 1988). However, due to the many varied interactions between the subsystems, the only way to explore these interactions integrally is to perform simultaneous synthesis of the overall process scheme using a mathematical programming approach. The most efficient way to make discrete and continuous decisions simultaneously is to apply mixed-integer nonlinear programming (MINLP). Even with the important advantages of this approach, e.g. optimality and feasibility of the solution, the direct synthesis of overall process schemes by MINLP is still limited to small and medium-sized problems by its complexity. The most important deficiency of the superstructure approach is that the search for the optimal solution is restricted to the abundance of proposed alternatives. On the other hand, model complexity, nonlinearities and nonconvexities may give rise to a poor optimal solution even when good-quality alternatives are proposed in the superstructure. In order to upgrade the capabilities of the MINLP approach, a multilevel MINLP is applied, combined with a sequential analysis of intermediate MINLP solutions in order to propose an increasingly profitable evolutionary superstructure. This paper presents an integrated methodology for MINLP process synthesis in an equation-oriented environment, upgraded by a sequential geometric analysis of the economic attainable region (EAR). This synthesis is performed within the systematic framework of the multilevel-hierarchical MINLP synthesis of process schemes (Kravanja and Grossmann, 1997), from the reactor network (Level 1) to the reactor/separation network (Level 2) and, finally, to the overall heat integrated process scheme (Level 3). The combined MINLP synthesis and EAR analysis approach was first applied to the synthesis of pure reactor networks at
Level 1 (Pahor et al., 2000) and, more recently, to the synthesis of reactor/separator networks at Level 2 (Irsic Bedenik et al., 2001). Now it has been extended to the synthesis of overall process schemes at Level 3.
2. Combined Multilevel MINLP Synthesis and Economic Attainable Region Approach
The main feature of the MINLP synthesis is to obtain appropriate trade-offs between continuous/discrete cost decisions, whilst the main feature of the attainable region (AR) technique is to identify an optimal reactor network structure by which the maximal conversion, selectivity, yield, etc. can be achieved. The drawback of the MINLP is that it can converge to a poor locally optimal solution, and the drawback of the geometric AR approach is that it is limited to 2-D or at most 3-D problems. The main idea of the combined MINLP/AR approach is to upgrade the capabilities of both approaches: the MINLP synthesis is performed to obtain appropriate economic trade-offs for multi-D problems, and analysis in the AR is carried out to prescreen the solution space, to verify the optimal MINLP solution and, if there exists any extension to the optimal MINLP solution, to propose profitable modifications to the superstructure. In this way a flexible concept of compact superstructures is introduced, which makes it possible to solve larger synthesis problems and to obtain superior solutions. Since the conventional AR is constructed in the concentration space using the already mentioned technological criteria, it does not directly reflect process economics and, consequently, the obtained solutions may lie far from the true economic trade-off solution. The same criterion is used in both steps in order to restore consistency between the synthesis and analysis steps. An economic, rather than technological, criterion is used to define the objective function in the MINLP step; therefore, the conventional attainable region was modified and transformed into an economic one by constructing 2-D projections in economic, rather than concentration, spaces. Besides the reaction rate vector R(x), which is used to construct a concentration attainable region (CAR), the economic objective function from the MINLP model is now used to transform the CAR into the economic attainable region (EAR). E.g., the profit function:
$$P(R(x)) = \sum_{p \in prod} c_p F_p - \sum_{r} \left(F_r^{in} - F_r^{out}\right) c_r - Q_{heat}\,c_{heat} - Q_{cool}\,c_{cool} - V_{ret}\,c_{ret} - \sum_i C_{sep,i} - C_{HEN} \qquad (1)$$
gives rise to a 2-D EAR projection of the annual profit vs. the retention time along the reactor network, and the molar profit function:

$$P^{M}(R(x)) = P / F_{p,desired} \qquad (2)$$
gives rise to a 2-D EAR projection of the profit per mol of product produced (P^M in $/mol) vs. the flow rate of desired product produced along the reactor network (F_{p,desired}). The former EAR is more suitable for direct verification of the MINLP solution in the AR, and the latter is more suitable for analysis of the MINLP solution in order to propose profitable modifications to the superstructure. It should be mentioned that many other useful 2-D projections can be generated. In this way, the conventional trajectories in the CAR are modified to reflect the economic performance of the system: at Level 1 the trajectories reflect the economic performance of the reactor network (reactor efficiency curves), at Level 2 they reflect the mutual performance of the reactor/separator network
(reactor/separator efficiency curves), whilst at Level 3 they resemble complete efficiency curves and reflect the economic performance of the overall process. Level 3 is thus an integrated synthesis of the entire chemical process based on information from Level 2. The combined approach is performed until the analysis in the EAR suggests no profitable extensions. The DAE formulation of the process models is exact, and its integration into the process superstructure is relatively easy. The combined MINLP synthesis and EAR analysis approach introduces a certain degree of flexibility and adaptability to the modular flowsheet superstructure which, in this way, becomes more compact, so that the MINLP synthesis can be performed more simply. This in turn favours simpler and, hence, more realistic process flowsheets, where economy of scale, connectivity and heat integration can be appropriately accounted for in the selected units. The multilevel MINLP synthesis and EAR analysis approach is implemented in the MIPSYN (Mixed-Integer Process Synthesizer) computer package, and the analysis in the EAR in MATHCAD 2000 Professional.
[Figure 1 legend: single-choice and multiple-choice stream splitters and mixers at the interconnection nodes.]
Figure 1: Reactor/separator general superstructure.
The multilevel MINLP/EAR is thus performed hierarchically by the following 3-level scheme:
Start: General reactor network superstructure (Fig. 1).
Level 1 - Reactor network synthesis: MINLP with detailed reactor and very simple separator models, with a simultaneous heat integration model (Duran and Grossmann, 1986). EAR - proposes a profitable reactor/separation superstructure.
Level 2 - Reactor/separator network synthesis: MINLP with detailed reactor and detailed separator models, or a statistical separator model based on rigorous simulation; Duran's model for simultaneous heat integration to identify process hot and cold streams. EAR - proposes a final compact superstructure with only the most attractive alternatives.
Level 3 - Reactor/separator/HEN synthesis: detailed reactor and detailed separator models, modified model for the simultaneous synthesis of a heat integrated HEN (Yee and Grossmann, 1990).
Stop: when the analysis in the EAR suggests no profitable extensions.
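Schematically, the loop reads as below; both callables are hypothetical placeholders for the MIPSYN optimisation and the MATHCAD-based EAR analysis, so only the control flow is meaningful:

```python
# Control-flow sketch of the combined multilevel MINLP synthesis / EAR
# analysis scheme. solve_minlp and analyse_ear stand in for the MIPSYN
# models and the EAR projections; both are assumed interfaces.
def run_multilevel_synthesis(superstructure, solve_minlp, analyse_ear,
                             max_iter=10):
    solution = None
    for level in (1, 2, 3):            # reactor -> +separator -> +HEN
        for _ in range(max_iter):
            solution = solve_minlp(level, superstructure)
            extensions = analyse_ear(level, solution)  # 2-D EAR projections
            if not extensions:         # Stop: no profitable extension found
                break
            superstructure = superstructure.extend(extensions)
    return solution
```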
3. Case Study A case study of allyl chloride production described in Pahor et al. (2001) is taken to demonstrate the multilevel MINLP/EAR approach. Two consecutive reactions are
170 A + C I 2 — ^ B +HCI (principal one) and B4-CI2 —^^^^C + HCland one parallel reaction A + CI2 —'^^—> D (A propene, B allyl chloride, C 1,3 - dichloro propene, D 1,2 -dichloropropane;ki,o= 1.5 • 10"^ s"\ k2,o = 4.4-10^ s'\ k3,o= 100 lmor^s'\ and Ex = 66 271J/mol, £2 = 99 410 J/mol, £3 = 33 140 J/mol). The reaction rate vector R(x) for components A, B, C, D and CI2 is: (3) ^ 2 ^ B ' ^Cl »
^ 3 ^ A • ^Cl » ~ ^ l ^ A ' ^Cl ~ ^ 2 ^ B ' ^Cl ~ ^ 3 ^ A ' ^Cl J
The objective is to maximize annual profit at a fixed production of allyl chloride (7.560 mol/s). 3.1 Level 1: Reactor network synthesis The general reactor network superstructure was first used without intermediate separation and with ad hoc cost for the final separation. The profit was 17.154 M$/a with one PFR. Analysis in the EAR (Fig. 2a) shows that the actual PFR trajectory is significantly below the horizontal boundary of the AR which would be obtained when CB is kept zero. This observation indicates that additional profit can be gained if suitable intermediate separation of B can be found. When the MINLP synthesis was performed with intermediate separation (IS) and no cost for IS was charged to the profit, the optimal topology resuhed in a set of three reactors, the first was PFR and the next two were recycle PFRs with a profit of 18.773 M$/a. The analysis in the EAR (Fig. 2b) shows reactor trajectories at higher molar profit.
0.756
1.51
2.27
3.02
3.78
4.54
5.29
6.05
6.80 7.56
Molar flow rate of component B (mol/L)
Figure 2a: Reactor trajectory without IS.
434
5.»
t.MI
7Si
Molar flow rate of component B (mol/L)
Figure 2b: Reactor trajectories with IS.
3.2 Level 2: Reactor/separator network synthesis MINLP was repeated with a more detailed statistical separation model and separation cost based on rigorous simulation. The solution without IS of allyl chloride yielded a profit of 19.109 M$/a gained in one PFR. When the solution was analysed in the EAR all IS ahematives was found to be unattractive (Fig 3a). The only possible improvement could be achieved on the final separation. Six final separation alternatives (FS) were analysed in the EAR (Fig. 3b) to suggest new superstructure. The new superstructure comprised one PFR, no IS and the best four final separation alternatives. The optimal MINLP solution yielded a profit of 20.079 M$/a and the alternative with the least expensive condensation of propene was selected for the final superstructure.
171 0.0920
0.0920 ^
0.0893
^
0.0866
...^
SS" 0.0903 0 ~" —
>%
5 0.0839 u •O 0.0812
^^^^^-^^^:^:^'~^'^^^
O- 0.0785 e •3 0.0758
au 0.0731
-
•
-
.
_
.
_
19 109 S 18 935 ^ 18 456*5 18 310 >-
^
Legend: No IS IS is prefractionation with inert component 2 0.0677 IS is prefractionation with flash 0k - • - IS is conventional prefractionation 0.0650 0.75 1.51 2.27 3.02 3.78 4.54 5.29 6.05 6.80 7.56 Molar flow rate of component B (moi/s)
^
0.0704
Figure 3a: Reactor/intermediate separator trajectories.
VI
-^ 0.0886 n ts ,g 2 0. 0 Z, 0
0.0869 0.0852 0.0835 0.0818
" " * • " ' • " •
—
— •—.. ^
" ' •
in
""""'""* — — -.
aZ 0.0801
20 079 19 896 ' ^ 19 6 7 1 ^ 19 350 ^ C 18 899 2 18 467
e«
?" 0.0784
aS
0.0767 0.0750
0.75 1.51 2.27 3.02 3.78 4.54 5.29 6.05 6.80 Molar flow rate of component B (mol/s)
Figure 3b: Reactor/final separator trajectories.
3.3 Level 3: Reactor/separator/HEN network synthesis More detailed heat integration with different utilities (cold water, refrigerator, steam), restrictions and forbidden matches was carried out simultaneously with the synthesis of the reactor/separator/HEN network. Reactors were modelled as adiabatic reactors due to practical constraints. 0.0820 Without heat integration: The MINLP c.=0 0.0813 was first performed in one PFR as "o 0.0806 suggested at Level 2 and the solution ^ 0.0799 ^ 0.0792 yielded a profit of 17.791 M$/a. The 2 0.0785 selectivity of allyl chloride production a .:f"-N. u 0.0778 with respect to the consumption of CI2 is "o 0.0771 very low (Table la) not only because the 0.0764 0.0757 number of passes of propene through the 0.0750 "0 0.756 iM 2J7 3^2^jr^.54 5.29 6.05^780 7.56 rcactor is Small due to very high H MIPSYN 1
0t
Molar flow rate of component B (mol/L)
Figure 4a: Reactor/separator trajectory without heat integration.
COUSUmptioU o f U t i l i t i e s b u t alsO bcCaUSC
of the very wide temperature range in the reactor. (See the rapid decrease of the molar profit in Fig. 4a.)
Simultaneous heat integration: The MINLP was repeated for reactors with a side feed stream in order to decrease the temperature range in the reactor. The profit was increased to 20.393 M$/a by a two PFR with a side cold shot. The selectivity significantly increased (from 0.81 to 0.95). Table 1: MINLP solutions at Level 3. Option Without heat integration Profit, M$/a 17.791 HU, M$/a 0.221 CU, M$/a 0.271 HEN, M$/a / Selectivity 0.81 5.24 N° passes IPFR Topology
Heat integration 20.393 0 0.262 / 0.95 8.44 2 PFR, side feed
Simultaneous HEN synthesis 20.103 0.011 0.250 0.240 0.89 8.40 IPFR
172 Simultaneous HEN synthesis: The MINLP synthesis was carried out by the use of a onestage synthesis model for HEN (Modification of Yee's model) with 5 hot streams(2 segments for one non-isothermal process stream, 2 hot isothermal process streams, one hot utility stream - steam) and 9 cold streams (4 segments for two non-isothermal process streams, 3 isothermal process streams and 2 cold utilities - cold water and refrigerator). The profit obtained in the 23'^ MINLP iteration was 20.103 M$/a and the molar profit in the EAR (Fig.4b) was significantly increased compared to that of the non-integrated solution. It is interesting that the optimal topology (Fig. 5) comprises only one PFR without the side cold feed in order to increase the temperature driving force in HEN. The selectivity was decreased to 0.89 (Table Ic) and 240 k$/ a of investment cost for HEN was charged to the profit.
0.756
1.51
2.27 3.02 3.78 4.54
5.29 6.05
6.80 7.56
Molar flow rate of component B (moI/L)
Figure 4b: Reactor/separator/ HEN trajectory.
T^< Figure 5: Optimal structure of the reactor/separator/HEN network.
4. Conclusions The main advantage of the integrated synthesis approach is to obtain appropriate tradeoffs between product income and raw materials, and operating and investment costs. In this way, any particular improvement in any process subsystem can have a synergistic effect on any improvement to the others, e.g. processes which recycle any increase of reactor efficiency, reduction of separation costs or integration of heat or energy not only directly increase the profit, but can significantly reduce the consumption of raw material, too. One the other hand neglecting parts of the process cost, e.g. HEN cost, overestimates the profit not only because the cost are not subtracted from the profit but because the reactor efficiency and, hence, the income is overestimated, too. It should be noted that the synthesis by the hierarchical multilevel MINLP/EAR approach is guided by engineering creativity which may give rise to new innovative design solutions.
5. References Douglas, J.M., 1988, McGraw-Hill, New York. Kravanja, Z. & Grossmann, I. E., 1997, Comp. Chem. Engng, 21, S421-S426. Pahor, B., Irsic N., & Kravanja Z., 2000, Comp. Chem. Engng, 24, 1403-1408. Irsic Bedenik, N., Pahor, B. & Kravanja, Z., 2001, Supplementary proceedings of ESCAPE-11, CAPEC, Denmark, 59-64. Duran, M.A., & Grossmann, I.E., 1986, AIChE J, 32, 123-138. Yee, T. F., & Grossmann, I. E., 1990, Computers chem. Engng, 14,1165.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
173
Computer Aided Design of Sty rene Batch Suspension Polymerization Reactors C. Kotoulas, P. Pladis, E. Papadopoulos and C. Kiparissides Department of Chemical Engineering & Chemical Process Engineering Research Institute, Aristotle University of Thessaloniki, P.O. Box 472, 540 06, Greece
Abstract The present paper deals with the development of a comprehensive CAD tool for a styrene free-radical batch suspension polymerization reactor. The gPROMS© simulation platform is employed for describing the dynamic behavior of the batch polymerization system. The kinetic model accounts for both thermal and chemical initiation mechanisms, thus, the model can be employed over an extended range of polymerization temperatures. A generalized free-volume model is derived to account for diffusion-controlled reactions (e.g., termination, propagation and chemical initiation). The overall reactor model includes also appropriate dynamic energy balances for the reaction medium and the coolant in the reactor jacket. An equation of state model is employed to calculate the concentration of the various species (e.g., monomer, solvent, H2O) in the various phases present in the reactor. A Windows^^ user-friendly interface, based on DELPHI progranmiing language, has been developed to link the gPROMS model with the input file containing the necessary design and reactor operating data. It is shown that the model can successfully simulate the operation of batch styrene suspension polymerization reactors, and predict the polymerization rate, temperature, pressure and molecular weight distribution of polystyrene.
1. Introduction Polystyrene (PS) is a commercially important thermoplastic, mainly produced by both bulk and suspension free-radical polymerization. The bulk polymerization of styrene is difficult to control and, thus, is being replaced by the suspension process, which produces PS in the form of white granules. In the presence of volatile hydrocarbons (C4 - C6), foamable PS beads can be produced by the suspension process. At low temperatures (e.g., < 100 *^C), the free-radical polymerization of styrene is commonly carried out in the presence of chemical initiators (e.g., azo compounds and peroxides). At higher temperatures (e.g., 100 - 200 °C), the polymerization of styrene can readily proceed via thermal initiation. The main advantage of thermal polymerization of styrene is that pure PS of desirable molecular weight can be produced by simply varying the polymerization temperature. However, formation of low molecular oligomers and close control of the polymerization temperature are the two disadvantages of the thermal polymerization process. Based on the original developments of Mayo (1968) and the later work of Pryor and Coco (1970) regarding the thermal initiation mechanism of styrene, a comprehensive kinetic mechanism was developed to describe the combined thermal and chemical free-radical polymerization of styrene. A key difference between the present work and previous modelling approaches is the use of a comprehensive
174 thermal initiation mechanism, as well as the use of dynamic molar balances for all species related to the formation of 'live' radicals. The complete polymerization model was build in gPROMS© (an equation-based modelling environment) following an open-system architecture. The final model consists of several modelling elements including a comprehensive kinetic model, a thermodynamic model used for equilibria calculations, a gel-glass effect model, etc.). A Microsoft® Windows™ based interface developed in Borland® Delphi™, is used for: (i) the input of process data and model-related parameters, (ii) the selection/editing/storage of the appropriate modelling components, (iii) the construction of the overall model, (iv) the simulation, and (v) the processing of the simulation results.
2. The Batch Suspension Polymerization Process The batch suspension polymerization system considered in the present study, is schematically shown in Figure 1. It consists of a well mixed jacketed vessel. In the suspension polymerization process, liquid styrene is dispersed in the continuous aqueous phase by the combined action of stirring and the use of suspending agents. The reaction takes place in the monomer droplets. For modelling purposes, each droplet can be treated as a small batch bulk polymerization reactor. The heat of polymerization is transferred from the dispersed droplets to the aqueous phase and then to the coolant flowing through the reactor's jacket. Gas Phase:
Monomer and Pentane Fig. 1: Batch suspension polymerization
Suspending medium reactor.
A cascade control system consisting of a master PID and two slave PI controllers was employed to maintain the polymerization temperature within ±0.1 °C of the setpoint value by manipulating the cold and hot water flowrates to the reactor jacket. The master controller monitors the reaction temperature and its output drives the setpoint of the slave controller. The latter monitors the outlet temperature of the jacket fluid and drives the two separate control valves. For the accurate prediction of the reactor pressure with respect to time under both isobaric and nonisobaric conditions, a detailed account of monomer distribution in the various phases is required. In the suspension styrene polymerization process, three
175 different phases can exist, namely, the vapor, the aqueous and the organic phase. During polymerization, the three phases are assumed to be in equilibrium. The fugacities of styrene and pentane in the gas phase are calculated using the Soave-Redlich-Kwong Equation of State (EOS), whereas the corresponding activities in the polymer phase are calculated using the Flory-Huggins equation.
3. The Polymerization Kinetic Mechanism A comprehensive kinetic mechanism is proposed to describe the combined chemical and thermal free-radical polymerization of styrene. Thus, besides the commonly employed reactions (e.g., chemical initiation, propagation and termination), thermal initiation and chain transfer to monomer and to Diels-Alder adduct reactions are included. In particular, the so-called AH thermal initiation mechanism of Mayo comprises a reversible Diels-Alder dimerization of styrene to form 1-phenyl-1,2,3,9tetrahydronaphtalene (AH), the formation of a styryl (M) and a 1-phenyltetralyl radical \A) via the reaction of AH with monomer, the initiation of new polymer chains and the formation of "dead" trimers via the reaction of AH with styrene. Assuming that the quasi-steady approximation for the concentration of the intermediate species, AH, holds true, one can derive a second- or third-order initiation rate model with respect to the styrene concentration, depending on the reaction step that is considered to be the dominant one in the thermal initiation mechanism (Hui and Hamielec, 1972, p.749). To our knowledge this is the first attempt to model the combined chemical and thermal initiation of styrene without the application of quasi-state approximation for all the intermediate thermal initiation species. In summary, the postulated free-radical polymerization mechanism of styrene includes the elementary reactions presented in Table 1 (Kotoulas et al). In the kinetic scheme, the symbols /, R' and M denote the initiator, primary radicals and monomer molecules, respectively. The symbols R^ and Dn identify the respective "live" and "dead" polymer chains of length n. Table 1: Comprehensive Kinetic Mechanism for Styrene Polymerization. '
Chemical Chain Initiation:
Propagation:
I ^^ )2R* R ' + M *"' )Ri
R„+M-^^^R„,,
Thermal Chain Initiation:
Chain Transfer to Monomer:
2^^C$(^) 1
R^+M
Ph
1
•
AH+M—!^ riT J(A)+ ^^/^>.J/^ Ph
•f^
nW K^
^^ >Ri+Dn
Chain Transfer adduct:
R„+AH
to
Diels-Alder
^^^ >Dn+A
A + M-J^^->R3 Termination by Combination:
AH + M
^' ) trimers
R„+R,-^^^D„„
176 The method of moments is employed to calculate the variation of the leading moments of the number chain length distributions of "dead" and "live" polymerization chains with time. The cumulative molecular weight distribution of the polymer is calculated as a weighted sum of all the "instantaneous" weight chain length distributions produced during a batch run. At low to moderate monomer conversions or/and at high temperatures, the polydispersity index of the polystyrene (i.e., P.D = M^/Mn) will be low (e.g., P.D < 2) and the 'instantaneous' weight chain length distribution (WCLD) will follow the two-parameter Schulz-Flory distribution. Diffusion-controlled phenomena affecting the termination and propagation reactions, as well as the initiator efficiency (gel-, glass- and cage effect, respectively) are expressed in terms of a reaction-limited term and a diffusion-limited one (Keramopoulos and Kiparissides, 2002). The latter depends on the diffusion coefficients of the corresponding species (i.e., polymer and monomer) and an effective reaction radius.
4. Windows based gPROMS Application The model for the batch suspension polymerization reactor was developed using in the gPROMS© simulator. A windows based interface developed in Delphi programming language was employed to build, modify and run the reactor model. The model can predict the monomer conversion, initiator(s) and blow agent (pentane) concentrations, polymerization rate, molecular weight distribution of polystyrene, external jacket inlet/outlet temperatures, and the reactor temperature and pressure. The windows based application calls the gPROMS simulation engine in asynchronous mode, thus, allowing the real-time display of the simulation results. Some features of the application are: (i) storage, modification and retrieval of process operating conditions and parameters of the mathematical model, (ii) user defined gel/glass effect functions, (iii) user-defmed thermodynamic model for phase equilibria calculations, (iv) user-defined set-point policy for reactor temperature, (v) user-defmed initiator, pentane and water flowrates during the course of polymerization, (vi) run-time display of simulation results, (vii) generation of report files with gPROMS simulation activity and simulation results, and (viii) extended graphics output capabilities including comparison with experimental data and among different simulation runs. Major points of consideration in the software developments were the simulation speed, the user friendliness of the interface, and the capability of process model modification by the user. Several options are available for editing/modifying both the gel/glass/cage effect models and the EOS used for thermodynamic calculations. In fact, the user can introduce a completely new model, modify an existing relationship or/and import a user-supplied model in the form of a gPROMS Foreign Object.
5. Simulation Results The predictive capabilities of the new kinetic model were demonstrated by a direct comparison of model predictions with experimental measurements on monomer conversion, number and weight average molecular weights and molecular weight distribution. The polymerization was carried at different temperatures in a batch, bulk polymerization system. In the temperature range of 100 - 150 °C, a chemical initiator (e.g., Dicumyl Peroxide, DCP) was employed in combination with the thermal initiation of styrene. On the other hand, at higher temperatures (150 - 180 °C), the polymerization was initiated exclusively by the thermal initiation mechanism.
177 In Figures 2 and 3, experimental and model results on monomer conversion and number and weight average molecular weights are plotted with respect to the polymerization time at 100°C. The initial concentration of DCP was equal to 4,000 ppm. It is apparent that a very good agreement exists between model predictions and experimental results. Figure 4 shows a comparison between simulation results (continuous lines) and experimental data (discrete points) on monomer conversion at three different temperatures (e.g., 150, 160 and 170°C). The polymerization was carried out in the absence of a chemical initiator. As can be seen, for all reactor temperatures, there is close agreement between experimental conversion measurements and simulation results. In Figure 5, a comparison between model predictions and experimental GPC measurements on weight average molecular weight is shown.
1.0-
0.8-
•/
.2 0.6-
1 O
•/^
0.4-
0.2-
• 9^ Time (min)
Fig. 2: Predicted and experimental Fig. 3: Predicted and experimental conversion-time histories (T =100 ^ C, [IQ] average molecular weights (T =100 ^C, = 4,000ppm DCP). [lol = 4,000ppm DCP).
1.0^.-—S-
^ "X-^^^^^^^^—
0.8-
a
c
.2 2 5 >
•/
y
0.6-
8 i
• 0.2•
D T = 150°C 1 • Tsieo'c * T = 170*'C 1
0.0-1
1 Monomer Conversion
Fig. 4: Predicted and experimental conversion-time histories for thermal polymerization (T = 150 - 17(fC).
Fig. 5: Predicted and experimental weight average molecular weights (T = 150-
i7orc).
In general, very good agreement is observed at all reactor temperatures. It is important to point out that weight average molecular weights, exhibit a characteristic decrease with monomer conversion as it has been reported in the literature. This due to the chain transfer to adduct (AH) reaction.
178
1.0-
1 "o a. •5 c
1 iS
0.8-
Conversion • 21.4% O 56.8 % • 71.2% D 100%
k\
0.6-
f^\ d'' \
PS
0.4-
u.
s 1
0.20.0-
M^
°5.\
^^:
'^^fer^
Molecular Weight
Fig. 6: Predicted and experimental molecular weight distributions at different conversion levels. Finally in Figure 6, model predictions are compared with Experimental GPC measurements on molecular weight distributions at 130°C. Simulation results and the corresponding experimental distributions are plotted at four different conversion levels. It must be noticed that the molecular weight distributions are scaled with respect to monomer conversion. Apparently, the calculated molecular weight distributions are in excellent agreement with the GPC measurements over the whole conversion range.
6. Conclusions A batch suspension polymerization reactor model has been developed in gPROMS, including a comprehensive kinetic mechanism for the combined chemical and thermal polymerization of styrene, a generalized model for the prediction of gel/glass effect, molar species and energy balances for the reactor/jacket and the dynamics of the reactor controllers. The predictive capabilities of the proposed model were demonstrated by the successful simulation of experimental data on styrene conversion, number and weight average molecular weights and molecular weight distributions, over a wide range of temperatures. The gBA can be used as a tool to speed-up considerably the training of new engineers, carry out what-if scenarios for the production of new products and the optimization of temperature and/or initiator addition policies.
7. References gPROMS User Guide, 2002, Process Systems Enterprise Ltd. Hui, A.W. and Hamielec, A.E. 1972, J. App. Polym. Sci, 16,749. Keramopoulos, A. and Kiparissides, C. 2002, Macromolecules, 35,4155. Kotoulas, C., Krallis, A., Pladis, P. and Kiparissides, C. (submitted to Macromolecular Chemistry and Physics). Mayo, F.R., J. Amer. Chem. Soc. 1968,90,1289. Pryor, W.A. and Coco, J.H. 1970, Macromolecules, 3, 500.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
179
Waste Heat Integration Between Processes III: Mixed Integer Nonlinear Programming Model Anita Kovac Kralj and Peter Glavi^ Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova 17, Maribor, Slovenia Phone:+386 62 2294454 Fax:+386 62 2527 774 E-mail:[email protected]
Abstract The simultaneous heat and power integration between processes using mixed integer nonlinear progranmiing (MINLP) contains equations of structural and parametric optimization of the nonretrofitted or retrofitted plants. High pressure steam production and generation of electricity using a steam turbine are included in the integrated structure. MINLP can predict: the optimum energy target of the integration between processes, steam and electricity production. The approach has been illustrated by four complex processes (solvent production, formalin production, methanol production and oil refinery) using MINLP. The objective function have maximized the annual profit of heat and power integration to 303,2 kUSD/a.
1. Introduction Integration between processes can reduce waste heat. It can be performed either by pinch analysis or by mixed integer nonlinear progranmiing, MINLP. The pinch analysis does not guarantee the global optimal solution because it cannot be used simultaneously with material balances but it quickly proposes good integrated structures between nontrivial complex processes. Heat integration between processes was first introduced into the literature by Ahmad and Hui (1991). The paper describes a method that, considering an overall plant consisting of many processes, leads to ''total site integration" where heat recovery from one process to another occurs using their utilities. Rudman and Linnhoff (1986) included a turbine which reduced energy usage among the processes. Bagajewicz and Rodera (2000) developed targeting procedures for direct and indirect integration in the special case of many plants. A very popular MINLP algorithm (Biegler et al., 1997), which is based on the mathematical programming can be used for the integration between processes. Although simultaneous it is difficult to converge for complex and energy intensive processes because the number of variables increases with the number of combinations.
2. Waste Heat and Power Integration Between Processes The processes operating within one location can be mutually integrated. Energy consumption can be lowered, if more waste heat can be recovered. The heat transfer between the processes can reduce waste heat, CO2 and SO2 emissions and thereby pollution. We have extended the method of waste heat integration between processes with the simultaneous generation of electricity using the steam turbine (Kovac Kralj et al., 2002). The method for waste heat integration between processes is composed of three steps: 1. retrofit of individual processes (Kova^ Kralj et. al, 2000), 2. analysis of efficient heat transfer between nonretrofitted or retrofitted processes.
180 3. simultaneous integration between some nonretrofitted and some retrofitted existing processes using mixed integer nonlinear programming (MINLP). The integration between processes can be carried out by all the steps, or by the first and the third step or only by the last step depending on the complexity of the problems. All the steps can be carried out simultaneously and the MINLP algorithm applied with or without the first and the second step. The mathematical model is using the MINLP algorithm, performing the simultaneous heat integration between processes and generation of electricity. The MINLP model contains equations of structural and parametric optimization of the nonretrofitted or retrofitted plants. The high pressure steam production and generation of electricity using the steam turbine are included in the integrated structure. MINLP can predict: the optimum energy target of the integration between processes, steam and electricity production. The procedure does not guarantee a global cost optimum, but it does lead to good, perhaps near-optimum designs. 2.1. The steam turbine system The steam turbine system is a closed - cycle one. It is used to drive the process reducing steam pressure from high or medium pressure to low or atmospheric pressure. The design pressure of the high or medium pressure turbine can be varied. The power of the high or medium pressure turbine (Ptur) is a function of the pressure respectively or of the inlet (rtur,in) and outlet (Tiur.out) temperatures (a, b and c being constants): ^tur = ^tur + ^tur 'Ttm,m + ^tur *'' tur,out
(1)
The efficiency of the high or medium pressure turbine (//tur) is a quadratic function of the inlet (Ti^a,in) temperature: ^tur = ^ef + Kf -TturM + ^ef '(T^An)
(2)
The heat flow rate of the condenser (C^con) is a function of the inlet (rtur.in) and the outlet (T'tur.out) temperatures: ^con ~ ^con "^ ^con * ^ tur,m "^ Qon "^ tur,out
W/
The annual depreciation of the high or medium pressure turbine (Cd,tur in USD/a) is a function of the inlet {Ttar,m) temperature. The quadratic equations is composed of two equations (Biegler et al., 1997, pp. 688): Cd,tur = 925 0 9 0 - 4 2 7 2 -7^,^+ 5,09(7^^,^'
(4)
The annual depreciation of the high or medium pressure pump (Cd,pum in USD/a) is a function of the inlet (rtur,in) temperature (Biegler et al., 1997): Cd,pum = 368 + 7,5 'T^^]n
(5)
3. Case Study Four existing nonretrofitted processes: solvent production from a primary oil, formalin production from methanol and air, oil refinery (atmospheric distillation), low-pressure Lurgi methanol production, are operating within one location and, therefore, the method
181 of waste heat integration between them for steam and electricity production can be used. The MINLP model contains equations of structural and parametric optimization of the nonretrofitted or retrofitted plants. The MINLP model is based on the superstructure (Fig. 1). Hot and the cold process streams of the existing nonretrofitted processes (Kovac Kralj et al., 2002): solvent recovery (E204), the refinery (FG, E211, E210, S80), the formalin (ElO, E5, E6) and the methanol (ACl) or retrofitted methanol (SG) or retrofitted refinery (RS80) which are heated or cooled by utility, are included in the superstructure. The superstructure is including the integration between: the synthesis gas stream of the retrofitted methanol plant and the stream 80 of the existing nonretrofitted refinery (rMsG-Rs8o)» the synthesis gas stream of the retrofitted methanol plant and the stream ElO of the nonretrofitted formalin plant (rMsG-FEio)» the synthesis gas stream of the retrofitted methanol plant and the stream RS80 of the retrofitted refinery (rMsG-rRRS8o)» generation of electricity is using surplus heat in the retrofitted methanol plant (rMsG-TUR) and heat flow rate of condensers can be used for process heating in the nonretrofitted formalin plant (KI-FEIO and K2-FE5), the flue gas stream of the nonretrofitted refinery and the stream of the 37 bar steam production (RFG-P37), the flue gas stream of the nonretrofitted refinery and the stream 80 of the nonretrofitted refinery plant (RFG-RS8O)» the E211 stream of the nonretrofitted refinery and the stream E5 of the nonretrofitted formalin plant (RE2irFE5)» the E211 stream of the nonretrofitted refinery and the stream of water preparation for 37 bar steam (RE2irP37), the E211 stream of the nonretrofitted refinery and the stream E5 in split A of the nonretrofitted formalin plant (RE2irFE5A)» the E211 stream of the nonretrofitted refinery and the stream E6 of the nonretrofitted formalin plant (RE2II-FE6)» the ACl stream of the nonretrofitted methanol and the stream ElO of the nonretrofitted formalin plant (MACI-FEIO)» the E210 stream of the nonretrofitted refinery and the stream E204 of the nonretrofitted solvent plant (RE2IO-SE204) and the E210 stream of the nonretrofitted refinery and the stream E6 of the nonretrofitted formalin plant (RE2IO-FE6)The selected hot streams (/; / = SG, FG, E211, E210, ACl; / = 1,.. /) and cold streams (/; j = S80, P37, ElO, E5, E6, E204, RS80; j = 1, .. 7) are binary variables for possible splitting in structures A or B or C (yi/^, yt^, ytc, yjA, yjB, yjc) or D for the generation of electricity. The binary variables are determining the integration between processes. The redundant heat flow rate of the retrofitted methanol plant can be transferred to the nonretrofitted refinery CVSGA = yssoA = 1) or to the formalin process O^SGB = JEIOA = 1) or to the retrofitted refinery (JSGC = JRSSO = 1) or to generation of electricity O^SGD = }^tur =1)-
The heat of the flue gas can be used for the 37 bar steam production CVFGA = yp3i = 1) or for heating the nonretrofitted refinery stream (JFGB = yssoB =1)- The waste heat of stream E211 in the nonretrofitted refinery can heat the stream E5 and water for 37 bar steam production (yEiuA = >'E5B = yp3i = 1) or E5 and E6 streams of the formalin process CVE2IIB = JE5A = JE6A = 1)- The stream ACl of the existing methanol plant can be integrated with ElO of the formalin one O'ACI = >'EIOB = !)• The heat flow rate of E210 in the nonretrofitted refinery can be transferred to the formalin plant CVE2IOB = yE6B = 1) or to E204 of the solvent production process CyE2ioA = }^E204 = !)•
182 rMsr.-TUR 2096 k W i
875 °C K2-Fpj
. 14761 kW
83:
•
rMsG-Rs80
BOS kW
£)
821,3 °(I
synthesis g^
as split A
- J I — l _ r ) ytur , ^^^70^
rMsG-rRRSj retrofitted methanol
m
"HI-
70''C '
Bl
,^,^o^
3HB kW 2QZ2kW
3178 kW
^^^-^"C 0
I
2072 kW
369°C 0
1
313 °C
ysG 3178 kW 16427 kW K54,6°C ^Bl
875 °C existing refinery A 345,5^^ flue gas
o
RpG-Rs8(J
B
ypG
rMsG-Fmo 313 °C
P37 150 °C 220 °C
o o a o -P37
existing refinery
A
182,7 °C.
yE2ii
E211
B
existing methanol ACl
136 °C|
95 °C
%211-f E5A 136 °C\
110,7°C
P^ACI-FEIO
127,5
O
t
yAci
114 °C RE21irSE204 A ^ 130,8 °C
existing refinery 169,7 °d
A yE210
E210
RE2IO"FHS
134,6 °C
l ^ ^ 'S80A
. 4 existing refinery ' 175,4 °C stream 80
__>-<.
2096,2 kW
ys8o B
233,5 °C M ^S80A
246 °C
6
667,5 k\M
94,25 °C
<—
loc 94,18 °C^
55,1 °C I
•^ existing formalin
2072 kW
94 °C
1840,5 kW
ElO
X j i
II
\
U|^39,2k\7^39,2kW
1
I
53,3 °C E5
aa 75,3 kW 213 °C
'1664,6 kW
Fig. 1: Grid diagram of heat and power integration between the processes.
y^s
existing formalin 15 °C
75,3 kW
Or,
yEio
existing formalin
139,2tW^^
139,2kW,
114,6°C 358 °C
yp37
O
Q
1464,4 kW
^ElOB
37 bar steam production
32 °C
122 k\M
1040 ktV
94,22 °C
.54°C
o
99°(C
6
B
83,5 kW
yE6 E^ existing solvent 69,2 °C
yE204 E204 retrofit refmery yRsso RS80
183 The MINLP model is including the equations of structural and parameter optimization. The hot and cold streams can split to streams A or B or C or D for the generation of electricity: yiA-^yiB-^yic + yiD^i
/=i,.../
(6)
yjA-^yjB
j=l,.-J
(7)
+ yjc^^
The binary variables can influence the structural dependence. The 37 bar steam production is including the preparation of water (yp^i = yEiuA = !)• yFGA+ yp3i ^ 2 yFGA + }'E211A^2
(8) (9)
In integrating the retrofitted methanol with the nonretrofitted refinery (ysGA= >'S80A= 1) or the retrofitted one (JSGC = ^RSSO = 1) the furnace is not needed; therefore, there are no flue gas and 37 bar steam production (ypcA = JFGB = JSSOB = yp3i = yEiuA = 0): JSGA + ySGC + >^FGA ^^ 1 JSGA+^'SGC+yPGB^l ySGA + >'SGC + >'S80B ^ 1 JSGA+ySGC+JP37^ 1 >'SGA+ ySGC+ yE211A^ 1
(10) (11) (12) (13) (14)
The generation of electricity CVSGD) is including the condensation (ytur)* ysGD-ytm'^0
(15)
The MINLP model is using additional annual profit of heat and power integration criterions. The additional annual income of integration sums up the additional savings of: fuel, 5 bar steam, 8 bar steam, cooling water and 37 bar steam and electricity production. In the model the existing areas can be used (AHE,ex) by enlarging them with additional areas (AHE,add)- The additional annual depreciation of enlarged and new areas (AHE,new) of heat exchangers and pipings (Table 1), is multiplied by the payback multiplier (r = 0,2448) to obtain the maximum annual profit of heat and power integration: Max annual PROFIT = Cfuei-[^G-jFGA+^GB+^8o-}'s80A+^s80B ] + Q ' [ ^ 1 0 A + ^ 1 0 B + ^ 1 0 C +^5-tyE5A+ }'E5B+ ^'ESC) + ^6-CVE6A+3^E6B)] + Cg'^204-JE204 + Ccw[^E2irCVE211A+3^E211B) + ^ E 2 1 0 A + ^ E 2 1 0 B ] + Q i -Ptur'^tur
+ Qr^i
-[S(670.
AHE,add'''')-l,8 +1(8600+670. AHE,new'''')-l,8
- Cd,tur - Cd,pum - Cpip • ( S ( yi^A+ yi,B + yi,c) + ? ( J;,A+ }';,B+ >';,c) )]' r
(16)
The simultaneous heat and power integration as optimized by MINLP is selecting the generation of electricity using the high pressure turbine (40 bar with the efficiency of 82 %, 7;ir,in = 500 °C, T^^out = 125 °C, ptur,out = 2 bar ) and the boiler rMsG-TUR in the retrofitted methanol plant (VSGD = ytur= 1; grey heat exchanger in Fig.l). The condensers KI-FEIO and K2-FE5 are transfering total heat flow to the nonretrofitted formalin (y^^ = jEioc = ^Esc = 1 ) - The flue gas is integrated with the process stream in the nonretrofitted refinery process (JFGB = yssoB = 1; RFG-RSSO)- New or additional areas of heat exchangers are: rMsc-TUR (with 2096 kW) of 8,7 m ^ KI-FEIO (with 1464,4 kW, TKI out= 117,7 °C) of 85,7 m ^ K2-FE5 (with 139,2 kW) of 8,5 m\ RFG-RS8O (with 667,5 kW) of 57,5 m\ The
184 structure enables the generation of 496,6 kW of electricity and savings of 5 bar 1603,6 kW of steam and (2-667,5 kW) of fuel. The additional annual depreciation of: the high pressure turbine, pump, insulation piping, new and additional areas of heat exchanger are 176,8 kUSD/a. The additional annual income of electricity production, saving of fuel and 5 bar steam are 480 kUSD/a. The additional profit of the integration is estimated to be 303,2 kUSD/a. Table 1: Cost data for example processes. 4,
(8 600,0+670A^'^^)-3,6 17,06 4,17 2,95 2,60 0,40 4,43 2380,0
Installed costs of heat exchanger : Cost of electricity (Q/)**: Cost of 37 bar steam (Q7)**: Cost of 8 bar steam (Cg)*^: Cost of 5 bar steam (Cs)^: Cost of cooling water (Ccw)^Cost of fuel {Cfueif: Cost of insulation piping (Cpjp)^:
&
Ahmad; Tjoe and Linnhoff, i A=area in m Swaney;
USD USD/(GJ) USD/(GJ) USD/(GJ) USD/(GJ) USD/(GJ) USD/(GJ) USD
Perry, - Ciric and Floudas,
4. Conclusions We have extended the simultaneous integration method to exchange waste heat between the processes with the simultaneous generation of electricity using the steam turbine. Simultaneous heat and power integration between processes can be performed using the MINLP algorithm, in which alternatives of the heat transfer between several existing nonretrofitted or retrofitted processes can be included. We have carried out simultaneous heat and power integration of four existing, nontrivial plants. The objective function has maximized the annual profit to 303,2 kUSD/a.
5. References Ahmad, S. 1985. Heat exchanger networks: Cost tradeoffs in energy and capital. Ph. D. thesis. University of Manchester, Manchester, 113 -306. Ahmad, S. and Hui, C.W. 1991. Heat recovery between areas of integrity. Computers chem. Engng, 15, 809-832. Biegler, L.T., Grossmann, I.E. and Westerberg, A.W. 1997. Systematic methods of chemical process design. Prentice Hall PTR. Upper Saddle River New Jersey. Bagajewicz, M.J. and Rodera, H. 2000. Energy savings in the total site. Heat integration across many plants. Comput. chem. Engng 24, 1237 - 1242. Ciric, A.R and Floudas, C.A. 1989. A retrofit approach for heat exchanger networks. Comput. chem. Engng 13/6, 703-715. Kovac^ Kralj, A., GlaviC, P. and Kravanja, Z. 2000. Retrofit of complex and intensive processes II: stepwise simultaneous superstructural approach. Comput. chem. Engng 24/1, 125-138. Kovac Kralj, A., Glavic, P. and Krajnc, M. 2002. Waste heat integration between processes. Applied Thermal Engng 22, 1259-1269. Perry, R.H. 1974. Chemical engineer's handbook, McGraw-Hill, New York, 25-19. Rudman, A. and Linnhoff, B. 1995. Process integration: planning your total site. Chemical Technology Europe January/february. Swaney, R. 1989. Thermal integration of processes with heat engines and heat pumps. AIChE Journal 35/6, 1010. Tjoe, T.N. and Linnhoff, B. 1986. Using pinch tehnology for process retrofit. Chem. Engng. 28,47-60.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
185
Integration of Process Modelling and Life Cycle Inventory. Case Study: i-Pentane Purification Process from Naphtha L. Kulay^^\ L. Jimenez^^\ F. Castells^^\ R. Banares-Alcantara^^^ and G. A. Silva^*^ (1) Chem. Eng. Dept., University of Sao Paulo, Av. Prof. Luciano Gualberto tr.3 380, 05508-900 Sao Paulo, Brazil. E-mail: {luiz.kulay, gil.silva}@poli.usp.br (2) Chem. Eng. Dept., University Rovira i Virgili, Av. Paisos Catalans 26,43007 Tarragona, Spain. E-mail: {Ijimenez, fcastell, rbanares}@etseq.urv.es
Abstract A framework for the assessment of the environmental damage generated by a process chain and based on a life cycle approach is proposed. To implement it, a methodology based on the integration of process modelling and environmental damage assessment that considers all the processes of the life cycle was developed. This integration is achieved through an eco-matrix formed by eco-vectors containing the most relevant environmental loads. To verify the methodology, a case study on the deisopentaniser plant of REPSOL-YPF (Tarragona, Spain) has been carried out. The environmental profile of the alternative scenarios is improved when co-generation and heat recovery are considered.
1. Introduction Modern society demands a constant improvement on the quality of life. One of the actions of the administration is to guarantee a better environment. In this context, chemical process industries suffer an increasing pressure to operate cleaner processes. To achieve this goal, environmental aspects and the impact of emissions have to be considered in the design of any project using one of the procedures already developed [ISO, 1997]. Life Cycle Assessment (LCA) is the most common tool for the evaluation of the environmental impact of any industrial activity. The LCA is chain-oriented procedure that considers all aspects related to a product during its life cycle: from the extraction of the different raw materials to its final disposal as a waste, including its manufacture and use. According to ISO 14040 [ISO, 1997], LCA consists of four steps: goal and scope definition, inventory analysis, impact assessment and interpretation. The LCA identifies and quantifies the consumption of material and energy resources and the releases to air, water and soil based upon the Life Cycle Inventory (LCI). The procedure as it is applied to chemical processes has been previously described by Aelion et al. (1997). The results from the LCI are computed in terms of environmental impacts, which allow the establishment of the environmental profile of the process. For environmental assessment the application of potential impacts is restricted to the estimation of global impacts. For example, the amount of CO2 released is used as an indicator of climate change due to its global warming potential. One kilogram of CO2 generated by an industrial process in any of the different stages of a product life cycle
186 contributes equally to the climate change. However, this is not the case for sitedependant impacts, such as the potential impact of acidification measured as H"^ release. Unfortunately, the LCA does not accommodate for site-specific information of different process emissions. To include it, weighting factors across the system boundaries have to be selected, a task which is beyond the objective of this work [Sonneman et al., 2000]. For this reason, a methodology that includes environmental aspects in the analysis of processes has been developed. Applying the LCA perspective to different scenarios for electricity generation and steam production provides key information to decision makers at a managerial and/or political level.
2. Methodology This section describes a proposed methodology to evaluate the environmental impact of a chemical industrial process chain in the most accurate way possible. It includes a procedure to compute the LCI based on the concept of eco-vectors [Sonneman et al., 2000]. Each process stream (feed, product, intermediate or waste) has an associated ecovector whose elements are expressed as Environmental Loads (EL, e.g. SO2, NOx) per functional unit (ton of main product). All input eco-vectors, corresponding to material or energy streams, have to be distributed among the output streams of the process (or subsystem). In this sense, a balance of each EL of the eco-vector can be stated similarly to the mass-balance (inputi = outputi + generation,). This is the reason why all output streams are labelled as products or emissions. The eco-vector has negative elements for the pollutants contained in streams that are emissions and/or waste. Figure 1 illustrates these ideas for an example of a chain of three processes that produces a unique product. The proposed procedure associates inventory data with specific environmental impacts and helps to understand the effect of those impacts in human health, natural resources and the ecosystem.
3. Problem Statement The methodology has been applied to the debutaniser and depentaniser columns of a naphtha mixture processed in the REPSOL-YPF refinery (Tarragona, Spain). The process PFD is shown in Figure 2. The first column is fed with a naphtha stream rich in C4 (= 28.3 tonh"^). This unit is a debutaniser and removes n-butane and lighter components (= 0.50 tonh"*). Perfect separation is not achieved since capital investment must be balanced against operating costs to arrive at an acceptable economic payout. As a result, it is more convenient to think of the debutaniser as having a cut point between n-butane and i-pentane, which is removed as top product from the second column (= 16.3 tonh'^). The intermediate naphtha input stream (C5 rich-naphtha = 71.5 tonh'^) comes from another plant in the same refinery. Production under design conditions is 83.0 ton•h'^ Proper understanding of recovery in both columns can improve refinery economics, due to the downstream effects of light components. The plant has four heat exchangers, and two of them (HX-1 and HX-3) recover process heat. Both condensers are air cooled, and thus plant utilities are electricity and steam. The production of these two utilities consumes additional natural resources and generates additional releases to the environment, and thus they were included.
4. Results The LCI was computed using process simulation as a support tool. This approach is appropriate for both, the design of new processes and the optimisation of existing ones. The use of process simulators to obtain the LCI guarantees a robust approach that
187 allows LCA to exploit their advantages in terms of availability of information, and reduces the uncertainty associated with data in the early phases of design. However, we can expect that on a long-term perspective, relative and uncertain values are valid when comparing among alternatives. The models for the naphtha plant, the electricity generation, the steam production and the heat recovery system were developed using Hysys.Plant®, and were validated using plant data. To build accurate models for all alternatives is not practical, and thus the models were reused for the different alternatives considered. The key simulation results were transferred to a spreadsheet (Microsoft® Excel), through macros programmed in Visual Basic™. Despite the fact that emissions were produced at different locations (e.g. those related to its extraction, transport and refining), the eco-vector has a unique value for each stream, i.e. it does not considers site-dependant impacts. The eco-vectors associated to all the inputs and outputs of the process are computed per ton of product (i-pentane). The aspects included in the eco-vector were divided into two categories: > Generated waste: in air (CO2, SO2, NO^, and VOC; estimated as fugitive emissions), wastewater (chemical oxygen demand, COD) and solids wastes (particulate matter and solids).
Raw material
Process I
(^^A + \
J RM
Process 3
(soA
^ (^0^]
+
NO,
\
Process : 2 '
NO,
\
)1
CO^ NO,
+
J2
\
J3
-> Product
=
fsoA co^ NO,
\
J%CA
Figure 1. Life cycle inventory analysis according to the eco-vector principle. Econd D6-C4
HX_1
I
Econd De-iC5
De-C4
Figure 2. Simplified PFD of the REPSOL-YPF plant. >
Consumption of natural resources: depletion of fossil fuels (fuel-oil, gas-oil, carbon, natural gas and oil), consumption of electricity and water. The plant consumes medium-pressure steam, while electricity generation and steam
188 production may use high or low pressure steam. The eco-vectors that correspond to these streams are also considered. The environmental loads of the process inputs were retrieved from the ETH Report [Frischknecht et al., 1996] and the TEAM™ database [TEAM, 1998]. The use of different scenarios allows the comparison among alternatives. The scenarios were chosen based on the source of steam and the generation of electricity (Table 1). Three of them focus on the environmental impacts of the original process (scenarios VI, VII and VIII) where changes related to the production of steam were compared. All other cases compare alternatives for a possible future implementation, e.g. those considering co-generation to produce electricity. For each one of the scenarios the eco-vector was divided into the three different processes: steam production, electricity generation and naphtha plant. As an example, the eco-vectors of scenario III are shown in Table 2. The results indicate that: • To reduce the CO2 and the BOD we have to focus on the production of steam. For scenarios VI, VII and VIII the electricity generation has also a certain impact (^ 3 to 29%). • To decrease the SO2 changes should be made in the production of steam and/or in the generation of electricity (Figure 3). The scenarios that include cogeneration radically minimise this value. • NOx, VOC and solid wastes are produced completely by the generation of electricity. • H2O consumption is mainly due to steam production. As expected, heat integration allows the reduction of this amount by 91%. Results (Figure 4a) show that scenarios VII, VIII and, to some extent, scenario V concentrate most of the consumption of fossil fuels, while the best alternatives in terms of water consumption are scenarios III and IV. As expected, heat recovery has a great impact on the results of scenarios III, IV, VI and, to a lesser extent, scenario VIII. If cases III and VIII are compared, the impact of co-generation on ELs is easily detected. Concerning the consumption of natural resources, the best alternative is scenario III (cogeneration, downgrading of steam and heat recovery). In terms of atmospheric releases (Figure 4b), the best options are scenarios III and IV. On the contrary, the most significant impacts were observed in scenarios VII and VIII. Nevertheless, the releases of NOx, SO2 (scenario V) and VOC's (scenario VIII) must be highlighted. Table 1. Main characteristics of the scenarios considered.
I II III IV V VI VII VIII
Electricity generation Co-generation Co-generation Co-generation Co-generation Expansion of steam in a turbine Spanish energy grid Spanish energy grid Spanish energy grid
Steam production Generation of steam Expansion of steam Generation + heat recovery Expansion + heat recovery Fuel oil & fuel gas burning Fuel oil & fuel gas burning + heat recovery Fuel oil & fuel gas burning Generation + heat recovery
189 Table 2. Eco-vectors for scenario 111.
Natural gas/kg Water/kg Electricity/kW High pressure steam/kg Medium pressure steamykg Electricity/kW High pressure steam/kg Medium pressure steam/kg COz/kg SOa/kg NO,/kg VOC/kg Particulate matter/mg DQO/mg
• Steam production
Steam Electricity production generation Inputs 0. 1.410"' 0. 1.5-10' 0. 0. 0. 1.8-10-' 0. 0. Outputs 3.410"" 0. 1.810"' 0. 0. 1.810"' Atmospheric emissions ^ 0. 3.7-10"' 0. 0. 0. 0. 0. 0. 1.5-10" 0. 1.510" Liquid efluents 3.9-10"' 0.
Plant operation
Total /ton i-Cs
0. 1.510"' 3.4-10"^ 0. 1.810"'
1.410"' 3.0-10"' 3.4-10"^ 1.810"' 1.810-'
0. 0. 0.
3.4-10"'' 1.8-10"' 1.810"'
0. 0. 0. 0. 0.
3.7-10" 0. 0. 0. 1.510 1.510"
0.
3.910"'
I Electricity generation
Figure 3. Comparison of the SO2 generation, (a) Scenario VI; (b) Scenario VII; (c) Scenario VIII. With respect to wastewater generation, there are a few scenarios with low impact (III, IV, VI and VIII) while the rest exhibit very similar values. If all aspects are analysed simultaneously, the best alternatives are scenarios III and IV, while the worst one is scenario VII. It is noteworthy to say that all environmental loads considered in the ecovector have to be balanced to reach a compromise, as their impacts in the ecosystem and human health differ widely. Also, note that some of the impacts are local (e.g. steam production), while others are distributed in different regions (e.g. extraction, external electricity generation) even though the LCA approach does not allow to differentiate among them.
190
I
n
m
IV
V
VI
B Natural gas
H Fuel oil
BFuelgas
QOil
a Water
D Electricity
vn
vffl
• Carbon
I
n
in
IV
V
VI
IZ1C02
BS02
• NOx
0 Particulate
^QOD
• Sobd wastes
vn
Vffl
HVOC
Figures 4a and 4b. Percentage of the impact on different Environmental Loads for each scenario, (a) Raw materials consumed; (b) Emissions.
5. Conclusions Significant progress in the integration of environmental aspects with technical and economic criteria has been achieved to date, although limitations still exist due to the uncertainty of the available data. The proposed methodology shows that the use of process simulators to obtain the LCI guarantees a robust approach. Furthermore, the methodology provides valuable information to compare alternatives for future implementation by assessing and preventing environmental impacts. This study will be extended with the application of models to predict the damage on human health, natural resources and the ecosystem. For the case study two different types of environmental profile can be identified (scenarios I-IV and scenarios V-VIII). The use of co-generation to produce electricity decreases the total damage, as its relative impact is lower than the one resulting from the use of the Spanish electricity grid.
6. References Aelion, V., Castells, F. and Veroutis, A., 1995, Life cycle inventory analysis of chemical processes. Environ. Prog., 14 (3), 193-195. Frischknecht, R., Bollens., U., Bosshart, S. and Ciot, M., 1996, ETH report, Zurich, Switzerland. ISO 14040, 1997, Environmental management. Life cycle assessment. Principles and framework, ISO, Geneve, Switzerland. Sonnemann, G.W., Schuhmacher, M. and Castells, F., 2000, Framework for environmental assessment of an industrial process chain, J. Haz. Mat., 77, 9 1 106. TEAM®, 1998, Ecobilan Group, Paris, France.
7. Acknowledgements One of the authors (L. Kulay) wishes to thank CAPES (Ministry of Education of Brazil) for the financial support. We also acknowledge the cooperation of REPSOL-YPF, and Hyprotech (now part of Aspentech) for the use of an academic license of Hysys.Plant®.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
191
Superstructure Optimization of the Olefin Separation Process Sangbum Lee, Jeffery S. Logsdon*, Michael J. Foral*, and Ignacio E. Grossmann Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA15213, USA;*BP, Naperville, IL60563, USA
Abstract The olefin separation process involves handling a feed stream with a number of hydrocarbon components. The objective of this process is to separate each of these components at minimum cost. We consider a superstructure optimization for the olefin separation system that consists of several technologies for the separation task units and compressors, pumps, valves, heaters, coolers, heat exchangers. We model the major discrete decisions for the separation system as a generalized disjunctive programming (GDP) problem. The objective function is to minimize the annualized investment cost of the separation units and the utility cost. The GDP problem is reformulated as an MINLP problem, which is solved with the Outer Approximation (OA) algorithm that is available in DICOPT++/GAMS. The solution approach for the superstructure optimization is discussed and numerical results of an example are presented.
1, Introduction The olefin process involves a number of steps for producing and separating hydrocarbon components consisting of hydrogen and Ci-Cs components. We address the optimization of the separation system, where the goal is to select a configuration of separation tasks and their corresponding units, as well as pressure and temperature levels in order to perform heat integration. The objective is to minimize the total annualized cost of the separation system. Figure 1 shows the superstructure of the olefin separation system. There are number of states and separation tasks. The white boxes represent sharp split separations and the shaded boxes represent non-sharp split separations. We consider 8 components in the separation system and they are hydrogen, methane, and C2~C5 components. Since we are mainly concerned with the recovery of ethylene and propylene, we assume that the C4 mixture and the C5 mixture can be treated as a single component. As shown in Figure 1, there are 25 states including final products and 53 separation tasks. Non-sharp split separations have intermediate components which appear in both the top and bottom products. For each separation task, there is a subset of technologies available depending on the separation task. Table 1 shows 7 separation technologies considered in the separation process. Dephlegmator is a separation unit where heat exchange and mass transfer take place at the same time. Cold box is a cryogenic separation unit that is based on Joule-Thomson effect. Each separation task can be performed by a number of separation technologies, which are selected based on the components involved in the feeds.
Figure 1: Superstructure of separation system.

Table 1: Separation technologies.
T1  Distillation column
T2  Physical absorption tower
T3  Membrane separator
T4  Dephlegmator
T5  Pressure Swing Adsorption (PSA)
T6  Cold box
T7  Chemical absorption tower
2. GDP Model
We propose a generalized disjunctive programming model for optimizing the superstructure of the separation system shown in Figure 1 (see Yeomans and Grossmann, 1999a, 1999b). The first level in the embedded disjunction corresponds to the selection of the separation task. Once the separation task is selected, the second level disjunction is for the selection of the separation technology. For example, if a distillation column is chosen, then the mass and energy balances for the distillation column are enforced and the corresponding cost term is considered. An additional disjunction is the heat integration for the distillation columns, and another disjunction is for compression, pumping or pressure reduction of each state. For the separation units, simple mass/energy balances are used. The assumptions for modeling the separation system are as follows: 1) the vapor pressure of a stream is calculated with Raoult's law and the Antoine equation (Reid et al., 1977); 2) the utility (cooling water/hot steam) cost is given by a function of temperature (see Figure 2); 3) the investment cost is given by concave cost functions (Douglas, 1988).
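To make assumption 1) concrete, the sketch below evaluates an Antoine-type vapor pressure and a Raoult's-law bubble pressure; the coefficients and component values are illustrative placeholders, not the data used in the paper.

```python
# Vapour pressure from an Antoine-type correlation, log10(Psat) = A - B/(T+C),
# combined with Raoult's law for an ideal liquid (assumption 1).
# The Antoine coefficients below are placeholders, not data from the paper.
ANTOINE = {                      # A, B, C with T in K and Psat in bar
    "ethylene": (3.87, 584.1, -18.3),
    "ethane":   (3.95, 663.7, -16.5),
}

def psat(component, T):
    A, B, C = ANTOINE[component]
    return 10.0 ** (A - B / (T + C))          # [bar]

def bubble_pressure(x, T):
    """Raoult's law bubble pressure of a liquid with mole fractions x."""
    return sum(xi * psat(c, T) for c, xi in x.items())

print(bubble_pressure({"ethylene": 0.6, "ethane": 0.4}, 250.0))   # [bar]
```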
Based on these assumptions, the following nonconvex GDP model is constructed:

Indices
  i    state
  k    distillation column
  s    separation task
  st   separation technology

Sets
  I     states i
  K     distillation columns k
  Si    separation tasks s for state i
  STs   separation technologies st for task s

Parameters
  EMAT  minimum temperature difference
  CRU   upper bound for compression ratio
  CRL   lower bound for compression ratio

Variables
  Ti      temperature of state i
  Pi      pressure of state i
  xi      flowrate of state i
  ICi     investment cost for separation of i
  CCi     compressor cost for state i
  UCi     utility cost for separation of i
  YSi,s   selection of separation task s for state i
  YTs,st  selection of separation technology st for task s
  YZi,k   selection of heat integration for state i and column k
  YCi     selection of compression for state i
  RTi     top recovery ratio of state i
  RBi     bottom recovery ratio of state i
  Qi      heat generated or consumed by state i
  QEXi,k  heat transferred from state i to distillation column k
  TiC     condenser temperature in the distillation column for i
  TkR     reboiler temperature in distillation column k

Model Olefin1:

a) Minimize the annualized cost of capital investment, compression and utility:

     min Z = Σi (ICi + CCi + UCi)

b) Overall mass balances:

     s.t.  Ax = 0

c) Pressure and temperature calculation by the Antoine equation:

     Pi = fa(Ti),  ∀i ∈ I

d) Embedded disjunction for the separation task (outer level: task selection; inner level: technology selection), ∀i ∈ I:

     ∨(s ∈ Si) [ YSi,s;  xi,top = RTi · xi,feed;  xi,bot = RBi · xi,feed;
                 ∨(st ∈ STs) [ YTs,st;
                               mass balance: fm(xi) = 0;
                               energy balance: fe(xi, Ti, Pi, Qi) = 0;
                               cost function: (ICi, UCi) = fc(xi, Ti, Pi, Qi) ] ]

e) Disjunction for the heat integration, ∀i ∈ I, ∀k ∈ K:

     [ YZi,k;  TiC ≥ TkR + EMAT;  QEXi,k ≥ 0 ]  ∨  [ ¬YZi,k;  QEXi,k = 0 ]

     (ICi, UCi) = fz(xi, Ti, Pi, QEXi,k)

f) Disjunction for the compressors/pumps, ∀i ∈ I:

     [ YCi;  (Ti, Pi)out = fp1((Ti, Pi)in);  CRL ≤ Pi,out/Pi,in ≤ CRU;  CCi = fp2(xi, Ti, Pi) ]
        ∨  [ ¬YCi;  (Ti, Pi)out = (Ti, Pi)in;  CCi = 0 ]

g) Logic propositions relating the Boolean variables, YSi,s ⇔ ∨(st ∈ STs) YTs,st, ∀st ∈ STs, ∀s ∈ Si, ∀i ∈ I, and linking YSi,s with YZi,k, ∀k ∈ K

h) Variable bounds:

     0 ≤ xi, Ti, Pi, Qi, QEXi,k,  ∀i, k
     YSi,s, YTs,st, YZi,k, YCi ∈ {true, false},  ∀i, s, st, k
In the above model, fa, fm, fe, fc, fz, fp1 and fp2 are the corresponding functions for the calculation of pressure, mass balance, energy balance, investment and utility cost, heat exchange cost, and compressor/pump outlet conditions and cost, respectively. We adopt simple model equations for the cost calculation of the process units and the compressor T/P (Douglas, 1988; Biegler et al., 1997). The above GDP model is transformed into an MINLP using the big-M formulation for the disjunctions (Lee and Grossmann, 2000).
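As a hedged illustration of that big-M step, the fragment below reformulates a single compressor/pump disjunction of the type in f); the variable bounds, compression-ratio limits, cost coefficients, and big-M value are invented for the example, and the code is a sketch rather than the authors' GAMS model.

```python
# Sketch of the big-M reformulation of one compressor disjunction of type f).
# All numerical values below are placeholders.
from pyomo.environ import (ConcreteModel, Var, Binary, NonNegativeReals,
                           Constraint, Objective, minimize)

m = ConcreteModel()
m.y = Var(domain=Binary)             # YC_i: compressor selected for state i
m.p_in = Var(bounds=(1.0, 10.0))     # inlet pressure [bar]
m.p_out = Var(bounds=(1.0, 50.0))    # outlet pressure [bar]
m.cc = Var(domain=NonNegativeReals)  # compressor cost CC_i

CR_L, CR_U, M = 1.5, 4.0, 100.0      # compression-ratio bounds and big-M

# y = 1: CR_L*p_in <= p_out <= CR_U*p_in; the big-M terms deactivate these
# inequalities when y = 0, in which case p_out = p_in is enforced instead.
m.cr_lo = Constraint(expr=CR_L * m.p_in - m.p_out <= M * (1 - m.y))
m.cr_up = Constraint(expr=m.p_out - CR_U * m.p_in <= M * (1 - m.y))
m.eq_lo = Constraint(expr=m.p_out - m.p_in <= M * m.y)
m.eq_up = Constraint(expr=m.p_in - m.p_out <= M * m.y)

# Cost is charged only if the compressor exists (placeholder linear form).
a, b = 5.0, 0.8
m.cost = Constraint(expr=m.cc >= a * m.y + b * (m.p_out - m.p_in) - M * (1 - m.y))
m.obj = Objective(expr=m.cc, sense=minimize)
```

In a full model this pattern would be repeated for every disjunction in d)-f) before the resulting MINLP is passed to the OA solver.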
3. Utility System
The process streams and separation units require individual cooling or heating for the temperature changes. We consider heat integration with a number of available utility streams at different temperature levels. Due to the discrete choices of temperature and cost, the optimal selection of the utility stream yields an MINLP. However, as shown in Figure 2, the utility cost can be approximated by a smooth function of temperature. In this way, we avoid introducing 0-1 variables and simplify the modeling of the utility system. We construct a third-order regression of the utility cost, which yields a good approximation; it generally underestimates the actual cost of the utility streams because of the continuous relaxation of the temperature levels of the utilities.
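A minimal sketch of such a third-order fit is shown below; the temperature/cost pairs are invented placeholders, not the utility data of the paper.

```python
# Illustrative fit of a smooth third-order utility-cost curve, used instead
# of discrete 0-1 utility choices. The (T, cost) data are hypothetical.
import numpy as np

T = np.array([-100.0, -40.0, 5.0, 30.0, 120.0, 250.0])   # utility levels [C]
cost = np.array([9.0, 4.5, 1.2, 0.4, 2.0, 6.5])          # specific cost (made up)

coef = np.polyfit(T, cost, 3)          # cubic regression coefficients
utility_cost = np.poly1d(coef)         # smooth cost(T) for the NLP/MINLP

print(utility_cost(-60.0))             # interpolated cost at -60 C
```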
Figure 2: Utility stream temperature.

Figure 3: GAMS optimal solution (total cost: 110.82 M$/yr).
4. Numerical Results
Figure 3 shows the optimal solution for the superstructure shown in Figure 1. First, compressors are used to increase the pressure of the feed stream to the dephlegmator, which separates the hydrogen and methane from the heavier components. The hydrogen/methane mixture is then sent to the cold box, where hydrogen is separated from methane. The cold box operates at low temperature and high pressure, which requires additional refrigeration and compression. The C2-C5 mixture is sent to the deethanizer (distillation column) and the C2 mixture is recovered as the top product. The C3 mixture is separated by the depropanizer, and the C4-C5 mixture is sent to the debutanizer. A chemical absorber is used for the C2 split and a distillation column is used for the C3 split. All the separation units perform sharp separations. Note that there is a heat exchange between the depropanizer and the C3 splitter to reduce the utility cost. The annualized capital cost of the process is 39.1 M$/yr. The power cost for the compressors is 29.8 M$/yr and the utility cost for the separation units is 41.9 M$/yr. The energy cost is about 75% of the total cost. Since the olefin separation process is highly energy-intensive, a significant amount of utility cost can be saved by heat integration. Table 2 shows the statistics of the problem, where it can be seen that the MINLP problem is very large. This MINLP was solved in about 2 hours on a Pentium III PC using GAMS/DICOPT++, an algorithm for MINLP problems that is based on Outer Approximation (OA) with Equality Relaxation and Augmented Penalty (Viswanathan and Grossmann, 1990). CPLEX was used as the MILP solver and CONOPT2 as the NLP solver in GAMS (release 20.7). A heuristic termination criterion was used, based on the lack of improvement in the objective.

Table 2: GAMS computational results.

MINLP problem size
  Number of constraints         52,703
  Number of variables           24,475
  Number of binary variables     5,851

DICOPT++ solution
  Number of iterations               5
  CPU seconds                    8,778
  First integer solution    142.9 M$/yr
5. References
Brooke, A., Kendrick, D., Meeraus, A. and Raman, R., 1997, GAMS Language Guide, Release 2.25, Version 92, GAMS Development Corporation.
Biegler, L.T., Grossmann, I.E. and Westerberg, A.W., 1997, Systematic Methods of Chemical Process Design, Prentice Hall, New Jersey.
Douglas, J.M., 1988, Conceptual Design of Chemical Processes, McGraw-Hill, NY.
Duran, M.A. and Grossmann, I.E., 1986, An Outer-Approximation Algorithm for a Class of Mixed-Integer Nonlinear Programs, Math. Prog., 36, 307.
Lee, S. and Grossmann, I.E., 2000, New Algorithms for Nonlinear Generalized Disjunctive Programming, Comp. Chem. Eng., 24, 2125.
Reid, R.C., Prausnitz, J.M. and Sherwood, T.K., 1977, The Properties of Gases and Liquids, 3rd edition, McGraw-Hill, New York.
Viswanathan, J. and Grossmann, I.E., 1990, A Combined Penalty Function and Outer-Approximation Method for MINLP Optimization, Comp. Chem. Eng., 14, 769.
Yeomans, H. and Grossmann, I.E., 1999a, A Systematic Modeling Framework of Superstructure Optimization in Process Synthesis, Comp. Chem. Eng., 23, 709.
Yeomans, H. and Grossmann, I.E., 1999b, Nonlinear Disjunctive Programming Models for the Synthesis of Heat Integrated Distillation Sequences, Comp. Chem. Eng., 23, 1127.
6. Acknowledgments The authors would like to thank BP for financial support of this project
Batch Extractive Distillation with Intermediate Boiling Entrainer
Z. Lelkes, E. Rev, C. Steger, V. Varga, Z. Fonyo, L. Horvath*
Chemical Engineering Department, Budapest University of Technology and Economics, H-1521 Budapest, Hungary
*Res. Lab. Mat. & Env. Sci., Chem. Res. Center of HAS, H-1525 Budapest, Hungary
Abstract
The feasibility of batch extractive distillation in a rectifying column with a middle boiling entrainer is studied. Separation of methyl acetate and cyclohexane (forming a minimum boiling azeotrope) using carbon tetrachloride, and separation of chloroform and ethyl acetate (forming a maximum boiling azeotrope) using 2-chlorobutane, are theoretically studied based on profile maps and rigorous simulation. Non-extractive distillation with pre-mixing of the entrainer to the charge is also studied in both cases. The feasibility of the processes is demonstrated with laboratory-scale experiments. Versions of BED according to the types of the azeotrope and of the entrainer are compared, and the decisive properties are pointed out.
1. Introduction
Extractive distillation is an efficient separation method for non-ideal and azeotropic mixtures; see, amongst others, Widagdo and Seider (1996), Knapp and Doherty (1994), Wahnschafft and Westerberg (1993). For realisation of extractive distillation in batch (BED), the simplest configuration is the rectifier with continuous entrainer feeding into the column, as shown in Fig 1. More sophisticated configurations such as a middle vessel column (e.g. Safrit et al., 1997; Warter and Stichlmair, 1999) could also be used, but so far BED experimental results have been published mainly for the rectifier (Yatim et al., 1993; Lang et al., 1994; Lelkes et al., 1998b; Milani, 1999). According to Laroche et al. (1991), the entrainer for homoazeotropic distillation can be the lightest, the heaviest, or even the intermediate boiling component in the system. Lang, Lelkes, and co-workers (1998, 1999) analysed the feasibility of separating a minimum-boiling azeotrope applying BED with light and heavy entrainers, and of separating a maximum-boiling azeotrope applying BED with a heavy entrainer. Another possibility for separating mixtures applying entrainers is the so-called solvent enhanced batch distillation (SBD). In the case of SBD the entrainer is not fed continuously into the column; rather, it is pre-mixed to the mixture at the beginning of the process; see e.g. Bernot et al. (1991), Rodriguez-Donis et al. (2001a). Even heteroazeotropic batch distillation (Rodriguez-Donis et al., 2001b) could also be used. Our test mixtures, selected according to didactic viewpoints, are methyl acetate and cyclohexane with carbon tetrachloride for the minimum boiling azeotrope, and chloroform and ethyl acetate with 2-chlorobutane for the maximum boiling azeotrope. Operation steps (sequencing), limiting flows, and limiting stage numbers are determined by a feasibility study based on profile maps; the design is validated by rigorous simulation.
Comparison of SBD and BED using an intermediate entrainer, according to effectiveness, and the use of light, intermediate, and heavy entrainers are presented.
Figure 1 Sections of BED (rectification section and extractive section).

Figure 2 Extractive and rectification profiles at R = ∞, F/V = 0.5, xD = [0.9; 0.05; 0.05]; methyl acetate (A) - cyclohexane (B) - CCl4 (E), with stable node SN.
2. Minimum Boiling Azeotrope with Intermediate Boiling Entrainer
2.1. Feasibility of SBD
Determination of the separation sequence applying SBD is based on the residue curve map. According to Bernot et al. (1991), the separation with SBD is feasible in a stripper only. In a rectifier the first product is the azeotropic mixture, regardless of the initial position of the still composition. Thus, separation of minimum boiling azeotropes with SBD in a rectifier is infeasible.
2.2. Feasibility of BED
With continuous entrainer feeding into the column the separation becomes feasible (Rev et al., 2003). Rectification and extractive profiles at F/V = 0.5 and R = ∞ are shown in Fig 2. A feasible rectification profile is drawn in the figure by a bold line. There is a stable node (SN) of the extractive profiles near the A-E edge. As all the extractive profiles arrive in the neighbourhood of SN, they all cross the specified rectification profile; therefore the rectification profile can be reached by an extractive profile from the still even if the still composition is in the middle of the composition triangle. On the basis of the residue curve map and of the map of feasible extractive profiles, the operation steps can be established as shown in Table 1 (column c). Moreover, the locus of SN does not depend on the still composition; thus a roughly constant distillate composition can be maintained during the production step of the process.
2.3. Simulation results
Simulation of steps 2 and 3 is performed with Nextr = 15, Nrect = 15, Q = 1.5 kW, F = 0.085 kmol/h, R = 10.0, xCh = [0.5; 0.5; 0.0], H = 6 litre = 0.0645 kmol. The given feed flow rate and boiling power roughly correspond to a feed ratio of F/V ≈ 0.5 (see the check below). The computed recovery ratio varies between 85% (at high productivity) and 97-98% (at lower productivity). The results are in good agreement with our expectations.
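The following back-of-the-envelope check illustrates why these settings give F/V ≈ 0.5; the molar heat of vaporisation (about 30 kJ/mol) is an assumed round value, not a figure from the paper.

```python
# Check that Q = 1.5 kW and F = 0.085 kmol/h correspond to F/V ~ 0.5,
# assuming a molar heat of vaporisation of ~30 kJ/mol (made-up round value).
Q = 1500.0                  # boiling power [W]
dHvap = 30e3                # assumed heat of vaporisation [J/mol]
V = Q / dHvap               # vapour rate [mol/s] -> 0.050
F = 0.085 * 1000.0 / 3600.0 # feed rate [mol/s]  -> 0.0236
print(V, F / V)             # F/V ~ 0.47, i.e. roughly 0.5
```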
Table 1: Step sequences of BED using different types of entrainers.

(a) Minimum boiling azeotrope, heavy entrainer: heat-up (R = ∞, F = 0); run-up (R = ∞, F > 0); 1st cut (R < ∞, F > 0): A; 2nd cut (R < ∞, F = 0): B; residue: E.
(b) Maximum boiling azeotrope, heavy entrainer: heat-up (R = ∞, F = 0); (run-up (R = ∞, F > 0)); 1st cut (R < ∞, F > 0): A; 2nd cut (R < ∞, F = 0): B; residue: E.
(c) Minimum boiling azeotrope, intermediate entrainer: heat-up (R = ∞, F = 0); run-up (R = ∞, F > 0); 1st cut (R < ∞, F > 0): A; 2nd cut (R < ∞, F = 0): E; residue: B.
(d) Maximum boiling azeotrope, intermediate entrainer: heat-up (R = ∞, F = 0); 1st cut (R < ∞, F > 0): EA; residue: B; reload EA; next cut (R < ∞, F = 0): A; residue: E.
(e) Minimum boiling azeotrope, light entrainer: heat-up (R = ∞, F = 0); 1st cut (R < ∞, F > 0): EA; residue: B; reload EA; next cut (R < ∞, F = 0): E; residue: A.

Main contaminant in A: (a) B; (b) B; (c) E; (d) E; (e) E.
Bubble point ranking: (a) AB, A, B, E; (b) A, B, AB, E; (c) AB, A, E, B; (d) A, E, B, AB; (e) E, AB, A, B.
3. Maximum Boiling Azeotrope with Intermediate Boiling Entrainer
3.1. Feasibility of SBD and BED Producing Pure A
According to Bernot et al. (1991), separation of a maximum boiling azeotrope is feasible applying SBD in a rectifier, because the full composition triangle constitutes a single distillation region. Rigorous simulations have been performed (Lelkes et al., 2002) to validate this process. However, even moderately pure chloroform (xA,D ≈ 0.9) could not be produced with a great number of theoretical stages (N = 100) and an unacceptably great reflux ratio (R = 70). We concluded that a pure product cannot be achieved in this way. The study of the feasibility of BED led to the same result; thus the separation is not feasible with continuous entrainer feeding either.
3.2. Feasibility of SBD and BED Producing AE Mixture in the First Production Step
Although pure A cannot be produced in the first production step, a mixture of A and E (chloroform and 2-chlorobutane) can be produced, and later separated, because A and E do not form any azeotrope in our test mixture. In order to obtain pure A in a later step, a reduced mole fraction of A, xA,R = xA/(xA + xB), is specified in the distillate. With a high enough value, e.g. xA,R = 0.98, this assigns a narrow triangle along the A-E edge as the range of acceptable distillate compositions. The feasible region for R = 9 and the evolution of the still paths for both SBD and BED are shown in Fig 3. The separation of the azeotrope is feasible in both cases, but BED is more advantageous, because the continuous feeding of the entrainer drives the still path in the direction of edge B-E. Since the feasibility region valid for SBD reaches edge B-E, feeding to the still is sufficient. That is, the feed need not be applied to the
column directly, and distillation in step 2 (in terms of Table 1) with the same recovery specification can be started with a smaller amount of pre-mixed entrainer.
4. Experimental Results Feasibility of the novel processes was demonstrated with experiments in a laboratory scale packed column (Rev et al., 2002). The column was made of glass with a height of 1.5 m and inside diameter of 5 cm. The initial still hold-up was 1 litre in all of the experiments. For demonstrating the feasibility of separating the minimum boiling azeotrope with middle boiling entrainer, equimolar binary mixture was initially charged into the still. The distillation path in step 2 crossed the isovolatility curve. It means that the azeotrope can be broken in this way; and the separation of a minimum boiling azeotrope with intermediate boiling entrainer is possible by applying BED in a rectifier. In the case of a maximum boiling azeotrope, the distillation path can be driven along the AE side of the triangle by applying BED with intermediate boiling entrainer. For demonstrating this possibility, two experiments were performed with identical specifications. The distillation path in the SBD experiment turned sharply inside of the triangle when the still path reached the boundary of the feasible region. On the contrary, the distillation path in the BED experiment ran along the AE side. This latter result can be considered as an effect of the continuous entrainer feeding into the still. It can be
201 concluded that BED is more advantageous separating a maximum boiling azeotrope with middle boiling entrainer than SBD.
5. Comparison of the Different Cases Separation of minimum boiling azeotropes with light and heavy entrainers, and that of maximum boiling azeotrope with heavy entrainer, all applying BED, have earlier been investigated. The study on separating minimum and maximum boiling azeotropes with intermediate boiling entrainer makes possible to conclude viewpoints of designing an effective separation process. Table 1 presents the separation steps for the different cases. Table 2 summarises the main limiting parameters of the separating processes. On the basis of comparing the corresponding columns of these two tables, the most important information for designing a BED process is the relative position of the entrainer to the azeotrope according to the series of the characteristic bubble points in the studied system. Table 2: The essential limiting parameters of the studied separation process. Azeotrope Entrainer ^min
(a) Minimum azeotrope, heavy entrainer: Fmin, Nmin,rect, Nmax,rect, Nmin,extr, F/Vmin (at R = ∞); boundary: separatrix of the extractive profiles.
(b) Maximum azeotrope, heavy entrainer: Fmin, Nmin,rect, Nmin,extr; boundary: the specified rectification profile and the corresponding extractive profile.
(c) Minimum azeotrope, intermediate entrainer: Fmin, Nmin,rect, Nmax,rect, Nmin,extr, F/Vmin (at R = ∞); boundary: separatrix of the extractive profiles.
(d) Maximum azeotrope, intermediate entrainer: Nmin,rect, Hpre-mix,min; boundary: envelope of the rectification profiles (this can be extended marginally by extractive profiles).
(e) Minimum azeotrope, light entrainer: Nmin,rect, Hpre-mix,min; boundary: envelope of the rectification profiles (this can be extended marginally by extractive profiles).
If the bubble point of the entrainer is higher than that of the azeotrope, then pure products can be maintained, and the operation steps and limiting parameters are almost the same for all the three systems (Table 1, columns a-c). There are some differences for the maximum boiling azeotrope with a heavy entrainer. The run-up step (Table 1, column b), which serves for the purification of the first cut product, can be omitted if xCh,A is greater than xAz,A (see Lang et al., 2000). The other difference is the non-existence of the limiting parameters (Table 2, column b) Nmax,rect and F/Vmin at R = ∞. These parameters do not appear here because the 1st cut is an unstable node of the residue curve map in the case of separating a maximum boiling azeotrope, and is a saddle point in the case of separating a minimum boiling azeotrope. If the bubble point of the entrainer is lower than that of the azeotrope, then pure products cannot be obtained at the beginning of the process, but binary mixtures, without any azeotrope, can be produced. The separation schemes for the minimum boiling azeotrope with a light entrainer and for the maximum boiling azeotrope with an intermediate entrainer are similar, and even their other characteristics are identical. SBD is feasible in both cases; BED is applied just for increasing the effectiveness of the process. The entrainer (component E), instead of B, becomes the main contaminant of A if the entrainer is not the heaviest component. This can be an important viewpoint if A should be produced as free of B as possible.
6. Conclusion
The feasibility of batch extractive distillation in a rectifying column with a middle boiling entrainer was studied for both minimum and maximum boiling azeotropes. Separation of a minimum boiling azeotrope with an intermediate boiling entrainer is feasible in a rectifier applying BED. Separation of a maximum boiling azeotrope with an intermediate boiling entrainer is feasible in a rectifier applying either SBD or BED, but the application of BED is more advantageous. The results of the feasibility method were validated by rigorous simulations. The main steps of the separations were verified by laboratory-scale experiments. The results of the studies on separating minimum and maximum boiling azeotropes with light, intermediate, or heavy entrainers are compared according to their operation steps and feasibility domains. The decisive property for designing an effective BED process for separating azeotropes is the relative position of the entrainer to the azeotrope in the bubble point series. However, the type of the azeotrope (minimum or maximum) can modify the existence of some limiting parameters (F/Vmin, Nmax,rect).
7. References
Knapp, J.P., Doherty, M.F., 1994, AIChE J., 40, 243.
Lang, P., et al., 2000, Comp. Chem. Eng., 24, 1665.
Lang, P., et al., 1999, Comp. Chem. Eng., 23, S93.
Lang, P., et al., 1994, Comp. Chem. Eng., 18, 1057.
Lelkes, Z., et al., 1998a, AIChE J., 44, 810.
Lelkes, Z., et al., 1998b, Chem. Eng. Sci., 53, 1331.
Milani, S.M., 1999, Trans IChemE, 77, 469.
Rodriguez-Donis, I., et al., 2001a, Ind. Eng. Chem. Res., 40, 2729.
Rodriguez-Donis, I., et al., 2001b, Ind. Eng. Chem. Res., 40, 4935.
Safrit, B.T., Westerberg, A.W., 1997, Ind. Eng. Chem. Res., 36, 436.
Wahnschafft, O.M., Westerberg, A.W., 1993, Ind. Eng. Chem. Res., 32, 1108.
Warter, M., Stichlmair, J., 1999, Comp. Chem. Eng., 23, S915.
Widagdo, S., Seider, W.D., 1996, AIChE J., 42, 96.
Yatim, H.P., et al., 1993, Comp. Chem. Eng., 17, S57.
Bernot, C., et al., 1991, Chem. Eng. Sci., 46, 1331.
Rev, E., et al., 2003, Ind. Eng. Chem. Res., 42, 162.
Lelkes, Z., et al., 2002, AIChE J., 48, 2524.
Rev, E., et al., 2002, Proc. of Distillation and Absorption 2002, Baden-Baden, Germany.
8. Acknowledgement
This study has been supported by OTKA T037191, F035085, T030176, and AKP 2001-112.
Short-cut Design of Batch Extractive Distillation using MINLP
Z. Lelkes, Z. Szitkai, T. Farkas, E. Rev, Z. Fonyo
Budapest Univ. Dept. Chem. Eng., 1521 Budapest, [email protected]
Abstract
An automatic design method for batch extractive distillation, one of the most important techniques for separating low relative volatility or azeotropic mixtures, is presented. Example calculations are performed for the acetone-methanol mixture using water as the entrainer. The NLP and MINLP problems are solved by applying GAMS DICOPT++.
1. Scope
For realizing extractive distillation in batch (BED), the simplest configuration is the rectifier with continuous entrainer feeding into the column (Fig. 1). More sophisticated configurations, such as a middle vessel column, could also be used (e.g. Safrit et al., 1997; Warter and Stichlmair, 1999); but so far BED experimental results have been published mainly for the rectifier (Yatim et al., 1993; Lang et al., 1994; Lelkes et al., 1998a,b; Milani, 1999). To the best of our knowledge, no automatic design method for batch extractive distillation has yet been elaborated. In the case of the well-elaborated continuous distillation and conventional batch distillation processes, the reflux ratio plays a single deterministic role, and methodologies for determining the minimum flow rates and the minimum number of theoretical stages are well known and easily applicable. On the contrary, the design of BED is far from a trivial task. Even the feasibility of the process is always a question. Not just the optimal values, but even the feasible ranges of the essential parameters, namely of the reflux ratio and the feed ratio, both at a given heat duty, are difficult to estimate. The feasibility of the process can easily be missed by a blind search for appropriate flow rates. That is why special methodologies for determining the feasible range have to be developed. A search for optimal design parameters cannot be performed effectively before good estimates of the minimum and maximum values are given. The most important limiting values of BED are the minimum feed flow rate (Fmin) at given reflux ratios (R), particularly at infinite reflux ratio (R = ∞), the minimum reflux ratio (Rmin), the minimum number of theoretical stages in the extractive section (Nmin,extr), and the minimum and maximum numbers of theoretical stages in the rectifying section (Nmin,rect and Nmax,rect). A maximum reflux ratio is, in some cases, also an interesting property of continuous extractive distillation. However, this phenomenon does not occur in the most frequent case of this process, which we consider now (see the next paragraph), if V/L is high enough. There are several cases of BED, according to the type of the system to be separated and to the type of the residue curve map of the system formed with the entrainer. We deal with the most common case: separating a minimum boiling azeotrope with a heavy boiling entrainer not forming any new azeotrope. Our example mixture is (acetone / methanol) + water. The short-cut design methodology is based on the feasibility analysis developed by Lelkes et al. (1998a), and uses MINLP. The design methodology consists of two main parts: limiting values of the most important design and operating
parameters are first determined; then optimization is performed, both steps using mathematical programming tools.
2. Feasibility Method for BED
Here we shortly review the feasibility analysis (Lelkes et al., 1998a). We deal with BED performed in a batch rectifier consisting of three parts (see Fig. 1), namely (1) a rectification section (RS) including the condenser, the reflux divider, and the stages above the feed, (2) an extractive section (ES) including the feed stage and the stages between the feed and the still, and (3) the still itself. The feasibility study is based on the analysis of a map of feasible column profiles and of the still path. The following assumptions are applied in the model: no holdup exists apart from the still; dynamics are approximated through quasi steady states; constant molar overflow and boiling point feed and reflux streams are assumed. The concentration profiles are described by the differential eq 1: dx/dh = ±(V/L)(y(x) − y*(x)), where y*(x) is the equilibrium vapor composition and y(x) is calculated from the operating line. xD is applied as the initial value for solving eq 1 in the RS; the momentary composition in the still is applied as the initial value in solving for the ES, with appropriately changed sign of eq 1. The actual value of V/L and D is a function of V, R and F. For example, the following relations hold: (V/L)rect = V/(V − D), (V/L)extr = V/(V − D + F), D = V/(R + 1). The still path is determined by the differential component balance eq 2: d(Hs·xs)/dt = F·xF − D·xD; the initial value is the charge in the still. BED is called feasible if a column state with the specified product purity (even with zero recovery) can be reached from the initial state (charge in the still, and empty column). A necessary and sufficient condition for feasibility is the existence of an extractive profile connecting the still path to a rectification profile reaching the specified distillate composition. This can be checked by analyzing the profile maps (see Fig. 2).
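A minimal sketch of integrating eq 1 downward from xD in the rectification section is given below; the constant-relative-volatility y*(x) is a placeholder VLE model and the numbers are illustrative, not the thermodynamics or data used by the authors.

```python
# Sketch of integrating eq 1, dx/dh = (V/L)(y(x) - y*(x)), in the RS.
# y_eq is a made-up constant-relative-volatility stand-in for the real VLE.
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.array([2.0, 1.2, 0.3])     # hypothetical volatilities (A, B, E)

def y_eq(x):
    w = alpha * np.clip(x, 0.0, 1.0)
    return w / w.sum()                 # equilibrium vapour composition y*(x)

def rect_profile(h, x, R, xD):
    L_over_V = R / (R + 1.0)                       # constant molar overflow
    y = L_over_V * x + (1.0 - L_over_V) * xD       # operating line of the RS
    return (1.0 / L_over_V) * (y - y_eq(x))        # eq 1 with V/L = 1/(L/V)

xD = np.array([0.94, 0.025, 0.035])
sol = solve_ivp(rect_profile, (0.0, 10.0), xD, args=(4.0, xD), max_step=0.25)
print(sol.y[:, -1])       # liquid composition ten "stages" below the condenser
```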
Figure 1. Batch rectifier for BED: (1) rectification section, (2) extractive section, (3) still.

Figure 2. Profile map for acetone (A) / methanol (B) / water (E) at R = ∞, F/V = 0.58: residue curves, a possible extractive profile, and the stable node SN.
In the case of a heavy solvent, with total reflux and an appropriately selected feed flow rate, the feasible extractive profiles (EPs) from any point in the composition triangle run to a stable node (SN) situated near or on the AE edge. From the neighborhood of this SN point, a high purity distillate composition can be reached by a rectification profile. Having analyzed the profile maps, the following operation sequence can be concluded: Step 1: R = ∞, F = 0: heat-up; its time is Δt1. Step 2: R = ∞, F > 0: start-up; its time is Δt2. Step 3: R < ∞, F > 0: production of component A; its time is Δt3. Step 4: R < ∞, F = 0.
3. Basic Data, Targets and Tools
Before outlining the method in detail, we provide a list of general considerations:
Specified data: charge composition, distillate composition, stage holdup, vapor rate or boiling duty, and either operation time or minimum recovery.
Limiting values to be determined: Fmin, Nmin,extr, and Nmin,rect at R = ∞, and Rmin for a given feasible feed flow rate F.
Target: determination of near optimal design and operation parameters with an operating policy of constant R and F. Operating the column is much more convenient this way compared to variable flow rates, and it is shown by Lelkes et al. (1998b) that this strategy is not very far from the optimal one.
Objective: in this article we apply the following objective function, eq 3:
obj = [CQ · (heat-duty term) + CF · (entrainer term)] / [CN · (NR + NE)];  specific cost ratio: 0.6729 mol⁻¹    (3)
where CQ and CF are costs related to the heat duty and to the entrainer consumption, respectively, while CN is the specific capital cost of a stage. The total cost is related to this roughly estimated capital; specific cost ratios are thus applied in the objective function.
Computation: all the short-cut computation, including both the equation solving and the optimization steps, is performed using GAMS DICOPT++ (Brook et al., 1992).
Example: our example mixture is (acetone (A) / methanol (B)) + water (E); xD,spec = [0.94; 0.025; 0.035], P = 760 torr, V = 48 mmol/s. The charge is 170.82 mol with equimolar A-B composition: xCh = [0.5; 0.5; 0.0]. The specified recovery of A (acetone) is ΣD,A = 70 mol, i.e. η = 0.8196. The volumetric hold-up per tray is Hv = 70 ml.
4. Finding Limiting Values with Mathematical Programming
4.1. Minimum feed flow rate (Fmin)
Fmin is minimal at given R and V if N = ∞ is necessary to achieve the specified separation (xS to xD) even with zero recovery. We determine Fmin(R = ∞) because Fmin(R = ∞) > Fmin(R < ∞). As is seen in Figure 3, SN just sits on the rectification profile of a pure enough distillate composition if F = Fmin.
"71 ,
Methanol (B)
1
1. F=0.008 mol/s; F>F„i„
1
2. F=0.007 mol/s; F=Finin 3. F=0.006 mol/s; F
// ' /^ /""^^
Wa er(E)
Figure 3.
___^ " "~—
~^-^^^^X.
4. F=0.004 mol/s; F
active tray \
<
11
D=0,
•
XD
5. F=0.002 mol/s; F
~^
inactive tray ^4
^ 's-^ ;3 ^ ><-^x 1 2
i"-
Acetone (A)
L,x
Figure 4.
v,ys
Since the rectification profile runs along the AE edge, SN is on AE or under it (outside the triangle) for F > Fmin. If SN is inside the triangle, then F < Fmin. This task can be accomplished by assigning a small ε > 0.0 and finding an SN point, as a function of F/V, with xB = ε. The criterion of a point being an SN is the equality of y to y*. On the other hand, SN compositions inside the triangle all coincide with the isovolatility curve αAB = 1 at total reflux (Safrit et al., 1997; Lelkes et al., 1998a); therefore Fmin(R = ∞) can be determined based on the isovolatility curve as well. This task can be accomplished by assigning a small ε > 0.0 and finding a point with the isovolatility property at xB = ε. Then the corresponding F/V ratio can be determined from the condition that this point lies on an extractive profile [y = (F/V + 1)x − (F/V)xF], thus (F/V)min = [y*A(x) − xA]/[xA − xF,A]. In this way the SN criterion (y = y*) is not used. Both methods were tried for xD = [0.94; 0.025; 0.035], P = 760 torr, V = 48 mmol/s, with different trial values of ε, giving the same results: Fmin = 6.133 mmol/s (ε = 0.01).
4.2. Minimum stage numbers (Nmin,extr and Nmin,rect)
(xD,A)min should be specified for determining the minimum stage numbers. A stage number Nmin is minimal at R = ∞ and given F/V if xD,A ≥ (xD,A)min at N ≥ Nmin and xD,A < (xD,A)min at N < Nmin.
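Returning to the second method of Sec. 4.1, the isovolatility route to (F/V)min can be sketched as follows; the composition dependence of αAB and the volatilities are made-up stand-ins, so the numbers are purely illustrative.

```python
# Sketch of the isovolatility route to (F/V)min: locate alpha_AB = 1 at
# x_B = eps, then use the extractive-profile relation y = (F/V+1)x - (F/V)xF.
from scipy.optimize import brentq

eps = 0.01                                   # small x_B: point near the A-E edge

def alpha_AB(xA):
    xE = 1.0 - xA - eps                      # hypothetical entrainer effect:
    return 0.90 + 0.60 * xE                  # A/B volatility grows with x_E

# point on the isovolatility curve (alpha_AB = 1) at x_B = eps
xA = brentq(lambda x: alpha_AB(x) - 1.0, 1e-6, 1.0 - eps - 1e-6)
xE = 1.0 - xA - eps

# toy equilibrium vapour mole fraction of A at that point
aA, aE = 2.2, 0.3
aB = aA / alpha_AB(xA)                       # equals aA here since alpha_AB = 1
yA = aA * xA / (aA * xA + aB * eps + aE * xE)

FV_min = (yA - xA) / (xA - 0.0)              # xF,A = 0: pure entrainer is fed
print(xA, FV_min)
```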
Figure 5. Possible extractive profiles and the feasibility boundary at R = 4, F/V = 0.58.

Figure 6. Boundary approximated by a sequence of points (extractive profiles at R = 4 and 10, F/V = 0.833).
4.3. Minimum reflux ratio (Rmin)
A reflux ratio Rmin is minimal (at a constant F/V) if xD is reachable from xCh at R = Rmin but with zero recovery. The specified xD is reachable if there is a still path leading to a momentary xS that can be connected to xD with a column profile. The actual situation of the feasibility boundary depends on R. All the points of the calculated still path lie in Region II for any R < Rmin. Therefore, at R = Rmin the still path is tangent to the boundary (Lelkes et al., 1998a). That is why the boundary is to be calculated for determining Rmin. The approximation of the boundary is based on three observations: (1) the location of UN is rather insensitive to F/V and R; it is situated near corner E; (2) no extractive profile exists starting from edge AB and reaching the neighborhood of E; (3) the feasibility region is convex; the tangents drawn to any point of the boundary lie outside the region. The boundary is approximated by finite differences, as follows (see Fig. 6). The tangent at any point xp of the boundary can be given as x = xp + λΔx, where Δx is the actual slope of the boundary; the latter can be approximated as Δx = (V/L)[y(xp) − y*(xp)]Δh, according to eq 1. The boundary can thus be approximated by a sequence of points {x0, x1, ..., xK} satisfying the following relations: |xK − xUN| < ε (a small positive real), xK,E > xK−1,E > ... > x0,E = 0.0, and xp+1 = xp + λpΔx for some real λp, where Δx = (V/L)[y(xp) − y*(xp)]Δh. Although xUN could be determined, xUN ≈ xE = [0.0; 0.0; 1.0] may be substituted; thus xK,E = 1 − ε′ is applied instead of |xK − xUN| < ε above. Apart from the details of calculating the equilibrium vapor composition y*(x), the boundary for given xp, R, and F/V can be determined by solving an equation system. Rmin can be determined by modifying the above system of equations as follows. Denote the points of the straight-line section connecting xCh to xF (in our case xF = xE, the corner E) by x′p and the points of the approximating boundary by xp. Discretize the composition domain [0.0; 1.0] of the mole fraction of entrainer into K equal parts and assign the points xp and x′p so that xp,E = x′p,E. Thus the x′p points are all known. Then search for the maximum R subject to the condition that all the points of the still path lie outside of the feasible region. In practice, the boundary with the property that the still path is tangent to it is searched for. Fig. 7 demonstrates our results for F = 28 mmol/s and V = 48 mmol/s (F/V = 0.5833) with K = 20. The calculated minimum is Rmin = 0.597. The boundary touches the still path of the second step of the process, and the still path is tangent to the boundary. The computed boundary correctly approaches the real boundary.
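A rough, self-contained sketch of this finite-difference boundary march is given below; the constant-volatility equilibrium model is a stand-in, so the resulting points are schematic rather than a reproduction of the authors' boundary.

```python
# Sketch of the boundary march x_{p+1} = x_p + lam_p * dx, with
# dx = (V/L)(y(x) - y*(x)) dh (eq 1). All physical data are placeholders.
import numpy as np

alpha = np.array([2.0, 1.0, 0.25])            # hypothetical volatilities (A, B, E)

def y_eq(x):
    w = alpha * np.clip(x, 1e-9, None)
    return w / w.sum()

def y_op(x, L_over_V, D_over_V, F_over_V, xD, xF):
    # extractive-section operating line: V y = L x + D xD - F xF
    return L_over_V * x + D_over_V * xD - F_over_V * xF

def boundary(x0, xD, xF, R, F_over_V, K=20, dh=1.0):
    """lam_p scales each tangent step so x_{p+1,E} hits the next grid level."""
    D_over_V = 1.0 / (R + 1.0)
    L_over_V = 1.0 - D_over_V + F_over_V      # L = V - D + F (boiling feed)
    levels = np.linspace(x0[2], 1.0 - 1e-2, K + 1)[1:]
    pts = [np.asarray(x0, float)]
    for xE_next in levels:
        x = pts[-1]
        dx = (1.0 / L_over_V) * (y_op(x, L_over_V, D_over_V, F_over_V, xD, xF)
                                 - y_eq(x)) * dh
        lam = (xE_next - x[2]) / dx[2]        # "some real lam_p"
        pts.append(x + lam * dx)
    return np.array(pts)

xD = np.array([0.94, 0.025, 0.035])
xF = np.array([0.0, 0.0, 1.0])                # pure entrainer feed (corner E)
print(boundary([0.5, 0.5, 0.0], xD, xF, R=0.6, F_over_V=0.58)[-1])
```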
Figure 7. Estimated boundary and still path at R = 0.597, F = 28 mmol/s.

Figure 8. Estimated boundary, extractive profiles, and still path at R = 3.384, F = 39 mmol/s.
5. Optimal Design
Process design of batch extractive distillation is a most complicated task, not just because of the difficulties encountered in the feasibility analysis but also because there are many essential design parameters. These are the reflux ratio, the feed ratio, the stage numbers in the two column sections, and the lengths of the process steps at specified charge composition, charge amount, product purity, recovery, and heat duty in the still. There are also many constraints to be taken into account (inequality constraints for the limits determined in the previous sections, equality constraints modelling the column profiles and the still path, and additional equality constraints applied in estimating the boundary that constitutes a limit for the still path). Determining a near optimal design and
operation via simulation with trial and error is a tedious and rather inefficient procedure. Instead, near optimal parameters can be quickly determined with the suggested method. MINLP optimization can be applied, based on the following considerations. There is rather little change in xD during step 3 at constant R and F, thanks to the semi-batch character of the process. If xD is taken as constant, the still path can be calculated at given R, F, and xF (see Fig. 8). At the end of the second step of the process (at time t2 = t1 + Δt2), the actual still composition xt2 can be calculated from a simple component balance. The still path of step 3 can be determined by integrating eq 2 starting from xS = xt2. The solution is a straight line if xD is constant. The theoretical end of step 3 is the time when the still path reaches the boundary; therefore, xt3 is estimated by the intersection of the still path with the boundary. Once xt2 and xt3 are known, the amount of used solvent, the amount of yielded distillate, and the final recovery can be calculated. Ideally all the variables R, F, Δt2, Δt3, Nextr, and Nrect can be considered as design variables. However, optimising them together proved to be too complex a task for the solver. Therefore, a two-phase, iterative optimization strategy is applied. In the first phase R, F, and Δt3 are optimized with fixed stage numbers and Δt2. In the second phase the stage numbers and Δt2 are optimized with fixed flow rates. The cycle ends if the flow rates do not change. Numerical results are shown in Table 1 below. After two iteration steps the algorithm converged to the values R = 3.384, F = 39 mmol/s, Δt2 = 788.5 s, Δt3 = 6802 s, Nextr = 8, Nrect = 4. Thus the optimization is successfully accomplished. The optimal still path can be seen in Fig. 8. The results are validated by rigorous simulation.
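The two-phase cycle can be sketched as the following skeleton; solve_flows and solve_stages stand for the NLP/MINLP subproblems solved in GAMS and are stubbed here with the values of Table 1 below, purely to make the loop runnable.

```python
# Schematic of the two-phase iteration; the two solver stubs are hypothetical
# stand-ins returning the Table 1 values, not real optimization calls.
def solve_flows(n_extr, n_rect, dt2):        # phase a: stages and dt2 fixed
    return (3.29, 0.042, 6655.0) if n_extr == 12 else (3.384, 0.039, 6802.0)

def solve_stages(R, F, dt3):                 # phase b: flow rates fixed
    return 8, 4, 788.5

def two_phase(n_extr=12, n_rect=8, dt2=1105.0, tol=1e-6):
    R_old = F_old = None
    while True:
        R, F, dt3 = solve_flows(n_extr, n_rect, dt2)
        if R_old is not None and abs(R - R_old) < tol and abs(F - F_old) < tol:
            return R, F, dt2, dt3, n_extr, n_rect   # flows unchanged -> stop
        R_old, F_old = R, F
        n_extr, n_rect, dt2 = solve_stages(R, F, dt3)

print(two_phase())   # -> (3.384, 0.039, 788.5, 6802.0, 8, 4)
```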
Table 1. Results of the iteration steps and the optimal parameters (recovery = 0.7).

Iter.  Reflux  F (mol/s)  Step 2 (s)  Step 3 (s)  Extr. stages     Rect. stages
1/a    3.29    0.042      1105        6655        12 (init. est.)  8
1/b                                               8                4
2/a    3.384   0.039      788.5       6802
2/b                                                                4
3/a    3.384   0.039      788.5       6802
6. References
Brook, A., et al., 1992, GAMS. A User's Guide, Release 2.25, boyd & fraser, USA.
Lang, P., Yatim, H., Moszkowicz, P., Otterbein, M., 1994, Comp. Chem. Eng., 18, 1057.
Lelkes, Z., Lang, P., Benadda, B., Moszkowicz, P., 1998a, AIChE J., 44, 810.
Lelkes, Z., Lang, P., et al., 1998b, Chem. Eng. Sci., 53, 1331.
Milani, S.M., 1999, Trans IChemE, 77, 469.
Safrit, B., Westerberg, A.W., 1997, Ind. Eng. Chem. Res., 36, 436.
Warter, M., Stichlmair, J., 1999, Comp. Chem. Eng., 23, S915.
Yatim, H., Moszkowicz, P., Otterbein, M., Lang, P., 1993, Comp. Chem. Eng., 17, S57.
Yeomans, H., Grossmann, I.E., 2000, Ind. Eng. Chem. Res., 39, 1637-1648.
7. Acknowledgement
This study has been supported by the Hungarian grants OTKA F035085, T037191, and AKP 2001-112.
A Conflict-Based Approach for Process Synthesis with Wastes Minimization
Xiao-Ning Li, Ben-Guang Rong, Andrzej Kraslawski*, and Lars Nystrom
Department of Chemical Technology, Lappeenranta University of Technology, P.O. Box 20, FIN-53851, Lappeenranta, Finland. *E-mail: [email protected]
Abstract
This paper presents a conflict-based analysis approach for process synthesis with wastes minimization, dealing with conflicts of a multi-objective nature in the early design stage. A three-level hierarchical procedure is proposed for carrying out the identification of waste sources, the generation of pollution prevention alternatives, and mathematical optimization. It is shown that a superstructure aimed at wastes minimization can be systematically formulated on the basis of the proposed strategy. The proposed approach has an important potential for achieving a more efficient solution in the stage of mathematical optimization. The approach is illustrated by a case study: the air-based direct oxidation process for the production of ethylene oxide.
1. Introduction
With increasing public pressure and restrictive regulations, wastes treatment has become a more and more important issue in chemical process synthesis. The traditional end-of-pipe treatment approach, which aims to eliminate the pollution once it has been produced, is not an efficient method from the viewpoint of sustainable development. Current research efforts concentrate on the reduction of waste sources using hierarchical approaches, such as the Douglas hierarchical procedure (Douglas, 1992) and the onion diagram (Smith, 1995). However, wastes minimization itself is a complex decision task that involves wastes handling and multi-objective analysis. Wastes treatment always generates contradictions among the design objectives. Those contradictions cannot be handled in a satisfactory way using existing hierarchical decision-making methods. Therefore it is an important theoretical and practical issue to develop a new approach to deal with the contradictions that emerge during the design of processes aiming at wastes minimization. The objective of this paper is to apply multi-objective conflict-based analysis to process synthesis with wastes minimization. A step-by-step systematic approach is proposed to ensure the reduction or elimination of the conflicts with regard to waste handling and multi-objective synthesis. The pollution prevention alternatives are identified, and the superstructure aimed at wastes minimization is formulated for further mathematical optimisation. A case study, the air-based direct oxidation process for the production of ethylene oxide, is presented as an illustration of the proposed approach.
2. Methodology
A three-step approach is proposed for the generation of wastes minimization alternatives (Fig. 1). In the first step, the base case is used to evaluate the current performance of the process and to detect the concerned characteristics that are related to the waste sources. Next, in order to improve those characteristics, the developed matrix is used for selecting suitable heuristics or techniques which ensure the reduction or elimination of the conflicts with regard to wastes handling and multi-objective synthesis. Then the pollution prevention alternatives are identified. Together with the base case, the superstructure aimed at wastes minimization is formulated. The superstructure is modified and verified by checking the identified key parameters of the waste sources and repeating the conflict-based analysis. In the third step, the design alternatives obtained from the superstructure are evaluated using multi-objective optimisation. The procedure is realised by applying simulated annealing with the process simulator ASPEN PLUS.
Step 1, problem analysis and diagnosis: define the wastes minimization problem with the concerned multi-objectives; analyse the characteristics of streams and processes; identify the key parameters for waste sources.
Step 2, superstructure generation: identify available heuristics of pollution prevention by the WM matrix; evaluate and screen the alternatives by multi-objective conflict analysis; formulate the superstructure aimed at wastes minimization.
Step 3, superstructure optimization: simulated annealing + ASPEN PLUS.

Fig. 1. Three-step design methodology.

2.1. Conflict-based methodology
The conflict-based approach is derived from the TRIZ methodology (theory for solving inventive problems) (Altshuller, 1998). TRIZ is a methodology to identify a system's conflicts and contradictions in order to solve inventive problems. The main idea of TRIZ consists in the modification of the technical system by overcoming its internal contradictions. Therefore, it is an efficient method for handling the conflicts among the objectives in order to identify the promising pollution prevention alternatives.
2.2. Matrix of wastes minimization (WM matrix)
A matrix of wastes minimization is formulated for organizing the available heuristics and techniques of wastes minimization. The process is based on the analysis of the heuristics and techniques that improve the characteristics related to the waste sources
and their contributions to the process design objectives. Twelve parameters for the identification of the sources of wastes minimization are extracted. They are placed in the left column of the matrix, as shown in Table 1. The following objectives are considered in this matrix: the economic criteria, product quality, safety, and controllability. Every objective is composed of its sub-objectives, which are listed in Table 2 (Douglas, 1998). In this paper 31 heuristics are selected, based on Halim et al. (2002) and Dantus et al. (1996). The details of the heuristics are not given here owing to space limitations. The heuristics are divided into four groups that deal with: changes of the product, transformation of the input material, modifications of the technology, and good manufacturing practice.

Table 1. Characteristics (parameters) of wastes minimization sources.

1. raw material conditions      7. recycle ratio
2. raw material efficiency      8. purge or emission ratio
3. reaction conversion          9. side product treatments
4. reaction selectivity        10. product specifications
5. separation solvent          11. heat utility efficiency
6. separation efficiency       12. process configuration
Every heuristic is placed into the cell at the intersection of the concerned characteristic and the influenced design objectives. Table 2 shows a fragment of the wastes minimization matrix. The symbols '+' and '-' are assigned to the available heuristics. The symbol '+' means that the concerned process objective is improved when applying the heuristic; '-' means its deterioration. This indicates that the use of the heuristics of wastes minimization results in changes in the values of different objectives, which may lead to conflicts among the concerned objectives. Therefore, via this matrix, the suitable heuristics of wastes minimization can be evaluated and selected based on the analysis of the conflicts among the process objectives.

Table 2. A fragment of the wastes minimization matrix.

Design objectives:
  Economic: E1 raw material cost, E2 equipment cost, E3 utility cost, E4 product profit, E5 start-up cost
  Product quality: Q1 product species, Q2 product amount, Q3 product purity
  Safety: S1 ionising risk, S2 explosion risk, S3 toxicity risk, S4 high T/P risk
  Controllability: C1 operating condition, C2 process flow control, C3 recycle control, C4 unit operability

Entries (fragment): E1: H6(+); E2: H11(+), H22(+); E5: H11(-), H30(-); Q1: H22(+); Q3: H4(+), H5(+), H30(+); C1: H6(-); C4: H22(-).
We have derived the following meta-heuristics for selecting the wastes minimization heuristics, based on the objective conflict-based analysis:
1. Select the heuristics having a positive influence '+' on the concerned design objectives and screen out the heuristics which have a negative effect '-' on the design objectives.
2. Trade off the heuristics having simultaneously positive and negative influences on various design objectives.
Based on the objective-oriented analysis of the heuristics, the alternatives of pollution prevention are evaluated and refined. Moreover, the potentially optimal structures from the point of view of the particular objectives are indicated by the respective analysis of the objectives.
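The two meta-heuristics can be encoded as a simple screening function; the effect table below is a tiny hypothetical stand-in for the real 31-heuristic WM matrix.

```python
# Toy encoding of meta-heuristics 1 and 2; the effect entries are invented.
effects = {
    "H-a": {"economic": +1, "quality": +1},            # only positive influence
    "H-b": {"economic": +1, "controllability": -1},    # mixed influence
    "H-c": {"quality": +1},                            # only positive influence
}

def screen(effects, objectives):
    selected, trade_off = [], []
    for h, eff in effects.items():
        scores = [eff.get(obj, 0) for obj in objectives]
        if any(s > 0 for s in scores) and all(s >= 0 for s in scores):
            selected.append(h)        # meta-heuristic 1: keep pure '+'
        elif any(s > 0 for s in scores) and any(s < 0 for s in scores):
            trade_off.append(h)       # meta-heuristic 2: needs trading off
    return selected, trade_off

print(screen(effects, ["economic", "quality", "safety", "controllability"]))
# -> (['H-a', 'H-c'], ['H-b'])
```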
3. Case study
The proposed approach is applied to the air-based direct oxidation process for the production of ethylene oxide. The following discussion focuses mainly on the identification of the waste sources, the evaluation of the flowsheet alternatives based on the analysis of conflicts among the objectives, and the systematic generation of the superstructure aimed at wastes minimization. A schematic flow diagram of the ethylene oxide process is shown in Figure 2.
Fig. 2. Air-based direct oxidation process for ethylene oxide (primary reactor, purge reactor, primary and purge absorbers, refiner; power and heat recovery from the vent gas).

3.1. Process description
Referring to the Encyclopaedia of Chemical Technology (1980), the process can be divided into three major sections: reaction system, oxide recovery, and oxide purification. In the first section, as described in the Chemical and Process Technology Encyclopaedia (1974), a mixture of ethylene, air and recycle gas, in which the ethylene content is 3-5 vol%, is conducted under a pressure of 10-20 atm gauge to a tubular reactor with a fixed-bed silver catalyst. The following reactions take place during the oxidation of ethylene. The per-pass ethylene conversion in the primary reactors is maintained at 20-50% in order to ensure catalyst selectivity.

CH2=CH2 + 1/2 O2 → C2H4O
CH2=CH2 + 3 O2 → 2 CO2 + 2 H2O
The second section is ethylene oxide recovery from the crude product gas. The produced ethylene oxide is dissolved by the water solvent in the absorber. The unabsorbed gas from the main absorber overhead is split into two portions. The largest portion is recycled to the primary reactor. A smaller portion is fed to the purge reactor system, whose purpose is to allow reaction of a substantial portion of the ethylene content of the purge gas. The gas leaving the purge reactor enters a purge absorber. In the third section, the ethylene oxide-rich water absorbent streams from both absorbers are combined and fed to a desorber. The oxide is distilled at the top with some light gases, which are separated in a stripper. The partially purified oxide is sent to a final refining column.

3.2. Identifying the key parameters for the reduction of the waste sources
Based on the analysis of the process flowsheet, several sources of wastes are generated. An attempt will be made to minimize or eliminate the following wastes:
1. the vent gases from the purge absorber;
2. the vent gas from the stripper;
3. the waste liquid from the refining column;
4. the generation of by-product;
5. the heat loss of the process streams.
The vent comes from the purge absorber and the strippers. It consists of spent air, carbon dioxide and traces of ethylene oxide. It is closely related to the input raw material composition and the purge ratio; the reaction conversion determines the flowrate of the generated by-product; the amount of the discharged waste liquid is affected by the separation efficiency of the oxide recovery and purification process; heat loss prevention is achieved by efficient heat matching strategies. Therefore the following five key parameters are identified to reduce the waste sources: S1 - raw material conditions (1); S2 - reaction conversion (3); S3 - separation efficiency (6); S4 - purge or emission ratio (8); S5 - heat utility efficiency (11).

3.3. Generating the superstructure aimed at wastes minimization
In this work, the following objectives are considered: the economic criteria, product quality and process controllability. Based on the identified key parameters for waste source reduction and the specific process information, the suitable heuristics of wastes minimization are identified and analysed via the WM matrix. For example, from the WM matrix it is seen that there are three heuristics related to changing the raw material condition (S1). Heuristics 4 (material purification) and 6 (material quality updating) are selected since there are no clear conflicts among the design objectives. Heuristic 5 (material substitution) is screened out since it may bring conflicts between the economic criteria and controllability. For improving the reaction conversion (S2) to reduce the generation of the by-product, Heuristic 13 (changing the configuration of the reactor) and Heuristic 14 (optimising the reaction conditions) are selected for this specific process. However, they could increase the equipment cost and cause difficulties in process controllability, as indicated by the WM matrix. Based on the objective-oriented analysis of the heuristics, the pollution prevention alternatives are evaluated and screened. Moreover, the potentially optimal structures from the point of view of the particular objectives are indicated by the respective analysis of the objectives.
As a result of the analysis of the conflicts among the objectives, the suitable heuristics are selected for the generation of the superstructure aimed at wastes minimization. Figure 3 shows the superstructure with the highlighted waste sources. It involves alternatives such as improving the input material by adding an air purifier (H4) or using highly purified oxygen (H5), using two parallel reactors (H13) and a higher reactor pressure (H14) to increase the conversion rate, adding a catalytic converter at the main process vent for modifying the emission ratio (H27, material loss prevention), adding an additive to the water for more efficient absorption (H15, choice of a suitable separation solvent), optimising the heat exchange network for improving the heat utility efficiency (H19, heat integration), etc. In consequence, a superstructure favourable to pollution prevention is formulated. The compact and efficient superstructure is the foundation for the search for the optimal solution through multi-objective optimisation.
Fig. 3. The illustration of the wastes minimization centred superstructure (AP: air purifier; CV: catalytic converter; P: high pressure; A: additive; --: process alternatives).
4. Conclusions
An approach based on conflict analysis is presented for process synthesis with wastes minimization. It deals with the conflicts among the objectives in the early design stage. A matrix of wastes minimization is formulated to assist in the conflict analysis. It contains 12 characteristics and 31 heuristics for wastes minimization. A three-level hierarchical procedure is proposed for carrying out the identification of waste sources, the generation of pollution prevention alternatives and mathematical optimization. In order to illustrate the proposed approach, the air-based direct oxidation process for the production of ethylene oxide is studied. It has been shown that the superstructure aimed at wastes minimization can be systematically formulated on the basis of the proposed strategy. This has an important potential for facilitating the generation of efficient solutions in the stage of mathematical optimization.
5. References
Altshuller, G., 1998, 40 Principles: TRIZ Keys to Technical Innovation, MA, USA.
Chemical and Process Technology Encyclopaedia, 1974, 443, McGraw-Hill, USA.
Dantus, M.M., High, K.A., 1996, Ind. Eng. Chem. Res., 35, 4566.
Douglas, J.M., 1998, Conceptual Design of Chemical Processes, McGraw-Hill, New York.
Douglas, J.M., 1992, Ind. Eng. Chem. Res., 31, 238.
Encyclopaedia of Chemical Technology, 1980, third edition, 9, 441, Wiley-Interscience.
Halim, I., Srinivasan, R., 2002, Ind. Eng. Chem. Res., 41, 196.
Smith, R., 1995, Chemical Process Design, McGraw-Hill, New York.
A New Continuous-Time State Task Network Formulation for Short Term Scheduling of Multipurpose Batch Plants
Christos T. Maravelias, Ignacio E. Grossmann*
Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Abstract
A new continuous-time MILP model for the short-term scheduling of multipurpose batch plants is presented. The proposed model relies on the idea of the State Task Network (STN) and addresses the general problem of batch scheduling, accounting for resources other than equipment (utilities), variable batch sizes and processing times, various storage policies (UIS/FIS/NIS/ZW), and batch splitting/mixing. Compared to other general continuous STN formulations, the proposed model is more efficient. Compared to event-driven formulations, it often gives better solutions while being equally fast. The application of the model is illustrated through an example problem.
1. Introduction The problem of short-term scheduling of multipurpose batch plants has received considerable attention during the last decade. Kondili et al. (1993) introduced the notion of State Task Network (STN) and proposed a discrete-time MIL? model, where the time horizon is divided into time periods of equal duration. Shah et al. (1993) developed a reformulation and specific techniques to reduce the computational times for discretetime STN models. Pantelides (1995) proposed the alternative representation and formulation of the Resource Task Network (RTN); Schilling and Pantelides (1996) developed a continuous-time MILP model, based on the RTN representation, and a novel branch-and-bound algorithm that branches on both continuous and discrete variables. Mockus and Reklaitis (1999) and Zhang and Sargent (1996) proposed MINLP continuous-time representations for the scheduling of batch and continuous processes. lerapetritou and Floudas (1998) proposed a new MILP formulation, based on time events, for the scheduling of batch and continuous multipurpose plants. Several authors have recently proposed related models for the scheduling of multipurpose batch plants (Castro et al. (2001); Lee et al. (2001)). In this work, we propose a new general State Task Network MILP model for the shortterm scheduling of multipurpose batch plants in continuous time that accounts for resource constraints other than equipment (utilities), variable batch sizes and processing times, various storage policies (UIS/FIS/NIS/ZW) and allows for batch mixing/splitting. The proposed model can be extended to account for multiple intermediate due dates. It is computationally more efficient than other general STN/RTN models and equally efficient to the less general event-based models. Extensive computational comparisons can be found in Maravelias and Grossmann (2003). The key features of the proposed model are the following: (a) the time horizon is divided into a continuous time grid.
* To whom correspondence should be addressed. E-mail: [email protected]
216 common for all units, (b) assignment constraints are expressed using binary variables that are defined only for tasks, not for units, (c) start times of tasks are eliminated, and (d) a new class of valid inequalities that improves the LP relaxation is added to the MILP formulation.
2. Problem Statement We assume that we are given: (i) a fixed or variable time horizon (ii) the available units and storage tanks, and their capacities (iii) the available resources and their upper limits (iv) the production recipe (mass balance coefficients, utility requirements) (v) the maximum batch size and processing time data (vi) the amounts of available raw materials (vii) the prices of final products The goal is to determine: (i) the sequence and the timing of tasks taking place in each unit (ii) the batch size of tasks (i.e. the processing time and the required utilities) (iii) the amount of final products sold The proposed model can acconwnodate various objectives, such as maximization of profit, or the minimization of the makespan for specified demand.
3. Mathematical Model A common, continuous partition of the time horizon is used to account for all possible plant configurations and resource constraints other than those on units. The idea of decoupling tasks from equipment, as proposed in lerapetritou and Floudas (1998), is also used. Assignment constraints are expressed through task binaries Wsin and WfinBinary Wsin is 1 if task / starts at time point n, and binary Wfi„ is 1 if task / finishes at or before time point n. The start time, Tsin, of task / is always equal to time point r„ and thus time matching constraints are used only for finish time, Tfin, of task i. The batch size of task / that starts at, is being processed at, and finishes at or before time point n is denoted by Bstn, Bpin and Bfin, respectively. The amount of state s at time point n is denoted by Ssn and the amount of resource r consumed by various tasks at time point n is denoted by Rm- The amount of state s consumed (produced) by task / at time point n is benoted by ^isn (B^isn)- The details and derivation of the proposed model can be found in Maravelias and Grossmann (2003). 3.1. Assignment constraints Constraint (1) is the main assignment constraint and enforces the condition that not more than one task can be processed in a unit at any time. Constraint (2) enforces the condition that all tasks that start must finish, while constrsaints (3) and (4) enforce the condition that not more than one task can start or finish on a specific unit at any time:
E KWs^-Win)^^ Sw'^.=Sw/;„ v/ n
(D (2)
n
XW5,,<1 y/,Vn te'(;)
V;,Vn
(3)
217 ^Wf^
V;,Vn
(4)
3.2. Duration and timing constraints The duration. Din, and the finish time, % , of a task are calculated through constraints (5), and (6) and (7), respectively. The elimination of start times, Tsin, is made through constraint (8) and the time matching between time points and finish times is achieved through constraints (9) and (10). Note that in the general case a task may finish at or before a time point n [constraint (9)], whereas a task must finish exactly at a time point [constraints (9) and (10)] if it produces a state for which zero-wait policy applies: A„=«,W^^,„+A55,„
V/,Vn
(5)
Tf,„
\/iyn
(6)
Tf,„>Ts^+D,„-H(l-Ws,„)
Vi,Vn
(7)
Ts,„=T„
V/,Vn
(8)
Tf^.,
VJ,Vn
Tf.„_,>T„-Ha-WfJ
(9)
V/GZWOXVn
(10)
3.3. Batch-size constraints and material balances Constraints (11) and (12) impose upper and lower bounnds on the batch sizes Bstn and Bsin, while constraint (13) enforces variables Bsin and Bfin to be equal for the same task. The amount of state s consumed, B^isn, and produced, B^isn, by task / at time n is calculated through constraitns (14) and (15) respectively. Constraint (16) is the mass balance for state s at time n, where SSsn is the amount of state s sold at time n. Constraint (17) is a capacity constraint, where Q is the storage capacity: B^^^Ws.^
V/,Vn
(11)
BrWfi„
^U^n
(12)
BSi,_,^Bp,^_,=Bp,^+BU
"^U^n
(13)
Kn=PisBSin
yU\/n,yseSI(i)
(14)
Bl-PisBU
yiyn,\/sGSO(i)
(15)
Ssn + SS,^ = 5,,., + X B,?„ - S BL |G0(5)
S^
V5, Vn
>1
(16)
l€/(5)
Vs,Vn
(17)
3.4. Resource constraints The amount of renewable resource r required by task / that starts at n, R^^n^ is calculated by constraint (18). The same amount is "released" when task / finishes, R^rn^ and is calculated by constraint (19). The total amount of resource r required at time n is calculated in (20) and bounded, not to exceed the maximum availability RJ^"^^ by (21): Kn=rirWSin+S,,,Bs,„
V/,Vr,Vn
(18)
218
<=r.wy;,+«5,„B/;, v/,vr,v/i Rn,=Rm-^-lRL^+lRL
Vr,V«
I
R^
(i9) (20)
I
yr,yn
(21)
3.5. Time ordering Equations (22) - (24) define the start and the end of the time horizon and enforce an ordering among time points: T„^i=0
(22)
T„.m=H
(23)
T„.i^T„
Vn
(24)
3.6. Tightening constraints The addition of valid inequalities (25), (26) and (27) tightens the LP relaxation and significantly reduces the size of the branch-and-bound tree. llD^n^H
Vj
(25)
te/(y) n
X ^D^.
V/Wn
(26)
jG/(y) n>n
S S(«,w/;,. + AB/;,.)
(2?)
ie/(;)n'
3.7. Objective function While various objective functions can be acconmiodated within the proposed model (e.g. minimization of makespan or production cost for fixed demand), the maximization of income from sales is used here.
maxZ = XS^.'^^.« s WSiru WfinElOJl
(28)
n BSir, Bpi,,
Bfi„^ S^ru SS^r,
T^
Tfir, D^n, B^isru B""isruR\rru R^'inu Rm>0
(29)
The proposed MILP model comprises of constraints (1) - (29), where (8) is used to eliminate Tsin4. E x a m p l e The proposed model is used for the scheduling of the state task network of Figure 1, whose data are given in Table 1. There are six units (Ul, U2,.. .U6) available for the ten tasks. Unlimited storage is available for states Fl, F2, INTl, INT2, PI, P2, P3 and WS; finite intermediate storage is available for states S3 (15 kg) and S4 (40kg); no intermediate storage is available for states S2 and S6, while zero-wait policy applies for states SI and S5. States Fl, F2, and S4 are initially available in sufficient amounts. Furthermore, each task requires one of the there following utilities: cooling water (CW), low pressure steam (LPS), and high pressure steam (HPS). The maximum availability
219 for CW, LPS and HPS is 25, 40 and 20 kg/min, respectively. Constant processing times are assumed.
Figure 1: Sate Task Network of Example.
Table 1: Example data (B^^^ in tons, a in hr, yin kg/min, Sin kg/min per ton). Task Unit
Tl Ul 5 2 LPS 3 2
r»MAX
Dur (a) Utility Y 6
T2 U2 8 1 CW 4 2
T4 Ul 5 2 HPS 3 2
T3 U3 6 1 LPS 4 3
T6 U4 8 2 HPS 4 3
T5 U4 8 2 LPS 8 4
T7 U5 3 4 CW 5 4
T8 U6 4 2 LPS 5 3
T9 U5 3 2 CW 5 3
TIO U6 4 3 CW 3 3
The optimal solution yields the Gantt chart of equipment in Figure 2, and the resource utilization graph in Figure 3. In the optimal solution, 10 tons of product PI and 3 tons of product P3 are produced. As shown in Figure 2, task T6 is not performed since unit U4 is assigned to task T5 only, and thus, product P2 is not produced. Cooling water is the limiting utility, as its utilisation is equal to its availability from t=2 to t=4. The optimal solution is found when the 12-hour time horizon is divided into 8 intervals. The MILP problem consists of 3067 constraints, 180 binary and 1587 continuous variables. Its LP relaxation is $19,500 and its optimal solution is $13,000. The optimal solution was found in 62.8 sec and 2,107 nodes.
Ul U2 U3 U4 U5 U6
Tl
Tl
J3.
T4
itl
_T4
J^ T3
T3
T3 TS
T5 T9 T8
teg^TO
i(^|i^| 10
Figure 2: Equipment Gantt chart for Example.
12
t(hr)
220 50 40
.— ^ " 1 30
^
r 1 1
20
1
r
10 H 0
2
;
L_
1
0
_l
4
6
8
10
12
Figure 3: Resource utilisation level for Example.
6. References Castro, P., Barbosa-Povoa, A.P.F.D., Matos, H., 2001, An Improved RTN ContinuousTime Formulation for the Short-term Scheduling of Multipurpose Batch Plants, Ind. Eng. Chem. Res., 40, 2059-2068. lerapetritou, M.G., Floudas, C.A., 1998, Effective Continuous-Time Formulation for Short-Term Scheduling. 1. Multipurpose Batch Processes. Ind. Eng. Chem. Res., 37,4341-4359. Kondili, E., Pantelides, C.C., Sargent, R., 1993, A General Algorithm for Short-Term Scheduling of Batch Operations - 1 . MILP Formulation. Comput. Chem. Eng., 17,211-227. Kyu-Hwang Lee, Heung II Park, In Beum Lee, 2001, A Novel Nonuniform Discrete Time Formulation for Short-Term Scheduling of Batch and Continuous Processes. Ind. Eng. Chem. Res., 40,4902-4911. Maravelias, C.T., Grossmann, I.E., 2003, A New General Continuous-Time State Task Network Formulation for the Short-Term Scheduling of Multipurpose Batch Plants. Submitted for Publication. Mockus, L., Reklaitis, G.V., 1999, Continuous Time Representation Approach to Batch and Continuous Process Scheduling. 1. MINLP Formulation. Ind. Eng. Chem. Res., 38, 197-203. Pantelides, C.C, 1994, Unified Frameworks for the Optimal Process Planning and Scheduling. In Proceedings on the Second Conference on Foundations of Computer Aided Operations, 253-274. Schilling, G., Pantelides, C.C, 1996, A Simple Continuous-Time Process Scheduling Formulation and a Novel Solution Algorithm. Comput. Chem. Eng., 20, S1221-1226. Shah, N.E., Pantelides, C.C, Sargent, R., 1993, A General Algorithm for Short-Term Scheduling of Batch Operations - II. Computational Issues. Comput. Chem. Eng., 17, 229-244. Zhang, X., Sargent, R.W.H., 1996, The Optimal Operation of Mixed Production Facilities - General Formulation and Some Approaches for the Solution. Comput. Chem. Eng., 20, 897-904.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
221
Life Cycle Analysis of a Solar Thermal System with Thermochemical Storage Process Nur Aini Masruroh, Bo Li and Jiri Klemes Department of Process Integration, University of Manchester Institute of Science and Technology (UMIST), PO Box 88, Manchester M60 IQD, United Kingdom [email protected]
Abstract A new more efficient solar heating / cooling system has been developed for houses and buildings by an EU co-fmanced SOLASTORE (5FW ENERGIE) project with a consortium of partners from France, Spain, Portugal and the UK. It is targeting a considerable reduction of CO2 emission by improvement of the solar energy system efficiency and the extended utilization of the solar energy using reversible chemical reactions as energy storage. Although solar energy is considered environmentally friendly, the whole life cycle of a solar energy production has to be evaluated. The production targets for the SOLARSTORE system are potentially thousands units. This makes it very important to explore the total environmental impacts caused during its whole life cycle. This work employs Life Cycle Analysis (LCA), an ISO 14040 based technique for evaluating the total environmental impacts associated with a product, to analyse the total environmental impacts of SOLARSTORE system during its whole life cycle. The standard LCA methodology has been extended and modified to cope successfully with this task. The LCA results show the total environmental impacts to achieve 1 GJ energy by using SOLARSTORE system: global warming potential ranging in 6.3 - 10 kg CO2, acidification potential in 46.6 - 70 g SO2, eutrophication in 2.1 - 3.1 g phosphate and photochemical oxidant in 0.99 - 1.5 g C2H4. The raw material acquisition processes contribute 99% to the total environmental impacts. A LCA based comparison has been made analysing the total environmental impacts of a traditional solar heating system, a traditional fossil fuel heating system and SOLARSTORE system. It shows that SOLARSTORE provides a better solution for reduction of negative environmental impacts by using solar energy.
1. Introduction Solar energy is offering a considerable potential to limit the greenhouse effect. It enables substitution for fossil fuels used for energy generation, and consequently avoids the atmospheric emissions and other polluting residuals associated with conventional, mainly fossil, energy production processes. The major problem is that solar energy is provided independently of the needs, which leads to a low efficiency of the installations, covering only 30 to 60% of heating requirements and domestic hot water. The majority
^ Corresponding [email protected]
222
of thermal storage systems available on the market are based on a hot water tank (sensible storage) or a phase change material (PCM). These systems need a large area in the house and are not suited for long life of storage. The storage period needed can be as long as five days and a sensible or latent storage would lead to an important loss of heat during this period of storage. SOLARSTORE project is developing a more efficient solar heating/cooling system based on a pair of salts-water endothermic / exothermic reactions. Integration of this thermo-chemical storage process enables to store the solar thermal energy when heat requirements are lower than heat production and restore it when the heat production cannot cover the requirements. Therefore, it is possible to save the thermal solar energy, which would be normally lost. The need of auxiliary energy such as electricity or gas decreases, which is reducing negative environmental impacts due to the use of conventional energy resources. Is it really solar energy "green" and if to what extent? Is a developed solar energy unit environmentally friendly? The production targets for SOLARSTORE system are expected to be thousands units, which makes it very crucial to assess the environmental impacts associated with such systems. A comprehensive appreciation of the environmental impacts associated with the SOLARSTORE system requires an assessment of the emissions released and the consumption of energy and materials during its entire life cycle, from raw material acquisition to waste disposal. Life Cycle Analysis (LCA), which is based on ISO 14040 (Burgess, A.A. and Brennan, D.J., 2001), is an effective tool to make a quantitative assessment of the environmental aspects and potential impacts associated with a product during its entire life cycle. This a 'cradle to grave' (from raw material extraction to waste disposal) approach, providing a systematic way of evaluating the environmental impacts of a product, identifying and quantifying the emissions and material consumption that affect the environment at all stages of the entire product life cycle.
2. Objectives and Basic Specification 2.1. Objectives The objective is to explore the environmental impacts and raw material consumption associated with the SOLARSTORE system, by applying the LCA technique. Further objective is to fmd out whether SOLARSTORE system could create less negative environmental impacts, by comparing the environmental impacts of SOLARSTORE system, traditional fossil fuel heating system and traditional solar heating system. 2.2. Basic specification of SOLARSTORE system The SOLARSTORE system consists of two major units that can be separated physically, a general solar heating unit and a thermo-chemical storage unit. The general solar heating unit consists of a solar collector, a backup boiler, heat exchangers, pumps, connections, valves, and sensors. The thermo-chemical storage unit consists of reactors, evaporators, condensers and reactive compounds that are constructed by compressing certain inorganic salts and certain inert supporting structures together.
223
3. LCA for SOLARSTORE 3.1. LCA framework Fig 1 gives the technical framework for conducting LCA, showing how the basic components, goal and scope definition, inventory analysis, impact assessment and inventory assessment, are interrelated.
Ammsm 1. Goal and scope definition
Data analysis
Constructing the process flow chart
IMTACr 6. Classification
iMmov]iNi:^>ir Reporting and improvement assessment
Characterisation Defining the system boundaries
8. Valuation
Impacts to be evaluated Collecting the data
Processing the data
Detailed Execution step
Fig. 1. Technical Framework for LCA. 3.2. System deHnition and major assumptions SOLARSTORE system consists of two major units that can be physically separated, therefore, the LCA study can be performed for these two units separately. Previous LCA study of general solar heating system (Mirasgedis, S., et al, 1996) provided useful starting point, whilst LCA study of the thermo-chemical storage system has to be newly developed. The processes involved include raw material acquisition processes, manufacturing processes of all components, assembling process of each unit, installation and maintenance processes, processes of using SOLARSTORE system, disassembling process of each unit, disposal processes of the components and recycling processes of the materials. The transportation processes between different sites should also be considered. The difficulties to obtain the full data set (the SOLARSTORE system is still under the development) and the complexity of this analysis dictated the system definition to be modified to cope successfully with the task: L The usable lifetime of SOLARSTORE system is assumed to be 15 years. Since SOLARSTORE system is able to store / release solar thermal energy when necessary without consuming any other energy resources, it can be reasonably assumed that zero emission would be created by SOLARSTORE system during its usable lifetime. Therefore, the use phase of SOLARSTORE system is excluded
224
2.
3.
4.
from the analysis. The installation and maintenance processes are also excluded from the analysis due to the absence of data. SOLARSTORE system should be disposed off when it loses the ability of storing/releasing solar thermal energy. This should be taken into the consideration when assessing the environmental impacts. After being disposed, the equipment might be recycled to produce the material with the same quality (primary recycle) or the one with lower quality (secondary recycle). Although this recycling might be energy intensive and create significant emissions, this phase is excluded from the system boundary. This is because firstly only limited information is available for this recycling and secondly it is assumed that the emissions resulted from the primary recycle process have been considered in the processes of raw material acquisition and manufacturing. Railway and road transport are assumed to be major means for transportation and distribution processes.
Generally the environmental impacts to be considered include the resource depletion, human health and ecological consequences (Masruroh, N.A., 2002). In this study the problem is simplified and only the global warming potential, acidification, eutrophication and photochemical oxidant are evaluated. 3.3. Functional unit The SOLARSTORE system is developed to improve the efficiency of traditional solar heating system. The size of hardware equipment and the amount of the reactive compounds are directed by the potential of energy savings that can be achieved by the system. The functional unit in this study is per GJ of energy provided by SOLARSTORE system. Considering the other expected results of the project, reducing CO2 emission, such a functional unit makes it easier to compare the emissions released by SOLARSTORE during its life cycle with the emissions released by a traditional solar heating system or fossil fuel heating system. 3.4. LCA results In this LCA study are four options for selection of appropriate salt combinations and binders, three situations in which different energy requirements are to be fulfilled by using SOLARSTORE system, and two possible market places. Twenty-four LCA case studies have been carried out to select appropriate reactive compounds for different situations and different market places. The LCA results show that the total global warming potential impacts range in 6.3 - 10 kg CO2/GJ, acidification potential in 46.6 - 70 g SO2/GJ, eutrophication in 2.1 - 3.1 g phosphate/GJ, and photochemical oxidant in 0.99 - 1.5 g C2H4/GJ. It is also shown that the major part of emissions comes from the raw material acquisition phase (around 99%).
225
4. Comparison with Other Heating Systems Although SOLARSTORE system could improve the efficiency of traditional solar heating system, it cannot fulfil the total annual heating requirement in most location around Europe. A backup boiler is used to accommodate the rest heat requirement. kg COz/annum
SOLARSTORE
Conventional Solar Heating System
® Equipment
Natural Gas
Low Sulphur Heating Oil
* Fossil Fuel
Fig. 2. Annual Global Warming Potential Impacts for one Case Study. Fig 2 shows the annual global warming potential impacts caused by using different systems. A lifetime of 15 years is assumed for evaluation. SOLARSTORE system creates more environmental impacts during its manufacturing processes. It is caused by the use of the thermo-chemical unit and the reactive materials. More material and energy is required for this phase. However, SOLARSTORE system could replace 72% of fossil fuel, which consequently results in a significant reduction of the total environmental impacts and the total annual environmental impacts are considerable lower than by using other systems. SOLARSTORE system provides a better solution for reduction of negative environmental impacts by using solar energy, which encourages further development and production from the environmental point of view.
226
5. Conclusion This work employed the LCA to study environmental impacts associated with a new solar heating / cooling system, which is integrated with a thermo-chemical storage unit to improve the efficiency of traditional solar heating system. Twenty-four case studies have been carried out for consideration of different situations, different salt combinations and binders and different market places. The total environmental impacts to achieve 1 GJ energy by using this novel system are: global warming potential impacts ranging in 6.3 - 10 kg CO2, acidification potential in 46.6 - 70 g SO2, eutrophication in 2.1 - 3.1 g phosphate and photochemical oxidant in 0.99 - 1.5 g C2H4. The raw material acquisition processes contribute 99% to the total environmental impacts during the whole life cycle. The results provide a clear comparison of total environmental impacts of SOLARSTORE system, traditional solar heating system and traditional fossil fuel system. It shows that SOLARSTORE provides an advantageous solution for reduction of negative environmental impacts.
6. References Burgess, A.A. and Brennan, D.J., 2001, Application of Life Cycle Assessment to Chemical Processes, Chemical Engineering Science, 56, 2589 - 2604. Masruroh, N.A., 2002, Life Cycle Analysis of a Solar Thermal System with Thermochemical Storage Process, MSc Dissertation, Department of Process Integration, UMIST, Manchester, UK. Mirasgedis, S., Diakoulaki, D. and Assimacopoulos, D., 1996, Solar Energy and the Abatement of Atmospheric Emissions, Renewable Energy, 7, (4), 329 - 338.
7. Acknowledgement The financial support from the EC Project ENERGIE NNE5-2000-00385 "Improvement of the efficiency of solar thermal systems by integration of a thermo-chemical storage processes - SOLARSORE" and the collaboration from project partners CREED, CLIPSOL, ADAI, CNRS and Dalkia are gratefully acknowledged.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
227
Hybrid Synthesis Method For Mass Exchange Networks Andrew K. Msiza and Duncan M. Fraser Department of Chemical Engineering, University of Cape Town, Private Bag, Rondebosch, 7701 South Africa.
Abstract A hybrid tool for the synthesis of mass exchange networks is presented in this paper. The tool uses targets to measure the quality of solutions generated by a mixed-integer non-linear (MINLP) model. The components making up the hybrid tool are a total annual cost target of the expected flowsheet, a physically meaningful initial flowsheet, a driving force diagram and an MINLP model. The total annual cost target represents the best cost scenario for the flowsheet, an initial flowsheet is used to initialise the MINLP solver and the driving force plot assists the designer to generate alternative initial solutions to improve the generated MINLP solution. The hybrid tool produces mass exchange networks whose total annual costs are within 10% of previously reported pinch solutions and networks that are similar or better than those obtained from the MINLP approach alone. The designer has confidence in the quality and optimality of the solutions due to the available targets and the visual driving force diagrams.
1. Introduction Process synthesis can be approached in three different ways: heuristics, physical and thermodynamic insight, and mathematical programming. Hybrid methods where two or all of the synthesis methods are combined are now becoming used, taking advantage of the combined strength of the individual techniques. In Grossmann and Daichendt (1996) the question of how the optimisation and targeting approaches, and heuristics for that matter, can be combined "in such a way that on the one hand the integration is conceptually consistent and rigorous, and on the other hand it exploits the strengths of each approach" is highlighted as one of the important challenges to be solved. In this paper we will adress this challenge using the following work as a basis. Kravanja and Glavic (1997) proposed a match-dependent heat-exchanger area targeting method followed by an NLP optimisation of the structure meeting the area targets. This method fixes the utilities before estimating the heat exchanger area. Papalexandri tt al. (1994) proposed an MINLP model for the simultaneous generation of a network and the determination of the TAC without resorting to any pinch decomposition concepts. The reported total annual costs of most of the problems solved by Papalexandri et al. (1994) were poor when compared to those obtained by Hallale (1998) using a pinch approach, yet the MINLP framework itself was versatile enough to contain even those solutions proposed by Hallale. It would then seem plausible that a transfer of some techniques from pinch technology to MINLP optimisation would expose the actual optimum solutions embedded in the MINLP hyperstructure.
228 Hallale (1998) showed how the capital and operating cost (El-Halwagi and Manousiouthakis, 1989) through supertargeting could be used to optimise the total annual cost (TAC) of a mass exchange network before design. Key features of the supertargeting approach were the determination of a network's total annual cost and a set of network generation criterion based on available driving force. Full details of all this may be found in Hallale (1998) and Hallale and Fraser (1998, 2(X)0a, and 2000b). The key objective of this paper is therefore to propose a new method of synthesis for mass exchange networks such that the techniques of pinch technology developed by ElHalwagi and Mathousiouthakis (1989) and Hallale (1998) are combined with the MINLP hyperstructure approach proposed by Papalexandri et al (1994).
2. Proposed Hybrid Synthesis Method The philosophy behind the integration is described by Figure 1. Determine Supertargets
Set up Superstructure as an MINLP model
^
P
Construct an initial flowsheet
\^
1%
^ ^r Optimise the MINLP model
Analyse DP plot for an alternative initial flowsheet ik
^.-^TA 1.1
^new ^
^V^
N
1 Av^target-^
Optimum Figure 1: The philosophy of the hybrid synthesis method. Most MINLP-models are non-convex and their solutions are not guaranteed to be global, therefore, total cost can be used to measure the optimality of MENs generated by MINLP. The installed cost of process units is commonly estimated from cost charts and cost indices which estimate the costs within 20-30%. These costs are then annualised by factors in the range 20-40% before being added to annual operating costs to give TACs. This means that a 10% approach of the TAC targets can be regarded as being optimal.
229 In the hyperstructure model of Papalexandri et ah (1994) we incorporate physically meaningful structures to initialise the MINLP solver. These initial structures may be: a. An initial solution whereby the mass-exchange duty is completed by external mass separating agents. This approach is analogous to HEN where the heating and cooling duties are completed by externally derived steam or cooling water; or b. A pinch-based initial solution. Once an initial solution is incorporated into the MINLP model, the model is solved via the DICOPT solver employing the Outer approximation algorithm for equality relaxation and augmented penalty (OA/ER/AP). If the resulting solution does not meet the 10% TAC margin, an alternative initial flowsheet is presented to the MINLP solver. A driving force diagram is used to help the designer identify the regions in the reported network that do not make use of the available driving force. The model is then solved with the new initial flowsheet and if the solution is within the target margin, it is accepted as an optimal solution.
3. Case Study 3.1. The problem statement The selected case study involves the dephenolisation of two aqueous streams, Rj and /?2, by solvent extraction (Hallale, 1998). The process MSAs are gas oil {Si) and lube oil {S2). The external MSA is light oil {Ss). The problem data is given in Tables 1 and 2. Table 1: Stream data for the dephenolisation problem. Rich Streams Ri R2
Lean Streams Si S2 S3
G (kg/s) 2 1
y^ (mass fraction) 0.050 0.03
y^ (mass fraction) 0.10 0.006
Density (kg/m3) 1000 1000
L^ (kg/s) 5 3
x^ (mass fraction) 0.050 0.03 0.0013
x^ (mass fraction) 0.10 0.006 0.015
Density (kg/m3) 880 930 830
00
M
b
2 1.53 0.71
0 0 0.001
Table 2: Equipment data for the dephenolisation problem. Exchanger cost (installed) t Eo
Annualisation factor
$9 050V"^ (volume in m^) 10 minutes per stage 100% 0.2
Cost (k$/yr)/(kg/s) 0 0 239.4
230 3.2. The approach Step 1: The supertargets This step involves the determination of the total annual cost target for the expected flowsheet. Using the supertargeting results reported by Hallale (1998), the total annual cost for this problem is $226000 with $158400 contributing to the annual operating cost and $338000 (annualised by multiplying by 20%) accounting for the annual capital cost. Step 2: The superstructure The MINLP model is similar to that of Papalexandri et al (1994). The practical issues regarding the handling of logical operators in the MINLP model are discussed in Zsitkai ^r a/. (2001 and 2002). Step 3: Constructing an initial flowsheet The flowsheet is initialised by using all the available external MSA (S3) to recover the phenol from the rich streams. Step 4: Optimising the MINLP model The hybrid model is implemented in GAMS (Brooke et al., 1988) on a PIT Celeron 533 MHz PC and solved using DICOPT solver implementing an Outer Approximation algorithm. The solution for the dephenolisation problem is similar to the proposed initial structure and has a total annualised cost of $2 236 000 which, is far beyond the 10% margin of the target. The operating and capital cost contributions to this cost are $2 230 000 and $6000 respectively. The MINLP solver did not add other exchangers or use an of the available process lean streams. Si and S2. This can be attributed to the fact that the solver could have been trapped in a local optimum (which is a characteristic of MINLP optimisation solvers). Step 5: Driving force analysis An analysis of the flowsheet in terms of the use of driving force by the selected matches will help in the selection of a new initial flowsheet. It is envisaged that the new flowsheet will re-route the MINLP solution search path to an optimal solution according to the criteria of the hybrid method. The method of constructing a driving force diagram is discussed in detail in Hallale (1998). In Figure 2, the Exchangers 1 and 2 are superimposed in the composite operating line. The exchangers are shown to be making too much use of the available driving force. To illustrate Step 5 we now initialise the MINLP solution with a pinch-derived flowsheet. The final hybrid solution obtained from initialising with a pinch solution is given by Figure 3. The hybrid solution is similar to the pinch solution except for the distribution of the number of stages among the exchangers and the reduction in the final composition of S2. Table 3 compares the results obtained for different initialisations. The hybrid solution is achieved through the simultaneous optimisation of the capital and operating cost and the generation of a network structure and through the iteration between the economic objective and the mass transfer driving force.
231
Figure 2: Driving force plot for the solution obtained by initialisation with an external MSA only. Exch. Nst Load
7 1 0.00125
6 7 0.006
.m-(?} 9.006
0.0106
<7)M11^
0.005
3 13 .0369
5 4 6 13 0.0034 0.0096
3.71 kg/s
(^_^)
0
0.0154 0
2 6 0.0146
1 2 0.0321
0.0339 Q M ^ ^ 2
0.0154
00.00761
Flowrate (kg/s)
Q
ao3JV1 1
0.015
0.0073981 0.01
0.01486
^ ^ ^ ^ _ 0 J L O 2 5 5 ^ 2.34
3.65xlOi
001^
>0.53
Figure 3: Hybrid solution resulting from a pinch-derived initial flowsheet. (CAP = $76600, OPC = $158400) Table 3: Cost of the optimum solutions from tested initialisations. Initialisation External MSA External MSA + S2 External MSA + Si+ S2 Pinch based
TAC ($/yr) 2 236 000 960 700 272 000 235 000
CAP ($/yr) 6 000 18 400 80 000 76 600
OPC ($/yr) 2 230 000 942 300 192 000 158 000
Number of units 2 3 4 7
232
4. Conclusions This paper has presented and tested a design philosophy for the synthesis of mass exchange networks whereby an MINLP approach is integrated with pinch analysis tools. At the heart of the integration is the use of pinch-based total cost targets to evaluate the optimality of MINLP solutions and driving force diagrams to identify different initial solution structures. A case study was solved to illustrate the hybrid philosophy. An important conclusion from the case study is that although the hybrid approach is able to meet the total annual cost target criterion of 10% it still suffers the MINLP limitations in terms of the solver getting trapped in local optima. The result of this is that solutions generated tend to be similar to initial structures given to the solver.
5. References Brooke, A., Kendric, D. and Meeraus, A. (1988). GAMS, A user's guide. The Scientific Press, San Fransisco. El-Halwagi, M. and Manousiouthakis, V. (1989). Synthesis of mass exchanger networks, AIChE J., 35(8), 1233-1244. Grossmann, I. and Daichendt, M.M. (1996). New trends in optimisation-based approaches to process synthesis, Comp. Chem. Eng., 20(6-7), 665-683. Hallale, N. (1998). Capital cost targets for the optimum synthesis of mass exchange networks, PhD thesis. University of Cape Town. Hallale, N. and Eraser, D.M. (1998). Capital cost targets for mass exchange networks. A special case: Waste minimisation, Chem. Eng. Sc, 53(2), 293 - 313. Hallale, N. and Eraser D.M. (2000a). Capital Cost Targets for Mass Exchange Networks, Parts I-II, Computers and Chemical Engineering, 53(2), 293-313. Hallale, N. and Eraser D.M. (2000b). Supertargeting for Mass Exchange Networks, Parts I-II, Trans I Chem E, 78, Part A, 202-216. Kravanja, Z. and Glavic, P. (1997). Cost targeting for HEN through simultaneous optimisation: a unified pinch technology and mathematical programming design of large HEN, Comp. Chem. Eng., 21(8), 833-853. Papalexandri, K.P., Pistikopoulos, E.N. and Floudas, C.A. (1994). Mass exchange networks for waste minimisation: a simultaneous approach. Trans. Inst. Chem. Eng, 72(Part A), 279-294. Wang, Y. and Smith, R. (1994). Wastewater minimisation, Chem. Eng. Sci, 49, 9811006. Zsitkai, Z., Lelkes, Z, Rev, E. and Fonyo, Z. (2001). Solution of MEN synthesis problems using MINLP: Formulation of the Kremser equation. Proceedings Escape-11 Conference, Kolding, Denmark, European Symposium on Computer Aided Process Engineering -11,1109-1114, Elsevier, Amsterdam. Zsitkai, Z., Rev, E., Fonyo, Z., Msiza, A.K. and Eraser, D.M. (2002). Comparison of different mathematical programming approaches for mass exchange network synthesis. Proceedings ESCAPE-12 Conference, Eds: Grievink, J., Schijndel, J. van. The Hague, The Netherlands, 361-366, Elsevier, Amsterdam.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
233
Multiperiod Synthesis and Operational Planning of Utility Systems with Environmental Concerns A. P. Oliveira Francisco^ and H. A. Matos^ ^Dep. de Eng. Quimica, Inst. Sup. de Engenharia de Lisboa, 1949-014 Lisboa, Portugal ^Dep. de Engenharia Quimica, Instituto Superior Tecnico, 1049-001 Lisboa, Portugal
Abstract Utility plants supply the required energy demands to industrial processes. Several authors were addressed the synthesis and design of those plants. However, a multiperiod model for utility systems including environmental concerns wasn't described until now. This paper presents an extension of Iyer and Grossmann (1997, 1998) model to synthesis and multiperiod operational planning in order to include the global emissions of atmospheric pollutants issues coming from the fuels burning. A new four steps algorithm is introduced to solve this multiobjective model. One motivation example enables us to compare the different units and fuel selection and also the operation periods of an industrial utility system taking in account the electrical power import/export policy and environmental concerns.
1. Introduction Utility plants supply the required energy demands to chemical processes, namely, mechanical, electrical and thermal power (different levels of steam). Changes in specifications, composition of feed and seasonal product demands create several process conditions with the corresponding variation in the utility demands during one annual horizon. Several authors were addressed the synthesis and design of utility plants. Among these authors, Papoulias and Grossmann (1983) described a MILP model for the synthesis and design of utility systems, for fixed demands. Iyer and Grossmann described models for multiperiod operational planning (1997) and synthesis and operational planning (1998) of utility systems. Chang and Wang (1996) described a multiobjective programming approach to waste minimization in the utility systems of chemical process, using the concept of global emissions of gaseous pollutants. This model merging economic and environmental concerns in the utility system synthesis was stated for fixed utility demands. Oliveira Francisco (2002) described a methodology for the synthesis and multiperiod operational planning of utility systems in a heat integrated industrial complex. This comprises a multiperiod model for utility systems including environmental concerns. The purpose of the present paper is to show the structure of this modified model and the resolution algorithm, applied to a simple example problem.
234
2. Problem Deflnition Given a set of time variable (multiperiod) demands of steam at various levels of pressure, electricity and mechanical power, the problem of synthesis and operational planning of the utility system consists in a structural and parameter optimization from a superstructure of alternatives. The superstructure will be decomposed in a set of feasible configurations. For each feasible configuration the unit sizes are such that allow demands satisfaction in all operation periods. In a single objective problem - only economic optimization - the configuration with the lowest total cost (investment and operational costs) for the horizon of planning should be chosen. If in addition to economic optimization we have to satisfy to other objectives, for example, environmental concerns, the problem arises to a multiobjective problem and a different optimization strategy should be adopted. The superstructure of utility system adopted in this work is derived from superstructures described by Papoulias and Grossmann (1983) and Iyer and Grossmann (1997, 1998). Figure 1 exemplifies the superstructure adopted for the Example Problem. Purchased Power 1 Power 1 Exports
Power 1 from Turbines
fel Purchased VHP Steam — n
Power 2 from Turbines
Unit 21
Figure 1. Superstructure for Example Problem.
Power 1 Demands
Power 2 Demands
235 This superstructure includes several steam headers at various pressure levels (VHP, HP, MP and LP). Steam can be generated in either conventional fired boilers (units 1, 2 and 20) or with waste heat boilers (units 13 and 28) receiving hot gases from gas turbines. There is a deaerator (unit 11) receiving make-up water and condensates returned from process utilizations. A condenser (unit 8) is provided for condensation of LP steam from unit 4. Power can be generated with several types of steam turbines, gas turbines (units 12, 27 and 30). Electrical generators can be driven by steam turbines or gas turbines. Gas turbines can operate in a stand-alone basis or associated with waste heat recovery boilers. There are also several types of auxiliary equipment as fans, pressure reducers and pumps.
3. Model Formulation Our model for the utility system is an extension of the multiperiod models described by Iyer and Grossmann (1997, 1998) in order to include the concept of global emissions of the gaseous pollutants that came from fuel burning. Following Smith and Delaby (1991) global emissions comprises local emissions derived from the production of utilities in industrial site and the balance between increasing/lowering of the emissions in the regional power station due to electricity imports and exports to regional power network by the utility system. The extended mathematical model for the utility system can be formulated as follows: P
zp=
min
fo(yd'^)-^^ft(xt^yt)-^^
y^,y^,d,x,
t=l
s-t. ht(d,y,.y,.xt.et)^0,
P
K
lAkEGkt
d)
t=\k=\
t = l...,P
(2)
jc,-Q"y,<0,
f = l,...,P
(3)
xr-a'y,>0.
t = 1,..., P
(4)
y, > y , ,
f = l,..., P
(5)
iy,
(6)
EGia-LEMk(llQ„r,)^0
(7)
n r
d& R\x,e
R'^XR^, y ^ G { 0 , i y ,
y,G{0,iy'''
Where y^ are integer variables (0-1) defining the selection of units for the design; d design variables defining the sizes of units; y^ integer variables (0-1) that determine the operational status on/off for period t; Xt the state and control variables for period t, Q the parameters (e.g, utility demands) for period t, EGkt the global emissions of pollutant k, in period r, Q^rf ^^^ production of level r steam, in the unit n, in period t, as the absorbed heat in the steam generator, LEMj^ the limit of global emissions of pollutant k, expressed in relation to the absorbed heat in the steam generator, Xk the weighted
236 parameter meaning the pollutant k contribution to the objective function, i^Q,i^Q are valid limits, lower and higher , respectively, P the number of periods and a a scalar lower than P. The objective function includes the investment cost for the design (fo) and the sum of the operation costs (ft) for all periods r = 1, ..., P., as well as the above referred weighted terms of global emissions of pollutants. Specific integer variables (binary, 0-1) in appropriate restrictions of the mathematical model allow the selection of operation on/off status and operations modes, e.g. extraction or condensing steam turbine. In the Example Problem given below we adopted a MILP formulation for the utility system optimization.
4. Decomposition Algorithm Iyer and Grossmann (1998) proposed a bilevel decomposition algorithm for solving his model. In the described algorithm, the first step is solving a design problem (DP) obtained from the principal model by making zero all fixed operation costs in the objective function, removing constraints of type (4) and (6) and by replacing yt with y^ in the constraints of type (3). The solution of model DP gives a lower bound for the objective function of principal model and provides values for design integer variables yd to use in the next step. In a second step a planning model (OP) is solved by fixing in the main model variables y^ obtained in first step. Solution of OP gives a higher bound for the objective function of principal model and provides an operational plan for the next time DP is solved. Iterative procedure with adding of appropriate cuts converges to the solution of main model. Then a modification of the Iyer and Grossmann algorithm was implemented: a) Fixing values of weighted parameters ^j^. b) Solving the utility system model with Iyer and Grossmann (1998) bilevel decomposition algorithm. In this step we obtain a design and operational plan for the utility system. c) Fixing the values calculated in step b) for binary variables ya and yt in constraints and solving the model obtained by replacing objective function (1) with a second objective function (8). ZE
= min E Z /lit ^^kt
^^^
t k
This step provides a utilization plan for fuels (binary variables) in the utility system arising to a minimum of global emissions. d) Fixing binary variables associated with the selected units for design (y^) and with the utilization plan of fuels obtained in c) and solving the model with objective function (1). Solutions obtained in steps b), c) and d) correspond to diverse importance given to economic and environmental concerns. Less pollutant fuels have higher costs than more pollutant content fuels. Thus, total cost for these three solutions will be b) < d) < c). Since all three solutions will satisfy the global emissions limitations the selected solution depends from the user criteria.
237
5. Example Problem A set of multiperiod utility demands of a new industrial complex is shown in Table 1 (one year with twelve equal periods). The synthesis of the utility system for this site is based in the superstructure represented in Figure 1. Table 1. Steam/BFW and power demands for the industrial complex for each period. Period Steam VHP (tAi) HP MP LP BFWfor Process (t/h) Power 1 (P) 2 (MW)
1 55 150 140 350 50
2 40 149 145 360 55
3 35 143 135 370 55
4 60 155 140 350 50
5 50 150 120 365 50
6 7 8 60 65 35 180 150 150 170 130 150 380 370 370 50 55 50
9 55 180 170 390 50
10 55 120 130 300 50
11 60 165 150 373 50
12 40 120 145 300 55
19.7 6
18 4
17 4
17.5 5
18 4.1
19 5
17 6
18 4
17 4
16.2 5
17.2 4.4
19.5 6
p=\ - electrical power; p=2 - mechanical power; VHP - 10 MPa; HP - 5 MPa; MP - 2 MPa:; LP - 0.35 MPa.
This problem was formulated as a MILP and solved for two situations: Case A - the utility system is not allowed to import electrical power from regional network; Case B electricity imports are allowed. For each case we solve different models: Model 1.1 - Iyer and Grossmann model with local and global emissions calculation. Model 1.2 - Iyer and Grossmann objective function with constraints (2)-(7). Model II -This paper work model (solution from d) in above algorithm). Results are shown in Tables 2 and 3. An increase of the total cost is obtained moving towards the different models, since they take in account more environmental constraints. Moreover, Model II shows a smaller exploit of the Gas Turbines compared with Model 1.2 and the most intensive use of Steam Turbines. Table 2. Selection and operational plan (^WP) of units for Case A/Case B.
Unit. Operation mode 1 (HP boiler) 2 (MP boiler) 13 (HP boiler) 20 (VHP boiler) 28 (VHP boiler) 3.1 (ST) 3.2 (ST) 4.1 (ST) 12.1(GT) 27.1 (GT) 30.1 (GT)
#WP
Model LI Total Cost
12/12 8/7 -/12/12 1/11 / -/-/-/1/-/3
Case A: 68.58 M$/year CaseB: 71.42 M$/year
#WP
Model 1.2 Total Cost
12/12 4/11 10/6 12/12 11/11 -/-/-/10/6 11/11 11-
Case A: 77.30 M$/year Case B: 76.70 M$/year
#WP 12/12 9/8 -/12/12 4/-/1 11/10 10/9 -/4/-/-
#WP- Number of Working Periods; ST- Steam Turbine; GT Gas Turbine
Model II Total Cost
Case A: 90.10 M$/year Case B: 90.44 M$/year
238 Table 3. Fuel usage and Global Emissions of CO2 and SO2 for Case AI Case B.
Fuel usage (in units 1,2 and 20) (kt/year)
Fuel
Model LI
Model 1.2
Model II
1
-/-
-/0.6
387.2/393.9
2
-/-
3
-/-
298.2 / 347.4 -/-
-/-/-
4
448.8/422.5
65.9/44.1
-/-
448.8/422.5
364.1/392.1
387.2/393.9
1272/1544
1275 /1372
861/851
33.7 / 36.9
13.4/13.4
-2.44/-2.70
Total Global Emissions
CO2 (kt/year) SO2 (kt/year)
Fuel: 1 - 75.38% C; 0.1% S; 2 - 86.47% C; 1.35% S; 3 - 87.26% C; 0.84% S; 4 - 84.67% C; 3.97% S.
The fuel consumption reduction obtained by solving the models is also due to reach optimal power import/export strategy (Oliveira Francisco, 2002). Moving towards the models a decrease in Global Emissions could be found, but a significant one is achieved in the Model II for CO2 and SO2. The negative values in the SO2 Global Emissions means that a power is exported from the industrial site causing a necessary lower working level at the Regional Power Station (RPS). The model was solved assuming that RPS burns a coal with 74.5% of C and 2.0% of S.
6. Conclusions Present work formulation is useful for preliminary design and can be applied to grassroots projects or to the revamping of existing utility plants. A motivation example enables us to compare total cost between models. The introduction of environmental terms in the objective function and in the set of constraints show us a different choice of the fuel with a increase of about 7 - 31 % of the Utility Plant annual total cost. This also corresponds a significant reduction in the Global Emissions - about 32 % and more than 100% in C02and SO2, respectively.
7. References Chang, C.-T. and Hwang, J.-R., 1996, A Multiobjective Programming Approach to Waste Minimization in the Utility Systems of Chemical Processes. Chem. Eng. Sci., 51(16), 3951-3965. Iyer, R. and Grossmann, I.E., 1997, Optimal Multiperiod Operational Planning for Utility Systems. Comput. Chem. Eng., 21(8), 787-800. Iyer, R. and Grossmann, I.E., 1998, Synthesis and Operational Planning of Utility Systems for Multiperiod Operation. Comput. Chem. Eng., 22(7-8), 979-993. Oliveira Francisco, A.P., 2002, MSc Thesis, Lisbon, I.S.T., U. T. L. Papoulias, S.A. and Grossmann, I.E., 1983a, A structural optimization approach in process synthesis -1 Utility systems. Comput.Chem. Eng., 7(6), 695-706. Smith, R. and Delaby, O., 1991, Targeting flue emissions. Trans. IchemE, 69(A), 492505.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) • © 2003 Elsevier Science B.V. All rights reserved.
239
Sizing Intermediate Storage with Stochastic Equipment Failures under General Operation Conditions Eva Orban-Mihalyko^ and Bela. G. Lakatos^ ^Department of Mathematics and Computing , ^Department of Process Engineering University of Veszprem, H-8201 Veszprem, PO Box 158, Hungary email:[email protected], [email protected]
Abstract An algorithm and simulation program have been developed for sizing intermediate storages of batch/semicontinuous systems taking into account stochastic equipment failures under general operation conditions. The method is based on the observation that any process of this system can be built up from a random sequence of failure cycles of finite number. By means of simulation and statistical evaluation of the results of the simulation runs the storage is sized at a given significance level. The computation time is favourable, and it appears to be a linear function of the number of the failure cycles.
1. Introduction The intermediate storage has an important role in improving operating efficiency of batch processing systems. It increases the variability of the system and reduces the process uncertainties. Over a long time horizon, it can also buffer the effects of equipment and batch failures when it is sized adequately. The sizing of an intermediate storage often can be treated as a deterministic problem, but, when the equipment and batch failures are significantly of random nature, then its sizing becomes a stochastic rather than deterministic task. Deterministic variations in the failure frequency and recovery time have been considered by Karimi and Reklaitis (1985) and Lee and Reklaitis (1989), while stochastic variations have been studied by Odi and Karimi (1990,1991), and Mihalyko and Lakatos (1998). In these studies, however, constants filling and withdrawing intensities were assumed. The aim of the present contribution is to study the problem under generalized conditions, i.e. when the intensity of filling and withdrawing of material into and from the intermediate storage may be changed arbitrarily during the operation.
2. Process Model Let us consider a non-continuous processing system with n upstream and m downstream units (usually n?^) with an intermediate storage. We assume that the processing units are operated periodically, and stochastic failures of the units under general conditions may occur. Furthermore, we suppose that the filling and removal rates may vary in time arbitrarily, i.e. they are described by general functions exhibiting at most finite number of jump discontinuities. Let o\^,... co^^ and^2,1 v^2,m denote the operation periods of the upstream and downstream units, while t^^ ,-hn
^^^ h,\ ^"•h,m ^^^ the filling and removal times, respec-
tively. Further, let t^^ ,..• ^i°„ and t^^ .— tl^m denote the corresponding delay times. We
240 suppose that t^^ +1^^^ < co^^ for any unit. Then, the mathematical model of the process is formulated under the following assumptions: 1. The ratio of two arbitrary periods is a rational number. 2. Only one upstream unit may suffer failure at the same time. Let its index be 1. 3. The failed unit does not transfer material into the storage during its repairing. 4. The failure of a unit does not affect the operation of the remaining ones. 5. The differences between the serial numbers of the failure periods, denoted by ^„ are integer random numbers of the same distribution having bounded range. We suppose that 4 /=1,2,... are independent. j
Then the serial number of the f^ period of failure is expressed as ^ ^ / , its initial /=i
moment is ^{^i -1)^1,1 , and the fmal moment is ^ ^ / ^ ^ • As a consequence, the/^ j-i
failure cycle is, by definition, the interval
S^/-^u'S^/-^i,
as it shown in Fig. 1.
The filling and removal rates are described by the following functions showing at most finite number of discontinuities: 0, if
0< CO, 'd,s
\(o.'d,s bef.Ai)-
fciM
CO,
t^ +t
M,s
), if
CO, 'd,s
(1)
0),
\co.
!lilh±<. t \co.'d,s co,^
0, if
where /^^ > 0 is some positive function, while d=l and s=l,....n for the upstream, and d=2 and ^=l,....m for the downstream units. The amount of the material transferred by the s^^ unit during an operation period is t=0
1
'
^i'i
tt•
^^^^'^
•
1 L
(^7+&)^7.7
1
[ ik i L
t
•
^ ^
^ ^ w 2^^ failure cycle w Moment of the 7^' Moment of the 2"' failure failure
V^ failure cycle
Fig.l. Characteristic time intervals of the process with equipment failures. ^
\bef,,{x)dx.
(2)
and the total amount of material transferred by the $s^{th}$ unit in the time interval $[0,t]$ may be expressed as

$$V_{d,s}(t)=\int_{0}^{t} bef_{d,s}(x)\,dx. \tag{3}$$
When a failure of the 1st upstream unit occurs, the filling intensity is described by the function

$$
bef^{\xi}_{1,1}(t)=\begin{cases}
bef_{1,1}(t), & \text{if } t\notin\left[\left(\sum_{i=1}^{j}\xi_i-1\right)\omega_{1,1},\ \sum_{i=1}^{j}\xi_i\,\omega_{1,1}\right),\ j=1,2,\dots\\[4pt]
0, & \text{if } t\in\left[\left(\sum_{i=1}^{j}\xi_i-1\right)\omega_{1,1},\ \sum_{i=1}^{j}\xi_i\,\omega_{1,1}\right),\ j=1,2,\dots
\end{cases} \tag{4}
$$

and the total amount of material transferred by the 1st unit is expressed as

$$V^{\xi}_{1,1}(t)=\int_{0}^{t} bef^{\xi}_{1,1}(x)\,dx. \tag{5}$$
Then, the amount of material in the intermediate storage varies in time according to

$$Vh^{(t^0)}(t)=V^{\xi}_{1,1}(t)+\sum_{s=2}^{n}V_{1,s}(t)-\sum_{s=1}^{m}V_{2,s}(t), \tag{6}$$

where $(t^0)$ denotes a vector of dimension $(n+m)$ of the delay times.
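To make Eqs. (1)-(6) concrete, the short sketch below evaluates the periodic rate function and integrates it numerically for linear rates of the form $f_{d,s}(t)=a\,t$. It is an illustration of ours, not the authors' program: the parameter tuples follow the example of Section 4 as reconstructed there, and the failure mechanism of Eq. (4) is deliberately omitted.

```python
# Illustrative sketch: the periodic rate bef_{d,s}(t) of Eq. (1) and the
# storage content Vh(t) of Eq. (6), for failure-free operation and linear
# rates f(t) = a*t (parameters follow the example of Section 4).
import math

def bef(t, omega, t0, tf, a):
    """Zero during the delay t0, equal to a*(time since filling started)
    during the transfer time tf, zero for the rest of the period."""
    local = t - math.floor(t / omega) * omega   # time since start of period
    if t0 <= local < t0 + tf:
        return a * (local - t0)
    return 0.0

def storage_content(t, units_up, units_down, dt=1e-3):
    """Vh(t): rectangle-rule integral of upstream minus downstream rates."""
    vh, tau = 0.0, 0.0
    while tau < t:
        vh += (sum(bef(tau, *u) for u in units_up)
               - sum(bef(tau, *u) for u in units_down)) * dt
        tau += dt
    return vh

# (omega, t0, tf, a) per unit, as reconstructed in Section 4:
up = [(7, 2, 4, 4), (4, 0, 3, 4)]
down = [(7, 1, 5, 1), (3, 1, 2, 8)]
print(storage_content(14.0, up, down))
```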
3. Sizing of the intermediate storage
In order to always have sufficient material in the intermediate storage to operate the system without problems, it is necessary to have an initial amount $V(0)$ at the starting moment of operation:

$$V(0)=-\min_{0\le t}\,Vh^{(t^0)}(t). \tag{7}$$

There will be no overflow if the volume of the storage is equal to

$$\max_{0\le t}\,Vh^{(t^0)}(t)+V(0). \tag{8}$$

As a consequence, the volume of the storage sufficient to operate the system is expressed as

$$V=\max_{0\le t}\,Vh^{(t^0)}(t)-\min_{0\le t}\,Vh^{(t^0)}(t). \tag{9}$$
Since in the present case the failures of the units are stochastic, so is the function $Vh^{(t^0)}$, and the goal is to determine the distributions of the maxima and minima of this very function. In order to achieve this we proceed in the following way. We divide the process in the interval $[0,t]$ into sub-processes according to the failure cycles and investigate the variation in time of the amount of material in the storage in these sub-processes consecutively. The method developed is based on the following statements. Consider the function in the $j^{th}$ failure cycle having the form $Vh^{(\tau,j)}(t)=Vh^{(t^0)}(t)-Vh^{(t^0)}\!\left(\sum_{i=1}^{j-1}\xi_i\,\omega_{1,1}\right)$ for $t\in\left[\sum_{i=1}^{j-1}\xi_i\,\omega_{1,1},\ \sum_{i=1}^{j}\xi_i\,\omega_{1,1}\right]$. Then we have the following.
Statement 1. The actual delay times related to the initial moment of the $j^{th}$ cycle become modified compared to the original ones: they are the original delays shifted to the initial moment of the cycle modulo the corresponding operation period, and are given as a function of the initial moment of the cycle as

$$
\tau_{d,s}\!\left(\sum_{i=1}^{j-1}\xi_i\,\omega_{1,1}\right)=\begin{cases}
\left\lfloor \dfrac{\sum_{i=1}^{j-1}\xi_i\,\omega_{1,1}}{\omega_{d,s}}\right\rfloor\omega_{d,s}+t^0_{d,s}-\sum_{i=1}^{j-1}\xi_i\,\omega_{1,1}, & \text{if this value is non-negative},\\[10pt]
\left(\left\lfloor \dfrac{\sum_{i=1}^{j-1}\xi_i\,\omega_{1,1}}{\omega_{d,s}}\right\rfloor+1\right)\omega_{d,s}+t^0_{d,s}-\sum_{i=1}^{j-1}\xi_i\,\omega_{1,1}, & \text{otherwise},
\end{cases} \tag{10}
$$

while the amount of material in the storage varies in time according to

$$Vh^{(t^0)}(t)-Vh^{(t^0)}\!\left(\sum_{i=1}^{j-1}\xi_i\,\omega_{1,1}\right)=Vh^{(\tau)}\!\left(t-\sum_{i=1}^{j-1}\xi_i\,\omega_{1,1}\right). \tag{11}$$
Statement 2. The number of actual delay times $\tau_{d,s}$ for all upstream and downstream units is finite.

Statement 3. The number of vectors $\{\tau_{d,s}\}$ formed from the possible actual delay times is equal to $\prod_{s=2}^{n} q_{1,s}\cdot\prod_{s=1}^{m} q_{2,s}$, where $\frac{\omega_{1,1}}{\omega_{1,s}}=\frac{p_{1,s}}{q_{1,s}}$, $\frac{\omega_{1,1}}{\omega_{2,s}}=\frac{p_{2,s}}{q_{2,s}}$, and $p_{d,s}$ and $q_{d,s}$ are relative primes. The direct consequence of Statement 3 is

Statement 4. The number of sub-processes arising in an arbitrary process is finite and is given by $K\cdot\prod_{s=2}^{n} q_{1,s}\cdot\prod_{s=1}^{m} q_{2,s}$, where $K$ is the number of possible values of $\xi_1$.

Statement 5. The maximum and minimum problems, providing the maximum and minimum values of the variation of the amount of material in the intermediate storage in any failure cycle, are reduced to finding the maxima and minima of a finite number of functions.

Statement 6. Let $M^{(\tau)}(k)$ and $m^{(\tau)}(k)$ denote the global maximum and global minimum of the problem of Statement 5, where $k$ stands for the possible values of the random variable $\xi_1$. Then the maximum and minimum values of the amount of material in the intermediate storage in the whole $j^{th}$ failure cycle can be obtained as the sum of the initial amount of the cycle and the global maximum $M^{(\tau)}(\xi_j)$ and minimum $m^{(\tau)}(\xi_j)$, respectively. Subsequently, the maximum and minimum values in the whole interval of the process can be obtained recursively.
Statement 7. Any process can be built up from a random sequence drawn from the finite set of functions related to the failure cycles. By means of simulation and statistical evaluation of the results of the simulation runs, the storage can be sized at a given significance level.
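Statements 6 and 7 translate directly into a small Monte Carlo procedure. The sketch below is our illustration of that recursion, not the authors' program: each simulated cycle adds a net transfer EVh to the storage level and exposes an excursion [m, M] around it. The triples used here are only the first three rows of Table 1 (for a failure in the 1st period); the full method draws from all sub-processes according to the delay-time vector and the period of failure.

```python
# Illustrative Monte Carlo sketch of Statements 6-7.
import random

# (EVh, m, M) per sub-process; reduced illustration taken from Table 1.
subprocesses = [(3.0, -18.5, 3.6), (-13/6, -20.0, 0.0), (-11/3, -20.5, 0.0)]

def simulate(n_cycles):
    level, vmin, vmax = 0.0, 0.0, 0.0
    for _ in range(n_cycles):
        ev, m, M = random.choice(subprocesses)   # discrete uniform choice
        vmin = min(vmin, level + m)              # Statement 6: cycle initial
        vmax = max(vmax, level + M)              # amount plus global m / M
        level += ev                              # recursion over the cycles
    return vmax - vmin

runs = sorted(simulate(100) for _ in range(10000))
print("sufficient size at 95% significance:", runs[int(0.95 * len(runs))])
```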
4. Example
For the sake of illustrating the method developed, consider a batch system with two upstream and two downstream units, the operation parameters of which are as follows: $\omega_{1,1}=7$, $\omega_{1,2}=4$, $\omega_{2,1}=7$, $\omega_{2,2}=3$, $t_{1,1}=4$, $t_{1,2}=3$, $t_{2,1}=5$, $t_{2,2}=2$, $t^0_{1,1}=2$, $t^0_{1,2}=0$, $t^0_{2,1}=1$, $t^0_{2,2}=1$. Let us assume that the filling and removal rates of the units are described by functions of the form $f_{1,1}(t)=4t$, $f_{1,2}(t)=4t$, $f_{2,1}(t)=t$, $f_{2,2}(t)=8t$. Then the number of possible vectors of delay times is equal to 12; they are given in Table 1. If the first failure of unit 1 occurs in the 1st, 2nd or 3rd period of operation, respectively, then 36 different sub-process variations of the changes of the amount of material in the storage during the first failure cycle are possible. Three such sub-processes, corresponding to the vector of delay times (2,0,1,1), are shown in Fig. 2.

Fig. 2. Variation of the amount of material in the storage in the 1st failure cycle when the 1st failure occurs in the a) 1st period, b) 2nd period, c) 3rd period of operation.

Table 1. The characteristics of the possible vectors of delay times. m(k) and M(k): minimum and maximum for a failure in the k-th period of operation; EVh: expected value of the amount of material transferred to the storage during one cycle.

Vector of delay times   m(1)    M(1)   m(2)   M(2)   m(3)    M(3)   EVh
(2,0,1,1)              -18.5    3.6    -1     23.5    0      33      3
(2,0,1,0)              -20      0     -11     19.5   -8.5    31     -13/6
(2,0,1,-1)             -20.5    0     -13     13.5  -10      31     -11/3
(2,1,1,1)              -24.5    0      -8     19.5   -8      30.5   -3
(2,1,1,0)              -24.5    0     -17      9.5  -14.5    26.5   -25/3
(2,1,1,-1)             -30.5    0      -8     11.5  -12      21     -29/3
(2,-2,1,1)             -16.5   10     -17     19.5   -4      39     13/3
(2,-2,1,0)             -18.5    6     -15     17     -6.5    35     -1
(2,-2,1,-1)            -26.5   10      -4     17     -6.5    27     -1/3
(2,-1,1,1)             -10.5   11.5    -9     21.5    0      41     19/3
(2,-1,1,0)             -16      2     -10     21.5   -0.67   31      1
(2,-1,1,-1)            -22.5    3.5    -8     21.5   -6.5    33     -1/3
The values of the minima and maxima obtained for these 36 variations, together with the expected values of the amount of material transferred to the storage during one cycle, are presented in Table 1. The global minima and maxima of these functions were determined numerically by a combination of the grid and modified simplex methods. According to Statement 7, each process of the system under investigation can be built up from these 36 sub-processes.

Fig. 3. Variation of the amount of material in the storage through a) 15 and b) 100 randomly selected failure cycles, corresponding to the vector of delay times (2,0,1,1).

These processes differ from each other in the sequence of the sub-processes, placed randomly one after the other, and can be generated by computer simulation, selecting one of the possible sub-processes randomly according to a defined probability distribution. Such processes are shown in Fig. 3; they consist of 15 and 100 failure cycles, respectively, the sequences of which were generated randomly using the discrete uniform distribution. Carrying out 10,000 simulation runs, each consisting of 100 failure cycles under the same operation and simulation conditions, we obtained a mean value of the minima of -483.08, a mean value of the maxima of 27.9316, and a mean value of their differences of 509.79. Based on the numerical results, it can be concluded that the difference is smaller than 684.1619 with probability 0.95; hence 684.2 is a sufficient size for the storage at a significance level of 95%. Notice that, in the example, the expectation of the material transferred in the 1st cycle is positive on the basis of the initial delay times, but the remaining delay times, as is well seen in Table 1, cause a decreasing tendency in the variation of the process.
5. Conclusions
A method and a simulation program have been developed for sizing intermediate storages in batch/semi-continuous systems, taking into account stochastic equipment failures under general operation conditions. The method is based on the observation that any process of this system can be built up from a random sequence drawn from a finite set of failure cycles. Through the analysis of the problem, optimisation over large time intervals of the process is replaced by optimisation over the smaller intervals of the sub-processes of that very process. By means of simulation and statistical evaluation of the results of the simulation runs, the storage may be sized at a given significance level. The computation time is favourable, since it is a linear function of the number of failure cycles.
6. References
Karimi, I.A. and Reklaitis, G.V., 1985, AIChE Journal, 31, 44.
Lee, E.S. and Reklaitis, G.V., 1989, Computers chem. Eng., 13, 1235.
Mihalyko, E.G. and Lakatos, B.G., 1998, Computers chem. Eng., 22, S797.
Odi, T.O. and Karimi, I.A., 1990, Chem. Eng. Sci., 45, 3533.
Odi, T.O. and Karimi, I.A., 1991, Chem. Eng. Sci., 46, 3269.
Synthesis, Design and Operational Modelling of Batch Processes: An Integrated Approach
Irene Papaeconomou¹, Sten Bay Jørgensen¹, Rafiqul Gani¹ and Joan Cordiner²
¹CAPEC, Department of Chemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark
²Syngenta, Grangemouth Manufacturing Centre, Earls Road, Grangemouth, Stirlingshire, FK3 8XG, United Kingdom
Abstract The objective of this paper is to present a general methodology for the synthesis and design of batch processes. However, for the synthesis to be complete the operational modelling of the individual batch operations, such as reaction and separation, needs to be performed. The general methodology comprises a set of algorithms that supply the batch operational routes, which are a sequence of sub-tasks performed in order to achieve a specific objective for the reaction or separation task. That sequence is the operational model for every unit. Three algorithms are highlighted for the operational design of batch reactors, distillation columns and crystallizers. The algorithms are tested on case studies and the application results, including verification of the generated operational sequences through dynamic simulation, are presented.
1. Introduction
The recent increase in the production of high-value-added, low-volume specialty chemicals, fine chemicals, pharmaceuticals and biochemicals has created a growing interest in batch processing technologies. In order to achieve a specific objective for the product, a number of tasks usually need to be performed for a period of time. The synthesis of batch operations requires the identification of the necessary tasks and their sequence. However, only when the individual tasks are fully characterized is the synthesis of batch operations complete. This can be achieved by simultaneous modelling and design of the operation of each batch task, where the objective is the minimization of a number of variables, such as time, utility costs or a combination. The operational modelling and synthesis in our approach entails the identification of sub-tasks and the order in which they need to be performed so as to reach the specified objective for the overall task. This provides the batch recipe for the individual task. The information acquired from the synthesis and the operational design is important for planning, scheduling and control design for batch operations. The general methodology consists of a number of algorithms that generate feasible batch recipes for specified operational and end constraints. The algorithms that have been developed so far are applicable to one-phase batch reaction, non-azeotropic batch distillation and also to batch crystallization and/or two-phase single unit separation. The algorithms, which are rule-based, provide the identity of the operational sub-tasks to be performed and the end of each sub-task and thereby simultaneously synthesize, design and model the batch operations. The process design, the chemical hazard, the environmental impact and the safety risk of the processes define a set of constraints for the problem, which are then used to identify the end of the sub-tasks and to reject
infeasible operational schemes. Thermodynamic and physical insights are used to determine the next feasible sub-task, until the product constraints are met. The sub-tasks are defined as operations where the manipulated variables remain constant during the period of the sub-task. For example, operating at a specific reflux ratio and/or vapour boilup rate could be a sub-task considered for batch distillation. The algorithms also identify the principal operational parameters and a corresponding operational model where these parameters are the main variables. The operational model can be verified either experimentally and/or through simulation. Dynamic simulation has been used to analyse and validate the algorithms. The presentation will highlight the overall methodology through illustrative case studies.
2. Methodology
The objective for the synthesis of batch operational sequences is to minimize the operating time and/or cost of operation. The focus in some cases might be the optimal time, while in others the minimization of energy/operating costs. As there is a trade-off between these two objectives, ultimately an optimization problem will have to be formulated and solved, where appropriate weights can be given to time and operating costs. However, all optimum solutions have to be feasible. Therefore, a methodology has been developed for the synthesis of batch operational routes, which consists of a set of algorithms that generate feasible/near optimum batch recipes for specified operational and end constraints. The common ground of these algorithms is the existence of a number of constraints that need to be satisfied at all times and the usage of manipulated variables to ensure feasible operation. The algorithms handle the operational modelling of each task by identifying the sequence of sub-tasks that need to be performed in order to achieve the objectives of the specified operation. An important feature of the algorithms is that the insights employed actually identify near optimum solutions without the use of advanced numerical optimization techniques.
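The common structure of these algorithms can be summarised as a small control loop. The skeleton below is our reading of the methodology, not the authors' implementation; the rule functions are left as parameters because each task type (reaction, distillation, crystallization) supplies its own end-detection and selection rules.

```python
# Minimal skeleton of the shared rule-based structure (our sketch):
# repeat sub-task selection until an end constraint is met, closing a
# sub-task whenever an operational constraint is about to be violated.
def generate_recipe(state, end_met, pick_next_subtask, about_to_violate, step):
    """All four callables are problem-specific rules supplied by the task."""
    recipe = []
    subtask = pick_next_subtask(state, None)      # first sub-task from rules
    while not end_met(state):
        if about_to_violate(state, subtask):      # end of sub-task detected:
            recipe.append(subtask)                # record it and let the
            subtask = pick_next_subtask(state, subtask)  # rules pick the next
        state = step(state, subtask)              # advance operational model
    recipe.append(subtask)
    return recipe
```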
3. Algorithm for the Operational Design of Batch Reaction
The algorithm for the design of the operation of batch reactors, with only one phase present, multiple reactions taking place and operational constraints on temperature, has been presented earlier (Papaeconomou et al., 2002). The key factor is the existence of desired reactions and of competing reactions, where the latter need to be suppressed. In addition to the operational constraints, there are also end constraints/objectives, namely the mole fraction of the limiting reactant in the reaction of interest, which should be as low as possible, and the progress of the reaction of interest, which should be as high as possible. At least one of the end constraints has to be satisfied. In this algorithm, apart from the operational constraints, a supplementary constraint on the projected end value of the selectivity is used. Selectivity Sij of reaction i over competing reaction j is defined as the ratio of the reaction rate of reaction i to the reaction rate of reaction j. The algorithm helps to identify the first operating step (sub-task) based on the highest selectivity. For the generation of the recipe, rules are employed at all points to identify the end of each sub-task and to determine the next feasible sub-task, such as isothermal operation, adiabatic operation, heating, cooling, etc. The procedure is repeated until at least one of the product (end) objectives is met.
Short example: The case is a batch reactor where a set of reactions is proceeding and operational constraints on the reactor temperature are present [300 K = Tmin < T < Tmax = 360 K]. There are reactions leading to the product, which are desired, while there are
also competing side reactions that need to be suppressed:
Reaction 1: (R1) + (R2) -> (I1)        Reaction 3: 2 (I1) -> (I2) + (B) + (H2)
Reaction 2: (R1) + (I1) -> (A)         Reaction 4: (I2) + (I1) <-> (C)
The desired product is I2 and the desired intermediate product is I1. Thus, reactions 1 and 3 are wanted, while reactions 2 and 4 are unwanted. Therefore, the selectivities S12 and S34 must be high and above a certain value Sconstraint (example: Sconstraint = 15). The application of the algorithm described above briefly provides the following sequence of operations.

Table 1. Operational sequence for the case study generated from the algorithm.

Sub-task no   Type of operation      Operating conditions        Operating time
1             Isothermal operation   300 K                       0.1 hr
2             Heating                Different amounts of heat   varies
3             Cooling                Different amounts of heat   varies
4             Heating                Different amounts of heat   varies
For this example, operating isothermally or adiabatically at any (starting) temperature in the accepted temperature range leads to infeasible operation, because either the temperature or the selectivity constraint is eventually violated. However, isothermal operation at 300 K is the best starting point and it is therefore selected as sub-task 1. As soon as the selectivity constraint was about to be violated, the end of this sub-task was detected and, according to the rules, the next sub-task was identified. Sub-task 2 was found to be heating, because it promotes reaction 1 (less exothermic) instead of reaction 2 (exothermic). A number of alternative feasible operational sequences were generated, as different amounts of heat from a range of available heat input were applied in sub-task 2. That also meant that the operating time for this step varied. The end of sub-task 2 was detected as the temperature constraint was about to be violated, and cooling was applied as sub-task 3. The last sub-task was heating, for the same reasons as sub-task 2. A detailed description of the results can be obtained from the author. The operational sequences were generated together with corresponding information related to the number, type and operating conditions of the sub-tasks. This information was then used to verify the feasibility of the alternatives through dynamic simulation with BRIC in the ICAS environment.
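For illustration, the selectivity test that ends sub-task 1 can be written out in a few lines. The rate laws and Arrhenius parameters below are hypothetical (the paper gives only the constraint S12 >= Sconstraint = 15); they are chosen so that heating raises S12, consistent with the rule that heating promotes reaction 1 over reaction 2.

```python
# Sketch of the selectivity check S12 = r1/r2 (hypothetical kinetics).
import math

def arrhenius(k0, Ea, T, R=8.314):
    return k0 * math.exp(-Ea / (R * T))

def selectivity_S12(T, c_R1, c_R2, c_I1):
    r1 = arrhenius(2.3e9, 8.0e4, T) * c_R1 * c_R2   # desired: R1 + R2 -> I1
    r2 = arrhenius(1.0e6, 6.0e4, T) * c_R1 * c_I1   # competing: R1 + I1 -> A
    return r1 / r2

# End-of-sub-task rule: close isothermal operation at 300 K once S12
# approaches the constraint value of 15 (here as I1 accumulates).
print(selectivity_S12(300.0, 1.0, 1.0, 0.05))   # ~15 with these parameters
```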
4. Algorithm for the Operational Design of Batch Distillation
The algorithm for the generation of operational sequences for the case of batch distillation has been discussed in Papaeconomou et al. (2003) for the separation of multicomponent non-azeotropic mixtures. In this paper, a case study of a binary azeotropic mixture is presented. The algorithm is applied in order to obtain an operational route that will remove the azeotrope in minimum time and/or cost, so that the desired high-purity product will remain in the vessel. The objective of the algorithm is to identify a priori the necessary sequence of sub-tasks in order to achieve the desired specified end objectives for the products. These objectives are the product purity and the amount of product (yield), and they both have to be fulfilled. The algorithm employs a repetitive procedure to determine the operating conditions (reflux ratio and vapour boilup rate), along with a very good approximation of the operating time, for each sub-task. The only given data needed are the amount and composition of the initial charge and the desired product (distillate) specifications. The algorithm uses a set of simple equations for the distillation column (Diwekar, 1995) and adapts well-known methods, such as the driving force approach and the McCabe-Thiele
diagram, to find quickly a near optimum recipe for the separation task. The strong feature of this algorithm is the use of the driving force approach to find the minimum reflux ratio and the reflux ratio to be used for the specific feed. The existence of a driving force is what makes the distillation feasible, and that is why it is not possible to cross a distillation boundary, such as an azeotrope, where there is no driving force. As discussed in Gani and Bek-Pedersen (2000), operating at the largest driving force leads to near minimum energy expenses. The advantage of using the driving force approach is that it gives the physical insight needed to operate in an easy and near optimal way in terms of energy costs. For a specific feed composition, the largest driving force corresponds to the minimum reflux ratio, which however can only be supported by an infinite number of plates. Thus, for a specific number of plates a larger reflux ratio has to be used, but it should still be as close to the minimum as possible. The end of each sub-task is detected when the following violation is about to take place: at some point the reflux ratio used approaches the actual minimum value for the corresponding composition in the vessel, and a new reflux ratio has to be used. The limiting composition at the end of the sub-task is found using the simple graphical McCabe-Thiele method. The distillate amount and the amount remaining in the vessel are calculated from simple mass balance equations. The operating time of the sub-task is found from the overall material balance around the top of the column. This procedure is repeated until the desired yield is achieved.
Short example: A mixture of methanol and methyl-acetate (15 kmol% MeOH and 85 kmol% MeAc) is to be distilled. The objective is to achieve 99% purity of the methyl-acetate left at the bottom of the column, with a 95% recovery of the maximum yield. In the case of minimum boiling azeotropic mixtures, the azeotrope is considered to be the first product. The algorithm uses the azeotropic composition as the distillate specification. The application of the algorithm provides the following sequence of sub-tasks. The simulation engine in ICAS for batch processes, BRIC, has been employed for dynamic simulation to verify the suggested recipe. The operating conditions for each sub-task were defined, with the distillate composition being the end constraint for the sub-task, except for the last sub-task, where the purity of MeAc left in the column was the end constraint. The results are listed in Table 2.

Table 2. Operational sequence for obtaining the azeotrope, leaving pure MeAc in the column.

Sub-task no   Reflux ratio   Vapour boilup rate   Operating time   Simulated operating time   Ach. MeAc purity   Ach. MeAc recovery
1             4.8            100                  0.942 hr         1.176 hr                   95.89%             -
2             10             100                  0.465 hr         0.459 hr                   99.05%             95.91%
Table 3. Results for constant reflux ratio operation (✓ = objective achieved).

Operation   No of intervals   Reflux ratio   Vapour boilup rate   Simulated operating time   Ach. MeAc purity   Ach. MeAc recovery
a)          1                 4.8            100                  1.615 hr                   99.05% ✓           84.90%
b)          1                 8.5            100                  2.324 hr                   99.05% ✓           95.05% ✓
c)          1                 10             100                  2.629 hr                   99.05% ✓           96.90% ✓
d)          2                 [4.8, 10]      100                  1.635 hr                   99.05% ✓           95.91% ✓
The operational sequence listed above was compared with constant reflux ratio operation. In that case, it was found that for low values of the reflux ratio the two end objectives could not both be achieved. For high values of the reflux ratio, the operating time was significantly higher than the one achieved by the operational sequence. Compared to operation with a constant reflux ratio where both product objectives are achieved, the generated recipe is about 30% faster, as can be seen in Table 3.
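The driving force calculation at the core of the algorithm is easy to reproduce for an idealised binary system. The sketch below assumes a constant relative volatility, which is our simplification for illustration only; the methanol/methyl-acetate system of the example is azeotropic, so the real algorithm works with the actual equilibrium curve, on which the driving force vanishes at the azeotrope.

```python
# Sketch of the driving-force approach (Gani and Bek-Pedersen, 2000) for a
# binary mixture with an assumed constant relative volatility alpha.
def equilibrium_y(x, alpha):
    return alpha * x / (1.0 + (alpha - 1.0) * x)

def driving_force_max(alpha, n=1000):
    """Locate the composition with the largest driving force F = y - x."""
    return max((equilibrium_y(i / n, alpha) - i / n, i / n)
               for i in range(n + 1))

F, x_at_max = driving_force_max(alpha=2.0)   # alpha = 2.0 is an assumption
print(f"max driving force {F:.3f} at x = {x_at_max:.3f}")
```

Operating so that the column sees the feed near this composition corresponds to the minimum reflux ratio; as the vessel composition drifts away during the batch, the usable driving force shrinks, which is exactly the sub-task end condition described above.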
5. Algorithm for the Operational Design of Batch Crystallization
The algorithm for the operational design in batch crystallization is illustrated in Figure 1. The objective of the algorithm is to identify in advance the necessary sub-tasks and their sequence, in order to achieve specified objectives for the separation task, namely the recovery of solutes (yield). The algorithm uses insights obtained from solid-liquid equilibrium diagrams, with the purpose of generating a sequence of feasible/near optimum operational steps. The algorithm consists mainly of a repetitive procedure that identifies the nature of each sub-task and the operating conditions. The initial step is the actual generation of the phase diagram. The phase diagram is derived by drawing the solubility curves for a given temperature range for the operation of the crystallizers. In the repetitive procedure, the feed location is pinpointed on the diagram and the feasibility of the precipitation of the desired solid solution is checked. This is done by relating the feed point with the invariant point connected to the desired solute at each temperature. For the specific feed, the solute to precipitate is identified and the operational task is specified, in terms of choosing the operating temperature, removing or adding solvent, or mixing streams. The exact amount of solvent to evaporate or add can be found from the lever principle, since the material balances can be represented on the phase diagrams in the form of tie lines. It is important to avoid regions of the diagram where two solids precipitate, since this is not effective separation. For that reason, evaporation is always less than the maximum and dilution is more than the minimum. Knowing the exact location of the slurry, the product yield can be computed with the lever principle. The composition of the mother liquor in equilibrium with the precipitated solute can be located on the phase diagram and treated as a new feed. The above-described procedure is repeated until the desired or maximum yield for the feed and range of operations is achieved.
Figure 1. Schematic presentation of the algorithm for batch crystallization (draw phase diagram; select operational task and conditions; solid/liquid separation; repeat with the mother liquor).

Figure 2. Phase diagram for the ternary system H2O-NaCl-KCl.
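The lever (inverse lever-arm) principle mentioned above can be illustrated in a few lines of code. The sketch below is ours, not the authors'; the tie-line compositions are hypothetical numbers chosen only to show the mechanics of reading Figure 2.

```python
# Sketch of the lever principle on a straight tie line of the ternary
# diagram; compositions are mass fractions of the tracked component.
def lever_split(feed, phase1, phase2):
    """Fractions of the feed reporting to phase1/phase2 when the feed point
    lies on the tie line joining the two phase compositions."""
    frac1 = (feed - phase2) / (phase1 - phase2)
    return frac1, 1.0 - frac1

# Hypothetical tie line at 100 C: a slurry point splits into solid NaCl
# (x_NaCl = 1.0) and a saturated mother liquor (x_NaCl = 0.28, say).
solid_frac, liquor_frac = lever_split(feed=0.40, phase1=1.0, phase2=0.28)
print(f"solid fraction of slurry: {solid_frac:.2f}")   # yield per pass
```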
Short example: A ternary mixture of water, sodium chloride (NaCl) and potassium chloride (KCl) (85, 15 and 5 wt.%, respectively) is to be separated. The objective is to recover 95% of the dissolved NaCl. The permitted temperature range is 0 to 100 °C. At first the ternary phase diagram is drawn for the temperature range; for simplicity, only the minimum and maximum temperatures are shown in Figure 2. The precipitation of NaCl is feasible at any temperature in the range. However, the slurry density (and thus the yield) is higher as the temperature increases, and so the operating condition for the crystallizer is set to be 100 °C. The mother liquor is treated as a new feed and the procedure is repeated. The second sub-task is chosen to be crystallization at 0 °C. The mother liquor from that sub-task is treated similarly to Feed 1 and a cycle option is created. The results are given in Table 4.

Table 4. Operational sequence for the batch crystallizer, suggested by the algorithm.

Sub-task no   Operation                     Temperature   Salt pptd   Yield
1             Evaporative crystallization   100 °C        NaCl        73.6%
2             Cooling crystallization       0 °C          KCl         62.5%
3             Evaporative crystallization   100 °C        NaCl        90.1%
4             Cooling crystallization       0 °C          KCl         86.0%
5             Evaporative crystallization   100 °C        NaCl        96.3% (> 95%)
6. Conclusion
A general methodology that allows the simultaneous synthesis, design and operational modelling of batch processes has been developed. To this end, a set of algorithms is used, which identify the necessary sub-tasks and their sequence, along with the principal operational parameters, creating the operational model of the process. The strong feature of the algorithms is that they generate the operational model a priori, with no use of rigorous models and minimum computational effort. All the algorithms were tested against case studies and the results were briefly presented. Dynamic simulation verified that all the generated operational sequences were feasible, achieving all end constraints, and actually superior to one-interval operation with constant values for the manipulated variables. The methodology, the algorithms and the associated computer-aided tools provide a systematic and integrated approach to the solution of problems involving batch operation sequencing and verification.
7. References
Diwekar, U.M., 1995, Batch Distillation: Simulation, Optimal Design and Control. Taylor & Francis, USA, 15-30.
Finch, B., 1970, How to Design Fractional Crystallization Processes, Ind. Eng. Chem., 62, 6.
Gani, R. and Bek-Pedersen, E., 2000, Simple New Algorithm for Distillation Column Design, AIChE J., 46(6), 1271.
Papaeconomou, I., Jørgensen, S.B. and Gani, R., 2002, A General Framework for the Synthesis and Operational Design of Batch Processes, Comput. Chem. Eng., (in press).
Papaeconomou, I., Jørgensen, S.B., Gani, R. and Cordiner, J., 2003, A Conceptual "Design" Based Method for Generation of Batch Recipes, Proceedings of FOCAPO 2003, Coral Springs, 473.
Modelling, Design and Commissioning of a Sustainable Process for VOCs Recovery from Spray Paint Booths
Sauro Pierucci¹, Danilo Bombardi², Antonello Concu², Giuseppe Lugli²
¹Department of Chemistry, Materials and Chemical Engineering, Politecnico di Milano, Milano, Italy
²CTP: Costruzioni Termodinamiche Parmensi, Parma, Italy
Abstract
Air emissions from surface coating operations result from the evaporation of the organic solvents in the coatings and consist primarily of volatile organic compounds (VOCs). The aim of this paper is to present an innovative and sustainable process based on VOC absorption. An absorption tower is fed at the top by oil, which efficiently absorbs at low temperature the VOCs contained in the off-gas that enters the tower at its bottom. Saturated oil from the column is then stripped at high temperature in a vacuum system, which condenses VOCs at a temperature slightly below ambient. Stripped oil is then recycled to the absorption tower. The paper provides a complete description of an industrial site located in Italy, including modelling, design, commissioning, industrial operating conditions and economic evaluations.
1. Introduction
Many manufactured items receive surface coatings for decoration and/or protection against damage. In general terms, the surface coating process comprises several distinct steps: surface preparation, application of coatings, and curing of coatings. Air emissions from surface coating operations result from the evaporation of the organic solvents in the coatings and consist primarily of volatile organic compounds (VOCs). In order to limit VOC emissions, two possible actions are foreseen today (Gay, 1997; Stone, 1997): VOC destruction or VOC recovery. VOC destruction is accomplished via incineration (Darvin et al., 2000). In general, the cost of incineration can go up dramatically as the temperature is raised and when auxiliary fuel is required to accomplish the incineration. This can happen if the concentration of VOCs in the air stream is below 1000 ppm (Vasilash, 1997). There are two major types of VOC recovery: refrigerated condensation (Stone, 1997) and adsorption, followed by refrigeration (Hussey and Gupta, 1997; Kent, 1999). Although there are several types of condensation systems, the most commonly adopted is a reverse Rankine cycle, where a closed-cycle heat pump with a separate working fluid is used to condense VOCs. Adsorption is a process whereby the VOCs in the air stream are captured physically on the surface of a solid such as carbon. Steam or inert gas is
then used to regenerate the solid beds by stripping the concentrated VOCs from the surface before it becomes saturated. The aim of this paper is to present an innovative and sustainable process based on VOC absorption. An absorption tower is fed at the top by oil, which efficiently absorbs at low temperature the VOCs contained in the off-gas that enters the tower at its bottom. The mass ratio between oil and gas is approximately 1, so that a tray configuration is more efficient than an alternative packed column. Saturated oil from the column is then stripped at high temperature in a vacuum system, which condenses VOCs at a temperature slightly below ambient. Stripped oil is then recycled to the absorption tower. The paper provides a complete description of the plant, including modelling, design, commissioning, industrial operating conditions and economic evaluations. The research project in the scope of this paper was jointly conducted by the CMIC Department of 'Politecnico di Milano', Faculty of Chemical Engineering, and CTP, 'Costruzioni Termodinamiche Parmensi', an Italian industry recently created and operating in the area of air cleaning for environmental purposes.
2. Absorption Process
The absorption process proposed in this paper is sketched in Figure 1. Contaminated air at ambient temperature and pressure is cooled in C-1 to approximately 4 °C before entering the absorption tray-tower, which absorbs contaminants with oil fed at the same temperature (4-5 °C) at the top of the column. Exhausted oil from the column bottom is preheated to 180 °C in the plate heat exchanger E-2. A vacuum stripper (0.2 bar absolute) provides the separation of contaminants from the oil through a top condenser working at a temperature slightly lower than ambient. Regenerated oil is then cooled in E-2, flowing countercurrently with the exhausted oil flow arriving from the column bottom, and is further cooled in the plate heat exchanger C-2. Clean air from the top of the absorption column pre-cools the air from the booths in E-1.
Figure 1. Process schematic diagram (absorber and stripper, with recirculating oil; air from booths in, clean air to atmosphere).
The cooling utilities required by C-1, C-2 and by the stripper condenser are provided by specific, electrically driven refrigeration systems. The heating utility required by the stripper reboiler is supplied electrically by specific finned, explosion-resistant heating elements. The whole process therefore requires only electrical power at its battery limits.
3. Process Thermodynamics
A typical contaminants composition is reported in Table 1. The TOC normally ranges from 1200 to 2500 mg/Nm³ in the air flow, which is maintained practically at the controlled flowrate of 14000 Nm³/h. In special conditions the TOC may reach values up to 3500 mg/Nm³.

Table 1. Typical contaminants composition.

Contaminant           mg/Nm³   Contaminant     mg/Nm³   Contaminant     mg/Nm³
Acetone               31       Ethyl Acetate   29       Butyl Acetate   492
Methyl Ethyl Ketone   683      Toluene         523      Xylenes         802
The ester alcohol 2,2,4-trimethyl-1,3-pentanediol monoisobutyrate was chosen as the absorbent oil. Figure 2 reports the experimental vapor pressure dependence on temperature. Laboratory tests were carried out to evaluate the absorption of contaminants into the oil and to validate a thermodynamic model aimed at the prediction of component volatilities in the mixture. Good agreement was found assuming the vapor phase to be ideal, while the liquid phase activity coefficients were estimated by a standard UNIFAC model. Vapor pressure correlations for the pure contaminants were derived from Perry and Green (1997).
Figure 2. Vapor pressure vs temperature of the absorbent oil.
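The vapour-liquid model stated above can be summarised in a few lines. The sketch below is illustrative only: modified Raoult's law with an ideal vapour phase, an activity coefficient that in the real model comes from UNIFAC (not reproduced here), and placeholder Antoine constants standing in for the correlations taken from Perry and Green (1997).

```python
# Sketch of the stated VLE model: K_i = gamma_i * Psat_i(T) / P.
def psat_antoine(T, A, B, C):
    """Antoine vapour pressure; units must match the constants used."""
    return 10.0 ** (A - B / (T + C))

def k_value(T, P, gamma, antoine_consts):
    # gamma would be supplied by a UNIFAC routine in the actual model.
    return gamma * psat_antoine(T, *antoine_consts) / P

# Example call with placeholder Antoine constants (NOT real data):
print(k_value(T=278.15, P=760.0, gamma=1.2,
              antoine_consts=(7.0, 1500.0, -50.0)))
```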
4. Contaminants Quality
The contaminants quality depends on the TOC value, so the assumption of a constant relative components ratio is not suited for a proper analysis of the process performance. The rationale of this situation lies in the coating adopted for each desired product. Asymptotically, it might be expected that each coated product line provides a specific value of the TOC. Extensive on-site analysis of air contaminants confirmed this assumption.
Figure 3. Experimental contaminants mass fractions (relative) vs TOC (methyl ethyl ketone, xylenes, butyl acetate, toluene).

Moreover, it was experimentally discovered that the internal contaminants composition varies almost linearly with the total TOC value. Figure 3 reports the contaminants' relative mass fractions as they result from the experimental data; the assumption of a linear dependence holds with an acceptable approximation. The data derived from Figure 3 are useful for analysing the performance of the process at various values of the TOC in the off-gas from the spray paint booths.
5. Process Design
The basic design of the process was obtained by using the commercial simulator PRISMA. The engineering was provided by the equipment suppliers. Table 2 reports the specifications used for the design. The dependence of the internal relative composition of contaminants on TOC was assumed to vary linearly, as demonstrated by the experiments previously reported in Figure 3. The design phase was conducted with the aim of finding an optimal compromise between costs and performance. This is a trial and error procedure, which also depends strongly on the client requirements and utility availability. The optimum was eventually found with the main specifications reported in Table 2. A tray efficiency of 0.6 was assumed for the absorber tower, so that a 17 valve-tray column was constructed.
Table 2. Process design specifications and main results.

Design specifications:
  Air flowrate               14000 Nm³/h
  TOC in the inlet stream    1500-2500 mg/Nm³
  TOC to atmosphere          < 500 mg/Nm³

Design main results:
  Absorption tower:  theoretical trays 10; temperature 5 °C; pressure 1 atm
  Stripping tower:   theoretical trays 12; reboiler temperature 190 °C; subcooled condenser temperature 5 °C; pressure 0.2 atm
  Recirculating oil           10 m³/h
  Total heat-exch. surface    480 m²
  Total energy absorbed       180 kW
The stripping column adopted the same efficiency and tray type. All the heat exchangers were biased-plate exchangers, which warranted a good compromise in reducing the equipment volume per cost unit.
6. Commissioning and Performances
The plant was constructed early in 2001. Due to the intrinsically innovative solution proposed, an extensive test run period was necessary before commissioning the plant to the client. On that occasion, testing of the model adopted for simulating the plant was also possible. Some typical results are compared in Figure 4, where the discharged TOC value is plotted against a parameter which is the product of two terms: the inlet TOC value times the air flowrate.
Figure 4. Comparison between experimental and predicted TOC values (discharged TOC vs inlet TOC x air flowrate x 10⁻⁶).

The model demonstrated good agreement with experiments at design conditions, while it tended to overestimate the discharged TOC values for higher TOC and air
flowrate. The predictions were in any case conservative, in favour of a better performance of the real plant.
7. Process Costs
The operational costs are mainly related to the total energy absorbed as electricity, about 180 kW. Of this, about 50% is required by the reboiler in the stripping section together with the oil cooling heat exchanger C-2. This value may be reduced almost linearly by increasing the investment cost of the heat exchanger E-2, i.e. by increasing its surface. The actual solution is in principle a compromise between investment and operational costs. Nevertheless, it is important to point out that the plant provides the flexibility to shift this optimum in both directions, investment and operational, thus allowing the most convenient solution to be adopted from time to time. The investment costs are intrinsically low, due both to the simplicity of the process and to its operating conditions. This is however true only for a limited interval of the inlet conditions, that is, the total air flowrate and its TOC: in other words, the plant capacity is limited to an upper limit of about 20000 Nm³/h of treated air with a TOC < 1500. Whenever this upper limit is violated, the plant cost is practically doubled, it being more effective to duplicate the plant itself. When this occurs, alternative technologies such as incineration are more convenient from the economic point of view, although less safe in terms of environmental impact and sustainability.
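The trade-off between the E-2 surface and the reboiler duty described above can be sketched with a back-of-the-envelope effectiveness-NTU calculation. Everything below (U, oil heat-capacity flow, temperature levels) is an assumed round number, not plant data; it only shows why enlarging E-2 cuts the residual electric duty roughly in inverse proportion.

```python
# Effectiveness-NTU sketch for the oil/oil recovery exchanger E-2
# (balanced counter-current exchanger: eff = NTU / (1 + NTU)).
def residual_duty(area_m2, U=300.0, m_cp=5000.0, dT=175.0):
    """kW still to be supplied by the reboiler/cooler after recovery.
    U [W/m2K], m_cp [W/K] and dT [K] are assumed illustrative values."""
    ntu = U * area_m2 / m_cp
    eff = ntu / (1.0 + ntu)
    return (1.0 - eff) * m_cp * dT / 1000.0

for area in (240.0, 480.0, 960.0):
    print(f"A = {area:4.0f} m2 -> residual duty ~ {residual_duty(area):5.1f} kW")
```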
8. Conclusions
An innovative process has been described for the reduction of TOC in the effluents from spray paint booths. The process has the advantages of being simple and of operating at acceptable conditions. The costs, both investment and operational, are limited and competitive, provided that the plant capacity is below a specified upper limit. The process undoubtedly has the advantage of a zero impact on the environment and of being totally sustainable, requiring only electricity as its unique utility at the battery limits. The performances are acceptable and in line with the design specifications. They may be enhanced by increasing either the operational costs by about 50% or the total surface area of the specific heat exchangers devoted to heat recovery.
9. References
Darvin, C.H., Proffitt, D. and Ayer, J., 2000, Modern Paint and Coatings, 87, 10.
Gay, R., 1997, Environmental Technology, 5, 45.
Hussey, F. and Gupta, A., 1997, Advanced Coatings Technology Conference, April 7-10, Detroit, Michigan.
Kent, S., 1999, Chemical Engineering, 106, 1.
Perry, R.H. and Green, D.W., 1997, Perry's Chemical Engineers' Handbook, 7th Edition, McGraw-Hill, Malaysia.
Stone, J., 1997, Products Finishing, 7, 85.
Vasilash, G.S., 1997, Automotive Production, 108, 4.
Comparison Between STN, m-STN and RTN for the Design of Multipurpose Batch Plants
Tania Pinto¹, Ana Paula F. D. Barbosa-Povoa²* and Augusto Q. Novais¹
¹DMS, INETI, Est. do Paço do Lumiar, 1649-038 Lisboa, Portugal
²CEG-IST, DEG, I.S.T., Av. Rovisco Pais, 1049-101 Lisboa, Portugal
Abstract
This paper looks into the design problem through each of the three types of representation, STN, m-STN and RTN, with the goal of establishing their relative merits and deriving a set of guidelines concerning their applicability. From this analysis it was also possible to identify aspects of design which require further research, and a number of recommendations is put forward to this effect. Some cases are described illustrating the flexibility and the applicability of the three types of representation, along with their numerical results.
1. Introduction
In multipurpose batch plants, a wide variety of products can be produced via different processing recipes by sharing all available resources, such as equipment, raw materials, intermediates and utilities. In order to ensure that any resource in the design can be used as efficiently as possible, an adequate methodology is necessary to address this type of problem without creating ambiguities in the process/plant representation. The first general methodology for batch plant design (Shah, 1992) that aimed to account for this aspect was the State-Task Network (STN). Later on, this methodology was improved and the maximal State-Task Network (m-STN) emerged. This considers a more general, detailed representation where some deficiencies present in the STN were overcome (Barbosa-Povoa and Macchietto, 1994). The process recipes and the plant structure (units and connections) are combined into a single framework that represents the plant topology and all legal transfers of material within the plant. More recently, a new representation, the Resource-Task Network (RTN), was applied to the design of batch plants (Barbosa-Povoa and Pantelides, 1997). However, some detail related to the full plant topology is not addressed explicitly. This paper considers the application of the STN, m-STN and RTN methodologies to two examples of multipurpose batch plant design, with the goal of emphasising their conceptual and computational differences and of setting a guideline concerning their applicability. A discrete time formulation for a non-periodic single campaign mode of operation was considered.
* Author to whom correspondence should be addressed; e-mail: [email protected]; tel: +351 1 841 77 29
2. Comparison Between STN, m-STN and RTN
When dealing with batch process design, three main aspects must be considered: the process, the plant and the operation. The STN representation, as presented by Kondili et al. (1988), addresses only the modelling of the process recipe. Its application to the design case (Shah, 1992) assumes full plant connectivity, where no location of material is considered explicitly. Two types of nodes characterise this representation: the State and the Task nodes. The State nodes (denoted as circles) represent the different types of materials within the process recipe, while the Task nodes (denoted as rectangles) represent the transformation of materials from one state to another. Later on, Barbosa-Povoa and Macchietto (1994), when addressing the detailed design of batch plants, used the m-STN representation, where the process and the plant topology are considered simultaneously. Four types of nodes are considered: the eStates, the i/oStates, the eTasks and the tTasks. These correspond, respectively, to the dedicated storage of material, the location of material, the processing tasks and the transfers of material. The RTN representation considers two types of nodes: Tasks and Resources. Tasks are operations that consume/produce a specific set of resources, while resources model the different types of resources needed in the plant. When addressing the detailed design of batch plants, process operation and plant topology must be considered. The application of the above representations resulted in the need to pay special attention to three modelling aspects. These are, respectively, the modelling of the storage tasks, the transfers/location of material in the plant, and the instantaneous characteristics of the storage and transfer tasks. In order to illustrate these aspects, a motivating example is studied, followed by the presentation of the results obtained for a second and more complex example. The GAMS/CPLEX (v 7.0) software was used, running on a Pentium III at 333 MHz.
2.1. Motivating example
A plant is to be designed at maximum profit so as to produce 80 ton of a single product (S3) from two raw materials, (S1) and (S2). Two different processing tasks are considered: task T1, which transforms S1 into S3 after 1 hour, and task T2, which processes S2 during 2 hours and generates S3. A single campaign non-periodic mode of operation was assumed over a time horizon of 5 hours. In terms of equipment, three storage tanks are available (V1, V2 and V3) to store, respectively, S1, S2 and S3, and a multipurpose reactor is suitable to process tasks T1 and T2. Vessels V1 and V2 are connected to reactor R1 (connections c1 and c2), while the latter is also connected to vessel V3 (connection c3). The plant design should provide in detail the plant structure as well as the operational schedule.
2.1.1. State Task Network (STN)
In order to address explicitly all the possible transfers of material, storage and processing tasks, the application of the STN methodology results in the representation shown in Figure 1.
Figure 1. STN process representation.

In here the concepts of State and Task were preserved as presented by Shah (1992). The resulting STN representation involves 17 nodes - 8 tasks and 9 states. The tasks model the storage of material (Ta1, Ta2 and Ta3 - for S1 in V1, S2 in V2 and S3 in V3), the processing (T1 and T2, both in R1) and the transfers of material (pi1, pi2 and pi3 - for S1, S2 and S3). The states represent not only the type of material (S1, S2 and S3) but also its location in the plant (S1a, S1b, S2a, S2b, S3a and S3b).
2.1.2. Maximal State Task Network (m-STN)
The resulting maximal State Task Network (m-STN) is shown in Figure 2. Eleven nodes are obtained, where storage, processing and transfers of material are handled explicitly, as well as the location of materials. Thus, we have three dedicated storages (S1/V1, S2/V2 and S3/V3), two iStates (ie1, ie2), one oState (io1), three transfer tasks (pi1/c1, pi2/c2 and pi3/c3) and two processing tasks (eTasks T1/R1 and T2/R1).
Figure 2. m-STN process representation.

2.1.3. Resource Task Network
For the Resource-Task Network (RTN), considering directly the definitions established by Pantelides (1994) for tasks and resources, the resulting RTN leads to 24 nodes (Figure 3).
Figure 3. RTN process representation.

These correspond to 16 resources (7 equipment resources and 9 material states) and 8 tasks (3 storage, 3 transfer and 2 processing). Note that a material resource may differ not only in the type of material involved but also in its location (e.g. S1a, S1b).
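Whichever representation is chosen, the discrete-time formulation behind all three rests on a material balance per state and time period. The following sketch (ours, not the authors' GAMS/CPLEX model) encodes the motivating example's two tasks and checks one feasible, hypothetical schedule against that balance; the equipment-allocation and design (sizing) constraints of the full MILP are omitted.

```python
# Sketch of the discrete-time storage balance common to STN/m-STN/RTN:
# the level of each state at time t equals the previous level plus what
# finishing tasks deliver minus what starting tasks withdraw.
tasks = {"T1": {"feed": "S1", "prod": "S3", "dur": 1},
         "T2": {"feed": "S2", "prod": "S3", "dur": 2}}

def stn_levels(schedule, horizon, initial):
    """schedule: list of (task, start, batch); returns {state: [level_t]}."""
    level = {s: [a] * (horizon + 1) for s, a in initial.items()}
    for name, start, batch in schedule:
        for t in range(start, horizon + 1):       # feed leaves at task start
            level[tasks[name]["feed"]][t] -= batch
        for t in range(start + tasks[name]["dur"], horizon + 1):
            level[tasks[name]["prod"]][t] += batch  # product arrives at end
    return level

# One hypothetical feasible schedule (not the reported optimal design);
# R1 occupancy does not overlap, which this sketch does not check.
sched = [("T2", 0, 20), ("T1", 2, 20), ("T1", 3, 20), ("T1", 4, 20)]
levels = stn_levels(sched, 5, {"S1": 60, "S2": 20, "S3": 0})
assert all(v >= 0 for vals in levels.values() for v in vals)  # feasibility
print(levels["S3"])   # reaches the 80 ton target at t = 5
```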
As referred to above, the main differences between the three types of representation explored are due to the need to address explicitly the storage, transfer and location of materials, and to the fact that transfers of material are assumed instantaneous. In addition, material in storage must be available continuously. Below, these points are explored.
2.1.4. Storage tasks
When modelling a storage task, the availability of material must be considered in a continuous form, while the utilisation of suitable equipment must be made explicit. Considering the application of the STN concepts, the usage of the storage vessel is handled through the definition of a task where the allocation of the equipment is accounted for (Figure 4a). Furthermore, and since the availability of material must be instantaneous, an auxiliary state (S1a) was defined. In this way the vessel utilisation is guaranteed; otherwise, the model characteristics would result in the non-utilisation of the vessel, since state S1 would always be available at no storage cost. In the m-STN the equipment allocation is implicit in the eState definition (S1/V1, Figure 4b). As for the RTN, a storage task is also created (Figure 4c). This consumes two types of resources - the material (S1) and the vessel (V1) - and produces instantaneously an auxiliary material (S1a) and, after one time unit, the equipment resource (V1). Again, the need for the auxiliary resource is explained by the instantaneous characteristic of the task, which, unless associated with a fictitious transformation of resources, would never be used.
Figure 4a. STN. Figure 4b. m-STN. Figure 4c. RTN.
Note that, if the direct application of the STN concepts were not followed, the representation for the STN and m-STN could be the same; that is, if it were assumed that in the STN a state models not only the type of material but also its allocation to the storage vessel. This is explored in terms of model resolution, as can be seen in Table 2.
2.1.5. Transfer task/material location
In either the STN or the RTN methodology, the transfer tasks are treated as normal tasks. These tasks, although not transforming the state/resource in terms of material type, move it from one place to another - resulting in a different state/resource (Figures 5a and 5c) - and use an extra resource, a suitable connection (c1), to perform the transfer. As for the m-STN (Figure 5b), the transfer of material is handled explicitly, through a transfer task (pi1) that uses a suitable connection (c1). Comparing the STN and the m-STN representations, it can be concluded that they are similar, although in terms of model statistics the first results in more variables and constraints, as will be seen later (see Table 2). This is due to the associated mathematical formulation. For the RTN, on the other hand, the number of instances is larger, which is reflected accordingly in the model statistics. Again, as before for the storage tasks, the instantaneous characteristic of these tasks implies the need to create auxiliary state/resource nodes (S1a).
Figure 5a. STN. Figure 5b. m-STN. Figure 5c. RTN.
Solving the motivating example using the three representations explored above and the data given in Table 1, the results obtained are shown in Table 2. These correspond to a 0% margin of optimality and to an objective function of 1248.54 × 10³ monetary units, where units R1, V3, C1 and C3, each with 80 units of capacity, were chosen.

Table 1. Capacities and equipment cost.

                           V1     V2     V3        R1         C1      C2      C3
Capacity [u.m.] max:min    unl.   unl.   unl.:0    200:0      200:0   200:0   200:0
Cost [10³ c.u.] fix;var    0      0      1;10⁻³    10;10⁻³    10⁻³    10⁻³    10⁻³
(c.u. = currency units; u.m. = mass units; unl. = unlimited)

Analysing the model statistics, it can be seen that the m-STN representation results in a smaller problem, both in terms of variables and constraints, followed by the STN and the RTN. As for the general versus the adapted STN, the latter results in a smaller model. The model statistics influence the CPU times, as can be seen in Table 2: the m-STN is solved most quickly (0.234 s), followed by the STN (0.312 s) and finally the RTN (0.359 s). However, the differences are not very marked.

Table 2. Computational data.

Methodology     No. Variables   No. Binary   No. Constraints   CPU time (s)   LPs
STN - general   137             48           227               0.313          6
STN - adapted   98              36           176               0.312          5
m-STN           75              24           122               0.234          7
RTN             364             127          511               0.359          16
STN - general (dedicated storage modelled as a task); adapted (state/unit allocation).

2.2. Example 2
Using an example proposed by Barbosa-Povoa and Macchietto (1994), the above representations are again explored. Here, a plant must be designed at maximum profit so as to produce three final products, S4, S5 and S6, with production capacities between [0:80] ton for S4 and S5 and [0:60] ton for S6, from two raw materials, S1 and S2. The process operates in a non-periodic mode over a time horizon of 12 hours. The results in terms of model statistics are shown in Table 3. The problem modelled through the m-STN presents the smallest statistics, followed by the STN-adapted and finally the RTN. The same behaviour is observed when analysing the associated computational times.
These facts indicate that, again, the m-STN representation appears as the most adequate for the modelling/solution of the detailed design of batch plants.

Table 3. Computational data.

Methodology     No. Variables   No. Binary   No. Constraints   CPU time (s)   LPs
STN - general   974             358          1743              8.734          570
STN - adapted   794             298          1503              4.469          523
m-STN           600             116          918               4.187          522
RTN             2615            1018         3662              123.765        5029
3. Conclusions
This paper discusses the applicability of the STN, m-STN and RTN representations to the detailed design of batch plants, where a discretisation of time and a non-periodic operation mode were assumed. The main differences identified concern three important, related aspects: the need to explicitly consider the storage tasks, which account for the continuous availability of material and the usage of suitable equipment; the need to consider the different locations of material in the plant and, consequently, the definition of transfers of material with suitable associated equipment; and, finally, the instantaneous characteristic of each of these tasks. These representations resulted in larger, and consequently harder to solve, models when using the RTN methodology. For the STN, larger models were also obtained with respect to the m-STN representation. In conclusion, within the scope of the problem characteristics covered, the m-STN appears as the most adequate representation for the detailed design of batch plants, since it exploits the problem characteristics, reducing the need for auxiliary instances in the representation as well as reducing the associated mathematical formulation statistics. Thus, the choice of an adequate representation for the solution of a given problem should, as much as possible, exploit the problem's intrinsic characteristics. However, it is important to note that the work presented should be further explored and more examples should be solved so as to confirm this conclusion. Also, other problem characteristics, such as set-up dependency, cleaning needs and connectivity suitability, amongst others, should be studied. This is now ongoing research by the authors.
4. References
Barbosa-Povoa, A.P.F.D. and Macchietto, S., 1994, Detailed design and retrofit of multipurpose batch plants. Computers Chem. Engng, 18, 11/12, 1013-1042.
Barbosa-Povoa, A.P. and Pantelides, C.C., 1997, Scheduling and design of multipurpose plants using the resource-task network unified framework. Computers Chem. Engng, 21b, S703-S708.
Kondili, E., Pantelides, C.C. and Sargent, R.W.H., 1988, A general algorithm for scheduling batch operations. In Proc. of the 3rd Int. Symp. on Process Systems Engineering, pages 62-75, Sydney, Australia.
Pantelides, C.C., 1994, Unified framework for optimal process planning and scheduling. In D.W.T. Rippin and J. Hale, editors, Proc. Second Conf. on Foundations of Computer Aided Operations, CACHE Publications, pages 253-274.
Shah, N., 1992, Efficient scheduling, planning and design of multipurpose batch plants. Ph.D. Thesis, Imperial College, University of London, U.K.
Generalized Modular Framework for the Representation of Petlyuk Distillation Columns
P. Proios and E.N. Pistikopoulos*
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London SW7 2BY, U.K.
Abstract
In this paper the Generalized Modular Framework (Papalexandri and Pistikopoulos, 1996) is used for the representation of the Petlyuk (fully thermally coupled) column. The GMF Petlyuk representation, which avoids the use of common simplifying assumptions while keeping the problem size small, is validated for a ternary separation by a direct comparison of its results with those obtained from a rigorous distillation model.
1. Introduction
The Petlyuk column (Petlyuk et al., 1965) is an energy-efficient distillation system which, along with its thermodynamically equivalent Dividing Wall Column (Wright, 1949), has been reported to achieve energy savings of up to 40% compared to conventional simple column arrangements (Glinos and Malone, 1988; Schultz et al., 2002). The importance of this complex distillation column has compelled the development of numerous methods for its design and analysis. These methods can be classified into two main categories, namely those using simplified (shortcut) models and those using rigorous (detailed) models. Petlyuk et al. (1965) used shortcut calculations for the minimum reflux based on constant relative volatilities and internal flowrates. Cerda and Westerberg (1981) developed a shortcut model for the minimum reflux assuming sharp separations for the Petlyuk column. In Fidkowski and Krolikowski (1986) the Petlyuk column was studied for ternary mixtures and sharp separations through a shortcut model for the minimum vapour flowrate based on the Underwood method. Glinos and Malone (1988) and Nikolaides and Malone (1988) designed the Petlyuk column using shortcut calculations under constant relative volatilities and equimolar flowrates. Carlberg and Westerberg (1989) and Triantafyllou and Smith (1992) used a three-simple-column approximation of the Petlyuk column; the former proposed a shortcut model for the minimum vapour flowrate for nonsharp separations, whilst the latter based their design on the Fenske-Underwood-Gilliland shortcut techniques. Halvorsen and Skogestad (1997) used a dynamic shortcut model based on assumptions of equimolar flowrates and constant relative volatilities for their Petlyuk/Dividing Wall Column model. Agrawal and Fidkowski (1998) used Underwood's method for their Petlyuk design, and Fidkowski and Agrawal (2001) proposed a shortcut method for the separation of quaternary and higher mixtures in
Petlyuk arrangements, extending the Fidkowski and Krolikowski (1986) method. Shah and Kokossis (2001) designed Petlyuk columns in their framework based on the Triantafyllou and Smith (1992) shortcut procedure. Finally, Amminudin et al. (2001) proposed a shortcut method for the design of Petlyuk columns based on the equilibrium stage composition concept. It must be noted that the above methods provide fast and simple ways of designing and analysing the performance of the Petlyuk column. However, the fact that they are based on simplifying assumptions can place a limitation on their accuracy and applicability, notably for the cases where these assumptions do not hold. This limitation can be overcome through the use of rigorous methods, which do not rely on simplifying assumptions. Chavez et al. (1986) examined the multiple steady states of the Petlyuk column through a detailed tray-by-tray model under fixed design, which was solved with a differential arc-length homotopy continuation method. Dünnebier and Pantelides (1999) designed Petlyuk columns using a detailed tray-by-tray distillation model based primarily on the rigorous MINLP distillation model of Viswanathan and Grossmann (1990). Also based on the latter, Yeomans and Grossmann (2000) proposed a disjunctive programming model for the design of distillation columns, including Petlyuk arrangements. These methods are based on detailed and accurate models with general applicability; however, they generate considerably larger nonlinear programming problems, which lead to an increase in the computational effort. The scope of the presented work is twofold: a) to provide a valid method for representing and analyzing the performance of the Petlyuk column with respect to its energy efficiency potential at a conceptual level and b) based on this, to lay the foundations for the extension of the method to the synthesis level, that is, for the generation and evaluation of all column arrangements for this separation problem, involving simple and also (partially) thermally coupled columns. These will be realized in an integrated way, from a process synthesis point of view, without generating a large optimization problem (as the rigorous methods do), while avoiding the common limiting assumptions characteristic of the shortcut methods.
2. The Generalized Modular Framework
In this work the Petlyuk column is represented through the Generalized Modular Framework (GMF) (Papalexandri and Pistikopoulos, 1996), which is an aggregation framework for process synthesis/representation. The GMF is based on the fact that a large number of process operations are characterized by mass and heat transfer phenomena (for instance, the mass and heat exchange between liquid and vapour streams in distillation); by using a generalized method for capturing these phenomena, the process operations in question can be systematically represented in a compact and unified way. The GMF, through its generalized mass and heat exchange modelling, aims in that direction. In brief, the GMF is a superstructure optimization method and, like most methods of this class, consists of a Structural Model, responsible for the generation of the (structural) process alternatives, and a Physical Model, responsible for the evaluation of their performance/optimality.
Figure 1: GMF Building Blocks (Ismail et al., 2001).

The Structural Model consists of: (i) the GMF building blocks and (ii) their interconnection principles. The GMF building blocks (Figure 1) are representations of higher levels of abstraction and lower dimensionality in which mass/heat or pure heat exchange takes place. The existence of the building blocks is denoted mathematically through the use of binary (0-1) variables. The interconnection principles define the way the various building blocks should be connected to each other for the generation of physically meaningful alternative units and their resulting flowsheets. The mathematical translation of these principles is realised through a set of mixed and pure integer constraints, which define the backbone of the GMF structural model. The GMF Physical Model is employed for the representation of the underlying physical phenomena of the generated structures. Each building block is accompanied by its physical model, which is based on fundamental (and thus general) mass and heat exchange principles at the blocks' boundaries, consisting of mass and energy balances, molar fraction summation constraints and appropriate Phase Defining and Driving Force Constraints arranging the mass and heat transfer. The complete GMF mathematical model, as a combination of the structural and physical models, is a Mixed Integer Nonlinear Programming (MINLP) problem, and can be found in detail in Papalexandri and Pistikopoulos (1996) and Ismail et al. (2001).
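To make the structural model concrete, the following is a minimal sketch of how block-existence and interconnection binaries, filtered by pure-integer interconnection constraints, span a space of candidate structures. It is our own illustration under simplified assumptions (the module count and the two constraints shown are invented), not the published GMF formulation:

```python
# Minimal sketch of a GMF-style structural model (illustrative assumptions,
# not the published formulation): binary existence variables for blocks and
# arcs, filtered by pure-integer interconnection constraints.
from itertools import product

N_BLOCKS = 3                                        # candidate mass/heat modules
ARCS = [(i, j) for i in range(N_BLOCKS) for j in range(N_BLOCKS) if i != j]

def feasible(y, c):
    """Interconnection principles as pure 0-1 constraints (simplified)."""
    for (i, j), c_ij in zip(ARCS, c):
        if c_ij and not (y[i] and y[j]):            # an arc needs both end blocks
            return False
    for i in range(N_BLOCKS):                       # no isolated blocks when
        if y[i] and sum(y) > 1:                     # more than one block exists
            if not any(c_ij for (a, b), c_ij in zip(ARCS, c) if i in (a, b)):
                return False
    return True

alternatives = [(y, c)
                for y in product((0, 1), repeat=N_BLOCKS)
                for c in product((0, 1), repeat=len(ARCS))
                if feasible(y, c)]
print(len(alternatives), "structurally feasible alternatives out of",
      2 ** (N_BLOCKS + len(ARCS)), "candidate 0-1 assignments")
# Fixing all binaries to chosen 0/1 values, as is done below for the Petlyuk
# column, reduces the MINLP to an NLP over the physical model of that structure.
```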
For the GMF representation of the Petlyuk column a minimum number of 6 mass/heat and 2 pure heat modules is employed (Figure 2). The connectivities of the building blocks are appropriately arranged so that the complex structure of the Petlyuk column is obtained. This is done by fixing the corresponding binary variables to 0 or to 1 for the respective nonexistence or existence of building blocks and their interconnections. For the Petlyuk column representation, each mass/heat module represents a column section (an aggregation of trays) where a separation task takes place, while the pure heat exchange modules represent the condenser (cooler) and the reboiler (heater) of the Petlyuk column. It must be noted that, since a tray-by-tray model is not employed, the equilibrium constraints are replaced by Driving Force Constraints at the two ends of each of the six mass/heat modules, according to the type of contact (countercurrent for distillation). These constraints, along with the Phase Defining Constraints and the conservation law constraints, ensure mass and heat transfer feasibility and define the distribution of the components among the existing building blocks. However, the main motivation for representing the Petlyuk column through the GMF lies in the latter's main representational advantages, which for the examined case are summarised below: (i) the GMF physical model captures efficiently the underlying mass/heat transfer phenomena, since it is not based on simplifying and limiting assumptions such as sharp splits, equimolar flowrates and constant volatilities, and it does not involve any shortcut calculations; (ii) the GMF physical model can accommodate any thermodynamic model; (iii) the GMF structural and physical models allow the representation of the Petlyuk column in an aggregated way, leading to a smaller and easier to solve optimization problem; and (iv) the framework can potentially be extended to the evaluation of other (distillation) systems through a superstructure based on the existing six mass/heat modules and by allowing more interconnections. In the following section the above advantages and the framework's validity and representational merit will be demonstrated through a GMF/Petlyuk column case study.

Figure 2: Petlyuk Column (conventional and GMF representation).
3. Numerical Results - Validation
The GMF representation of the Petlyuk column is employed for the separation of the ternary mixture of Benzene, Toluene and o-xylene. The problem data were taken from Chavez et al. (1986) and involve the separation of a saturated liquid feed of 211.11 mol/s, with molar fractions of Benzene, Toluene and o-xylene of 0.2, 0.4 and 0.4, respectively, into three product streams with molar fractions of 0.95, 0.9 and 0.95 in the above components. The objective is the minimization of the utility cost. For a fixed (Petlyuk) structure, the corresponding GMF mathematical problem is a nonlinear programming (NLP) problem, which was solved in GAMS (Brooke et al., 1992) using the solver CONOPT2. Due to the inherent stream mixing and splitting terms the problem is nonconvex and is solved only to local optimality. However, a systematic procedure has been employed, with appropriate initial guesses and bounds for the stream flowrates, temperatures and molar fractions, in order to find a local optimal point which represents the potential (energy consumption levels) of the examined Petlyuk column. From the optimization runs for the mixture and feed composition examined, the GMF provided the energy consumption levels (heater duty of 9,026.3 kW) and the operating conditions of the Petlyuk column, using the mass/heat exchange principles of the GMF physical model.
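Since the fixed-structure NLP is nonconvex, the reported local optimum depends on the initialisation. A generic multi-start sketch of the kind of systematic initialisation procedure described above follows; the bounded variables and the objective are stand-ins of our own, as the actual model was solved in GAMS with CONOPT2:

```python
# Generic multi-start local search for a nonconvex NLP (a sketch; the GMF
# model itself was solved in GAMS/CONOPT2).  The objective is a stand-in.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
bounds = [(0.0, 1.0), (0.0, 1.0)]        # e.g. scaled flowrates / molar fractions

def stand_in_cost(x):                    # hypothetical nonconvex utility cost
    return (x[0] * x[1] - 0.3) ** 2 + 0.1 * np.sin(8.0 * x[0])

best = None
for _ in range(20):                      # multiple bounded initial guesses
    x0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
    res = minimize(stand_in_cost, x0, bounds=bounds, method="L-BFGS-B")
    if res.success and (best is None or res.fun < best.fun):
        best = res
print("best local optimum found:", best.x, best.fun)
```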
However, since the GMF physical model is an aggregated (and thus nonconventional) model, the validity of the GMF results for the Petlyuk representation was evaluated by comparing them, quantitatively and qualitatively, with those derived from a conventional tray-by-tray model. For this purpose, the rigorous model of Viswanathan and Grossmann (1990) was used for the minimization of the operating cost, with the problem definition and the column design also taken from Chavez et al. (1986). From the results of the optimization, the two models are found to be in quantitative agreement: the reboiler heat duty in the rigorous model was 10530 kW, which is very close to that of the GMF heater, indicating that the GMF model predicted correctly the energy consumption of the Petlyuk column. The small divergence between the two is possibly due to the fact that in the GMF the bottoms product stream is removed before the heater (with less liquid entering it, Figure 2). However, such a quantitative agreement needs to be the product of a qualitative agreement (that is, in the components' distribution over the various column sections). Since the GMF does not provide information at the tray level, in order to enable a comparison of the composition and temperature profiles of the two models, the points of the feeds, interconnections and side streams of the GMF representation were placed on the corresponding points (tray locations) of the tray-by-tray model, on a common x-axis. Figure 3 shows the profiles of the Toluene composition and of the temperature in the main column of the Petlyuk arrangement. From these it is apparent that the two models are also in qualitative agreement in the main column (similar results were derived for the prefractionation column as well). This qualitative agreement shows that the GMF provided insights on the performance of the Petlyuk column, with respect to its energy consumption, based on a sound physical model capable of capturing efficiently the mass and heat transfer phenomena of the examined system. Another point of importance is related to the size of the generated optimization problem. Due to the aggregated nature of the GMF representation (where variables and equations are accounted for only at the building blocks' boundaries and not at the tray level, as in the rigorous models), a size reduction of 75% in the number of variables and constraints has been noted when using the GMF instead of the trayed model (depicted
in Figure 3 with the fewer GMF points), with direct effects on the computational effort. Of course, as can be observed in Figure 3, the GMF does not provide detailed results and profiles as the rigorous model does, but this is beyond the aim of the framework, which is not a simulation but a synthesis/representation tool at a conceptual level.

Figure 3: Qualitative Comparison of GMF and Rigorous Models (Petlyuk Column).
4. Conclusions
As shown, the GMF provides a sound and useful tool for the representation and evaluation of the Petlyuk column and its underlying physical phenomena, providing valid information about the energy consumption levels of the fully thermally coupled column. Moreover, the GMF results, which were derived using an aggregated physical model and thus a significantly reduced optimization problem, were evaluated for their consistency and validity through comparison with a well-established rigorous distillation model. Finally, having validated the GMF physical model for the examined system, the complete GMF model, with the physical model used in the presented work and its full structural model (without a fixed structure, but with an adequate number of building blocks and interconnections to be determined by the optimizer), can now be used for the synthesis problem, i.e. the generation and evaluation of all the alternatives of interest (simple and complex) for the examined separation problem, which is the scope of our current research.
5. References
Agrawal, R. and Fidkowski, Z.T., 1998, Ind. Eng. Chem. Res., 37, 3444.
Amminudin, K., Smith, R., Thong, D. and Towler, G., 2001, Trans IChemE, 79(A), 701.
Brooke, A., Kendrick, D. and Meeraus, A., 1992, GAMS - A User's Guide, Scientific Press, Palo Alto.
Carlberg, N.A. and Westerberg, A.W., 1989, Ind. Eng. Chem. Res., 28, 1386.
Cerda, J. and Westerberg, A.W., 1981, Ind. Eng. Chem. Process Des. Dev., 20, 546.
Chavez, R.C., Seader, J.D. and Wayburn, T.L., 1986, Ind. Eng. Chem. Fundam., 25, 566.
Dünnebier, G. and Pantelides, C.C., 1999, Ind. Eng. Chem. Res., 38, 162.
Fidkowski, Z.T. and Agrawal, R., 2001, AIChE J., 47(12), 2713.
Fidkowski, Z.T. and Krolikowski, L., 1986, AIChE J., 32(4), 537.
Glinos, K. and Malone, M.F., 1988, Chem. Eng. Res. Des., 66, 229.
Halvorsen, I.J. and Skogestad, S., 1997, Comp. Chem. Eng., 21, S249.
Ismail, S.R., Proios, P. and Pistikopoulos, E.N., 2001, AIChE J., 47(3), 629.
Nikolaides, I.P. and Malone, M.F., 1988, Ind. Eng. Chem. Res., 27(5), 811.
Papalexandri, K.P. and Pistikopoulos, E.N., 1996, AIChE J., 42, 1010.
Petlyuk, F.B., Platonov, V.M. and Slavinskii, D.M., 1965, Int. Chem. Engng, 5(3), 555.
Schultz, M.A., Stewart, D.G., Harris, J.M., Rosenblum, S.P., Shakur, M.S. and O'Brien, D.E., 2002, CEP, 98(5), 64.
Shah, P.B. and Kokossis, A.C., 2001, Comp. Chem. Eng., 25, 867.
Triantafyllou, C. and Smith, R., 1992, Trans IChemE, 70(A), 118.
Viswanathan, J. and Grossmann, I.E., 1990, Comp. Chem. Eng., 14(7), 769.
Wright, R.O., 1949, Fractionation Apparatus, U.S. Patent 2,471,134.
Yeomans, H. and Grossmann, I.E., 2000, Ind. Eng. Chem. Res., 39, 4326.
A Multi-Modelling Approach for the Retrofit of Processes
A. Rodriguez-Martinez, I. Lopez-Arevalo, R. Banares-Alcantara* and A. Aldea
Department of Chemical Engineering and Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili, Tarragona, Spain
*To whom correspondence should be sent. E-mail: [email protected]
Abstract
The retrofit of an existing process is a complex and lengthy task. Therefore, a tool to support retrofit by reasoning about the existing process and the potential areas of improvement could be of great help. A proposal for a retrofit approach based on a multi-modelling knowledge representation is presented in this paper. The use of structural, behavioural, functional and teleological models allows the designer to work with a combination of detailed and abstract information depending on the retrofit step. The proposed retrofit process consists of four steps: data extraction, analysis, modification and evaluation. The HEAD and AHA! prototype systems were implemented for the two initial steps. These systems have been applied in a case study to the ammonia production process.
1. Introduction
Industrial processes require periodic evaluations to verify their correct operation, in both technical and economic terms. These evaluations are necessary due to changes in the markets and in safety and environmental legislation. In order to satisfy these demands it is necessary to investigate process alternatives that allow the optimal use of existing resources with the minimum possible investment. The retrofit of processes is a methodology for the analysis and evaluation of possible changes to an existing process in order to improve it with respect to some metric (economic, environmental, safety, etc.). Historically, the retrofit of processes has been largely centred on energy savings. In recent decades, significant advances in this area have been obtained through the use of the pinch methodology (Linnhoff and Witherell, 1986) and mathematical programming techniques (Grossmann and Kravanja, 1995). Other systems, such as the one proposed by Fisher et al. (1987), combine heuristic rules with decision hierarchies. These methods generate process alternatives based on the modification of the process structure or the dimensions of the items of equipment. A possible improvement to these approaches would be the reduction of the complexity originated by the use of detailed information. As an alternative approach, we propose the use of multiple models (structural, behavioural, functional and teleological) to represent detailed and abstract knowledge for the retrofit of artifacts in general and chemical processes in particular.
2. Methodology
The proposed methodology for retrofit consists of four steps and the use of a multi-model knowledge representation.

2.1. Retrofit process
Our proposed retrofit process is shown in Fig. 1.
Fig. 1. The retrofit process based on a multi-model knowledge representation.

The main steps of the retrofit process are:
• Data Extraction. Information about the artifact is extracted from an initial representation (in our case, the simulation output from HYSYS™). The HEAD system performs the data extraction (see Section 3).
• Design Analysis. The extracted information is abstracted at several levels based on a set of hierarchical functions and precedence rules. This abstracted information is analysed to identify promising sections for retrofit. The AHA! system performs the design analysis (see Section 3).
• Design Modification. Alternatives are generated based on the application of new specifications to the original artifact.
• Design Evaluation. The generated alternatives are evaluated with respect to their specifications. If an alternative does not satisfy the specifications, the Design Modification step is repeated until they are satisfied. The RETRO system is being implemented for the design modification and evaluation steps (see Section 4).

2.2. Knowledge representation
We propose a multi-modelling approach for the representation of knowledge, as suggested by Chittaro et al. (1993). In our approach, a unit (i.e. the building block of an artifact; in the case of a chemical process it corresponds to an item of equipment or a section of the process) is represented by the following types of models:
• Structural, i.e. the class of a unit and its connectivity.
• Behavioural, i.e. how the unit works.
• Functional, i.e. the role of the unit within the artifact.
• Teleological, i.e. the objective and justification of the unit.
Depending on the retrofit step and the abstraction level we can use detailed information (structural and behavioural models) or abstract information (functional and teleological models) to reason about a unit.
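As a concrete (if hypothetical) illustration of this multi-model representation, a unit can be captured by a small record holding its four models. The class and field names below are our own, not the actual AHA!/HEAD data structures:

```python
# Hedged sketch of the four-model representation of a unit; names are
# illustrative, not the actual AHA!/HEAD implementation.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    unit_class: str                                  # structural: class of unit
    inlets: list = field(default_factory=list)       # structural: connectivity
    outlets: list = field(default_factory=list)
    behaviour: dict = field(default_factory=dict)    # behavioural: how it works
    function: str = ""                               # functional: role in artifact
    purpose: str = ""                                # teleological: objective

# Example: a cooler, with behaviour inferred from input/output comparison
cooler = Unit("E-101", "cooler",
              inlets=["reactor_effluent"], outlets=["to_flash"],
              behaviour={"delta_T_K": -25.0},
              function="Temperature_change",
              purpose="bring stream to flash conditions")
print(cooler.function, cooler.behaviour)
```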
3. Application of the Methodology and Results
For the data extraction step we have implemented the HEAD (HYSYS ExtrAction Data) system. HEAD is programmed in MS Visual Basic™ and its goal is to extract information from a process flow diagram (PFD), taking advantage of the programmability features of HYSYS™. The extracted information is then sent to AHA! (Automatic Hierarchical Abstraction tool), a Java-based prototype system that generates different levels of abstraction from the initial PFD in order to identify sections where retrofit can be applied. In the near future, the output of AHA! will be used by RETRO (Reverse Engineering Tool for Retrofit Options). RETRO (now being developed in Java) will generate and evaluate process alternatives.

3.1. Generation of meta-units
Initially, the information extracted by HEAD from HYSYS™ is used by AHA! to generate Units (process blocks). A Unit consists of four models: structural, behavioural, functional and teleological. The models of a Unit are built as follows: the behavioural model is obtained by comparing its input and output values; the type of Unit and its connectivity constitute the structural model; furthermore, each Unit is associated with a functional model; finally, the teleological model defines in an abstract manner the goal and purpose of a Unit inside an artifact. The Units are abstracted by means of inference mechanisms. During this process, Meta-unit(s) are generated as a result of abstracting two or more Unit(s) and/or Meta-unit(s). These inference mechanisms are implemented as a rule-based system based on (a) the Douglas methodology (Douglas, 1988); (b) the identification of generic blocks (Turton et al., 1998); and (c) the application of a hierarchy of functions (Teck, 1995). A reduced version of the hierarchy of functions is shown in Table 1. These functions are prioritised according to the precedence shown in Fig. 2; a sketch of this abstraction step follows Table 1. The abstraction process trail can be interpreted as an inverse record of a plausible design history.
Reaction → Separation → Temperature Change → Pressure Change → Flow Change
(decreasing precedence)
Fig. 2. Functional precedence in AHA!
Table 1. Hierarchy of Functions.

General Function      Associated operations
Reaction              Reaction
Separation            Decantation, extraction, distillation, absorption, stripping, adsorption, crystallisation, leaching, drying, and membranes
Temperature_change    Heating, cooling
Pressure_change       Pressure_decrement, Pressure_increment
Flow_change           Mixing, splitting
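The sketch below illustrates one precedence-driven abstraction pass: the units whose general function currently has the lowest precedence are absorbed into Meta-units first. It is a simplified stand-in for the AHA! rule base, with a toy flowsheet and names of our own:

```python
# Simplified sketch of one abstraction pass driven by functional precedence
# (Fig. 2 and Table 1); the actual AHA! rule base is richer than this.
FUNCTION_OF = {                      # operation -> general function (Table 1)
    "reaction": "Reaction",
    "distillation": "Separation", "absorption": "Separation",
    "heating": "Temperature_change", "cooling": "Temperature_change",
    "pressure_increment": "Pressure_change",
    "mixing": "Flow_change", "splitting": "Flow_change",
}
PRECEDENCE = ["Reaction", "Separation", "Temperature_change",
              "Pressure_change", "Flow_change"]      # high to low precedence

def abstract_once(flowsheet):
    """Absorb the units of the lowest-precedence function into Meta-units,
    returning the next abstraction level and what was absorbed."""
    funcs = [FUNCTION_OF[op] for op in flowsheet]
    lowest = max(set(funcs), key=PRECEDENCE.index)   # least important function
    kept = [op for op, fn in zip(flowsheet, funcs) if fn != lowest]
    absorbed = [op for op, fn in zip(flowsheet, funcs) if fn == lowest]
    return kept, lowest, absorbed

level0 = ["mixing", "heating", "reaction", "cooling", "distillation", "splitting"]
print(abstract_once(level0))  # mixers/splitters go first, as in the case study
```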
3.2. Case study
We have applied HEAD and AHA! to the ammonia production process (see Fig. 3). In this process, a hydrogen/nitrogen stream is fed to three catalytic reactors in series. The NH3 produced is fed to the separation section (V-100, V-101) to obtain a 95% pure product stream. Two heat exchangers are used for energy recovery and two coolers are used to obtain flash conditions.
Fig. 3. Ammonia production process in HYSYS™.

The initial ammonia process representation generated by AHA! (abstraction level 0) is shown graphically in Fig. 4. Every Unit (icon) in the figure is modelled according to Section 2. All the relevant information on Units and streams can be accessed through windows, menus and buttons. Two abstraction levels above, Fig. 5 shows the second abstraction level of the ammonia process. At this abstraction level the Units corresponding to Flow_change (mixers and splitters) have been abstracted due to their lower precedence. Meta-units are represented as boxes. The designer can navigate through the generated abstraction levels and find the relations between Units and/or Meta-units at different levels. By browsing through the trail of the abstraction process a designer can identify the sections of an artifact.
Fig. 4. Multi-model representation of the NH3 process in AHA! (abstraction level 0).
Fig. 5. The second abstraction level of the ammonia process generated by AHA!.
4. Discussion
The use of structural, behavioural, functional and teleological models allows the designer to work with a combination of detailed and abstract information depending on the retrofit step. With this information it is possible to automatically identify sections of an artifact, such as the reaction and separation sections, just as a human designer does (Turton et al., 1998). We are planning for RETRO to generate process alternatives by means of methodologies grouped within the area of greener process design, and thus to propose
processes with adequate environmental and safety performance (Sylvester et al., 2000). Together with these methodologies we plan to apply:
1. Conflict-based analysis (design objectives vs. principles), to exemplify the generation of innovative alternatives, which is necessary at the abstract end of the abstraction hierarchy (Xiao-Ning et al., 2002).
2. Identification/selection of alternative solvents (Hostrup et al., 1999), as an example of process modifications applicable at the middle of the abstraction hierarchy.
3. Inclusion of effluent treatments, generally made after designing the rest of the process.
5. Conclusions
We have presented a multi-modelling approach for the retrofit of processes. Based on a multi-model knowledge representation (structural, behavioural, functional and teleological models) we can represent the artifact at different levels of detail to facilitate its retrofit (data extraction, design analysis, modification and evaluation steps). The HEAD and AHA! prototype systems have been implemented for the data extraction and design analysis steps, respectively. In particular, AHA! can automatically abstract an artifact in order to identify the process sections on which the retrofit task should be focused.
6. References
Chittaro, L., Guida, G., Tasso, C. and Toppano, E., 1993, IEEE Trans. Sys. Man Cybern., SMC-23, 1718-1751.
Douglas, J., 1988, Conceptual Design of Chemical Processes, McGraw-Hill, New York.
Fisher, W.R., Doherty, M.F. and Douglas, J.M., 1987, Ind. Eng. Chem. Res., 26, 2195-2204.
Grossmann, I. and Kravanja, Z., 1995, Comp. Chem. Eng., 19, Suppl., S189-S204.
Hostrup, M., Harper, P.M. and Gani, R., 1999, Comp. Chem. Engng., 23, 1395-1414.
Linnhoff, B. and Witherell, W.D., 1986, Oil & Gas J., 84, 54-65.
Sylvester, R., Smith, W. and Carberry, J., 2000, AIChE Symp. Ser., 96, 26-30.
Teck, T., 1995, MSc Information Technology dissertation, Department of Artificial Intelligence, University of Edinburgh, Edinburgh.
Turton, R., Bailie, R.C., Whiting, W.B. and Shaeiwitz, J.A., 1998, Analysis, Synthesis and Design of Chemical Processes, Prentice Hall, New Jersey.
Xiao-Ning, L., Ben-Guang, R. and Kraslawski, A., 2002, In European Symposium on Computer Aided Process Engineering - 12, Elsevier, 241-246.
7. Acknowledgments
We thank Hyprotech (now part of AspenTech) for the use of an academic license of HYSYS™. We would also like to thank the Universitat Rovira i Virgili (Spain) for the PhD scholarship of I. Lopez-Arevalo and the Universidad Autonoma del Estado de Morelos (Mexico) for the PhD scholarship of A. Rodriguez-Martinez.
Synthesis of Partially Thermally Coupled Column Configurations for Multicomponent Distillations
Ben-Guang Rong*, Andrzej Kraslawski and Ilkka Turunen
Department of Chemical Technology, Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta, Finland. *E-mail: [email protected]
Abstract
The synthesis of partially thermally coupled column configurations for multicomponent distillations has been studied in the context of thermodynamically equivalent structures. A complete space of the possible thermodynamically equivalent alternatives of the partially coupled (PC) configurations for multicomponent mixtures has been formulated, and a formula is presented to calculate the number of all the partially coupled schemes for any n-component mixture. The formulated alternatives of all the possible arrangements of PC configurations provide a complete search space for the optimal design of multicomponent distillation systems, not only with respect to economics but also to column equipment design.
1. Introduction
The thermal coupling technique has been used to design multicomponent distillation systems which have the potential to significantly reduce both energy consumption and capital cost compared with conventional simple column configurations. Considerable research has been conducted on thermally coupled systems for ternary mixtures. The most studied ternary thermally coupled schemes are the side-stripper (SS), the side-rectifier (SR) and the fully coupled (FC) scheme (Petlyuk column) (Petlyuk et al., 1965). For thermally coupled schemes for mixtures with four or more components, there are only a few papers in the literature, mostly concerned with finding the possible structures of the thermally coupled schemes (Sargent & Gaminibandara, 1976; Agrawal, 1996; Christiansen et al., 1997). It is well known that the side-rectifier (SR), the side-stripper (SS) and the dividing-wall column (DWC) are the most employed thermally coupled systems for ternary mixtures. In an analysis of the relationships between the known thermally coupled configurations for ternary mixtures, we observed that these well-known systems are the thermodynamically equivalent arrangements of the original thermally coupled configurations. Those original thermally coupled configurations are produced from the corresponding conventional simple column configurations by only removing the condenser(s) and/or reboiler(s) associated with submixtures of two or more components (Rong & Kraslawski, 2002). The synthesis of the PC configurations with only side strippers and/or side rectifiers for any n-component mixture has been studied in our earlier work in terms of economic evaluation (Rong & Kraslawski, 2003). We observed that, even though the differences in capital cost among the thermodynamically equivalent structures will not change the optimal PC system, there are still some differences between the thermodynamically equivalent structures of a specific PC configuration with regard to column equipment design. Thus, further investigation is needed for the optimal design of the partially thermally coupled (PC) systems among all of the possible thermodynamically equivalent structures in
terms of not only economic evaluation but also column equipment design. The main objective of this paper is to systematically synthesize the possible partially thermally coupled configurations for mixtures with four or more components, with emphasis on the optimal design of the partially coupled systems among all of the possible thermodynamically equivalent structures.
2. Generation of the Original Partially Thermally Coupled (OPC) Column Configurations
It is known that in a conventional simple column configuration for an n-component distillation there are in total n-1 simple columns (Thompson & King, 1972). Each simple column has a rectifying section with a condenser and a stripping section with a reboiler; as a consequence, each simple column configuration has in total 2(n-1) column sections, as well as 2(n-1) condensers and reboilers. Among the 2(n-1) column sections of a simple column configuration there are n column sections which produce the desired pure products from the corresponding condenser(s) and reboiler(s), each being enriched with one of the components of the feed mixture. The remaining n-2 column sections do not produce the desired pure products; their condenser(s) and/or reboiler(s) handle submixtures of two or more components. For each of these n-2 column sections there is an inevitable remixing at the end of the column section, due to the existence of the condenser (reboiler) supplying the needed reflux (boilup) for the column section. This remixing produces a separation inefficiency in the distillation system. According to Petlyuk et al. (1965), the remixing can be avoided by removing the condenser or the reboiler at the end of the column section and interconnecting the column units by two-way liquid and vapor streams, i.e. thermal coupling streams. Each rectifying (stripping) column section, after removing its condenser (reboiler), will require the liquid reflux (vapor boilup) from its subsequent column unit.
Figure 1. A five-component simple column configuration and its original thermally coupled configuration.
As an example, Figure 1a illustrates the conventional simple column configuration of the five-component separation sequence AB/CDE → A/B → C/DE → D/E. It is seen from Figure 1a that there are three column sections whose products are internal submixtures with two or three components: column 1 has rectifying section 1 with condenser AB and stripping section 2 with reboiler CDE, and column 3 has stripping section 6 with reboiler DE. By eliminating the condenser(s) and reboiler(s) associated with internal submixtures, the partially thermally coupled scheme shown in Figure 1b is produced. This partially coupled configuration has exactly the same structural arrangement of the column units as its counterpart, the conventional simple column configuration of Figure 1a. We define it as the original partially thermally coupled configuration (OPC) with regard to the thermal couplings introduced for the simple columns of the conventional configuration. Obviously, each conventional simple column configuration produces an original partially thermally coupled configuration by directly replacing the condensers and/or reboilers associated with submixtures of two or more components with thermal coupling streams. Thus, for any n-component mixture, the total number of original partially thermally coupled (OPC) configurations is equal to the number of conventional simple column sequences (SC) for an n-component mixture, which can be calculated by the following equation of Thompson & King (1972).
S_n = [2(n-1)]! / [n!(n-1)!]                                   (1)
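As an illustration, the two operations just described, locating the submixture condensers/reboilers that become thermal couplings and counting the simple column sequences with equation (1), can be sketched as follows (the split encoding is our own, not from the paper):

```python
# Sketch: locate the condensers/reboilers associated with submixtures of two
# or more components (those replaced by thermal couplings in the OPC), and
# count simple column sequences with equation (1).  Encoding is illustrative.
from math import factorial

def coupling_points(splits):
    """Products of each split with two or more components carry the
    condensers/reboilers that the OPC replaces by coupling streams."""
    return [p for top, bottom in splits for p in (top, bottom) if len(p) > 1]

# Figure 1a: AB/CDE -> A/B -> C/DE -> D/E for the mixture ABCDE
splits = [("AB", "CDE"), ("A", "B"), ("C", "DE"), ("D", "E")]
print(coupling_points(splits))           # ['AB', 'CDE', 'DE']: n-2 = 3 couplings

def n_simple_sequences(n):               # equation (1), Thompson & King (1972)
    return factorial(2 * (n - 1)) // (factorial(n) * factorial(n - 1))

print([n_simple_sequences(n) for n in range(3, 8)])   # [2, 5, 14, 42, 132]
```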
3. Generation of the Thermodynamically Equivalent Partially Coupled (TEPC) Configurations
In an OPC, each rectifying (stripping) column section, after removing its condenser (reboiler), receives the needed reflux liquid (boilup vapor) flow from the corresponding rectifying (stripping) section of its subsequent column. From Figure 1b it is seen that, by replacing condenser AB of column 1 in Figure 1a with the thermal coupling streams AB, the reflux liquid in rectifying section 1 of column 1 is supplied by rectifying section 3 of its subsequent column 2. Both reflux liquids of sections 1 and 3 come from the same condenser A of column 2 in Figure 1b. Thus, a structural degree of freedom is generated by the introduced thermal coupling AB: the thermally linked rectifying sections 1 and 3 can be combined to receive the needed reflux liquid from the same condenser A or, in other words, column section 3 of column 2 can be moved to column 1. Similarly, the thermal coupling CDE introduced in Figure 1b brings a structural degree of freedom to combine column section 6 of column 3 with column section 2 of column 1 (i.e. column section 6 is movable), and the structural degree of freedom from the thermal coupling DE makes column section 8 movable. For any n-component conventional simple column configuration, removing a condenser (reboiler) associated with two or more components of a prior column makes the rectifying (stripping) section of its subsequent column movable. Therefore, each thermal coupling introduces a structural degree of freedom into the thermally coupled system for the rearrangement of the column sections, which produces the thermodynamically equivalent arrangements. A thermodynamically equivalent structure is defined as one that has a different arrangement of the column sections among the column units from the original thermally coupled configuration, while having the same individual splits and the same thermal coupling streams for the same submixtures as its
original thermally coupled configuration. In Figure 1b, the three introduced thermal couplings AB, CDE and DE generate three structural degrees of freedom that make the three column sections 3, 6 and 8 movable among the column units. Thus, different thermodynamically equivalent structures are produced by different movements of the movable column sections among the column units. If one of the column sections 3, 6 or 8 is moved at a time, three thermodynamically equivalent arrangements are produced from the original thermally coupled configuration of Figure 1b, as shown in parts a-c of Figure 2, respectively. If two column sections are moved at a time, i.e. (3, 6), (3, 8) or (6, 8), another three thermodynamically equivalent arrangements are produced, as shown in parts d-f of Figure 2, respectively. If all three movable column sections (3, 6, 8) are moved simultaneously, the thermodynamically equivalent arrangement shown in part g of Figure 2 results.
Figure 2. Thermodynamically equivalent thermally coupled arrangements of Figure 1(b).

There are some significant differences in the structural features of the thermodynamically equivalent arrangements in comparison with the original partially thermally coupled configuration. As illustrated in Figure 1, there are two column sections in each of the column units of the original thermally coupled configuration; this is exactly the same
structure as its counterpart in the conventional simple column configuration. However, the distribution of the 8 column sections among the 4 column units differs between the thermodynamically equivalent arrangements of Figure 2. This difference results from the different movements of the movable column sections designated in the original thermally coupled configuration. Depending on the number of column sections being moved, a column unit in the thermodynamically equivalent structures of Figure 2 can have 1, 2, 3, 4 or 5 column sections. This significantly affects the design of the column equipment of the thermodynamically equivalent configurations. Furthermore, for a thermally coupled distillation configuration, the introduced thermal coupling streams are perceived to be a source of operating problems. Usually, it is expected that the pressure of the column withdrawing a vapor flow should be slightly higher than that of the column receiving the vapor flow in a thermally coupled system (Carlberg & Westerberg, 1989). Thus, there exist constraints on the determination of the pressures of the thermally linked column units when designing a PC configuration (Rong et al., 2001). Even though all of the thermodynamically equivalent structures of a partially thermally coupled configuration have the same number of thermal couplings, their pressure constraints among the column units might differ, since each movement of a column section changes the vapor flow direction of the corresponding thermal coupling streams. As discussed earlier, in any conventional simple column configuration for an n-component distillation there are n-2 column sections whose condenser(s) and/or reboiler(s) are associated with submixtures of two or more components. Thus, n-2 thermal couplings can be introduced into a conventional simple column configuration for an n-component distillation. This means that n-2 structural degrees of freedom are introduced in a partially thermally coupled configuration and, equivalently, that n-2 column sections of a partially thermally coupled configuration are movable among the column units. The following formula gives the number of thermodynamically equivalent partially coupled configurations for an n-component simple column configuration (n ≥ 3).
T_n = Σ_{i=0}^{n-2} C(n-2, i) = 2^(n-2)                        (2)
where C(n-2, 0) = 1 designates the original partially coupled configuration without movements of column sections (e.g. Figure 1b); C(n-2, i) (i = 1, ..., n-3) designates the number of thermodynamically equivalent arrangements obtained by moving i column sections at a time (e.g. parts a-c or parts d-f in Figure 2); and C(n-2, n-2) = 1 designates the side-column thermodynamically equivalent arrangement obtained by simultaneously moving all of the n-2 column sections (e.g. Figure 2g). Thus, the total number of thermodynamically equivalent partially coupled configurations for all of the conventional simple column configurations of an n-component mixture can be calculated from the following formula.
P_n = S_n × T_n = {[2(n-1)]! / [n!(n-1)!]} × 2^(n-2)           (3)

Table 1 illustrates the number of the original partially coupled configurations, as well as the total number of thermodynamically equivalent thermally coupled configurations generated from the conventional simple column configurations, for feed mixtures with different numbers of components.
Table 1. The number of thermodynamically equivalent thermally coupled schemes generated from the simple column configurations for an n-component mixture.

No. of components   No. of SC configurations   No. of OPC configurations   No. of total TEPC configurations
3                   2                          2                           4
4                   5                          5                           20
5                   14                         14                          112
6                   42                         42                          672
7                   132                        132                         4,224
8                   429                        429                         27,456
9                   1,430                      1,430                       183,040
10                  4,862                      4,862                       1,244,672
11                  16,796                     16,796                      8,599,552
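Equations (1)-(3) and the subset interpretation of the movable sections are easily checked computationally; the short sketch below reproduces Table 1 and the eight arrangements of Figure 2 (the section labels follow Figure 1b):

```python
# Check of equations (1)-(3) against Table 1, plus enumeration of the
# thermodynamically equivalent arrangements as subsets of movable sections.
from itertools import combinations
from math import factorial

def S(n):                     # eq. (1): simple column sequences
    return factorial(2 * (n - 1)) // (factorial(n) * factorial(n - 1))

def T(n):                     # eq. (2): equivalents per configuration
    return 2 ** (n - 2)

def P(n):                     # eq. (3): total TEPC configurations
    return S(n) * T(n)

for n in range(3, 12):        # reproduces Table 1 row by row
    print(f"{n:2d} {S(n):8,d} {S(n):8,d} {P(n):12,d}")

# For n = 5 the movable sections of Figure 1b are 3, 6 and 8; each subset of
# moves yields one arrangement: the OPC itself (empty subset) plus the seven
# arrangements of Figure 2 (parts a-g), eight in total.
movable = (3, 6, 8)
arrangements = [s for k in range(len(movable) + 1)
                for s in combinations(movable, k)]
print(len(arrangements), arrangements)   # 8 = 2**3
```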
Obviously, the thermodynamically equivalent partially coupled column configurations formulate a unique search space of possible thermally coupled alternatives for the optimal design of distillation systems for multicomponent separations. Because of space limitations, the optimal design of the partially thermally coupled systems among all of the TEPC configurations for specific multicomponent mixtures will be presented in future publications.
4. Conclusions
In this work, the synthesis of partially thermally coupled column configurations for multicomponent distillations has been studied with regard to the thermodynamically equivalent structures. A complete space of the possible thermodynamically equivalent alternatives of the partially coupled configurations for multicomponent mixtures has been formulated, and a formula is presented to calculate the number of all the partially coupled schemes for any n-component mixture. The formulated alternatives of all the possible arrangements of PC configurations provide a complete search space for the optimal design of multicomponent distillation systems, not only with respect to economics but also to column equipment design. This can help designers to find the final optimal thermally coupled distillation systems with concern for both economics and equipment design.
5. References
Agrawal, R., 1996, Ind. Eng. Chem. Res., 35, 1059.
Carlberg, N.A. and Westerberg, A.W., 1989, Ind. Eng. Chem. Res., 28, 1386.
Christiansen, A.C., Skogestad, S. and Lien, K., 1997, Comput. Chem. Eng., 21, S237.
Petlyuk, F.B., Platonov, V.M. and Slavinskii, D.M., 1965, Int. Chem. Eng., 5, 555.
Rong, B.-G., Kraslawski, A. and Nystrom, L., 2001, Comput. Chem. Eng., 25, 807.
Rong, B.-G. and Kraslawski, A., 2002, Ind. Eng. Chem. Res., 41, 5716.
Rong, B.-G. and Kraslawski, A., 2003, AIChE J., 49, xxx.
Sargent, R.W.M. and Gaminibandara, K., 1976, In Optimization in Action, L.W.C. Dixon, Ed., Academic Press, London, p. 267.
Thompson, R.W. and King, C.J., 1972, AIChE J., 18, 941.
A Multicriteria Process Synthesis Approach to the Design of Sustainable and Economic Utility Systems
Zhigang Shang, Department of Process & Systems Engineering, Cranfield University, Cranfield, MK43 0AL, UK
Antonis Kokossis, Department of Chemical & Process Engineering, University of Surrey, Guildford, Surrey GU2 5XH, UK
Abstract
The proper design criteria for a modern utility plant should include both environmental and economic requirements: not only the capital and operating costs of a utility plant but also the corresponding utility wastes must be minimised. This paper presents a systematic multicriteria process synthesis approach for designing sustainable and economic utility systems. The proposed approach enables the design engineer to systematically derive optimal utility systems which are environmentally sustainable and economic by embedding Life Cycle Assessment (LCA) principles within a multiple objective optimisation framework. It combines the merits of total site analysis, LCA and multi-objective optimisation techniques.
1. Introduction
In the process industries, large amounts of gaseous emissions are generated by the combustion processes associated with utility systems. These emissions can have many impacts on the surrounding environment. As a result of serious concerns about environmental problems in recent years, the development of process synthesis methods for waste reduction has become a research issue of growing importance. Thus, the proper design criteria for a modern utility plant should include both environmental and economic requirements; in other words, not only the capital and operating costs of a utility plant but also the corresponding utility wastes must be minimised. Many applications have been presented previously to address the problem of synthesis and design of utility systems (Papoulias and Grossmann, 1983; Colmenares and Seider, 1989; Bruno et al., 1998; Wilkendorf et al., 1998; Rodriguez-Toral et al., 2001). It should be noted that all the studies mentioned above addressed the utility system design problem only on the basis of economic considerations, and none of them adopted waste minimisation as one of their design criteria. Research in the latter area has not received much attention until recently. Smith and Delaby (1991) tried to establish minimum targets for the flue gas emissions of the utility system. Linnhoff (1994) proposed an approach to the minimisation of environmental emissions through improved process integration, i.e. pinch technology. However, these approaches were not able to put a cost against emissions. As the impact of a process on the environment is dependent on its structure and design characteristics, environmental issues and economic ones should
be considered simultaneously as an integral part of process synthesis and design (Friedler et al., 1994; Linninger et al., 1994). This invariably requires some trade-off between these issues. A mathematical programming approach should in general be more comprehensive and less error-prone in trading off these issues, as long as all essential engineering insights are formulated in the mathematical models. To address the idea of including environmental impact considerations in process design, Life Cycle Assessment (LCA) is gaining wider acceptance as a method for identifying more sustainable options in process design. Recently, LCA has started to be coupled with multi-objective optimisation to provide a framework for process design by simultaneously optimising on environmental, economic and other criteria (Stefanis et al., 1997; Azapagic, 1999). These developments are still underway. The multi-objective optimisation techniques used in these works can only obtain Pareto-optimal solutions, which provide an infinite number of options for the optimal design. Therefore, other multicriteria decision-making (MCDM) techniques are further required to identify the best compromise solutions. Furthermore, few works have been reported that generate utility system designs based on the integration of LCA and multi-objective optimisation. Here we present a systematic multicriteria process synthesis technology for designing sustainable and economic utility systems. The technology is able to generate the best compromise solutions by simultaneously optimising on environmental, economic and other criteria, rather than obtaining Pareto-optimal solutions which provide an infinite number of options.
2. Multicriteria Process Synthesis
The proposed multicriteria process synthesis technology enables the design engineer to systematically derive optimal utility systems which are environmentally sustainable and economic by embedding LCA principles within a multiple objective optimisation framework. It combines the merits of total site analysis, LCA and multi-objective optimisation techniques, and follows a four-step procedure: (i) design candidate identification using total site analysis technology; (ii) environmental impact assessment using LCA principles; (iii) formulation of the multi-objective optimisation model, incorporating environmental impact criteria as process design objectives together with economics; and (iv) multi-objective optimisation using goal programming techniques.

Step 1: Design candidate identification using total site analysis technology
The first step in the formulation of the synthesis problem of utility systems is to consider systematically many alternative configurations by including them in a superstructure. In this step, the technology screens various utility units and identifies the efficient units to be implemented in a superstructure from which the optimum design will be selected. There is an enormous number of utility units which can be employed in a utility system, namely boilers, back-pressure/condensing turbines, gas turbines, electric motors, steam headers at different pressure levels, condensers, auxiliary units, and all of their different combinations. If all of them are included in a superstructure, it will be too large to be
solved. In this approach, the Total Site Profiles (TSP) (Dhole and Linnhoff, 1992) are used to locate the feasible utility units, in the context of a total site, that may be used to satisfy the heat and power demands of a production site. The TSP give a simultaneous view of heat surplus and heat deficit for all the processes on the site and reveal the cogeneration potential of the whole site. Thus the TSP can be used as a conceptual tool to screen and target feasible utility units for the site, such as the location of the steam headers and cogeneration units. The Thermodynamic Efficient Curve (TEC) (Shang and Kokossis, 2001) is then employed to identify the efficient utility units by screening among the feasible units. These efficient utility units form a superstructure. The TEC tool is able to compare the efficiencies of utility units; only units with promising efficiencies are included in the superstructure. Therefore, the superstructure derived by the proposed approach is much smaller than a general superstructure that includes all possible units.

Step 2: Environmental impact assessment using LCA principles
The second step of this approach involves carrying out an LCA study of the superstructure. The LCA principles are used to estimate the environmental impact of each candidate unit included in the superstructure. The LCA study considers a broad system which includes not only the utility system but also all processes associated with raw material extraction and imported electricity generation. Raw materials such as fuels and water are assumed to be available at no environmental penalty. In this approach, a typical coal-fired power plant is included to generate the electricity that needs to be imported by the utility system, as shown in Figure 1. The advantage of the broad system is that input wastes (to the utility system) due to imported electricity can be accounted for together with output emissions (from the utility system). Next, the LCA study involves estimating the amount and type of each waste leaving the system boundary. Once the inventory has been determined, the impact of each waste on the surrounding environment is quantified. Here we use the widely accepted approach described by Heijungs (1992), in which the wastes are grouped according to the environment on which they will impact. Impacts related to global warming, ozone depletion, acidification, nitrification, photochemical oxidation, resources depletion and human toxicity are considered. The advantage of using such environmental impacts is that the information provided is directly linked to the impact on the environment rather than, for instance, mass flowrates of waste materials.
Figure 1. The broad system boundary.
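The quantification step admits a compact illustration: emission inventories are folded into impact-category scores via characterisation factors. In the sketch below, both the inventory values and the factors are placeholders of our own, not data from this study:

```python
# Sketch of LCA impact quantification: factor-weighted aggregation of the
# inventory into impact categories (after Heijungs, 1992).  All numbers
# below are placeholders, not data from this study.
INVENTORY = {"CO2": 1.2e6, "SO2": 3.0e3, "NOx": 2.5e3, "CH4": 4.0e2}  # kg/yr

FACTORS = {   # hypothetical characterisation factors per kg emitted
    "global_warming": {"CO2": 1.0, "CH4": 21.0},     # kg CO2-eq
    "acidification":  {"SO2": 1.0, "NOx": 0.7},      # kg SO2-eq
    "nitrification":  {"NOx": 0.13},                 # kg PO4-eq
}

def impact_scores(inventory, factors):
    """Sum factor-weighted emissions within each impact category."""
    return {cat: sum(f * inventory.get(sub, 0.0) for sub, f in subs.items())
            for cat, subs in factors.items()}

for category, score in impact_scores(INVENTORY, FACTORS).items():
    print(f"{category:16s} {score:14,.1f}")
```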
Step 3: Formulation of the multi-objective optimisation model
Having developed the superstructure for the utility system, one can then formulate a mathematical program for the synthesis of the utility system. In order to consider environmental criteria as distinct objectives together with economics in the design problem of the utility system, a multi-objective optimisation formulation is considered to select the most sustainable and economic utility system from the superstructure, by minimising all the environmental impacts from the utility system simultaneously while minimising the total cost of the utility system, subject to the given set of utility demands. The numerical values of the environmental impacts and costs depend on the design characteristics of the utility system. Therefore, the eight environmental impact criteria identified and quantified in the LCA step and the total cost of the utility system are considered as independent, distinct minimisation functions in the multi-objective optimisation model. The cost objective function is the sum of annualised capital and operating costs; the former includes the fixed and variable costs of all system units, while the latter consists of the costs of fuels, fresh water and purchased electricity. The material and energy balance equations associated with every unit in the superstructure are included as equality constraints of the optimisation problem. In addition to the balance equations associated with all units, models of gas emissions and environmental impacts are also integrated into the optimisation model. Binary variables are used to signify the existence or non-existence of units in the superstructure. The resulting multi-objective optimisation problem is formulated as an MINLP model. The decisions to be made by the multi-objective optimisation model include the configuration of the utility system, the values of the operating pressures and temperatures of the different steam headers, the types of fuels used by the units, and all stream flowrates.

Step 4: Multi-objective optimisation using goal programming techniques
Both structural and parameter optimisation in the superstructure are performed for the multi-objective MINLP model, on all environmental and cost objective functions, to locate the best utility systems with minimal environmental impact and the desired economic performance. The multi-objective MINLP model is solved with goal programming (GP) techniques so as to provide the optimal configuration from a superstructure that has embedded many feasible utility systems. By being able to trade off incommensurable objectives, e.g. environmental impacts and economic requirements, the GP methods avoid the well-known problems encountered with, for instance, weighting objectives or an infinite number of non-inferior solutions. In this approach, the objectives are ranked and then minimised lexicographically using non-Archimedean GP to identify the best compromise solution. The best performance of each of the criteria over the specified operating ranges is used as the goal for the multi-objective optimisation problem. Rather than attempting to achieve solution optimality for single-objective problems, the GP approach is to find the best compromise solution that comes as closely as possible to satisfying the design goals.
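A minimal sketch of such a lexicographic (non-Archimedean) goal-programming loop is given below. The two toy objectives, their goals and the variable bounds are invented for illustration; the actual model is the MINLP described above:

```python
# Lexicographic goal programming, sketched on a toy continuous problem
# (the actual model is an MINLP).  Objectives are ranked; at each priority
# the positive deviation from the goal is minimised, and the attained level
# is locked as a constraint for all lower priorities.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def gwp(x):  return 5.0 * x[0] + 1.0 * x[1]      # priority 1, goal <= 4.0
def cost(x): return 2.0 * x[0] + 3.0 * x[1]      # priority 2, goal <= 6.0
ranked_goals = [(gwp, 4.0), (cost, 6.0)]         # invented numbers

bounds = [(0.2, 5.0), (0.2, 5.0)]
x = np.array([2.0, 2.0])
locks = []
for f, goal in ranked_goals:
    res = minimize(lambda x, f=f, g=goal: max(f(x) - g, 0.0),  # deviation d+
                   x, bounds=bounds, constraints=locks, method="SLSQP")
    x = res.x
    attained = max(goal, f(x))                   # do not degrade this later
    locks.append(NonlinearConstraint(lambda x, f=f: f(x), -np.inf, attained))

print("best compromise:", x.round(3), "gwp =", round(gwp(x), 3),
      "cost =", round(cost(x), 3))
```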
3. Case Study
The methodology is illustrated through its application to an industrial complex. The case study considers the design of a site utility system for the complex. Figure 2 shows the superstructure for the utility system that is to be designed to satisfy the utility demands of the complex: VHP, HP, MP and LP steam as well as power. The superstructure consists of three main boilers (B1, B2, B3) which use different fuels (natural gas, coal and oil), one gas turbine boiler (GT boiler), two local boilers (P1 and P2), six steam turbines (T1 to T6), two gas turbines (GT1 and GT2) which use natural gas and oil respectively, one BFW pump and the deaerator. There are five steam levels (VHP, HP, MP, LP and VLP) and one vacuum level. Steam can be generated at two levels: very high pressure (B1, B2, B3, GT boiler and P2) and high pressure (P1). Letdown steam from higher levels is also available. The utility system is interconnected with the electricity grid, which allows electricity to be imported when needed and excess electricity to be exported.
Figure 2. The superstructure of a utility system.
The problem is formulated as a multi-objective optimisation model and solved using the goal programming technique. The optimal solution includes one oil boiler (B3), one gas turbine (GT2), one gas turbine boiler (GT boiler), two local boilers (P1, P2) and four steam turbines (T1, T3, T4 and T6).
4. Conclusions
A systematic multicriteria synthesis technology for the design of sustainable and economic utility systems has been developed. The proposed technology enables the design engineer to systematically derive optimal utility systems which are both sustainable and economic by embedding Life Cycle Assessment (LCA) principles within a multiple objective optimisation framework. The best design is the utility system which incurs the minimum environmental impact together with the minimum capital and operating costs.
5. References
Azapagic, A. and Clift, R., 1999, The application of life cycle assessment to process optimisation. Computers & Chemical Engineering, 23.
Bruno, J.C., Fernandez, F., Castells, F. and Grossmann, I.E., 1998, A rigorous MINLP model for the optimal synthesis and operation of utility plants. Chemical Engineering Research & Design, 76.
Colmenares, T.R. and Seider, W.D., 1989, Synthesis of utility systems integrated with chemical processes. Ind. Eng. Chem. Res., 28.
Dhole, V.R. and Linnhoff, B., 1992, Total site targets for fuel, co-generation, emissions and cooling. Computers & Chemical Engineering, 17.
Friedler, F., Varga, J.B. and Fan, L.T., 1994, Algorithmic approach to the integration of total flowsheet synthesis and waste minimisation. American Institute of Chemical Engineers Symposium Series, 90.
Heijungs, R., et al., 1992, Environmental Life Cycle Assessment of Products: Background and Guide. Leiden: Centre of Environmental Science.
Linnhoff, B., 1994, Use pinch analysis to knock down capital costs and emissions. Chem. Engng Prog., 90.
Linninger, A.A., Ali, S.A., Stephanopoulos, E., Han, C. and Stephanopoulos, G., 1994, Synthesis and assessment of batch processes for pollution prevention. American Institute of Chemical Engineers Symposium Series, 90.
Papoulias, S.A. and Grossmann, I.E., 1983, A structural optimization approach in process synthesis - I: Utility systems. Computers & Chemical Engineering, 7.
Rodriguez-Toral, M.A., Morton, W. and Mitchell, D.R., 2001, The use of new SQP methods for the optimization of utility systems. Comp. Chem. Engng., 25.
Shang, Z.G. and Kokossis, A.C., 2001, Design and synthesis of process plant utility systems under operational variations. ESCAPE-11, Denmark.
Smith, R. and Delaby, O., 1991, Targeting flue gas emissions. Trans IChemE, 69.
Stefanis, S.K., Livingston, A.G. and Pistikopoulos, E.N., 1997, Environmental impact considerations in the optimal design and scheduling of batch processes. Computers & Chemical Engineering, 21.
Wilkendorf, F., Espuna, A. and Puigjaner, L., 1998, Minimization of the annual cost for complete utility systems. Chemical Engineering Research & Design, 76.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
A Decision Support Database for Inherently Safer Design

R. Srinivasan¹*, K.C. Chia¹, A-M. Heikkila² and J. Schabel²
¹ Department of Chemical & Environmental Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260
² VTT Industrial Systems, P.O. Box 1306, Tampere, Finland
Abstract
An inherently safer process relies on naturally occurring phenomena and robust design to eliminate or greatly reduce the need for instrumentation or administrative controls. Such a process can be designed by applying inherent safety (IS) principles such as intensification, substitution, attenuation, limitation of effects, simplification, etc. throughout the design process, from conception until completion. While the general principles and benefits of IS are well known, a searchable collection of inherently safer designs that have been implemented in industry has not been reported. Such a database of inherently safer design (ISD) examples would assist the process designer in the early stages of the design lifecycle when critical design decisions are made. In addition to examples of IS design which have been successfully carried out, the database that we have developed contains process incidents which could have been averted by the application of ISD. In this paper, details of the database, the query engine, and potential applications are presented.
1. Introduction
Inherent safety is the pursuit of designing hazards out of a process, as opposed to using engineering or procedural controls to mitigate risk. This is usually achieved through intensification, substitution, attenuation, limitation of effects, simplification, avoiding knock-on effects, making incorrect assembly impossible, making status clear, tolerance of misuse, ease of control and computer control (Kletz, 1998). Using the above principles, a more robust plant can be designed in which departures from normal conditions are tolerated without serious consequences for safety, production, or efficiency. Despite the obvious importance of ISD, there has been only limited work on developing tools that support the assessment of IS. The INSET toolkit was developed to promote IS principles, and contains a set of tools which support the adoption of IS principles in process development and design (Malmen et al., 1995; van Steen, 1996; Turney et al., 1997). Recently, an expert system that supports ISD by identifying safety issues and proposing inherently safer alternatives was reported (Palaniappan et al., 2002a; 2002b). One important criticism of toolkits and expert systems is that, due to their 'generic' nature and the need to be applicable to a variety of processes, they cannot account for the subtle
* Corresponding Author. Tel: +65 67732671; Fax: +65 67791936; e-mail: [email protected]
nuances and special cases that occur during process design. Another issue relates to the link between safety, health, environmental aspects, economics, and the operability of a chemical plant (Palaniappan et al., 2002c). Since safety is rarely considered in isolation, there can be many synergies and tradeoffs between the different facets. Again, it is not easy to foresee all the tradeoffs, and judgement calls are required. To overcome these shortcomings, IS toolkits and expert systems can be complemented by a knowledge base of design examples describing scenarios where IS principles have been used. Such a database would also help the process designer by illustrating possible synergies and tradeoffs between safety and other aspects during practical plant design. Such a decision-support database, called iSafeBase, is presented in this paper. The remainder of this paper is organised as follows: in the next section, the conceptual and implementation phases of iSafeBase are described. Two case studies are used to illustrate the use of iSafeBase in Section 3. Conclusions and future directions for this work are presented in Section 4.
2. Database Design and Development
The following were some key considerations during the design of iSafeBase:
• Expandable: A database is useful only if it has a sufficiently large set of examples. To enable this, it should be easy to enter new examples into the database, not only for a designer familiar with the internal details of the system, but for any user by means of a simple interface.
• Customisable: As mentioned above, safety is related to numerous aspects of process design, not all of which can be pre-enumerated. The design of the database should allow new classes of information to be added easily.
• Open architecture: The database should have an open and flexible architecture that permits the exchange of information with other design support tools such as flowsheeting packages, CAD systems, or safety evaluation systems. Examples of ISD would then be available while working with those systems.
After comparing various database development software packages (including FileMaker, Microsoft FoxPro, Corel Paradox, Microsoft SQL Server and Oracle), Microsoft Access was selected as the preferred platform because of its ubiquitous availability and ease of use. Two distinct steps were needed to develop a structure that met these objectives: designing the data structures and constructing the relationships between them. These are described below.

2.1. Data structures
The following major classes of information are important:
1. Material properties - such as toxicity, corrosivity, reactivity, explosiveness, and flammability.
2. Design-related information - including design stage (chemistry route selection, chemistry route detailed evaluation, process design optimisation, process plant design, etc.), chemistry, and equipment.
3. Safety-related information - including hazards and IS principles.
4. Design alterations - involving chemistry, material, or equipment modifications.
5. Accident-related information.
Tables are used to organise the above data in iSafeBase. Each table comprises a number of fields which store the attributes needed for that class. Table 1 shows some example tables and their fields. The reader should note that references are provided for each design example and accident in order to enable the designer to explore further.
Table 1: Database tables and their fields.
IS Design: Description, Illustration, Design Stage, Equipment, Reference
Accidents: Outcome, Initiating Event, Contributing Factors, Consequences, Description, Equipment, Reference
IS Principles: Principle, Suggestion
Modification: Modification, Cost Savings
Type of Hazard: Type of Hazard, Properties, Role, Unit Operations
2.2. Relationships
Once the types of data have been specified, the relationships between the tables must be defined. The primary data tables provide a unique identifier (ID) for each record. Linking tables were created to relate records from different tables; these links use the identifiers to reference data across tables and enable one-to-one, one-to-many, many-to-one and many-to-many relationships. For example, a substance can have more than one hazardous property, and a hazardous property can be present in many substances. A many-to-many relationship would be described for a substance (say with ID=1) that is toxic (ID=1) and flammable (ID=3): this would be captured through one entry in the Materials table, two entries in the Properties table, and two rows in a material-properties link table (where the field 'Material' would have a value of 1, and the field 'ID-Properties' would have values 1 and 3 respectively). A simplified representation of the various relationships in iSafeBase is shown in Figure 1.
Figure 1: Relationships in iSafeBase (primary tables such as Materials, Properties, IS Design, IS Principles and Equipment connected through ID-based link tables).
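The many-to-many example above can be made concrete with a short sketch. SQLite is used here as a stand-in for the Microsoft Access back end actually employed, and the table and field names are illustrative rather than the published schema.

```
# Sketch of the iSafeBase-style link-table idea using SQLite (assumed names).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Materials  (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Properties (ID INTEGER PRIMARY KEY, Hazard TEXT);
CREATE TABLE MaterialProperties (      -- link table: many-to-many
    Material INTEGER REFERENCES Materials(ID),
    Property INTEGER REFERENCES Properties(ID));
INSERT INTO Materials  VALUES (1, 'substance 1');
INSERT INTO Properties VALUES (1, 'toxic'), (3, 'flammable');
INSERT INTO MaterialProperties VALUES (1, 1), (1, 3);  -- two link rows
""")
# Retrieve every hazard recorded for material 1:
for (hazard,) in con.execute("""
        SELECT p.Hazard
        FROM Properties p
        JOIN MaterialProperties mp ON mp.Property = p.ID
        WHERE mp.Material = 1"""):
    print(hazard)   # -> toxic, flammable
```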
2.3. Querying the database
Once the examples have been collated, they need to be retrieved. A key consideration for the acceptability of a database is the ease with which it can be queried. Queries have been
implemented in iSafeBase to allow searches by specific equipment, hazard, substance, IS principle, modification, design stage, or outcome. Free text searches, which search through every field in the database, can also be performed. Additionally, the functionality to browse all the cases in the database related to a specific category, through a hierarchical interface, has also been implemented.

2.4. Graphical user interface
Developing the graphical user interface (GUI) was the last step in producing a functional database. Figure 2 shows the GUI for the two ways of querying iSafeBase described above.
Figure 2: Querying iSafeBase by (a) Keyword search, and (b) Category specific browsing.
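A free text search of the kind described in Section 2.3 might be implemented as below. This continues the SQLite stand-in from the earlier sketch; it is our assumption of the mechanics, not the shipped Access implementation, and assumes simple unquoted table names.

```
# Sketch of a free text search scanning every field of every table.
def free_text_search(con, keyword):
    hits = []
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        cols = [r[1] for r in con.execute(f"PRAGMA table_info({table})")]
        where = " OR ".join(f"{c} LIKE ?" for c in cols)
        args = [f"%{keyword}%"] * len(cols)
        for row in con.execute(f"SELECT * FROM {table} WHERE {where}", args):
            hits.append((table, row))
    return hits

print(free_text_search(con, "react"))  # rows whose fields mention 'react'
```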
3. Case Study: Keyword 'React'
The current version of iSafeBase has forty design examples and accidents. Figure 3 shows the number of examples in each design stage, while Table 2 lists the different sources from which the design and accident examples were selected.
Figure 3: Design examples in each design stage (chemistry route selection, chemistry route detailed evaluation, process design optimisation and process plant design).
Table 2: Sources of design examples.
Kletz, T.A., Process Plants: A Handbook for Inherently Safer Design, 1998, Taylor & Francis - 22 cases
Chementator, Chemical Engineering (journal) - 8 cases
Proceedings of International Conference & Workshop on Process Safety Management and Inherently Safer Processes, October 8-11, 1996, Orlando, AIChE - 7 cases
Bollinger, R.E. et al., Inherently Safer Chemical Processes: A Life Cycle Approach, 1996, AIChE - 3 cases
Total - 40 cases
A query on the keyword 'react' is used to illustrate the different facets of the program. Twenty-eight design examples and thirteen accidents were returned for this query across all contexts. Two such design examples are outlined in Tables 3 and 4.

Table 3: Design example summary from Case study 1.
Hazardous Scenario: MIC reacted with alpha-naphthol to make carbaryl. Large inventories of MIC kept in plant.
IS Principle Suggestion: Substitution - use another process route that involves less hazardous material or conditions.
Design Stage: Chemistry route selection.
Example of Modification: Different sequence of reactions: alpha-naphthol and phosgene are reacted together to give an ester that is then reacted with methylamine, resulting in the same product. No MIC is produced. (Pilot tested by Makhteshim, an Israeli company.)
Reference: Kletz, T.A. (1998). Process Plants: A Handbook for Inherently Safer Design, p. 68.
4. Conclusions
While the importance of inherently safer design of chemical plants has been widely accepted, it has not been widely practised, partly because of the lack of support tools. A database of examples of inherently safer designs has been reported in this paper. The software quickly retrieves cases of design modifications and related accidents for a given scenario. By making it possible to retrieve specific examples of ISD through a simple query process, it is hoped that this tool will guide plant designers in their effort to develop safer chemical plants. It would also promote IS in the mindsets of management, since concrete examples of what has been successfully implemented, and the associated rewards, can easily be presented.
Table 4: Design example summary from Case study 2.
Hazardous Scenario: Reaction runaway.
IS Principle Suggestion: Ease of control - use physical principles instead of other measures that may fail or be neglected.
Design Stage: Chemistry route detailed evaluation.
Suggested Modification: Use another catalyst.
Example of Modification: ICI Chemicals & Polymers has developed oxy-anion promoted catalysts in which the selectivity promoter is adsorbed onto the catalyst to activate it. Any temperature excursion in the reactor results in desorption of the activator. Thus, the reaction runaway potential has been eliminated.
Reference: Hawksley, J.L. and M.L. Preston (1996). "Inherent SHE: 20 Years of Evolution."
5. References
Kletz, T., 1998, Process Plants: A Handbook for Inherently Safer Design. Philadelphia: Taylor & Francis, pp. 1-19, 152-180.
Malmen, Y., Verwoerd, M., Bots, P.J., Mansfield, D., Clark, J., Turney, R. and Rogers, R., 1995, Loss Prevention by Introduction of Inherent SHE Concepts, SLP Loss Prevention Conference, December 1995, Singapore.
Palaniappan, C., Srinivasan, R. and Tan, R., 2002a, Expert System for Design of Inherently Safer Processes - Part 1: Route Selection Stage, Industrial and Engineering Chemistry Research, Vol. 41(26), pp. 6698-6710.
Palaniappan, C., Srinivasan, R. and Tan, R., 2002b, Expert System for Design of Inherently Safer Processes - Part 2: Flowsheet Development Stage, Industrial and Engineering Chemistry Research, Vol. 41(26), pp. 6711-6722.
Palaniappan, C., Srinivasan, R. and Halim, I., 2002c, A Material-Centric Methodology for Developing Inherently Safer and Environmentally Benign Processes, Computers & Chemical Engineering, Vol. 26(4/5), pp. 757-774.
Turney, R., Mansfield, D., Malmen, Y., Rogers, R., Verwoerd, M., Suokas, E. and Plasier, A., 1997, The INSIDE Project on inherent SHE in process development and design - The Toolkit and its application, IChemE Major Hazards XIII, April 1997, Manchester, UK.
van Steen, J., 1996, Promotion of inherent SHE principles in industry, IChemE - 'Realising an integrated management system', December 1996, UK.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Using Design Prototypes to Build an Ontology for Automated Process Design*

I.D. Stalker¹, E.S. Fraga¹, L. von Wedel², A. Yang²
¹ Centre for Process Systems Engineering, Department of Chemical Engineering, UCL, London WC1E 7JE, UK
² Lehrstuhl für Prozesstechnik, RWTH Aachen, 52056 Aachen, Germany
E-mail: [email protected]
Abstract
Recently there has been an increased interest in agent-based environments for automated design and simulation (Garcia-Flores et al. 2000). In such environments, responsibility for decision making is partly removed from the engineer to the underlying agent framework. Thus, it is vital that pertinent knowledge is embedded within this framework. This motivates the development of an ontology (Gruninger & Lee 2002) for the particular domain. An important first step is a suitable organisation of the knowledge in a given domain.
1. Introduction
Automated process design is a complex task that typically makes use of an array of computational tools, for example thermophysical packages. Agent based systems, such as COGents (Braunschweig et al. 2002), offer a potential solution to the dynamic access and configuration of such tools. To realise this potential, an automated design agent requires both process design domain knowledge, that is an ontology, and the appropriate know-how to apply this domain knowledge. This paper describes the use of design prototypes to organise domain knowledge as a first step towards the development of an ontology for process design, and the mechanisms needed to invest a design agent with the domain knowledge.
2. A Design Prototype for Conceptual Process Design
Design prototypes arose in mechanical engineering but the ideas apply to generic design processes. The conceptual basis is the Function-Behaviour-Structure (FBS) framework (Gero 1990), which is motivated by the following: "[...] the metagoal of design is to transform function F (where F is a set) into a design description D in such a way that the artefact being described is capable of producing these functions." (Gero 1990)

* Work funded by Project COGENTS, Agent-Based Architecture for Numerical Simulation, funded by the European Community under the Information Society Technologies Programme (IST), under contract IST-2001-34431.
Figure 1. The Function-Behaviour-Structure (FBS) framework (F: function; Be: expected behaviour; S: structure; Bs: actual behaviour; D: design documentation).
This design description represents an artefact's elements, and since there is generally no function in structure nor structure in function, the transformation from function to description proceeds in stages. Gero (1990) introduces the FBS framework, Figure 1, to elaborate these stages, working on the premise that it is function, structure, behaviour and their relationships which form the foundation of the knowledge which must be represented (Gero 1990). The goal of conceptual process design is to generate and select good process designs, usually represented by a flowsheet with design parameters and often supplemented with design rationale (Banares-Alcantara 1997). This is the Design Artefact. A design problem begins with the desired products, reactions of interest, available processing technologies, raw materials and a set of criteria for ranking. We seek a process which will derive the desired products from the raw materials: this is the function F of our design artefact. Employing the FBS framework allows us to model process design as a combination of the following activities: formulation, in which a sequence of expected behaviours, Be, such as separation, reaction, etc., is formulated to realise F; synthesis, in which the expected behaviours are used to synthesise an appropriate structure S; analysis, in which the structure is analysed for cost and actual behaviours, Bs; evaluation, in which the actual behaviours are compared with the expected behaviours (ideally the actual behaviours will be an acceptable superset of the expected behaviours); reformulation, since design problems are typically underdefined and the first few drafts of a flowsheet are likely to be incomplete (Laing & Fraga 1997), so the expected behaviours and function are reformulated; and documentation, in which the final design artefact is fully documented in D. Examples of FBS for a generic prototype for conceptual design are shown in Table 1. A Design Prototype is a knowledge representation schema which abstracts "all requisite knowledge appropriate to that design situation" (Gero 1990). Symbolically, a prototype proforma is expressed as P = (F, B, S, D, K, C)
where K = (Kr, Kq, Kc, Kct, KR) is a tuple of, respectively: relational knowledge Kr, which provides and makes explicit the dependencies between the variables in the function, behaviour and structure categories; qualitative knowledge Kq, which provides information on the effects of modifying values of structure variables on behaviour and function; computational knowledge Kc, which specifies mathematical and symbolic relationships between variables in the function, behaviour and structure categories; constraints or contextual knowledge Kct, which identifies exogenous variables for a design situation; and reflexive knowledge KR, a pair KR = (T, P) comprising, respectively, the typology, which identifies the broad class to which the prototype belongs, and a partition representing the subdivision of the concept represented by the prototype. Examples are shown in Table 1. C denotes the context in which the design activity is taking place; in our case this is the context of process engineering and does not need further elaboration. Two common approaches to developing a flowsheet for a given engineering process are a hierarchical approach (Douglas 1988) and an algorithmic approach, typically through mixed integer nonlinear programming (MINLP) (Grossmann et al. 1999). These are different mechanisms to transform the expected behaviours identified in the prototype into a suitable structure. Both begin with the statement of the function of the final process. Accordingly, the approaches refine the same base prototype in different directions. The hierarchical approach refines the prototype in small steps, starting with the coarse-grained top level information of process type and applying a number of heuristics to derive the additional (refined) information; this approach emphasises qualitative knowledge. The algorithmic approach refines the prototype in large steps: a minimum of required information is developed and this is used to develop a number of sections of the prototype by appealing to external search mechanisms; this approach emphasises computational knowledge. The two approaches are largely complementary and share a minimum of overlap. Accordingly, to ensure broad applicability we have extracted, organised and collated into a single prototype the design knowledge from a representative of each approach (Douglas 1988, Fraga et al. 2000).
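One possible rendering of the proforma as a data structure is sketched below; this is our illustration rather than code from the project, and the field names and example entries are assumptions drawn from Table 1.

```
# Sketch: encoding a design prototype P = (F, B, S, D, K, C), with
# K = (Kr, Kq, Kc, Kct, KR). Names and entries are illustrative.
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    relational: dict = field(default_factory=dict)     # Kr
    qualitative: list = field(default_factory=list)    # Kq
    computational: dict = field(default_factory=dict)  # Kc
    contextual: dict = field(default_factory=dict)     # Kct
    reflexive: tuple = ("", ())                        # KR = (typology, partition)

@dataclass
class DesignPrototype:
    function: str                                      # F
    behaviours: list                                   # B (expected behaviours)
    structure: list                                    # S (flowsheet elements)
    documentation: str = ""                            # D
    knowledge: Knowledge = field(default_factory=Knowledge)
    context: str = "process engineering"               # C

flowsheet = DesignPrototype(
    function="convert raw materials to desired products",
    behaviours=["separation", "reaction", "recycle"],
    structure=["reactor", "distillation column", "mixer"],
    knowledge=Knowledge(
        relational={"isolate pure product": "separation"},
        qualitative=["recycle structure required"],
        reflexive=("process flowsheet", ("reaction-separation", "with recycle"))))
print(flowsheet.knowledge.reflexive)
```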
3. Towards an Ontology, OntoCAPE
An ontology may be defined to be "an explicit specification of a conceptualisation" (Gruber 1993). The underlying FBS framework provides natural categories for an ontology of process design. Ontologies were originally motivated by the need for sharable and reusable knowledge bases. However, the reuse and sharing of ontologies themselves is still very limited: those seeking to reuse a particular ontology do not always share the same model as those who built it, and thus it is often difficult to discover tacit assumptions underpinning the ontology and to identify the key distinctions within it (Gruninger & Lee 2002). The use of a prototype to develop an ontology circumvents these problems: the key distinctions derive from the framework of the prototype, and if a prototype has been fully developed, then all assumptions are made explicit in the knowledge categories. OntoCAPE specifies a conceptualisation of process modelling, simulation, and design. A skeleton ontology has been developed in which the major categories of COGents concepts
Table 1. Generic Prototype for Conceptual Process Design
Function: to convert raw materials to desired products subject to specified constraints: inputs -> outputs
Behaviour - Behaviours: separation, reaction, mixing, heating, cooling, recycle, etc.
Behaviour - Variables: recoveries, rates, duties, etc.
Structure - Elements: flash, distillation column, reactor, mixer, heater, etc.
Structure - Variables: number of units, volume of reactor, heights of distillation columns, reaction temperature, operating pressure, etc.
Structure - Properties: component thermophysical properties, thermal conductivity, tray efficiency, etc.
Kr - Function to Behaviour: if the function is to isolate pure product, the required/expected behaviour would be "separation".
Kr - Behaviour to Behaviour Variables: recovery specification for separation units.
Kq: recycle structure required; economic potential has an inverse relationship with raw material costs; etc.
Kc: unit models; cost correlations; product specifications; reaction equilibria; etc.
Kct: plant data, such as amortisation period; site constraints; ambient conditions; etc.
KR - Typology: process flowsheet
KR - Partition: separation, reaction, reaction-separation, with recycle, without recycle, etc.
Figure 2. The top-level structure of OntoCAPE: application-specific concepts (process design, process modelling, process simulation) built upon common concepts (chemical process system; processing subsystem with realisation, function and behaviour aspects; processing material; software system).
4. From Design Prototype to Design Agent To function in an agent based system, the design agent must supplement knowledge of both what is, domain knowledge, with know-how, problem solving knowledge. To this end, ontologies and problem solving mechanisms (PSMs) (also, called problem solving methods or generic task models), go hand-in-hand (van Heijst 1995): ontologies capture domain knowledge; PSMs capture the task-level application of the domain knowledge. Since FBS framework separates knowledge from the computational processes which operate upon it, a design prototype provides a basis from which to develop a systematic approach to identifying PSMs. The transformations broadly embrace the computational processes through which one category of knowledge is developed into another. We apply PSMs to function io formulate expected behaviour; behaviour to synthesise structure;
298 structure to analyse for actual behaviour; expected and actual behaviour to evaluate actual behaviour. Thus, well-developed prototypes are invaluable in developing a design agent: an ontology is derivable from the prototypes; the transformation processes of the FBS framework provide us with a basis for a systematic approach to discovering PSMs.
5. References Banares-Alcantara, R. (1997), *Design support for process engineering III. Design rationale as a requirement for effective support'. Computers and Chemical Engineering 21, 263-276. Braunschweig, B. L., Fraga, E. S., Guessoum, Z., Paen, D. & Yang, A. (2002), COGents: Cognitive middleware agents to support e-cape, in B. Stanford-Smith, E. Chiozza & M. Edin, eds, 'Proc. Challenges and Achievements in E-business and E-work', pp. 1182-1189. Douglas, J. M. (1988), Conceptual Design of Chemical Processes, McGraw-Hill International Editions. Fraga, E. S., Steffens, M. A., Bogle, I. D. L. & Hind, A. K. (2000), An object oriented framework for process synthesis and simulation, in M. F. Malone, J. A. Trainham & B. Camahan, eds, 'Foundations of Computer-Aided Process Design', Vol. 323 of AIChE Symposium Seriesy pp. 446-449. Garcia-Flores, R., Wang, X. Z. & Goltz, G. E. (2000), 'Agent-based information flow for process industries supply chain modelling'. Computers chem. Engng 24,11351142. Gero, J. S. (1990), 'Design protoypes: A knowledge representation schema for design', AI Magazine Winter, 26-36. Grossmann, I. E., Caballero, J. A. & Yeomans, H. (1999), 'Mathematical programming approaches to the synthesis of chemical process systems', Korean J Chem Eng 16(4), 4 0 7 ^ 2 6 . Gruber, T. R. (1993), 'A translation approach to portable ontology specifications'. Knowledge Acquisition 5(2), 199-220. Gruninger, M. & Lee, J. (2002), 'Ontology applications and design: Introduction', Communications of ACM 45(2), 39-41. Laing, D. M. & Fraga, E. S. (1997), 'A case study on synthesis in preliminary design'. Computers & Chemical Engineering 21(Suppl.), 53-58. van Heijst, G. A. C. M. (1995), The role of ontologies in knowledge engineering. Thesis, University of Amsterdam.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Engineer Computer Interaction for Automated Process Design in COGents*

I.D. Stalker¹, R.A. Stalker Firth², E.S. Fraga¹
¹ Centre for Process Systems Engineering, Department of Chemical Engineering, UCL, London WC1E 7JE, UK
² Summertown Solutions Ltd, Suite 140, 266 Banbury Road, Oxford OX2 7DL, UK
E-mail: [email protected]
Abstract
We identify those interaction issues necessary to foster creativity in automated process design. We apply the key distinctions of Engineer Computer Interaction (Stalker & Smith 2002) to ensure that these are included in the development of a process design agent within the COGents framework (Braunschweig et al. 2002, COGents n.d.). The formalism is used to develop a blueprint for interactivity between a designer and a design agent which fosters creativity in design.
1. Automated Process Design in COGents
A process design problem begins with the desired products, reactions of interest, available processing technologies, raw materials and a set of criteria for ranking. The result of process design is a flowsheet supplemented with design rationale: this is our Design Artefact (Banares-Alcantara 1997). This is a complex task and benefits greatly from the use of automated tools. Recently, agent based systems (Ferber 1999) have received increased interest for application to automated design and simulation (Garcia-Flores et al. 2000). COGents is a European project to use cognitive agents to support dynamic, opportunistic interoperability of CAPE-OPEN compliant software over the internet (COGents n.d., Braunschweig et al. 2002). It is essentially a proof of concept for numerical simulation using agent technology, software components and web repositories, with the chosen context being computer aided process engineering. Part of this work involves the development of a process design agent which will make use of an automated design tool, Jacaranda (Fraga et al. 2000), in coordination with other agents. The current usage scenario for the Jacaranda system is typical of design tools. The user input is comprehensive: the user sets up the system; fully defines the problem; defines the nature of the solution space through the units available for a problem; defines the granularity of the solution space through discretisations of continuous parameters; and provides cost models, material components and raw material specifications.

* Work funded by Project COGENTS, Agent-Based Architecture for Numerical Simulation, funded by the European Community under the Information Society Technologies Programme (IST), under contract IST-2001-34431.
Figure 1. Current Usage Scenario Use Case Diagram
We summarise a use case analysis of the current usage scenario in Figure 1. In COGents, we seek to remove this onus from the user through the use of agents. As a necessary step, we have identified how to redistribute the use cases appropriately among the design agent, a design tool and the wider COGents framework. We illustrate the anticipated final distribution in Figure 2. The design agent prepares the problem definition with minimum input from the user, obtaining information from other agents in the COGents platform and employing its own knowledge to make appropriate decisions.
2. Interaction Issues
Advantages of an agent based approach to process design include the automation of routine tasks, access to up-to-date information, access to new technologies and access to an increased range of solution mechanisms. Reducing the burden on the designer allows him to focus on the more creative aspects, increasing the likelihood of truly novel designs. However, an agent based approach not only removes the onus from the user, it also removes a certain amount of control. Consider Figure 1: the user controls the information employed by the design tool through the level of discretisation, the values for variables and the constants used, and so forth. He can make use of the design tool for preliminary explorations of a given solution space, a key to successful design (Navinchandra 1991, Smithers 1998), for example through the use of partial solutions (Fraga 1998, Fraga et al. 2000). In Figure 2 the level of automation seems to prevent this creative use of the design tool: the designer must either accept the results of the system without question or seek an alternative; should a design problem remain unsolved, there is no indication of nearness to a solution, nor of those constraints which may have restricted particular design alternatives. Thus, there is no information available to guide a reuse of the system or to take on board when preferring an alternative design tool.
Figure 2. Anticipated Final Usage Scenario Use Case Diagram
We seek to realise the full potential of an agent based approach by using the technology to reduce the burden while including mechanisms through which to re-introduce the designer into the loop. One way is to allow a choice of responsibility ranging from the current situation of Figure 1 to the final situation of Figure 2; this returns control but also returns the burden. A preferable way is to promote increased interactivity, allowing the designer to supervise the design agent; this returns control without the burden. Engineer Computer Interaction (ECI) is a methodology for coordinating aspects of HCI with domain specific knowledge to facilitate the development of more appropriate software systems to support engineers. ECI was developed in structural engineering (Stalker & Smith 2002). Application of ECI to a given discipline requires the development of three elements:
Organisational Schema: a representation of the important stages in the life cycle of a design artefact which can be translated into a software structure for computer implementation.
Task Decomposition: a decomposition of the generic tasks in developing the artefact through its life cycle. The decomposition available for each task in the original ECI blueprint offers the following modules: Data Management, to examine the input information for fitness for use; Model Selection, to offer a choice of underlying assumptions; Model Use, to allow appropriate revisions and tuning of models; Viewpoints, to encourage exploration of the space of solutions from different perspectives; and Comparison of multiple interpretations.
Figure 3. The Function-Behaviour-Structure (FBS) framework (F: function; Be: expected behaviour; S: structure; Bs: actual behaviour; D: design documentation).
Engineer Identikit: a set of generic engineering representations to facilitate the development of domain specific user-system interaction.
3. Automated Process Design with Engineer Computer Interaction
Organisational Schema. We employ the Function-Behaviour-Structure (FBS) framework (Gero 1990), illustrated in Figure 3. The function F of our design artefact is to represent a process which will derive the desired products from raw materials. To realise this function, a sequence of required behaviours, such as separation, reaction, etc. (expected behaviours, Be), is formulated; these are used to synthesise an appropriate structure S; the structure is analysed for cost and actual behaviours, Bs; as design problems are typically underdefined, we are likely to find that the first few drafts of a flowsheet are incomplete (Laing & Fraga 1997), and so the expected behaviours and function are reformulated. Finally, the final design artefact is documented in D.
Task Decomposition. Of particular interest to process design are:
Model Selection and Use: appropriate model selection and use are vital to synthesis and analysis tasks. For process design in COGents we have access to models within our design tool and also from the wider COGents framework. Access to model parameters is essential: these are often problem specific, for example amortisation periods, selectivity, conversion and recoveries, and a designer often makes a number of choices of discretisation during preliminary explorations (Fraga 1998, Laing & Fraga 1997).
Viewpoints and Comparison: results from a number of different models are extremely useful for evaluation and reformulation of behaviours. We compare the actual behaviours of the synthesised structure with the formulated behaviours, and we compare full and partial solutions generated in order to maximise insight. For example, the primary design tool in COGents, Jacaranda (Fraga et al. 2000), generates the best N solutions, as requested by the user; a sketch of this multi-viewpoint comparison appears after the identikit list below.
Engineer Identikit. Generic engineering representations identified for process design are:
Classification: such as physical properties databases and thermophysical packages; ontologies and information technology based data models (Bayer et al. 2001); subproblem classifications, such as the subproblem dependencies, qualifiers and solution hierarchies in (Fraga 1998); and cost tables.
Procedure and Sequence: such as the necessary order of unit operations; procedural information subsists in computational methods.
Graphical Representations: including flowsheet readers such as HYSYS; simple tools for reading tables of subproblems, for example (Laing & Fraga 1997); and traditional sketches of graphs.
Formulae: including mathematical formulae; reaction equations; and potentially clauses of logic programs to capture the design heuristics of hierarchical approaches, such as developed in (Douglas 1988).
Symbols: depicting the various unit operations and the flowsheets themselves.
Customs and Practice: including standards, guidelines and other information observed as general practice by process designers and engineers.
Tables and Lists: of physical properties; unit specifications and constants; and subproblem listings with status measures (Fraga 1998). We note, for example, that applying dynamic programming techniques to process design is based on the use of cost tables (Fraga 1998).
Natural Language: to enlarge upon or provide a commentary to the information in the other categories.
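The Viewpoints and Comparison module referred to above can be pictured as re-ranking the same best-N designs under different criteria. The Design records and figures below are invented, and Jacaranda's actual interface is not shown.

```
# Sketch: comparing the best-N solutions from several viewpoints.
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    cost: float        # annualised cost (relative units, assumed)
    emissions: float   # CO2-eq (relative units, assumed)
    units: int         # number of units, a crude complexity measure

best_n = [Design("A", 1.00, 0.9, 7),
          Design("B", 1.05, 0.6, 9),
          Design("C", 1.12, 0.5, 6)]

viewpoints = {"economic":      lambda d: d.cost,
              "environmental": lambda d: d.emissions,
              "simplicity":    lambda d: d.units}

for view, key in viewpoints.items():
    ranking = [d.name for d in sorted(best_n, key=key)]
    print(f"{view:>13}: {ranking}")   # each viewpoint yields a different order
```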
4. Discussion
Agent based systems offer enormous benefits to automated process design, reducing the burden of effort and increasing access to information, models, solution techniques and so forth. However, it is imperative that we provide an interactivity which ensures that the designer has a creative input, retains control, and has access to partial solutions to foster a systematic search of the design space and computational efficiency. We have applied the key distinctions of ECI to ensure that the development of the process design agent accommodates these needs. Enhancing the potential interactivity with a design agent invested with design expertise encourages a less expert user to employ the system in a creative manner similar to that of a more experienced designer. The impact of the inclusion of ECI on the development of a process design agent is minimal: it does not affect the progression suggested by the differences between Figures 1 and 2. Rather, we are enriching the final system, and it is only in light of that system that we can properly determine whether the desirable interaction issues are best served through extending the functionality of the design agent or through the introduction of a personal assistant agent (Ferber 1999). Notwithstanding, there are ontological implications: we must ensure that our design ontology embraces relevant additional concepts such as partial solutions, cost tables, preliminary exploration, coarseness of discretisation, subproblem dependencies, dependency qualifiers, solution status, and the like.
5. References
Banares-Alcantara, R. (1997), 'Design support for process engineering III. Design rationale as a requirement for effective support', Computers and Chemical Engineering 21, 263-276.
Bayer, B., Krobb, C. & Marquardt, W. (2001), A data model for design data in chemical engineering - information models, Technical Report LPT-2001-15, Lehrstuhl für Prozesstechnik, RWTH Aachen.
Braunschweig, B.L., Fraga, E.S., Guessoum, Z., Paen, D. & Yang, A. (2002), COGents: Cognitive middleware agents to support e-cape, in B. Stanford-Smith, E. Chiozza & M. Edin, eds, 'Proc. Challenges and Achievements in E-business and E-work', pp. 1182-1189.
COGents (n.d.), 'The COGents Project: Agent-based Architecture for Numerical Simulation', http://www.cogents.org.
Douglas, J.M. (1988), Conceptual Design of Chemical Processes, McGraw-Hill International Editions.
Ferber, J. (1999), Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, Addison Wesley.
Fraga, E.S. (1998), 'The generation and use of partial solutions in process synthesis', Chemical Engineering Research and Design 76(A1), 45-54.
Fraga, E.S., Steffens, M.A., Bogle, I.D.L. & Hind, A.K. (2000), An object oriented framework for process synthesis and simulation, in M.F. Malone, J.A. Trainham & B. Carnahan, eds, 'Foundations of Computer-Aided Process Design', Vol. 323 of AIChE Symposium Series, pp. 446-449.
Garcia-Flores, R., Wang, X.Z. & Goltz, G.E. (2000), 'Agent-based information flow for process industries supply chain modelling', Computers chem. Engng 24, 1135-1142.
Gero, J.S. (1990), 'Design prototypes: A knowledge representation schema for design', AI Magazine Winter, 26-36.
Laing, D.M. & Fraga, E.S. (1997), 'A case study on synthesis in preliminary design', Computers & Chemical Engineering 21(Suppl.), 53-58.
Navinchandra, D. (1991), Exploration and Innovation in Design: Towards a Computational Model, Springer-Verlag.
Smithers, T. (1998), Towards a knowledge level theory of design process, in J.S. Gero & F. Sudweeks, eds, 'Artificial Intelligence in Design '98', Kluwer, pp. 3-21.
Stalker, R. & Smith, I. (2002), 'Structural monitoring using engineer-computer interaction', Artificial Intelligence for Engineering Design, Analysis and Manufacturing 16(5). Special Edition: Human-Computer Interaction in Engineering Contexts.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Developing a Methanol-Based Industrial Cluster

Rob M. Stikkelman, Paulien M. Herder, Remmert van der Wal, David Schor
Delft University of Technology, The Netherlands
Interduct, Delft University Clean Technology Institute
Faculty of Technology, Policy and Management; Energy & Industry Section
Abstract
We have conducted a study in collaboration with the Port of Rotterdam in which we explored possibilities for developing a methanol-based industrial cluster in that area. The study had two main goals. The first goal was to develop a realistic methanol-based industrial cluster, supported by technical and economic data; for our cluster we have considered plants and processes from the entire production chain. The second goal of the study was to bring together various actors in the field of our proposed methanol cluster. In order to create a common language among the actors and to get the actors actively involved, we developed a virtual prototype of the cluster. During a workshop with the actors, we used the virtual prototype as a vehicle to initiate discussions concerning technical and economic issues and to improve upon the proposed cluster. The key actors that are needed to bring about innovative changes are expected to continue the discussions and explorations in this field together in the future.
1. Introduction
The Rotterdam port area in The Netherlands is the main hub in the world-wide methanol infrastructure: about 1 million tonnes of methanol is imported, stored and sold each year. The importance of methanol in the Rotterdam port area is expected to increase, and possibly double, in the future, as the application of methanol in fuel cells may be a promising option for improving the sustainability of the road transportation sector. The world-wide transportation sector currently depends on oil for roughly 98% of its operations. These oil-based fuels contribute considerably to urban air pollution in the form of emissions of CO2, ground-level ozone precursors (NOx), carbon monoxide (CO) and particulate matter (PM), and their application in conventional combustion engines is also a source of noise pollution. The application of methanol in fuel cells, however, increases energy efficiency and decreases noise and emission levels compared to the conventional combustion engine. When methanol comes to be applied broadly in the transportation sector, methanol demand will increase far beyond current world production levels, which serve the downstream production of fuel additives and adhesives. In order to produce the required amounts of methanol, new, sustainable production routes are being explored and developed world-wide (e.g., Herder and Stikkelman, 2003). Accordingly, the importance of existing methanol hubs in the world is expected to increase significantly.
We have conducted a study in collaboration with the Port of Rotterdam in which we explored futuristic, and sometimes unusual, possibilities for developing a methanol-based industrial cluster based upon the existing methanol infrastructure in that area. The study had two main goals. The first goal was to develop a realistic methanol-based industrial cluster, supported by technical and economic data. The second goal was to bring together the various actors in the field of our proposed methanol cluster, and to create support for the envisaged transformation.
2. Theoretical Background
2.1. Cluster modelling
A number of approaches have been reported in the literature that deal with the modelling of a cluster of industrial processes. A conventional systems engineering approach to modelling clusters, using mass and energy balances for the chain and its subsystems, was introduced by Radgen et al. (1998) and reported to be a valuable way of modelling and analysing production networks and chains. Those authors used existing process simulators with mass and energy balance calculations to build and analyse chains. In this work, however, we decided to develop a dedicated tool, based on spreadsheets, in order to simplify the building of the virtual cluster. Some other studies aim at optimising an entire cluster with respect to economic and/or ecological objectives. In our study we did not yet aim at obtaining an optimised cluster, but merely at identifying the design space of the methanol cluster. The functional approach suggested by Dijkema and Reuter (1999) and Dijkema (2001) was used in this study to identify and explore the design space for designing our methanol cluster in the Rotterdam port area. The functional approach can deal effectively with system complexity as it focuses on system functionality instead of system contents, and the functional characteristics of a system are technology-free. A technology-free, functional design of a methanol cluster provided us with the necessary structure for defining the cluster design space without compromising, or going into the detail of, the wide array of technical solutions.
2.2. Transition management
The theoretical development of a methanol cluster is of no use when the actors that will have to invest in the new cluster are not involved from the very beginning. These actors can enrich the design space of the methanol cluster with new ideas and alternative plants and processes. The transformation to a methanol-based cluster will likely be gradual. We therefore used the transition management body of knowledge (e.g., Rotmans et al., 2001) to build our theoretical framework with respect to creating involvement of the various actors in the change processes. Transitions are modelled as S-curves divided into four phases: a pre-development phase is followed by a take-off phase; then the acceleration phase takes place, which is concluded by a phase of stabilisation. Transition management concepts can help to create involvement, to expose barriers to change and to support the taking down of those barriers. An important tool offered by transition management theories is the design of a transition agenda that
Table 1. Overview of subgoals and research methods.
1. To develop a theoretical framework - literature survey
2. To explore and map the design space broadly - functional modelling
3. To bound the design space - interviews and literature survey
4. To quantify the design space - Virtual Prototype
5. To design a viable methanol-based industrial cluster - workshop with relevant actors
6. To design a viable transition process - workshop with relevant actors
would indicate which stage the transition process is in, and would give an indication of how to reach the next stages by creating a long-term vision and short-term actions.
3. Research Approach
In order to achieve our goals we divided the study into a number of subgoals, and we used a different research approach for each step. The subgoals and associated research methods are summarised in Table 1. We conducted a literature survey in order to build a manageable and useful theoretical framework; this framework was described in the previous section. Second, we developed a functional design of a methanol cluster, using the approach described by Dijkema and our current knowledge and expertise regarding the developments in the Rotterdam port area. This functional design was used to identify which actors should be approached if this cluster were to be realised. Through interviews and further literature study we were able to identify the most relevant actors and consult with them in order to obtain a realistic design space. We also used the interviews to get a quantitative feel for the cluster, by asking the various actors about their long-term vision with respect to the developments concerning methanol in the broadest sense. We then turned these interview results into a quantitative model, the Virtual Prototype, describing our design space of alternative methanol-based clusters and allowing users to modify the cluster and get an impression of the viability of alternative cluster designs. Finally, we will use the Virtual Prototype in a workshop with relevant actors as a means to further the transition process. The intended results of the workshop are a well thought out methanol-based industrial cluster in the Rotterdam port area, and the start of a platform or community of actors who need and want to get involved in developing such a cluster.
4. Results
4.1. The methanol cluster
For the functional design of our cluster we have considered plants and processes from the entire production chain, ranging from fossil and renewable fuels to methanol derivatives, including for example fiber board plants that make use of formaldehyde. In addition, the cluster includes industries that process or use by-products such as hydrogen and platinum. The cluster comprises five main functional areas. For each of these functional areas we have made an inventory of possible interactions, flows and subsystems:
1. power production
2. waste processing
3. transportation fuels
4. methanol and derivatives
5. spin-off processes
The functional design of the cluster is shown in Figure 1.
Figure 1. Functional design of the methanol cluster (flows between electricity, biomass, imports, organic waste, fossil fuels, methanol, ICE and FC cars, derivatives, fiber board, hydrogen, platinum recycling, airplanes, fuel cell production and furniture).
Power production
Power plants currently use fossil fuels as their main feedstock, but are interested in supplementing their feedstock with biomass. A quick calculation, however, shows that under current market conditions the application of biomass in a power plant is not economically viable: the variable cost margin of the conversion of biomass into electricity would be 10% at most, too small a margin to justify the use of biomass in electricity production at this moment. Biomass can, however, be used more economically for the production of synthesis gas by gasification, which can then be converted into methanol. Roughly, 1 tonne of biomass can be converted into 1.2 tonnes of methanol, raising the variable cost margin to a more attractive 60%.
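A back-of-envelope check of these margins is sketched below; the prices are assumed placeholders, not figures from the study.

```
# Variable cost margin of biomass-to-methanol (illustrative prices only).
biomass_price  = 60.0    # $/t biomass (assumed)
methanol_price = 150.0   # $/t methanol (assumed)
meoh_yield     = 1.2     # t methanol per t biomass (from the text)

revenue = meoh_yield * methanol_price
margin = (revenue - biomass_price) / revenue
print(f"variable cost margin: {margin:.0%}")
# ~67% with these placeholder prices; the study quotes roughly 60%.
```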
Waste processing
The presence of a gasification unit opens up possibilities for the gasification of all kinds of organic wastes, such as solid waste, plastics, sludge, rubber, wood and household waste (Schwarze Pumpe, 2002).

Transportation fuels
Fossil fuels, in the form of natural gas, petrol and diesel, are practically the sole providers of energy for the transportation of goods over the road infrastructure. The application of methanol as a replacement fuel in conventional internal combustion engines (ICE) is nevertheless promising; only strict economic considerations hold back a large-scale introduction of methanol into ICE cars. The use of methanol in cars powered by fuel cells has a brighter future, as methanol can be a convenient and safe hydrogen carrier. The viability of implementing a methanol fuel cell in cars has been demonstrated, among others, by DaimlerChrysler (2002), which has developed a series of demonstration models (NECAR).

Methanol and derivatives
The supply of methanol to the area is expected to grow in the future. This will attract new large-scale installations that convert methanol into derivatives, such as olefins through the Methanol to Olefins (MTO) process, and it will cause an expansion of formaldehyde production. In turn, formaldehyde can be used in the production of fiber board, a key ingredient for the furniture industry. In addition to the import of biomass for gasification purposes, imported wood chips can be used in the production of fiber board.

Spin-off processes
Finally, we introduced a subsystem of spin-off processes to capture any processes that are not directly linked to methanol production or processing, but may come to play a significant role in the future. Since the life span of fuel cells is generally shorter than that of cars, we introduced a platinum recycling industry to process used fuel cells. In addition, we added an extreme example of using the hydrogen surplus as an aeroplane fuel, since the energy-mass ratio of hydrogen is three times higher than that of kerosene. This scenario, however, may well be realised only in the very far future.

4.2. Actor involvement
The relevant actors come from a very wide range of industries. In order to create a common language among the actors and to get them actively involved, we developed a virtual prototype of the cluster, based upon our functional design and the interview results. Some key conclusions and trends extracted from these interviews were:
• a main obstacle for methanol cluster development is the high initial investment
• relatively inexpensive natural gas inhibits wide-scale research into biomass applications
• there is a need for research into a large-scale biomass gasifier
• there is a lot of tacit knowledge within companies concerning future developments
During a workshop to be held with the actors, we will discuss and detail our ideas and proposals for a methanol cluster, and we will use the virtual prototype as a vehicle to initiate discussions concerning technical and economic issues of the cluster and to extract the tacit knowledge present in the actors. The workshop will comprise a panel of representatives of the actors considered in the Virtual Prototype. Sessions will include surveys, hypothetical scenarios and a free exchange of ideas to refine our methanol cluster model and develop a consensus on the developments necessary along a transition path. As an example, a hypothetical scenario may take as given near-term, significant and enduring cost increases in petroleum. Under such supposed conditions, the panel's thinking with regard to creating and operating a methanol cluster in the Rotterdam port area will be captured through survey instruments.
5. Discussion and Conclusions
The preliminary results of our study support many of our ideas about the possibilities for a methanol cluster. The functional design of the cluster proved to be useful in identifying a wide array of possible processes and actors. Secondly, many of the key actors needed to bring about such innovative industrial clusters have been interviewed and indicated that they are very willing to be brought together in a workshop. These actors are expected to further their discussions and explorations in this field by means of several other transition management initiatives that are currently being deployed by the Dutch Ministry of Economic Affairs. We trust that this research contributes to the body of knowledge concerning the development of industrial clusters, as well as to a healthy and competitive methanol-based Rotterdam port area.
6. References
Dijkema, G.P.J. and Reuter, M.A., 1999, Dealing with complexity in material cycle simulation and design, Computers and Chemical Engineering, 23 Supplement, pp. S795-S798.
DaimlerChrysler, 2002, Study Cites Low Cost for Methanol Refueling Stations, methanol.org, March.
Dijkema, G.P.J., 2001, The Development of Trigeneration Concepts, Proc. 6th World Congress of Chemical Engineering, Melbourne, Australia.
Herder, P.M. and Stikkelman, R.M., 2003, Decision making in the methanol production chain: a screening tool for exploring alternative production chains, International Conference on Process Systems Engineering 2003, Kunming, China.
Radgen, P., Pedernera, E.J., Patel, M. and Reimert, R., 1998, Simulation of Process Chains and Recycling Strategies for Carbon Based Materials Using a Conventional Process Simulator, Computers and Chemical Engineering, 22 Supplement, pp. S137-S140.
Rotmans, J., Kemp, R., van Asselt, M.B.A., Geels, F., Verbong, G., Molendijk, K.G.P. and van Notten, P., 2001, Transitions & Transition management: The case for a low emission energy supply, ICIS BV, Maastricht, The Netherlands.
Schwarze Pumpe, 2002, Sekundärrohstoff-Verwertungszentrum Schwarze Pumpe (SVZ), http://www.svz-gmbh.de/.
7. Acknowledgements This study benefited from the support and expertise of the municipal authority of the Port of Rotterdam, and the authors would like to thank Pieter-Jan Jongmans and Anne van Delft for their co-operation. The authors would also like to acknowledge the valuable contributions of Hugo Verheul (Delft University of Technology) to the study, specifically in the area of transition management.
Risk Premium and Robustness in Design Optimization of Simplified TMP Plant
Satu Sundqvist*, Elina Pajula, Risto Ritala
KCL Science and Consulting, P.O. Box 70, FIN-02150 Espoo, Finland
Abstract
This paper illustrates issues related to optimal design under uncertainty in a simplified TMP (thermomechanical pulp) plant design case. Uncertainty in the case study is due to four dynamic scenarios of the paper machine pulp demand serviced by the designed TMP plant. Both a risk premium approach and a multi-objective optimization technique were employed. In the latter, the worst-case scenario (representing the highest cost) was taken as the robustness measure of the design, and the design parameters were determined as a trade-off between the optimum of the mean cost model (i.e. the stochastic model) and that of the worst-case scenario. The TMP model is a general example of an industrial case having parallel on/off production units and time-variant production costs. The design case may therefore also be of interest to fields of the chemical industry other than paper manufacturing, and the optimization procedures can be applied to risk premium and robustness studies in general dynamic optimization cases.
1. Introduction
In papermaking, the TMP (thermomechanical pulp) plant has to satisfy the pulp demand of the paper machine. Design optimization of the simplified TMP plant includes the number of refiners (N_Ref) and the storage tank volume (V_tank) as design parameters. The optimization is genuinely a dynamic problem: the paper machine demand and the production costs, and thus, when the plant is optimally operated, also the number of active refiners, vary in time. In the TMP plant design, the optimum of the total costs is found via a subtask of minimizing the capital costs and the production costs in operations and scheduling optimization. The TMP design optimization is a MINLP (mixed-integer non-linear programming) problem, since it has both a discrete (N_Ref) and a continuous (V_tank) design parameter. The operational optimization subproblem has integer decision variables (the number of active refiners in time) affecting the continuous state of the intermediate tank volume through the process dynamics. The tank volume is constrained to stay between a minimum and a maximum volume. In the operational optimization, the task is to schedule start-ups and shutdowns of refiners in order to minimize the production cost when the demand of the paper machine and the price of electricity are known over a given time horizon.
2. Optimization Procedure
2.1. Operations and scheduling optimization
In general, the operations optimization task is to find suitable set point trajectories for the controllers. As the controllers are omitted from our simplified TMP system model, no setpoint optimization is included in the study. However, the refiner scheduling optimization can also be considered as operations optimization, with the refiner activity set point trajectory as a binary-valued (on/off) function of time. In this case, the operations optimization over a time horizon of some one hundred decision time intervals took approximately one minute using a low-end PC, the Matlab environment and the simulated annealing algorithm (Otten and van Ginneken, 1989).
2.2. Design optimization
The MINLP problem in the TMP case is simple in that the NPV (net present value) per capital employed can be determined by first treating both design parameters (N_Ref and V_tank) as discrete and then interpolating a continuous cost function Cost = f(V_tank) for the optimal number of refiners. Consequently, no advanced MINLP solvers are needed.
2.3. Objective function
With a given scenario of the paper machine TMP demand, the production schedule can be optimized, and with a given probability distribution over all scenarios (p_s), the operational costs as a function of n(t) and V(t) can be calculated. By adding the capital costs, the optimal values for the decision-making amongst the studied design alternatives (N_Ref, V_tank) are obtained.

DESIGN LEVEL:
$\min_{N_\mathrm{Ref},\,V_\mathrm{max}} \; \sum_s p_s\, g\{n_s^\mathrm{opt}(t);\, N_\mathrm{Ref}, V_\mathrm{max}\} + C_\mathrm{capital}(N_\mathrm{Ref}, V_\mathrm{max})$   (1)

subject to OPERATIONS LEVEL:

$n_s^\mathrm{opt}(t;\, N_\mathrm{Ref}, V_\mathrm{max}) = \arg\min_{n(t)} g\{n(t)\}$   (2)

$g\{n(t)\} = \sum_{t=0}^{100} h_t\, n(t) + K_\mathrm{up}\, n_\mathrm{up} + K_\mathrm{down}\, n_\mathrm{down}$   (3)

$\frac{dV}{dt} = n(t)\, f_\mathrm{Ref} - f_\mathrm{PM}$   (4)

$V_\mathrm{min} \le V(t) \le V_\mathrm{max}$   (5)

$n(t) \in \{0, 1, \ldots, N_\mathrm{Ref}\}$   (6)
where f_Ref is the production capacity of one refiner, f_PM is the paper machine demand, and h_t gives the daytime and night-time electricity costs per refiner at each time interval (n_up and n_down count refiner start-ups and shutdowns, penalised at the costs K_up and K_down). The capital cost can be written as
$C_\mathrm{capital} = C_\mathrm{Ref} + C_\mathrm{tank} = N_\mathrm{Ref}\, C_{\mathrm{Ref},1} + C_0 \left( V_\mathrm{tank} / V_0 \right)^a$   (7)
where C_Ref is the capital cost of a refiner, which, in real cases, is a function of the capacity (in MW) of the refiner. In the tank cost, the exponent a is usually from about 0.6 to 0.7 (Biegler and Grossmann, 1999), and C_0 and V_0 are the base cost and base capacity, respectively.
2.4. Case calculations
In the overall design optimization, four different demand scenarios of the paper machine were considered (Figure 1). All scenarios are taken to be equally likely, i.e. p_s = 0.25.
Figure 1. Demand scenarios of the paper machine in the simplified TMP case calculations.

Parameter values for the TMP model to be optimized are shown in Table 1. The time horizon is divided into T = 100 decision intervals with Δt = 30 min, corresponding to a total period of approximately two days for the case calculations.

Table 1. Parameter values for the simplified TMP model.

TMP demand (units per decision interval, Δt = 30 min):
  average, scenarios A-C / D: 7.6 / 10.4; min-max: 0-16.5
Number of refiners: N_Ref = 3, 4, 5, 6
Refiner production: 3.6 units per decision interval
Tank volume:
  maximum: V_max = 20, 30, 50, 70, 100, 200, 400
  minimum: V_min = 5% of V_max
  initial volume: V_0(t=0) = 15% of V_max
  end volume: V_end(t=100) = 25%-35% of V_max
Electricity costs (units per decision interval):
  night time: 3; daytime: 5; up/down costs: 3
The overall feasible region covered in the optimization was obtained by combining the feasible regions of all scenarios, resulting in the following:
N = 3 → V ≥ 200; N = 4 → V ≥ 100; N = 5 → V ≥ 30; and N = 6 → V ≥ 30. The non-feasible regions were due to the fact that the paper machine demand could not be satisfied there. These designs were assigned infinitely large cost values and thus omitted from the optimization.
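This screening can be sketched as a simple simulation of the tank balance of Eq. (4): run a demand scenario with every refiner active whenever the tank has room, and reject the design if the volume nevertheless falls below its minimum. The demand trace below is a hypothetical placeholder, not one of the four study scenarios:

```python
def design_is_feasible(n_ref, v_max, demand, f_ref=3.6,
                       v_min_frac=0.05, v0_frac=0.15):
    """Greedy upper-bound test: even producing at full rate whenever the
    tank has room, can the design keep V(t) above its minimum?"""
    v_min, v = v_min_frac * v_max, v0_frac * v_max
    for f_pm in demand:                  # one demand value per 30-min interval
        n = n_ref if v < v_max else 0    # all refiners on while the tank has room
        v = min(v_max, v + n * f_ref - f_pm)
        if v < v_min:
            return False
    return True

demand = [7.6] * 60 + [16.5] * 20 + [7.6] * 20   # hypothetical trace
print(design_is_feasible(n_ref=3, v_max=100, demand=demand))  # False
print(design_is_feasible(n_ref=5, v_max=100, demand=demand))  # True
```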
In the design optimization, the interest on capital was neglected, and thus the capital cost is simply the annual depreciation. The number of depreciation years, m, for the refiners and the storage tank was studied in the range m = [1, 2, 4, 10, 20]. The capital cost due to the refiners, N_Ref = [3, 4, 5, 6], is calculated as

$C_\mathrm{Ref} = \frac{1}{m}\, N_\mathrm{Ref}\, C_{\mathrm{Ref},1}$   (8)

where m is the number of years for depreciation and C_Ref,1 = 200 is the cost of one refiner (in units relative to the two-day time period of electricity costs). Similarly, the capital cost due to the intermediate tank, V_tank = 20-400, is calculated as

$C_\mathrm{tank} = \frac{1}{m}\, b\, V_\mathrm{tank}^{\,a}$   (9)

where m is the number of years for depreciation and b = C_0/V_0^a = 10 is the relative unit cost of the tank.
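Equations (8) and (9) translate directly into code; the tank-cost exponent a = 0.65 used here is an assumed mid-range value from the 0.6-0.7 interval quoted above:

```python
def capital_cost(n_ref, v_tank, m, c_ref1=200.0, b=10.0, a=0.65):
    """C_capital = C_Ref + C_tank of Eq. (7), with the annual
    depreciation of Eqs. (8) and (9)."""
    c_refiners = n_ref * c_ref1 / m      # Eq. (8)
    c_tank = b * v_tank ** a / m         # Eq. (9)
    return c_refiners + c_tank

print(capital_cost(n_ref=5, v_tank=320, m=10))  # e.g. the stochastic-optimum design
```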
3. Optimal Design and the Effect of a Risk Premium
In the design with a risk premium, the expected value based on the probabilities of all scenarios was calculated for each pair of discrete design parameters, N_Ref and V_tank. The risk premium was defined as proportional to the standard deviation σ of the operational costs under the four equally likely scenarios, with a proportionality factor α = 0…3. The objective function to be minimized is expressed as

$Cost = \sum_s p_s\, g_s\{n(t)\} + C_\mathrm{capital} + \alpha\, \sigma$   (10)
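Evaluating Eq. (10) for one design alternative can be sketched as follows; the scenario costs below are hypothetical numbers, and the standard deviation is taken over the four equally likely scenarios, as in the text:

```python
import statistics

def risk_premium_objective(scenario_costs, probs, c_capital, alpha):
    """Eq. (10): expected operational cost + capital cost + alpha * sigma."""
    expected = sum(p * g for p, g in zip(probs, scenario_costs))
    sigma = statistics.pstdev(scenario_costs)  # std over equally likely scenarios
    return expected + c_capital + alpha * sigma

g = [900.0, 950.0, 1000.0, 1200.0]   # hypothetical optimised scenario costs
for alpha in (0.0, 1.5, 3.0):
    print(alpha, risk_premium_objective(g, [0.25] * 4, c_capital=150.0, alpha=alpha))
```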
Figure 2 shows that with these parameters the risk premium affects the design only at an intermediate number of depreciation years. It was obvious that the number of depreciation years (which sets the importance of capital costs versus operational costs) had a strong effect on the optimal design alternative. At a depreciation time of 10 years, the risk premium factor influenced the optimal design by increasing both the number of refiners, N_Ref, and the continuous design parameter, V_tank.
Figure 2. Risk premium weighting effect (α = 0-3) on the optimal design in the simplified TMP case with different numbers of years for depreciation: (a) number of refiners and (b) storage tank volume.

The optimum of the design parameters was found within the studied range of N_Ref and V_tank for depreciation times of 1-10 years, where the capital costs dominate the total costs.
For the 20-year depreciation time, the optimum number of refiners is observed to reach the maximum of the studied range (N_Ref = 6), indicating that the range should probably be extended. It was therefore calculated that the minimum costs in operational optimization (based on the maximum production rate and storage tank volume during the low energy price period) are to be found amongst numbers of refiners up to N_Ref = 13 and V_tank = 550. However, the studied range (N_Ref = 3-6, V_tank = 20-400) was adequate for finding the optimal design parameters when the depreciation time of the relative capital costs is restricted to a maximum of 10 years.
4. Robust Design Optimization
The robust optimization study was based on worst-case scenario analysis (Suh and Lee, 2001). The best robust solution was chosen amongst the Pareto optimal design alternatives.
4.1. Multiobjective optimization
In the MOO (multiobjective optimization) method, the expected cost and the robustness measure are optimised simultaneously. The robust model is based on the stochastic model (Eq. 10, with α = 0) with an additional objective of controlling the variability of the performances of the individual scenarios. The worst-case scenario is taken as the objective variable for the robustness measure, and a decision-making procedure is applied to choose the best robust design alternative for the case study.
4.2. Decision-making
The best robust solution for the decision-making (with the studied model parameter range and m = 10 depreciation years) was found by using an Lp-metric method, where the robust model parameters are those nearest to the ideal point (Figure 3).
Figure 3. Robust optimization of the simplified TMP case with four different demand scenarios of the paper machine. The best robust solutions based on the worst-case analyses are found with N = 6 and with the indexes k = 5 and k = 8 for the different scaling factors (probability of the worst-case scenario) p_w = 0.25 and p_w = 1 (no scaling), respectively. The optimum solution with p_w = 0.25 corresponds to a storage tank volume of V_tank = 350.
The scaling factor, p_w, is a function of the probability of the worst-case scenario, i.e. p_w = 0.25. The result was also calculated without any scaling (p_w = 1). The selected robust optimal solution is closer to the stochastic model solution E(k=0) for smaller p_w, and closer to the worst-case analysis solution W(k=N_p) for larger p_w.
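One plausible reading of this decision rule is sketched below: among the Pareto points, the design nearest to the ideal point is selected, here with an L2 member of the Lp-metric family and with the worst-case axis weighted by p_w. The Pareto data are hypothetical:

```python
def best_compromise(pareto_points, p_w=0.25):
    """Pick the Pareto point (expected cost, worst-case cost) closest
    to the ideal point, scaling the worst-case axis by p_w."""
    e_ideal = min(e for e, _ in pareto_points)
    w_ideal = min(w for _, w in pareto_points)
    def distance(point):
        e, w = point
        return ((e - e_ideal) ** 2 + (p_w * (w - w_ideal)) ** 2) ** 0.5
    return min(pareto_points, key=distance)

pareto = [(1000, 1600), (1100, 1320), (1160, 1210), (1400, 1180)]
print(best_compromise(pareto, p_w=0.25))  # near the stochastic solution
print(best_compromise(pareto, p_w=1.0))   # shifted towards the worst-case solution
```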
5. Results and Discussion
For the simplified TMP design case, the stochastic model gives N = 5 refiners and V_tank = 320 for the optimal design. Both the risk premium study and the robust optimization study prefer N = 6 refiners; in both of these studies, the optimum storage tank volume is V_tank = 350. Design optimization with a stochastic model does not otherwise take uncertainty into account in the design. The TMP design alternatives are based on the overall feasibility region of all scenarios. However, if one of the scenarios strongly restricts the feasibility region and is, in addition, quite infrequent, that scenario can be omitted from the optimization. This might cause a situation where the TMP line is temporarily unable to produce pulp for the paper mill. The question to be answered is then what the additional cost of such a scenario would be and, more generally, how this extra cost should be handled in the design optimization.
6. Conclusions
In this paper, optimization with a risk premium, stochastic optimization and robust optimization were compared. The TMP plant design, even though extremely simplified, had all the characteristics of mixed-integer, dynamic design problems. The study therefore offers a suitable application for comparing these design principles when uncertainty is considered in the decision making on optimal design parameters.
7. References
Biegler, L.T., Grossmann, I.E. and Westerberg, A.W., 1999, Systematic Methods of Chemical Process Design, Prentice Hall, New Jersey.
Otten, R.H. and van Ginneken, L.P., 1989, The Annealing Algorithm, Kluwer Academic Publishers.
Suh, M. and Lee, T., 2001, Robust Optimization Method for the Economic Term in Chemical Process Design and Planning, Ind. Eng. Chem. Res., 40, 5950.
Process Design as Part of a Concurrent Plant Design Project
Timo L. Syrjanen
Jaakko Poyry Oy, PO Box 4, 01621 Vantaa, Finland, email: [email protected]
Abstract
This paper describes methods and tools developed and in use in a large engineering company where process design forms part of a concurrent plant design project. One of the most important aspects is how to handle the information flow between the different disciplines. The methods and tools for solving the information-sharing problems make use of new ICT technologies such as the Extensible Markup Language (XML) and the Web. The measurement of concurrent plant design and engineering performance, in terms of information existence, accuracy, completeness and timing, is also discussed. As a case example of how process design can be integrated with the other disciplines by using new ICT technology, Jaakko Poyry Oy's MODE concept is described.
1. Introduction
The world economy is continuously changing, and new requirements and challenges are set for plant design as well. Global competition has increased, and the new market situation has forced engineering companies to cut costs, save time and improve quality in their design work. One solution for meeting these requirements is to use concurrent design and to integrate the process design into the concurrent plant design. This paper describes methods and tools that can be used in process design when it is part of concurrent plant design. The process design has an important role in the whole plant design process. Design processes can be divided into two parts: the internal process consists of the process design proper, while the interface process defines how the process design interacts with the other disciplines. In this paper the main interest is focused on the interface process. In successful plant design, handling the information flow across the interfaces between the disciplines is critical. Information must be managed correctly, and connecting the information flow to a schedule is obligatory. It is important to have measures for information existence, accuracy, completeness and timing. There are different methods of organising the information flow between the process design and the other disciplines. Information can be transferred by using transfer files, or design information can be stored in a common repository that can be accessed with different tools for various purposes. The latter method also makes it possible to use new web technologies to access the design information.
2. Plant Design Project
2.1. Combined project involving different disciplines
A plant design project is an effort in which a project company delivers a unique product to an external client. The projects are characterized by their product orientation: the objective of the project is to create a physical entity, i.e. the mill. A typical plant design project consists of designing a paper or pulp mill, and every new paper or pulp mill project requires individual engineering work. Manufacturing or procurement activities cannot be performed before the appropriate engineering activities have been completed. The work in the different disciplines must be harmonized to complete the project.
2.2. Engineering input data problem
Today, plant design projects are becoming larger and more complex. The unit sizes are growing in mill, engineering and contractor organizations alike. The reason for the growth is financial, i.e. keeping the projects feasible, and consequently large units seem to be the right solution. One of the main problems in implementing plant design projects is how to execute the project plans efficiently, i.e. on schedule and correctly. The time schedules become shorter and shorter, which means that the project tasks and work must be divided between different organizations to keep the schedule. Today, concurrent design is applied to almost every pulp and paper mill design project. To succeed in the design work, the basic input data for engineering must be accessible as early as possible, and they should also be correct and complete. When information is shared between organizations efficiently, the quality of plant design projects increases. Sharing the knowledge and information is not an easy task. The knowledge comprises all information flows in the project and in the involved parties' organizations; it includes hard and soft, structured and unstructured, official and unofficial knowledge and data. For each design process it is relevant to know which input data are accessible and what their status is. The status varies from preliminary through released for design and released for construction to final.
3. Process Design Interacting with Other Disciplines
3.1. Process design
The process design acts as a base for all the other design tasks. Some parts of the process design may be performed in quite an early phase of the project. The process design can be divided into two different phases: preliminary engineering, in which the conceptual design is carried out, and detail engineering. The conceptual design starts with balance calculations and preliminary studies. After that, the block diagrams of the processes are created, which can be completed into preliminary flow sheets in the basic engineering phase. In this phase all the necessary main equipment inquiry specifications, as well as the other process equipment, pump, motor and tank lists, are created. As the data and information on the main equipment form the basis for creating the final flow sheets, the detail engineering cannot be started until the main equipment has been purchased. The preliminary flow sheets and calculations are checked, and the flow sheets are completed incrementally by adding information on the pipe sizes,
instrumentation and connections, starting from the larger sizes, which are the most decisive for space reservation. The sizes and types of all the tanks and vessels needed in the mill are calculated. At the same time, the preliminary specifications of pumps and motors are defined, and a preliminary procurement specification is created based on this information. The sizes and types of pumps and equipment are checked during the design. The dimensioned flow sheets are completed concurrently in accordance with the instrumentation design, and the flow sheets become PID diagrams. Although the creation of PID diagrams is a joint venture between the process, mechanical and automation disciplines, the process engineer is responsible for the diagrams. After the diagrams have been finished, the process design prepares the operation instructions and logic diagrams in co-operation with the automation and electrical engineers.
3.2. Concurrent work between process and mechanical design
The process and mechanical design are closely related, and the engineering tasks are performed concurrently. For example, pipe routing is started before the flow sheets are ready (see Figure 1). This is necessary because the results of the piping design are needed for the structural design and procurement in an early phase. Succeeding in concurrent design is normally difficult for both the process and the mechanical design: from the process engineer's point of view, the piping engineer asks for the process information long before it even exists, while from the piping engineer's point of view, the process engineer delivers the data much too late for successful project execution. What makes the problem even more difficult is that external information is also needed, such as the client's comments on and approval of the flow sheets. The improvement of concurrent design can be achieved in two ways: either by improving the design as a process of its own, or by improving the data sharing between the disciplines.
[Flow diagram with boxes: design criteria feed the preliminary process design and its later modifications (process design); the piping design and its modifications run in parallel (mechanical design), exchanging information with the process design.]
Figure 1. Process and piping design information flow.
4. Data Flow and Sharing
The information sharing within a plant design project organisation is a key element of successful project execution. Today's design and manufacturing time schedules are so tight that there is no room for delays or misunderstandings caused by a lack of information. All information must be available to everyone in the project at all times. The projects are executed by using concurrent design, and the project teams can be located in different
places, even in different countries, so knowledge must be shared between them successfully. The project manager is responsible for the knowledge sharing. In the project it is important to decide on the interfaces between the disciplines: the shared information must be specified, and a schedule must be given for when and how the information is shared. Is the information pushed from its creators to its users, or do the users pull the information when they need it? Special attention must be given to change management.
5. Tools for Data Sharing
5.1. MODE - mill database architecture
In Jaakko Poyry Oy the data-sharing problem between the disciplines was recognised a long time ago (Talvio, 1987). One effort to solve the problem was the implementation of the MODE (MOdel Driven Engineering) design architecture. It is a collection of engineering applications sharing the same mill database. The mill database is a relational database application with structures for process, piping, automation and electrical engineering and for project management data. The model consists of functions, which are the requirements for a specific task, and of components, which fulfil the function requirements. The functions and components are hierarchically connected. The engineering applications are independent of the database and can be implemented with either legacy tools or commercial ones. Most of the MODE applications have been developed as legacy applications in Jaakko Poyry Oy. In the mill database the data may have one of three status values: Current, RFD (released for design) or RFC (released for construction). The current data are the latest and most up-to-date version of the data; the discipline owning the data can change them at any time. When the data are ready to be released for design (RFD), they are saved in the RFD fields. The other disciplines can use RFD data values as a basis for their design. The data may still be changed, but this shall be carried out in accordance with a predefined procedure. When the data are released for construction (RFC), they may be considered final. Changing RFC data requires a complete study of the consequences of the change, and most often the project manager or the client must approve the change.
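A minimal sketch of this three-state release discipline is given below; the class and method names are illustrative and are not part of the MODE schema:

```python
from dataclasses import dataclass

STATUS_ORDER = {"CURRENT": 0, "RFD": 1, "RFC": 2}

@dataclass
class Attribute:
    """A single design datum in the mill database, owned by one discipline."""
    value: object
    status: str = "CURRENT"

    def release(self, new_status: str, approved: bool = False):
        if STATUS_ORDER[new_status] <= STATUS_ORDER[self.status]:
            raise ValueError("status may only move forward")
        if new_status == "RFC" and not approved:
            raise PermissionError("RFC requires project manager/client approval")
        self.status = new_status

pipe_dn = Attribute(value=150)          # e.g. a nominal pipe diameter
pipe_dn.release("RFD")                  # other disciplines may design against it
pipe_dn.release("RFC", approved=True)   # final; changes need a consequence study
```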
5.2. Process design information as part of the WWW application
Just as product models can be used as part of a WWW application (Siltanen et al., 1999), plant design applications can also benefit from web applications. Typically, web applications are 3-tier applications (Figure 2).
Figure 2. Web application architecture. [Diagram: a client (HTML/XML) communicates with an application server, which connects via XML to the back ends: an engineering system, a document database and a drawing database.]
The requirement for creating an independent, modularised and maintainable system is to standardise the exchange protocol between the application server and the back ends. The exchange format must be system-neutral and platform-independent, and it must be widely known and easy to implement. In a modern web application, XML is an obvious choice. With XML, the process design data can be exchanged and shared so that the application server's connection with the back ends can be implemented. The client system is a typical Internet browser, with some plug-ins for drawing rendering. The purpose of the client is to act as a user interface to the whole application package. The application server contains all the application logic, and its main role is to integrate the data from the different sources, i.e. the process design data and the other disciplines' data components. The back ends are traditional repository applications such as engineering databases, document management, ERP systems, other application servers, etc.
5.2.1. Project portal
Jaakko Poyry Oy has developed a project information publishing system, WebPub, which is a web technology-based document publishing system. Through WebPub the user can view and print all general project documents and information. During the course of a project, the system is mainly active in the project group's Intranet (Kiiskinen and Harkonen, 1999). Within the WebPub structure, all the project information is divided logically into relevant groups of their own. For example, the Process Page is a group containing the process PID diagrams connected with equipment information and standards. After the information has been segmented in a rational way and links have been designed for easy and fast searching, the requested information can be conveniently found and retrieved.
5.2.2. Application integration
In the Jaakko Poyry MODE concept, the process application is integrated with a 3D plant design system. All information between the systems is transferred in an application-independent XML format. Using the application server in the integration makes it possible to share design work between different locations within the company, or even between different companies. Currently Poyry's own legacy XML schema is used, but when standard schemas are published, those should be used.
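A sketch of what such a system-neutral exchange fragment might look like is given below; the element and attribute names are invented for illustration, since the legacy Poyry schema is proprietary:

```python
import xml.etree.ElementTree as ET

def equipment_to_xml(tag_id, cls, attrs):
    """Serialize one equipment item, with per-attribute status, to XML."""
    eq = ET.Element("Equipment", {"id": tag_id, "class": cls})
    for name, (value, status) in attrs.items():
        a = ET.SubElement(eq, "Attribute", {"name": name, "status": status})
        a.text = str(value)
    return ET.tostring(eq, encoding="unicode")

print(equipment_to_xml("P-101", "pump",
                       {"capacity_m3h": (120, "RFD"),
                        "head_m": (35, "CURRENT")}))
```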
6. Concurrent Design Performance Measurements When concurrent design is used in a plant design project, it is important to know how the design work progresses and how well the information sharing is accomplished. Performance measurement is useful in controlling the design project and it can be used as a project management tool. 6.1. Mill Database Completeness The mill database can be used for measuring data sharing. When the process engineer inputs the data into the mill database, the input data are simultaneously released to other disciplines. It is quite easy to develop a measurement system which can measure the completeness of the input data.
The estimated number of equipment items in each equipment class, i.e. main equipment, tanks, pumps, valves, pipelines, motors, etc., is needed as preliminary data. The preliminary data can be collected from previous projects, or experienced designers can estimate the numbers. Each equipment class will be measured by different data attributes specific to the class. These attributes may have the status of current, RFD or RFC. Based on these values, it is easy to measure the completeness of the process data. The completeness of the project may be calculated with the formulas:

Completeness for Design = (number of items with an RFD value) / (number of equipment items in the class)
Completeness for Construction = (number of items with an RFC value) / (number of equipment items in the class)

6.2. Piping application coherence
When a piping application is used, one way to measure design completeness against the process design is to count the equipment in the mill database and the equipment in the 3D piping application: for example, how many valves are there in both applications, and are there any discrepancies between the equipment attribute values? This is a good measurement and it is quite easy to implement. Some plant design system vendors provide checking routines to verify the process design against the mechanical design or vice versa.
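The two measures, together with the cross-application count check of Section 6.2, can be transcribed directly; the data layout used here is an assumption:

```python
ORDER = {"CURRENT": 0, "RFD": 1, "RFC": 2}

def completeness(equipment, cls, target="RFD"):
    """Share of items in a class whose attributes have all reached the
    target status (the recorded items stand in for the estimated total)."""
    in_class = [e for e in equipment if e["class"] == cls]
    done = sum(all(ORDER[s] >= ORDER[target] for s in e["status"].values())
               for e in in_class)
    return done / len(in_class) if in_class else 0.0

def coherence(mill_db_tags, piping_tags):
    """Tags present in only one of the mill database and the 3D application."""
    return sorted(set(mill_db_tags) ^ set(piping_tags))

valves = [{"class": "valve", "status": {"dn": "RFD", "rating": "RFD"}},
          {"class": "valve", "status": {"dn": "CURRENT", "rating": "RFD"}}]
print(completeness(valves, "valve", "RFD"))       # 0.5
print(coherence({"V-1", "V-2"}, {"V-2", "V-3"}))  # ['V-1', 'V-3']
```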
7. Summary
In plant design projects the process design acts as the basis for all the other design tasks. Time is the most critical factor in a plant design process; therefore concurrent design is used in plant design, in which the process design has the most important role. How well the data sharing between the disciplines is accomplished is critical. Measures and methods have been developed, but research is needed, especially in change management. The tools for data sharing are important. Currently the tools are increasingly web-based applications and use XML as the communication language; this trend will continue. For many years Jaakko Poyry Oy has concentrated on building the MODE architecture, whose heart, the mill database, has the key role in information sharing. The results have been promising, but using it as a quality tool and for measuring design completeness needs to be developed further.
8. References
Kiiskinen, J. and Harkonen, K., 1999, WebPub - A pioneering document publishing and distribution system, Know-how wire, June, Jaakko Poyry Oy, Vantaa, Finland.
Peltopuro, J., 1999, Master's Thesis: Plant design development in a 3D CAD system, HUT, Machine Design, Espoo, Finland (in Finnish).
Siltanen, P., Syrjanen, T. and Kuusisto, M., 1999, XML-based meta-data modelling for product data management, In Proceedings of PDT Europe '99, Stavanger, Norway.
Talvio, P., 1987, Today's design systems keep all parties involved on a daily basis, Pulp and Paper, September, USA.
A New MINLP Model for Mass Exchange Network Synthesis
Z. Szitkai, T. Farkas, Z. Kravanja*, Z. Lelkes, E. Rev and Z. Fonyo
Chemical Engineering Department, Budapest University of Technology and Economics, H-1521 Budapest, Hungary; [email protected]
*Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova 17, P.O. Box 219, SI-2000 Maribor, Slovenia
Abstract
Based on the principles of the simultaneous optimisation model developed by Yee and Grossmann (1990) for heat exchanger network synthesis, a robust optimisation model for mass exchange network synthesis has been developed. The superstructure, the essential modelling equations, and example problems with their solutions are presented. The new model is fairly linear and is applicable to systems of both packed and staged vessels, as well as to multicomponent problems.
1. Introduction
Mass exchange network synthesis (MENS) is an efficient tool for pollution prevention through process integration, and was originally developed mainly on the basis of the ideas of heat exchanger network synthesis (HENS). The first, pinch-based solution methodology of El-Halwagi and Manousiouthakis (1989) was extended by Hallale and Fraser in 1998 and later in 2000. Using their advanced targeting methods, both the capital cost and the total annual cost (TAC) of the network can be predicted ahead of any design. Still, pinch technology for MENS does not provide a systematic way of deriving the optimal network structure. The network design includes trial-and-error elements, especially when large or multiple-component problems are considered. Design methods based on simultaneous optimisation offer the possibility of designing the MEN in a single automated step. The whole MENS problem is formulated as a mathematical optimisation problem that has to be solved for the global optimum. Most conveniently, binary variables are used to represent the network structure; hence the optimisation problem to be solved is usually a mixed-integer non-linear programming (MINLP) problem. The first simultaneous MINLP model for MENS was presented by Papalexandri et al. (1994). Comeaux (2000) aimed at simplifying the model formulation and adopted pinch principles in order to formulate the MENS problem as a moderate-size non-linear programming (NLP) problem.
2. Motivation and Aim of the Work
Szitkai et al. (2002a) presented an extensive comparison of the methods mentioned above and showed that so far the best method for MENS is the MINLP model of Papalexandri et al. (1994). Using this model, most of the advanced pinch-based designs of Hallale and Fraser can be reproduced or improved. Papalexandri's model is also superior to the insight-based NLP method of Comeaux (2000). Still, as was shown, Papalexandri's MINLP method sometimes delivers worse solutions than the advanced pinch design method of Hallale and Fraser. The most probable reason is that
Papalexandri's MINLP model cannot always be solved to global optimality with the commercially available algorithms, because the model contains many bilinear mass balances, which set up a nonconvex search space. The aim of this work has been to develop a robust MINLP model for MENS. Considering the assumptions and modelling principles of the simultaneous optimisation model of Yee and Grossmann (1990), originally developed for designing HENs, it was expected that a similar modelling technique would be applicable to MENS as well, resulting in a fairly linear MINLP model.
3. Superstructure and MINLP Model
Assuming iso-concentration mixing in the network, the mass balance equations can be formulated linearly by using merely concentrations and the amounts of transferred mass. This assumption can only be applied when the stagewise MEN superstructure shown in Figure 1 is used.
[Diagram: two rich and two lean streams crossing superstructure stages 1 and 2, with concentration locations 1, 2 and 3 at the stage borders and the source concentrations y_{1,S} and y_{2,S} entering on the rich side.]
Fig. 1: Stagewise superstructure for single-component MENS.

As Figure 1 shows, only the rich and lean stream concentrations at the stage borders (concentration locations, k) are used instead of the in- and outlet concentrations of the individual mass exchangers; hence the mass balances for the mixing points become linear. In Figure 1 the superstructure is shown for two rich streams, two lean streams and two stages. In general, any of the rich and lean streams can be matched once in each of the stages. This superstructure allows somewhat less complex network structures compared to that of Papalexandri et al. The minimum number of superstructure stages can be calculated from the concentration data of the streams, just as in HENS. R and S represent rich and lean stream flowrates; y and x are rich and lean mole fractions; the indices S and T denote source and target concentrations. The analogy with the Yee-Grossmann HENS model can be kept close in the case of single-component MENS problems with packed columns as mass exchangers. In pinch technology, the concentrations of all the streams at the concentration locations, at the bounds of the composite intervals, have to be the same; this constraint does not appear in our model. The only restriction applied is that there cannot be more than
one unit in any stage between two streams. This difference usually results in a smaller number of necessary concentration stages than there are composite intervals in pinch technology. When extending the model to compatible multiple-component MENS problems, linearity can be maintained by applying a special modelling frame with identical superstructures applied to the individual components. In the case of incompatible components, nonlinear balances have to be included. In the new MINLP model, some equations prescribe mass balances at the concentration borders; other equations ensure the monotonicity of the concentrations. Equation 1 sets the mass exchange me of a unit to zero if the unit does not exist; otherwise its value is constrained by Ω (big-M technique). Equations 2 and 3 force the exact calculation of the driving forces of the units (dy) only if the given units do exist in the solution; Γ is an upper bound. Equations 4 and 5 tie the relaxed (continuous) variables z to the binary variables bz via the positive slack variables pz and sz, which are penalised in the objective with a weight w.

$0 \le me_{i,j,k} \le \Omega\, z_{i,j,k}$   (1)

$dy_{i,j,k} \le y_{i,k} - m_{i,j}\, x_{j,k} - b_{i,j} + \Gamma_{i,j,k}\,(1 - z_{i,j,k})$   (2)

$dy_{i,j,k+1} \le y_{i,k+1} - m_{i,j}\, x_{j,k+1} - b_{i,j} + \Gamma_{i,j,k}\,(1 - z_{i,j,k})$   (3)

$bz_{i,j,k} - z_{i,j,k} - pz_{i,j,k} + sz_{i,j,k} = 0$   (4)

$Obj = Obj_\mathrm{genuine} + w \sum_{i,j,k} \left( pz_{i,j,k} - sz_{i,j,k} \right)^2$   (5)
Indices i and j stand for the rich and lean streams, respectively. Originating from the definition of the MENS problem (El-Halwagi and Manousiouthakis, 1989), the rich stream flowrates (R) are fixed, while the lean stream flowrates (S) are model variables. This renders the MENS model slightly more nonlinear than the original HENS model. The logarithmic mean concentration differences are approximated with the method of Chen (1987):
$lmcd_{i,j,k} \approx \left[ dy_{i,j,k}\; dy_{i,j,k+1}\; \frac{dy_{i,j,k} + dy_{i,j,k+1}}{2} \right]^{1/3}$   (6)
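Chen's form is preferred because, unlike the exact logarithmic mean, it stays smooth and well defined as the two driving forces approach each other. A quick numerical check, with arbitrary illustrative driving forces:

```python
import math

def lmcd_chen(d1, d2):
    """Chen's (1987) approximation, Eq. (6)."""
    return (d1 * d2 * (d1 + d2) / 2.0) ** (1.0 / 3.0)

def lmcd_exact(d1, d2):
    """Exact logarithmic mean; singular (0/0) at d1 == d2."""
    return (d1 - d2) / math.log(d1 / d2) if d1 != d2 else d1

for d1, d2 in [(0.004, 0.001), (0.003, 0.0029)]:
    print(f"{lmcd_chen(d1, d2):.6f}  vs  {lmcd_exact(d1, d2):.6f}")
```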
The Kremser equation has to be applied when staged vessels are used (Szitkai et al., 2002b), which, unfortunately, spoils the linearity of the model. The MINLPs were solved using the GAMS package (Brooke et al., 1992).
3.1. Example problem I (packed columns)
Example problem 4.1 from the PhD thesis of Hallale (1998) is reconsidered (Table 1). Packed columns are specified as mass exchangers, and the exchanger-mass based costing method of Hallale is used with the following costing equations:

$mass_{i,j,k} = \frac{me_{i,j,k}}{K_w\; lmcd_{i,j,k}}$   ($K_w$ = 0.02 kg NH₃ s⁻¹ kg⁻¹)   (7)

$TCC = N_\mathrm{units}\; C_1 \left( \frac{1}{N_\mathrm{units}} \sum_{i,j,k} mass_{i,j,k} \right)^{C_2}$   (8)

where C₁ and C₂ are the cost coefficients of Hallale's shell-and-packing correlation.
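A numerical sketch of this costing scheme follows; K_w is taken from Eq. (7), while the coefficients C1 and C2 of Eq. (8) are placeholders whose actual values must be taken from Hallale's correlation:

```python
def exchanger_mass(me, lmcd, k_w=0.02):
    """Eq. (7): exchanger mass needed to transfer me at driving force lmcd."""
    return me / (k_w * lmcd)

def total_capital_cost(masses, c1, c2):
    """Eq. (8): network cost from the mean exchanger mass per unit."""
    n_units = len(masses)
    return n_units * c1 * (sum(masses) / n_units) ** c2

# Three hypothetical units (mass loads in kg/s, driving forces in mass fraction):
masses = [exchanger_mass(0.004, 0.0022),
          exchanger_mass(0.002, 0.0015),
          exchanger_mass(0.001, 0.0030)]
print(total_capital_cost(masses, c1=1000.0, c2=0.66))  # c1, c2 assumed
```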
The total capital cost (TCC) of the MEN involves both the cost of the column shells and the cost of the packing. Using the targeted, minimum external lean stream flowrate
(S₃ = 2.48 kg/s), Hallale's solution featured a total capital cost of $298,000. In our MINLP model, relaxed binary variables and a penalty term in the objective function forcing them to zero or one were introduced; this stabilised the optimisation process. Fixing the external MSA (S₃) flowrate to 2.48 kg/s, our solution's optimal TCC was $307,000. Optimising for the TAC (with an annualisation factor of 0.225 for the capital cost, an external MSA cost of 0.001 $/kg, and 8150 hr/yr assumed), we obtained a solution with a TAC of $134,000. The latter solution is shown in Figure 2. A summary of the solutions is given in Table 2.

Table 1: Stream data for example problem I.
Rich streams  G (kg/s)  y^S (mass fraction)  y^T (mass fraction)
R1            2         0.005                0.0010
R2            4         0.005                0.0025
R3            3.5       0.011                0.0025
R4            1.5       0.010                0.0050
R5            0.5       0.008                0.0025

MSAs  L (kg/s)  x^S (mass fraction)  x^T (mass fraction)  m    b
S1    1.8       0.0017               0.0071               1.2  0
S2    1         0.0025               0.0085               1    0
S3    ∞         0                    0.017                0.5  0
Table 2: Cost results of example problem I.

Objective  Comments                           TCC ($)  TAC ($/yr)
TCC        pinch technology, minimal utility  298000   140000
TCC        MINLP model, minimal utility       307000   142000
TAC        MINLP model                        218000   134000
Fig. 2: Solution of example problem I (TAC = $134,000/yr; TCC = $218,000). [Network diagram: the five rich streams (2, 4, 3.5, 1.5 and 0.5 kg/s) matched with the MSAs; the external MSA flowrate is 2.904 kg/s.]
3.2. Example problem II (staged columns)
In this example the target is the removal of sulphur dioxide (SO₂) from four gaseous streams. Only one external MSA is available, pure water (S₁). The stream data are given in Table 3. Tray columns are used as mass exchangers. The capital cost is based on the shell cost 12800·H_t^a·D^b $ and the tray cost 608·D^c $ per tray, where D is the column diameter, H_t is the total column height, and the exponents a, b and c follow Hallale (1998). The overall efficiency of the trays is fixed at 20%. The inactive height is 3 m and the tray spacing is 0.5 m. The number of stages is calculated with the Kremser equation. This is a highly non-convex equation; therefore the global optimum is not ensured by the MINLP, and the time needed for finding a solution is much longer. Moreover, the local optima found depend on the initial values. The solution of Hallale, with pinch technology, was TCC = $860,000 (TAC = $427,000/yr). In our model the objective function was the annual cost; an annualisation factor of 0.225, 8150 hr/yr, and a cost of $0.003/kg ($0.054/kmol) for the external MSA were assumed. The optimal solution is shown in Fig. 3.

Table 3: Stream data for example problem II.
Rich streams  G (kmol/hr)  y^S (mol fraction)  y^T (mol fraction)
R1            50           0.01                0.004
R2            60           0.01                0.005
R3            40           0.02                0.005
R4            30           0.02                0.015

MSAs  L (kmol/hr)  x^S (mol fraction)  x^T (mol fraction)  m     b
S1    ∞            0                   -                   26.1  -0.00326
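The Kremser calculation used for sizing the tray columns can be sketched as follows; the lean flowrate L chosen in the example call is an arbitrary placeholder, not a result of the optimisation:

```python
import math

def kremser_stages(y_in, y_out, x_in, m, b, L, G):
    """Number of theoretical stages of an absorber by the Kremser equation."""
    A = L / (m * G)                 # absorption factor
    y_star = m * x_in + b           # gas in equilibrium with the entering liquid
    if abs(A - 1.0) < 1e-9:
        return (y_in - y_out) / (y_out - y_star)
    ratio = (y_in - y_star) / (y_out - y_star)
    return math.log(ratio * (1.0 - 1.0 / A) + 1.0 / A) / math.log(A)

# R1 scrubbed with water (m = 26.1, b = -0.00326 from Table 3), assumed L:
n_th = kremser_stages(0.01, 0.004, 0.0, 26.1, -0.00326, L=1500.0, G=50.0)
print(n_th, math.ceil(n_th / 0.20))   # real trays at 20% overall efficiency
```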
Fig. 3: Optimal solution of Example II (TAC = $359,000/yr; TCC = $339,000). [Network diagram: the four rich streams scrubbed with S₁ = 1969 kmol/hr of water; the column sizes range from 5 to 26 trays.]

In this case the flowrate of the external MSA is a bit greater than the minimum (in Hallale's solution it was S₁ = 1593 kmol/hr); as a result, both the capital cost and the annual cost are lower.
3.3. Example problem III (multicomponent problem)
The COG example of Papalexandri et al. (1994), originally by El-Halwagi and Manousiouthakis (1989), was also solved. The target is the removal of H₂S and CO₂ from COG (a mixture of H₂, CH₄, CO, N₂, NH₃, CO₂ and H₂S) (R₁) and from the tail gases of a Claus unit (R₂). The available lean streams for the purification are aqueous ammonia (S₁) and chilled methanol (S₂). Perforated-plate columns are considered for both solvents. The TAC results are as follows: Papalexandri et al., 1994 (MINLP): 917,880 $/yr; Hallale and Fraser, 2000 (pinch): 427,000 $/yr; our optimum: 436,289 $/yr.
4. Conclusions
A new, robust and fairly linear MINLP model for the synthesis of MENs has been developed, based on the superstructure and optimisation model developed by Yee and Grossmann (1990) for designing HENs. Using the new model, we can obtain the proper trade-off between capital and external MSA costs by taking all the driving forces as optimization variables. The model's main advantage can be fully exploited in the case of packed columns. However, the model can be extended to handle staged columns as well by introducing non-linear expressions into the MINLP problem. The numerical stability of the solution is enhanced by using relaxed binary variables with additional constraints and a penalty function to force the relaxed variables to take binary values. The solutions of the example problems justify the applicability of the model and suggest that the new model could be applied to design the MEN part of more complex process synthesis problems.
5. References
Brooke, A., Kendrick, D. and Meeraus, A., 1992, GAMS: A User's Guide, Rel. 2.25, Scientific Press.
Chen, J.J., 1987, Letter to the Editor: Comments on improvement on a replacement for the logarithmic mean, Chem. Engng. Sci., 42, 2488.
Comeaux, R.G., 2000, Synthesis of MENs with Minimum Total Cost, MSc Thesis, Dept. of Process Integration, UMIST, Manchester, England.
El-Halwagi, M.M. and Manousiouthakis, V., 1989, Synthesis of Mass Exchange Networks, AIChE Journal, 35(8), 1233-1244.
Hallale, N. and Fraser, D.M., 1998, Capital Cost Targets for Mass Exchange Networks, Parts I-II, Chem. Eng. Science, 53(2), 293-313.
Hallale, N. and Fraser, D.M., 2000, Supertargeting for Mass Exchange Networks, Parts I-II, Trans IChemE, 78, Part A, 202-216.
Hallale, N., 1998, Capital Cost Targets for the Optimum Synthesis of Mass Exchange Networks, PhD Thesis, University of Cape Town, Dept. of Chemical Engineering.
Papalexandri, K.P., Pistikopoulos, E.N. and Floudas, C.A., 1994, Mass Exchange Networks for Waste Minimization, Trans IChemE, 72, Part A, 279-293.
Szitkai, Z., Msiza, A.K., Fraser, D.M., Rev, E., Lelkes, Z. and Fonyo, Z., 2002a, Comparison of different mathematical programming techniques for mass exchange network synthesis, pp. 361-366, Proc. ESCAPE-12, The Hague, The Netherlands, 26-29 May 2002; Eds. J. Grievink and J. van Schijndel, Elsevier.
Szitkai, Z., Lelkes, Z., Rev, E. and Fonyo, Z., 2002b, Handling of removable discontinuities in MINLP models for process synthesis problems, formulations of the Kremser equation, Computers Chem. Engng., in press.
Yee, T.F. and Grossmann, I.E., 1990, Simultaneous optimization models for heat integration II. Heat exchanger network synthesis, Computers Chem. Engng., 14(10), 1165-1184.
A Knowledge Based System for the Documentation of Research Concerning Physical and Chemical Processes: System Design and Case Studies for Application
M. Weiten, G. Wozny
Dipl.-Ing. Moritz Weiten and Prof. Dr.-Ing. Günter Wozny, Technical University of Berlin, Department of Process and Plant Technology, Sekr. KWT-9, Straße des 17. Juni 135, 10623 Berlin
Abstract
A technical framework is presented in this paper which allows scientific institutions to establish a knowledge-based management of their research results. The system incorporates general principles of information management and relies on modern techniques of knowledge engineering. It allows scientists to store research results in standard data formats as well as to include knowledge about the resources and their interconnections. The underlying methodology is based on the standardization of scientific data at different levels and on the concept of ontologies.
1. Introduction
Great parts of the results of many research projects dealing with physical and chemical processes are lost due to the lack of an integrated and complete documentation. At least published work can be retrieved with the help of digital libraries; but even within an institution, researchers find it hard to comprehend the work of their colleagues and to gather the corresponding data without the help of the originators. With a few exceptions, most research institutions dealing with physical and chemical processes leave the task of information management to the individual researcher. At the same time, research institutions and individual researchers need a technical platform enabling them to store and retrieve their data and information in a standardized way. The framework and the system implementation presented in this paper focus on the latter aspect: the development of a knowledge based system is described which helps to manage the results of scientific research in the area of physical and chemical processes.
2. Goals/Requirements
The following are some general goals of information management (Weiten et al., 2002a): storage, management and integration of heterogeneous resources (text-based, data-based, images, etc.); long-term storage and access; efficient information retrieval; and modular storage and access.
2.1. Management of heterogeneous resources
Research concerning physical and chemical processes typically has a heterogeneous output in the form of data files, spreadsheets, publications, etc. A concept for the
management of such resources has to take into account the different ways the information is structured in each case. Such a concept has to provide a way to capture the existing links between resources, based on their context (e.g. a text document and a spreadsheet referring to the same experiment). An interface for integrated access to these heterogeneous resources has to be supplied. Appropriate data formats (or wrappers/mediators¹) are required in order to realize certain functionality (e.g. extraction, filtering and linking). Heterogeneous resources require integration concepts in order to allow efficient access without knowledge of the particular source (Paton et al., 2000). In this project an integration concept based on common data models has been developed. It helps users to query and access information sources with the help of graphical user interfaces visualizing basic concepts of their scientific domain.
2.2. Long-term storage
Long-term storage requires appropriate data formats and standards, which in turn require interfaces to standard applications and broad user support. Standardization is often a difficult job. The standards of individual organisations or corporations (such as a plant data model for CAE-related tasks and tools) are designed to perfectly satisfy their particular requirements. This kind of standard turns out to be insufficient for organisations sharing resources with others (e.g. on the basis of collaboration). On the other hand, international, widespread standards tend to be either too complex to maintain (cp. the comments of Batres et al. (1999) on STEP) or too simple to provide a basis for real-world applications. The solution lies in standardization at different levels. In this project, several XML-extending standards have been chosen as the basis for long-term storage, and appropriate tools (converters, viewers) have been developed.
2.3. Efficient information retrieval
Efficient retrieval is equivalent to obtaining the required information with the highest possible precision (the lowest overload) and little effort. Widespread retrieval techniques have been developed especially for text-based information sources (e.g. in the form of search engines). These techniques rely on natural language, with all its ambiguity and complexity. Alternative approaches for text-based information have, among others, been developed for texts with a very regular structure that can be represented by a data model (Embley et al., 1998), and for engineering data (Moss et al., 1999). Databases provide a structure and a flexible query language, making precise queries possible. The major drawback is the required knowledge of the particular data model. Information integration approaches target this problem. In this project an integration approach developed in the field of knowledge engineering (based on ontologies) has been chosen (Studer et al., 1998). It allows information to be accessed from the viewpoint of common concepts within a scientific domain.
2.4. Modular storage and access
Modularity is obviously important for large information resources and when there is no or little support for filtering. Database applications, for example, offer such filtering capabilities. This does not hold for typical file-based data sets such as spreadsheets. In this case, the information or data can only be provided as a whole, unless specialized interfaces for certain file formats are available.
Standard data formats for scientific data allow the creation of standardized interfaces that are capable of modular access.

¹ See also Wiederhold (1992) and Muslea et al. (2001).
The importance of modular access to text-based information in the form of publications has been strongly emphasized by Kircz (1998). The standard data formats used in this project allow modular access to information at different levels of detail.
3. Methodology and System Design
3.1. Incorporation of the ontology concept
The framework developed relies on the concept of ontologies (Uschold and Gruninger, 1996), which can be seen as 'a shared and common understanding of some domain that can be communicated across people and computers' (Studer et al., 1998). Ontologies as "knowledge data models" have been applied in different areas, such as information integration (Paton et al., 2000) and expert systems (Ceccaroni et al., 2001). In the system presented here, ontologies provide the basis for a formal description and a classification of information resources. Ontologies have been implemented in order to describe the relations between scientific documents (publications, reports), similar to the "ScholOnto" approach developed by Motta et al. (2000). Additional ontologies describe concepts of scientific research that have to do with the development of mathematical models (Weiten et al., 2002a and 2002b). These allow a linking of different resources based on common concepts (e.g. model parameters).
3.2. Usage of standard data formats
Standard data formats are a requirement of long-term storage and platform-independent access. Furthermore, certain standard data formats provide useful functionality for the retrieval and manipulation of data. As already mentioned, the development of standards for certain fields can be very demanding, since a standard must satisfy the needs of its potential users in terms of a least common denominator. The framework presented relies on incremental standardization. XML forms the basic standard, with a wide range of tools available for data access and manipulation (linking, parsing, filtering, transforming, etc.). Text documents coded in XML are annotated with additional data for bibliographic information on the basis of the Dublin Core standard (Weibel, 2000). The DocBook standard (Walsh and Muellner, 1999) is the basis for the standardization of the document structure; a standardized structure allows modular storage and access of documents. Scientific data are stored in the Extensible Data Format (XDF). XDF itself follows the philosophy of incremental standardization: it is an XML-based data format which provides some basic features (e.g. the definition of physical units, n-dimensional data structures) and can be extended to meet the requirements of certain scientific fields. Different XDF-based standards are always compatible at the level of the basic XDF functionality. In this project, XDF has been chosen as the basis for numerical and experimental data; the DocBook standard defines the structure for text documents, with bibliographic information in terms of Dublin Core.
3.3. Technology
The system consists of different parts. A central database stores data and information in the form of data files in standardized formats, together with ontology-related additional information about the resources. The central repository is based on a DB2 database with the XML Extender and the Text Extender. Web-based user interfaces provide maintenance and retrieval functions. Office templates (with Visual Basic wizards) form the other
end of the system. These support users in generating standardized resources and submitting them to the central repository. Java applications perform the conversion process (from proprietary office formats to XML-based formats). They allow users to register resources with parts of the standard data models (ontologies) and to define interconnections. The generation and maintenance of the ontologies is realized with Protege 2000, a tool for knowledge-based systems developed at Stanford University.
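To make the annotation step concrete, the following minimal sketch shows how a text resource might be wrapped with Dublin Core bibliographic metadata using Python's standard XML tooling. The element layout is an illustrative assumption only; the project's actual DocBook/XDF schemas are not reproduced in this paper.

import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def annotate(title, creator, date, body_text):
    """Wrap a document body with Dublin Core bibliographic metadata."""
    resource = ET.Element("resource")
    meta = ET.SubElement(resource, "metadata")
    for tag, value in (("title", title), ("creator", creator), ("date", date)):
        elem = ET.SubElement(meta, f"{{{DC_NS}}}{tag}")  # namespaced dc:* element
        elem.text = value
    ET.SubElement(resource, "body").text = body_text
    return ET.tostring(resource, encoding="unicode")

print(annotate("Mass transfer model for NF membranes", "A. Researcher",
               "2002-11-01", "Further experiments have been carried out..."))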
4. Use Cases

4.1. "Personal" information management
[Figure 1 (screenshot): an editor with a document window showing a mass-transfer model text, a tree view displaying the selectable parts of the document structure, and a pop-up menu offering different links/mappings.]
Fig. 1: Formal description of information resources with an editor.
In this situation, a researcher works on a process model, including the experimental investigation. He/she has performed several experiments on the basis of a certain strategy, which has been conceptualised against the background of the mathematical model development. The approach chosen for the mathematical model requires the determination of certain model parameters. The researcher has gathered several files of raw data from these experiments. Spreadsheet software is used for filtering and curve fitting. The derived values are used in the implemented mathematical model of the process. The researcher obtains simulation results in the form of data files, which are again processed with the help of spreadsheet software in order to analyse and visualize model characteristics and to validate the model against experimental data. When the research project has reached a certain status, the researcher decides to prepare a publication. At present, a "paper version" of the publication would in many cases be the only output accessible some time after the termination of the project. With the help of the system presented in this paper, the researcher first of all uses templates enabling him/her to create standardized spreadsheets. A database for references together with document templates allows the researcher to create a publication which meets the particular layout required and at the same time serves as the basis for a standardized XML file (created automatically). The created files as well as the raw data files are submitted to the central database, which validates the structure and "asks" for additional information about the resources supplied and their interconnections. The latter can be edited with a graphical user interface (Figure 1), which provides access to the particular structure
of a resource. Colleagues or successors of the researcher within the institution use a web-based interface to "browse" the submitted research results. They can use that interface to view texts and data. Hyperlinks provide additional information about resources, such as the determination of model parameters.

4.2. Collaborative research
Concerning the tasks of scientific work, this situation is similar to the previous one. However, there are some new aspects. The central database allows the institutions involved to access the data and information of all participants without the need to install any special software. Possible contradictions (e.g. two participants referring to the same physical parameter with different terms and different values or functions) can be discovered in time, since the system checks single concepts (such as parameters) against the appropriate ontology, as sketched below. The standard file formats can be extended by each participating discipline in order to meet particular requirements. Compatibility can still be guaranteed at different levels (e.g. XML, XDF for data files). Data can be exchanged on the basis of a common denominator.
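The contradiction check mentioned above can be pictured with a small sketch. Everything here, including the concept name, the synonym set and the tolerance, is an invented illustration of the idea, not the system's actual data model.

# Ontology entry: a concept with its canonical unit and known synonyms.
ontology = {
    "hydraulic_membrane_resistance": {"unit": "1/m", "synonyms": {"r_w", "R_membrane"}},
}

def check_submission(name, unit, value, registry):
    """Map a submitted parameter to its ontology concept and flag conflicts."""
    for concept, spec in ontology.items():
        if name == concept or name in spec["synonyms"]:
            if unit != spec["unit"]:
                return f"unit mismatch for '{concept}': {unit} vs {spec['unit']}"
            previous = registry.setdefault(concept, value)  # first submission wins
            if abs(previous - value) > 1e-9 * max(abs(previous), 1.0):
                return f"conflicting values for '{concept}': {previous} vs {value}"
            return "ok"
    return f"unknown concept '{name}' - not in ontology"

registry = {}
print(check_submission("r_w", "1/m", 3.2e13, registry))         # ok (first entry)
print(check_submission("R_membrane", "1/m", 2.9e13, registry))  # conflict detected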
5. Conclusions and Outlook
Modern techniques in the fields of knowledge engineering and information management allow research institutions to increase the value of research results in the form of electronically stored resources. Two typical use cases indicate how the system presented in this paper technically supports scientists in enhanced information management. The system still has to be integrated into a complete framework which includes the social and organisational aspects of information and knowledge management. Such a framework is provided, for example, by the CommonKADS methodology (Schreiber et al., 2000).
6. References
Batres, R., Asprey, S.P., Fuchino, T., Naka, Y., 1999. A KQML Multi-Agent Environment for Concurrent Process Engineering. Computers and Chemical Engineering, 23, 653-656.
Buckingham, S., Motta, E., Domingue, D., 2000. ScholOnto: an ontology-based digital library server for research documents and discourse. International Journal on Digital Libraries, 3, 237-248.
Ceccaroni, L., Cortes, U., Sanchez-Marre, M., 2001. OntoWEDSS - An Ontology-based Environmental Decision-Support System for the management of Wastewater treatment plants. Thesis (PhD), Universitat Politecnica de Catalunya.
Embley, D.W., Campbell, D.M., Smith, R.D., Liddle, S.W., 1998. Ontology-Based Extraction and Structuring of Information from Data-Rich Unstructured Documents. Proceedings of the International Conference on Information and Knowledge Management, New York: ACM Press, 52-59.
Kircz, J.G., 1998. Modularity: the next form of scientific information presentation? Journal of Documentation, 54 (2), 210-235.
Moss, M.A., Jambunathan, K., Lai, E., 1999. A knowledge based database system for engineering correlations. Artificial Intelligence in Engineering, 13, 201-210.
Motta, E., Shum, S.B., Domingue, J., 2000. Ontology-Driven Document Enrichment: Principles, Tools and Applications. International Journal of Human Computer Studies, 52, 1071-1109.
Muslea, I., Minton, S., Knoblock, C.A., 2001. Hierarchical wrapper induction for semistructured information sources. Journal of Autonomous Agents and Multi-Agent Systems, 4 (1/2), 93-114.
Paton, N.W., Goble, C.A., Bechhofer, S., 2000. Knowledge based information integration systems. Information and Software Technology, 42, 299-312.
Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadboldt, N., Van de Velde, W., Wielinga, B., 2000. Knowledge Engineering and Management: The CommonKADS Methodology. Massachusetts: MIT Press.
Studer, M., Benjamins, V.R., Fensel, D., 1998. Knowledge Engineering: Principles and methods. Data & Knowledge Engineering, 25, 161-197.
Uschold, M., Gruninger, M., 1996. Ontologies: Principles, Methods and Applications. Knowledge Engineering Review, 11 (2), 93-136.
Walsh, N., Muellner, L., 1999. DocBook. Sebastopol, CA: O'Reilly.
Weibel, S., 2000. The Dublin Core Metadata Initiative. D-Lib Magazine, 6 (12) [online] http://www.dlib.org/.
Weiten, M., Goers, B., Wozny, G., 2002. Information Management for Engineering Sciences. In: Novosad, J., Ed. Proceedings of the 15th International Congress of Chemical and Process Engineering, Process Engineering Publisher, 155-156.
Weiten, M., Wozny, G., Goers, B., 2002a. Wege zum Informationsmanagement fur interdisziplinare Forschungsprojekte und Entwicklung eines prototypischen Systems. Chemie Ingenieur Technik, 74 (11).
Weiten, M., Wozny, G., Goers, B., 2002b. In: Luczak, H., Cakir, A.E., Cakir, G., Eds. Proceedings of the 6th International Conference on Work With Display Units, Berlin: ERGONOMIC GmbH, 624-626.
Wiederhold, G., 1992. Mediators in the architecture of future information systems. IEEE Computer, 25, 38-49.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
A Semi-Heuristic MINLP Algorithm for Production Scheduling
Mehmet Yuceer, Ilknur Atasoy and Ridvan Berber
Department of Chemical Engineering, Faculty of Engineering, Ankara University, Tandogan 06100 Ankara, Turkey
Abstract
A large number of process synthesis, design and control problems in chemical engineering can be formulated as mixed integer nonlinear programming (MINLP) models involving continuous variables and integer decisions. In this paper, we present an MINLP formulation for the production scheduling of multi-product batch plants. The binary (0/1) integer variables of the non-convex MINLP problem make it impossible to use general-purpose algorithms for its solution. In order to overcome this difficulty, a semi-heuristic algorithm for production scheduling was developed. Using this approach, the non-convex MINLP problem is first considered as an MILP problem without dividing the orders. Thus, the order that causes a prolonged delivery time can be identified. Constraints for this order are then relaxed and the MILP problem is re-solved using the new constraints. Having reached the new schedule, the quantitative distribution of the specific order to different units can be determined by solving the LP problem, which does not contain integer variables since the allocation of the orders to the units and the processing order are known. The results obtained with some example problems indicate improvements over previous schedules and therefore give promise that the suggested strategy might be used in moderately sized industrial applications.
1. Introduction
Single-stage multi-product plants with non-identical parallel production units are common in the processing, pharmaceutical, polymer, food and machinery industries. Hard industrial competition requires that customer orders in such plants are not only fulfilled on time, but also manufactured with the utmost economic benefit. This, in turn, dictates a strict scheduling of orders in the available equipment. If one adds the many constraints related to the scheduling, such as predecessors/successors, cycles to be avoided and makespan, the problem becomes a huge one right at the 'formulation' phase, even before any attempt to solve it. The solutions provided until the late 1980s had not been considered satisfactory for preventing production tardiness. After probably the first attempt towards the solution of this NP-complete problem by Pekny and Miller (1991), some approaches using heuristics (Mussier and Evans, 1989), MILP formulation in discrete time (Kondili et al., 1993a, b) and continuous time with heuristics (Cerda et al., 1997) have been proposed. However, one major difficulty with MILP formulations is that the problem size increases exponentially with the number of units/orders. In a more recent work, Berber and
Ozdemir (2003) identified one deficiency and two misconceptions about the formulation of such problems. They claimed that, contrary to the foregoing statement, the fact that every order can be processed in every unit or after every order in one unit contributes more to the size of such problems. In our previous work (Berber and Ozdemir, 2003), an automatically generated MILP formulation was used to schedule a set of orders in non-identical parallel processing units without dividing any order. However, if the processing time of any of the orders in one unit is very long, then it becomes necessary to divide this order into two or more pieces so that other units may also be used to process it. This strategy will undoubtedly help meet customer expectations and prevent tardiness. The special difficulty associated with such a strategy, which results in an MINLP formulation, is that the objective function becomes nonlinear, and furthermore non-convex due to the multiplicative appearance of integer and real variables. Since available MINLP algorithms are for convex functions only, the problem is seemingly very difficult to solve. This work tackles such problems, defines a semi-heuristic algorithm for their solution, and shows that it is effective for moderately sized cases.
2. Problem Statement and Model
The scheduling problem in single-stage, parallel-unit, multi-product plants is considered. Customer orders, each composed of only one product, are to be processed in a subset of the available processing units. The total production time is chosen as the objective to be minimized. The model accounts for a number of logical physical constraints which are likely to occur in practice, as follows:
• Orders may have predecessors and successors
• Both the predecessor and the successor to a given order must be manufactured in the same processing unit
• Each equipment item has only one single order to be processed first in it
• Each order can be processed first in only one processing unit
• Makespan is considered as a constraint to ensure that a due date for the whole set of orders to be processed is met
• Cycle constraints were included such that a job cannot be simultaneously both the predecessor and the successor to another one
• The release time of units, i.e. the time needed for any unit to become available, is incorporated in the formulation of the objective function
The original problem is first handled as an MILP problem without any interest in dividing the orders. The solution to this problem identifies the order that extends the processing time, thereby indicating where an order needs to be divided and processed in multiple units. Then, the constraints for this order are relaxed so that it is allowed to be processed in more than one unit, and the MILP problem is re-solved under these circumstances. This solution provides the schedule in which the particular order is processed in multiple units in an optimal setting. As the allocation to the units and the processing order thus become known and the integer variables are eliminated, the real amount to be processed in each unit remains to be determined, which is done by solving the corresponding linear problem (LP).
The solution algorithm developed in this study is presented in Figure 1 as a flow diagram; a toy illustration of the same two-stage pattern is sketched below. [Figure 1 (flow diagram): (1) modeling the non-convex MINLP problem; (2) solving the MINLP problem as an MILP problem without dividing the orders; (3) determination of the order to be divided and the number of units in which it is to be processed (manual interpretation); (4) relaxation of the constraints and re-solving the problem in MILP form; (5) solving the final problem converted into LP form.]
Figure 1. Semi-heuristic algorithm for the solution of the non-convex MINLP problem.
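The following runnable toy (assuming the PuLP package is available) illustrates the MILP-then-relax pattern on a drastically simplified assignment model with no sequencing, predecessor or cleaning-time constraints. The order names and times are invented, and the paper's actual implementation uses MATLAB and MINOS instead; this is only a sketch of the pattern, not the paper's formulation.

import pulp

# Illustrative data: nominal processing time (days) of each order,
# taken as independent of the unit - a deliberate simplification.
orders = {"O1": 4.0, "O2": 3.0, "O3": 6.0}
units = ["U1", "U2"]

def solve_assignment(split_order=None):
    prob = pulp.LpProblem("toy_schedule", pulp.LpMinimize)
    makespan = pulp.LpVariable("makespan", lowBound=0)
    prob += makespan                       # objective: minimize makespan
    # Fraction of order i processed in unit l; binary (whole order in
    # one unit) unless the order has been selected for splitting.
    frac = {(i, l): pulp.LpVariable(
                f"x_{i}_{l}", lowBound=0, upBound=1,
                cat="Continuous" if i == split_order else "Binary")
            for i in orders for l in units}
    for i in orders:                       # every order fully allocated
        prob += pulp.lpSum(frac[i, l] for l in units) == 1
    for l in units:                        # unit workload bounds the makespan
        prob += pulp.lpSum(orders[i] * frac[i, l] for i in orders) <= makespan
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(makespan)

m1 = solve_assignment()                    # step 2: MILP, no splitting
m2 = solve_assignment(split_order="O3")    # steps 4-5: relax the longest order
print(f"makespan without splitting: {m1}, with O3 split: {m2}")

Relaxing the longest order from binary to continuous reduces the makespan from 7.0 to 6.5 days here, mirroring the improvement reported for the full problem in Section 3.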
2.1. Objective function
The objective of the problem we propose here is the minimization of a function comprising the total processing time of the orders being considered, as represented by the following mathematical statement:
min Σ_{l∈L} Σ_{i∈I} [ XF_il · T_il + Σ_{m∈PR_il} X_mil · T_il + Σ_{m∈PR_il} X_mil · TCL_mil ]      (1)

I : set of available orders
L : set of available units
PR_il : set of customer orders that can be processed just before O_i in unit l
T_il : total processing time of order O_i in unit l
TCL_mil : cleaning time for the ordered pair of jobs (O_i, O_m) in unit l
X_mil : binary variable denoting that the processing of order O_m takes place in unit l before the campaign for O_i
XF_il : binary variable denoting that order O_i is the first to be processed in unit l
XM_il : amount of order O_i to be processed in unit l
A MATLAB program was developed to automatically formulate the problem such that it can be solved by a branch and bound algorithm when in MILP form. The program
takes the data related to the properties of the orders and production units, creates the model, i.e. the objective function and all the constraint equations, produces the input files for the MINOS package (Murtagh and Saunders, 1995) to be run in a Fortran environment, and other files necessary for easy interpretation of the MINOS output files. The user then easily extracts the results of the optimum schedule, i.e. the allocation of orders to the production units and the schedule in each unit, to be drawn as a Gantt chart.
3. Example and Results
A set of example problems was solved in order to test the proposed method. The one presented here is the specific problem tackled by Cerda et al. (1997), and later reconsidered and solved by Berber and Ozdemir (2003) with the objective of minimizing the total production time. This example is composed of 10 orders to be processed in 4 units, and has 47 binary variables and 648 constraint equations. All units are assumed to be available at the time of scheduling. Tables 1 and 2 show the predecessors and order quantities, and the batch capacities of orders in every unit, respectively. Production times of one batch of each product are given in Table 3, whereas the cleaning times are indicated in Table 4. First the problem was solved with an MILP formulation, i.e. without resorting to dividing any order. Makespan was considered as a constraint and an optimum schedule, with an objective function value of 81.4 days and a completion time of 26.25 days, was found. However, the results, as shown in Figure 2a, indicated that dividing order 6 would have resulted in an even better optimal solution in terms of fulfilling a possible customer demand for earlier completion of the set of orders. When the solution strategy proposed in this work was implemented for this example, the optimum schedule shown in the Gantt diagram of Figure 2b was found. The objective function value reached was 81.5 days, and the orders were to be completed within 23.85 days. These results clearly indicate that, with a relatively small sacrifice of 0.1 days in the objective function, the set of orders would be completed 2.4 days earlier.
4. Conclusions
This work considered the optimal scheduling of a set of orders in multi-product batch plants with non-identical, single-stage, parallel processing units. Particular emphasis was placed on dividing one order among multiple units, and a semi-heuristic algorithm was proposed to solve the MINLP problem with a non-convex objective function. The results obtained with some example problems indicate improvements over previous schedules where such emphasis was not considered, and therefore show promise that the suggested strategy can be used for implementation in industrial environments.
Table 1. Input data, predecessors and order quantities.

Product  Predecessors             Quantity (kg)
O1       O3, O6, O9, O10          550
O2       O3, O7                   850
O3       O2, O6, O9, O10          700
O4       O5, O6, O10              900
O5       O4, O7, O10              500
O6       O1, O5, O9               1050
O7       O3, O5, O8               950
O8       O5, O7                   850
O9       O1, O3, O6, O10          450
O10      O1, O3, O4, O9, O6       650

Table 2. Input data, batch capacities (kg/batch).

Product  Unit 1  Unit 2  Unit 3  Unit 4
O1       100     -       -       -
O2       -       -       210     -
O3       140     -       170     -
O4       -       120     -       -
O5       -       90      -       130
O6       280     210     -       -
O7       -       -       390     290
O8       -       -       -       120
O9       200     -       -       -
O10      250     270     -       -

Table 3. Production times of one batch of orders in units (T_i day/batch).

Product  Unit 1  Unit 2  Unit 3  Unit 4
O1       1.7     -       -       -
O2       -       -       0.9     -
O3       1.25    -       1.1     -
O4       -       1.7     -       -
O5       -       1.4     -       0.85
O6       2.4     1.8     -       -
O7       -       -       1.05    1.65
O8       -       -       -       2.1
O9       1.6     -       -       -
O10      2.6     1.9     -       -

Table 4. Cleaning times for each ordered pair of jobs (predecessor, successor). [The full 10 x 10 matrix of pairwise cleaning times, with entries ranging from 0.05 to 2.1 days, is not cleanly recoverable from the source.]
[Figure 2 (two Gantt charts, "4 Units - 10 Orders", time axis in days): panel (a) shows the MILP schedule without dividing the orders, with completion at 26.25 days; panel (b) shows the schedule obtained when order 6 is divided between units, with completion at 23.85 days. TCL blocks mark cleaning times between consecutive orders.]
Figure 2. Solution of the example problem: (a) without dividing the orders (Berber and Ozdemir, 2003); (b) when order 6 is divided between units (this work).
5. References
Berber, R. and Ozdemir, Z., 2003, Production Scheduling in Single Stage Multi-Product Batch Plants: A Critical Evaluation and Some Improvement, paper submitted to Ind. Eng. Chem. Res.
Cerda, J., Henning, G.P. and Grossmann, I.E., 1997, A Mixed-Integer Linear Programming Model for Short-Term Scheduling of Single Stage Multiproduct Batch Plants with Parallel Lines, Ind. Eng. Chem. Res., 36, 1695.
Kondili, E., Pantelides, C.C. and Sargent, R.W.H., 1993a, A General Algorithm for Short-Term Scheduling of Batch Operations - 1, Computational Issues, Comp. & Chem. Engng., 17, 211.
Kondili, E., Pantelides, C.C. and Sargent, R.W.H., 1993b, A General Algorithm for Short-Term Scheduling of Batch Operations - 2, MILP Formulation, Comp. & Chem. Engng., 17, 228.
Murtagh, B.A. and Saunders, M.A., 1995, MINOS Ver. 5.4 User's Guide, Systems Optimization Laboratory, Stanford University, Stanford, CA.
Mussier, R.F.H. and Evans, L.B., 1989, An Approximate Method for the Production Scheduling of Industrial Batch Processes with Parallel Units, Comp. & Chem. Engng., 13, 229.
Pekny, J.F. and Miller, D.L., 1991, Exact Solution of the No-Wait Flowshop Scheduling Problem with a Comparison to Heuristic Methods, Comp. & Chem. Engng., 15, 741.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Roles of Ontology in Automated Process Safety Analysis
Chunhua Zhao, Mani Bhushan, Venkat Venkatasubramanian*
(chunhua, mbhushan, venkat)@ecn.purdue.edu
Laboratory for Intelligent Process Systems (LIPS), School of Chemical Engineering, Purdue University, West Lafayette, IN 47907, USA
Abstract
Process safety analysis is an information-intensive task. Collecting all the necessary information from different sources is time consuming and error prone, yet most of the information may already be available in some electronic format. Sharing and reusing this process information can save significant time and effort, and using ontologies can greatly enhance communication and inter-operability among different systems. In this work, ontologies are designed for various kinds of process information and for safety analysis results. PHASuite, an intelligent software system for performing automated process safety analysis, has been built based on these ontologies, and appropriate information sharing schemes have been proposed and implemented in it. It has been found, based on several industrial case studies, that information sharing can greatly reduce the time and errors associated with re-entering information.
1. Introduction
Process safety analysis is an information-intensive task. For a typical chemical process, the basic information includes the material information, the P&ID, the process chemistry, and the operating conditions. Different process design and analysis systems may use different aspects of this information. For example, simulation software uses the physical and chemical properties of materials, the P&ID and the operating conditions to quantitatively model the process. Process safety analysis software, such as PHASuite (Process Hazards Analysis Suite, an intelligent automated tool for process safety analysis developed in the LIPS Laboratory at Purdue University; details can be found in Zhao, 2002), needs most of the information required by simulation tools, along with some additional information, such as the hazardous material properties. A better understanding of the process provided by simulation results, such as the calculated separation ratio of a separation, could also be used in safety analysis. Rather than collecting all the necessary information from different sources and re-entering it into PHASuite, which is time consuming and error prone, sharing and reusing this process information can lead to significant savings in time and effort. The very first requirement for information sharing between different software systems is that the software have an open structure, i.e. the information to be shared should be syntactically readable by other software, and not just stored through object serialization. More and more software is choosing this open approach. However, in order to meaningfully share the information, a common understanding is necessary to reduce or eliminate conceptual and terminological confusion. Such an understanding can help in achieving (1) better communication between people; and (2) inter-operability among systems (Uschold and Gruninger, 1996). The latter is achieved by translating between
different models, paradigms, languages and software tools. The shared understanding is the basis for a formal encoding of the important entities, attributes, processes and their inter-relationships in the domain of interest. Although an implicit conceptualization may be embodied in a piece of software (for example, a process simulation tool presumes views on concepts such as unit operation, operation and process), an explicit and standardized conceptualization is much more useful. In fact, an ontology is defined as an explicit specification of a conceptualization (Gruber, 1993). The requirements for a useful ontology include clarity, coherence, extensibility, minimal encoding bias, and minimal ontological commitment (Gruber, 1993). In this work, the design of ontologies consists of the following steps: (1) identifying purpose and scope; (2) building the ontology, including ontology capture, ontology coding, and integrating existing ontologies; (3) evaluating; (4) documenting. Ontologies can be used to describe the semantics of information sources and make their contents explicit, thereby enabling the integration of existing information repositories, either by standardizing terminology among the different users of the repositories, or by providing the semantic foundations for translators. Ontologies can play a significant role in information sharing and knowledge reuse in process safety analysis. In this paper, we define ontologies related to process safety analysis and propose an ontology-based information sharing scheme, which has been implemented in PHASuite to achieve inter-operability with other software systems. While the specific details presented here pertain to PHASuite (based on the HAZOP methodology), we believe that the general scheme of ontology-based information sharing will be applicable to and useful for other process safety analysis systems. To the best of our knowledge, as of now there are no well-defined standards for sharing information for process safety analysis. Our work is an attempt towards developing a standardized structure for information sharing for this important task. The language used to define the ontologies is currently informal, i.e. most are defined in natural language. Although this kind of definition is sufficient for the current implementation, definitions created using a more formal language are desired for further implementation. To develop formal ontologies, specification languages have to be used. For example, the KSL Ontology Server (Farquhar et al., 1995), based on Ontolingua, is a mechanism for writing ontologies in a canonical format. Recently, built on earlier efforts and inspired by three building blocks, namely Web integration, frame-based systems and description logics, DAML+OIL (Connolly et al., 2001) has been proposed as an ontology specification language. Besides the specification language, acquisition support and reuse of ontologies are also active research areas.
2. Ontologies in PHASuite
In PHASuite, the main required information can be divided into four types: material, P&ID, chemistry and operating procedures. Based on these types of information, ontologies have been developed for operation-related and safety-related information.
2.1. Operation related ontologies
For the operation-related information, concepts for operating procedures, operating conditions, and parameters are necessary. The hierarchical levels of the ISA-S88.01 standard (ANSI/ISA-S88.01, 1995), which decomposes a process into different levels, namely procedure, unit procedure, operation, and phase, have been followed in this work. To specify a piece of an operating procedure, parameters and operating conditions are needed. The definition of process chemistry is straightforward and can be divided into reaction and
separation. A reaction is specified by reaction type, reactants, products, excess reactants, inhibitors, catalysts and solvents. A separation is specified by input materials, the names of the different phases, and the materials involved in each phase. Equipment is specified by design properties, such as design temperature, pressure and capacity, as well as some structural specifications, including whether the equipment has a jacket, etc.
2.2. Safety related ontologies
Safety-related ontologies are of two types: (i) for sharing information required for safety analysis, and (ii) for sharing safety analysis results. Besides operation-related information, safety analysis requires additional information, e.g. the hazardous material properties. The properties currently used in PHASuite can be of Boolean type, i.e. true or false (for example corrosive, cryogenic, dust explosibility, hygroscopic, racemic, etc.), take real values or classification levels (such as boiling temperature, fire hazard level, etc.), or be text strings, such as risk phrases. Material hazardous properties usually exist in Material Safety Data Sheets (MSDS). Some other useful information, available in the MSDS but not used in the current version of PHASuite, can be defined in a similar manner (a sketch of such a typed property record is given at the end of this section). Besides developing ontologies which enable PHASuite to gather the required input information, it is also useful to define ontologies for sharing the analysis results with other systems. A typical result generated by safety analysis includes information on: (1) the location of the deviation; (2) the equipment; (3) the deviation; (4) the deviation type; (5) the consequence; (6) the cause; (7) the location of the consequence and the location of the cause; (8) safeguards, and (9) recommendations, as well as other useful information such as (10) the cause causal path and the consequence causal path.
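As a sketch of such a typed property record, the dataclass below groups the Boolean, real-valued and text-string properties named above. The class layout and the toluene example values are illustrative assumptions, not PHASuite's internal representation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MaterialSafetyProperties:
    name: str
    # Boolean-typed properties (true or false)
    corrosive: bool = False
    cryogenic: bool = False
    dust_explosibility: bool = False
    hygroscopic: bool = False
    # Real-valued properties or classification levels
    boiling_temperature_c: Optional[float] = None
    fire_hazard_level: Optional[int] = None
    # Free-text properties, e.g. risk phrases
    risk_phrases: list = field(default_factory=list)

example = MaterialSafetyProperties("toluene",
                                   boiling_temperature_c=110.6,
                                   fire_hazard_level=3,
                                   risk_phrases=["R11 Highly flammable"])
print(example)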
3. Information Sharing Scheme for PHASuite
When single (global) ontologies are used as a shared vocabulary for the same kind of information by different software tools, information sharing is straightforward, since the tools can access the information directly. If multiple explicit ontologies are used for different software tools, information sharing is a little more complex, but an inter-ontology mapping can be created to translate information from one ontology to another. However, common ontologies for process information and knowledge related to process safety analysis have hardly been developed so far, and most systems were developed without the guidance of ontologies. To handle this situation, ontologies are first created in PHASuite, based on which adapted ontologies are created and imposed on other software tools, acting either as information suppliers or consumers. A two-step procedure has been developed as the information sharing scheme. As shown in Figure 1, for information importing, the first step is to access and extract information, based on the customized ontologies, from other information sources. The second step is to translate the acquired information into a format understandable by PHASuite, guided by a dictionary serving as the inter-ontology mapping. The information is then stored in an information repository. Both the dictionary and the information storage are created based on the predefined ontologies. During run-time, objects are generated from the interface storage. For exporting information, the first step is to store the information from PHASuite in a results repository based on the predefined ontology, and the second step is to convert the information into the format required by other software tools. The following sections demonstrate this scheme using the examples of sharing operating procedure information with Batch Plus® and safety analysis results with PHAPro®.
Approaches to use this scheme to share information with MSDS and P&ID sources are also proposed.
[Figure 1 (block diagram): the information sharing scheme connects process information, operating procedures and other documentation tools with PHASuite and its operation and safety knowledge base through the ontologies.]
Figure 1 Information sharing scheme in PHASuite.
3.1. Sharing process information with process simulation tools
Batch Plus® (Aspen Technology Inc.) is a batch simulation tool. It needs information about the operating procedures, the equipment and some properties of the materials. Most of this information is also necessary for carrying out safety analysis. Process information in Batch Plus is stored in several separate database files, dispersed across different tables. Queries in SQL were designed to gather the information from the different tables using the relations between them. Material and reaction information is gathered from the material database, and the unit procedure and operation details are collected from the operation database. Since separations are not specified explicitly in Batch Plus, this information has to be gathered through the stream information from the simulation results. Equipment information is better organized in the results database, so it is gathered from that source. Dictionaries, constructed under the guidance of the ontologies, are used to guide the translation of the information gathered from Batch Plus into information accessible to PHASuite. The dictionaries are implemented in a database. As an example, the dictionary that maps operations in Batch Plus to the corresponding operations in PHASuite includes the corresponding PHASuite operation name for each operation in Batch Plus. This information is then stored in an interface database which is readily accessible by PHASuite. The corresponding object representation of this information is created and used in PHASuite during run-time. Figure 2 illustrates the procedure for importing information from Batch Plus®. This information sharing scheme has been tested in several industrial case studies. It was found that approximately 90% to 95% of the process information required by PHASuite can be gathered from Batch Plus®. The remaining information is mainly information that has not been specified in Batch Plus®, such as the material hazardous properties. Sharing information has greatly reduced the time and errors associated with re-entering the information. In addition, it makes cooperation between these different tools possible.
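A minimal sketch of this dictionary-guided translation step is given below, with in-memory SQLite databases standing in for the Batch Plus and interface databases; the table layout and the three operation names are hypothetical, invented purely for illustration.

import sqlite3

# Inter-ontology dictionary: Batch Plus operation name -> PHASuite name.
OPERATION_DICTIONARY = {"CHARGE": "Charging", "REACT": "Reaction",
                        "DISTILL": "Batch distillation"}

source = sqlite3.connect(":memory:")        # stands in for a Batch Plus database
source.execute("CREATE TABLE operations (step INTEGER, name TEXT)")
source.executemany("INSERT INTO operations VALUES (?, ?)",
                   [(1, "CHARGE"), (2, "REACT"), (3, "DISTILL")])

interface = sqlite3.connect(":memory:")     # stands in for the interface database
interface.execute("CREATE TABLE operations (step INTEGER, name TEXT)")
# Step 2 of the scheme: translate each extracted record via the dictionary.
for step, name in source.execute("SELECT step, name FROM operations ORDER BY step"):
    interface.execute("INSERT INTO operations VALUES (?, ?)",
                      (step, OPERATION_DICTIONARY.get(name, name)))
print(list(interface.execute("SELECT * FROM operations")))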
Figure 2. Importing process information from Batch Plus® to PHASuite.
3.2. Sharing results with safety documentation tools
The results generated using PHASuite are stored in a results database following the ontologies designed for the safety results. Although PHASuite itself provides facilities for results documentation and reporting, it may also be useful to export the results to other safety results documentation tools, such as PHAPro® (Dyadem International Ltd.). It is fairly straightforward for PHASuite to export its results to PHAPro®, which can read a hierarchical text file as input. The ontologies for the analysis results are used to guide the design of the import field mapping file used by PHAPro® to map the fields in the text file to the fields understandable by PHAPro®. The results export facility of PHASuite constructs this text file by reading the results from the results database and exporting them in hierarchical form.
3.3. Sharing material information with MSDS
Material properties, especially those related to safety, are important information for safety analysis. Material safety information is normally stored in the form of a Material Safety Data Sheet (MSDS), which is typically a text file with different sections devoted to different aspects of material properties and safety concerns. It is very hard to access the information stored in such a text format; although some parsing tools, such as Perl, can be used to extract certain information by string searching, the information obtained in this manner may be ambiguous. In recent years, XML has emerged as a popular file format for exchanging information, and some efforts have been made to convert MSDS to a standardized XML format (e.g. WERCS® from The WERCS Ltd.). In the proposed approach to access material safety information automatically from MSDS-XML, the central components will be an XML parser and a dictionary or translator defined using ontologies (sketched below). As with the other information, the material information will be stored in an interface database of PHASuite, and the object representation for materials will be constructed from this database at run-time.
3.4. Sharing P&ID information with CAD tools
The P&ID is another important information source for process safety analysis. For a modern chemical process, the P&ID is normally very complex and it is a tedious process to recreate such a drawing in PHASuite. These days, most P&ID drawings are in some electronic format, such as AutoCAD®, SmartPlant P&ID®, etc. The drawings created using older versions of CAD tools are composed of lines or curves. In recent years, with the development of object-oriented programming, newer CAD tools have become object-based: the basic drawing components are blocks instead of lines, and some of them are
data-centric. This data-centric approach makes it possible for PHASuite to share information with them. Some code (possibly in Visual Basic for Applications), using algorithms similar to those currently coded in PHASuite, can be embedded in the CAD software to generate a process flow diagram (PFD) from the P&ID. The information can then be translated into an interface database accessible by PHASuite, guided by the dictionary created under the guidance of the ontologies for equipment. It may also be possible to move the P&ID drawing facility of PHASuite to the CAD software by using its drawing facility to further specify the stream information in the PFD.
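To indicate how the MSDS-XML route of Section 3.3 might look in practice, the sketch below parses a tiny hand-written MSDS fragment into a typed record. The element names are invented for illustration, since no MSDS-XML schema is fixed in this paper.

import xml.etree.ElementTree as ET

msds = ET.fromstring("""
<msds material="toluene">
  <physical><boilingPoint unit="C">110.6</boilingPoint></physical>
  <hazards><flammable>true</flammable></hazards>
</msds>""")

# Convert the string content into the typed values used internally.
record = {
    "material": msds.get("material"),
    "boiling_point_c": float(msds.findtext("physical/boilingPoint")),
    "flammable": msds.findtext("hazards/flammable") == "true",
}
print(record)  # ready to be stored in the interface database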
4. Conclusions
Promising roles of ontology in automated process safety analysis were discussed in this paper. The design process, methodologies and tools used for authoring were briefly discussed. Ontologies were designed for various kinds of process information and for safety analysis results. Based on the ontologies, schemes for information sharing were proposed and implemented in PHASuite. The schemes were illustrated for batch processes using examples of sharing information with other systems or sources, including importing process information from Batch Plus and exporting safety analysis results to PHA documentation tools. This type of automated information sharing can greatly reduce time and effort, and also eliminate errors that may be introduced while manually re-entering the process information. In addition to its information-intensive nature, process safety analysis needs extensive safety-related knowledge about various operations and equipment. This knowledge is also useful for other purposes, such as fault diagnosis and operator training and support. The approaches used here will also be useful for the design and implementation of ontologies to support safety-related knowledge sharing and reuse, which will be further investigated. The ontologies designed here for sharing information and reusing knowledge can serve as a first step towards a standardized methodology for process safety analysis, and can be extended to other types of tools for chemical processes.
5. References
ANSI/ISA-S88.01, 1995, Batch Control - Part 1: Models and Terminology.
Connolly, D. et al., Eds., 2001, DAML+OIL Reference Description, www.w3.org.
Farquhar, A., Fikes, R., Pratt, W. and Rice, J., 1995, Collaborative Ontology Construction for Information Integration, Stanford University.
Gruber, T.R., 1993, International Workshop on Formal Ontology, Italy.
Uschold, M. and Gruninger, M., 1996, The Knowledge Engineering Review, 11(2).
Zhao, C., 2002, Knowledge Engineering Framework for Automated HAZOP Analysis, Ph.D. Thesis, Purdue University.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Operator Support System for Multi-Product Processes: Application to Polyethylene Production
Abstract Process manufacturing is increasingly being driven by market forces and customer needs and perceptions, resulting in necessity of flexible multi-product manufacturing. The increasing automation and tighter quality constraints related to these processes make the operator's job more and more difficult. This makes decision support systems for the operator more important than ever before. Based on the three-level model of skilled operators, this paper proposes a modular Operator Support System (OSS). As the proposed approach extensively uses process data, the OSS is based on a data warehouse designed with the help of enterprise and process modeling tools. For human-computer interaction, front-end tools have been worked out where advanced multivariate statistical models are applied to extract the most informative features. The concept is illustrated by an industrial case study, where the OSS is designed for the monitoring and control of a high-density polyethylene plant.
1. Introduction To meet the growing demands of a global economy, many industries strive to remain competitive by providing multiple grades of a large volume, high value product using the same process equipment. Lost production capacity and grade transitions, are major costs associated with the production (Kosanovich). One such example is the production of different grades of polyethylene. Because each transition to a different product grade must be made efficiently with simultaneous objectives of minimal off-specification product and feed stock waste, and non-violation of safety and environmental constraints, the monitoring and control of such flexible multi-product processes owes much to operator control skills and intelligence. These necessaries the design of an Operator Support System (OSS) that can help the operators in transition control. In Section 2, this paper proposes a modular OSS designed for multi-product processes. While the primary contribution is the prototype implementation of the strategy, certain elements of this approach are novel contributions. The latter include the application of enterprise and process model based data-warehouse and synergistic integration of multivariate statistical models to the operator interfaces. This concept is illustrated in Section 3, where the prototype of the proposed OSS is applied to high-density polyethylene (MDPE, HDPE) plant.
2. Structure of the OSS
The operator has many tasks, such as keeping the process running as closely as possible to a given condition, preserving optimality, detecting failures, and maintaining safety. Figure 1 shows a three-level model of the performance of skilled operators, which is an excellent sketch of human behavior. As this scheme suggests, there is a need for an OSS which presents intuitive and essential information on what is happening, to avoid operator mental overload, and gives suggestions according to the operator's experience and skills (Huang, Lane, Lindheim). Hence, the OSS of flexible processes should be a combination of information systems and mathematical models and algorithms aimed at extracting relevant information (signs, e.g. process trends, and symbols) to "ease" the operators' work. In the following, the main elements of this kind of system are described.
Extensive use of historical process data. As new products are required to be introduced to the market over a short time scale to ensure competitive advantage, the development of process monitoring models in a multi-product manufacturing environment necessitates the use of empirically based techniques as opposed to first-principles models, since phenomenological model development is unrealizable in the time available (Lane). Hence, the mountains of data that computer-controlled plants generate must be used by operator support systems to distinguish normal from abnormal operating conditions and to plan and schedule sequences of operating steps.
Data warehouse. Traditional OSS focuses only on the specific tasks which are performed. In the case of flexible processes, the design of an integrated information system is extremely important. This kind of focus on the process means a stronger focus on the material and information flow through the entire enterprise, where the OSS follows the process through the organization instead of focusing on separate divisions. This also means that most of the information moves horizontally within the organization, thus requiring a higher degree of cooperation and communication across the different divisions in the plant (Lindheim, Mjaavatten). This requires the integration of data taken from various production units into a data warehouse focused on the specialties of the technology. This model-based information system consists of only consistent, non-volatile, preprocessed historical data, and it works independently of the distributed control system (DCS). This archival database is useful to query, group and analyze the data related to the production of different products and different grade transitions.
[Figure 1 (flow diagram): sensory inputs enter at the skill-based level (feature formation leading to automated action patterns, driven by signals); signs trigger the rule-based level (recognition, association and stored rules); symbols and goals drive the knowledge-based level (identification, decision of tasks and planning).]
Figure L Three-level model of skilled human operator.
Enterprise and process modeling. The design of a data warehouse is based on the synchronization of the events related to the different information sources, which requires an understanding of the material, energy and information flows between the units of the plant. For this purpose, not only do first-principles models of the main process units have to be identified, but enterprise modelling (EM) tools also have to be used. The application of EM is extremely important, as this process describes the organization, maps the work processes, and thereby identifies the needs of the OSS. Formulated products (plastics, polymer composites) are generally produced from many ingredients, and a large number of interactions between the components and the processing conditions all have an effect on the final product quality (Lakshminarayanan). When a reliable model is available that is able to estimate the quality of the product, it can be inverted to obtain the suitable operating conditions required for achieving the target product quality (MacGregor). If such a model is incorporated into the OSS, significant economic benefits can be realized. In this study, the important process variables having an effect on the product quality have been selected by cluster analysis based on Self-Organizing Maps.
Front-end tools. Chemical reactors, in particular polymer reactors, are multivariable, exhibit nonlinear characteristics, and often have significant time delays. In this case the operator cannot easily visualize what is happening in the process, so the computer should aid the visualization of the process state and its relation to the quality of the final product. As the final product quality is measured in the quality control laboratory, not only WYSIWYW (What you see is what you want) interfaces between the operator and the console are important, but also WYSIWIS (What you see is what I see) interfaces between the operators (operators at the reactor, at the product formation process, and at the laboratory) are needed to share the information horizontally in the organization.
Process monitoring based on multivariate statistical analysis. Plant operators are skilled in the extraction of real-time patterns from process data and the identification of distinguishing features (see Figure 1). Hence, the correct interpretation of measured process data is essential for the satisfactory execution of many computer-aided, intelligent decision support systems that modern processing plants require. In supervisory control, detection and diagnosis of faults, product quality control, and recovery from large operation deviations, determining the mapping from process trends to operating conditions is the pivotal task. The aim of multivariate statistics based approaches is to reduce the dimensionality of the correlated process data by projecting them down onto a lower-dimensional latent variable space where the operation can be easily visualized. These approaches use the techniques of principal component analysis (PCA) or projection to latent structures (PLS). Besides process performance monitoring, these tools can be used for system identification (MacGregor, Wang), ensuring consistent production, and product design (Moteki). The potential of existing approaches has been limited by their inability to handle more than one recipe/grade. There is, therefore, a need for methodologies from which process representations can be developed which simultaneously handle a range of products, grades and recipes (Lane).
In this paper, PCA and Self-Organizing Maps are applied to the visualization of the high-dimensional state of the production system.
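As a sketch of the statistics involved, the following NumPy fragment computes a two-component PCA projection together with the Hotelling T² and Q (squared prediction error) values for a new observation; the data are random stand-ins for scaled process measurements (T, C2, C6, H2, ...).

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))    # rows: samples, cols: process variables
X -= X.mean(axis=0)                   # mean-center (unit scaling assumed)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                 # retained principal components
P = Vt[:k].T                          # loadings
scores = X @ P                        # 2-D projection shown to the operator
lam = (s[:k] ** 2) / (X.shape[0] - 1) # variances of the retained components

x_new = rng.standard_normal(10)       # a new (scaled) observation
t = x_new @ P
T2 = np.sum(t ** 2 / lam)             # Hotelling T^2: distance inside the plane
residual = x_new - P @ t
Q = residual @ residual               # Q (SPE): distance off the plane
print(f"T2 = {T2:.2f}, Q = {Q:.2f}")  # compare against control limits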
3. Application Example
The proposed approach is applied in a case study: the data-based product quality monitoring and control of a polyethylene plant at Tiszai Vegyi Kombinat (TVK) Ltd. The aim of the OSS is to help the operators to follow the way of the products from polymer powder production through its storage and processing to granulate storage. An interesting problem with the process is that it is required to produce about ten product grades according to market demand. Hence, there is a clear need to minimize the time of changeover, because off-specification product may be produced during a transition. There are other reasons why monitoring the process is advantageous: only a few properties of the product are measured, and sometimes these are not sufficient to define the product quality entirely. For example, if only rheological properties of the polymer are measured (melt index), any variation in end-use behaviour that arises due to variation of the chemical structure (branching, composition, etc.) will not be captured by following only these product properties. In these cases the process data may contain more information about events with special causes that may affect the product quality.
Data warehouse. The technology consists of three locally segregated units: the polymerization unit (reactors, separation and recovery system), the granulation unit and the quality control laboratory. The polymerization unit is controlled by a Honeywell DCS; hence the relevant process and calculated variables, such as reactor temperature and polymer production rate, are collected and stored by a PHD module. The frequency of the laboratory qualification of the polymer powder and granulate is around two hours. Based on these data sources, the prototype of the information system has been implemented with the use of a MySQL SQL server, general office programs (Access), and a professional engineering prototyping language (MATLAB, Database Toolbox).
Enterprise and process modeling. To detect and analyze causal relationships, the laboratory measurements and the operating variables of the reactors and extruders have to be synchronized based on the model of the main process elements (e.g. pipes, silos, flash-tanks). For this purpose, based on the models of the material and information flows, MATLAB scripts were written to collect all the logged events of the whole production line and to arrange and re-calculate the times of these events according to the "life" of the product from the reactor to the final product storage.
Table 1. Number of transitions between the main product grades stored in the database.
[The table lists, for each ordered pair of the twelve main product grades A-L, the number of grade transitions recorded in the database; most pairs show between zero and a few transitions, with up to seven for the most frequent changeover. The full matrix is not cleanly recoverable from the source.]
Figure 2. Grade transition from product A to product E (left: reactor process variables; right: quality-related variables, laboratory measurements).
The melt index (MI) and the density of the polymer (ρ) are monitored by off-line laboratory analysis after the drying of the polymer, which causes about a one-hour time delay. Since it would be useful to know whether the product is good before testing it, the monitoring of the process would help in the early detection of poor-quality product. For this purpose, black-box models have been worked out which estimate the product quality.
Front-end tools. Based on the historical data of the previous half-year, all of the production runs and grade transitions (see Table 1) have been pre-processed by the above-mentioned models. As Figure 2 shows, based on the database of the grade transitions, we have designed special figures that can be used by the operators as patterns of control strategies. Not only have tools for the visualization of time series been developed, but also plots illustrating the safety constraints (see Figure 3a).
Process monitoring based on multivariate statistical analysis. The difficulty of the problem comes from the fact that there are more than ten process variables to consider (e.g. reactor temperature (T), monomer (C2), comonomer (C6) and chain transfer agent (H2) concentrations, etc.). As Figure 3b shows, the proposed OSS includes PCA-based visualization tools in which not only the two-dimensional space of the transformed variables is plotted, but also the Hotelling T² and Q measures used to detect faults.
Figure 3a. Example of a safety constraint plot. Figure 3b. PCA plot of grade transitions from product A to product E.
4. Conclusions
In this paper, the structure of a data-driven OSS has been proposed for the monitoring and control of flexible multi-product processes. As the proposed approach extensively uses process data, the OSS is based on a data warehouse designed with the help of enterprise and process modeling tools. For human-computer interaction, front-end tools have been worked out in which advanced multivariate statistical models are applied to extract the most informative features. The concept is illustrated by an industrial case study, where the OSS is designed for the monitoring and control of a high-density polyethylene plant. The prototype of the information system has been implemented with the use of a MySQL SQL server, general office programs (Excel, Access), and a professional engineering prototyping language (MATLAB, Database Toolbox). Based on half a year of data collection, more than 120 and 80 grade transitions were collected and analyzed with the proposed tools in the two production lines of the process. We hope that, based on the application of these tools, significant economic benefits will be realized in the near future.
5. References
Doymaz, F., Chen, J., Romagnoli, J.A., Palazoglu, A., A Robust Strategy for Real-Time Process Monitoring, Journal of Process Control, 11, 2001, 343-359.
Huang, S-H., Qian, J-X. and Shao, H-H., Human-Machine Cooperative Control for Ethylene Production, Artificial Intelligence in Engineering, 9, 1995, 203-209.
Kosanovich, K.A. and Piovoso, M.J., A Dynamical Supervisor Strategy for Multi-Product Processes, Computers & Chemical Engineering, 21, 1997, 149-154.
Lakshminarayanan, S., Fujii, H., Grosman, B., Dassau, E., Lewin, D.R., New product design via analysis of historical databases, Computers and Chemical Engineering, 24, 2000, 671-676.
Lane, S., Martin, E.B., Kooijmans, R., Morris, A.J., Performance Monitoring of a Multi-Product Semi-batch Process, Journal of Process Control, 11, 2001, 1-11.
Lindheim, C. and Lien, K.M., Operator Support Systems for New Kinds of Process Operation Work, Computers & Chemical Engineering, 21, 1997, S113-S118.
MacGregor, J.F., Kourti, T., Statistical process control of multivariate processes, Control Eng. Practice, 3, 1995, 403-414.
Mjaavatten, A. and Foss, B.A., A modular system for estimation and diagnosis, Computers & Chemical Engineering, 21 (11), 1997, 1203-1218.
Moteki, Y. and Arai, Y., Operation planning and quality design of a polymer process, IFAC DYCORD, 1986, 159-165.
Nishitani, H., Human-Computer Interaction in the New Process Technology, Journal of Process Control, 6, 1996, 111-117.
Wang, X.Z., Data Mining and Knowledge Discovery for Process Monitoring and Control, Springer, 1999.
6. Acknowledgements The authors would like to acknowledge the support of the Cooperative Research Center (VIKKK) (project 2001-II-1A), and funding from the Hungarian Ministry of Education (FKFP-0073/2001) and from the Hungarian Research Fund (OTKA T 037600). Janos Abonyi is grateful for the financial support of the Janos Bolyai Research Fellowship of the Hungarian Academy of Sciences.
Combination of Measurements as Controlled Variables for Self-Optimizing Control Vidar Alstad and Sigurd Skogestad Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), N-7491 Trondheim, Norway, email: [email protected], [email protected]
Abstract A new method for selecting controlled variables (c) as linear combinations of measurements (y) is proposed based on the idea of self-optimizing control. The objective is to find controlled variables such that a constant setpoint policy leads to near-optimal operation in the presence of low-frequency disturbances (d). We propose to combine as many measurements as there are unconstrained degrees of freedom (inputs, u) and major disturbances, such that Δc_opt(d) = 0. To illustrate the ideas a gas-lift allocation example is included. The example shows that the method proposed here gives controlled variables with good self-optimizing properties.
1. Introduction Although not widely acknowledged, controlling the right variables is a key element in overcoming uncertainty in operation. Control systems often consist of several layers in a hierarchical structure, each operating on a different time scale. Typically, layers include scheduling (weeks), site-wide optimization (day), local optimization (hours), supervisory/predictive control (minutes) and regulatory control (seconds). The layers are interconnected through the controlled variables c. Optimal operation for a given disturbance d can be found by solving the following problem:

\min_{u} J(x, u, d)   (1)

subject to
f(x, u, d) = 0
g(x, u, d) \le 0
x \in X, \; u \in U, \; d \in D
where f is the process model, g the inequality constraints, u the independent variables (inputs), d the disturbances, which we cannot affect, and x the states. J is the scalar economic performance metric, and since the economics are primarily decided by steady-state operation, only steady-state models are used in this analysis. Solution of (1) gives the optimal inputs and states, u_opt(d) and x_opt(d) respectively, and also the optimal value
of the measurements, y_opt(d), as a function of d. As shown in Skogestad (2000a) we assume that all optimally active constraints are implemented (active constraint control). Self-optimizing control follows the idea of Morari et al. (1980) and may be summarized as: Self-optimizing control (Skogestad 2000a) is when an acceptable loss can be achieved using constant setpoints for the controlled variables (without the need to re-optimize when disturbances occur). The central issue when searching for the self-optimizing control structure is to decide how best to implement the optimal policy in the presence of uncertainty. This is accomplished by selecting the right set of controlled variables c to be kept at constant setpoints c_s, in spite of disturbances d and implementation error n. The goal is to minimize the loss, L = J(c, d) − J_opt(d), with a constant setpoint strategy, where the loss is the difference between the value of the objective using a constant setpoint policy and the value of the true optimal objective. For a review of self-optimizing control see Skogestad (2000b). Skogestad et al. (1998) propose two methods for selecting controlled variables with good self-optimizing properties based on a Taylor series expansion of the loss function. Candidate controlled variables are not limited to single measurements, as shown by Morud (1995) who, by searching all possible directions of the output space, was able to find a linear combination of the measurements with good self-optimizing properties. Here a much simpler method is proposed.
2. Proposed Method for Selecting Controlled Variables as Linear Combinations of Measurements We show in this section that if we neglect the implementation error in controlling c (e.g. caused by poor control or measurement error), then it is possible, from a linear point of view, to find a linear combination of the available measurements with zero loss ("perfect self-optimizing control"). By eliminating the states x, we may write the measured variables y as a function of the independent variables (degrees of freedom) u and the disturbances d:

y = f_y(u, d)   (2)
In general the set y also includes the independent variables u. The controlled variables c ("primary outputs") are to be selected as combinations of the measured variables ("secondary outputs"):

c = h(y)   (3)
where the generally nonlinear function h is free to choose, except that we assume that the controlled variables are independent and that the number of controlled variables (c's) equals the number of degrees of freedom (u's). We will here consider the case where the function h(y) is linear. We may then write c = h(y) as

\Delta c = H \Delta y   (4)
where the matrix H is free to choose. We assume that the operation is nominally optimal, that is, we have c_s = c_opt(d*) where d* is the nominal disturbance. We assume that there is no implementation error (n = 0), which implies that we will have c = c_s (constant) for all disturbances d. This constant setpoint policy will be optimal (with zero loss) provided the optimal value of c(d) remains constant, that is, c_opt(d) is independent of d. This simple insight may be used to find the optimal linear combination (i.e. find the optimal choice for the matrix H). We consider small changes (disturbances) from the nominal disturbance. Then the change in the optimal value of the measurements is given by

\Delta y_{opt} = y_{opt}(d) - y_{opt}(d^*) = F (d - d^*) = F \Delta d   (5)
where the sensitivity matrix F = \partial y_{opt} / \partial d^T may be obtained numerically by solving the optimization problem (1) for small changes in the disturbance variables d, and from this obtaining u_opt(d) as well as y_opt(d). We assume that u_opt and x_opt are continuous in d in a neighborhood of the nominal point. From (4) the corresponding change in the optimal value of c is \Delta c_{opt} = H \Delta y_{opt}. Now require that \Delta c_{opt} = 0, which gives \Delta c_{opt} = H F \Delta d = 0. This needs to be satisfied for any \Delta d, so we must have

H F = 0   (6)
In other words, we should select H to be in the left null space of F (H \in N(F^T)). We assume that we have n unconstrained degrees of freedom (the lengths of the vectors u and c are n), use m independent measurements when forming c, and have k independent disturbances. We then have that F is an m x k matrix and H is an n x m matrix. By assuming m \ge k and m \ge n and by assuming independent inputs and disturbances, it follows that rank(F) = k. The fundamental theorem of linear algebra (Strang 1988) tells us that the left null space of F, N(F^T), has dimension m - r, where r = rank(F). Since H \in N(F^T) we have that dim(H) = m - k, and by requiring that the number of controlled variables equal the number of inputs we get rank(H) = n:

m - k = n \;\Rightarrow\; m = n + k   (7)

so that #y = #d + #u, i.e. the minimum number of measurements needed is equal to the number of inputs plus the number of disturbances. We then have

Theorem 2.1 Assume we have n unconstrained independent variables u, k independent disturbances d, and m measurements y, of which at least n + k are independent. It is then possible to select measurement combinations

\Delta c = H \Delta y   (8)
such that HF = 0, where F = \partial y_{opt} / \partial d^T. Keeping c constant at its nominal optimal value then gives zero loss when there are small disturbances d. The matrix H is generally not unique. In summary, the main idea is to select the selection matrix H such that \Delta c_{opt} = H \Delta y_{opt} = 0 by using m = n + k independent measurements. If the number of available measurements exceeds the number of inputs and major disturbances, there is some freedom to choose these so as to reduce the implementation error and to maximize the observability of the disturbances in the measurements; see Alstad & Skogestad (2002) for further details.
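From a linear-algebra point of view, constructing H is simply a left null space computation. The following sketch (ours, with invented toy numbers, not the authors' implementation) shows one way to obtain H from a given sensitivity matrix F using SciPy:

```python
import numpy as np
from scipy.linalg import null_space

# F is the (m x k) sensitivity matrix dy_opt/dd, obtained e.g. by re-solving
# the optimization problem (1) for small perturbations of each disturbance.
F = np.array([[1.0, 0.2],     # toy numbers: m = 3 measurements,
              [0.5, 1.0],     #              k = 2 disturbances,
              [0.1, 0.3]])    #              n = m - k = 1 controlled variable

H = null_space(F.T).T          # rows of H span the left null space of F
assert np.allclose(H @ F, 0)   # hence c = H y has zero loss to first order
```

Since the left null space basis is only defined up to an invertible transformation, H is not unique, in agreement with Theorem 2.1.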
3. Example: Gas-Lift Allocation Optimization In many oil/gas fields the production of oil, gas and water is constrained by the processing capacity and other process constraints such as the available flow-line transportation capacity. Wang et al. (2002) point out that the available literature does not provide robust procedures on how to formulate and solve typical optimization problems for such systems. Often, the "optimization" considers the constraints sequentially, or only subproblems are considered (e.g. by not including the transportation system to the processing facility). Dutta-Roy & Kattapuram (1997) considered the effect of including process constraints for a two-well case that shares a common transportation line to the process. They found that failing to include the process constraints (in this case the transportation line) gave a sub-optimal solution of the problem. Here, we focus on how to implement the optimal operation in the presence of low-frequency disturbances. In typical oil/gas producing systems there are large uncertainties (e.g. reservoir properties, models) and few measurements, so methods that can help operate the process optimally when disturbances occur are of great value. In this paper we consider the gas-lift structure in Figure 1 with the data given in Table 2. The model used is a distributed pseudo one-phase flow model (Taitel 2001) assuming black oil compositional PVT behavior (Golan & Whitson 1996). The valves are modeled as one-phase with a linear characteristic. The flow model represents a two-point boundary value problem and the partial differential equations are discretized using orthogonal collocation. The two wells (W1 and W2) are connected to a common transport line (T). We assume that the system is dynamically stable. Gas is injected through valves (CV6 and CV7) to increase the production from the reservoir by reducing the static pressure (head). The operating objective is to maximize the profit, J = \sum_{i=o,g,gi} p_i m_i, where the indices o, g, gi denote oil, gas and injected gas respectively, p_i is the price for phase i and m_i is the mass rate of phase i. We have neglected water in this analysis. The inputs in this case are u = [v_1 v_2 v_3 v_4 v_5 v_6 v_7]^T, where v_i is the valve position of valve i. We assume that the level and pressure of the separator are controlled at their setpoints using CV4 and CV5 respectively. These setpoints cannot be manipulated, thus removing 2 DOF. In typical offshore systems, the ratio of gas and oil (GOR, the ratio of stock-tank gas mass to stock-tank oil mass) from each well is not exactly known, so we assume that the low-frequency disturbance is the gas-oil ratio (d = [GOR_1 GOR_2]^T [kg/kg]) in the reservoir, where the GOR is given at reservoir properties. The available measurements are the pressures upstream of the valves of the wells (P_v1 and P_v2) and the injection gas mass rates (m_gi,1 and m_gi,2). It is assumed that there is an upper limit on the gas processing capacity in the process
due to compressor limitations in the process. The optimally active constraints (for all disturbances) are [m_g,tot v_1 v_2 v_3], so we have 7 − 4 − 2 = 1 unconstrained degree of freedom. Since it is optimal to control the total gas mass flow at the constraint, we can reformulate the objective to consider only the cost of injecting the gas into the wells (J = \sum_{i=o,gi} p_i m_i). In this case we have assumed that p_o = 0.17 [$/kg] and p_gi = −0.05 [$/kg], corresponding to an oil price of $20 per barrel. The cost of recycling gas in the system has been assumed to be half the sale price of natural gas, which was assumed to be 0.1 $/Sm³. Following the procedure in Section 2 we have that m = n + k = 1 + 2 = 3, so we need three measurements. We select P_v1, P_v2 and m_gi,1 as measurements. The optimal sensitivity matrix F is calculated by imposing the above constraints, and requiring HF = 0 results in the controlled variable c_uc = Hy = [0.76 −0.65 0.09][P_v1 P_v2 m_gi,1]^T. The loss is calculated for several structures and is given in Table 1. We see that controlling c = c_uc gives good self-optimizing properties, with the lowest average and worst-case loss. A constant setpoint policy for the other controlled variables gives a higher loss.
Table 1. Loss for the alternative controlled variables for the gas optimization case.

Rank  c        Loss (in million $/year)
               GOR_1: 0.03→0.06   GOR_2: 0.10→0.13   GOR_1: 0.03→0.06 and GOR_2: 0.10→0.13   Average
1     c_uc     0.0                0.0                0.16                                     0.05
2     P_v1     1.5                1.0                1.9                                      1.5
3     m_gi,2   2.0                4.5                0.5                                      2.3
4     m_gi,1   0.8                1.4                5.3                                      2.5
5     P_v2     3.2                4.1                2.7                                      3.3
Figure 1. Figure of the well network.

Table 2. Data for the gas-lift allocation example.

Parameter   Value     Unit        Comment
L_W1,W2     1500      m           Length well 1 and 2
D_W1,W2     0.12      m           Diameter well 1 and 2
L_T         300       m           Length transportation line
D_T         0.25      m           Diameter transportation line
P_res,1     150       bara        Pressure reservoir well 1
P_res,2     155       bara        Pressure reservoir well 2
PI_res,1    1E-7      m³/(s Pa)   Production index well 1
PI_res,2    0.98E-7   m³/(s Pa)   Production index well 2
P_sep       50        bara        Pressure separator
ρ_1         750       kg/m³       Black oil density reservoir 1
ρ_2         800       kg/m³       Black oil density reservoir 2
M_g         20        kg/kmole    Molecular weight gas
GOR_1^0     0.03      kg/kg       Nominal gas-oil ratio
GOR_2^0     0.10      kg/kg       Nominal gas-oil ratio
m_g,tot     15        kg/s        Maximum gas capacity
4. Conclusion We have derived a new method for selecting controlled variables as linear combinations of the available measurements that, from a linear point of view, have perfect self-optimizing properties if we neglect the implementation error. The idea is to calculate the optimal sensitivity function (\Delta y_{opt} = F \Delta d) and select controlled variables as linear combinations of the measurements, c = Hy, such that HF = 0. The method has been illustrated on a gas-lift allocation example. The example illustrates that in a constant setpoint control structure, selecting the right controlled variables is of major importance.
5. References
Alstad, V. & Skogestad, S. (2002), 'Combinations of measurements as controlled variables; application to a Petlyuk distillation column', Submitted to ADCHEM 2003.
Dutta-Roy, K. & Kattapuram, J. (1997), A new approach to gas-lift allocation optimization, in 'Paper SPE 38333 presented at the 1997 SPE Western Regional Meeting, Long Beach, California'.
Golan, M. & Whitson, C.H. (1996), Well Performance, 2nd edn, Tapir.
Morari, M., Stephanopoulos, G. & Arkun, Y. (1980), 'Studies in the synthesis of control structures for chemical processes, part I: Formulation of the problem, process decomposition and the classification of the controller task, analysis of the optimizing control structures', AIChE Journal 26(2), 220-232.
Morud, J. (1995), Studies on the Dynamics and Operation of Integrated Plants, PhD thesis, Norwegian Institute of Technology, Trondheim.
Skogestad, S. (2000a), 'Plantwide control: the search for the optimal control structure', J. Proc. Control 10, 487-507.
Skogestad, S. (2000b), 'Self-optimizing control: the missing link between steady-state optimization and control', Comp. Chem. Engng. 24, 569-575.
Skogestad, S., Halvorsen, I. & Morud, J. (1998), 'Self-optimizing control: The basic idea and Taylor series analysis', in AIChE Annual Meeting, Miami, FL.
Strang, G. (1988), Linear Algebra and its Applications, 3rd edn, Harcourt Brace & Company.
Taitel, Y. (2001), Multiphase Flow Modeling; Fundamentals and Application to Oil Producing Systems, Department of Fluid Mechanics and Heat Transfer, Tel Aviv University. Course held at the Norwegian University of Science and Technology, September 2001.
Wang, P., Litvak, M. & Aziz, K. (2002), Optimization of production from mature fields, in 'Proceedings of the 17th World Petroleum Congress', Rio de Janeiro, Brazil.
6. Acknowledgements Financial support from the Research Council of Norway, ABB and Norsk Hydro is gratefully acknowledged.
Integrating Budgeting Models into APS Systems in Batch Chemical Industries Mariana Badell, Javier Romero and Luis Puigjaner Chem. Engng. Dept, UPC, E.T.S.E.I.B., Diagonal 647, E-08028 Barcelona, Spain. e-mails: [javi.romero, mariana.badell, luis.puigjaner]@upc.es
Abstract This work proposes a new conceptual approach to be included in enterprise management systems. It addresses the simultaneous integration of the supply chain outflows with scheduling and budgeting in short-term planning in batch chemical process industries. A cash flow and budgeting model coupled with an advanced planning and scheduling (APS) procedure is the key of the advance. The model development made in this paper and the results suggest that a new conceptual approach in enterprise management systems, consisting of the integration of the enterprise finance models with the company operations model, is a must to improve overall earnings.
1. Introduction The new economy sets more challenges to overcome in industry, especially in batch chemical processes. Here the inherent flexibility of batch processes is to be exploited to obtain maximum enterprise-wide earnings. This flexibility translates into complexity for cash flow management due to the large product portfolio and the assortment of material resources, equipment, manpower and so on. Hence, in this work we propose that the financial level take advantage of the scheduling and planning level to improve the overall enterprise earnings. Accounting systems, which have been used up until today, are insufficient and will not stand the contest from the increasingly efficient capital market. Consequently the value added by transactional-based enterprise systems and their metrics is in doubt. As a first-aid answer, trends give more weight to treasury and budget than to accounting and planning. Indeed, the reinforcement of cash flow models is a way to put the investors in the focus and take into account the factors that affect them. The majority of companies cannot drive cash flows, matching them in the best manner with the operations and other functional decision tasks that provoke them, due to the lack of integrated cross-functional links. This means that almost no company has an automatic solution equivalent to an MRP for the resource "liquidity". If companies cannot drive liquidity they cannot control it, insolvency being the first step on the bankruptcy path of enterprises. Hence, it is of the utmost importance to create an integrated tool for simultaneous optimal solutions.
2. Financial Problem Formulation and Previous Work System integration has received considerable attention but lacks prediction, which reflects the same lack of theoretical background as happened with MRP systems in the sixties. Today the lack of adequate enterprise computer-aided systems capable of optimally managing the working capital forces CFOs to make decisions using out-of-date, estimated
or anecdotal information. It is like driving by looking through the rear-view mirror. In that position it is not possible to see where the entity is going, only where it has already been, not knowing whether damage was provoked. The common way to begin a budget is to ask about the sales in order to place the cash inflows. That is the difference with our approach: a simultaneous solution of the production demanded is determined in unison, following the guidelines and constraints of the CFO. A review of the previous theoretical works in the literature reveals that, in the area of deterministic models of cash management, most were developed focusing on the individual financial decision types, while on the stochastic side of cash management models two basic approaches were developed. Baumol's model (1952) had an inventory approach assuming certainty. On the contrary, Miller and Orr (1966) assumed uncertainty. In the sixties linear programming was introduced to the area of finance, allowing the consideration of intertemporal aspects. Orgler (1969) used an unequal-period LP model to capture the day-to-day aspect of the cash management problem, minimising the budget cost over the planning horizon subject to constraints involving decision variables. Commercial off-the-shelf financial budgeting software uses toolboxes to add forecasts and other facilities, but no budgeting model functionally integrated to work on line at enterprise level was found in the literature or in the market.
3. The Integrated Model The proposed framework is shown through a specific case study. This case study consists of a batch specialty chemical plant with two different batch reactors. Here, each production recipe basically consists of the reaction phase. Hence, raw materials are assumed to be transferred from stock to the reactor, where several substances react, and, at the end of the reaction phase, products are directly transferred to lorries to be transported to different customers. The plant product portfolio is assumed to be around 30 different products using up to 10 different raw substances. Production times are assumed to range from 3 to 30 hours. Product switch-over basically depends on the nature of the substances involved in the preceding and following batch. Cleaning times range from 0 up to 6 hours, and some sequences are not permitted.
Scheduling & Planning Model The problem solved covers a 13-week horizon. Equations 1 to 23 show the scheduling and planning model. The first week is planned with known product demands and the others with known (regular) and estimated (seasonal) demands. Here, orders to be produced are scheduled considering set-up times (cleaning times). In this way, the sequence of orders satisfying customer requirements, and the equipment-unit-to-order assignment that minimises the overall required cleaning time, is calculated for the first week. For the remaining weeks, sudden demands are known just one week in advance, but as their overall number can be estimated, the model leaves enough idle time to accommodate these 'unknowns'. Besides, it is assumed that sudden orders may be accepted as a function of the actual plant capacity. In this way, the model is to be rerun every week as forecasts develop into real orders. The weeks after the first one are not exact. Indeed, they probably won't be executed as calculated, but their planning is useful to know whether there will be enough room to accommodate coming orders. Here, no exact sequence is calculated, and so no set-up times are considered. The amounts of raw materials and final products stored at every week-period are also monitored as a function of the amount stored in the preceding period, the amount bought or produced and the amount consumed or
sold in that period. With this, the model is able to decide when to ask for raw materials, considering a minimum possible order size or a relationship between unitary raw material cost and order size.
TP_1 = \sum_o \sum_p TOP_p \, x_{p,o,e} + \sum_o CT_o   (1)

TP_{1,e} \le 168  (week production time)   (2)

CT_o = \sum_{p,p'} SCT_{p,p'} \, x_{p,o,e} \, x_{p',o-1,e}, \quad o > 1   (3)

CT_o = 0  (initial required cleaning time), \quad o = 1   (4)

\sum_p x_{p,o,e} \le \sum_p x_{p,o-1,e}   (5)

\sum_p x_{p,o,e} \le 1   (6)

\sum_{o,e} x_{p,o,e} = 1 \ \text{for each ordered product } p   (7)

n_{p,o,e} \le M \, x_{p,o,e}   (8)

x_{p,o,e_1} = 0 \ \text{if product } p \text{ is of type 2}   (9)

x_{p,o,e_2} = 0 \ \text{and} \ x_{p,o,e_3} = 0 \ \text{if product } p \text{ is of type 1}   (10)

TP_k = \sum_p TOP_p \, nw_{p,k,e}, \quad k > 1   (11)

TP_{k,e} \le (168 - \theta_k)  (week production time), \quad k > 1   (12)

w_{p,k,e} \le nw_{p,k,e}, \quad k > 1   (13)

M \, w_{p,k,e} \ge nw_{p,k,e}, \quad k > 1   (14)

w_{p,k,e_1} = 0 \ \text{if product } p \text{ is of type R2}, \quad k > 1   (15)

w_{p,k,e_2} = 0 \ \text{and} \ w_{p,k,e_3} = 0 \ \text{if product } p \text{ is of type R1}, \quad k > 1   (16)

nw_{p,k=1,e} = \sum_o x_{p,o,e}   (17)

satisfaction_{i, k = D_i + d_i} = 1   (18)

P\_Stock_{p,k} = P\_Stock_{p,k-1} + \sum_e qp_e \, nw_{p,k,e} - \sum_{i \mid prod_i = p} qp_i \, satisfaction_i   (19)

P\_Stock_{p,k} \ge 0   (20)

R\_Stock_{r,k} = R\_Stock_{r,k-1} - \sum_{p \mid R_p = r} \mu_{r,p} \, nw_{p,k,e} + qb_{r,k-1}   (21)

R\_Stock_{r,k} \ge 0   (22)
Budgeting Model Short-term budgeting decisions can be taken every week-period. Weekly production expenses consider an initial stock of raw materials and products. An initial working capital cash is considered, beneath which a short-term loan must be requested. The minimum net cash flow allowed is determined by the CFO considering its variability.
Production liabilities incurred in every week-period are assumed to be due to the purchase of raw materials, and exogenous production cash flows incurred in every week to the sale of products. A short-term financing source is represented by a constrained open line of credit. Under agreement with banks, loans can be obtained at the beginning of any period and are due after one year at a monthly interest rate (e.g. 5%). This interest rate might be a function of the minimum cash. The portfolio of marketable securities held by the firm at the beginning of the first period includes several sets of securities with known face values in monetary units (mu) and maturity week-period k' incurred at month-period k. All marketable securities can be sold prior to maturity at a discount or loss for the firm. Introducing these equations into the budgeting model gives an integrated model for production scheduling and planning and enterprise budgeting.

WCash_k \ge Min\_Cash   (24)

R\_Liability_k = \sum_r qb_{r,k} \cdot CostRaw_r   (25)

Exogenous\_cash_k = \sum_{i \mid D_i = k} satisfaction_i \, qp_i \, SaleP_i   (26)

Debt_k \le Max\_debt   (27)

Debt_k = Debt_{k-1} + Borrow_k - Out\_Debt_k + F \cdot Debt_{k-1}   (28)

MS\_net\_cashflow_k = - \sum_{k' > k} (MSinv_{k,k'} - MSsale_{k,k'}) + \sum_{k'=1}^{k-1} (r_{k',k} \, MSinv_{k',k} - e_{k',k} \, MSsale_{k',k})

With this, the cash balance is as follows:

Exogenous\_cash_k - R\_Liability_k + Borrow_k - Out\_Debt_k + MS\_net\_Cashflow_k + WCash_{k-1} + others_k = WCash_k   (29)

Objective function: for m = 3, 6, 9 and 12, cash is withdrawn from the system in the form of shareholder dividends. The objective function consists of maximising these dividends as follows:

others_{m = 3, 6, 9, 12} = - share\_div_I, \quad I = 1, 2, 3, 4

O.F. = \max \sum_I a_I \cdot share\_div_I   (30)
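The cash-balance recursion (29) and the dividend objective (30) fit naturally into a small LP. The sketch below is our illustration with invented figures, not the paper's model: marketable securities and debt repayments are omitted for brevity, and all names are hypothetical.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

K = range(1, 14)                                   # 13 week-periods
inflow = {k: 30000 if k % 4 == 0 else 8000 for k in K}   # exogenous cash
outflow = {k: 9000 for k in K}                     # raw-material liabilities
min_cash, max_debt, r = 20000, 50000, 0.05 / 4     # weekly rate on credit line

b = LpProblem("budget", LpMaximize)
cash = LpVariable.dicts("WCash", K, lowBound=min_cash)            # Eq. (24)
borrow = LpVariable.dicts("Borrow", K, lowBound=0)
debt = LpVariable.dicts("Debt", K, lowBound=0, upBound=max_debt)  # Eq. (27)
div = LpVariable.dicts("dividend", [4, 8, 12], lowBound=0)

prev_cash, prev_debt = min_cash, 0
for k in K:
    pay = div[k] if k in (4, 8, 12) else 0
    # cash balance as in Eq. (29); securities and repayments left out
    b += cash[k] == prev_cash + inflow[k] - outflow[k] + borrow[k] \
                    - r * prev_debt - pay
    b += debt[k] == prev_debt + borrow[k]                         # cf. Eq. (28)
    prev_cash, prev_debt = cash[k], debt[k]

b += lpSum(div.values())                           # maximise dividends, Eq. (30)
b.solve()
```

Coupling the production variables of the scheduling sketch to inflow and outflow here is exactly the integration step the paper advocates.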
4. Results of Integration of Models The model is run for a plant product portfolio of 30 different products using up to 10 different raw substances. Production times are assumed to range from 3 to 30 hours. Product switch-over basically depends on the nature of the substances involved in the preceding and following batch. Cleaning times range from 0 up to 6 hours, and some sequences are not permitted.
Case Study Planning results The proposed model has been implemented in GAMS/CPLEX and solved on a 1 GHz machine. The optimal solution is achieved in 190 CPU seconds. Figure 1-a shows the stock profiles of raw materials and final products during the three-month period. Figure 1-b shows a diagram with the number of batches of each product to be produced in each week.
Figure 1 (a and b). Planning results when solving the planning & scheduling model.
Case study Budget results For the first run of the budgeting model, it is assumed that no marketable securities are invested at the beginning of the period, with an initial cash equal to the minimum cash (20000 u.), an open line of credit at an annual interest of 10% and a set of marketable securities at 5% annual interest. In the 12-month horizon it is assumed that cash is withdrawn for dividend emission at periods (months) 4, 8 and 12. With this, the proposed LP problem is solved and the results give Figure 2, where the overall marketable securities and cash borrowed during the first 3 months are shown. The cash withdrawn in the year is 185.588 u.
Figure 2. Budgeting results when solving the sequential procedure.
Integrated Model. Figures 3 and 4 show the results of the integration of the models.
Figure 3. Planning results when solving the integrated model.
Figure 4. Budgeting results when solving the integrated model. The overall cash withdrawn during the year, when using the integrated framework, is 203.196 u.
5. Conclusions The concept behind improved enterprise resource planning systems is the overall integration of the whole enterprise functionality through financial links. This framework is able to support optimal schedule budgets in real time. A difference of 9.5% in earnings/year is achieved in the case study when the integrated approach is used. Armed with up-to-the-minute information on the overall budget status, costs and schedules, allocation of resources, reschedules and cost of capital, the enterprise is ready to respond efficiently to events as they arise. The financial support from the Generalitat de Catalunya (CIRIT), EC-VIPNET (G1RDCT2000-003181) and GCO (BRPRCT989005) projects is acknowledged.
6. References
Badell, M. and Puigjaner, L., "Discover a Powerful Tool for Scheduling in ERM Systems", Hydrocarbon Processing, 80, 3, 160, 2001.
Badell, M. and Puigjaner, L., "A New Conceptual Approach for ERM Systems", FOCAPO AIChE Symposium Series No. 320, Vol. 94, pp. 217-223, 1998.
Baumol, W.J., "The Transactions Demand for Cash: An Inventory Theoretic Approach", Quarterly Journal of Economics, Vol. 66, No. 4, 1952, 545-556.
Miller, M.H. and Orr, R., "A Model of the Demand for Money by Firms", The Quarterly Journal of Economics, Vol. 80, No. 3, 1966, 413-435.
Orgler, Y.E., "An Unequal-Period Model for Cash Management Decisions", Management Science, Vol. 20, No. 10, October 1970, 1350-1363.
Srinivasan, V., 1986, "Deterministic Cash Flow Model", Omega, 14, 2, 145-166.
A System for Support and Training of Personnel Working in the Electrochemical Treatment of Metallic Surfaces Athanassios F. Batzias and Fragiskos A. Batzias Laboratory of Simulation of Industrial Processes, Industrial Management Dept. University of Piraeus, Karaoli & Dimitriou 80, Piraeus 185 34, Greece
Abstract Fuzzy multicriteria analysis is used for decision making in a network of procedures that describes a complete electrochemical finishing plant. The decision alternatives result by means of fault tree analysis and neuro-fuzzy reasoning; the criteria are categorized as objective and subjective. The training of the technical staff is achieved in a cooperative environment by playing with 'what if' scenarios based on real and simulated data.
1. Introduction For many reasons, productivity, safety, reliability, liability, and quality conditions require a significant degree of skill from the technical personnel. In order to face lack of skill and/or unexpected events in operation and product quality problems, special cooperative procedures in the domain of Computer Aided Process Engineering (CAPE) should be developed. The basic idea is to change the operational space by means of a Knowledge Based System (KBS), allowing (a) a human operator to interact with the process via less knowledge-intensive rules and (b) the process/quality/knowledge engineers to select the relevant data and to design/develop/implement a neuro-fuzzy mechanism producing these rules in cooperation with the operator. Such cooperation is indispensable when several persons from different human user classes are involved within the same computerised system (Johannsen 1997; Johannsen and Averukh 1993). This work deals with a KBS which can provide rules and guidance to technical personnel working in the electrochemical treatment of metallic surfaces, creating also a cooperative environment between members of the staff that belong to different hierarchical levels. The same system creates/enriches a local knowledge base and performs Fault Tree Analysis (FTA) when a critical defect has been detected, increasing traceability according to the ISO 9000 series of standards. The KBS consists of the discrete sub-systems CIS, EIS and TIS, and the envelope system FDS. The CIS (Chemical Interactive Sub-system) provides rules to the operator concerning the conditions he must keep in order to obtain the product within an allowed region as regards defects, according to specifications. The CIS is suitable for a chemical process that takes place in a homogeneous bath and is based on a neuro-fuzzy classification algorithm. The EIS (Electrochemical Interactive Sub-system) provides rules to the operator concerning the conditions he must keep in order to obtain both a defect-free surface and quality according to the specifications set by the client or the market, with minimal cost. The EIS is suitable for an electrochemical process that takes place in a non-homogeneous bath and
is based on a neuro-fuzzy approximation algorithm performing in combination with an external heuristic procedure. The TIS (Topological Interactive Sub-system), which is hierarchically under the EIS, provides prohibitive rules and offers consultation to the operator concerning the arrangement of jigs and racks within the tank of electrochemical processing, to avoid defects and ensure the desired quality. This is a very difficult task, described in technical manuals as being rather an art than a technology, demanding continuous feedback from the operators and the quality control laboratory. The FDS (Fault Diagnosis System) is an envelope system which contains the above-described sub-systems and the necessary procedures for complete FTA.
2. Methodology The methodology followed is heavily based on fuzzy multicriteria analysis, performed by the technical personnel twice in the computer-aided integration of procedures described subsequently in this chapter and depicted in Figure 1 (21 steps interconnected with 8 decision nodes). In step 2, application of the neuro-fuzzy network predicts product quality (output) from treatment conditions (input); subsequently, a given output, in the form of a vector of accepted interval values defined by the client or the market demand, determines input vectors in clusters by means of the input-output mapping which has been constructed in the learning section of the neuro-fuzzy network; last, the clusters are filtered through the minimal accepted width of values of the input variables, and the remaining ones form a set of alternatives among which the best is chosen by means of multicriteria analysis, applying a fuzzy modification of PROMETHEE (Geldermann et al. 2000) with the following criteria: fixed cost, f1; energy cost, f2; physical productivity (or treatment rate, dependent mainly on current density, with consequences on surface structure and defect appearance), f3; rest variable or operating cost, f4; environmental impact, f5; contribution to inter-lot convenience, dependent mainly on the number, size, quality requirements, and priority of the lots programmed, f6; contribution to intra-lot convenience, dependent mainly on the ranges of the treatment control variables (voltage, current density, anodizing time, electrochemical efficiency, concentration, temperature) allowed by the production specifications, in relation to the production facilities available, f7. In step 10, the kind of defect observed is put as the 'top event' at the root of a fault tree, where each cause-effect link is quantified by a fuzzy index of significance of this causal relation, given by experts on the basis of (i) the relative frequency of occurrence in the past and (ii) relevance to scientific theory and experimental data; consequently, the leaves of the tree are the suggested ultimate causes, which must be examined experimentally in order to find out the real cause and subsequently make the proper remedial proposal. The order of the tests to be used for this experimentation is determined by applying a fuzzy multicriteria method like the one mentioned above, with the following criteria: test significance, supported by FTA, g1; equipment availability, g2; reliability, based on analysis of variance (ANOVA) of experimental results obtained under similar treatment conditions in the past, g3; cost, g4; ratio of time required to time available due to production constraints/commitments, g5; expected contribution to explainability, i.e. relation to the corresponding scientific background, g6. It is worth noting that criteria f1, f2, f4, g2, g3, g4 are rather objective, while criteria f3, f5, f6, f7, g1, g5, g6 are rather subjective. The row elements of the multicriteria matrix
used as input, which correspond to the subjective criteria, are evaluated by six members of the technical staff (2 engineers/managers, 2 scientists working in the quality laboratory, 2 operators), according to a 3-stage DELPHI method incorporated within the integrated KBS. More specifically, the 2 operators evaluate (assign grades to) the elements corresponding to criteria f6, f7, g5, while the other 4 members of the staff evaluate the elements corresponding to the remaining subjective criteria f3, f5, g1, g6. All 6 members of the staff evaluate the elements of the weight vector used as input. The whole KBS can be used for both support and training, even by an isolated operator, as all variables and parameter values are provided by the System in real time, depicting current conditions. Similarly, the operator can 'play' with one of the past representative cases saved in the System. During a training session, the trainee activates the KBS by introducing values/choices/situations and receives the System's response; the steps where the initiative belongs to the trainee or the System are symbolized with t or s, respectively.
1 t. Input of (i) product requirements/specifications set by the client or the market and (ii) the raw material or semi-finished product quality assessment which took place during the previous stages of production/treatment.
2 t. Application of the CIS or the EIS if the process is chemical or electrochemical, respectively, to determine the best conditions for production/treatment by means of fuzzy multicriteria analysis.
3 t. Application of the TIS, which is necessary in the case of the EIS.
4 s. Chemical or electrochemical treatment, registration of (i) changes in conditions and (ii) observations of any unexpected event or failure occurring during processing.
5 s. Visual inspection of the product, accompanied by simple measurements in situ.
6 t. Post-treatment remedial actions for eliminating recognizable light defects.
7 s. Separation of defected articles.
8 s. Sampling by the Quality Control Committee (QCC).
9 s. Offline product quality control in the Laboratory.
10 t. Application of (i) FTA to suggest the ultimate cause of the observed defect and (ii) fuzzy multicriteria choice of the best experimental route among candidate alternatives, to confirm or reject the suggestion.
11 s. Realization of confirmation testing via the chosen experimental route.
12 t. Rejection of defected articles by the QCC; decision on recycling or disposal.
13 t. Realization of special surface treatment to bring the articles back to their initial condition, according to the remedial directive issued by the Laboratory.
14 t. Implementation of special treatment chosen among recommended practices, e.g. local plating/anodizing, on condition that it is acceptable to the client.
15 s. Transient storing of additionally treated defective articles till the issue of the Laboratory testing results.
16 t. Sampling among apparently good items according to standard or recommended or agreed practices and dispatch to the Laboratory for offline testing.
17 s. Transient storing of apparently good items till Laboratory testing.
18 s. Knowledge processing for support and training.
[Flow chart symbols: Activity Node; Decision Node; Initiation/Termination.]
P. Are there defected articles after remedy? Q. Is the suggestion confirmed? R. Is there another alternative experimental route? S. Is surface restoration feasible? T. Is oxide stripping feasible? U. Are the quality testing results acceptable? W. Was the initial fault caused by human mistake? Z. Are there defective articles?
Figure 1: Flow Chart of procedures constituting a complete process in an anodizing/electroplating plant, according to the 21-step CAPE plan described in the text (t: trainee's initiative and demand; s: System's response and supply).
19 t. Sensitivity analysis performed for changing weight values of the subjective criteria in the multicriteria input vector.
20 t. Sensitivity analysis performed for changing parameter values of the generalized function in the special multicriteria method adopted.
21 s. Storing of defected articles.
3. Implementation and Specimen Results The methodology described in the previous chapter has been applied successfully by the authors in the case of a complete aluminium anodizing plant, which consists of the following sequential processes: cleaning, etching, polishing, electrolytic brightening, sulphuric acid anodizing, dyeing, sealing, finishing. A specimen run concerning the application of step 2 (see Figure 1) in the process of sulphuric acid anodizing of aluminium is presented herein, based on data provided by the Hellenic Aerospace Industry S.A. The input vector consists of the following six variables: voltage, current density, anodizing time, electrochemical efficiency, electrolyte concentration, and bath temperature, which take values within the ranges (9 - 21 V), (0.8 - 7.6 A/dm²), (10 - 80 min), (80 - 90 %), (5 - 30 g H2SO4/L), and (10 - 28 °C), respectively. The output vector consists of two variables, thickness of the oxide and porosity of the anodic layer, which take the values 12±0.5 μm and 11±1% respectively, as set by the client in the case of the specimen run under consideration. The input-output mapping obtained after learning gave 252 six-to-two input-output combinations satisfying the specifications set by the client. These combinations were clustered into 10 groups, which were reduced to 5 after filtering, constituting the set of alternatives Ai (i = 1, 2, ..., 5). The criteria weight vector used in the fuzzy PROMETHEE was f1: 18, 2, 1; f2: 10, 1, 2; f3: 15, 1, 3; f4: 8, 1, 1; f5: 6, 1, 3; f6: 16, 1, 4; f7: 27, 2, 4, where each triadic fuzzy number appears in the usual L,R form. The generalized preference function used was the linear one, with two parameters: q for defining the indifference region (lower threshold) and p for defining the end of linearity in preference change (upper threshold). The results shown in Figure 2 are (a) at a high sensitivity level with low p, q values (p = 0.50, q = 0.25) and (b) at a low sensitivity level with medium p, q values (p = 1.0, q = 0.50). These diagrams reveal the possibility for trainees/operators to influence the choice of the best alternative by changing the weight values corresponding to the subjective criteria f3, f5, f6, f7; this possibility is significant only at the high sensitivity level. On the contrary, the possibility for the rest of the members of the staff, who participate in training and operating, is expressed more through monitoring the parameter values of the generalized preference function. In this way, all participants, although belonging to different technological cultures and hierarchical levels, learn to cooperate closely during operation and training, as they determine together the conditions of real or simulated production via the KBS. One of the problems that may appear in implementing the present training and support System is that sometimes technical personnel belonging to different hierarchical levels, with different cultures, use linguistic terms with varying contextual meaning, e.g. in describing/evaluating defects, like the ones presented by Batzias and Batzias (2002).
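The crisp skeleton of PROMETHEE II with the linear generalized preference function is easy to state in code. The sketch below is our illustration of the q and p thresholds discussed above; it deliberately omits the fuzzy extension of Geldermann et al. (2000), and all names are hypothetical.

```python
import numpy as np

def linear_preference(d, q, p):
    """Linear generalized preference: 0 below q, 1 above p, linear between."""
    return np.clip((d - q) / (p - q), 0.0, 1.0)

def promethee_net_flows(scores, weights, q=0.25, p=0.50):
    """Crisp PROMETHEE II net outranking flows.

    scores:  (n_alternatives, n_criteria), larger = better on every criterion
    weights: (n_criteria,), normalised to sum to 1
    """
    n = len(scores)
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]
            pref_ab = np.sum(weights * linear_preference(d, q, p))
            phi[a] += pref_ab / (n - 1)   # contribution to positive flow
            phi[b] -= pref_ab / (n - 1)   # symmetric negative flow
    return phi                             # rank alternatives A1..A5 by phi
```

Lowering q and p (as in case (a) above) makes small score differences count as full preferences, which is exactly why the ranking becomes more sensitive to the subjective weights.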
Figure 2. Results at (a) high sensitivity level with low p, q values and (b) low sensitivity level with medium p, q values; the contribution of the trainee/operator to the increase in resolution for distinguishing the proposed alternative may prove decisive.
A solution to this problem might be the creation of an ontological communication protocol. A similar technique has been suggested by Batzias and Marcoulaki (2002) for the creation of local knowledge bases in the fields of anodizing and electroplating, but we have not yet incorporated this technique into the System described herein.
4. Conclusions CAPE, in the form of a network of procedures/decisions, can effectively include the human factor to achieve both technical support and staff training. We designed/developed/implemented a Knowledge Based System (KBS) which uses, inter alia, fuzzy multicriteria analysis to determine (i) optimal conditions for the chemical or electrochemical surface treatment of metals and (ii) preferable experimental routes for investigating the causes of failure in production. The specimen run of the corresponding software presented herein, based on data supplied by the aluminium anodizing department of a large industrial firm, shows how members of the technical personnel, belonging to different technological cultures and hierarchical levels, can learn cooperatively throughout a computer-integrated production system.
5. References
Batzias, A.F. and Batzias, F.A., 2002, Computer-Aided Chem. Engineering, 10, 433.
Batzias, F.A. and Marcoulaki, E.C., 2002, Computer-Aided Chem. Engineering, 10, 829.
Geldermann, J., Spengler, T. and Rentz, O., 2000, Fuzzy Sets and Systems, 115, 45.
Johannsen, G., 1997, Control Eng. Practice, 5(3), 349.
Johannsen, G. and Averukh, E.A., 1993, Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, Le Touquet, 4, 397.
6. Acknowledgements Aluminium anodizing data supply from the Hell. Aerospace Ind. S.A. and financial support provided by the Research Centre of the Piraeus Univ. are kindly acknowledged.
Sensor-Placement for Dynamic Processes C. Benqlilou¹, M.J. Bagajewicz², A. Espuña¹ and L. Puigjaner¹* ¹Universitat Politecnica de Catalunya, Chemical Engineering Department, E.T.S.E.I.B., Diagonal 647, E-08028 Barcelona (Spain), Tel.: +34-93-401-6733 / 6678, Fax: +34-93-401-0979. ²University of Oklahoma, 100 E. Boyd T-335, Norman, OK 73019, USA. On sabbatical stay at E.T.S.E.I.B. *Corresponding author.
Abstract This article presents a methodology to design instrumentation networks for dynamic systems where Kalman filtering is the chosen monitoring technique. Performance goals for Kalman filtering are discussed and minimum cost networks are obtained.
1. Introduction In general, the optimal location of measurement points should take into account aspects that improve plant performance such as process variable accuracy, process reliability, resilience, and gross-error detectability (Bagajewicz, 2000). These performance indicators (e.g. estimation precision, reliability) of a sensor network represent the constraints in the sensor placement design problem. Among different monitoring techniques, like Kalman filters and various Data Reconciliation schemes, the Kalman filter presents good variance reduction, estimation of process variables and better tracking of dynamic changes of the process (Benqlilou et al., 2002). This performance, however, varies with the position and quality (variance) of the sensors. This paper focuses on the determination of the optimal sensor placement for the use of Kalman filtering.
2. Kalman Filter Algorithm A linear, discrete, state-space model of a process is usually described by the following equations:

x_i = A x_{i-1} + B u_{i-1} + v_{i-1}   (1)

y_i = C x_i + w_i   (2)

where x is the n_x-dimensional state vector at instant i (representing time instant t = iT), T is the sampling period, u is the n_u-dimensional known input vector, v is the (unknown) zero-mean white process noise with covariance Q = E[v_i v_i^T], and w is the unknown zero-mean white measurement noise with known covariance R = E[w_i w_i^T].
In this work it is assumed that the coefficients of the A, B and C matrices are known at all times and do not change with time; that is, the resulting model is a Linear Time-Invariant (LTI) system model. Given a set of measurements (y_i) it is desired to obtain
the optimal estimators of the state variables x_i. These estimates (\hat{x}_{i/i}) are obtained using all measurements from time t = 1, ..., i. By using all the measurements from the initial time onwards to derive the estimates, one automatically exploits temporal redundancy in the measured data. The Kalman filter (Narasimhan and Jordache, 2000) starts by assuming an initial estimator of the state variables and an estimator of its error covariance matrix P:
\hat{x}_0 = E[x_0]   (3)

Cov[x_0] = P_0   (4)
These quantities are used for the prediction of the state variable (no control input is considered, that is u=0) and the error covariance matrix P of the state estimate as follows:
\hat{x}_{i/i-1} = A \hat{x}_{i-1/i-1} + B u_{i-1}   (5)

P_{i/i-1} = A P_{i-1/i-1} A^T + Q   (6)
The next phase is the updating of the state estimate and of its error covariance matrix by using the process measurements:

\hat{x}_{i/i} = \hat{x}_{i/i-1} + k_i (y_i - C \hat{x}_{i/i-1})   (7)

P_{i/i} = (I - k_i C) P_{i/i-1}   (8)
where kt is the Kalman filter gain given by:
ki =Pi/i-lC^(CP^i-lC^+R)
^
(9)
The corrected values are obtained by formally minimising the sum of squares of the differences between the estimates and the true values of the state variables; the filter is thus an extension of the well-known deterministic least-squares objective function.
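For concreteness, a straightforward NumPy transcription of the recursion above might read as follows. This is our sketch, not the authors' code; the matrix names follow Eqs. (1)-(9), and the initialisation P_0 = I is an arbitrary placeholder.

```python
import numpy as np

def kalman_filter(A, B, C, Q, R, ys, us=None):
    """Discrete, time-invariant Kalman filter implementing Eqs. (3)-(9)."""
    n = A.shape[0]
    x = np.zeros(n)                        # x_hat_0 = E[x_0], Eq. (3)
    P = np.eye(n)                          # P_0, Eq. (4) (placeholder choice)
    estimates, P_diag = [], []
    for i, y in enumerate(ys):
        u = np.zeros(B.shape[1]) if us is None else us[i]
        x = A @ x + B @ u                  # prediction, Eq. (5)
        P = A @ P @ A.T + Q                # Eq. (6)
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # gain, Eq. (9)
        x = x + K @ (y - C @ x)            # update, Eq. (7)
        P = (np.eye(n) - K @ C) @ P        # Eq. (8)
        estimates.append(x.copy())
        P_diag.append(np.diag(P).copy())
    # Averaging the error variances over the horizon gives Eq. (10) below.
    return np.array(estimates), np.mean(P_diag, axis=0)
```

Note that the covariance trajectory P_{i/i} depends only on the model matrices and the sensor set, not on the measured values, which is what makes the a priori performance evaluation of the next section possible.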
3. Instrumentation Performance Measure If the Kalman filter is to be used as the monitoring paradigm, then it is necessary to choose or develop the desired performance measures. We define the performance \pi_{Perf,j} of the estimation of variable j by averaging its error variance element [P_{i/i}]_{jj} over the entire time horizon:

\pi_{Perf,j} = \frac{1}{n} \sum_{i=1}^{n} [P_{i/i}]_{jj}   (10)
P_{i/i} has been selected as the basis for the evaluation of the performance index since it can be determined a priori and without any previous knowledge of the measurements. The parameters required for its computation are R, Q and P_0. The measurement error covariance R is given by the quality of the sensors, while the process noise covariance Q is generally more difficult to determine because one does not have the ability to observe the process directly. If an ideal process is assumed, where all variability sources are included in the model, Q = 0. Finally, the value of P_0 is selected to be equal to R (a practical initialisation for the filtering process). Under conditions where Q and R are constant, both the estimate of the error covariance P_{i/i} and the Kalman gain k_i stabilise quickly and then remain constant. Therefore, the asymptotic value of P_{i/i} can also be used as a performance measure. In fact, when the Kalman filter is applied to a system that is continuous and dynamic, the latter is preferred, whereas when conditions reflect short-lived batch systems the former is more appropriate. It is clear that the performance can be constructed for any set of sensors if and only if the variables are observed. Thus, any design model needs to be able to guarantee observability, either independently or through the model equations. One possible global performance index can be constructed by comparing the measuring system performance with the one corresponding to the same system in which all variables are measured. When only a few variables are of interest, only these will be considered, S being the set of variables of interest:
\Pi_{Perf,1} = \frac{1}{k_0 + \sum_{s \in S} \left| \pi^{s}_{Perf\_Current} - \pi^{s}_{Perf\_Optimum} \right|}   (11)
k_0 being a smoothing value. However, an alternative performance index can be constructed by adding up the indices of the variables included in S:
\Pi_{Perf,2} = \sum_{s \in S} \left| \pi^{s}_{Perf\_Current} - \pi^{s}_{Perf\_Optimum} \right|   (12)
4. Observability Given the topology of the process and the placement of the sensors, the variable classification procedure aims to classify the measured variables as redundant or non-redundant and the unmeasured variables as observable or unobservable. This is an
important task for the performance of DR, since the presence of unobservable or non-redundant variables may generate a singular matrix that could lead to the failure of the DR procedure. Bagajewicz (2000) and other authors have considered the classification procedure in the case of linear DDR or linearised DDR. This procedure allows obtaining the set of redundant variables, which are introduced into the Kalman filter algorithm via the matrix C. That is, for each measured variable a value of one is introduced in the corresponding diagonal element of C. Once the Kalman filter returns the variance-covariance matrix of the adjusted variables, the variance-covariance matrix of the unmeasured but observable variables is obtained by using the observability model obtained from the variable classification procedure. In this way one can get the variance-covariance of all variables.
5. Design Model The minimum cost model proposed is the following:

min N_s
subject to:  f \le \Pi_{Perf}   (13)
             required observability

where f is a certain given threshold. For a given number of sensors N_s, the optimal sensor placement is found by determining the diagonal elements of the observability matrix C (if C_ii = 1 the variable i is measured and if C_ii = 0 it is not measured). To obtain the best performance, the matrix C is varied. One difficulty with this formulation is that the threshold values f are difficult to assess. It is possible to substitute the required observability constraints by the required variance of the state variable estimator. The unobservable variables are then represented by a variable with a very high variance. Since the optimisation problem includes binary variables, the solution is obtained by enumeration. The optimisation strategy is as follows (a sketch is given after this list):
1. Determine the optimum performance, given by the case when each process variable involved in the dynamic mass balance is measured, i.e. the observability matrix is the identity matrix.
2. Eliminate one sensor and obtain the list of sensor networks, from the total set of combination alternatives, that satisfy the constraints.
3. Obtain the system performance by selecting the minimum value of the objective function from the list obtained in step 2.
4. Repeat steps 2 and 3 until N_s is equal to the minimum number of sensors that allows system observability.
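A compact rendering of this enumeration is given below, using the trace of the asymptotic error covariance as the aggregate index. This is our sketch, not the authors' code; the observability check is left implicit, since an unobservable sensor set simply yields a very large trace, mirroring the high-variance device described above.

```python
import numpy as np
from itertools import combinations

def asymptotic_P(A, Q, R_full, measured, n_iter=300):
    """Iterate the covariance recursion (6)-(9) towards steady state for a
    given set of measured variables (indices into the state vector)."""
    idx = list(measured)
    n = A.shape[0]
    C = np.eye(n)[idx]
    R = R_full[np.ix_(idx, idx)]
    P = np.eye(n)
    for _ in range(n_iter):
        P = A @ P @ A.T + Q                                   # Eq. (6)
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)          # Eq. (9)
        P = (np.eye(n) - K @ C) @ P                           # Eq. (8)
    return P

def enumerate_networks(A, Q, R_full):
    """Steps 1-4 above: full network first, then ever smaller sensor sets.
    An unobservable set shows up with a huge (diverging) trace(P)."""
    n = A.shape[0]
    best = {}
    for n_s in range(n, 0, -1):                               # step 4 loop
        scored = [(np.trace(asymptotic_P(A, Q, R_full, s)), s)
                  for s in combinations(range(n), n_s)]       # step 2
        best[n_s] = min(scored)                               # step 3
    return best                                               # best[n] is step 1
```

Plotting the best index against N_s gives curves of the kind shown in Figures 2a and 2b.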
Only when the performance index can be expressed in the same units as cost can one construct a true cost-minimization algorithm. Before that is obtained, one needs to look at a spectrum of solutions and decide the best trade-off of performance vs. cost. In any case, the Pareto-optimal space over the different objectives can be determined.
6. Case Study Figure 1 shows a process network used as a Case Study to evaluate the proposed sensor placement methodology, taken from Darouach and Zasadzinski (1991): eight streams and four nodes form it. Simulated measured data were generated from the true values that obey the balance relations with an addition of normally distributed random errors with zero mean and known variance-covariance matrix.
Figure 1. Process Network.
In this case study, both level and flow-rate sensors are considered, with no more than one sensor per variable (multi-observation is not considered). In Figure 2.a the behaviour of the system performance based on comparison with the asymptotic performance (Equation 11) can be seen.
Figure 2.a. Correlation between Ns and the system performance using Eq. 11.
Figure 2.b. Correlation between Ns and the system performance using Eq. 12.
In Figure 2.b the system performance is based on the sum of the individual performances (Equation 12). The unmeasured variables are emulated by giving them an initial guess value with a very high variance. These figures show the results of both approaches and the advantage of these procedures for improving sensor network design decision making. The first approach shows a significant jump in performance when going from 9
to 10 sensors, and suggests that a good choice is to use 10 sensors, because the improvements afterwards are marginal. Figure 2.b, however, does not detect this feature.
7. Conclusions A new approach to the design of instrumentation networks for dynamic systems based on Kalman filtering techniques is presented. Different performance measures are proposed and compared through a Case Study. The optimisation problem is solved in an enumerative way. Observability was also tackled by two approaches; the same results can be obtained, and the difference between them mainly affects the mathematical characteristics of the resulting models (and the required solution procedures), as well as the need for a classification based on observability.
8. References
Bagajewicz, M., 2000, Design and Upgrade of Process Plant Instrumentation (ISBN: 1-56676-998-1), CRC (formerly published by Technomic Publishing Company) (http://www.techpub.com).
Benqlilou, C., Bagajewicz, M.J., Espuña, A. and Puigjaner, L., 2002, A Comparative Study of Linear Dynamic Data Reconciliation Techniques, 9th Mediterranean Congress of Chemical Engineering, Barcelona, Nov. 26-30.
Darouach, M. and Zasadzinski, M., 1991, Data Reconciliation in Generalised Linear Dynamic Systems, AIChE J., 37(2), 193.
Narasimhan, S. and Jordache, C., 2000, Data Reconciliation and Gross Error Detection: An Intelligent Use of Process Data, Gulf Publishing Co., Houston, TX.
9. Acknowledgements Support from the Ministry of Education of Spain for Dr. Bagajewicz's sabbatical stay at E.T.S.E.I.B. is acknowledged.
Chaotic Oscillations in a System of Two Parallel Reactors with Recirculation of Mass Marek Berezowski, Daniel Dubaj Cracow University of Technology, Institute of Chemical Engineering and Physical Chemistry, 31-155 Krakow, ul. Warszawska 24, Poland, e-mail: [email protected]
Abstract The paper deals with the analysis of the dynamics of a system of two non-adiabatic reactors operating in parallel. The effect of the recycle degree and of the division of the feedstock on the generation of temperature-concentration chaotic oscillations in the system is investigated.
1. The Model The system of two independent chemical reactors, operating in parallel, is presented in Fig. 1.
Fig. 1. Conceptual scheme of the system. A feed stream with normalised flowrate equal to 1-f is introduced to the inlet of the system, while the streams entering the individual reactors are divided according to q and 1-q. The outlet streams of the products are mixed with each other, yielding the resulting degree of conversion and temperature according to the relations:
$$\alpha_s = q\alpha_1 + (1-q)\alpha_2; \qquad \Theta_s = q\Theta_1 + (1-q)\Theta_2 \qquad (1)$$
The whole system operates as a recirculating one, which makes it possible to recover the unreacted mass and the heat evolved in the reactors. For tank reactors the corresponding balances are given by the following equations. Mass balance of reactor 1:
$$\frac{d\alpha_1}{d\tau} = q(f\alpha_s - \alpha_1) + \phi_1(\alpha_1,\Theta_1) \qquad (2)$$
Heat balance of reactor 1:
$$\Phi_1\frac{d\Theta_1}{d\tau} = q(f\Theta_s - \Theta_1) + \phi_1(\alpha_1,\Theta_1) + (1-f)\,\delta(\Theta_H - \Theta_1) \qquad (3)$$
Mass balance of reactor 2:
$$\varepsilon\frac{d\alpha_2}{d\tau} = (1-q)(f\alpha_s - \alpha_2) + \phi_2(\alpha_2,\Theta_2) \qquad (4)$$
Heat balance of reactor 2:
$$\varepsilon\Phi_2\frac{d\Theta_2}{d\tau} = (1-q)(f\Theta_s - \Theta_2) + \phi_2(\alpha_2,\Theta_2) + (1-f)\,\delta(\Theta_H - \Theta_2) \qquad (5)$$
The kinetics of the reaction in the respective reactors are described by Arrhenius-type relations:
$$\phi_1(\alpha_1,\Theta_1) = (1-f)\,Da\,(1-\alpha_1)^n \exp\!\left(\frac{\gamma\Theta_1}{1+\beta\Theta_1}\right) \qquad (6)$$
$$\phi_2(\alpha_2,\Theta_2) = (1-f)\,\varepsilon\,Da\,(1-\alpha_2)^n \exp\!\left(\frac{\gamma\Theta_2}{1+\beta\Theta_2}\right) \qquad (7)$$
As is well known, each of the reactors can generate autonomous temperature-concentration oscillations (Sheintuch and Luss, 1987; Razon and Schmitz, 1987; Zukowski and Berezowski, 2000). Their period depends on the values of the reactor parameters. If the reactors differ from a technological viewpoint, the generated oscillations may differ in frequency. In this situation the mixing of the two outlet streams may lead to time series of a multiperiodic or quasiperiodic character. It turns out that if a part of the resulting stream is mixed with the feed, such a system may generate chaotic changes of the concentrations and temperatures in the individual reactors. This has been proved by numerical analysis.
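A minimal integration sketch of the model of Eqs. (1)-(7) is given below. The parameter values repeat the (partly reconstructed) list quoted with Fig. 3 further down and should be treated as assumptions, as should the initial conditions and integration settings.

```python
# Sketch: integrate the two-reactor recycle model (1)-(7) with scipy.
import numpy as np
from scipy.integrate import solve_ivp

Da, n, beta, gamma = 0.02, 1.5, 2.7, 15.0   # assumed values from the text
delta, theta_H, eps = 1.5, 0.01, 1.1        # heat transfer, coolant temp., volume ratio
Phi1 = Phi2 = 1.1                           # reactor capacitances (assumed)
f, q = 0.1, 0.5                             # recycle and feed-division coefficients

def rate(a, th):
    return (1.0 - f) * Da * (1.0 - a) ** n * np.exp(gamma * th / (1.0 + beta * th))

def rhs(t, y):
    a1, th1, a2, th2 = y
    a_s = q * a1 + (1 - q) * a2        # Eq. (1): mixed outlet conversion
    th_s = q * th1 + (1 - q) * th2     # Eq. (1): mixed outlet temperature
    r1, r2 = rate(a1, th1), eps * rate(a2, th2)
    da1 = q * (f * a_s - a1) + r1
    dth1 = (q * (f * th_s - th1) + r1 + (1 - f) * delta * (theta_H - th1)) / Phi1
    da2 = ((1 - q) * (f * a_s - a2) + r2) / eps
    dth2 = ((1 - q) * (f * th_s - th2) + r2
            + (1 - f) * delta * (theta_H - th2)) / (eps * Phi2)
    return [da1, dth1, da2, dth2]

sol = solve_ivp(rhs, (0.0, 2000.0), [0.1, 0.0, 0.1, 0.0], max_step=0.5)
# Collecting the extrema of alpha_s over the tail of the trajectory gives one
# vertical slice of a Feigenbaum-type diagram; sweeping f (or q) reproduces
# the structure of Figs. 2-3.
```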
In Fig. 2 the Feigenbaum diagram is presented, which illustrates the character of the dynamics of the system under consideration as a function of the recirculation coefficient f.
Fig. 2. Feigenbaum diagram; q = 0.5. On the vertical axis of the diagram the extreme values of the conversion degree α_s are indicated. Fig. 2 clearly shows the interval of periodic oscillations (lines), of quasiperiodic oscillations (shaded areas generated at the point f = 0), as well as intervals of chaotic oscillations (shaded areas preceded by a period-doubling scenario). The single steady state is marked by a broken line; it is unstable. The character of the individual intervals has been confirmed both by the sensitivity of the model with respect to the initial conditions and by the corresponding Poincaré sections. In Fig. 3 the analogous Feigenbaum diagram is presented, which illustrates the character of the dynamics of the system under consideration as a function of the feed stream division degree q.
Fig. 3. Feigenbaum diagram; f = 0.1. Three areas are seen in it. On the left side of the figure there is an area of triple steady states; in the bottom part of this area the unstable branch is seen (broken lines). The middle part of the figure includes an area of stable (continuous line) and unstable (broken line) single steady states. The unstable steady states generate both periodic solutions (single lines) and chaotic solutions (shaded area). On the right side of the figure there is an area of triple steady states; the bottom unstable fragments (broken line) generate stable periodic solutions (continuous lines). A Poincaré section confirming the chaotic character of the solution is presented in Fig. 4. All the calculations have been performed for the following parameter values: Da = 0.02, n = 1.5, β = 2.7, γ = 15, δ = 1.5, Θ_H = 0.01, Φ₁ = Φ₂ = 1.1, ε = 1.1.
Fig. 4. Poincaré section; f = 0.1, q = 0.525.
2. Conclusions It is interesting that two reactors, operating in parallel and coupled by a recirculation loop, can generate chaotic oscillations. Joining in parallel two apparatuses fed by streams of the same composition and identical flowrates amounts to nothing more than an increase of the volume in which the reaction process occurs; thus two reactors operating in parallel seem, from the modelling point of view, equivalent to one larger reactor. Likewise, the application of recycle to a single apparatus does not introduce qualitative changes in the system. This is the situation when the system operates at steady state, where all inertial constants are of no importance. The situation is different for unsteady states: different reactor volumes, and thus different residence times in the individual reactors, may generate various types of dynamics in them. In the example presented this is captured by the constant ε = 1.1, the ratio of the volume of reactor 2 to the volume of reactor 1. Finally, since a tubular reactor with axial dispersion may be modelled by a cascade of tank reactors, the qualitative results obtained in this study can be transposed to systems made up of parallel tubular reactors. This means that a parallel heterogeneous tubular system with recycle may also generate temperature-concentration quasiperiodic and chaotic oscillations, although a single apparatus offers only simple periodic oscillations.
3. Symbols
Da  Damköhler number
f   recycle coefficient
n   order of reaction
q   partition coefficient
α   conversion degree
β   coefficient related to enthalpy of reaction
δ   dimensionless heat transfer coefficient
ε   ratio of volume of reactor 2 to volume of reactor 1
γ   dimensionless number related to activation energy
Φ   dimensionless capacitance of reactor
Θ   dimensionless temperature
Subscripts: 1, 2 refer to reactor 1 or 2; s refers to the outlet from the system; H refers to the heat exchanger temperature.
4. References Razon, L.F. and Schmitz, R.A., 1987, Chem. Engng Sci., vol. 42, 1005: Multiplicities and instabilities in chemically reacting systems - a review. Sheintuch, M. and Luss, D., 1987, Chem. Engng Sci., vol. 42, 41: Identification of observed dynamic bifurcations and development of qualitative models. Zukowski, W. and Berezowski, M., 2000, Chem. Engng Sci., vol. 55, 339: Generation of chaotic oscillations in a system with flow reversal.
5. Acknowledgements This work was supported by the State Committee for Scientific Research (KBN-Poland) under grant number PBZ/KBN/14/T09/99/01d.
Control Structure Selection for Unstable Processes Using Hankel Singular Value Yi Cao* and Prabikumar Saha School of Engineering, Cranfield University, Bedford MK43 0AL, UK
Abstract Control structure selection for open-loop unstable processes is the main theme of this paper. The Hankel singular value has been used as a controllability measure for input-output selection. This method ensures feedback stability of the process with minimal control effort and provides a quantitative justification for the controllability. Simulation results with the Tennessee-Eastman test-bed problem justify the proposed theory.
1. Introduction One of the most important issues in control structure selection is choosing appropriate screening criteria, viz. controllability measures, for input and output combinations. I/O selection is performed based on a plant model and a proposed set of candidate actuators and sensors; a reason for not using all the available devices could be the reduction of control system complexity. Various controllability measures are available in the literature; their foundation is often laid by the singular value decomposition, e.g. singular vectors, the relative gain array, I/O effectiveness factors, etc. However, few of them address the combinatorial issue involved in I/O selection from a large number of candidates, particularly for open-loop unstable processes. Sometimes it may be desirable to perform I/O selection for an open-loop unstable plant that is already equipped with devices which may be used to control certain outputs to ensure feedback stability prior to further design of the control system (McAvoy & Ye, 1994); however, such decisions have been made solely on the basis of engineering understanding. The aim of the present work is to find a quantitative measure which can be used to select from a large number of candidate inputs and outputs for open-loop unstable processes.
2. Theoretical Background Glover (1986) studied the robust stabilization of a linear multivariable open-loop unstable system modelled as (G+Δ), where G is a known rational transfer function and Δ is a perturbation (or plant uncertainty). G is decomposed as G1+G2, where G1 is antistable and G2 is stable (Figure 1). The controller and the output of the feedback system are denoted by K and y, respectively. G1 is strictly proper and K is proper. Glover (1986) argued that the stable projection G2 does not affect the stabilizability of the system, since it can be exactly cancelled by feedback. The necessary and sufficient condition for G to be robustly stabilized is to stabilize its antistable projection G1. * Author for correspondence ([email protected])
Figure 1: Closed-loop system subject to plant uncertainty Δ (antistable projection G1, stable projection G2 and controller K). In the following, RL∞ denotes the space of proper rational transfer functions with no poles on s = jω, with norm denoted by ‖·‖∞; RH∞ denotes the subspace of RL∞ with no poles in the closed right half plane; A* is the conjugate transpose of a matrix A, whereas for a rational function of s, G1* denotes [G1(−s)]*. The above feedback system is internally stable iff
$$S,\; KS,\; SG_1,\; I-KSG_1 \in RH^{\infty} \qquad (1)$$
$$\det(I-G_1K)(\infty) \neq 0 \qquad (2)$$
$$S := (I-G_1K)^{-1} \qquad (3)$$
where S is the sensitivity matrix. The objective is to find a controller K that stabilizes (G1+Δ) for all allowable perturbations Δ; in other words, the problem is to achieve feedback stability by a controller K with minimal control effort, i.e. to minimize ‖KS‖∞. Francis (1987) argues that for technical reasons it is assumed that G1 has no poles on the imaginary axis; thus G1* belongs to RL∞, but not RH∞. In that case, the minimum value of ‖KS‖∞ over all stabilizing K equals the reciprocal of the minimum Hankel singular value of G1*, σ_min(Γ_{G1*}), i.e. the square root of the smallest nonzero eigenvalue of Γ*_{G1*}Γ_{G1*}, where Γ_{G1*} is the Hankel operator with symbol G1*. Francis (1987) proves that the operator Γ*_{G1*}Γ_{G1*} and the matrix L_cL_o share the same non-zero eigenvalues, where L_c and L_o are the controllability and observability gramians, respectively. If the system has M inputs and N outputs, the state-space matrices of G1 can be written as:
$$B_1 = [b_1\; b_2\; \cdots\; b_M] \qquad (4)$$
$$C_1 = [c_1^T\; c_2^T\; \cdots\; c_N^T]^T \qquad (5)$$
where {b_j, j = 1,...,M} and {c_i, i = 1,...,N} are the state-space matrices of the subsystems with the j-th input and the i-th output. Their controllability and observability gramians will be
$$A_1L_{cj} + L_{cj}A_1^{*} = b_jb_j^{*} \qquad (6)$$
$$A_1^{*}L_{oi} + L_{oi}A_1 = c_i^{*}c_i \qquad (7)$$
The minimum HSV of that subsystem will be
$$\sigma_{ij} := \sigma_{\min}(\Gamma) = \sqrt{\lambda_{\min}(L_{cj}L_{oi})} \qquad (8)$$
where λ_min(·) denotes the minimum (nonzero) eigenvalue. Understandably, it is possible to extract subsystems with m selected inputs and n selected outputs from G1. Suppose the selected inputs and outputs have index sets J = {j1, j2, ..., jm} and I = {i1, i2, ..., in}, respectively. The controllability and observability gramians for any such subsystem will be
$$L_c = \sum_{j\in J} L_{cj}, \qquad L_o = \sum_{i\in I} L_{oi}$$
and the HSV can be calculated in a similar manner as in Eqn 8. It is observed that if σ_IJ is the HSV of the selected subsystem and σ_0 is that of the whole process, then σ_IJ is always smaller than σ_0, and the numerical value of σ_IJ decreases monotonically with further elimination of input(s) and output(s). Thus, it is possible to find a set of inputs and outputs such that a subsystem consisting of those particular inputs and outputs satisfies σ_0 − σ_IJ < δ, where δ can be chosen arbitrarily. However, interaction between candidate inputs and outputs immensely affects the numerical values of the HSV vector; thus, the minimum value of the HSV vector alone is not sufficient to identify the correct I/O combination. Instead, the sum of all the elements of the HSV vector, normalized by the HSV of the whole process, is used as the controllability measure in this work (Lim, 1996). The presence of pure integrators in the system invalidates the above theory, as the matrix A1 in Eqns (6) & (7) will be singular. To counter this problem, a diagonal matrix with elements of small magnitude can be added to the system matrix A before decomposing it into stable and antistable parts. Through this operation the poles shift slightly without disturbing the overall characteristics of the process, and the pure integrators move to the right-hand side of the imaginary axis, becoming small positive poles which can later be extracted in G1.
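A compact sketch of this gramian-based screening is given below. It assumes an antistable realisation (A1, B1, C1) is already available; the per-input and per-output gramian equations follow Eqs. (6)-(7) as reconstructed above, so the sign conventions are an assumption of this sketch.

```python
# Gramian-based minimum-HSV screening for candidate I/O subsets.
import numpy as np
from scipy.linalg import solve_lyapunov   # solves a x + x a^H = q

def column_gramians(A1, B1, C1):
    """Per-input controllability and per-output observability gramians."""
    Lc = [solve_lyapunov(A1, np.outer(B1[:, j], B1[:, j].conj()))
          for j in range(B1.shape[1])]
    Lo = [solve_lyapunov(A1.conj().T, np.outer(C1[i, :].conj(), C1[i, :]))
          for i in range(C1.shape[0])]
    return Lc, Lo

def min_hsv(Lc, Lo, inputs, outputs, tol=1e-12):
    """Minimum Hankel singular value of the selected subsystem (Eq. 8)."""
    Lc_sub = sum(Lc[j] for j in inputs)      # gramian of the input subset
    Lo_sub = sum(Lo[i] for i in outputs)     # gramian of the output subset
    eigs = np.linalg.eigvals(Lc_sub @ Lo_sub).real
    eigs = eigs[eigs > tol]                  # keep the *nonzero* eigenvalues
    return float(np.sqrt(eigs.min()))
```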
3. Tennessee-Eastman Process The Tennessee Eastman test-bed problem (Downs & Vogel, 1993) involves the control of five unit operations. The simulated plant has 41 process variables and 12 manipulated variables, as illustrated in Figure 2, and is modelled with 50 state variables. Among the 41 process variables there are 22 controllable outputs, including level, pressure, temperature and flow, plus 19 composition indicators. The chemical reactions are irreversible and occur in the vapor space of the reactor. The formation of an inert by-product, F, is undesirable. The products G and H accumulate in the reactor. In this paper the desired set point is 50% G and 50% H on a mass basis. By-product F may be present in the product, with 97.5% of the product being composed of G and H.
Figure 2: A schematic diagram of the Tennessee-Eastman process. A model of the process is generated in Simulink® software; details of the process data are available elsewhere (Downs & Vogel, 1993). Open-loop simulation of the model indicates that the process is unstable in nature. The control objective is to select the inputs and outputs in order to stabilize the complete system.
4. Results and Discussions 4.1. Control structure selection The entire analysis has been carried out with MATLAB® software. Due to the wide range of the numerical values of the base-case I/O variables, proper scaling is essential for reliable analysis. Although the manipulated inputs are already scaled between 0 and 100% rather than their original engineering units, a further scaling factor of 5% of their maximum possible variation on either side of the base-case value is used to avoid input saturation during real-time control. On the other hand, various scaling factors are used for the outputs depending on their relative importance during the run of the process: flow, pressure and level variables are given 10%, 2.5% and 2% of their nominal values, respectively, as scaling factors, whereas that of the temperature variables is 1 °C. The resulting scaled process is decomposed into stable and antistable subsystems using the STABPROJ subroutine of the Robust Control Toolbox. The antistable subsystem has two pairs of positive complex poles, 3.0648 ± 5.0837i and 0.024973 ± 0.15521i, and two pure integrators (with numerical values very close to zero).
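The pole-shifting device described at the end of Section 2 can be applied before this decomposition step. A one-function sketch, with an assumed perturbation size:

```python
import numpy as np

def shift_poles(A, epsilon=1e-4):
    """Perturb the diagonal of A so that pure integrators leave the imaginary
    axis before the stable/antistable decomposition. The sign of epsilon
    decides which projection retains them; the paper moves them to the right
    half plane so they are kept in the antistable part G1."""
    return A + epsilon * np.eye(A.shape[0])
```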
387
Figure 3: HSVs of subsystems with one input (or output) and all outputs (or inputs). To observe the effect of the individual inputs and outputs on the process, only one input (or output) has been taken at a time, along with all the outputs (or inputs), for analysis. The Hankel singular values calculated in this way are shown in Fig. 3, which clearly shows the dominance of the liquid levels in the stripper and the separator (outputs 12 and 15) and of the reactor cooling water flow (input 10) over the other inputs and outputs. From engineering understanding it can be concluded that these outputs can be controlled with the stripper and separator liquid flows, and that the reactor cooling water flow can be used to control the reactor temperature. Further analysis using a Branch & Bound method has proven this selection correct (Saha & Cao, 2002).
4.2. Controller design & closed-loop results The graphical user interface SISOTOOL is a design tool in MATLAB® software which allows one to design single-input/single-output (SISO) compensators by interacting with the root locus, Bode and Nichols plots of the open-loop system. PI controllers are designed for the TE process using this tool by closing one loop at a time (Table 1).
Table 1. Controller settings for the TE plant.
Loop                    Gain   Reset
Output 12 vs Input 7    -0.1   0.01
Output 15 vs Input 8    -0.1   0.01
Output 9 vs Input 10    -7.2   0.01
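For illustration, the settings of Table 1 can be wired into a simple discrete PI loop. This sketch interprets "reset" as the integral weight in a parallel PI law, which is an assumption; the TE process model itself is not reproduced here.

```python
# Minimal discrete PI controllers with the gains and resets of Table 1.
def make_pi(gain, reset, dt=1.0):
    integral = 0.0
    def pi(error):
        nonlocal integral
        integral += error * dt
        return gain * (error + reset * integral)
    return pi

loops = {          # (output index, input index): controller
    (12, 7): make_pi(-0.1, 0.01),
    (15, 8): make_pi(-0.1, 0.01),
    (9, 10): make_pi(-7.2, 0.01),
}
```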
Figure 4: Closed-loop response of the TE plant using the stabilizing PI controllers (controlled output profiles and manipulated input profiles versus time in hours). The closed-loop simulation results are shown in Figure 4; the PI controllers render the desired stability.
5. Conclusion It is shown in this work that the Hankel singular value can be used as an I/O controllability measure in control structure selection for open-loop unstable processes. The method ensures feedback stability of the process with minimal control effort and provides a quantitative justification for the controllability. Simulation results with the Tennessee-Eastman test-bed process show that the selected I/O set is effective and that the prediction made by the new I/O selection measure is valid.
6. Literature Cited Downs, J.J. and Vogel, E.F., 1993, Comput. Chem. Engng., 17(3), 245-255. Francis, B.A., 1987, A Course in H∞ Control Theory, Lecture Notes in Control and Information Sciences, Eds. M. Thoma and A. Wyner, Springer-Verlag, Heidelberg (Germany), 140. Glover, K., 1986, Int. J. Control, 43(3), 741-766. Lim, K.B., 1996, J. Guidance, 20(1), 202-204. McAvoy, T.J. and Ye, N., 1994, Comput. Chem. Engng., 18(5), 383-413. Saha, P. and Cao, Y., 2002, PSE 03, China. Accepted for presentation.
Neural Networks Based Model Predictive Control of the Drying Process Mircea V. Cristea, Raluca Roman, Şerban P. Agachi "Babeş-Bolyai" University of Cluj-Napoca, Faculty of Chemistry and Chemical Engineering, 11 Arany Janos, 3400 Cluj-Napoca, Romania, [email protected]
Abstract The paper presents the simulation results of using Neural Networks (NN) to build a statistical model, afterwards used for Nonlinear Model Predictive Control (NLMPC) of the drying process. Important incentives of the NN approach are explored, such as modelling the drying process, for which detailed governing rules may be difficult to formalize as first-principle models, and reducing the computation time in nonlinear model based control. Different control structures are tested, using directly measured or inferred (NN based observer) process variables. Incentives and drawbacks of the different control approaches are outlined and the most favourable control scheme is pointed out. Simulation results reveal clear benefits of the NN based NLMPC using the NN based observer approach, compared with traditional control methods, and prove incentives for industrial implementation.
1. Introduction High-voltage electric insulator production implies a first-step batch drying process, in order to reduce the moisture content of the drying product from 18-20% to 0.4%, in special gas-heated drying chambers. The second step is carried out in high temperature ovens, to achieve the desired moisture content of the final product. Gas and air flow rates are controlled according to a special program, during a period of about 100 hours, in order to obtain the desired moisture content while avoiding the risk of unsafe tensions in the drying products. The main control problem for the drying process performed in the gas-heated chambers is the lack of direct measurement of the product moisture content, needed for feedback control. Building a detailed first-principle model able to thoroughly describe the spatial and temporal evolution of properties inside the electric insulator body, needed in model based control, is complex and not yet reliable. The NN-based control may overcome some of these problems. The main studied outputs of the process are the moisture content of the drying product X, the outlet air temperature T_o and the air humidity x_o. The considered inputs of the process are the natural gas and fresh air flowrates, as manipulated inputs, and the temperature T_e and moisture content x_e of the inlet air, together with the heating capacity of the natural gas H_f, as disturbances. The gas-heated chamber may be considered divided into three sections, as shown in Figure 1: section 1 represents the air volume within the drying chamber, section 2 the direct surroundings of the drying product, and section 3 the drying product itself.
Figure 1. Description of the drying chamber.
The manipulated input and output variables together with the above mentioned disturbances have been chosen based on practical (industrial) considerations and results obtained in previous studies (Cristea et al. 2000).
2. Methods Founded on an idealised model of the biological neuron, the calculation paradigm of NN is able to represent information on complex systems. Neural networks are composed of simple elements, neurons, operating in parallel. The network function is determined mainly by the connections between its neurons: weighted connection paths link neurons to each other, the weighting structure providing the total network performance. The main benefits of the NN approach consist in their remarkable ability for learning and generalisation and their robust behaviour in the presence of noise. As a consequence, NN may be successfully used for modelling systems in which detailed governing rules are unknown or difficult to formalise, but for which the desired input-output set is known. They also present incentives for cases where input-output data are noisy and where high processing speed is required. The employed NN architecture was a multilayer feed-forward structure, using the backpropagation training algorithm to compute the network biases and weights. Two layers of neurons have been considered, with the tan-sigmoid transfer function for the hidden layer and the purelin transfer function for the output layer. The quasi-Newton Levenberg-Marquardt algorithm was used for training the NN, and an early stopping method was applied to prevent overfitting and improve generalisation (Hagan and Menhaj, 1994).
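A minimal sketch of such a two-layer network with Levenberg-Marquardt training is given below, using scipy's least-squares solver as the LM engine. Layer sizes, initialisation and the absence of early stopping are simplifying assumptions of this sketch, not the authors' exact setup.

```python
# Two-layer feedforward net (tanh hidden, linear output) trained with LM.
import numpy as np
from scipy.optimize import least_squares

def unpack(p, n_in, n_hid, n_out):
    i = 0
    W1 = p[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b1 = p[i:i + n_hid]; i += n_hid
    W2 = p[i:i + n_out * n_hid].reshape(n_out, n_hid); i += n_out * n_hid
    b2 = p[i:i + n_out]
    return W1, b1, W2, b2

def forward(p, X, n_in, n_hid, n_out):
    W1, b1, W2, b2 = unpack(p, n_in, n_hid, n_out)
    h = np.tanh(X @ W1.T + b1)    # tan-sigmoid hidden layer
    return h @ W2.T + b2          # purelin output layer

def train(X, Y, n_hid=6, seed=0):
    n_in, n_out = X.shape[1], Y.shape[1]
    rng = np.random.default_rng(seed)
    p0 = rng.normal(scale=0.1, size=n_hid * (n_in + 1) + n_out * (n_hid + 1))
    res = least_squares(
        lambda p: (forward(p, X, n_in, n_hid, n_out) - Y).ravel(),
        p0, method="lm")          # Levenberg-Marquardt
    return res.x
```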
3. Research Results 3.1. Neural network design and training Building the neural network model has been the first step in performing the NN based NLMPC. The NN model of the dryer has been developed to serve two goals. The first is to provide information on the time evolution of the target variables, inherently needed for prediction in the NLMPC algorithm. The second is to infer the moisture content
of the drying product based on the available measured variables, a model that is later used for the NN observer based NLMPC. The developed NN model has the complementary property of requiring reduced computational effort, supplying the algorithm with the speed necessary for real-time implementation. The structure of the NN consists of two layers of neurons having as NN inputs the natural gas flow rate, the moisture content of the drying product, the outlet air temperature and the chamber air humidity (considered with values at the current sampling time t); the last three variables are the state variables of the process. The NN outputs are the same three state variables, but considered at the next sampling time t+Δt. The trained NN is designed to predict the behaviour of the state variables one step into the future; applied repeatedly, the dynamic NN predicts the time evolution of the state variables over a desired time horizon (a sketch of this recursion is given after this section). As the drying process of electric insulators is performed batch-wise, the process presents as a particularity the lack of steady states. Accordingly, the training of the NN has been carried out on the basis of a specially prepared set of training data, chosen in accordance with industrial control practice. Good training performance has been obtained, proved by the close-to-unity correlation coefficients between the training data set (targets) and the NN simulation data set (NN response). Following the testing step, the trained NN has been used to simulate the drying process under different imposed drying programs, compared to those used for training. The favourable fit was also preserved for the testing subsets of data, demonstrating a good generalisation property of the NN. The prediction capability of the NN has been subsequently exploited for the observer based nonlinear model predictive control. 3.2. NN based model predictive control The NLMPC structure obeys current control practice, i.e. driving the evolution of the moisture content of the drying product in the desired way by controlling the air temperature inside the chamber. Usually, the desired decreasing profile of the drying product moisture content is obtained by imposing an increasing ramp-constant profile on the air temperature. The NN based nonlinear model predictive controller uses the previously trained NN to perform its prediction tasks. Step response models, simulated by the NN, have been used in two approaches: a single model approach and a multiple models approach; the second uses updated models for each of the ramp-constant time segments of the drying programme. First, the setpoint following capacity was tested in the absence of disturbances. Simulation results for drying chamber air temperature control are presented in Figure 2. The results revealed a very good behaviour, particularly for the NLMPC case using multiple models: the control performance shows reduced overshoot and short settling time compared to the simple PID control structure (Cristea et al., 2000). Afterwards, control performance testing was carried out in the presence of three significant disturbances typically occurring in industrial practice: a 10 °C inlet air temperature T_e drop (from 16 °C to 6 °C), a 10% heating power capacity H_f drop of the natural gas and a 20% inlet air moisture content x_e rise.
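Returning to the one-step-ahead predictor of Section 3.1, a minimal sketch of its recursive application over a horizon is shown below. Here `nn_step` stands for any trained network mapping the current input and state to the next-sample state; its exact signature is an assumption.

```python
# Recursive one-step-ahead prediction with a trained NN state model.
import numpy as np

def predict_horizon(nn_step, x0, u_sequence):
    """Roll the one-step NN model forward over a control sequence.

    x0: current state [moisture, outlet air temp., chamber humidity]
    u_sequence: planned manipulated input (e.g. gas flow) at each sample
    """
    x, trajectory = np.asarray(x0, dtype=float), []
    for u in u_sequence:
        x = nn_step(np.concatenate(([u], x)))  # state at t + dt
        trajectory.append(x)
    return np.array(trajectory)
```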
Figure 2. Setpoint following ability of the NN based MPC for the single (solid line) and multiple (dashed line) model approach.
All three disturbances have been introduced as steps at time t = 120000 s. The NN based NMPC control result for the heating power capacity H_f disturbance (the most important one) is presented in Figure 3.
Figure 3. Disturbance rejection ability of the NN based MPC for the heating power capacity H_f drop, for the single (solid line) and multiple (dashed line) model approach.
The disturbance rejection aptitude of the NN based NLMPC presents favourable control performance with respect to settling time, overshoot and zero offset, for all of the tested disturbances. Taking into account that the target variable, the moisture content of the product, is not available for direct measurement, a NN based state observer is proposed for its estimation. The data provided by the NN state observer are used for feedback nonlinear model predictive control of the product moisture content, as shown in Figure 4.
Figure 4. Structure of the control system for direct moisture content control using NN based state observer and NN based NLMPC controller.
Figure 5. Setpoint (dashed line) following and disturbance rejection ability for direct moisture content control using the NN based state observer and NN based NMPC (solid line).
The time-dependent setpoint selection for the moisture content is based on practical and theoretical considerations concerning the time evolution of the product drying-rate. The conditions stated by the above-mentioned considerations are best fulfilled by a seven-segment ramp function, which is actually used as the setpoint. Simulations were conducted using this control structure and the results, for two 10% heating power disturbances applied at t = 3000 s and t = 1250000 s, are presented in Figure 5. The simulation results for this control structure show both good setpoint following and disturbance rejection capability. Although a slight decrease of the control performance quality has been noticed, this alteration has reduced influence on overshoot, response time and offset. The ability to perform direct control of the product moisture content may be highly appreciated. The applied model predictive algorithm has a few special features that make it more effective: it operates with constraints on manipulated and controlled variables; a nonlinear form of the MPC algorithm was used to obtain feasible control performance; and the NLMPC controller was tuned according to a dynamic sensitivity analysis.
4. Conclusions The proposed NN based observer of the product moisture content, coupled with the NN based model used for NLMPC, proves to be a good strategy for controlling the drying process of electric insulators. The results obtained simulating the NN based NLMPC reveal good setpoint tracking performance as well as effective disturbance rejection. Its high performance is owing to the direct control of the product moisture content relying on the NN based state observer, to the calculation speed provided by the NN based model to the NLMPC algorithm, and to the optimal manipulation of both the inlet air and gas flow rates. The NN based model offers the incentive of capturing the intrinsic behaviour of the drying process, otherwise difficult to describe in first-principle models. Utilization of this method should be carefully performed with respect to the quality of the training data set. Confidence in this data set may be increased by expunging outliers, filtering the data presenting errors in the NN simulation steps and repeating the NN training procedure, both preceded by a careful analysis of the physical feasibility of the training data. For the industrial plant, the NN based NLMPC may result in more efficient energy management, higher productivity and improved product quality.
5. References Cristea, V.M., Baldea, M. and Agachi, Ş.P., 2000, Model Predictive Control of an Industrial Dryer, ESCAPE-10, Florence. Hagan, M.T. and Menhaj, M.H., 1994, Training feedforward networks with the Marquardt algorithm, IEEE Transactions on Neural Networks, 5, 989.
6. Acknowledgements Financial support from The Swiss National Science Foundation within the Institutional Partnership Project no.7 IP62643, with ETH Zurich, is gratefully acknowledged.
Real-Time Optimization Systems Based on Grey-Box Neural Models F.A. Cubillos(1) and E.L. Lima(2) (1) Depto. Ing. Quimica, Universidad de Santiago de Chile, Casilla 10233, Santiago, Chile. Fax: 56-2-6817135, email: [email protected] (2) Programa de Engenharia Quimica, COPPE, Universidade Federal do Rio de Janeiro, C.P. 68502, CEP 21945-970, Rio de Janeiro, RJ, Brasil.
Abstract This paper investigates the feasibility of using grey-box neural models (GNM) for the design and operation of model-based Real-Time Optimization (RTO) systems operating in a dynamic fashion. The GNM is based on fundamental conservation laws associated with neural networks (NN) used to model uncertain parameters. The proposed approach is applied to the simulated Williams-Otto reactor, considering three GNM process approximations. The obtained results demonstrate the feasibility of using GNM models in RTO technology in a dynamic fashion.
1. Introduction The online optimization of chemical plants has enjoyed considerable industrial interest because of its capacity to achieve competitive advantages in the marketplace. Numerous successful applications of real-time optimization in industrial practice have been reported (Nath and Alzein, 2000). The economic performance of an RTO system is measured by the expected profit achieved, which is strongly influenced by the quality of the model used (Loeblein and Perkins, 1998). RTO systems reduce the plant/model mismatch by updating the model using actual and historical plant data sets (Yip and Marlin, 2002). Because the RTO execution is time consuming, simple phenomenological steady-state models are currently used. In practical situations, however, it is difficult to reach steady state between RTO execution periods, leaving the plant in a permanent state of slow dynamic changes. Under this condition the model is not entirely consistent, and an inefficient update process can reduce the economic performance of the plant. The key to solving this problem is the use of phenomenological dynamic plant models, which have the disadvantage of being difficult to obtain and difficult to update in real time. An alternative way of overcoming this difficulty is the use of dynamic models based on combinations of first principles and neural networks (NN), called grey-box neural models (GNM). A GNM normally consists of a phenomenological part (heat and/or mass balance differential equations) and an empirical part (a neural network in this work). Due to the inherent flexibility of NN, models based on this structure are well suited to represent complex functions such as those encountered in chemical reaction processes. This work proposes to incorporate in the RTO system a dynamic GNM of the plant, based on phenomenological principles and neural networks. The model update is then equivalent to network training. But the main reason for using this approach is the fact that a steady-state equivalent model may be easily derived from the dynamic GNM and used efficiently in the optimization step.
2. Grey-Box Neural Models (GNM) Neural networks are non-parametric, massively connected models that perform quite well at approximating nonlinear, multivariable functions. This unique property has allowed them to be used successfully in predictive control of nonlinear processes. Grey-box neural models combine a phenomenological model of the system with neural networks, which estimate the uncertain parameters of the process. This technique enables the synthesis of simpler mathematical models than purely phenomenological ones, with more robust generalization properties than purely NN models. These two properties make the GNM especially attractive in tasks associated with process identification, process control and optimization (Cubillos and Lima, 1998; Xiong and Jutan, 2002). The GNM technique consists of formulating a process model formed by equations derived from phenomenological principles, such as mass, energy and momentum balances, together with neural networks dedicated to estimating those parameters that are uncertain or difficult to model. This form of representation is an attempt to add prior knowledge to black-box neural models, in order to reduce their complexity and improve their adaptive and predictive properties (Psichogios and Ungar, 1992). Thompson and Kramer (1994) classified these grey-box models into two main types: NN providing intermediate values (parameters or variables) to be used by the phenomenological model (series grey-box models), or NN in parallel with the dynamic model compensating the plant/model mismatch (parallel grey-box models). Figure 1 shows the series scheme for a grey-box model as used in this work.
Figure 1: The GNM approach.
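A minimal sketch of the series scheme of Figure 1 is given below: the dynamic model keeps its phenomenological structure and calls a neural network for the uncertain parameter, here a reaction rate. `nn_rate` stands for any trained regressor and is an assumption of this sketch, not part of the original paper.

```python
# Series grey-box model: phenomenological balance + NN-estimated parameter.
import numpy as np

def series_gnm_rhs(x, u, nn_rate):
    """dx/dt of one species balance: inflow - outflow - NN-estimated reaction.

    x: state vector (x[0] is the species mass fraction)
    u: (F_in, F_out, M, x_in) -- flows, holdup and inlet composition
    """
    F_in, F_out, M, x_in = u
    R = nn_rate(np.concatenate((x, u)))   # empirical part of the model
    return (F_in * x_in - F_out * x[0]) / M - R
```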
3. Process Description In this work we applied the proposed approach to the simulated CSTR from the Williams-Otto benchmark plant, as modified by Forbes and Marlin (1996), which may be described by elementary kinetics. The reaction sequence is:
A + B → C
B + C → P + E        (1)
C + P → G
The instantaneous profit can be expressed as a function of the feed and product flowrates as (Forbes et al., 1994):
$$P = 1143.38\,X(P)\,F_r + 25.92\,X(E)\,F_r - 76.23\,F_a - 114.34\,F_b \qquad (2)$$
The ideal CSTR has no reactor temperature dynamics, and the manipulated variables to optimize were Tr and the flowrate of the B component (Fb). Fa and M were set to 2 kg/s and 10220 kg, respectively.
GNM synthesis. To study the feasibility of using GNM-type models in RTO technology, three different modelling schemes were selected, as described in Forbes et al. (1994):
i) Single reaction approximation (M1): [A + 2B → P + E]
ii) Two reaction approximation (M2): [A + 2B → P + E; A + B + P → G]
iii) Complete three reaction system (M3), as described in (1).
Each GNM was synthesized considering the non-stationary mass balance for each species. Feedforward neural networks were added to estimate the hypothetical reaction rates with unknown kinetics. Target reaction rates were calculated directly from discretized mass balances. For example, for the single reaction model (M1), the P component balance may be used to estimate the unique reaction rate, given by:
$$R_1 = \left(\frac{F_r\,X(P)}{M}\right)_k + \frac{X(P)_k - X(P)_{k-1}}{T_0} \qquad (3)$$
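Eq. (3) translates directly into a vectorised computation over sampled plant data; a small sketch, where the sampled series are assumed to come from the PRBS tests described below:

```python
# Target reaction rates from the discretised balance of Eq. (3).
import numpy as np

def target_rates(X_P, Fr, M, T0):
    """X_P: sampled P composition; Fr: outlet flow; M: holdup; T0: sample time."""
    X_P, Fr = np.asarray(X_P, dtype=float), np.asarray(Fr, dtype=float)
    return (Fr[1:] * X_P[1:]) / M + (X_P[1:] - X_P[:-1]) / T0
```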
In order to obtain adequate data for estimating the reaction rates, PRBS input sequences in Fb and Tr were introduced, using a sample time of 1000 s. Operating conditions and outlet concentrations were registered to train the neural networks. The model update, which consists of the NN adaptation, was carried out using a second-order recursive algorithm. The best NN structures were found by a systematic training procedure, considering the outlet concentrations of the A and B components and the reactor temperature as the inputs to the networks. Finally, networks with one hidden layer of four nodes and sigmoidal activation functions were selected. Based on the updated dynamic GNM it was possible to derive an equivalent steady-state model able to be used for optimization purposes.
To illustrate the approach, consider the two reaction approximation (M2): the model has to estimate two reaction rates (R1 and R2) using neural networks. At steady state the model equations are:
$$\begin{aligned}
F_a - (F_a+F_b)\,X(A) - R_1 M - R_2 M &= 0\\
F_b - (F_a+F_b)\,X(B) - 2R_1 M - R_2 M &= 0\\
-(F_a+F_b)\,X(P) + R_1 M - R_2 M &= 0\\
-(F_a+F_b)\,X(E) + 2R_1 M &= 0\\
-(F_a+F_b)\,X(G) + 3R_2 M &= 0
\end{aligned} \qquad (4)$$
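The steady-state system (4) can be solved numerically for given inputs and the profit of Eq. (2) evaluated at the solution; a sketch, where `nn_rates` stands for the trained network returning [R1, R2] and all numerical values in the usage line are illustrative:

```python
# Solve the steady-state M2 model (Eq. 4) and evaluate the profit (Eq. 2).
import numpy as np
from scipy.optimize import fsolve

def residuals(X, Fa, Fb, Tr, M, nn_rates):
    xA, xB, xP, xE, xG = X
    R1, R2 = nn_rates(np.array([xA, xB, Tr]))   # NN inputs per the paper
    Fr = Fa + Fb
    return [Fa - Fr * xA - R1 * M - R2 * M,
            Fb - Fr * xB - 2 * R1 * M - R2 * M,
            -Fr * xP + R1 * M - R2 * M,
            -Fr * xE + 2 * R1 * M,
            -Fr * xG + 3 * R2 * M]

def profit(X, Fa, Fb):                           # Eq. (2)
    Fr = Fa + Fb
    return 1143.38 * X[2] * Fr + 25.92 * X[3] * Fr - 76.23 * Fa - 114.34 * Fb

# Usage (assumed values):
# X_ss = fsolve(residuals, np.full(5, 0.1), args=(2.0, 4.0, 85.0, 10220.0, nn_rates))
```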
A similar approach was used to derive the other GNM models, where the NNs were used to estimate the respective reaction rates. Figure 2 shows the prediction of the outlet concentration of the E product using the M2 model. Similar results are found for the M1 and M3 models, showing that the GNM scheme is able to adequately track the process dynamics.
4. RTO Results The GNM approach with the three alternative adapted models was tested in the RTO scheme considering operating conditions similar to those used by Forbes and Marlin (1996), with feed flow rate and reactor temperature as the optimization variables. The optimization problem is to maximize the profit, as indicated in Equation 2, constrained by the corresponding GNM model. A dynamic test was applied with the reactor operating at a non-optimal point: at a pre-established time, the RTO is connected to the plant, running every sample time, in order to drive the process to the optimum. The current reactor states were used as the initial guess for the next optimization step. Figure 3 shows the behaviour of the RTO system for the three GNM models and the true model over a time horizon of 22000 s (21 RTO executions). The results indicate that all GNM models were able to find a set of inputs close to the true optimum and to maintain these conditions over time. It can be observed that the deviations with respect to the nominal optimum are more severe the more inexact the process model used in the RTO. Another issue observed is the sensitivity of the RTO/GNM system to the initial conditions given to the optimizer. In order to compare the performance of each model considered in this work, a dynamic performance index, defined as the total profit obtained over the time window, was calculated and is shown in Table 1. The results show that a more exact model brings better performance, in particular the three reaction model.
Table 1: Cumulative profit.
MODEL     Total Profit ($)
True      8663
GNM/M1    8237
GNM/M2    8537
GNM/M3    8576
Figure 2: Actual (continuous) and predicted (dashed) E concentration by the GNM/M2 model.
Figure 3: Instantaneous profits.
5. Conclusions The obtained results demonstrate the feasibility of using GNM models in RTO technology in a dynamic fashion. We consider that this approach introduces improvements in RTO technology, allowing extension to highly nonlinear plants, feasibility of on-line adaptation using dynamic information, and integration between MPC and RTO systems by using common plant models. However, issues such as model adequacy for the optimization task should be analyzed in depth.
6. Notation
Fa, Fb = inlet flow rates
Fr = outlet flow rate
M = reactor mass holdup
P = economic objective function
R = reaction rate (in mass)
To = sample time
Tr = reactor temperature
X(i) = composition of component i in mass fraction
Subscripts: i = component i; k = sample instant
7. References Cubillos, F. and Lima, E., 1998, Adaptive hybrid neural models for process control, Computers and Chemical Engineering, Vol. 22, S989-S992. Forbes, J.F., Marlin, T.E. and MacGregor, J.F., 1994, Model Adequacy Requirements for Optimizing Plant Operations, Computers and Chemical Engineering, Vol. 18, pp. 497-510. Forbes, J.F. and Marlin, T.E., 1996, Design Cost: A Systematic Approach to Technology Selection for Model-Based Real-Time Optimization Systems, Computers and Chemical Engineering, Vol. 20, 717-734. Loeblein, C. and Perkins, J., 1998, Economic analysis of different structures of on-line process optimization systems, Computers and Chemical Engineering, Vol. 22, pp. 1257-1269. Nath, R. and Alzein, Z., 2000, On-line dynamic optimization of olefins plants, Computers and Chemical Engineering, Vol. 24, 533-538. Psichogios, D. and Ungar, L., 1992, A hybrid neural networks-first principles approach to process modeling, AIChE J., 38, 1. Thompson, M. and Kramer, M., 1994, Modeling chemical processes using prior knowledge and neural networks, AIChE J., 40, 132. Xiong, Q. and Jutan, A., 2002, Grey-box modelling and control of chemical processes, Chemical Engineering Science, Vol. 57, 6, 1027-1039. Yip, W.S. and Marlin, T.E., 2002, Multiple Data Sets for Model Updating in Real-Time Operations Optimization. Accepted for publication in Computers and Chemical Engineering.
8. Acknowledgments The authors wish to acknowledge the support provided by FONDECYT (Projects 1020041 and 1010179).
Change Point Detection for Quality Monitoring of Chemical Processes Belmiro P. M. Duarte^* and Pedro M. Saraiva^ GEPSI - PSE Group, Instituto Superior de Engenharia de Coimbra, Portugal ^GEPSI - PSE Group, Dept. of Chemical Engineering, University of Coimbra, Portugal
Abstract This paper addresses a class of problems typically faced by process engineers and operators: the detection of assignable variation causes in retrospective data series corresponding to quality or process variables. The detection of average and/or variance changes is treated as a classical change-point analysis problem, where the two main questions raised are: 1) "Did a change really occur?" and 2) "When did it occur?". The approach adopted for achieving such a goal is based upon Change-Point Analysis (CPA) performed over CUSUM operators. To handle any type of data structure, the detection step is performed by a nonparametric bootstrapping procedure that estimates a probability distribution, according to which the existence of process shifts is determined. We tested the above approach through its application to typical chemical engineering case studies, covering a simulated CSTR and industrial data collected from a Portuguese pulp plant.
1. Introduction The detection of process shifts in retrospective discrete-time data series can be treated as a classical change-point problem (Basseville and Nikiforov, 1993; Chen and Gupta, 2000; Darkhovsky and Brodsky, 1993). The basic goal underpinning this paradigm is to find, through enumerative procedures, the point(s) in time where a certain statistic suffered pronounced behaviour changes. Some of the most commonly used statistics in parametric detection are the Geometrical Moving Average (Hines, 1976), CUSUM (Page, 1957), and the Generalized Likelihood Ratio (Lorden, 1971), while Mann-Whitney (Darkhovsky and Brodsky, 1993) and CUSUM (Csorgo and Horvath, 1997) values have been employed for non-parametric change detection. Sullivan and Woodall (1996) used a likelihood ratio test to detect change-points in retrospective time series and introduced the so-called Likelihood Ratio Test (LRT) control charts, aimed at establishing whether a process is under statistical control (Turner et al., 2001). Taylor (2001) introduced a tool based upon CUSUM statistics to identify change-points in the values of a given monitored variable, through a nonparametric bootstrapping procedure. In this article we describe a Change Point Analysis (CPA) methodology for the identification of process behaviour changes associated with the quality performance of a given chemical process, and illustrate its practical value through the presentation of two specific case study applications.
* Author to whom correspondence should be addressed: Departamento de Engenharia Quimica, Instituto Superior de Engenharia de Coimbra, Quinta da Nora, 3030 Coimbra, Portugal.
2. Change Point Analysis Approach The strategy that we will use to detect process shifts in retrospective data sets is based upon the CUSUM operator described by Basseville and Nikiforov (1993), Csorgo and Horvath (1997) and Taylor (2001) for off-line types of problems. The choice of such an operator is justified by its optimal properties (Lorden, 1971). Let us suppose that a data set D comprises m independent observations x_i, for i = 1,...,m, with x_i following a generic probability distribution P_θ, with θ being a vector of relevant parameters (such as the mean or standard deviation). If a change in the mean occurs at observation m_1, then the observations through m_1 are described by a distribution P_θ1 and all the subsequent ones by P_θ2. Assuming for now that only changes in the mean occur, and focusing our attention on their identification, let us further consider that the observations are:
$$x_i = \mu + z_i \qquad (1)$$
where z_i is a stationary signal with mean 0 and finite covariance that exhibits short-range dependence. The cumulative sum operator at the i-th point, S_i, is given by:
$$S_i = \sum_{k=1}^{i} z_k \qquad (2)$$
with the change-point, t_0, placed at
$$t_0 = \arg\max_i |S_i| \qquad (3)$$
To estimate the confidence levels associated with each change point, we compare the CUSUM operator range with its distribution obtained by re-sampling the original data, the range of S_i, designated as r, being given by:
$$r = \max_i S_i - \min_i S_i \qquad (4)$$
The decision regarding change-point significance is achieved by a bootstrapping procedure, with the points z_i randomly sampled without replacement (Efron et al., 1994). A large number of bootstrap samples (n_b) is required to achieve a robust decision on how much r would vary if no change took place. The number of bootstrap instances with ranges r_k superior to r, designated as n_r, can be expressed as
$$n_r = \sum_{k=1}^{n_b} 1(r_k > r) \qquad (5)$$
and the occurrence of a significant shift is determined from the confidence level
$$cl = 100.0\,\frac{n_r}{n_b} \qquad (6)$$
According to the values of cl, a three-level discrimination function, dcl, may be defined as:
$$dcl = \begin{cases} 2 & \text{if } cl \in [100.0(1.0-\alpha_1);100.0] \cup [0.0;100.0\,\alpha_1] \\ 1 & \text{if } cl \in [100.0(1.0-\alpha_2);100.0(1.0-\alpha_1)) \cup (100.0\,\alpha_1;100.0\,\alpha_2] \\ 0 & \text{if } cl \in (100.0\,\alpha_2;100.0(1.0-\alpha_2)) \end{cases} \qquad (7)$$
where α1 and α2 are two different type I acceptance errors. Values of dcl = 1 and dcl = 2 provide, respectively, evidence and strong evidence that a change occurred at point t_0. For the detection of variance change points, a similar approach is used, based upon the d_k statistic proposed by Inclan and Tiao (1994):
$$d_k = \frac{\sum_{i=1}^{k} z_i^2}{\sum_{i=1}^{m} z_i^2} - \frac{k}{m} \qquad (8)$$
which is monitored and used to support a change-point analysis. After locating a first major shift, the basic change-point detection algorithm is recursively applied to each of the subintervals generated. In each of the time subintervals resulting from the above time partition, designated as j and containing m_j points, confidence intervals for the mean can be built according to the t-Student critical values:
$$\hat{\mu}_j - t_{1-\alpha/2,\,m_j-1}\,\frac{\hat{\sigma}_j}{\sqrt{m_j}} \;\le\; \mu_j \;\le\; \hat{\mu}_j + t_{1-\alpha/2,\,m_j-1}\,\frac{\hat{\sigma}_j}{\sqrt{m_j}} \qquad (9)$$
For the standard deviation, each of the points represented, σ̂_j, is given by the sample standard deviation of the observations in subinterval j:
$$\hat{\sigma}_j = \left(\frac{1}{m_j-1}\sum_{i\in j}\left(x_i-\hat{\mu}_j\right)^2\right)^{1/2} \qquad (10)$$
and the confidence limits are built according to the χ² distribution:
$$\left(\frac{(m_j-1)\,\hat{\sigma}_j^2}{\chi^2_{1-\alpha/2,\,m_j-1}}\right)^{1/2} \;\le\; \sigma_j \;\le\; \left(\frac{(m_j-1)\,\hat{\sigma}_j^2}{\chi^2_{\alpha/2,\,m_j-1}}\right)^{1/2} \qquad (11)$$
^ruj
3. Case Studies 3.1. Simulated CSTR The first example chosen to illustrate the CPA capabilities in detecting mean and variance process shifts reports to a simulated continuous stirred tank reactor (CSTR) where a reversible reaction A<-^R is carried out (Economou and Morari, 1986). Each combination of feeding stream properties, i.e. the triplet (C^,, Cj^^, 7,), leads to a set of corresponding output values of ( C ^ , C/j, T). White noise was added to the simulated data thus obtained, through a term N(0,0.1) added to C^,, C^ and C^ , and N(0,1.0) added to 7} and T, where N(//,o) represents a normal distribution with mean // and standard deviation o . A set of 200 points was obtained by performing four step or ramp variations over C^, and T^ (Table 1). Table 1. Disturbances used to generate data for average change detection. Observations #Region Comment 1 1-30 Default case (C^, = 1.0 mol/1; T^ = 430 K) 31-60
2
Positive step change in concentration (C^, = 1.1 mol/1)
61-100
3
Positive step change in temperature (T, = 438.6 K)
101-140
4
Negative step change in concentration (C^, =0.9 mol/1)
141-200
5
Positive slope ramp concentration (slope
disturbance of C^j
change in = 0.1/60
mol/(l.observation); initial value of C^, = 0.9 mol/1) Output C/j was chosen to apply our CPA algorithm, with n^ set to 2000, a^ to 0.05, a2io 0.1, and a to 0.027. The results thus obtained are provided in Figure 1 (where vertical ranges correspond to mean confidence intervals) and Table 2, showing that all of the shifts were indeed identified. Table 2. Change Point Analysis Results. Observation Region Start End 1 1 30 2 60 31 3 61 100 4 101 140 5 141 200
Average (mol/1) 0.5078 0.5607 0.5101 0.4617 0.4912
cl (%) 100.00 100.00 100.00 97.35 100.00
3.2. Industrial data from a pulp plant The set of industrial data considered here for CPA application corresponds to a particular pulp quality variable (brightness), collected every hour from a magnesium bisulphite Portuguese pulp mill. For this quality characteristic the plant has a target value of 88.2 °ISO, together with a minimum allowed specification of 87.5 °ISO. The data available cover a period of almost 200 operating hours, and a CPA performed over both the average and the standard deviation behaviours (Figure 2) shows that several significant changes did occur for both parameters, and allows us to identify them hierarchically, from top-level events to increasingly less significant shifts. Process engineers may stop this recursive addition of detail in an interactive way, whenever they feel that no further added value derives from more time partitioning, even though it may be statistically significant. This interaction and hierarchical identification of process changes can provide powerful insights for a better understanding of the available data and their interpretation, and lead to increased knowledge about the underlying process behaviour: why, how and when real changes did take place.
4. Conclusion In this paper we propose an algorithm based on CUSUM statistics to detect changes in the average and/or variance of retrospective data series. It does not depend on any parametric assumption about the underlying probability distributions, given that a bootstrapping procedure is adopted to identify statistically significant changes. Since many chemical processes do not behave under stable steady-state conditions, but rather with very frequent changes over time, a Change Point Analysis approach such as the one presented here seems to provide a better way to monitor quality variables and achieve process performance improvements than control charts or similar techniques, through a top-down identification of the most significant changes that took place over time. Our algorithm was tested over simulated data describing the operation of a CSTR where a reversible reaction occurs, as well as with industrial data collected from a Portuguese pulp plant.
Figure 1. Mean Change Point Analysis monitoring for the CSTR.
Figure 2. Pulp brightness CPA monitoring for: a) Mean; b) Standard Deviation.
5. References Basseville, M. and Nikiforov, I.V., 1993, Detection of Abrupt Changes: Theory and Applications, Prentice Hall, New Jersey. Chen, J. and Gupta, A.K., 2000, Parametric Statistical Change Point Analysis, Birkhäuser, New York. Csörgő, M. and Horváth, L., 1997, Limit Theorems in Change-Point Analysis, Wiley, New York. Darkhovsky, B.S. and Brodsky, B.E., 1993, Nonparametric Methods in Change Point Problems, Kluwer Academic Publishers, Amsterdam. Economou, C.G. and Morari, M., 1986, Ind. Eng. Chem. Process Des. Dev., 25, 403. Efron, B. and Tibshirani, R.J., 1994, An Introduction to the Bootstrap, CRC Press, New York. Inclan, C. and Tiao, G.C., 1994, Journal of the Amer. Stat. Assoc., 89, 913. Hines, W.G.S., 1976, IEEE Trans. Information Theory, 22, 4, 210. Lorden, G., 1971, Annals of Mathematical Statistics, 42, 1897. Page, E.S., 1957, Biometrika, 44, 248. Sullivan, J.H. and Woodall, W.H., 1996, Journal of Quality Technology, 28, 3, 265. Taylor, W.A., 2000, Change-Point Analysis: A Powerful New Tool for Detecting Changes. Web: http://www.variation.com/cpa/tech. Turner, C.D., Sullivan, J.H., Batson, R.G. and Woodall, W.H., 2001, Journal of Quality Technology, 33, 2, 242.
6. Acknowledgments The authors would like to acknowledge financial support provided by FCT through research project POCTI/1999/EQU/32647.
Selecting Appropriate Control Variables for a Heat Integrated Distillation System with Prefractionator Hilde K. Engelien, Sigurd Skogestad Norwegian University of Science and Technology (NTNU) Department of Chemical Engineering, 7491 Trondheim, Norway
Abstract A heat integrated prefractionator arrangement is studied for a ternary separation of a propane-butane-pentane solution. The prefractionator arrangement has large energy savings compared with the best of the direct or indirect sequence with no heat integration. A heat integrated distillation system can be more difficult to control than a non-integrated arrangement, so good control systems are essential. In this work the focus is on the selection of control variables. The method of self-optimizing control has been used to provide a systematic procedure for the selection of controlled variables, based on steady state economics.
1. Introduction
For ternary separations there are three classical separation schemes: the direct split, the indirect split and the prefractionator arrangement. Energy savings can be achieved for these separation schemes by running one of the columns at a higher pressure and integrating the reboiler/condenser. For these multi-effect systems there are two modes of integration: forward integration, where the heat integration is in the direction of the mass flow, and backward integration, where the integration is in the opposite direction to the mass flow. Cheng and Luyben (1985) compared the steady state designs of several multi-effect configurations for a benzene/toluene/xylene separation. The results showed a 50% energy reduction, and the best configuration for the separation studied was a prefractionator/sidestream column arrangement with reverse integration. Ding & Luyben (1990) presented control studies of both forward and backward integrated prefractionator systems for the ternary separation of benzene-toluene-xylene. A direct split system is used for comparison. The suggested configurations have "total Q" control, and also control the two impurities in the sidestream (from the main column) by manipulating the sidestream flowrate and the sidestream draw-off tray. For the low-purity case they find that the prefractionator arrangement is dynamically about the same as the conventional direct split configuration. For the high-purity case both systems are controllable, but the direct split configuration gives much better load rejection. The control of the sidestream toluene composition is particularly poor. Bildea & Dimian (1999) also investigated the controllability of a system with a prefractionator and a sidestream main column. For the forward and backward integrated cases they looked at four different designs, depending on the light/heavy split in the
prefractionator. The authors concluded that in general the forward heat integration scheme is the easiest to control. In this work a heat integrated arrangement consisting of a prefractionator and a main column with sidestream is studied for the ternary separation of a propane-butane-pentane solution. The objective of this work has been the selection of controlled variables, that is, finding which variables should be controlled. The concept of self-optimizing control (Skogestad, 2000), which is based on steady state economics, is used to provide a systematic framework for the selection of the controlled variables. This method involves a search for the variables that, when kept constant, indirectly lead to near-optimal operation with acceptable economic loss. In self-optimizing control, rather than solving the optimization problem on-line, the problem is transferred into a simple feedback problem (Skogestad, 2000). In practice, this means that when the plant is subject to disturbances it will still operate within an acceptable distance from the optimum, and there is no need to re-optimize when disturbances occur.
2. The Integrated Prefractionator Arrangement
The system studied in this paper is a ternary separation of propane-butane-pentane. The separation is carried out in two columns operating at different pressures (see Figure 1). The first column is a high pressure (HP) prefractionator, which performs the propane/pentane split. Both the distillate and the bottom products from the HP column are fed to the second, low pressure (LP) column. Here propane, butane and pentane are the products from the distillate, sidestream and bottom stream, respectively. In the LP column the top part of the column (above the sidestream) performs the propane/butane split while the bottom part (below the sidestream) performs the butane/pentane split. The feed to the HP column is 300 mol/s (liquid feed), the feed composition is (0.25 0.5 0.25), and there are 20 and 40 stages in the HP and LP columns, respectively (10 in each section). The heat integration between the two columns is in an integrated reboiler/condenser, where the condensing heat from the HP column is used to boil the LP column.
Figure 1. The heat integrated prefractionator arrangement.
2.1. Energy savings for the integrated prefractionator arrangement
A prefractionator arrangement with no heat integration saves about 30% energy compared to the best of the direct or indirect sequence. A prefractionator with further heat integration, where the columns are run at different pressures, can have savings of around 50% compared to the best of the direct or indirect sequence (Ding & Luyben, 1990). For both arrangements the energy saving depends on the recovery of the middle component from the prefractionator. Figure 2 shows a comparison of the minimum vapour flowrate required for the integrated and non-integrated prefractionator arrangements. The calculation is based on the Underwood shortcut equations for minimum vapour flowrate for sharp splits and constant relative volatility. In the prefractionator (propane/pentane split) the minimum vapour flowrate as a function of the recovery of the middle component, r_{B,D}, is calculated from:
V_{min,1} = \frac{\alpha_A z_A}{\alpha_A - \theta_A} F + \frac{\alpha_B r_{B,D} z_B}{\alpha_B - \theta_A} F    (1)
The Underwood roots, \theta_A and \theta_B, are found from the feed equation (King, 1980). The vapour flowrate in the second column is calculated from King's equation (King, 1980) for binary mixtures. Note that the effect of pressure on the separation is not included in these formulas.
(For the prefractionator arrangement with no integration, the total minimum vapour flowrate plotted in Figure 2 is V_min = V_pref + V_main.)
Figure 2. Comparing Vmin for prefractionator arrangement with and without integration, sharp split, propane-butane-pentane; ZF = [0.15 0.7 0.15].
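A minimal numerical sketch of the Eq. (1) calculation is given below, assuming constant relative volatilities and a saturated liquid feed. The volatility values are illustrative stand-ins for the propane-butane-pentane system, and evaluating the expression at both Underwood roots and taking the larger value is one common shortcut treatment when the middle component distributes; this is not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import brentq

alpha = np.array([7.0, 2.0, 1.0])   # illustrative volatilities A, B, C (ref. C)
zf = np.array([0.15, 0.7, 0.15])    # feed composition used in Figure 2
q = 1.0                             # saturated liquid feed

def feed_eq(theta):
    """Underwood feed equation: sum(alpha*z/(alpha-theta)) = 1 - q."""
    return np.sum(alpha * zf / (alpha - theta)) - (1.0 - q)

# Roots: theta_A in (alpha_B, alpha_A), theta_B in (alpha_C, alpha_B)
eps = 1e-6
theta_A = brentq(feed_eq, alpha[1] + eps, alpha[0] - eps)
theta_B = brentq(feed_eq, alpha[2] + eps, alpha[1] - eps)

def vmin_prefrac(rB):
    """Eq. (1): Vmin/F of the prefractionator vs. recovery of B overhead.
    Both roots can be active when B distributes; take the larger value."""
    v = [alpha[0]*zf[0]/(alpha[0]-th) + alpha[1]*rB*zf[1]/(alpha[1]-th)
         for th in (theta_A, theta_B)]
    return max(v)

for rB in (0.0, 0.5, 1.0):
    print(rB, round(vmin_prefrac(rB), 3))
```

Scanning rB from 0 to 1 with this function reproduces the kind of Vmin-versus-recovery curve shown for the prefractionator in Figure 2.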
Based on these minimum vapour flowrate calculations, Table 1 presents comparisons between the energy requirements of integrated prefractionator arrangements and other integrated schemes. The energy savings are given as the percentage improvement compared to the best of the non-integrated direct or indirect sequence. The Petlyuk arrangement has also been included for comparison. In all cases one or both of the integrated prefractionator arrangements have higher energy savings than the other schemes. The highest savings occur when there is a high concentration of the middle component. For feed composition (0.15 0.7 0.15) both integrated prefractionator arrangements have savings of 63.1%, compared to the best of the direct or indirect non-integrated sequence. In terms of energy, the best of the other integrated schemes is a direct split arrangement (with forward or backward integration), which has 37.7% savings.

Table 1. Comparison of energy savings (Vmin) of different systems (compared to the best of the non-integrated direct or indirect sequence), sharp split: propane-butane-pentane.

ZF                DSF (%)  DSB (%)  ISF (%)  ISB (%)  PF (%)  PB (%)  Petlyuk (%)
[1/3 1/3 1/3]     45.5     45.5     28.6     28.6     58.1    59.1    28.6
[0.7 0.15 0.15]   19.8     19.8     20.1     30.2     25.6    41.6    19.85
[0.1 0.45 0.45]   32.1     32.1     25.0     25.0     -       -       25.0
[0.15 0.7 0.15]   37.5     37.5     28.4     28.4     63.1    63.1    28.4
[0.45 0.1 0.45]   27.6     27.6     29.2     30.0     34.1    43.7    27.6
[0.15 0.15 0.7]   32.5     32.5     21.2     21.2     46.7    46.7    21.2
[0.45 0.45 0.1]   42.0     42.0     29.0     29.0     54.7    58.1    29.0

DSF - direct split forward integrated; DSB - direct split backward integrated; ISF - indirect split forward integrated; ISB - indirect split backward integrated; PF - prefractionator forward integrated; PB - prefractionator backward integrated.
3. Self-Optimizing Control
The objective of the study is to implement a simple "optimal" control scheme for the integrated prefractionator arrangement by finding and controlling the variables in the system that will directly ensure optimal economic operation. Then, when there are disturbances in the system, there is no need to re-optimize. The self-optimizing control procedure (Skogestad, 2000) consists of six steps: 1) a degree of freedom (DOF) analysis, 2) definition of the cost function and constraints, 3) identification of the most important disturbances, 4) optimization, 5) identification of candidate controlled variables and 6) evaluation of the loss with constant setpoints for the alternative sets of controlled variables. The first important step in this systematic procedure is to analyse the number of degrees of freedom (DOF) of the system. For the integrated prefractionator arrangement there are eleven DOF when assuming a fixed feedrate. These are: the boilup in the HP column, the condensation rate in the HP column, the reflux, distillate and bottom flowrates from both columns, the sidestream flowrate in the LP column, the boilup in the LP column and the condensation rate in the LP column (see Figure 1).
For this distillation system there are four holdups in the reboilers and condensers that have to be stabilised, but these have no steady-state effects and therefore no effects on the cost function. This then leaves seven degrees of freedom for optimization. In the formulation of the objective function there are two 'conflicting' elements: to produce as much valuable product as possible, but using as little energy as possible. For a given feed, the cost function is defined as the amount of propane, butane and pentane from the LP column (at 99 mol% or more) multiplied by the relevant product prices, minus the cost of boilup:

J = p_D D_LP + p_S S_LP + p_B B_LP - p_Q Q_HP    (2)
where p_D, p_S and p_B are the prices of the propane, butane and pentane products, respectively, and p_Q is the price of boilup. In this study all the products are assumed to have the same value (p_D = p_S = p_B = p). Having defined the objectives, the system constraints must be defined. In addition to requiring positive flows, the following seven constraints have been specified:
• The pressure in the LP column should be greater than or equal to 1 bar.
• The pressure in the HP column should be less than 15 bar.
• The reboiler duty in the LP column (Q_B,LP) must equal the condenser duty in the HP column (Q_C,HP) (equality constraint).
• The product purities in the distillate (x_A,D), side (x_B,S) and bottom stream (x_C,B) should be greater than or equal to 99 mol%.
• The area of the combined reboiler/condenser should be less than or equal to A_max. In practice this can be implemented by allowing the area to vary by using a bypass.
The optimization problem can then be formulated as: min (-J(x,u,d)), subject to g1(x,u,d) = 0 (model equations) and g2(x,u,d) <= 0 (operational constraints). Here x are state variables, u are independent variables that can be affected (DOF for optimization) and d are independent variables that cannot be affected (disturbances). The optimal operating point for the system with no disturbances is then found by solving the optimization problem. This gives the optimal steady state values for all the variables in the system. The optimization is then repeated for the various disturbances. The most important disturbances considered are variations in the feed flow of +/- 20% and variations in the feed composition. From this optimization the active constraints of the system are found. When a variable is at a constraint, active constraint control is implemented. For this system we normally find that the three product compositions, the pressure in the LP column and the area of the exchanger are active constraints. This then leaves us with one unconstrained degree of freedom for which a control variable has to be selected. However, there may be cases when the above constraints are not active. Take, for instance, a case where there are changes to the feed composition such that the condensation in the HP column is higher than the energy requirement of the LP column. In terms of Figure 2 this means that the minimum vapour flowrate curve for the prefractionator will lie above the curve for the main column for all recoveries of the middle component. In order to balance the columns it may then be optimal to overpurify the least valuable product in the main column, so that one purity constraint is no longer
active. Another possible operation is that the prefractionator no longer performs a sharp A/C split, and the light component (A) will appear in the bottom stream in order to reduce the prefractionator duty. It is therefore important for the column operation and control system to study the feed composition and the effects of expected disturbances. If the condensation in the prefractionator is less than the boilup in the main column, this can result in the purity requirements of the LP column not being met, or in pure A at the top, or pure C at the bottom, of the prefractionator. Another factor that can change the active constraints is the price of the products. When the prices are the same it is most profitable to produce each product at the minimum purity specification. However, if the price of one product is higher than the others, then the products with the lower prices may be overpurified in order to produce as much as possible of the high value product. For the unconstrained degree of freedom the suggested control variables tested were one of the following: the boilup in the HP column, a fixed boilup to feedrate ratio (QB/F), the pressure in the HP column, the reflux ratio in the HP column, a fixed reflux to feedrate ratio, the distillate flow in the HP column, the bottom flowrate in the HP column, temperatures in the HP column, temperatures in the LP column, the distillate composition in the HP column and the bottom composition in the HP column.
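The final evaluation step of the procedure can be sketched as follows. For each candidate controlled variable, its setpoint is fixed at the nominally optimal value, the plant is re-solved for each disturbance with that variable kept constant, and the economic loss relative to true re-optimization is recorded. The one-input toy economics below is purely illustrative and stands in for the full column model, which is not reproduced here; the function names and numbers are assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy stand-in for the plant economics: J(u, d) with one unconstrained
# degree of freedom u (e.g. boilup) and a disturbance d (e.g. feed rate).
def J(u, d):
    return -((u - 1.2 * d) ** 2) + 10 * d      # illustrative, not the column model

def optimum(d):
    res = minimize_scalar(lambda u: -J(u, d), bounds=(0.01, 5), method="bounded")
    return res.x, J(res.x, d)

candidates = {
    "u itself (constant input)": lambda u, d: u,
    "u/F ratio":                 lambda u, d: u / d,
}

d_nom, disturbances = 1.0, [0.8, 1.0, 1.2]
u_nom, _ = optimum(d_nom)

for name, c in candidates.items():
    c_set = c(u_nom, d_nom)                    # setpoint fixed at nominal optimum
    worst = 0.0
    for d in disturbances:
        # find the input that keeps c at its setpoint for this disturbance
        res = minimize_scalar(lambda u: (c(u, d) - c_set) ** 2,
                              bounds=(0.01, 5), method="bounded")
        loss = optimum(d)[1] - J(res.x, d)
        worst = max(worst, loss)
    print(f"{name}: worst-case loss = {worst:.4f}")
```

In this toy example the ratio candidate gives essentially zero loss for feed rate disturbances, illustrating why ratio-type variables such as QB/F are often good self-optimizing candidates.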
4. Conclusions
The method of self-optimizing control is applied to a heat integrated prefractionator arrangement. This system has a total of eleven degrees of freedom, with six DOF available for optimization when variables with no steady-state effects have been excluded and the duties of the two columns are matched. From the optimization it is found that there is one degree of freedom left for which there is no obvious choice of control variable. The method of self-optimizing control will be used to find a suitable control variable that will keep the system close to optimum when there are disturbances.
5. References
Bildea, C.S., Dimian, A.C., 1999, Interaction between design and control of a heat-integrated distillation system with prefractionator, Trans IChemE, Vol. 77, Part A, 597-608.
Cheng, H.C., Luyben, W., 1985, Heat-integrated distillation columns for ternary separations, Ind. Eng. Chem. Process Des. Dev., 24, 707-713.
Ding, S.S., Luyben, W., 1990, Control of a heat-integrated complex distillation configuration, Ind. Eng. Chem. Res., 29, 1240-1249.
King, C.J., 1980, Separation Processes, McGraw-Hill Book Co.
Lenhoff, A.M., Morari, M., 1982, Design of resilient processing plants - I: Process design under consideration of dynamic aspects, Chemical Engineering Science, Vol. 37, No. 2, 245-258.
Skogestad, S., 2000, Plantwide control: the search for the self-optimizing control structure, J. Proc. Control, Vol. 10, 487-507.
A Holistic Framework for Supply Chain Management
A. Espuna (a), M.T. Rodrigues (b), L. Gimeno (b) and L. Puigjaner (a)
(a) Departament d'Enginyeria Quimica, Universitat Politecnica de Catalunya, ETSEIB, Avda. Diagonal 647, pab. G-2, E-08028 Barcelona, Spain, email: [email protected]
(b) Departments of Electrical and Chemical Engineering, State University of Campinas, UNICAMP, C.P. 6101, 13083 Campinas S.P., Brazil, email: [email protected]
Abstract
A new approach to Supply Chain Management (SCM) is proposed that considers different structures conceived as module components of a holistic framework. In this way, decision support is supplied at different levels and for different functionalities. Thus, the system may also be used for multi-objective assistance to decision making, while systematically considering the time and other intrinsic constraints associated with the structure of the supply chain. This approach is applicable to the different supply chain layers, from individual single sites to global Supply Chain optimisation, including multi-site (single company) competitive strategic decisions, or the independent management of a similar objective (e.g. benefits, client satisfaction, etc.) for different companies.
1. Introduction
Although the trend towards optimum management of single manufacturing sites has been driven by the integration of aggregate planning and detailed scheduling (Erenguç et al., 1999), the consideration of multi-site manufacturing networks and their co-operative supply chain systems requires a revised perspective and complementary decision making structures. A number of approaches have been presented to address the supply chain issue (Arntzen et al., 1995; Vidal et al., 1997; Homburg, 1998). An important conclusion is that within all these approaches, at least one of the following is not addressed properly: (i) the SCM systems do not take advantage of the modern computing architectures (e.g. open systems) available nowadays, (ii) the planning and scheduling problems are only partially tackled, (iii) they do not suit many manufacturing technologies and (iv) they do not address the whole multi-enterprise supply chain, allowing individual manufacturing components to dynamically connect and adapt according to market changes and opportunities. More specifically, the intrinsic dynamics and complexity of supply chain systems, the special characteristics of the different manufacturing networks and the existence of other considerations (e.g. the environment) which must be taken into account are not adequately covered by current approaches. The approach proposed in this work includes different structures that are conceived as module components of a holistic framework. In this way, decision support is supplied at different levels and for different functionalities (general manager, marketing,
production, distribution, sales, etc.). The planning problem is modeled following the three-phase procedure described below.
2. Solution Approach
Firstly, a sequential approach in phases is proposed for the planning problem. The goal of the first phase is to obtain the most adequate supply chain configuration for the specific demand considered. The results of this phase include decisions about which sites will be used, their run times and the transportation needs between sites, without proceeding to time allocation. All this information is dependent on production rates initially supplied by site managers, which may not be accurate enough, since they will depend on the final work load allocation made in a competitive way. Thus, these production rates should be negotiated in view of the initial and subsequent planning results and their consequences when low level site-specific constraints are applied. The second phase proceeds to the time allocation of site production and transportation in order to minimize the completion times of final products, with acceptable costs for the whole SC as well as for the partners. For that it utilizes the run times already determined in phase one as bounds in a multi-agent environment. Finally, a third phase would deal with detailed site planning. In this way the main characteristic of the proposed sequential approach is to deal with the supply chain configuration problem considering only the run times required to fulfill production needs, without dealing with production and transportation time allocation. Each of these three phases solves one part of a global problem and imposes constraints on the next one, so no guarantee of global optimality can be provided. In order to deal with these problems, additional procedures have to be developed to identify constraint relaxations at each phase that may eventually lead to global improvements. This information should be back-propagated in the form of modifications of the previous phase scenario. Since the third phase of the proposed procedure has already been the object of extensive research, only phases 1 and 2 will be addressed in the next sections.
3. First Phase
This phase selects the sites to be utilized and determines runtimes for production sites, throughput at storage sites and quantities transported between sites. The cost function minimized involves: i) production costs as a function of runtimes or quantities produced, ii) transportation costs between sites, which depend on the quantities transported of each state, and iii) manipulation costs at storage sites as a function of the manipulated mass of each state (throughput). Production sites are characterized by state production and consumption rates in each possible site operating mode. Storage sites can also operate under different operating modes that determine which states can be stored. During the time horizon each site can utilize only one mode of operation. Once the solver has determined the selected sites, runtimes and quantities transported, it is possible to build a Gantt chart in two limit situations: i) "continuous" transportation of a site's output products, that is, transportation can occur immediately after the site's start time, and ii) transportation only after total production. Any real situation will lie between
these two limits. The following sets, parameters and variables are utilized:

i - sites (production and storage sites)
s - states
m - site operating modes
P_{i,m,s} - production rate of state s at production site i operating under mode m
R_{i,m,s} - consumption rate of state s at production site i with mode m
D_s - total external demand (kg) of state s
In_{i,s}, Out_{i,s} - true if site i consumes (produces) state s
link_{i1,i2} - true if site i1 can supply some intermediate state to site i2
sop_{i,m,s} - true if site i under mode m produces state s
soc_{i,m,s} - true if site i under mode m consumes state s
stor_i - true if site i is a storage site; false if site i is a producer site
isto_{i,s} - initial storage of state s at site i
t(i,m) - (continuous) run time of production site i under mode m
bint(i,m) - (binary) = 1 if producer or storage site i utilizes mode m
sto(i,m,s) - (continuous) mass throughput of state s at storage site i in mode m
qt(i1,i2,s) - (continuous) mass (kg) of state s transported from site i1 to site i2
Demand for end products: demand can be external and internal, as these products can also be intermediates; site run times and/or storage site throughputs must allow these demands for end products to be fulfilled (only conditions false are signaled explicitly):

\sum_{i,m | Out_{i,s}, \neg stor_i} P_{i,m,s} t(i,m) + \sum_{i,m | Out_{i,s}, stor_i} [sto(i,m,s) + isto_{i,s}] >= D_s + \sum_{i1,i2 | Out_{i1,s}, In_{i2,s}} qt(i1,i2,s)    [1]
Induced demand: intermediate and raw material needs must be fulfilled through transportation. Equations 2a and 2b are utilized for producer and storage sites, respectively:

\sum_{i1 | link_{i1,i2}} qt(i1,i2,s) = \sum_m R_{i2,m,s} t(i2,m)    [2a]

\sum_{i1 | link_{i1,i2}} qt(i1,i2,s) = \sum_m sto(i2,m,s)    [2b]
Transportation: limited by production or storage. Equations 3a and 3b are utilized for producer and storage sites, respectively:

\sum_{i2 | link_{i1,i2}} qt(i1,i2,s) <= \sum_m P_{i1,m,s} t(i1,m)    [3a]

\sum_{i2 | link_{i1,i2}} qt(i1,i2,s) <= \sum_m sto(i1,m,s) + isto_{i1,s}    [3b]

Binary variable definition, for producer and storage sites respectively:
M bint(i,m) >= t(i,m),    bint(i,m) <= t(i,m)    [4a]

M bint(i,m) >= sto(i,m,s),    bint(i,m) <= sto(i,m,s)    [4b]
Site operation mode: each site operates under only one mode during the planning horizon:

\sum_m bint(i,m) <= 1    [5]
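A minimal sketch of Eqs. [1]-[5] for a tiny two-site instance is shown below, using the PuLP modelling library as an assumed stand-in for whatever MILP solver was actually used; the sets, rates and cost coefficients are illustrative.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary

# Tiny instance: one producer site P1 (one mode) feeding one storage
# site S1, which satisfies an external demand of 100 kg of one state.
prod_rate = 5.0      # P[i,m,s], kg/h
demand    = 100.0    # D_s
M         = 1e4      # big-M linking modes and binaries (Eq. [4])

m = LpProblem("phase1", LpMinimize)
t   = LpVariable("t_P1", lowBound=0)        # run time of P1
sto = LpVariable("sto_S1", lowBound=0)      # throughput at S1
qt  = LpVariable("qt_P1_S1", lowBound=0)    # mass shipped P1 -> S1
b_p = LpVariable("bint_P1", cat=LpBinary)
b_s = LpVariable("bint_S1", cat=LpBinary)

m += 2.0*t + 0.1*qt + 0.05*sto              # production + transport + handling
m += sto >= demand                          # Eq. [1]: storage covers demand
m += qt == sto                              # Eq. [2b]: induced demand at S1
m += qt <= prod_rate * t                    # Eq. [3a]: transport <= production
m += M * b_p >= t                           # Eq. [4a]
m += M * b_s >= sto                         # Eq. [4b]
# Eq. [5] (one mode per site) is trivial here: a single mode per site.

m.solve()
print(t.value(), qt.value(), sto.value())
```

The full model simply replicates these constraints over the site, mode and state sets defined above.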
The kind of results obtained is illustrated with the example shown in Figures 1a,b. Cost minimization leads to a makespan of 251 h with a cost value of 90000. Accepting a cost degradation up to 120000, the makespan becomes 99 h, mainly through the utilization of an external supply of one of the final products through site S47 (which has a higher cost, and is thus not utilized when cost is minimized). The above formulation does not consider due dates for the final product demands. Cost minimization can lead to unacceptable run times, since the implied completion times can prevent fulfilling due dates. If completion time estimates are not acceptable, other supply chain configurations, utilizing alternative suppliers and/or producer sites, can be preferable in spite of higher production or transportation costs. Instead of combining these two aspects in a single cost function, it seems preferable to allow the user to enter some acceptable cost degradation and minimize a cost function that estimates completion times. This reformulation includes all the previous equations, with the objective transformed into a constraint, plus the conditions below modelling completion times:

CT(i2) >= t(i2,m) + delta(i2,i1,s)
delta(i2,i1,s) - CT(i1) - transp. contr. - manip. contr. >= -M [1 - wt(i1,i2)]
delta(i2,i1,s) - CT(i1) - transp. contr. - manip. contr. <= M [1 - wt(i1,i2)]
delta(i2,i1,s) <= M wt(i1,i2)
For a site i2, its completion time must be greater than its runtime plus any contribution from previous sites sending materials to it. The contribution of any site i1 is in turn given by its completion time plus transportation and manipulation times. The detailed model is presented by Gimeno and Rodrigues (2002). Accepting some cost degradation, the result obtained is shown in Figure 1b, with a smaller time to fulfill demand. Again, the newly selected structure is also indicated.
4. Second Phase
In this phase the objective is to allocate production and transportation in time in order to fulfill a specific demand of end products, characterized by quantities and due dates, with acceptable costs for the whole SC as well as for the partners in the SC network. This multiobjective situation is addressed through a multi-agent implementation. Every single actor in the Supply Chain may be modeled through an individual autonomous agent, capable of making any local decisions affecting the links between this actor and the rest of
the system (SC). The active SC structure, the production and transport allocation, and the best and worst completion time estimates from the first phase are used to constrain the search effort.
Figure 1a. Supply Chain structure for minimum cost (90000), at left, and with cost degradation (120000), at right.
Figure 1b. Runtimes for selected production sites and throughput for selected storage sites. White bars correspond to the non-degraded situation, black to cost degradation.
Then, the proposed approach is based on an evolutionary procedure, where each agent is allowed to reconsider its decisions while the rest of the agents also evolve. This leads to a continuously changing scenario for each agent that, on the one hand, might not converge to a final state and, on the other hand, in the case of convergence, will not necessarily lead to a desired global optimum (if it can even be defined). In order to ensure the evolution of the system towards a compromise between individual agent objectives and global objectives, two additional controlling agents have been
introduced: a coordinating agent and a smoothing agent. The coordinating agent calculates the range of global costs (upper and lower bounds), looking at the SC as a global supplier that should satisfy different internal constraints, defined through aggregation of the individual characteristics of each site. This may lead to a significant reduction in the decision making range of the individual agents. The smoothing agent enforces convergence in a similar way, but based on an assessment of the evolution of the system.
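A conceptual sketch of this coordination scheme is given below: each site agent proposes local changes, the coordinating agent clamps proposals to the globally computed bounds, and the smoothing agent damps accepted moves. The cost model, bounds and smoothing factor are illustrative and do not correspond to the actual multi-agent implementation.

```python
import random

random.seed(1)
bounds = {"S1": (0.0, 40.0), "S2": (10.0, 60.0)}   # from phase-one estimates
start  = {"S1": 30.0, "S2": 50.0}                  # proposed start times
due    = 55.0                                      # demand due date

def local_cost(t):
    # earliness is cheap; lateness relative to the due date is expensive
    run = 20.0
    return max(0.0, t + run - due) * 10.0 + t * 0.1

alpha = 0.5                                        # smoothing factor
for _ in range(50):
    for site in start:
        proposal = start[site] + random.uniform(-5.0, 5.0)
        lo, hi = bounds[site]                      # coordinating agent: clamp
        proposal = min(max(proposal, lo), hi)
        if local_cost(proposal) < local_cost(start[site]):
            # smoothing agent: move only part of the way to the proposal
            start[site] += alpha * (proposal - start[site])

print({k: round(v, 1) for k, v in start.items()})
```

Even this toy loop shows the intended effect: clamping keeps individual decisions inside the globally feasible range, while the damped updates force the otherwise oscillatory negotiation to settle.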
5. Conclusions
A sequential approach to Supply Chain Management has been presented. Structurally, the different phases are conceived as module components of a holistic framework. In this way, decision support is supplied at different levels and for different functionalities (general manager, marketing, production, distribution, sales, etc.). Additional support is provided by modules contemplating financial issues (cash-flow management, risk assessment), forecasting facilities, etc., and the use of different optimisation tools. Thus, the system may also be used for multi-objective assistance to decision making, while systematically considering the time and other intrinsic constraints associated with the structure of the supply chain. This approach is applicable to global Supply Chain optimisation, including multi-site (single company) competitive strategic decisions, or the independent management of a similar objective (e.g. benefits, client satisfaction, etc.) for different companies. The proposed holistic approach has been devised for this type of integration/negotiation in a multi-objective environment allowing different modules to interact. It is perceived as a configurable modular system provided with interfaces that strictly adhere to consolidated and emerging standards (ISA, CO, ...). Thus, software transparency and interoperability are guaranteed, allowing easy customisation and agile reuse of existing software.
6. References
Arntzen, B.C., Brown, G.G., Harrison, T.P., Trafton, L.L., 1995, Global supply chain management at Digital Equipment Corporation, Interfaces 25, 69-93.
Erenguç, S.S., Simpson, N.C., Vakharia, A.J., 1999, Integrated production/distribution planning in supply chains: An invited review, European Journal of Operational Research, 115.
Gimeno, L., Rodrigues, M.T., 2002, Sequential Approach to Production Planning in Multisite Environments, 15th IFAC World Congress, Barcelona, Spain.
Homburg, C., 1998, Hierarchical multi-objective decision making, European Journal of Operational Research, 105, 155-161.
Mele, F.D., Espuna, A., Puigjaner, L., 2002, Discrete Event Simulation for Supply Chain Management, in AIChE Annual Meeting, Nov. 3-8, Indianapolis, USA (Accepted).
Vidal, C.J., Goetschalckx, M., 1997, Strategic production-distribution models: A critical review with emphasis on global supply chain models, European Journal of Operational Research, 98, 1-18.
Management of Financial and Consumer Satisfaction Risks in Supply Chain Design
G. Guillen (++), F.D. Mele (++), M. Bagajewicz (+), A. Espuna (++) and L. Puigjaner (++)(#)
(++) Universidad Politecnica de Catalunya, Chemical Engineering Department, ETSEIB, Diagonal 647, 08028 Barcelona, Spain; (+) University of Oklahoma, School of Chemical Engineering and Materials Science, 100 E. Boyd St., T-335, Norman, OK 73019, USA. On sabbatical leave at ETSEIB; (#) Corresponding author.
Abstract
In this article the design of a supply chain consisting of several production plants, warehouses and distribution centers is considered, introducing uncertainty and using a two-stage stochastic model. The model takes into account the profit over the time horizon and considers consumer satisfaction. Financial risk and the risk of not meeting the consumer satisfaction are considered and managed at the design stage. The result of the model is a set of Pareto optimal curves that can be used for decision making.
1. Introduction
Supply Chains (SC), which started to be studied in the early 90s, involve several types of decision variables: strategic, tactical and operational. Strategic decisions include the supply chain design, which consists of the determination of the optimal configuration of an entire SC network: the number, location and capacity of the plants, warehouses and distribution centers to be set up, the transportation links, and the flows and production rates of materials. In this work, we present a two-stage stochastic model for SC design with management of financial and consumer satisfaction risks. The model is presented next.
2. Deterministic Model
This model considers several production plants, warehouses and distribution centers. It is similar to previous SC models (Tsiakis et al., 2001). However, some modifications have been introduced. The NPV instead of the cost is used as the objective function. In addition, the capacities of the plants/warehouses are considered in the determination of the capital investment and in the calculation of the associated operational costs.
2.1. Constraints
• Mass balance: steady state is assumed throughout the SC:
\sum_j X_{pijt} = Q_{pit}    \forall p, i, t    (1)

\sum_i X_{pijt} = \sum_k Y_{pjkt}    \forall p, j, t    (2)

\sum_j Y_{pjkt} <= Demand_{pkt}    \forall p, k, t    (3)
• Capacity constraints: there are maximum and minimum capacity constraints for the plants and warehouses of the SC:

\sum_p Q_{pit} \alpha_{pi} <= Cap_{it}    \forall i, t    (4)

Cap_{it} <= Cap_i    \forall i, t    (5)

Cap_i^{min} A_i <= Cap_i <= Cap_i^{max} A_i    \forall i    (6)

\sum_p \sum_k Y_{pjkt} \beta_{pj} SF_j = Cap_{jt}    \forall j, t    (7)

Cap_{jt} <= Cap_j    \forall j, t    (8)

\sum_c B_{jc} Cap_{jc} = Cap_j    \forall j    (9)
2.2. Objective function
The objective is to maximize both the NPV and the total consumer satisfaction:

max NPV    (10)
max TCSat    (11)

NPV = \sum_{t=1}^{T} CashFlow_t / (1 + ir)^t

CashFlow_1 = -CI = -(FCI + WC)    t = 1    (12)
CashFlow_t = Revenues_t - DE_t - IE_t - Taxes_t    t = 2, ..., T-1    (13)
CashFlow_T = Revenues_T - DE_T - IE_T - Taxes_T + WC    t = T    (14)
FCI = \sum_i (FC_i A_i + \gamma_i Cap_i) + \sum_j \sum_c FC_{jc} B_{jc}    (15)
Revenues_t = \sum_p Sales_{pt} price_p    \forall t    (16)
Sales_{pt} = \sum_i Q_{pit}    \forall t, p    (17)
IE_t = \sum_i (IE_i A_i + \eta_i Cap_i) + \sum_j \sum_c IE_{jc} B_{jc}    \forall t    (18)
DE_t = \sum_p \sum_i Q_{pit} CV_{pi} + \sum_p \sum_j \sum_k Y_{pjkt} CH_{pj} + \sum_p \sum_i \sum_j X_{pijt} CT_{ij} + \sum_p \sum_j \sum_k Y_{pjkt} CT_{jk} + inventory costs at the warehouses    \forall t    (19)
Taxes_t = (Revenues_t - Depreciation_t) tr    \forall t    (20)
CSat_t = (1/Hp) \sum_p Sales_{pt} / Demand_{pt}    \forall t    (21)
TCSat = (1/T) \sum_t CSat_t    (22)
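The cash flow bookkeeping of Eqs. (10)-(22) can be sketched as follows, assuming the reconstruction above; all numerical values are illustrative, and the direct and indirect expense series stand in for the full summations of Eqs. (18)-(19).

```python
import numpy as np

ir, tr, T = 0.10, 0.30, 5                  # interest rate, tax rate, horizon
revenues   = np.array([0., 120., 130., 125., 128.])
direct_exp = np.array([0.,  60.,  62.,  61.,  63.])
indirect   = np.full(T, 8.0)
deprec     = np.full(T, 10.0)
FCI, WC    = 150.0, 20.0

taxes = np.maximum(revenues - deprec, 0.0) * tr          # Eq. (20)
cash = revenues - direct_exp - indirect - taxes          # Eqs. (13)-(14)
cash[0] = -(FCI + WC)                                    # Eq. (12): t = 1
cash[-1] += WC                                           # working capital back at t = T
npv = sum(cf / (1 + ir)**t for t, cf in enumerate(cash, start=1))

sales  = np.array([[90., 95.], [80., 88.]])              # product x period
demand = np.array([[100., 100.], [100., 100.]])
csat_t = (sales / demand).mean(axis=0)                   # Eq. (21)
tcsat  = csat_t.mean()                                   # Eq. (22)
print(round(npv, 1), csat_t.round(2), round(tcsat, 3))
```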
3. Stochastic Model
The stochastic problem is characterised by two essential features: the uncertainty in the problem data and the sequence of decisions. In our case, the demand is considered as a random variable with a certain probability distribution. The binary variables associated with the opening of a plant/warehouse, as well as the continuous variables that represent the capacities of the plants/warehouses, are considered as first stage decisions. The flows of materials and the sales of products are taken as second stage or recourse variables. The objective functions are therefore the expected net present value and the expected consumer satisfaction.
4. Financial and Consumer Satisfaction Risk
The financial risk associated with a design project under uncertainty is defined as the probability of not meeting a certain target profit (Barbaro and Bagajewicz, 2002a,b). In this work, the risk of not meeting consumer satisfaction is defined in a similar way, that is, as the probability of not meeting a certain target consumer satisfaction. A composite risk is defined using two aspiration levels or targets (a profit Ω and a consumer satisfaction Ω') as follows:

CRisk(x, Ω, Ω') = P( NPV(x) < Ω ∧ TCSat(x) < Ω' )    (23)
In the case where the probabilities are independent, this risk results in the product of both risks. This is the assumption used in this article. Risk management Three different objectives: NPV, consumer satisfaction and compounded risk are considered. If one uses a utility function, the compounded risk can be manipulated by changing the weights of the objectives for different aspiration levels Q and Q.\ In order to avoid the use of binary variables the concept of downside risk (Drisk(x,Q)), introduced by Eppen et al. (1989), is used, as explained by Barbaro and Bagajewicz (2002a,b).
5. Case Study
We considered a problem with two possible production plant locations, three warehouses and four markets. The aim of the problem is to determine the optimal SC configuration that
maximises the NPV of the investment while at the same time maximising the consumer satisfaction. Figure 1 shows the Pareto curve for the deterministic case. This curve was obtained by maximizing the NPV and constraining the consumer satisfaction. The curve shows that only above a 66% consumer satisfaction level does a trade-off between the objectives exist. Below 66% of requested consumer satisfaction the solution is the same as that of the model without a consumer satisfaction constraint, and therefore all the Pareto solutions accumulate at the end point on the left. Figure 2 shows the same curve for the stochastic model. Figures 3 and 4 show the corresponding consumer satisfaction and financial risk curves of the Pareto solutions of the multiobjective stochastic problem. Unsupported solutions are suspected to exist, but this could also be the effect of the small number of scenarios (100) used. In future work this matter will be resolved.
Figure 1. Deterministic Pareto Curve (NPV vs. consumer satisfaction, %).
Figure 2. Stochastic Pareto Curve (expected NPV vs. expected consumer satisfaction, %).
Figure 4 depicts the financial risk curves associated with each point of the Pareto optimal curve. For example, the curve with no restriction on the consumer satisfaction (SP_E(CSAT)>0) is the one with the largest expected NPV. As the consumer satisfaction is constrained, the curves move to the left, thus reducing the expected net present value. The shape of the curves, however, remains fairly constant. The corresponding curves of consumer satisfaction risk are shown in Figure 3. The curves move to the right as the expected net present value is reduced. The shape in this case becomes steeper. To reduce the risk associated with the consumer satisfaction, the design was modified by limiting the downside risk at certain targets. Figure 5 shows the different risk curves associated with the penalisation of the consumer satisfaction risk at 69%, 72% and 75%. When the risk is limited, the expected profit (Figure 6) is reduced, that is, the financial risk curves move to the left. One important thing to notice is that the consumer satisfaction risk curves do not intersect. This is because the consumer satisfaction is not maximised, but the expected net present value is. In addition, the resulting designs present a higher expected consumer satisfaction. Figure 7 shows the composite risk curves (the consumer satisfaction is not
constrained). Figure 8 shows one composite risk Pareto curve and another composite risk curve where the risk is constrained at a target of 75%.
Figure 3. Consumer Satisfaction Pareto Risk Curves (curves shown for SP_E(CSAT) ≥ 0, 80, 95 and 100%).
Figure 4. NPV Pareto Risk Curves (NPV in mill. of euros).
Figure 5. Consumer Satisfaction Risk Curves (E(CSAT) > 0 and risk limited at targets of 69%, 72% and 75%).
Figure 6. NPV Risk Curves (NPV in mill. of euros).
Figure 7. Composite Risk Pareto Curves.
Figure 8. Composite Risk Curves.
6. Conclusions
A definition of composite risk has been presented and several Pareto optimal curves have been obtained. A stochastic programming approach was used to manage financial and consumer satisfaction risks in the design of supply chains.
7. Nomenclature
p - products; i - plants; j - warehouses; k - markets; c - discrete capacity types; t - time intervals
CV_pi - variable cost of p at i
CH_pj - handling cost of p at j
CT_ij - cost of transport between i and j
CT_jk - cost of transport between j and k
ir - interest rate; tr - tax rate; n - useful plant life; Hp - number of products
FC_i, γ_i - fixed cost coefficients at i
IE_i, η_i - indirect expense coefficients at i
IE_jc - indirect expenses at j of type c
CSat_t - consumer satisfaction in t; TCSat - total consumer satisfaction
A_i - binary variable (A_i = 1 if i is opened, A_i = 0 otherwise)
B_jc - binary variable (B_jc = 1 if j of type c is opened, B_jc = 0 otherwise)
α_pi - production capacity factor of p at i
β_pj - handling capacity factor of p at j
CI_j - inventory cost at j; λ_j - turnover inventory rate at j
SF_j - security design factor of j
Demand_pkt - demand of p at k in t; Price_p - price of p
Cap_jc - capacity of j of type c; Cap_it - capacity of i in t; Cap_jt - capacity of j in t
Cap_i - capacity of plant i; Cap_j - capacity of warehouse j
Q_pit - amount of p fabricated at i in t
X_pijt - amount of p transported from i to j in t
Y_pjkt - amount of p transported from j to k in t
8. References
Barbaro, A.F. and Bagajewicz, M., 2002a, Managing Financial Risk in Planning under Uncertainty, Part I: Theory, AIChE Journal, Submitted.
Barbaro, A.F. and Bagajewicz, M., 2002b, Managing Financial Risk in Planning under Uncertainty, Part II: Applications, AIChE Journal, Submitted.
Tsiakis, P., Shah, N. and Pantelides, C.C., 2001, Design of Multi-echelon Supply Chain Networks under Demand Uncertainty, Ind. Eng. Chem. Res. 40, 3585-3604.
Simchi-Levi, D., Kaminsky, P., Simchi-Levi, E., 2000, Designing and Managing the Supply Chain: Concepts, Strategies, and Case Studies, Irwin McGraw-Hill.
9. Acknowledgements
Financial support received from the European Community (project GROWTH, Contract G1RD-CT-2000-00318) is fully appreciated. Support from the Ministry of Education of Spain for the sabbatical stay of Dr. Bagajewicz is also acknowledged.
Operator Training and Operator Support using Multiphase Pipeline Models and Dynamic Process Simulation: Sub-Sea Production and On-Shore Processing
Morten Hyllseth & David Cameron, Fantoft Prosess AS, PO Box 306, NO-1301 Sandvika, Norway
Kjetil Havre, Scandpower Petroleum Technology AS, Postboks 3, NO-2027 Kjeller, Norway
Abstract
Recent advances in sub-sea oil production technology have changed the way offshore oil fields are developed. Many new field developments use sub-sea production facilities and pipe a multiphase mixture of gas, oil and water back to an existing platform or to onshore facilities for separation and processing. These pipelines can be very long (up to 100 km) and complex, with complicated branching and crossovers. Dynamic features of multiphase flow, such as slugging, can cause disturbances in the processing system. Operators need help in handling these. Model-based estimates of pressure, holdup and flow along the pipeline are thus needed. An integrated system for operator support and operator training is being delivered for a large sub-sea gas field in the Middle East. It builds upon a dynamic process simulator (D-SPICE) and a multiphase flow simulator (OLGA 2000). The simulator incorporates methods for simulating multiphase flow in closed-loop networks and tracking of hydrate inhibitors. The system is tuned to measured process performance. The system also supports predictive simulation facilities. The paper provides an industrial case study of computer-aided process engineering in a technically challenging environment. Novel methods for multiphase modelling in closed loops, real-time model system design and model tuning are presented.
1. Introduction and Objectives: Sub-Sea Processing and Transport
Oil production technology has changed drastically in recent years. Whereas once an oil or gas field would be built with a platform and on-site processing of fluids, fields are now being developed with sub-sea production equipment and long multiphase pipelines to existing platforms or shore facilities. These facilities have lower capital and operating costs than conventional platforms, and enable the development of what would otherwise be marginal fields. These new developments give rise to interesting operational challenges, most of which are related to the multiphase transport of oil, gas and water to the processing facilities. A mixture of oil, gas and water in a pipeline flows in a highly non-linear way. Depending on the topography of the pipeline and the total flow rate through the line, large accumulations, or slugs, of liquid can be produced. For this reason, a slug catcher - a separator with a large total volume and a small normal operating volume - is used to provide enough inventory to handle a slug. However, if a pipeline is long, it is likely that the liquid volume in the pipeline far exceeds the economically defensible volume of the slug catcher. The process must therefore be operated in such a manner that an unacceptable amount of slug flow is avoided.
Figure 1. A simplified schematic of the production system. The glycol recovery and distribution system is not shown.
Operation of complex sub-sea networks with long tie-backs is not straightforward. Consider the process shown in Figure 1. Throughput for the pipelines is controlled using the choke valves on the wells. Additional control can be obtained by manipulating the pressure in the slug catcher on shore. Note that the control elements, the choke valves, are installed 100 km from the variable to be controlled, namely the slug catcher level. Dead times are dominant, and a model is needed to predict the effect of a change in choke position on onshore behaviour. This paper describes the application of multiphase dynamic modelling to the monitoring and control of the process shown in Figure 1. This process is a gas development in the Middle East. The system that is delivered simulates process behaviour to provide estimates of local flow rate and inventory in the transport pipelines. Predictive simulations are used to identify actions that will cause slug problems.
2. Multiphase Modelling
The core of the monitoring system is a multiphase model of the transport pipelines. This uses a widely accepted simulation program called OLGA 2000. The principles behind this program have been described in detail elsewhere (Bendiksen, 1991; Scandpower, 2001).
3. System Modelling and System Integration
3.1. Overview
The system model consists of two elements: (1) a D-SPICE model of the on-shore facilities and sub-sea utilities and (2) an OLGA 2000 model of the sub-sea production wells and pipelines. These two models are fully integrated and share a common D-SPICE user interface.
3.2. OLGA modelling
The OLGA model consists of 202 km of piping, a reservoir inflow model for eight wells, 22 valves, the pipeline surroundings and sources for methanol and glycol injection.
3.2.1. Closed-loop modelling
The structure of the transport system posed a technical challenge. Multiphase flow models have been developed for merging flow networks, where the nominal flow is in one direction, from the branches of the tree (the production wells) to the root of the tree (the land facilities). However, in this system the dual export line and crossover arrangement impose a diverging flow network. The closed multiphase flow loops imposed by the parallel export pipelines in the Scarab/Saffron fields required expanding the multiphase flow network solver of OLGA to diverging networks.
3.2.2. Glycol tracking and methanol distribution
It is possible that the natural gas can react with water to form solid hydrates in the pipeline. If this happens, the pipe will be blocked with solids. For this reason, hydrate inhibitors, such as glycol or methanol, are injected into the sub-sea system. The process described here uses ethylene glycol (MEG) as a primary inhibitor and methanol as a backup inhibitor. MEG is recovered from the system in the onshore facilities, but methanol is discharged with the produced water. Both the MEG and the methanol systems are poorly instrumented. The total flow rate to the entire system is measured, but there is no measurement of glycol or methanol at each injection point. We therefore developed a hydraulic model of the glycol and methanol distribution networks and interfaced this with the onshore facilities for MEG recovery and the OLGA model. MEG tracking has been implemented in OLGA. MEG is included as a separate component in the aqueous phase in the pipeline. The MEG concentration in the aqueous phase can therefore be monitored to ensure that the pipeline system stays outside the hydrate formation region.
3.3. Modelling of onshore facilities
A model of the onshore facilities is needed for the following reasons: (1) the pressure control in the onshore facilities influences the behaviour of the transport pipelines and (2) the transport pipelines and wells must be operated so that constraints in the onshore facilities, such as vessel liquid capacities, system pressures and unit throughputs, are not violated. A simplified model was built of the onshore facilities. This model represents the main pieces of onshore equipment and all important throughput and capacity constraints. The constraints modelled are the slug catcher level, the slug catcher pressure, the condensate separator level, the pressure in the condensate separators, the product gas demand, the coalescer interface level, the levels in the storage tanks for condensate, produced water, MEG and the unseparated MEG-water mixture, and the capacities of MEG separation and condensate stabilisation. These constraints are modelled and monitored. An alarm is raised if the calculated value of a constraint goes out of its normal range or is predicted to do so in the near future. The alarming on predicted values is done through the regular execution of a look-ahead simulation.
3.4. Look-ahead simulations
One of the most valuable features of the system is its ability to run predictive simulations on a routine basis. These look-ahead simulations are run using a copy of the process model. This model is initialised with a snapshot of the current conditions in the real-time model. It is then run, as fast as possible, for a specified time into the future. Key variables in the model are monitored for alarm conditions. If an alarm is raised it is sent to a special look-ahead alarm list and is reported to the operator.
Future operator actions can be simulated by using a script file that executes the required actions on the
look-ahead model. This script is written in D-SPICE's proprietary scripting language, called Model Control Language (MCL). The predictive behaviour of key variables can be shown through a predictive trend display, as shown below. Historical data is shown to the left of the vertical cursor, and predicted data to the right.
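The look-ahead mechanism can be sketched conceptually as below. The one-equation slug catcher surrogate, the limits and the horizon are illustrative stand-ins for the actual D-SPICE/OLGA models and alarm configuration.

```python
import copy

def simulate_step(state, dt):
    # Toy surrogate: slug-catcher level driven by pipeline liquid arrival.
    state["level"] += (state["liquid_in"] - state["drain"]) * dt
    return state

def look_ahead(current_state, horizon_h=8.0, dt=0.1, hi_limit=0.85):
    """Clone the real-time model state, integrate it faster than real
    time, and raise a predictive alarm if a limit will be violated."""
    state = copy.deepcopy(current_state)       # snapshot of real-time model
    t = 0.0
    while t < horizon_h:
        state = simulate_step(state, dt)
        if state["level"] > hi_limit:
            return f"predicted HI level in slug catcher at t+{t:.1f} h"
        t += dt
    return None

snapshot = {"level": 0.55, "liquid_in": 0.06, "drain": 0.02}
alarm = look_ahead(snapshot)
if alarm:
    print(alarm)       # would go to the look-ahead alarm list
```

Scripted operator actions can be injected into `simulate_step` at future times in the same loop, which is what the MCL script mechanism provides in the delivered system.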
Figure 2. A screen dump of the user interface showing the look-ahead trend and look-ahead simulator control panel.
This example shows a very important monitoring variable, namely the level in the slug catcher. It is all too easy to operate the pipeline system so that liquid accumulates in the export pipelines. If this liquid comes out rapidly, as a slug, it is possible to overfill the slug catcher and shut down the onshore processing. The look-ahead simulator allows operators to predict whether a sub-sea operation, such as bringing another well on stream, will give rise to problems some time later. It also enables them to evaluate alternative ways of avoiding problems.
3.5. System integration
The system is integrated with the customer's process control and SCADA system. This is done using D-SPICE's DCS interfacing system. Since the client's control system supported the OLE for Process Control (OPC - http://www.opcfoundation.org) standard, we were able to use the standard D-SPICE OPC client interface without modification. Thus, an interface that we originally developed for an ABB control system was used successfully to communicate with a Yokogawa system. This is a radical improvement over the situation only a few years ago, where each control system
required a proprietary and expensive interface that often involved the purchase of extra control system hardware. In our opinion, OPC is a key enabling technology for applying computer-aided process engineering in real processes. For testing, we have replaced the control system with a copy of the model and an OPC server. This model simulates the process and allows the behaviour of the real-time system to be evaluated and tested.
Figure 3. System architecture for a typical multiphase process monitoring system.
3.6. Tuning
Tuning is essential for the correct performance of the system. Thus, measurements of flow and pressure are used to adjust well parameters and pipe friction. Temperature measurements are used to adjust heat transfer coefficients. Single-loop tuning algorithms are used. A system of logic and data validation is used to ensure that bad measurements are not used.
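A sketch of the single-loop tuning idea follows: one model parameter (here a hypothetical pipe friction multiplier) is nudged so that a simulated measurement tracks the plant, and a validity window rejects bad measurements before they can corrupt the parameter. The gain, limits and numbers are illustrative, not the delivered algorithm.

```python
def tune_friction(k, dp_measured, dp_model, gain=0.2,
                  lo=0.5, hi=2.0, valid=(0.0, 200.0)):
    """One update of a friction multiplier from a pressure-drop pair."""
    if not (valid[0] < dp_measured < valid[1]):
        return k                      # data validation: ignore bad signal
    # integral-type update: raise friction if the model under-predicts dp
    k_new = k * (1.0 + gain * (dp_measured - dp_model) / max(dp_model, 1e-6))
    return min(max(k_new, lo), hi)    # keep parameter physically plausible

k = 1.0
for dp_meas, dp_mod in [(52.0, 48.0), (53.0, 50.0), (250.0, 51.0)]:
    k = tune_friction(k, dp_meas, dp_mod)
    print(round(k, 3))                # the 250.0 sample is rejected
```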
4. Operator Interface and Operator Support Tools
A custom operator interface was built for this system. This interface is used in parallel with the control system. It has a similar look and feel to the control system, but concentrates specifically on variables that are estimated by the model. The interface provides access to estimated quantities; measured quantities used in the model; trends of key variables, and alarms on present estimates and predicted values. All graphical elements are animated to enable process status to be seen at a glance.
5. Results
At the time of writing, the system had successfully passed factory acceptance testing (FAT). This means that the system had been demonstrated to be able to communicate with the control system and had been able to monitor the performance of a simulated process. The process is being installed in November 2002 and is expected
to be in use from early 2003. Operational experience will therefore be presented at the meeting. On the basis of the FAT, we expect that the system will be a necessary tool for operating the difficult sub-sea production system.
Figure 4. An example of the user interface for one of the two export pipelines. The profile is seen by the operator and shows liquid hold-up (the highest curve, left-most axis), gas flow (next axis to the right, next highest curve) and aqueous phase holdup (left-most axis, lowest curve). The operator can see at a glance that there is a low gas flow rate, with the risk of slug formation.
6. Conclusions
Novel multiphase technology has been coupled with dynamic process simulation, open control system interfaces and a custom user interface to provide a necessary tool for process operators. We hope to be able to report interesting experience and observations from the commissioning of the system.
7. References
Bendiksen, K.H., et al., 1991, "The Dynamic Two-Fluid Model OLGA: Theory and Application", SPE Production Engineering, May 1991, 171-180.
Cameron, D.B., Ødegaard, R.J. and Glende, E., 2001, "On-line modelling in the petroleum industry: Successful applications and future perspectives", in: Gani, R. and Bay Jørgensen, S. (eds.), "European Symposium on Computer Aided Process Engineering - 11", Elsevier, Amsterdam, 111-116.
Scandpower Petroleum Technology AS, 2001, "OLGA 2000 User's Manual", Kjeller, Norway.
8. Acknowledgements
This work was carried out by a team that, with the authors, included Jørgen Ødegaard, Erik Glende, Johnny Nylund, Sverre Jørgensen, Gunnhild Baekken, Iris Andersen, Kersti Ekeland Bjurstrøm and Zheng Gang Xu. Their substantial contribution is acknowledged.
Unstable Behaviour of Plants with Recycle
Anton A. Kiss, Costin S. Bildea*, Alexandre C. Dimian and Piet D. Iedema
University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV, Amsterdam
*Delft University of Technology, Julianalaan 136, 2628 BL Delft, The Netherlands
Abstract
This article considers processes involving two reactants and two reactions. It is demonstrated that plantwide control relying on self-regulation results in regions of state multiplicity or unfeasibility, even if the stand-alone reactor has a unique, stable operating point. Moreover, when selectivity reasons require low per-pass conversion, instability is very likely.
1. Introduction
In today's economically competitive environment, chemical plants must cope with large changes in production rate, product specification and feedstock quality. In addition, exploiting a system's nonlinearity for an optimal design is current practice. However, the combination of non-linearity, disturbances and design uncertainty might lead to undesired phenomena: jumps to a new low-performance operating point, or oscillatory or even chaotic behaviour. State multiplicity and instability in a CSTR, a tubular reactor with axial dispersion or a catalytic particle are classic problems in chemical reaction engineering. Other representative systems are heat-integrated reactors and binary, thermally coupled, azeotropic or reactive distillation. With very few exceptions, the non-linear studies considered stand-alone units. However, coupling stable units does not guarantee the stability of the resulting system. Instability could occur due to the inability of the plantwide control system to manage the mass balance of the plant. For recycled reactants, the reactor must ensure that the entire amount fed into the process is transformed into products. Reactor - separation - recycle systems are self-regulating if only one reactant is involved, as in the case of the A -> P reaction. Control structures allowing a floating recycle and relying on the self-regulating property work well if the reactor is large enough (Larsson and Skogestad, 2000). If heat effects are considered, recycle systems involving either CSTRs (Pushpavanam and Kienle, 2001) or PFRs (Bildea et al., 2002) may exhibit state multiplicity and instability. These occur also in polymerization reactors (Kiss et al., 2002). When the reactor volume and the nominal conversion are small, the plant is very sensitive to disturbances. In this case, fixing the recycle and changing the reaction conditions (volume, temperature, pressure) results in better performance (Luyben et al., 1999). The above-cited studies considered the A -> P reaction, which can seldom be found in real plants. In this article we consider processes involving two reactants and two reactions. We demonstrate that plantwide control relying on self-regulation leads to regions of state multiplicity or unfeasibility, even if the stand-alone reactor has a unique, stable operating point. Moreover, when selectivity reasons require low per-pass
conversion, instability is very likely. Due to space limitations, we restrict the discussion to liquid-phase reactions and a well-mixed reactor (CSTR). The results are equally applicable to PFRs and gas-phase reactions. The analysis is based on dimensionless models. In general, f_i and z_i,k are dimensionless flow rates and concentrations, respectively. Subscripts i and k denote streams and components, respectively.
2. One-Recycle Systems
This section analyses the consecutive-parallel reactions A + B → P and A + P → R taking place in an isothermal CSTR - Separator - Recycle system. Such chemistry is common, for example, in butane - butene alkylation for iso-octane production, or in the reaction between ethylene oxide and alcohols. Usually, the intermediate product P is of interest. We present results for the case of adjacent reactant volatilities, when the flowsheet has only one recycle (Figure 1). When both reactants are completely recycled, feasible operation is possible only if the ratio of reactants in the feed matches exactly the stoichiometry. For this reason, one reactant feed is on flow control (f_A,0 = 1), while the feed flow rate of the second reactant (f_B,0) is used to control its inventory in the process. We assume a good separation, such that the product is pure. The reactants are completely recycled and the molar fraction of the product in the recycle stream is very low. The model includes reactor and separation equations, as well as the relation for the feed flow rate of the second component imposed by the control structure (Eqs. 1 to 8).
Figure 1. Control structure relying on self-regulation for a one-recycle plant.
Figure 2. Selectivity of product P for different kinetics (α = k2/k1).
Da (z_A,2 z_B,2 + α z_A,2 z_P,2) + f_2 z_A,2 - f_3 z_A,3 = 1    (1)
f_B,0 + f_3 z_B,3 - (f_2 z_B,2 + Da z_A,2 z_B,2) = 0    (2)
f_2 z_P,2 - Da (z_A,2 z_B,2 - α z_A,2 z_P,2) = 0    (3)
z_A,2 + z_B,2 + z_P,2 + z_R,2 = 1    (4)
1 + f_R - f_2 = 0    (5)
f_2 z_A,2 - f_3 z_A,3 = 0    (6)
f_2 z_B,2 - f_3 z_B,3 = 0    (7)
z_A,3 + z_B,3 + z_P,3 = 1,   f_R - f_3 - f_B,0 = 0    (8)
Two steady-state solutions are possible. The selectivity of product P, S_P/A, is given in Figure 2 as a function of the Damköhler number. Figures 3 and 4 show the conversion of the key component, X_A, for different values of the separation performance (z_P,3) and recycle flow rate (f_R). All diagrams exhibit an unfeasibility region at low Damköhler values. When the Damköhler number exceeds the critical value corresponding to the turning point of the Da - X_A diagram, two feasible steady states exist. The critical value Da* represents a limit point of the balance equations. Then, the following feasibility condition (i.e. existence of steady states) can be derived:

Da > 4 (1 + f_R)² / f_R²    (9)
A higher recycle rate (Figure 3) shifts the limit point to lower conversion and Damköhler number values. As a result, state multiplicity has a diminished practical importance for high recycle rates. However, this might not be economically feasible due to the increased costs incurred by a higher recycle rate. In addition, at a given recycle rate, a poorer separation - i.e. a higher molar fraction of product in the recycle - shifts the fold to higher conversions and Damköhler numbers (Figure 4). Consequently, the critical reactor volume required in a recycle system increases. In order to minimize the effects of state multiplicity it is recommended to have a good separation - i.e. no product in the recycle stream.
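Since the plant model (1)-(8) is a small algebraic system, the two steady states can be located numerically. The following is a minimal sketch of such a calculation with SciPy, based on the equations as reconstructed above; the variable ordering, parameter values and initial guesses are illustrative assumptions, not the authors' implementation.

```python
# Sketch: solve the steady-state balances (1)-(8) of the one-recycle plant.
import numpy as np
from scipy.optimize import fsolve

def balances(x, Da, fR, alpha, zP3):
    zA2, zB2, zP2, zR2, zA3, zB3, f2, f3, fB0 = x
    r1 = Da * zA2 * zB2            # rate of A + B -> P
    r2 = alpha * Da * zA2 * zP2    # rate of A + P -> R
    return [r1 + r2 + f2 * zA2 - f3 * zA3 - 1.0,  # (1) A balance
            fB0 + f3 * zB3 - f2 * zB2 - r1,       # (2) B balance
            f2 * zP2 - (r1 - r2),                 # (3) P balance
            zA2 + zB2 + zP2 + zR2 - 1.0,          # (4) summation, reactor outlet
            1.0 + fR - f2,                        # (5) flow at reactor outlet
            f2 * zA2 - f3 * zA3,                  # (6) complete recycle of A
            f2 * zB2 - f3 * zB3,                  # (7) complete recycle of B
            zA3 + zB3 + zP3 - 1.0,                # (8) summation, recycle
            fR - f3 - fB0]                        # (8) make-up of B (control)

Da, fR, alpha, zP3 = 60.0, 8.0, 0.1, 0.0
for zA2_guess in (0.05, 0.4):      # two guesses pick up the two branches
    x0 = [zA2_guess, zA2_guess, 0.02, 0.0, 0.5, 0.5, 1.0 + fR, fR, 1.0]
    sol, _, ier, _ = fsolve(balances, x0, args=(Da, fR, alpha, zP3),
                            full_output=True)
    if ier == 1:
        zA2, zA3, f2, f3 = sol[0], sol[4], sol[6], sol[7]
        XA = 1.0 - f2 * zA2 / (1.0 + f3 * zA3)   # per-pass conversion of A
        print(f"guess {zA2_guess}: zA2 = {zA2:.4f}, XA = {XA:.3f}")
```

Repeating the solve over a grid of Da values reproduces the fold structure of Figures 3 and 4.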
Figure 3. Da - X_A bifurcation diagram for different recycle flow rates, f_R.
Figure 4. Da - X_A bifurcation diagram for different separation performances, z_P,3.
Figure 5. Instability of the low-conversion operating point in one-recycle systems.
In order to assess the stability of the two steady states, an AspenPlus simulation was developed. The physical properties of the A, B, P, and R species correspond to butene, butane, iso-octane and C12H26, respectively. The operating point was chosen near the fold, on the low-conversion, high-selectivity branch. The units were designed and a rigorous steady-state simulation was performed. The flowsheet was exported to AspenDynamics, where control loops were provided and tuned. It turned out that the nominal operating point is unstable. Figure 5 shows the shift from the low- to the high-conversion branch. The change of operating point is also accompanied by a decrease of selectivity.
3. Two-Recycle Systems
In this section we consider the consecutive reactions A + B → 2P and 2A → P + R. This chemistry can be found, for example, in the Tatoray process.
Figure 6. Control structure relying on self-regulation for a two-recycle plant.
Depending on the physical properties of the species involved, several flowsheets are possible. If the reactants A and B are lighter and heavier, respectively, than the main product P, the flowsheet involves two recycle streams. Plantwide control includes control loops for reactor level and temperature, as well as for the top and bottom purities of the distillation columns or, equivalently, temperatures (Figure 6). One reactant feed (A) is on flow control, f_A,0 = 1, relying on self-regulation. The main advantage of this configuration is that the production rate can be easily set. The flow rate of reactant B at the reactor inlet, f_R,B, is fixed. The feed rate of reactant B, f_B,0, is used to control the inventory of this reactant at some location, for example an intermediate storage tank. For this control structure, the following equations can be derived:

Da (z_A,2 z_B,2 + α z_A,2²) + f_2 z_A,2 - f_3 z_A,3 = 1    (10)
f_B,0 + f_5 z_B,5 - (f_2 z_B,2 + Da z_A,2 z_B,2) = 0    (11)
f_2 z_A,2 - f_3 z_A,3 = 0    (12)
f_2 z_B,2 - f_5 z_B,5 = 0    (13)
f_R,B - f_5 z_B,5 - f_B,0 = 0    (14)
f_2 - f_3 - f_5 - f_B,0 - 1 = 0    (15)
Figures 7, 8 and 9 show conversion versus Damköhler number bifurcation diagrams for different recycle rates, separation performances and kinetics. In all cases the low-conversion branch is closed-loop unstable, independently of the separation dynamics. In order to assess the stability of the two steady states, an AspenPlus simulation was developed. The physical properties of the A, B, P, and R species correspond to toluene, trimethylbenzene, xylene and benzene, respectively. The operating point was chosen near the fold on the low-conversion branch.
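Diagrams like Figures 7-9 can be traced numerically by natural continuation: step the Damköhler number and restart the solver from the previous solution. A sketch follows, assuming a residual function `balances(x, Da, *args)` for Eqs. (10)-(15) analogous to the one shown earlier; step sizes and starting points are illustrative.

```python
# Sketch: trace one branch of the Da-XA diagram by natural continuation.
import numpy as np
from scipy.optimize import fsolve

def trace_branch(balances, x0, Da_grid, args=()):
    branch, x = [], np.asarray(x0, float)
    for Da in Da_grid:
        x_new, _, ier, _ = fsolve(balances, x, args=(Da, *args),
                                  full_output=True)
        if ier != 1:
            break                 # stepped past the turning point Da*
        x = x_new
        branch.append((Da, x.copy()))
    return branch

# e.g. descend in Da from a high-conversion solution until Da* is bracketed:
# branch = trace_branch(balances, x_high, np.arange(60.0, 5.0, -0.5), (fR, alpha))
```

Near the turning point the Jacobian becomes singular and the solver stalls, which is itself a cheap numerical indicator of the fold.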
Figure 7. Da - X_A bifurcation diagram for different recycle flow rates, f_R.
Figure 8. Da - X_A bifurcation diagram for different separation performances, z_B,5.
Figure 9. Da - X_A bifurcation diagram for different kinetics (α = k2/k1).
Figure 10. Instability of the low-conversion operating point in two-recycle systems.
The units were designed and a rigorous steady-state simulation was performed. The flowsheet was exported to AspenDynamics, where control loops were provided and tuned. It turned out that the nominal operating point is unstable. Figure 10 shows the shift from the low- to the high-conversion branch. This occurs after a long period of misleading stationary behaviour.
4. Conclusions
- State multiplicity occurs in systems involving two reactants if the control structure implies self-regulation of the mass balance for one reactant. Feasible operating points exist only if the reactor volume (i.e. the Da number) exceeds a critical value.
- When multiple states exist, the low-conversion ones are unstable. The instability manifests itself as a jump to high-conversion points, or as infinite accumulation of some reactant.
- In multi-reaction systems, economic optimality implies high selectivity. This can be achieved at low per-pass conversion operating points, which might be unfeasible due to the low-conversion instability.
- Considering only the nominal steady state, the chosen operating point can lead to a difficult-to-control or even inoperable plant. Nonlinear analysis is a way to identify and avoid such dangerous situations at the design stage.
5. References
Bildea, C.S., Cruz, S.C., Dimian, A.C. and Iedema, P.D., 2002, European Symposium on Computer Aided Process Engineering - 12, The Hague, The Netherlands.
Kiss, A.A., Bildea, C.S., Dimian, A.C. and Iedema, P.D., 2002, Chemical Engineering Science, 57(2), 535.
Pushpavanam, S. and Kienle, A., 2001, Chemical Engineering Science, 56, 2837.
Larsson, T. and Skogestad, S., 2000, Modelling, Identification and Control, 21, 209.
Luyben, W.L., Tyreus, B.D. and Luyben, M.L., 1999, Plantwide Process Control, McGraw-Hill, New York.
Development of an Intelligent Multivariable Filtering System based on the Rule-Based Method
S.P. Kwon, Y.H. Kim, J. Cho and E.S. Yoon
Institute of Chemical Processes, Seoul National University, San 56-1, Shillim-dong, Kwanak-gu, 151-744 Seoul, Korea (ROK), e-mail: [email protected], [email protected], [email protected], [email protected]
Abstract
In this work an intelligent multivariable filtering system (IMFS) based on the rule-based method is developed for on-line monitoring of the states of a transient chemical process. Basically, IMFS consists of three main components: the inference engine, the knowledge database and the state estimation. The states of such a transient process often cannot be observed adequately by a single fixed filter. For that reason the filter should often be changed as the situation changes during the overall period of state estimation. In IMFS a certain filter is chosen by the cooperation of the inference engine and the knowledge database. Since the rule-based method makes use of well-established knowledge from long experience, IMFS gives consistent and reliable results. Moreover, the decision rules integrated in IMFS can be flexibly changed to follow variations of the standard operating manual. In practice, IMFS is built in the SIMULINK environment. As a case study, a continuous polymerization reactor was stochastically simulated and sequentially filtered by two different filters.
1. Introduction
There are more and more demands for special chemical products with beneficial properties. Chemical reaction processes related to the production of fine and specialty chemicals therefore have to be estimated accurately by using on-line estimation techniques. The main activities of process operators of fully automated chemical plants are fault detection and diagnosis, and correcting deviated states, based on estimated process variables such as temperature, pressure and density. In practice, stochastic observers, which are simply called filters, are implemented for estimating the complete state vector from the noisy measured process data, and then all variables can be derived from the estimated state vector (Kwon and Wozny, 1999; Vankateswarlu and Avantika, 2001). For that reason a wide variety of filter algorithms have been developed and a number of works have been carried out for on-line monitoring of various nonlinear chemical processes (Jazwinski, 1970; Ramirez, 1994; Guiochon, et al., 1995). Especially, Kalman filter algorithms have often been used for on-line monitoring of polymerization reaction processes (Schuler and Suzhen, 1985; Gagnon and MacGregor, 1991; Kozub and MacGregor, 1992; Boem and Roeck, 1994; Wang, et al., 1995; Mourikas, et al., 1998).
Practical applications of the selected filter are highly diverse, with each case having peculiarities of its own (Haykin, 2002). A steady-state filter, called the Wiener filter, is effective for stationary inputs, but it is not sufficient for dealing with situations in which the signal and noise are not stationary. In this case the optimal time-varying filter, called the Kalman filter, is available for a range of applications. In addition, the adaptive Kalman filter, in which the observation noise covariance is sequentially updated by using an FIR (finite-duration impulse response) filter algorithm, is practical because of its tracking capability (Chen and Rutan, 1996). Occasionally, a single type of filter is not sufficient to carry out on-line monitoring of process transitions, because chemical processes in abnormal situations show extremely large changes from one steady state to another. For instance, there are frequent manual changes of the product grade in continuous chemical processes. It has been emphasized that manual errors account for about 40 percent of all causes of accidents in the chemical and petrochemical industry (Bhagwat, et al., 2001). On-line monitoring systems that detect faults during process transitions are therefore very important for reducing manual errors.
2. Optimal Filtering Algorithms
In general, the nonlinear time-variant process model and the nonlinear observation model with measurement noise are represented by

ẋ(t) = f(x, u, t) + w(t),   x(0) = x_0,    (1)
y(t) = h(x, u, t) + v(t),    (2)
where x(t) is the n-dimensional state vector, u(t) is the m-dimensional input vector, and f is a nonlinear state function of x(t) and u(t). w(t) and v(t) are additive white noises with zero mean. The initial state vector x(0) is a Gaussian random vector with mean x_0. y(t) is the r-dimensional output vector, and h is a nonlinear measurement function of x(t) and u(t). w(t) and v(t) are independent of x(0). In addition, the covariance matrices of (x(0) - x_0), w(t), and v(t) are P_0, Q(t) and R(t), respectively.
2.1. Steady-state optimal filter
Inherently, the Kalman filter is an optimal algorithm, in which the linear state-space model is used to predict the state vector x(t) between the sampling intervals, and then all the estimated states are filtered by using the past state estimate and the newly obtained measurement vector y(t). The optimal state estimate is obtained by minimizing a quadratic performance functional, in which both state and measurement uncertainties are included consistently. By using the boundary conditions for minimizing this performance functional, the Riccati equation can be solved. As a result, the Kalman filter algorithm is obtained:

x̂̇ = F x̂ + P Hᵀ R⁻¹ (y - H x̂),    (3)
Ṗ = P Fᵀ + F P + Q - P Hᵀ R⁻¹ H P,    (4)
where x̂_0 and P_0 are the initial state estimate and its covariance. The symbol x̂ is the state estimate, and P is the covariance of the state estimation error, which is calculated backward in time from its known final value. The filter gain K is defined as P Hᵀ R⁻¹ and can be computed independently of the dynamic state response, because it is only a function of system and performance parameters. If the calculation of P is examined at infinite time, then Eq. (4) approaches a steady state. Moreover, the filter gain becomes a constant value if the system is observable and controllable (Ogata, 1987).
2.2. Time-variant optimal filter
The steady-state optimal Kalman filter can be generalized for time-variant systems or time-invariant systems with non-stationary noise covariance. The time-varying Kalman filter is calculated in two steps, filtering and prediction. For the nonlinear model the state estimate may be relinearized to compensate for the inadequacies of the linear model. The resulting filter is referred to as the extended Kalman filter. Once a new state estimate is obtained, a corrected reference state trajectory is determined in the estimation process. In this manner the filter reduces deviations of the estimated state from the reference state trajectory (Kwon and Wozny, 1999; Vankateswarlu and Avantika, 2001). In the first step the state estimate and its covariance matrix are corrected at time t_k by using the new measurement values y(t_k):
x̂(t_k) = x̂(t_k|t_k-1) + K(t_k) [y(t_k) - H x̂(t_k|t_k-1)],    (5)
K(t_k) = P(t_k|t_k-1) Hᵀ [R + H P(t_k|t_k-1) Hᵀ]⁻¹,    (6)
P(t_k) = [I - K(t_k) H] P(t_k|t_k-1),    (7)
where the process and observation matrices are F = (∂f/∂x)|x=x̂ and H = (∂h/∂x)|x=x̂. In the second step the state estimate and its covariance are predicted forward to the next sampling time by integrating the deterministic functions (Kwon and Wozny, 1999).
2.3. Adaptive recursive least-square filter
For the regular filter the observation noise covariance R is a constant matrix determined before state estimation. On the other hand, the measurement noise covariance R(t) may be adjusted to compensate for estimation errors. Using a finite-duration impulse response (FIR) filter algorithm, the observation noise covariance can be adjusted during the state estimation process:
R(t_k) = Σ_{j=1..p} ω_j [v(t_k - t_j) v*(t_k - t_j)] - H P(t_k) Hᵀ,    (8)
where the filter order p is the number of delay elements corresponding to a smoothing window. The symbol ω_j is the respective tap weight, and the asterisk denotes complex conjugation. The second term on the right-hand side is used to account for the parameter estimation errors. According to Eq. (8), the on-line observation residual v(t_k), which is the difference between the observation and the prediction, will result in a large observation noise covariance if another component makes a contribution to the observation. Hence, if the on-line residual becomes too large, then the small weight factor will turn off the filter, and the number of delay elements will not be used to estimate the parameters (Haykin, 2002; Ogata, 1987).
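For reference, the correction step of Eqs. (5)-(7) and the adaptive covariance update of Eq. (8) can be written compactly. The sketch below assumes generic array shapes and given tap weights; it is not the authors' implementation.

```python
# Sketch of the discrete correction step (Eqs. 5-7) and FIR update (Eq. 8).
import numpy as np

def kalman_correct(x_pred, P_pred, y, H, R):
    """Correction step, Eqs. (5)-(7); returns estimate, covariance, innovation."""
    K = P_pred @ H.T @ np.linalg.inv(R + H @ P_pred @ H.T)   # (6)
    x = x_pred + K @ (y - H @ x_pred)                        # (5)
    P = (np.eye(len(x)) - K @ H) @ P_pred                    # (7)
    return x, P, y - H @ x_pred                              # innovation v(tk)

def adaptive_R(innovations, weights, H, P):
    """Eq. (8): window of the last p innovations with tap weights w_j."""
    S = sum(w * np.outer(v, v) for w, v in zip(weights, innovations))
    return S - H @ P @ H.T

# The prediction step integrates the deterministic model between samples, e.g.
# x_pred = x + dt * f(x, u); P_pred = P + dt * (F @ P + P @ F.T + Q)
```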
3. Intelligent Multivariable Filtering System (IMFS)
For the state estimation a certain filter can be automatically selected by the cooperation of the inference engine and the knowledge database in IMFS, as shown in Figure 1. The knowledge database is the sum of knowledge about facts and rules, whereas the inference engine is the general problem-solving knowledge. The rules, represented as "if-then" sentences, are an effective method to represent recommendations, directions, strategies, etc. The interpreter compares the fact with the condition; if the fact satisfies the given condition, then the instructed action is executed (Suh, 1997). Filters can thus be used for estimating the state variables of a transient process sequentially. The rule-based method makes use of well-established knowledge from long experience and gives consistent and reliable results. It can also find a suitable filter more rapidly than other knowledge representation methods, such as the procedural method and formal logic. Furthermore, the decision rules integrated in the knowledge database can be flexibly changed to follow variations of the standard operating manual.
Figure 1. On-line monitoring of the transient states by using IMFS.
Figure 2. Integration of the nonlinear model in a block of SIMULINK.
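The inference step itself can be very small. A sketch of the rule-based selection follows, with illustrative (assumed) situations, conditions and filter names rather than the rules actually encoded in IMFS.

```python
# Sketch of rule-based filter selection: the knowledge database maps a
# classified situation to a filter; the inference engine executes the rule.
def classify_situation(residual_variance, threshold=1.0):
    # the interpreter compares the fact (residual statistics) to a condition
    return "transient" if residual_variance > threshold else "steady"

RULES = {                      # knowledge database: situation -> filter
    "steady":    "steady-state Kalman filter",
    "transient": "time-varying (extended) Kalman filter",
}

def select_filter(residual_variance):
    situation = classify_situation(residual_variance)
    return RULES[situation]    # matched "if-then" rule fires
```

Because the rules live in a plain table, changing them to follow a revised standard operating manual does not require touching the estimation code.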
4. Continuous Polymerization Reactor: A Case Study
The polymerization reaction system studied in this work consists of methyl methacrylate (MMA) as monomer, azo-bis-isobutyronitrile (AIBN) as initiator, and ethyl acetate as solvent. The free-radical polymerization reaction is assumed to take place in a continuous stirred tank reactor with a cooling water jacket. The polymerization reactor is designed to produce a given amount of polymer product, i.e. the rate of the output stream is constant. The rigorous nonlinear process model of the continuous polymerization reactor consists of a number of differential and algebraic equations describing the reaction kinetics, the gel and glass effects, the volume change, the gas-liquid phase equilibrium in the reactor, the energy balance, and the polymer molecular weight moments (Kwon and Wozny, 1999; Guiochon, et al., 1995; Boem and Roeck, 1994; Wang, et al., 1995). The material balance consists of three terms for the accumulation, the reaction and the mass flow. These equations are valid for four components: the initiator, the monomer, the solvent, and the chain transfer agent. The general mechanism of the polymerization reaction gives the reaction rates of each component under the QSSA (quasi-steady state approximation) and LHC (long chain hypothesis) assumptions. The energy balance contains four terms for the heat accumulation, the heat generation, the heat flow and the heat removal through the cooling jacket. The polymer molecular weight and its distribution are described by the balances of the lowest three moments of the active polymer molecules. By using the rigorous nonlinear model, the dynamic behavior of the continuous polymerization reactor can be simulated accurately. The nonlinear model was programmed in MATLAB and, for convenient use, inserted into a block of SIMULINK, the dynamic simulator of the MATLAB package, as shown in Figure 2. Using the symbolic model in SIMULINK, stochastic simulation can be carried out without trouble. In Figures 3 and 4 we can see that the polymerization process reaches a steady state about 3 hours after the reaction start, and that there is a transition at 20 hours, with drastic changes of the conversion, the molecular weight and the temperature. Furthermore, the steady-state Kalman filter and the time-varying Kalman filter can be effectively switched in order to estimate the state of the continuous polymerization reactor. Both filters showed satisfactory performance in the steady-state region, that is, three hours after the startup of the polymerization reaction. However, we can see clearly that the time-varying Kalman filter is preferred to the steady-state one in the drastically varying region, as shown in Figures 5 and 6. In abnormal situations it is recommended to use two or more different filters sequentially according to the condition. The selection method, called IMFS, was suggested for practical applications. IMFS is a reliable and consistent system because it is based on well-established knowledge from long experience. As the case study, the continuous polymerization reactor was investigated. The nonlinear model was developed and integrated in a block of SIMULINK for convenient use. By using this program, stochastic simulations were carried out, and a few different filter algorithms were evaluated by simulation.
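The stochastic simulations of Figures 3 and 4 amount to integrating the noisy model form of Eqs. (1)-(2). A minimal Euler-Maruyama sketch is shown below, with f, h and the noise covariances as placeholders for the rigorous reactor model described above.

```python
# Sketch: stochastic simulation of x' = f(x,u) + w with noisy observations
# y = h(x,u) + v; f, h, Q and R are assumed to be supplied by the model.
import numpy as np

def simulate(f, h, x0, u, dt, n_steps, Q, R, rng=np.random.default_rng(0)):
    x, X, Y = np.asarray(x0, float), [], []
    for k in range(n_steps):
        w = rng.multivariate_normal(np.zeros(len(x)), Q)
        x = x + dt * f(x, u(k * dt)) + np.sqrt(dt) * w       # process noise
        v = rng.multivariate_normal(np.zeros(R.shape[0]), R)
        X.append(x.copy()); Y.append(h(x, u(k * dt)) + v)    # measurement noise
    return np.array(X), np.array(Y)
```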
Figure 3. Stochastic simulation of the conversion and the molecular weight.
Figure 4. Stochastic simulation of the temperature and the coolant feed.
Figure 5. Filtering of the temperature and the conversion using a fixed filter.
Figure 6. Filtering of the temperature and the conversion using two filters.
5. Conclusions
In abnormal situations it is recommended to use two or more different filters sequentially according to the condition. IMFS can suggest a suitable filter for the practical application. IMFS is a reliable and consistent filtering system based on well-established knowledge from long experience. The continuous polymerization reactor was investigated as a case study. The nonlinear model was developed and integrated in a block of SIMULINK for convenient use. By using this program, stochastic simulations were carried out and filter algorithms were evaluated. One of the most important advantages of IMFS is that it can help operators to reduce the time and effort necessary to find proper filters.
6. References
Bhagwat, A.M., Srinivasan, R. and Krishnaswamy, P.R., 2001, 4th IFAC Workshop on On-line Fault Detection & Supervision in the Chemical Process Industries, 75.
Boem, D. and Roeck, H., 1994, 3rd IEEE Conference on Control Applications, 1277.
Chen, J. and Rutan, S.C., 1996, Analytica Chimica Acta, 335, 1-10.
Gagnon, L. and MacGregor, J.F., 1991, Can. J. Chem. Eng., 69, 648.
Guiochon, S., Defaye, G., Vidal, C. and Caralp, L., 1995, DYCORD+'95, 238.
Haykin, S., 2002, Adaptive Filter Theory, 3rd ed., Prentice Hall, NJ.
Jazwinski, A.H., 1970, Stochastic Processes and Filtering Theory, Academic Press.
Kozub, D.J. and MacGregor, J.F., 1992, Chem. Eng. Sci., 47, 1047.
Kwon, S.P. and Wozny, G., 1999, Comp. & Chem. Engng. Suppl., 273.
Mourikas, G., Morris, A.J. and Kiparissides, C., 1998, DYCORD+'98, 682.
Ogata, K., 1987, Discrete-Time Control Systems, Prentice Hall, Chap. 7.
Ramirez, W.F., 1994, Process Control and Identification, Academic Press.
Schuler, H. and Suzhen, Z., 1985, Chem. Eng. Sci., 40, 1891.
Suh, J.C., 1997, Dissertation, Seoul National University, 59-78 (in Korean).
Vankateswarlu, C. and Avantika, S., 2001, Chem. Eng. Sci., 56, 5771.
Wang, Z.L., Pla, F. and Corriou, J.P., 1995, Chem. Eng. Sci., 50, 2081.
Multiple-Fault Diagnosis using Dynamic PLS Built on Qualitative Relations
Gibaek Lee¹, En Sup Yoon²
¹Department of Industrial and Engineering Chemistry, ChungJu National University, Chungbuk, 380-702, Korea, email: [email protected]
²School of Chemical Engineering, Seoul National University, Seoul, 151-742, Korea
Abstract
This study suggests a hybrid of a qualitative model-based method and a quantitative data-based method to improve diagnosis accuracy and resolution, and to promote reliable diagnosis of multiple-faults. The proposed method is based on the signed digraph, which offers a simple and graphical representation of the qualitative relationships between process variables. On the local causal relationships of each variable in the signed digraph, a PLS model is built to predict the target variable of the causal relationships. In order to handle process dynamics accurately, the input of the PLS model uses the current values of the source variables as well as the past values of the source and target variables. The measured and predicted values are compared to diagnose faults.
1. Introduction
Fault diagnosis is used to analyze process data on-line, monitor process trends, and diagnose faults in abnormal situations. It helps operators in decision-making and ultimately assists them to keep operation continuous and safe more efficiently. Considering the characteristics of chemical processes, the minimum requirements for a fault diagnosis methodology have been suggested as speed, accuracy, resolution, robustness, portability, and reliability (Finch, 1989). Among these factors, reliability means that accurate diagnosis can be expected for all faults, including novel faults and predictable multiple-faults. A multiple-fault is two or more faults that happen simultaneously or sequentially, and can be classified into four categories: induced fault, independent multiple-fault, dependent multiple-fault, and masked multiple-fault (Lee et al., 1999). A masked multiple-fault is a set of faults of which some can explain all symptoms of the others; it cannot be recognized using the qualitative method only. Diagnostic methods for chemical processes are broadly classified into ones that use a process model and ones that rely on process history data, and into qualitative and quantitative ones, respectively (Venkatasubramanian, 2000). This study suggests a new hybrid method based on the signed digraph (SDG) among qualitative model-based methods, and partial least squares (or projection to latent structures, PLS) among quantitative history data-based methods. The proposed method will be illustrated with the case study of a heat exchanger and a CSTR. This process was originally used by Kramer (1987) and is simulated with the model of Sorsa (1991). The sampling interval is 5 s, and the total diagnosis time is 2000 s. The first or single fault occurs at 100 s, and the second fault occurs at 100 or 200 s. The diagnosis results of this study will be compared with those of the qualitative method suggested in our previous study (Lee et al., 1999).
Figure 1. Reduced digraph of the CSTR process.
2. Suggested Method
2.1. Off-line analysis
1. Modification of SDG: After the SDG is built, the reduced digraph is obtained by removal of the unmeasured nodes from the SDG (Kramer, 1987). The reduced digraph of the CSTR process is shown in Figure 1. In addition, physically feasible faults for each piece of equipment are defined and added as root nodes in order to handle only physically meaningful faults (Lee et al., 1999).
2. Dynamic PLS model: Each arc in the SDG represents the instant effect from a source node to a target node. All source nodes connected by arcs to a target node have a direct influence on it; that is, only the source nodes connected to a target node can affect the target node. Therefore, a PLS model based on the source variables connected to a target variable can predict the value of the target variable. The output Y of the model is the target variable, and the input X of the model consists of the source variables of the target variable. In order to handle process dynamics accurately, PLS is integrated with ARMAX, referred to as dynamic PLS (DPLS). The resulting input of DPLS for a target variable includes the current values of the source variables as well as the past values of the target and source variables. For example, the DPLS model based on the measured variables FR, FW, and T can predict TR in the CSTR process. If the current value and one previous value are used as input data, the input matrix X for the estimation of TR is formed as
      | FR(1)  FW(1)  T(1)  TR(0)    FR(0)    FW(0)    T(0)    |
      | FR(2)  FW(2)  T(2)  TR(1)    FR(1)    FW(1)    T(1)    |
X =   |  ...    ...    ...   ...      ...      ...      ...    |    (1)
      | FR(t)  FW(t)  T(t)  TR(t-1)  FR(t-1)  FW(t-1)  T(t-1)  |
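Building this lagged input matrix and fitting the regression is straightforward. A sketch with scikit-learn follows; the lag-handling helper is an assumption matching the structure of Eq. (1), not the PlantAnalyst implementation used in the paper.

```python
# Sketch: assemble the DPLS input of Eq. (1) and fit a PLS model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def lagged_inputs(sources, target, lag=1):
    """sources: (T, m) array, target: (T,) array; returns X, Y for DPLS."""
    T = len(target)
    rows = []
    for t in range(lag, T):
        past = [target[t - j] for j in range(1, lag + 1)]   # past target values
        for j in range(1, lag + 1):
            past.extend(sources[t - j])                     # past source values
        rows.append(np.concatenate([sources[t], past]))
    return np.array(rows), target[lag:]

# e.g. predict TR from FR, FW and T with one time lag, as in Eq. (1):
# X, y = lagged_inputs(np.column_stack([FR, FW, T]), TR, lag=1)
# model = PLSRegression(n_components=3).fit(X, y)
# residual = y - model.predict(X).ravel()   # input to the CUSUM detector
```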
3. Learning data: Operation data is needed to build the DPLS models. In the suggested method, each DPLS model uses only the data set representing the local relations from the input variables to the output variable. The required data set for each model can be provided by normal operation data such as set-point changes, which occur frequently. Therefore, the suggested method does not need faulty-case data sets, which are rarely available.
4. Determination of model parameters: The necessary number of time lags l and the number of principal components (PCs) are determined from the learning data. The number l is usually 1 or 2, which indicates the order of the dynamic system. The design method is analogous to the method used in the study of dynamic PCA (Ku et al., 1995), and uses the cross-correlation plots of the scores to determine the number of PCs. The numbers of time lags and PCs for the DPLS models of the CSTR process are shown in Table 1. The input matrix X of equation (1) is made with the determined number of time lags, and the DPLS model is developed with the multivariate statistical package PlantAnalyst®.
5. Preparation of the fault-set table: Fault detection is performed by observation of the residual, which is the difference between the measured value and the value predicted by DPLS. Thus, the faults inducing the variation of each residual are classified by the sign of the residual. The qualitative state of the residual, (+) or (-), becomes a symptom, which is expressed as the pair of the output variable and the sign of the residual. The classified faults are stored in a set (called a fault set). The faults can be classified into two types. One type consists of faults added to the output variable; the sign of the symptom caused by these faults is the same as the sign of the arc connecting the fault to the output variable. The other type consists of sensor faults of the measured variables other than the output variables in the DPLS model (this paper does not consider independent sensor faults); the sign of the symptom caused by these faults is the reverse of the product of the sign of the sensor fault and the sign of the arc connecting the input variable to the output variable. The fault set for the example process is shown in Table 1.

Table 1. Number of PCs, time delay, and fault table of the CSTR process.
Residual variable | Faults for (+) residual | Faults for (-) residual
CAO | FEED-CCH | FEED-CCL
FO  | FEED-FCH | FEED-FCL, FP-BK
TO  | FEED-TCH | FEED-TCL
CL  | LC-SVCL  | LC-SVCH
CR  | FC-SVCH  | FC-SVCL
CT  | TC-SVCL  | TC-SVCH
FP  | VL-BH, FS-BH | VL-BL, PP-BK, PUMP-EF, FS-BL
FR  | VR-BH, FS-BH | VR-BL, FS-BL, PUMP-EF, RP-BK
FW  | VT-BH | VT-BL, WP-BK
L   | LS-BH, PUMP-EF | LS-BL, RX-LK
T   | TS-BH, FS-BH, LS-BL | TS-BL, FS-BL, LS-BH
TR  | CW-TCH, HX-PL, FS-BL, TS-BL | CW-TCL, FS-BH, TS-BH
CA  | LS-BH, TS-BH | LS-BL, TS-BL
CB  | LS-BL, TS-BL | LS-BH, TS-BH
(Numbers of PCs for the DPLS models: 2, 2, 2, 2, 1, 4, 7, 3, 5, 5; time lags: 1, 2, 1, 1, 2, 2, 1, 2, 2.)

2.2. On-line diagnosis
(1) Detection method: The residual variation is detected by CUSUM. CUSUM has a recurrent computation form suitable for real-time analysis and does not need filtering. It uses two parameters, the minimal jump size and the threshold size. In this study, 6σ of the residual distribution was used as the minimal jump size and 3σ of the CUSUM distribution as the threshold (Lee et al., 1999).
(2) Diagnosis using fault sets: The basic diagnostic strategy is to obtain the set of minimum faults that can explain all detected residuals. The diagnosis procedure is as follows. (a) The fault sets corresponding to the detected symptoms are listed; this is named the list of fault sets. The union of the fault sets in the list becomes the set of initial fault candidates. Among the elements of the initial fault candidates set, each sensor fault is pre-examined as to whether the residual of the measured variable of the sensor fault is detected in the direction of the sensor fault; otherwise, the sensor fault is removed from the initial fault candidates set. This improves diagnosis resolution. (b) In order to decide how many symptoms each fault candidate F can explain, the number of fault sets in the list that contain F is calculated. This number is named nES (number of explained symptoms). The fault candidates having the biggest nES are included in the set of first fault candidates. If all detected symptoms cannot be explained by single faults from the first fault candidates, multiple-faults are searched for in the next step. (c) The first element in the set of first fault candidates is selected. After the fault sets including this first fault candidate are removed from the list of fault sets, the remaining list is assigned to the first fault candidate. The union of the fault sets in this remaining list becomes the set of multiple-fault candidates for the first fault candidate. The nES of each fault in the obtained multiple-fault candidates set is calculated, and the fault candidates having the biggest nES are included in the set of second fault candidates for the first fault candidate. When the nES of a second fault candidate equals the number of fault sets in the list assigned to the first fault candidate, the double-fault of the first and second fault candidates can explain all symptoms. If all detected symptoms are not explained, this step is repeated.
For instance, consider the following double-fault case. The level sensor is lowly biased at 100 s (LS-BL), and after 100 s the reactor temperature sensor is highly biased (TS-BH). The symptom L(-) is detected at 105 s, CA(-) at 175 s, CB(+) at 200 s, T(+) at 205 s, and TR(-) at 215 s. The diagnosis procedure is explained with the data at 215 s. The initial fault candidates set is {LS-BL, RX-LK, TS-BL, TS-BH, FS-BH, CW-TCL}; TS-BL and FS-BH are removed because T(-) and FR(+) are not detected. Therefore, the initial fault candidates set is {LS-BL, RX-LK, CW-TCL, TS-BH}. Because LS-BL has the biggest nES of 4, LS-BL is selected as the first fault candidate. In the next step, the list of fault sets {CW-TCL, TS-BH} is assigned to LS-BL. The final fault candidates are the two double-faults {{LS-BL, TS-BH}, {LS-BL, CW-TCL}}. In this example, though the diagnosis is later than the time of fault occurrence, it is accurate. The qualitative method failed to obtain the true solution because this is a masked multiple-fault (Lee et al., 1999). The method also shows good diagnosis resolution, as the final fault candidate set has only two elements.
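The detection and ranking logic of this section can be sketched compactly. Below is an illustrative two-sided CUSUM recursion and the nES bookkeeping over the fault sets of Table 1; parameter handling and data structures are assumptions, not the authors' code.

```python
# Sketch: per-residual CUSUM detection and nES ranking of fault candidates.
def cusum_step(state, residual, jump, threshold):
    """Update (pos, neg) sums; return new state and detected sign (+1/-1/0)."""
    pos = max(0.0, state[0] + residual - jump / 2.0)
    neg = max(0.0, state[1] - residual - jump / 2.0)
    sign = 1 if pos > threshold else (-1 if neg > threshold else 0)
    return (pos, neg), sign

def rank_candidates(symptoms, fault_table):
    """symptoms: e.g. [('L', -1), ('T', +1)]; fault_table[(var, sign)] = set."""
    fault_sets = [fault_table[s] for s in symptoms]   # list of fault sets
    candidates = set().union(*fault_sets)             # initial fault candidates
    nES = {f: sum(f in fs for fs in fault_sets) for f in candidates}
    best = max(nES.values())
    return {f for f, n in nES.items() if n == best}, nES
```

Running `rank_candidates` again on the fault sets left unexplained by a first candidate reproduces the multiple-fault search of step (c).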
3. Case Studies
3.1. Diagnosis description
The selected situations can be found in our previous study (Lee et al., 1999). They are 15 single faults and 37 double faults generated randomly, and a step function is used for the simulation of a fault. Eight DPLS models are obtained for the eight measured variables, and three PLS models are made for the three control output variables. As learning data for each PLS model, this example uses seven data sets of set-point changes and external disturbances; these situations occur frequently in normal operation. The same data are used to determine the CUSUM parameters, the minimal jump size and the threshold size. The constraint variable suggested in our previous study of the fault-effect tree model (Lee et al., 1999) is used to increase the resolution. It represents a quantitative governing equation, such as a balance equation or a valve relation, as a variable. The previous study used a mass balance and two control valve equations for the CSTR; as the control valve equations are expressed by DPLS models, this study uses only the mass balance equation. Reactor leaking (RX-LK) is a root cause of the positive deviation of the constraint variable DF, defined as FR - FP.
3.2. Result of single fault cases
To measure the diagnostic performance, three parameters are used. Accuracy is 1 if the diagnosis is accurate, that is, a true fault is involved in the final fault candidates set; otherwise, accuracy is 0. Robustness is the number of wrongly detected symptoms independent of the true faults. When accuracy is 1 and robustness is 0, resolution denotes the number of final fault candidates. In all selected single fault cases, accuracy is 1 and robustness is 0. For the case of VL-BH, our previous method based on the qualitative model gave a double-fault during 45 s (Lee et al., 1999), but the suggested method shows a robust result during the whole diagnosis period. Though the resolution for the 2 cases of RX-LK and LS-BL is worse than with our previous method, the resolution of the suggested method is better for the 4 cases of WP-BK, VL-BH, VT-BH, and FS-BL.
3.3. Result of multiple-fault cases
Our previous qualitative method failed for 9 cases because these cases are masked multiple-faults, in which one fault can explain the symptoms of the other fault. However, the proposed method showed that only 1 case among the 37 fails, and the diagnosis for the other 36 cases is accurate during the whole diagnosis period. The diagnosis for the double-fault of RX-LK and PP-BK fails because the process variations resulting from PP-BK are too weak for its symptoms to be detected.

Table 2. Diagnosis result of the selected double faults.
No. | Robustness | Resolution || No. | Robustness | Resolution
1   | 0.96 / 1   | -          || 15  | 0.99 / 1   | -
7   | 0 / 0      | 1.09 / 4   || 17  | 0 / 0      | 3.82 / 4
6   | 0 / 0      | 2.23 / 4   || 19  | 0 / 0      | 1.17 / 4
10  | 0 / 0      | 1.01 / 4   || 20  | 0 / 0      | 8 / 8
13  | 0 / 0      | 3.84 / 4   || 28  | 0.2 / 1    | 4 / 4
14  | 0 / 0      | 4.0 / 4    || 31  | 1 / 1      | -

In Table 2, the former number is the average over the results taken at a frequency of 5 s during the simulation period of 2000 s, and the latter is the worst result during the whole diagnosis period. Table 2 shows the diagnosis results of the selected double-faults, excluding the failed case and 24 cases with a worst resolution under 2. Case 20, of VT-BL and WP-BK, shows the worst resolution of 8: the primary fault candidates {VT-BL, WP-BK} and {VR-BL, FS-BL, PUMP-EF, RP-BK} are obtained from the detected residuals FW(-) and FR(-), respectively, so 8 (2 × 4) double-faults become the final fault candidates. Though 33 cases among the 36 show a robustness of 0, three cases (1, 15, and 28) detect a variable independent of the true solution during short periods. This wrong detection is due to smaller CUSUM parameters than necessary being used. In order to prevent wrong detection, the CUSUM parameters were doubled for the diagnosis. With the new parameters, the wrong detection in two of the cases no longer occurred, while fault detection for most cases was delayed by 5-30 s. However, the diagnosis accuracy and resolution for 33 cases were not affected and remained the same as with the previous parameters.
4. Conclusion
This study concerned fault diagnosis by a hybrid of a qualitative model-based method and a quantitative history data-based method. The diagnosis is performed by comparing the measured value with the value predicted by a DPLS model built on the local causal relationships of the SDG. The proposed method has the advantages of improving diagnosis accuracy and resolution, and of facilitating the diagnosis of masked multiple-faults.
5. References
Finch, F.E., 1989, Automated Fault Diagnosis of Chemical Process Plants Using Model-based Reasoning, Sc.D. Thesis, Massachusetts Institute of Technology.
Kramer, M.A. and Palowitch, B.L., 1987, AIChE J., 33, 1067.
Ku, W., Storer, R.H. and Georgakis, C., 1995, Chemometrics Intell. Lab. Syst., 30, 179.
Lee, G., Lee, B., Yoon, E.S. and Han, C., 1999, Ind. Eng. Chem. Res., 38, 988.
Sorsa, T. and Koivo, H.N., 1991, IEEE Trans. Systems, Man, Cybern., 21, 815.
Venkatasubramanian, V., 2000, Proc. of the PSE Asia 2000, 597.
6. Acknowledgement This work was supported by grant No. (R05-2002-000-00057-0) from the Basic Research Program of the Korea Science & Engineering Foundation.
Integration of Design and Control for Energy Integrated Distillation
Hongwen Li, Rafiqul Gani, Sten Bay Jørgensen
CAPEC, Department of Chemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark
Abstract A method for development of control structures is presented for an energy integrated distillation column. The method contains three levels. First the feasible operating region is determined, second the feasible static control structures are proposed and screened, and third, the additional degrees of freedom stemming from process dynamics are handled using additional actuators and control loops. The advantage of the three-layered strategy is that the first two layers can be dealt with early during process development while the third layer may be added at the appropriate design point where information on the dynamic behaviour is available.
1. Introduction The systematic computer aided pre-solution analysis of process models for integrated design and control presented earlier by Russel et al. (2002) is further investigated in this paper for an energy integrated distillation pilot plant. This paper investigates aspects of design and control of the integrated distillation column with the model analysis method and validates the results through simulation and operational analysis of the energy integrated distillation column. First the model analysis is presented for a single distillation column; subsequently the column analysis is extended with the analysis for a heat pump. Finally, the method for development of control structures is presented and verified through simulation.
2. Process Description
The heat integrated distillation pilot plant considered in this paper is shown in Figure 1. It contains two main sections, namely a distillation column section and a heat pump section. The heat pump section is physically connected to the distillation column through the condenser at the top and the reboiler at the bottom of the column. The heat pump consists of four heat exchangers, two compressors, one expansion valve, a large tank, and two control valves, α_CV8 and α_CV9. While the refrigerant circulates within the heat pump it changes phase, and through absorbing heat of vaporization at low pressure and releasing it again at high pressure it carries heat from the column condenser to the column reboiler. A more detailed description of this pilot plant is given by Eden et al. (2000). The process model analysis of these two sections is discussed individually.
3. Process Model Analysis Model analysis first determines the available degrees of freedom for design and for control, which then identifies the "common" variables used in design as well as control. The analysis subsequently identifies the important constitutive equations, their dependent process variables and the corresponding derivative information with respect to the identified "common" variables. This analysis employs the mass and energy
balance equations and the constitutive equations to generate information related to process sensitivity, process feasibility, and design constraints. The model analysis is decomposed into two parts: first the analysis is carried out for the distillation column and thereafter, the heat pump system.
Figure 1: Flow sheet of the heat integrated distillation column.
3.1. The process model analysis for the distillation column
Five degrees of freedom (determined from the model analysis) are related to five design and control variables. The design problem considered determines the optimal values of the design variables so that some process variables attain their desired values, while the control problem maintains the same "common" process variables at their desired values by manipulating the same "common" design variables when there is a disturbance. As the design and control problems involve the same set of "common" variables, they are integrated and solved simultaneously, once an integrated problem has been formulated. From a control point of view, for dual composition control of the distillation column, both the top product purity and the bottom product purity need to be controlled. The hold-ups of the condenser and the reboiler also need to be controlled to stabilize the system. For example, the optimisation (design) variable D (distillate flow rate) is used to control the top composition x_D, and the vapour flow rate at the bottom, V_B, is used to control the bottom purity x_B, where the vapour flow rate is manipulated through the heat duty of the reboiler, Q_B. The column pressure is controlled by the heat removed from the condenser, Q_C. The hold-up of the condenser, M_D, is controlled by the reflux flow rate L_0, and the hold-up of the reboiler, M_B, by the bottom product flow B. Hence, for control purposes, the design optimisation variables Q_B, Q_C, D, L_0, B may be chosen as the manipulated variables. This illustrates the relationship between design and control issues for a conventional distillation column.
From a design point of view, the five candidate design variables are selected first. The product rates D and B need to be specified to meet the external mass balance and the market needs. The vapour flow rate V, the condenser heat duty Q_C and the reflux flow rate L_0 are needed in the column to fulfil the separation task. From a control point of view, these design variables are also considered as "actuator" variables, and it is necessary to identify the appropriate process variables that could be "controlled" through them.
3.2. The model analysis for the energy integrated distillation column
From the model analysis, two degrees of freedom are obtained for the heat pump section. Here the high pressure of the heat pump section, P_H, and the low pressure, P_L, are chosen as controlled variables. The two actuators are the valves α_CV8 and α_CV9.
4. Operation Window for the Energy Integrated Distillation
The appropriate variables that can be controlled through the identified "common" design/actuator variables can be identified through an investigation of the operation window. In the case of this heat integrated column with sieve trays, operated primarily in the spraying regime, the limits forming the operating region are the flooding limit, the weeping limit, the maximum column pressure, the maximum heat pump high pressure, the maximum heat pump low pressure, the maximum cooling power of the heat pump system, and the maximum pumping capacity. Inside this region lies the operation window, within which the operating point must be located to ensure the separation (as illustrated in Figure 2). The empirical correlations for flooding and liquid weeping are described below, and the limits related to the high and low pressures of the compressor system are determined through simulation.
4.1. Flooding and liquid weeping curves
The following derivations are based on empirical correlations estimated by several authors and collected by Zuiderweg (1982) in his review paper on the state of the art for sieve trays. Let L_0 and V be the volumetric flow rates of reflux and boil-up in m³/s, and let ρ_l and ρ_g be the liquid and gas densities in kg/m³. The weeping limit in terms of the minimal vapour flow rate is then found from the following empirical correlation:
V_min = CF_w · A_t · √(ρ_l / ρ_g)    (1)

where CF_w is the tray capacity factor on the bubbling area (m/s) and A_t is the tray area (m²). This correlation is plotted as the weeping limit, i.e. Curve 1 in Figure 2. The flooding limit in terms of the maximal vapour flow rate is given by the following correlation:

V_max = CF_max · A_t · √((ρ_l - ρ_g) / ρ_g)    (2)

where CF_max is the capacity factor at the start of flooding (m/s). This correlation is plotted as the flooding limit, i.e. Curve 3 in Figure 2.
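Given the tray geometry and the phase densities, the two limits are direct to evaluate. A sketch based on the correlations as reconstructed above follows; the capacity factors are assumed inputs (e.g. from the correlations collected by Zuiderweg, 1982).

```python
# Sketch: weeping and flooding limits of the operation window, Eqs. (1)-(2).
from math import sqrt

def vapour_limits(CF_w, CF_max, A_t, rho_l, rho_g):
    v_min = CF_w * A_t * sqrt(rho_l / rho_g)               # weeping limit (1)
    v_max = CF_max * A_t * sqrt((rho_l - rho_g) / rho_g)   # flooding limit (2)
    return v_min, v_max   # volumetric boil-up bounds, m3/s
```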
4.2. Maximum and minimum P_L
The limits of the operation region imposed by the heat pump are obtained by simulation in total reflux mode. Curve 2 in Figure 2 is mapped by switching on a controller to maintain the heat pump low pressure P_L at its maximum, i.e. 600 kPa, and then gradually decreasing the high pressure P_H until intersection with the weeping limit. Going along this trajectory, the number of active cylinders is reduced as appropriate so that the pressure drop through CV9 stays at a reasonable level (50-200 kPa). If the pressure drop exceeds these limits, retuning of the low pressure controller is necessary due to the nonlinear valve characteristic. Curve 4 in Figure 2 is mapped by keeping CV9 open and all cylinders active, and then gradually decreasing the high pressure P_H. This way the low pressure is at all times kept as low as possible, decreasing as the high pressure is reduced. The lower limit represents the lowest possible column pressure, and it crosses the weeping limit at a point which thus gives the lowest possible column pressure and boil-up rate at which the column can be operated.
5. The Static Actuator Configuration of the Integrated Distillation Column
5.1. The static actuator configuration
For the integrated distillation column, Q_B and Q_C are not directly manipulated variables; they are determined by P_H and P_L on the heat pump section. Through analysis of the column actuator structures it appears that P_H + P_L (on the heat pump side) is suitable for controlling the column pressure, while P_H - P_L (on the heat pump side) may be used to control the vapour flow rate. With this actuator configuration, the column pressure and vapour flow rate control loops are decoupled. This also means that the actuator (design) variables on the distillation column side are determined through the "control" variables on the heat pump side. Rigorous simulations have been performed to confirm the actuator configuration with the dynamic model of Køggersbøl (1995). The simulation results are plotted in Figure 2, where curve A is at constant P_H + P_L while P_H - P_L changes. From curve A one can see that the column pressure is nearly constant at constant P_H + P_L for many different values of P_H - P_L. From curve B one can see that the vapour flow rate is nearly the same at constant P_H - P_L in spite of different P_H + P_L. This confirms the design and control statements above: one can control the vapour flow rate in the integrated distillation column by manipulating the difference between the high and low pressures of the heat pump, and the column pressure by manipulating the sum of the two heat pump pressures.
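In implementation terms, this decoupling amounts to letting the pressure and boil-up controllers act on the sum and the difference, and mapping these back to the two pressure setpoints. A minimal sketch, where the controller outputs u_sum and u_diff are assumptions:

```python
# Sketch: map sum/difference controller outputs to heat pump setpoints.
def heat_pump_setpoints(u_sum, u_diff):
    """u_sum drives the column pressure, u_diff the vapour flow rate."""
    PH_sp = 0.5 * (u_sum + u_diff)   # high-pressure setpoint
    PL_sp = 0.5 * (u_sum - u_diff)   # low-pressure setpoint
    return PH_sp, PL_sp
```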
6. Control Structure for the Dynamic Integrated Distillation Column
6.1. Dynamic control structure for the column section
For the distillation column, the overall control of the column section has five outputs: x_D, x_B, M_D, M_B and P. Compared to the static control problem, two more hold-up variables, M_D and M_B, are introduced. Suppose that the column pressure and the reboiler vapour flow have the same control configurations as at steady state. For this kind of "standard" control problem, the most widely accepted control structures are the "LV-configuration" or the "DV-configuration". This kind of control configuration has been investigated using frequency-dependent formulations of measures such as the condition number, the Relative Gain Array of Bristol (1966) and the Relative Disturbance Gain of Stanley et al. (1985). This paper focuses on the dynamic control structure of the heat pump section and on how each dynamic control structure affects the stability of the integrated distillation column.
6.2. Control structure on the heat pump section
As discussed above, P_H + P_L and P_H - P_L are used to control the column pressure and the vapour flow rate. In turn, P_H and P_L are controlled by manipulating the two actuators α_CV8 and α_CV9. The pairing of these two process variables and two control valves plays a very
Figure 2: Curve A: P_H + P_L kept constant while P_H - P_L changes; Curve B: P_H - P_L kept constant while P_H + P_L changes.
important part in the stabilization of the integrated distillation column. Let us first discuss how a disturbance affects the heat pump high pressure P_H and low pressure P_L, so as to settle the pairing problem. Consider, for example, the plant at steady state where only liquid level controllers for the reboiler and the condenser have been implemented. If suddenly the energy balance is disturbed by a small amount δH, for instance due to a disturbance in the feed preheater, in the feed composition, or perhaps in the temperature of the cooling medium in the secondary condenser, then the changed heat input starts to accumulate in the plant. If the disturbance reduces the cooling rate, this immediately affects the high pressure P_H, which begins to increase. Thereby the compression work is continuously increased, but the cooling rate will also gradually increase, as the temperature gradients in the secondary condenser and in the air coolers increase with P_H. The increase in the high pressure affects the boil-up rate, and as a result the column pressure and the heat pump low pressure P_L increase simultaneously. Assuming that the enthalpy of the feed remains constant after the disturbance, the behavior of the entire plant becomes unstable if the compressor work increases faster than the sum of all the outgoing heat flows. For this integrated distillation column the compressor work does indeed increase faster than the sum of all the outgoing heat flows within part of the operating region. Therefore a small disturbance in the overall energy balance can initiate a drift of the plant towards increasing or decreasing pressures, depending on the sign of the disturbance. To reject disturbances and stabilize the system, the high pressure P_H and the low pressure P_L need to be controlled by manipulating suitable actuators. In theory either the low pressure or the high pressure could be paired with α_CV8 and thereby stabilize the system. However, the gain from α_CV8 to the low pressure is relatively small (Køggersbøl, 1995), so if α_CV8 is to be used for stabilization it should preferably be paired with the measurement of P_H.
The valve α_CV9 does not directly affect the energy balance, but it can be used to stabilize the plant if it is paired with a suitable measurement. Suppose that P_L is stabilized by manipulating the valve α_CV9; then a disturbance which tends to increase P_L will be neutralized by the controller increasing the valve opening. This way P_L is maintained at its setpoint. The main result of the discussion above is that the high pressure P_H should be controlled and that the cooling valve α_CV8 is preferred as the actuator for this purpose. So, in conclusion, a control loop manipulating α_CV8 based on a measurement of the high pressure P_H is suggested to stabilize the plant, while the low pressure P_L is controlled by the valve α_CV9. In the reboiler the saturation pressure P_H is a sufficient measure of the condition on the freon side for heat transport into the column; this condition is controlled using α_CV8. In the condenser, which is the other contact point between the heat pump and the column, the saturation pressure P_L is a sufficient measure of the condition on the freon side for heat transport from the column. This control structure - valve α_CV9 controlling the low pressure and valve α_CV8 controlling the high pressure - can thus reject disturbances and stabilize the system.
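The two stabilizing loops can be sketched as simple PI controllers with the pairing derived above; the gains, sampling and velocity-form update below are illustrative assumptions, not the tuning used on the pilot plant.

```python
# Sketch: PI loops for the pairings alpha_CV8 <- PH and alpha_CV9 <- PL.
class PI:
    def __init__(self, Kc, Ti, u0=0.5):
        self.Kc, self.Ti, self.u, self.e_prev = Kc, Ti, u0, 0.0
    def update(self, sp, pv, dt):
        e = sp - pv
        self.u += self.Kc * ((e - self.e_prev) + dt / self.Ti * e)
        self.e_prev = e
        return min(1.0, max(0.0, self.u))   # valve opening in [0, 1]

high_pressure_loop = PI(Kc=-0.02, Ti=120.0)   # alpha_CV8 from PH measurement
low_pressure_loop  = PI(Kc=-0.05, Ti=60.0)    # alpha_CV9 from PL measurement
# each sample: a_cv8 = high_pressure_loop.update(PH_sp, PH_meas, dt), etc.
```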
7. Conclusion With a systematic computer-aided analysis of the process model, inspired by Russel et al. (2002), the control and design problems for an energy integrated distillation column are addressed. The relationships between design and control problems are discussed through analysis of the static process model, resulting in a more suitable control actuator configuration for the integrated distillation column. Rigorous simulation results verify this analysis. This paper addresses how to select and combine actuators in order to achieve nearly independent actuator actions. Thereby the interactions between basic control loops in the integrated distillation column are significantly reduced, which is the basis for optimising control.
8. References
Bristol, E.H., 1966, On a New Measure of Interactions for Multivariable Process Control, IEEE Trans. Automat. Control, AC-11, 133-134.
Eden, M.R., K0ggersb0l, A., Hallager, L. and J0rgensen, S.B., 2000, Computers and Chemical Engineering, 24, 1091-1097.
Eden, M.R., L0ppenthien, C. and Skotte, R., 2000, Distillation Column Startup Manual, Technical University of Denmark, Lyngby, Denmark.
K0ggersb0l, A., 1995, Distillation Column Dynamics, Operability and Control, Ph.D. Thesis, Technical University of Denmark, Denmark.
Russel, B.M., Henriksen, J.P., J0rgensen, S.B. and Gani, R., 2002, Computers and Chemical Engineering, 26(2), 213-216.
Stanley, G., Marino-Galarraga, M. and McAvoy, T.J., 1985, Shortcut Operability Analysis. The Relative Disturbance Gain, Ind. Eng. Chem. Process Des. Dev., 24(4), 1181-1188.
Zuiderweg, F.J., 1982, Sieve Trays: A View on the State of the Art, Chemical Engineering Science, 37(10), 1441-1464.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Process Monitoring Based on Wavelet Packet Principal Component Analysis Li Xiuxi, Yu Qian*, Junfeng Wang School of Chemical Engineering, South China University of Technology, Guangzhou, 510640, P. R. China
Abstract To improve the performance of principal component analysis (PCA) for process monitoring with noise, this paper proposes a wavelet packet PCA (WPPCA). It integrates the de-noising ability of wavelet packet analysis and the ability of PCA to de-correlate the variables by extracting a linear relationship. Process monitoring is simulated on a two-input, two-output dynamic process with noise. The simulation results show that the wavelet packet PCA eliminates the effects of noise effectively, with better monitoring performance. Finally, the proposed approach is successfully applied to the Tennessee Eastman process for dynamic monitoring.
1. Introduction Process monitoring is applied to improve the reliability and safety of process operation. The main functions of process monitoring include: monitoring the system operation status; detecting the occurrence of faults; quantitative analysis of the extent of an abnormality; and determining the type, time, extent, action ways and influence of faults. PCA, as an effective data analysis technique, has been widely applied in process monitoring (Wise and Ricker, 1991; Nomikos and MacGregor, 1995). However, as PCA mainly analyses linear, steady-state relations among process variables, its application to complicated process systems is restricted. Bakshi (1998) proposed a multi-scale PCA (MSPCA) to improve local time-frequency analysis, which combines wavelet analysis with conventional PCA. Qin et al. (1999) applied the MSPCA to sensor fault detection and diagnosis, and simulated a tubular reactor and a furnace, with good results. However, wavelet analysis divides only the low-frequency part of the signal further; since it ignores the high-frequency part, its de-noising performance is restricted. To improve the PCA monitoring performance for processes with noise, this paper proposes wavelet packet PCA (WPPCA), combining wavelet packet analysis and PCA.
2. Wavelet Packet PCA Wavelet packet PCA integrates PCA and wavelet packet analysis. Wavelet packet analysis further decomposes the high-frequency part, which wavelet analysis does not, and adaptively selects the relevant frequency bands based on the character of the signal to be analyzed. The steps of the WPPCA methodology are shown in Figure 1, and the detailed procedure is given as follows.
Corresponding author, Tel: +86(20)87112046, E-mail address: [email protected]
Fig. 1. Methodology of wavelet packet PCA.
(1) For each column in the data matrix, select the wavelet packet function ψ_{j,n}(t) and the wavelet packet decomposition level L, and compute the wavelet packet decomposition coefficients {W_{L,0}, W_{L,1}, ..., W_{L,2^L-1}};
(2) For each variable, use the same best full wavelet packet basis algorithm to process the wavelet packet decomposition tree and find the best wavelet packet decomposition coefficients;
(3) Select these coefficients as column vectors to build the wavelet packet coefficient matrices for the different tree nodes {X_{L,0}, X_{L,1}, ..., X_{L,2^L-1}}; the row number of these matrices is n/2^L and the column number is m;
(4) For these coefficient matrices, use conventional PCA to determine the number of retained principal components and compute the score matrices {T_{L,0}, T_{L,1}, ..., T_{L,2^L-1}} and load matrices {P_{L,0}, P_{L,1}, ..., P_{L,2^L-1}};
(5) Use the retained score and load matrices to rebuild the wavelet packet coefficient matrices {X*_{L,0}, X*_{L,1}, ..., X*_{L,2^L-1}};
(6) For each column in {X*_{L,0}, X*_{L,1}, ..., X*_{L,2^L-1}}, combine the corresponding column vectors to get the rebuilt wavelet packet coefficients;
(7) Process these coefficients with the wavelet packet de-noising threshold method to get de-noised coefficients;
(8) Use the wavelet packet reconstruction algorithm to recover each variable's samples {x*_1, x*_2, ..., x*_m};
(9) Build the new data matrix X* and use PCA to select the number of retained principal components and compute the score matrix T and load matrix P.
The WPPCA combines the ability of PCA to de-correlate the variables by extracting a
linear relationship with the ability of wavelet packet analysis to extract auto-correlated measurements. To combine the PCA method and wavelet packet analysis efficiently, each measurement variable, which is a column vector of the original data matrix, is decomposed into a wavelet packet coefficient column vector using the same best full wavelet packet basis. That is, the original data matrix X is transformed to W_P X, where W_P is an n×n orthonormal matrix denoting the transform, which includes the filter coefficients.
Here W_{P,i} (i = 0, 1, ..., L) denotes the wavelet packet filter coefficient matrix. The relations between principal component analysis of X and principal component analysis of W_P X are given by the following two theorems:
Theorem 1: The principal component load vectors of W_P X are the same as the principal component load vectors of X, and the principal component score vectors of W_P X are the wavelet packet transforms of the principal component score vectors of X.
Proof: Because each column of the matrix X is analyzed with the same wavelet packet transform matrix W_P, the correlations among the columns of W_P X and the columns of X are unchanged. From
(W_P X)^T (W_P X) = X^T W_P^T W_P X = X^T X
we see that, when the number of sampling points is large enough, the column mean values of the matrix W_P X are equal to the column mean values of the original data matrix X, and their covariance matrices are the same. The above equation shows that after the transformation the principal component load vectors of W_P X are the same as those of the original data matrix X. From PCA we have X = T P^T, so W_P X = (W_P T) P^T. This shows that the principal component score vectors of W_P X are the wavelet packet transforms of the principal component score vectors of X. Done.
Theorem 2: When no principal component is ignored at any scale and no wavelet packet coefficients are eliminated by the threshold value, the result of WPPCA is equal to the result of PCA.
Proof: From the properties of wavelet packet transformation and reconstruction, if the wavelet packet coefficients are not treated, the reconstructed signals are the same as the original signals. When all principal components are retained, the reconstructed data matrix is the same as the original data matrix. Done.
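To make the nine-step procedure above concrete, the following sketch implements it in Python with the PyWavelets library. It is a minimal illustration rather than the authors' code: the wavelet ('db4'), the decomposition level, the number of retained principal components and the hard 3-sigma threshold are assumptions chosen for readability.

```python
# Minimal WPPCA sketch (steps 1-9 above). Wavelet, level, n_pc and the
# 3-sigma hard threshold are illustrative assumptions.
import numpy as np
import pywt

def pca_rebuild(C, n_pc):
    """Steps 4-5: project a coefficient matrix onto its first n_pc
    principal components and rebuild it."""
    mu = C.mean(axis=0)
    U, s, Vt = np.linalg.svd(C - mu, full_matrices=False)
    return (U[:, :n_pc] * s[:n_pc]) @ Vt[:n_pc] + mu

def wppca(X, wavelet="db4", level=3, n_pc=1):
    n, m = X.shape
    # steps 1-2: same wavelet packet decomposition for every variable
    packets = [pywt.WaveletPacket(X[:, j], wavelet, maxlevel=level)
               for j in range(m)]
    paths = [node.path for node in packets[0].get_level(level, "freq")]
    # steps 3-6: per-node coefficient matrices, PCA rebuild, recombine
    for path in paths:
        C = np.column_stack([wp[path].data for wp in packets])
        C = pca_rebuild(C, n_pc)
        for j, wp in enumerate(packets):
            wp[path] = C[:, j]
    # steps 7-8: threshold de-noising, then per-variable reconstruction
    X_star = np.empty_like(X, dtype=float)
    for j, wp in enumerate(packets):
        for path in paths:
            d = wp[path].data
            wp[path] = pywt.threshold(d, 3 * d.std(), mode="hard")
        X_star[:, j] = wp.reconstruct(update=False)[:n]
    return X_star  # step 9: run conventional PCA on X_star
```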
3. Algorithm Performance Simulation To test the performance of wavelet packet principal component analysis in dynamic process monitoring with noise, a two-input, two-output third-order dynamic system with noise is employed. The mathematical model is:
z(k) = [0.118 -0.191; 0.847 0.264] z(k-1) + [0.087 -0.102; 0.638 0.147] z(k-2) + [0.053 -0.091; 0.476 0.128] z(k-3) + [1 2; 3 -4] u(k-1) + [0.8; 0.8] σ(k-1)

y(k) = z(k) + v(k) + 0.8 σ(k)

where u is the input variable:

u(k) = [0.811 -0.226; 0.477 0.415] u(k-1) + [-0.328 0.143; 0.096 0.188] u(k-2) + [0.455 -0.162; 0.135 0.242] u(k-3) + [0.193 0.689; -0.320 -0.749] ε(k-1) + [0.8; 0.8] σ(k-1)

(Matrices are written row-wise, with rows separated by semicolons.)
where ε is a random white noise with mean 0 and variance 1, v is a random white noise with mean 0 and variance 0.1, and σ is a random white noise with mean 0 and variance 1. 1000 normal samples are selected, and an MSPCA model and a WPPCA model are established respectively. The SPE and T² statistics and their control limits are calculated. For a 95% confidence region, the control limits are SPE_α = 1.7564 and T²_α = 7.0356.
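For reproducibility, the model can be simulated directly. In the sketch below the matrix row assignments follow the reconstruction above (which is itself uncertain in the garbled source), and the one-retained-component PCA monitoring is an illustrative choice, not the paper's exact set-up.

```python
import numpy as np

rng = np.random.default_rng(0)
A = [np.array(M) for M in ([[0.118, -0.191], [0.847, 0.264]],
                           [[0.087, -0.102], [0.638, 0.147]],
                           [[0.053, -0.091], [0.476, 0.128]])]
B = np.array([[1.0, 2.0], [3.0, -4.0]])
C = [np.array(M) for M in ([[0.811, -0.226], [0.477, 0.415]],
                           [[-0.328, 0.143], [0.096, 0.188]],
                           [[0.455, -0.162], [0.135, 0.242]])]
D = np.array([[0.193, 0.689], [-0.320, -0.749]])

N = 1000
z = np.zeros((N, 2)); u = np.zeros((N, 2)); y = np.zeros((N, 2))
for k in range(3, N):
    eps, sig = rng.normal(size=2), rng.normal()
    u[k] = sum(C[i] @ u[k - 1 - i] for i in range(3)) + D @ eps + 0.8 * sig
    z[k] = sum(A[i] @ z[k - 1 - i] for i in range(3)) + B @ u[k - 1] + 0.8 * sig
    y[k] = z[k] + rng.normal(scale=np.sqrt(0.1), size=2) + 0.8 * rng.normal()

# PCA monitoring statistics on the noisy measurements
Y = (y - y.mean(0)) / y.std(0)
_, s, Vt = np.linalg.svd(Y, full_matrices=False)
lam = s**2 / (N - 1)
scores = Y @ Vt.T
T2 = (scores[:, :1] ** 2 / lam[:1]).sum(1)   # one retained PC
SPE = (scores[:, 1:] ** 2).sum(1)            # residual subspace
```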
Fig. 2. SPE and T² plots with variance error: (a) WPPCA, (b) MSPCA.
200 real-time samples are monitored using the proposed model. To test the monitoring performance, a variance error disturbance is introduced at the 160th sampling time and cancelled at the 162nd sampling time. The SPE and T² plots are shown in Fig. 2. In Fig. 2(a), only a few sampling points exceed the control limit before the 160th sample, because of the noise, and the disturbance is detected successfully. This shows that WPPCA detects the abnormal status very well and monitors the dynamic process with noise efficiently. Fig. 2(b) shows the result of MSPCA based on wavelet analysis. Neither the SPE plot nor the T² plot can detect the disturbance occurring at the 160th sampling time. The results show that WPPCA is better than MSPCA based on wavelet multi-frequency analysis in process monitoring.
4. Case Study In this section, the proposed wavelet packet PCA approach is applied to the monitoring problem of the Tennessee Eastman (TE) process. The Tennessee Eastman process, developed by Downs and Vogel (1993), consists of five major unit operations: a reactor, a condenser, a vapor-liquid separator, a recycle compressor, and a product stripper. Some disturbances are programmed for studying the characteristics of the control system, as listed in Table 1.
Table 1. Process disturbances for the Tennessee Eastman process.
Case     Disturbance                                      Type
IDV(1)   A/C feed ratio, B composition constant           Step
IDV(2)   B composition, A/C ratio constant                Step
IDV(3)   D feed temperature                               Step
IDV(4)   Reactor cooling water inlet temperature          Step
IDV(5)   Condenser cooling water inlet temperature        Step
IDV(6)   A feed loss                                      Step
IDV(7)   C header pressure loss - reduced availability    Step
IDV(8)   A, B, C feed composition                         Random variation
IDV(9)   D feed temperature                               Random variation
IDV(10)  C feed temperature                               Random variation
IDV(11)  Reactor cooling water inlet temperature          Random variation
IDV(12)  Condenser cooling water inlet temperature        Random variation
IDV(13)  Reaction kinetics                                Slow drift
IDV(14)  Reactor cooling water valve                      Sticking
IDV(15)  Condenser cooling water valve                    Sticking
IDV(16)  Unknown                                          Unknown
The reference set contains 1000 samples from normal operation with a sampling interval of 3 min. For a sufficient representation of the normal status of the TE process, these data are sampled in 10 groups, and their mean values are regarded as normal operation data. A WPPCA model is developed from these data. Nine principal components are selected, which capture 90.32% of the variation in the reference set. The control limits shown in every plot correspond approximately to the 95% confidence region, determined using the methodology presented by Nomikos and MacGregor (1994): SPE_α = 7.1509, T²_α = 75.0008. The simulation is run under the first disturbance, IDV(1), which is loaded at the 300th time step. The SPE and T² plots are shown in Figure 4. From these plots, the disturbance is quickly detected, although before the 200th time a few sampling points of SPE and T² exceed the control limits, giving false alarms. After the 300th time, the SPE and T² plots are far above the control limits. Figure 5 shows the scores plot. The figure clearly illustrates that the process projection points move away from the normal situation. This result shows that for the TE process WPPCA performs well in process monitoring.
Fig. 4. SPE and T² plots of WPPCA for IDV(1).
Fig. 5. Scores plot of WPPCA for IDV(1) (first principal component on the horizontal axis).
5. Conclusions In this paper, a wavelet packet PCA is proposed, which integrates the de-noising performance of wavelet packet analysis with that of PCA. The WPPCA algorithm and two theorems were presented. A dynamic two-input, two-output third-order process with noise was employed to compare WPPCA and MSPCA based on wavelet analysis. The results show that WPPCA has better performance than MSPCA based on wavelet analysis. Finally, WPPCA is used for state monitoring of the TE process. The application to the TE process shows that the proposed WPPCA approach is effective.
6. References
Bakshi, B.R., 1998, Multiscale PCA with application to multivariate statistical process monitoring, AIChE J., 44(7), 1596-1610.
Downs, J.J. and Vogel, E.F., 1993, A plant-wide industrial process control problem, Computers and Chemical Engineering, 17(3), 245-255.
Jaffard, S. and Laurencot, P., 1992, Orthonormal wavelets, analysis of operations and applications to numerical analysis, Proceedings of the 3rd China-France Conference on Wavelets.
Nomikos, P. and MacGregor, J.F., 1994, Monitoring batch processes using multiway principal component analysis, AIChE Journal, 40(8), 1361-1375.
Qin, J.S., Lin, W. and Yue, H., 1999, Recursive PCA for adaptive process monitoring, IFAC Congress'99, July 5-9, Beijing, China.
Wise, B.M. and Ricker, N.L., 1991, Recent advances in multivariate statistical process control: improving robustness and sensitivity, Proceedings of IFAC ADCHEM Symposium, 125.
7. Acknowledgments Financial support from the National Natural Science Foundation of China (No. 29976015), the China Excellent Young Scientist Fund, the China Major Basic Research Development Program (G20000263), and the Excellent Young Professor Fund of the Education Ministry of China is gratefully acknowledged.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Information Criterion for Determination of Time Window Length of Dynamic PCA for Process Monitoring Xiuxi Li, Yu Qian*, Junfeng Wang, S. Joe Qin^ School of Chemical Engineering, South China University of Technology, Guangzhou 510640, China ^Department of Chemical Engineering, The University of Texas, Austin TX 78712, USA
Abstract Principal component analysis (PCA) is based on, and suited to, the analysis of stationary processes. When it is applied to dynamic process monitoring, the moving time window approach is used to construct the data matrix to be analyzed. However, the length of the time window and the moving width between time windows are often selected by empirical testing. In this paper, a criterion for determining the time window length is proposed for dynamic process monitoring. A new dynamic monitoring algorithm is then presented, which uses the proposed selection criterion. Finally, the proposed approach is successfully applied to a two-input two-output process and the Tennessee Eastman process for dynamic monitoring.
1. Introduction For safety and good product quality of a process plant, it is important to monitor the dynamic process operation and to detect upsets, abnormalities, malfunctions, or other unusual events as early as possible. Since first-principles models of complicated chemical processes are in many circumstances difficult to develop, data-based approaches have been widely used for process monitoring. Among them, principal component analysis (PCA), which extracts a number of independent components from highly correlated process data, has been applied successfully for process monitoring. A limitation of PCA-based monitoring is that the PCA model is built from data assumed to be stationary, while most real industrial processes are dynamic. When it is used to monitor a process with dynamic characteristics, it is hard to identify exactly whether deviations are faults or normal dynamic variations. A number of efforts have been made to improve the performance of PCA-based monitoring techniques (Ku, 1995; Qian, 2000; Kano, 2001; Qin, 2001). However, these articles have not given a criterion for selecting the time window length. In this paper we propose a criterion for determining the time window length, using an Information Criterion (IC) to compare dynamic monitoring characteristics under different time window lengths. Equipped with the proposed window length selection criterion, a new dynamic monitoring algorithm is presented.
2. Augment Data Matrix with the Moving Time Window Approach
The moving time window approach uses a window of invariable length and moves it along the time axis with a fixed step. With this approach, local dynamic characteristics of the data series can be captured and variations of the data series can be analyzed in time. Two major parameters of the approach are the window length and the moving width between windows. Selection of the two parameters is critical and not easy. For example, if the window length is selected too large, the computational complexity is augmented enormously. On the other hand, if it is too small, certain dynamic characteristics may be lost. The key to choosing the moving width is to balance the computational complexity and the risk of missed alarms. In practice, it is chosen from operating experience of the industrial process.
* Corresponding author, Tel: +86(20) 87112046, E-mail address: [email protected].
Let the raw data matrix be X ∈ R^(n×m). In dynamic PCA models, the augmented matrix X' ∈ R^(n'×m') is constructed as follows:

X' = [x(t), x(t-1), ..., x(t-l); x(t+w), x(t+w-1), ..., x(t+w-l); ...; x(t+n'w-w), x(t+n'w-w-1), ..., x(t+n'w-w-l)]_{n'×m'}    (1)

where each x(·) = [x_1(·), ..., x_m(·)] is a row of m measurements,
and l and w are the time window length and the moving width between windows; n' and m' are given as follows:

n' = int((n - l)/w) + 1,  m' = (l + 1)m    (2)

We use AR models to describe the time-lagged variables within the X' matrix, represented as
.-k-aiiXi(t-l)=^t)
(3)
where / is the order of the AR model, equivalent to the time window length, and / G [ 1 , m]. Thus we select the time window length in a way based on the approach of determination the order of AR model.
3. Information criteria for determining the time window length In Section 2, we used model identification to interpret dynamic PCA, and found that the selection of the time window length can be solved with the approach of determining the order of an AR model. Identification of a system model may be regarded as confirming the probability distribution of a stochastic process. From this understanding, the principle of information criteria is to select the model structure that maximizes the closeness between the real probability distribution and the probability distribution estimated from observation data. Shannon entropy is used to measure this closeness, represented as:
H_x = -∫ f(x) ln f(x) dx    (4)

where x is a continuous stochastic variable.
Let the stochastic variables x and y be the output variable and the observation variable of the system to be identified, respectively. Let f(x) be the real probability density of the stochastic process, and g(x|y) be the probability density estimated from the observation variable y. The difference between the entropy of the real probability distribution and the entropy of the estimated probability distribution is regarded as a measure of the closeness of the two distributions. It is given by:

I(f, g) = -∫ f(x) ln f(x) dx - (-∫ f(x) ln g(x|y) dx) = -∫ f(x) ln [f(x)/g(x|y)] dx    (5)
Taking into account the model parameter vector a, which is a function of the observation variable y, we have g(x|y) = g[x|a(y)]. Thus the information criterion is defined as: select the model order l that maximizes the function J(g), given by

J(g) = E_x{ln g[x|a(y)]}    (6)
Based on the same observation data, different estimated parameter vectors a_l are determined for a number of different model orders l. Accordingly, different probability density functions are calculated. The model order that maximizes J(g) is then taken as the most appropriate order. Let a_0 denote the parameter estimate that maximizes E_x{ln g[x|a(y)]}, and a_ML be the maximum likelihood estimate. Neyman and Pearson proposed an important asymptotic relation:

2 ln{g[y|a_ML] / g[y|a_0]} ~ χ²_γ    (7)

That is:

2 ln g[y|a_ML] - 2 ln g[y|a_0] ~ χ²_γ    (8)

where g[y|a_ML] is the maximum likelihood function of the observation variable y and γ is the degree of freedom of the χ² distribution. Akaike proved that:

E_x{2 ln g[x|a_0] - 2 ln g[x|a_ML]} = -γ    (9)

From Eqns. 8 and 9, we have

-2 E_y{E_x[ln g(x|a_ML)]} = -2 E_y{ln g[y|a_ML]} + 2γ    (10)
Let

IC = -2 ln g[y|a_ML] + 2γ    (11)

then computation of the maximal value of J(g) is transformed into computation of the minimal value of the IC. The computation of the logarithmic maximum likelihood function ln g[y|a_ML] follows the relationship

ln g(y|a_ML) = -(n/2) ln δ²_l    (12)

where n is the sample number and δ²_l is the variance of the residual ξ. Jenkins and Watts proved that

δ²_l = (1/(n - 2l - 1)) Σ_i ξ²_i    (13)
For dynamic PCA, the residual can be inferred as follows:

ξ² = Σ_i Σ_j (x_ij - x̂_ij)² / m    (14)

where X is the raw data matrix and X̂ is the data matrix reconstructed by conventional PCA. Then, from Eqns. 11, 12, 13 and 14, we obtain the information criterion: select the moving time window length l that makes

IC = n ln [ Σ_i Σ_j (x_ij - x̂_ij)² / (m(n - 2l - 1)) ] + 2γ    (15)

minimal, where γ is the number of estimated parameters, equal to l in the AR model.
4. A dynamic monitoring scheme using moving time window
To make the proposed parameter selection algorithm useful in real plants, we propose a dynamic monitoring scheme that addresses operating situations varying with time. With the tolerance limits Q_α and T²_α, a complete procedure for dynamic process monitoring, including the moving time window approach and the parameter selection criterion, is implemented in real time as follows:
1) Use a variable correlation analysis approach to determine the raw data matrix X.
2) Calculate P, T and X̂.
3) Depending on operational experience, select the moving width w between time windows.
4) Select the time window length set [l_1, l_2, ..., l_s] and calculate the corresponding augmented matrices [X'_1, ..., X'_s].
5) Calculate [P'_1, ..., P'_s], [T'_1, ..., T'_s] and [X̂'_1, ..., X̂'_s].
6) Calculate [IC_1, ..., IC_s] from Eqn. 15. Select the minimal value IC_min from [IC_1, ..., IC_s]; the corresponding l, X', P', T' and X̂' are selected.
7) For real-time data, calculate Q_i and T²_i. If Q_i > Q_α or T²_i > T²_α, further identify which variable causes the abnormal situation from the variable contribution plot.
8) Otherwise, set i = i + 1 and go to step 7.
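Steps 7-8 form the on-line part of the scheme; a minimal sketch follows, where compute_stats and the limits stand in for the quantities defined above:

```python
# On-line monitoring loop (steps 7-8). compute_stats is a placeholder
# for the Q and T2 calculation of the selected DPCA model; q_lim and
# t2_lim are the tolerance limits Q_alpha and T2_alpha.
def monitor(stream, compute_stats, q_lim, t2_lim):
    for i, sample in enumerate(stream):     # real-time data
        q, t2 = compute_stats(sample)
        if q > q_lim or t2 > t2_lim:
            # a variable contribution plot would identify the cause
            yield i, q, t2
```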
5. Application to dynamic process monitoring
In this section, the proposed dynamic monitoring scheme is applied to the monitoring problem of the Tennessee Eastman process. The Tennessee Eastman process, developed by Downs and Vogel (1993), consists of five major unit operations: a reactor, a condenser, a vapor-liquid separator, a recycle compressor, and a product stripper. The process has 41 measurements, including 22 continuous process measurements and 19 composition measurements, and 12 manipulated variables. Some disturbances are programmed for studying the characteristics of the control system, listed in Table 1.
Table 1. Process disturbances for the Tennessee Eastman process.
Case     Disturbance                                      Type
IDV(1)   A/C feed ratio, B composition constant           Step
IDV(2)   B composition, A/C ratio constant                Step
IDV(3)   D feed temperature                               Step
IDV(4)   Reactor cooling water inlet temperature          Step
IDV(5)   Condenser cooling water inlet temperature        Step
IDV(6)   A feed loss                                      Step
IDV(7)   C header pressure loss - reduced availability    Step
IDV(8)   A, B, C feed composition                         Random variation
IDV(9)   D feed temperature                               Random variation
IDV(10)  C feed temperature                               Random variation
IDV(11)  Reactor cooling water inlet temperature          Random variation
IDV(12)  Condenser cooling water inlet temperature        Random variation
IDV(13)  Reaction kinetics                                Slow drift
IDV(14)  Reactor cooling water valve                      Sticking
IDV(15)  Condenser cooling water valve                    Sticking
IDV(16)  Unknown                                          Unknown
The reference set contains 1000 samples from normal operation with a sampling interval of 3 min. Considering the performance of PCA in dealing with multivariate processes, we select the time window length set as [1 2 3]. From Eqn. 15, we compute the IC values for the different l. Based on the proposed IC approach, l is chosen as 3. The window moving width w is chosen as 4. Thus the data matrix is a (250×64) array, where 250 is the number of available data windows and 64 is the number of variables. A linear DPCA model is developed from the data matrix. Seventeen principal components are selected, which capture 97.7% of the variation in the reference set. The control limits shown in every plot correspond approximately to the 95% confidence region, determined using the methodology presented by Nomikos and MacGregor. To verify the effectiveness of the proposed IC approach, four comparative simulations for the DPCA model and the CPCA model are made. In the first case, the disturbance IDV(1) is introduced at sample 200. The result is shown in Fig. 1. Using the CPCA, many SPE values exceed the control limit even when the process is normal. However, the DPCA is able to detect the disturbance well. The second case involves the disturbance IDV(2), and the result is shown in Fig. 2. In general, the performance of dynamic PCA with appropriate l and w should be better than conventional PCA.
Fig. 1. SPE plots of data characterizing IDV(1) projected onto the DPCA model and the CPCA model.
Fig. 2. SPE plots of data characterizing IDV(2) projected onto the DPCA model and the CPCA model.
6. Conclusions Inspired by the principle of identifying a dynamic time-series model by determining the model order, we present in this paper an Information Criterion (IC) approach to select the time window length in dynamic PCA. A new dynamic monitoring algorithm with the proposed IC approach is then presented. Application to the TE process proved that the proposed IC approach is effective. Dynamic PCA with appropriate time window length and window moving width is able to detect and identify faults and abnormal events better than the conventional PCA approach.
7. References
Downs, J.J. and Vogel, E.F., 1993, A plant-wide industrial process control problem, Computers and Chemical Engineering, 17, 245-255.
Jenkins, G.M. and Watts, D.G., 1968, Spectral Analysis and Its Applications, Holden-Day, San Francisco.
Kano, M., Hasebe, S., Hashimoto, I. and Ohno, H., 2001, A new multivariate statistical process monitoring method using principal component analysis, Computers and Chemical Engineering, 25, 1103-1113.
Ku, W., Storer, R. and Georgakis, C., 1995, Disturbance detection and isolation by dynamic principal component analysis, Chemometrics and Intelligent Lab. Systems, 30, 179-196.
Li, Weihua and Qin, S.J., 2001, Consistent dynamic PCA based on errors-in-variables subspace identification, Journal of Process Control, 11, 661-678.
Lin, Weilu, Qian, Y. and Li, X.X., 2000, Nonlinear dynamic principal component analysis for on-line process monitoring and diagnosis, Computers and Chemical Engineering, 24, 423-429.
8. Acknowledgments Financial support from the National Natural Science Foundation of China (No. 29976015), the China Excellent Young Scientist Fund and the China Major Basic Research Development Program (G20000263) is gratefully acknowledged.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Tendency Model-based Improvement of the Slave Loop in Cascade Temperature Control of Batch Process Units Janos Madar, Ferenc Szeifert, Lajos Nagy, Tibor Chovan, Janos Abonyi University of Veszprem, Department of Process Engineering Veszprem, P.O. Box 158, H-8201, Hungary, [email protected]
Abstract The dynamic behaviour of batch process units changes with time, and this makes their precise control difficult. The aim of this paper is to highlight that the slave process of batch process units can have more complex dynamics than the master loop, and that very often this is the reason for unsatisfactory control performance. Since the slave process is determined by the mechanical construction of the unit, the above-mentioned problem can be effectively handled by a model-based controller designed using an appropriate nonlinear tendency model. The paper presents the structure of the tendency model of typical slave processes and presents a case study where real-time control results show that the proposed methodology gives superior control performance over the widely applied cascade PID control scheme.
1. Introduction In the pharmaceutical and food industries, high-value-added products are manufactured mainly in batch process units. The heart of these units is generally a jacketed stirred tank in which not only chemical transformation but also distillation, crystallisation, etc. can be performed. As the aim of the pharmaceutical industry is to produce high quality and purity products, the optimisation and control of operating conditions is the most efficient approach to produce efficiently, as well as to reach specific final conditions of the product in terms of quality and quantity (Cezerac, 1995). The S88.01 batch control standard defines three types of control needed for batch processing (ECES, 1997): • Basic control comprises the control dedicated to establishing and maintaining a specific state of the equipment and process. Hence, basic control includes regulatory control, interlocking, monitoring and exception handling. • Procedural control directs equipment-oriented actions to take place in an ordered sequence in order to carry out a process-oriented task. Hence, procedural control is characteristic of batch processes. • Co-ordination control directs, initiates and/or modifies the execution of procedural control and the utilisation of equipment entities. Examples of co-ordination control are algorithms for supervising the availability of equipment capacity, allocating equipment to batches, etc. When these control types are applied to the equipment, the resulting equipment entities provide suitable process functionality and control capability. However, many theoretical and practical problems have to be solved for successful implementation.
Recently, research has focused on the higher hierarchical levels (co-ordination control, like scheduling and optimisation, and procedural control) (Garcia, 1995; Terwiesch, 1998; Book, 2000), and basic control is assumed to be the same as the control of continuous processes. However, in the batch environment there may be higher requirements on the performance of basic control (Fisher, 1990). Due to the complexity of the chemical synthesis and the difficulty of estimating reactant compositions on-line, the control of reactors remains basically a temperature control problem, commonly performed directly or indirectly through heat exchangers with a heat transfer fluid circulating in the jacket surrounding the reactor. For this purpose, generally simple cascade control systems are used (Cezerac, 1995), and usually not very much attention is paid to the tuning of the slave control loop (i.e., in practice simple proportional controllers are used). However, according to our industrial experience, the performance of the complex and hierarchical solutions is primarily constrained by the performance of the split-range controller in the slave loop. Hence, this paper focuses on slave-loop control and presents a detailed case study where the impact of the slave loop is illustrated by the temperature control of a pilot plant presented in Section 2. The tendency model of the jacket of the reactor is given in Section 3, while in Section 4 this model is utilized in a model-based control algorithm. Based on the real-time control results presented in that section, some conclusions are drawn in Section 5.
2. Process Description: Prototype of Pharmaceutical Process Systems To study the control strategies of multipurpose production plants, a prototype of pharmaceutical process systems was conceptualized. The physical model of this system was designed and installed in our laboratory. The P&I diagram of the process is shown in Figure 1. As depicted in Figure 1, the central element of the process unit is a 50 liter stirred autoclave with a heating-cooling jacket. The unit contains feeding, collector and condenser equipment.
Figure 1. The prototype of the pilot process unit.
Through the jacket, the direct heating-cooling system allows cooling with chilled water or heating with steam in steam or hot-water mode. In the hot-water heating mode the water is circulated by a pump, the steam is introduced through a mixer, while the excess water is removed through an overflow. Since the system is multipurpose and operates in batch regime, precise temperature control is not a simple problem. In industrial practice, a cascade structure is used to decompose the problem into two parts. Contrary to the classical PID-PID cascade scheme, in this paper model-based controllers are applied at both the master and slave levels. The manipulated variable at the master level is the jacket temperature; the disturbance (which can be indirectly determined in model-based solutions) is the heat flux of the processes taking place in the reactor. The manipulated variables of the slave process are the valve signals (V1: steam valve, V2: cooling water valve); the disturbance is the reactor temperature; the output is the characteristic jacket temperature. The latter can be defined as the inlet, outlet or average temperature of the jacket. This choice will influence the dynamics of both the slave and master loop processes. Based on dynamic simulation, experimental results and experience gained on other industrial systems, we assigned the inlet temperature of the jacket as the output of the slave process. Applying the above decomposition, the master process can be modelled as first order plus time delay (FOPTD), while the slave process should be modelled as a more complex nonlinear system, as will be presented in the following section.
3. The Model of the Jacket The jacket and the circulating liquid can be described using the common lumped parameter enthalpy or heat balances given in the chemical engineering literature. In the model, zero-volume distributors and mixers are applied, and the overflow as well as the feeds of the steam and the fresh cooling water are taken into account. The obtained simplified first-principles model can be regarded as a tendency model (Filippi-Bossy, 1989) of the most important phenomena. Its scheme is given in Figure 2, where TH is the temperature of the cooling water coming from the environment and TF is the equivalent steam temperature calculated from the boiling temperature, the latent heat and the specific heat. The valve characteristics are given in the form of second-order polynomials. The two first-order transfer functions are obtained from the lumped parameter model of the jacket and the model of the thermometer at the jacket inlet. The K1, K2 and K3 constants can be calculated from the parameters of the first-principles model of the process or can be identified using experimental data. For this purpose, open-loop experiments were conducted over the whole operating range of the composite manipulated variable u (because of the split-range control, V1 and V2 are not opened at the same time; u = 100% means that V1 is fully open, u = 0% means that V2 is fully open). For comparison, beside the proposed tendency model (M1), a FOPTD model (M2) was identified based on the data shown in Figure 3. The significant deviation between the two models supports our practical experience that controller design based on linear models cannot provide good control performance in the slave loop. To prove this assumption, real-time control experiments are presented in the following section, where the previously presented tendency model is applied in model-based temperature control of the batch process unit.
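The structure just described can be written down compactly. The sketch below is one possible reading of such a tendency model, with the same ingredients (split-range polynomial valves, the temperatures TF and TH, a first-order lag); how the constants K1-K3 enter is an assumption, and all numbers are placeholders to be identified from the open-loop experiments described in the text.

```python
# Sketch of a slave-process tendency model: split-range valves with
# second-order polynomial characteristics and a first-order lag.
def valve(x, a, b, c):
    """Second-order polynomial valve characteristic."""
    return a * x**2 + b * x + c

def jacket_step(T_in, u, T_reactor, dt, p):
    """One Euler step of the jacket inlet temperature T_in;
    u in [0, 100]: u = 100 -> steam valve V1 full, u = 0 -> V2 full."""
    if u >= 50.0:   # split range: steam heating
        drive = p["K1"] * valve((u - 50) / 50, *p["v1"]) * (p["TF"] - T_in)
    else:           # split range: cooling water
        drive = p["K2"] * valve((50 - u) / 50, *p["v2"]) * (p["TH"] - T_in)
    drive += p["K3"] * (T_reactor - T_in)  # coupling to the reactor side
    return T_in + dt * drive / p["T1"]     # first-order lag, T1 constant
```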
Figure 2. Schematic model of the reactor jacket (T1, T2: time constants; DT: time delay; K: gain; KAR: nonlinear valve characteristic).
Figure 3. Comparison of slave models and the process.
4. Temperature Control The proposed simple but structurally transparent model can easily be utilised in several model-based control algorithms. We chose the so-called Predictor-Corrector Controller (PCC) scheme, which we developed and tested in many pharmaceutical applications in the previous decade (Abonyi et al., 1999). The basic scheme of the PCC algorithm is shown in Figure 4. Similarly to the Internal Model Control (IMC) scheme, the model is modified at every time instant according to the measured process variables (correction). The control signal is calculated analytically based on this corrected tendency model, without solving any optimisation problem. The operation of the proposed slave loop controller is demonstrated as part of the complete reactor temperature control, where the master model-based controller was based on an FOPTD model. The parameters of the models were determined using the experimental data given in Figure 3. For comparison, a classical PID-PID cascade controller was also designed for the process; its parameters were optimized on the models of the master and slave processes and fine-tuned experimentally. The performance of this cascade PID-PID controller in a heat-up operation is shown in Figure 5. (The manipulated variables are constrained because of the physical limitations of the system.) The same experiment was conducted using the model-based PCC-PCC controller scheme, and the
result is given in Figure 6. This result shows that the proposed methodology gives superior control performance over the widely applied cascade PID control scheme. As the master process is a simple FOPTD system that is easy to control, we have realized that mainly the slave controller determines the overall control performance.
Figure 4. Scheme of the PCC algorithm (model: linear dynamics, steady-state nonlinearity, physical constraints; corrector: handles the modelling error; predictor: inverse model).
Figure 5. Reactor temperature control using a PID controller.
Figure 6. Reactor temperature control applying a PCC controller.
This is because the proposed model-based slave loop controller is able to handle effectively the nonlinearities and the constraints of the valves, and accurately describes the dynamics of the jacket of the reactor. This observation confirms our practical experience gained during the installation and tuning of model-based controllers in the Hungarian pharmaceutical industry over the last ten years. The advantages of PCC are the following: superior performance over PID control; effective constraint handling (no windup); the parameters of the controller can easily be determined by simple process experiments; and the complexity of the controller is comparable to that of a well-furnished PID controller. Furthermore, the analysis of the modelling error gives the possibility of process monitoring.
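The predictor-corrector logic can be sketched compactly. The fragment below is a schematic reading of Figure 4 under an assumed first-order internal model; it is not the controller implemented on the pilot plant.

```python
# Schematic predictor-corrector controller (PCC): correct the internal
# model with the latest measurement, then invert the corrected model to
# find the input that reaches the setpoint one step ahead. The
# first-order model dy/dt = -a*y + b*u is an assumed stand-in for the
# tendency model of Section 3.
class PCC:
    def __init__(self, a, b, dt):
        self.a, self.b, self.dt = a, b, dt

    def step(self, y_meas, setpoint):
        # corrector: re-initialize the model state at the measurement,
        # absorbing the modelling error
        y = y_meas
        # predictor / inverse model: choose u such that the one-step
        # prediction y + dt*(-a*y + b*u) equals the setpoint
        u = ((setpoint - y) / self.dt + self.a * y) / self.b
        return u  # clip to the valve limits in a real application
```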
5. Conclusions The reason for unsatisfactory control performance of batch process units is very often the slave process, which can have more complex dynamics than the master loop. As the slave process is determined by the mechanical construction, it is straightforward to design a model-based controller based on a nonlinear tendency model of the slave process. It has been shown that the parameters of the model-based slave controller (namely the parameters of the tendency model) can easily be determined by simple process experiments, and the complexity of the controller is comparable to that of a well-furnished PID controller. Real-time control results showed that the proposed controller effectively handles the constraints (no windup) and gives superior control performance.
6. References
Cezerac, J., Garcia, V., Cabassud, M., Le Lann, M.V. and Casamatta, G., 1995, Comp. Chem. Eng., 19, S415.
European Committee for Electrotechnical Standardisation, 1997, S88.01, EN 61512-1.
Garcia, V., Cabassud, M., Le Lann, M.V., Pibouleau, L. and Casamatta, G., 1995, The Chem. Eng. Biochem. Eng. J., 59, 229.
Terwiesch, P., Ravemark, D., Schenker, B. and Rippin, D.W.T., 1998, Comp. Chem. Eng., 22, 201.
Book, N.L. and Bhatnagar, V., 2000, Comp. Chem. Eng., 24, 1641.
Fisher, T.G., 1990, Batch Control Systems - Design, Application and Implementation, Instrument Society of America.
Abonyi, J., Chovan, T., Nagy, L. and Szeifert, F., 1999, Comp. & Chem. Eng., 23, S221.
Filippi-Bossy, C., Bordet, J., Villermaux, J., Marchal-Brassely, S. and Georgakis, C., 1989, Comp. Chem. Eng., 13, 35.
7. Acknowledgements The authors would like to acknowledge the support of the Cooperative Research Center (VIKKK) (projects 2001-1-7 and II-1B), and funding from the Hungarian Ministry of Education (FKFP-0073/2001 and 0063/2000) and from the Hungarian Research Fund (OTKA T037600). Janos Abonyi is grateful for the financial support of the Janos Bolyai Research Fellowship of the Hungarian Academy of Sciences.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Consistent Malfunction Diagnosis Inside Control Loops Using Signed Directed Graphs Mano Ram Maurya^a, Raghunathan Rengaswamy^b and Venkat Venkatasubramanian^a,* ^a Laboratory for Intelligent Process Systems, School of Chemical Engineering, Purdue University, West Lafayette, IN 47907, USA. ^b Department of Chemical Engineering, Clarkson University, Potsdam, NY 13699-5705, USA.
Abstract Though signed directed graphs (SDG) have been widely used for modeling control loops, due to a lack of adequate understanding of SDG-based steady state process modeling, special and cumbersome methods are used to analyze control loops. In this article, we discuss a unified SDG model for control loops (proposed by Maurya et al. (2002b)), in which both disturbances (sensor bias etc.) and structural faults (sensor failure etc.) can be easily modeled. Various fault scenarios such as external disturbances, sensor bias, controller failure etc. are discussed. Two case studies are presented to show the utility of the SDG model for fault diagnosis. A tank-level control system is used as the first case study. The second case study deals with fault diagnosis of a multi-stream controlled CSTR.
1. Introduction Various types of faults occurring in a chemical plant can be broadly categorized as (1) faults originating outside control loops and (2) faults originating inside control loops. A properly functioning controller (emulating integral control action) masks the faults, since the effect of disturbances is passed to the manipulated variable even though the error signal is zero. Thus, the presence of control loops makes fault diagnosis more challenging. Among many models, such as fault trees, completely numerical models etc., signed directed graphs (SDG) have been widely used for modeling control loops. Iri et al. (1979) were the first to use SDG for modeling a chemical process. Later, Oyeleye and Kramer (1988) discussed SDG-based steady state analysis and the prediction of inverse response (IR) and compensatory response (CR). Recently, Chen and Howell (2001) presented fault diagnosis of controlled systems where SDG has been used to model control loops. Previous researchers used special methods to deal with control loops, since SDG-based steady state modeling was not fully explored. Recently, Maurya et al. (2002c) presented a systematic framework for SDG-based process modeling and analysis. We also proposed a unified SDG model for control loops in which both disturbances (e.g.
* Author to whom all correspondences should be addressed, e-mail: [email protected], phone: (765) 494 0734, fax: (765) 494 0805.
sensor bias) as well as structural faults (e.g. sensor failure) can be easily modeled and analyzed (Maurya et al., 2002b). In this article, we explore the fault diagnostic applications of the proposed model. This article is organized as follows. In the next section, we present a brief overview of our SDG-related work. In Section 3, we discuss the unified SDG model for control loops. Various fault scenarios are also analyzed. In Section 4, two case studies are presented to show the diagnostic efficiency of the proposed framework for fault diagnosis of systems with control loops. The article is concluded with a discussion of future work.
2. Overview of SDG-based Process Modeling and Analysis A brief discussion of SDG-based process modeling and analysis is presented here. A detailed discussion can be found in Maurya et al. (2002c). A SDG for a process can be developed from the model equations of the process. A process can be described by differential equations (DE), algebraic equations (AE) or differential algebraic equations (DAE). The initial response of a DE system can be predicted by propagation through the shortest paths from the fault node to the process variables. The initial response of a DAE system with only one perfect matching (or only negative cycles) in the algebraic part can be predicted by propagation through shortest paths, assuming that the arc lengths of differential arcs and algebraic arcs are 1 and 0, respectively. A perfect matching between the equations and dependent variables is a complete matching in which each equation is matched with a variable and no variable or equation is left unmatched. The response of an AE system can be predicted by analyzing the corresponding qualitative equations or by propagation through the SDG (provided that the AE system has only one perfect matching or there are no positive cycles in the SDG for the AE system). The steady state response of a dynamic system can be predicted by using the SDG for the corresponding steady state equations. Maurya et al. (2002a) have shown that the qualitative equations ensure correct prediction of the steady state behavior of systems that exhibit CR. This leads to considerable simplification in SDG-based steady state analysis of control loops. A succinct discussion is presented next. For a detailed discussion, see Maurya et al. (2002a).
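The propagation rules just summarized are simple to implement. The sketch below (an illustration, not the authors' algorithm) propagates qualitative signs from a fault node through a signed digraph; the graph encoding and the breadth-first shortest-path traversal are assumptions.

```python
# Sign propagation through an SDG: predict the qualitative deviation of
# every variable caused by a single fault node, propagating along
# shortest paths (the initial-response rule discussed above).
from collections import deque

def propagate(arcs, fault_node, fault_sign):
    """arcs: dict node -> list of (successor, arc sign in {+1, -1})."""
    signs = {fault_node: fault_sign}
    queue = deque([fault_node])
    while queue:
        node = queue.popleft()
        for succ, arc_sign in arcs.get(node, []):
            if succ not in signs:        # first (shortest) path wins
                signs[succ] = signs[node] * arc_sign
                queue.append(succ)
    return signs

# Toy PI-loop digraph in the spirit of Figure 1(a), with kc, kv, k > 0.
arcs = {"Xm": [("e", -1)], "e": [("CS", +1)], "CS": [("VP", +1)],
        "VP": [("X", +1)], "X": [("Xm", +1)]}
# A positive sensor bias drives X down, i.e. [X] = -[Xm,bias]:
print(propagate(arcs, "Xm", +1))
# {'Xm': 1, 'e': -1, 'CS': -1, 'VP': -1, 'X': -1}
```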
3. Control Loops - a Unified Framework Most of the control loops used in industry are PI control loops (as far as their behavior is concerned). All control loops considered in this section are assumed to be PI control loops unless otherwise stated. Oyeleye and Kramer (1988) analyzed the PI control loop as a system exhibiting CR. Since it has already been shown that CR is implicitly handled by the steady state equations, no special method is needed to analyze control loops (Maurya et al., 2002b). Depending upon the qualitative state of the error signal, appropriate changes in the perfect matching handle the underlying scenario. 3.1. Model and preliminary analysis of a control loop The fault variables considered are the bias in the measurement, the controller signal and the valve position, the set point (in the context that the set point can be changed by an external agent), and failures of the measurement, the controller signal and the valve position. The model for a PI controller is given below.
Figure 1. SDG for a PI control loop: (a) information flow for e ≠ 0; (b) information flow for e = 0. (Positive and negative arcs; k > 0, kc > 0, kv > 0.)
X_m = δ(X_m,fail)·X + X_m,bias    (1)
e = X_s - X_m    (2)
CS_P = k_c·e    (3)
dCS_I/dt = (k_c/τ_I)·e    (4)
CS = δ(CS_fail)·(CS_P + CS_I) + CS_bias    (5)
VP = δ(VP_fail)·k_v·CS + VP_bias    (6)
In the above equations, X, CS and VP refer to the controlled variable, controller signal (controller output) and valve position (or manipulated variable), respectively. The subscripts m, P and I refer to measurement, P control and I control, respectively; k_c and k_v are the controller gain and the valve gain, respectively. Failure inside the control loop is modeled by introducing non-zero deviations in the corresponding failure and bias variables. For example, a sensor failing high is modeled as X_m,fail = 1 and X_m,bias = '+', etc. The effect of VP and external variables (say X_jf) on X is modeled as:

X = k·VP + Σ a_ijf·X_jf    (7)
Usually the above equation is matched with X in the absence of a control loop, and k represents the process gain (k_p). The system described by equations 1-7 has two perfect matchings. One of the SDGs of this DAE system for [k_c] = [k] = [k_v] = '+' is shown in Figure 1(a). There is only a negative feedback cycle in the AE part of the SDG and hence propagation is valid (Maurya et al., 2002c). Now we discuss the initial and steady state response of the PI controller. Initial response: Figure 1(a) shows the SDG for the DAE system. The initial response can be predicted by propagation through the shortest paths in this SDG. The controller effectively behaves as a P controller. Thus equation 4 and CS_I can be eliminated from the analysis. The remaining equations constitute the model of a P controller. For non-zero disturbances or bias, the error e ≠ 0 (corresponding to imperfect control). Information flow is as expected. The SDG also shows the interaction between the control loop and the external system.
Steady state response: Figure 1(b) shows the SDG for the corresponding AE. Ideally speaking, steady state refers to the state when all the derivatives vanish, i.e. e = 0 in equation 4. The arc (e → CS_P) becomes ineffective. Now X_s = X_m and, as observed in the previous section, the perfect matching changes (the AE system has only one perfect matching) and information flow is exactly opposite to that of the previous case. Perfect control is achieved under such conditions. Figure 1(b) also shows the interaction of the control loop with the external system at steady state. Notice the signs of the arc X_jf → X (sign [a_ijf]) in Figure 1(a) and the arc X_jf → VP (sign [-a_ijf/k]) in Figure 1(b). In another situation, the controller and/or valve opening saturate. When X has reached a constant value, in a loose sense one can say that the system has reached steady state; still, e ≠ 0. For all practical purposes, controller saturation can be modeled through P control action with a suitably modified gain (corresponding to the ratio of the saturated value of (CS_P + CS_I) to the final value of e). This situation corresponds to imperfect control and is explained through Figure 1(a). Thus the concept of perfect matching handles everything within the proposed framework, guaranteeing completeness. Further, no spurious solutions are generated, and such a framework is useful for large-scale applications. Now a number of fault (disturbance) and failure scenarios are discussed with respect to perfect control and imperfect control. 3.2. Perfect control The controller can be modeled as an integral controller (Figure 1(b)) because e = 0. Various faults are discussed below. Changes in external disturbances: Integral action keeps the controlled variable at its set point. The information flow is:
X_s → X_m → X → VP and X_jf → VP → CS → CS_I.
CSj.
Faults inside the control loop: Three types of faults are: • Sensor bias: The propagation is Xm,bias —> X. Qualitatively, [X] = -[Xm,bias]' • CS bias: The propagation is CSuas -^ CSj. Further, C5='0' and X ='0\ • VP bias: [CS] = -[VPbias] and VP = X ='0\ 3.3. Imperfect control Imperfect control is exhibited due to large extemal faults or control loop failure. The error signal is non-zero (Figure 1(a)). CS builds up till the controller saturates. Two scenarios are: (i) large extemal disturbance, set point change or sensor bias in which the controller behaves as a P controller and, (ii) failure inside the control loop in which the control loop is open. Three types of failure are: • Sensor failure: The arc X —» Xm is cut. Propagation yields [X] = -[Xm^bias]• Controller failure: The paths from e to CS are cut and CS = CSbias• Control valve failure: The arc CS —» VP is cut and X = VP = VPtias-
4. Case Studies Case study 1 deals with fault diagnosis in a tank level-control system. Case study 2 deals with fault diagnosis of a CSTR system.
477 Jo
^n
VP fail 0 ^ kc<0
\
/VP xv"^
\ ^«
y cs, • Positive arcs (a) The controlled tank system
X;;^ Negative arcs
(b) SDG for the perfect control case
Figure 2. The tank-level control system and its SDG. 4.1. Case study 1: Tank-level control system The importance of this case study is three-fold: (1) to explain simple concepts, (2) comparison with the results discussed in the literature and, (3) to emphasize the ability to perform multiple fault diagnosis. The controlled tank and its SDG under the perfect control scenario are given in Figure 2 (a) and (b), respectively, fi and fo are the inlet and outlet flowrates, respectively. L is the level of the liquid in the tank, kc is negative. The measurements are fo, Xm and CS. Diagnosis for two fault scenarios is discussed below. Positive sensor bias: The observed pattern is [fo Xm CS] = [0 0 -f ]. Any fault in fi or set point is ruled out. The candidate faults are VP^ias = ' - ' (=> X='0') or Xm,bias ='+' (=> X ='-'). Further resolution cannot be achieved. Negative sensor bias and positive inletflowrate: One observed pattern could be [fo Xm CS] [+0 0]. fi ='+' is inferred by direct propagation. But this alone cannot explain CS ='0\ Forward simulation shows that if fi ='+' were the only fault then CS ='+\ So the assumption of multiple faults is necessary. One can easily see that the other fault is VPtias
= -^' (X
='0')
or XmMas
= -' (X
= V ) .
A similar case study has been presented by Oyeleye and Kramer (1988) to explain SDGbased analysis of compensatory response. 4.2. Case study 2: A multi-stream controlled CSTR The case study has been taken from Chen and Howell (2001). A detailed SDG-analysis of this case study has been presented elsewhere (Maurya et ai, 2002a). In this section, we present SDG-based control loop diagnosis for this case study. The SDG for the stable system under perfect control scenario is shown in Figure 3. A similar SDG has been presented by Maurya et al. (2002^?) to show that the proposed framework can model type-A interaction. There are no cycles in the SDG. Thus qualitative simulation or fault diagnosis can be performed by using forward or backward propagation, respectively. For a sample comparison with the results discussed in the literature (Table 5 of (Chen and Howell, 2001)), consider fault 1 i.e. L-sensor-bias-high (Lm,bias ='+')• Forward propagation to node LC shows that LC ='-\ Similarly other results can be reproduced. Let us consider diagnosis of the same scenario. The measurements are [LC FC TC FJC FJ CA] = [ + - 4-]. Back-propagation from node FC reveals that the candidate fault set is {FVuas ='-\ FMuas ='-\ LM^as ='+'}•
478
LV.:,
*" Positive arcs Negative arcs
Figure 3. SDGfor the multi-stream controlled CSTR- perfect control. The first two candidates are ruled out since they cannot produce LC ='-'. Thus the fault is LMbias ='+' (complete fault resolution).
5. Conclusions and Future Work A brief discussion of our SDG-related work, followed by a detailed discussion of the SDG-based modeling and analysis of control loops, has been provided. Two case studies have been presented to elucidate the use of the framework for fault diagnosis. In the future, the framework will be used for control loop monitoring and distributed fault diagnosis in large-scale systems.
6. References
Chen, J. and Howell, J., 2001, A self-validating control system based approach to plant fault detection and diagnosis, Comp. & Chem. Engg., 25, 337-358.
Iri, M., Aoki, K., O'Shima, E. and Matsuyama, H., 1979, An algorithm for diagnosis of system failures in the chemical process, Comp. & Chem. Engg., 3(1-4), 489-493.
Maurya, M.R., Rengaswamy, R. and Venkatasubramanian, V., 2002a, A signed directed graph-based systematic framework for malfunction diagnosis inside control loops, Technical Report CIRC-02-2, Purdue University.
Maurya, M.R., Rengaswamy, R. and Venkatasubramanian, V., 2002b, A systematic framework for the development and analysis of signed digraphs for chemical processes: Part II - Control loops and flowsheet analysis, submitted to Ind. Engng. Chem. Res.
Maurya, M.R., Rengaswamy, R. and Venkatasubramanian, V., 2002c, A systematic framework for the development and analysis of signed digraphs for chemical processes: Part I - Algorithms and analysis, submitted to Ind. Engng. Chem. Res.
Oyeleye, O.O. and Kramer, M.A., 1988, Qualitative simulation of chemical process systems: Steady-state analysis, AIChE J., 34(9), 1441-1454.
Financial Risk Control in a Discrete Event Supply Chain
Fernando D. Mele(+), Miguel Bagajewicz(++), Antonio Espuna(+) and Luis Puigjaner(+)(#)
(+) Chemical Engineering Department, Universitat Politecnica de Catalunya, ETSEIB, Diagonal 647, E-08028, Barcelona, Spain. Phone: +00 34 934016733.
(++) School of Chemical Engineering and Materials Science, University of Oklahoma, 100 E. Boyd, T-335, Norman OK 73019, USA. On sabbatical leave at ETSEIB.
(#) Corresponding Author
Abstract
In this work, a discrete event supply chain is modeled from the point of view of one of its members. The model takes uncertainty into account and determines an optimal ordering policy such that profit is maximized and financial risk is controlled. Two cases are considered: in one, the behavior of the other members of the chain is known while the demand is uncertain; in the other, the behavior of the other members is itself uncertain.
1. Introduction
A Supply Chain Management (SCM) problem is studied in this paper by means of discrete-event simulation. The paper is an extension to discrete event modeling of the models presented by Perea-Lopez et al. (2001). We add uncertainty in the form of two-stage modeling and financial risk management (Barbaro and Bagajewicz, 2002a,b). This paper is organized as follows: the SC dynamic modeling is first described. In the following sections, the deterministic and the stochastic models are described. Next, the risk concept utilized is explained. Finally, some conclusions and ideas for future work are presented.
2. Modeling the Supply Chain
A supply chain (SC) is considered with all entities acting as independent agents, each of them represented by a collection of states and transitions (Figure 1). The model has been constructed using two Matlab toolboxes, Stateflow and Simulink, and it considers the SC as a decentralized system in which there is no global coordinator and every entity makes decisions locally. The demand has been modeled as a set of events distributed over the time horizon of the study, each of them having an associated amount of material and time of occurrence. The inter-arrival intervals are uniform, but the associated amounts are distributed according to a normal distribution.
Figure 1. Generic unit scheme.
The inventory policy aims at determining when a replenishment order should be placed and how large this order should be. At every review time R, the inventory position I is checked. All the members of the supply chain, except the one under consideration, behave as follows: if I is below the reorder threshold s, a replenishment quantity is ordered according to a known law. One example of such a law is the proportional one used by Perea-Lopez et al. (2001): if the inventory is above the threshold s, nothing is done until the next review; otherwise an order u is placed according to the proportional law, u = k(s - I). Total profit is used as the performance index. It considers sales, purchasing costs and storage costs for materials and orders over the simulation time horizon.
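As an illustration, the review-point replenishment rule can be written in a few lines of Python; the parameter values in the usage example are illustrative, not taken from the case study.

```python
# A minimal sketch of the proportional replenishment law described above,
# assuming reorder point s and proportionality constant k are given.
def review_order(inventory_position: float, s: float, k: float) -> float:
    """At a review instant, return the order size u (0 if I is above s)."""
    if inventory_position >= s:
        return 0.0                       # wait until the next review
    return k * (s - inventory_position)  # proportional law u = k(s - I)

# Example: with s = 50 units and k = 0.5, an inventory position of 30
# triggers an order of 10 units at the review instant.
print(review_order(30.0, s=50.0, k=0.5))
```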
3. Deterministic Model
Six generic units have been connected as Figure 2 shows. The material flow moves from the entity SIP to the customers, and the ordering flow moves in the opposite direction. The inventory control policy described above has been applied to all the entities belonging to the model, except DIB.
Figure 2. Supply Chain Scheme.
Figure 3. Relation between the SC and one entity.
The case posed considers that a given plant wishes to make decisions in order to maximize its own profit. The manager of this plant knows the modus operandi of the whole chain by means of a simulation model, and has information about the future demand.
In our case study, the distribution center DIB receives orders that the system places (ORin). It can either respond by delivering materials (MAout) or save the order if it does not have enough material. On the other hand, the system sends materials to DIB (MAin), and, if necessary, DIB places orders (ORout) to the system. The variable that has to be manipulated to modify the profit of DIB is ORout. The question to answer is what quantity of material should be ordered at times τ0, τ1, τ2 and τ3 in order to maximize DIB's profit. There exist three possible discrete values at each of the four time instants. One of the permitted values is chosen at each time τ0, τ1, τ2 and τ3, and one simulation of the system is executed. For each simulation, the profit is calculated, and the combination with the largest profit is kept.
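A minimal sketch of this exhaustive search is shown below; simulate_profit is a hypothetical stand-in for the Stateflow/Simulink supply-chain simulation, and the permitted order sizes are illustrative.

```python
# Exhaustive search over the four ordering instants: three permitted order
# sizes at each instant give 3**4 = 81 simulations per scenario.
from itertools import product

def simulate_profit(orders):  # hypothetical stand-in for the SC simulation
    return -sum((o - 10) ** 2 for o in orders)

permitted_sizes = [0, 10, 20]            # illustrative discrete values
best = max(product(permitted_sizes, repeat=4), key=simulate_profit)
print(best, simulate_profit(best))
```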
Figure 4. Demand and Time instants in which orders have to be placed.
4. Stochastic Model
The demand is modeled using normal distributions and sampled scenarios. The amount ordered at time τ0 is considered a first-stage variable, that is, a decision made before the uncertainty is revealed, whereas the amounts of material ordered in the next periods, τ1, τ2 and τ3, are considered second-stage variables, i.e. decisions made after the uncertainty materializes.
5. Financial Risk
This work applies the financial risk concept defined in Barbaro and Bagajewicz (2002a). Financial risk is the probability of a certain design x not meeting a certain target profit level Ω. Figure 5 represents a typical curve that describes the risk as a function of different profit targets. The objective is to reduce the risk for certain aspiration levels.
Risk(x, Ω) = Σ_{s∈S} p_s P(Profit_s(x) < Ω)    (1)
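For a sampled, equiprobable scenario set, Eq. (1) reduces to counting the scenarios whose profit falls below the target; a minimal sketch, with illustrative profit values, follows.

```python
# Empirical risk curve from scenario profits: with equiprobable scenarios,
# the risk at target omega is the fraction of profits falling below omega.
def risk(profits, omega):
    return sum(p < omega for p in profits) / len(profits)

scenario_profits = [-500.0, 200.0, 800.0, 1100.0, 2500.0]  # illustrative
for omega in (0.0, 1000.0, 2000.0):
    print(omega, risk(scenario_profits, omega))
```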
Figure 5. Typical risk curve.
6. Results
Since the size of the orders to be requested is chosen among three values, and there are four time instants at which an order has to be placed, there are 81 possible combinations to simulate in each scenario. For each of these configurations, taking into account every scenario, the expected value of the profit, E(Profit), has been calculated, and the configuration with the largest expected profit is picked. In the deterministic case the values for the demand have been set to 7 units of product A every five simulation steps and 3 units of B with the same frequency. Moreover, the safety inventory level has been set to 50 units for all the entities except SIP and PI, where the selected value has been 100 units. In the first case described below, variances of 3 and 2 have been added to the two deterministic values of the demand size. In the second case, a variance of 30 has been added to the deterministic value of the safety inventory level.
6.1. First case: uncertain demand
In this case, it has been considered that the uncertainty is only in the demands. The model used 100 scenarios and three discrete values at each time. Three curves of maximum profit have been generated (Figure 6). Decisions can be taken by observing the chart and by comparing the expected profit E(Profit) and risk values for each alternative (Table 1). It is important to notice that these curves represent the maximum E(Profit) achieved for each value of τ0. The negative profit values correspond to scenarios in which the sales are smaller than the purchase or storage costs in the simulation. For example, if a customer asks for materials and the inventory level is not high enough, the orders are accumulated and DIB incurs a penalty cost.
Figure 6. Risk curves for the stochastic first case.
Figure 7. Risk curves for the stochastic second case.
Table 1. Results for the stochastic first case.
Order size at τ0    E(Profit) [€]    Risk(Ω = 1000) [%]    Risk(Ω = 2000) [%]
0                   1043             46                    86
10                  952              45                    88
20                  897              50                    87
6.2. Second case: uncertain ordering policy in third parties
In this case, it has been considered that the uncertainty is in one of the parameters of the ordering policy of the entity RIB. It has been supposed that the uncertain parameter is the reorder point s of the inventory control policy of RIB, and that the values of this parameter belong to a normal distribution with a given mean value and variance. The same procedure as in the first case has been applied. The results can be seen in Figure 7 and Table 2.

Table 2. Results for the stochastic second case.
Order size at τ0    E(Profit) [€]    Risk(Ω = 1000) [%]    Risk(Ω = 2000) [%]
0                   940              47                    89
7                   857              52                    92
14                  780              56                    92
6.3. Risk management
In both cases above, as the profit is maximized for each option of the first-stage variable (size of the order at τ0), the risk also increases. Given the simulation-based approach used, solutions with smaller risk are found by inspection. Consider case one (uncertainty in demands) and an aspiration level of Ω = 500. The risk at this level was computed for all the simulations and the smallest was chosen. The curves corresponding to maximum profit and reduced risk are shown in Figure 9. The size of the order picked for the reduced-risk case is now τ0 = 10, as opposed to zero for the maximum profit case. The expected profit reduces from 1043 to 940. The risk at an aspiration level of 500 is reduced from 34% to 28%, which is significant. The risk of losing money, that is, the risk at an aspiration level of 0, is reduced for this solution from 19% to 14%, again a significant reduction. The value of the downside risk at an aspiration level of 0, that is, the integral of the risk curve from -∞ to 0 (Barbaro and Bagajewicz, 2002a), also decreases from 40 to 30.
Figure 9. Maximum Profit and Reduced Risk curves.
7. Conclusions A SC is modeled in this paper, determining the optimal ordering policy of one of the members in conditions in which the behavior of the other members is perfectly known and the demands are uncertain. A second case was considered where the demands are certain and the parameters of the order policy models of the other members are uncertain. It has been shown how financial risk can be managed. Extensions to the consideration of more uncertain parameters as well as decentralized control with sharing of information or centralized control are work in progress.
8. References
Applequist, G.E., Pekny, J.F. and Reklaitis, G.V., 2000, Risk and uncertainty in managing chemical manufacturing supply chains, Comp. Chem. Eng., 24.
Barbaro, A.F. and Bagajewicz, M., 2002a, Managing Financial Risk in Planning under Uncertainty, Part I: Theory, AIChE Journal, submitted.
Barbaro, A.F. and Bagajewicz, M., 2002b, Managing Financial Risk in Planning under Uncertainty, Part II: Applications, AIChE Journal, submitted.
Law, A.M. and Kelton, W.D., 1991, Simulation Modeling & Analysis, McGraw-Hill International Editions.
Perea-Lopez, E., Grossmann, I.E., Ydstie, B.E. and Tahmassebi, T., 2001, Dynamic Modeling and Decentralized Control of Supply Chains, Ind. Eng. Chem. Res., 40.
9. Acknowledgements
Financial support received from the Generalitat de Catalunya (FI programs and project GICASA-D) and from the European Community project VIPNET (GlRDT-CT-2000-00318) is fully appreciated. Support from the Ministry of Education of Spain for the sabbatical stay of Dr. Bagajewicz is acknowledged.
Control Application Study Based on PROCEL
Q.F. Meng, J.M. Nougues, M.J. Bagajewicz and L. Puigjaner*
Universitat Politecnica de Catalunya, Chemical Engineering Department, Av. Diagonal 647, E-08028 - Barcelona (Spain), Tel.: +34-93-401-6733 / 6678, Fax: +34-93-401-0979.
*To whom correspondence should be addressed. E-mail: [email protected]
Abstract
This is a comparative study of control strategies under different operating conditions. To achieve this, a configurable process scenario at pilot plant scale (PROCEL) has been built at the Universitat Politecnica de Catalunya (UPC). In this work, the following study has been carried out in sequence:
1. The general steps for process control development are walked through.
2. In the Sattline DCS, different configurations have been programmed for a continuous-mode case study.
3. Using the commercial software MATLAB, the performance of various tuning and optimisation techniques, including a Genetic Algorithm, has been reviewed.
The results of the above study have been tested.
1. Introduction
Objectives: The object is to study different control strategies using pilot plant data. It is a simple control problem, but the aim is to apply different techniques. The majority of the controllers used in industry are of PID type; a large industrial process may have hundreds of controllers of this type, which have to be tuned individually to match the process dynamics in order to provide good and robust control performance. The traditional tuning methods are reviewed in Part 2. In this work, an ITAE index acts as the objective function, and two optimisation methods, a Newton search and a Genetic Algorithm, are both carried out.
Genetic Algorithms' Overview: The Genetic Algorithm is a stochastic global search method that mimics the metaphor of natural biological evolution. GAs operate on a population of potential solutions
applying the principle of survival of the fittest to produce better and better approximations to a solution. At each generation, a new set of approximations is created by selecting individuals according to their level of fitness in the problem domain and breeding them together using operators borrowed from natural genetics. This process leads to the evolution of populations of individuals that are better suited to their environment than the individuals they were created from, just as in natural adaptation.
GAs versus Traditional Methods: The GA differs substantially from more traditional search and optimisation methods. The four most significant differences are: GAs search a population of points in parallel, not a single point; GAs do not require derivative information or other auxiliary knowledge, since only the objective function and corresponding fitness levels influence the direction of search; GAs use probabilistic transition rules, not deterministic ones; and GAs work on an encoding of the parameter set rather than the parameter set itself (except where real-valued individuals are used). In cases where a particular problem does not have one individual solution, for example a family of Pareto-optimal solutions, as is the case in multi-objective optimisation and scheduling problems, the GA is potentially useful for identifying these alternative solutions simultaneously.
Methodology:
1. System identification: real-time tests in the pilot plant to obtain data, and extraction of the data for curve fitting.
2. Mass balance and energy balance equations are used to derive the mathematical model of the process.
3. The model is adjusted to fit the data, and the fit between the data and the model is compared. Matlab is used to simulate the process, develop the tuning parameters and validate the model in the case study.
This paper is arranged as follows: Part 2 reviews PID controller tuning techniques; Part 3 gives a brief introduction to the PROCEL pilot plant; a model is presented in Part 4; and Part 5 contains a discussion.
2. Controller Tuning Techniques
Most of the controllers used in industry are of PID type. A large industrial process may have hundreds of controllers of this type. They have to be tuned individually to match the process dynamics in order to provide good and robust control performance. The output of a PID controller is:

u(t) = Kp [e(t) + (1/Ti) ∫ e(τ) dτ + Td de(t)/dt] = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt    (1)
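For completeness, a minimal discrete-time implementation of the controller in Eq. (1) is sketched below; the rectangular integral and backward-difference derivative are common simplifications, and the class interface is our own illustration rather than part of the original study.

```python
# A minimal discrete-time form of Eq. (1), assuming a fixed sample time dt.
class PID:
    def __init__(self, Kp, Ti, Td, dt):
        self.Kp, self.Ti, self.Td, self.dt = Kp, Ti, Td, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt                   # rectangular integral
        derivative = (error - self.prev_error) / self.dt   # backward difference
        self.prev_error = error
        return self.Kp * (error + self.integral / self.Ti
                          + self.Td * derivative)
```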
Traditionally, there are several methods:
1. Thumb rule: according to the process at hand, experience-based PID values are applied.
2. Trial and error.
3. Critical point methods, including the ZN (Ziegler-Nichols) and TL (Tyreus-Luyben) methods.
4. Performance index, ISE or ITAE; in this work, ITAE will be used.
5. Automatic tuning: a series of automatic tuning methods, such as the relay feedback tuning method.
Figure 1. Relay Feedback System.
By automatic tuning (or auto-tuning), we mean a method which enables the controller to be tuned automatically on demand from an operator or an external signal. Automatic tuning needs to identify the dynamics of a certain process. The relay was mainly used as an amplifier in the fifties, and relay feedback was applied to adaptive control in the sixties. Exciting a process loop in relay feedback makes it reach the critical point. The critical point, i.e., the point of the process frequency response with a phase lag of π, has been employed to set the PID parameters for many years since the advent of the Ziegler-Nichols (Z-N) rule. The ultimate gain is estimated from the relay amplitude d and the oscillation amplitude a as

Ku = 4d / (π a)    (2)

Since then, several modified identification methods have been proposed. The Cohen-Coon method requires an open-loop test on the process and is thus inconvenient to apply. The disadvantage of the Yuwana and Seborg method and of the Bristol method is the need of a large set-point change to trigger the tuning, which may drive the process away from the operating point. To acquire more than one point of the process dynamics, the proposed method uses the performance index ITAE.
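A minimal sketch of the relay-experiment calculation follows: the ultimate gain is obtained from Eq. (2) and the ultimate period from the spacing of successive oscillation maxima. The signal here is synthetic, and the crude peak detection is only for illustration.

```python
# Estimate ultimate gain Ku (Eq. (2)) and ultimate period Pu from relay
# oscillation data; d is the relay amplitude, a the oscillation amplitude.
import math

def relay_estimates(y, t, d):
    a = (max(y) - min(y)) / 2.0          # oscillation amplitude
    Ku = 4.0 * d / (math.pi * a)         # ultimate gain, Eq. (2)
    # ultimate period from the spacing of successive maxima (crude)
    peaks = [t[i] for i in range(1, len(y) - 1) if y[i - 1] < y[i] > y[i + 1]]
    Pu = peaks[1] - peaks[0] if len(peaks) > 1 else float("nan")
    return Ku, Pu

t = [i * 0.1 for i in range(400)]
y = [0.8 * math.sin(0.5 * x) for x in t]   # synthetic limit cycle
print(relay_estimates(y, t, d=1.0))        # Z-N rules can then be applied
```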
3. PROCEL Description
PROCEL, the PROcess CELl pilot plant, has been built at UPC in order to test real-time applications. It is constituted by three tank reactors, three heat exchangers and the necessary pumps and valves to allow changes in the configuration. The equipment of PROCEL is fully connected, and the associated instrumentation allows the configuration to be changed by software. PROCEL is designed to work in different operation modes. An appropriate set of electric valves makes the plant configurable in different modes of
operation in an easy way, from strictly batch operation to continuous or hybrid scenarios. The plant is also provided with a distributed control system (DCS). The Sattline DCS is connected to a PSP (Planning, Scheduling and Programming) Server and a Data Server using XML messages via a TCP/IP network. This flexibility allows making experiments in batch, continuous and batch-continuous mode simply by configuring the control software. The DCS is Sattline (ABB). This physical system allows validating different methodologies with a case study that can be transported to complex production structures (e.g. petrochemical plants).
Figure 2. PROCEL Control Configuration.
Figure 3. Heating Temperature Control.
Figure 4. Step response of the system.
4. System Identification
The pilot plant contains three glass reactors, EQ1, EQ2 and EQ3, each with a volume of 10 litres. A highly flexible connectivity between the three vessels is achieved via a network of pipes, pumps and valves. The case to be studied is with EQ1: cold water flow F1 at temperature Tcold is fed from the top of EQ1, one electrical heater is applied to control the temperature T1, and the heated water is discharged from the bottom with flow F3 and temperature T1. To minimise level fluctuations, a level control loop is provided. When the process is stable, a step input of R1 is applied to the process. The
process data are collected via an MMS OLE Gateway and a VB program, and the data in CSV format are imported into the Matlab workspace, where the variable curves are plotted. The process dynamics are then obtained in FOPTD (first-order plus time delay) form, Eqs. (3) and (4):

G(s) = K e^(-Ls) / (250 s + 1)    (3)

G(s) = K e^(-Ls) / (T s + 1)    (4)
At the same time, on the basis of material and energy balances, a differential equation is obtained:
d(TR1)/dt = [F1(T1 - TR1) + F2(T2 - TR1) + F5(T5 - TR1)]/VR1 + [QR1 - Qloss,R1(TR1 - Tamb)]/(ρ Cp VR1)    (5)
The most important parameters of the model are the gain, the delay and the time constant. The necessary adjustments are made to best fit the process curve.
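As an illustration of this adjustment step, the sketch below fits the three FOPTD parameters to step-response data with scipy; the data here are synthetic stand-ins for the CSV records imported into the Matlab workspace.

```python
# Fit FOPTD parameters (gain K, delay L, time constant T) to step data.
import numpy as np
from scipy.optimize import curve_fit

def foptd_step(t, K, L, T):
    """Unit-step response of G(s) = K exp(-Ls)/(Ts + 1)."""
    return K * (1.0 - np.exp(-(t - L) / T)) * (t >= L)

t = np.linspace(0, 1500, 300)                       # synthetic time grid
y = foptd_step(t, 2.0, 60.0, 250.0) + 0.02 * np.random.randn(t.size)
(K, L, T), _ = curve_fit(foptd_step, t, y, p0=[1.0, 30.0, 200.0])
print(K, L, T)   # adjusted parameters that best fit the process curve
```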
5. Controller Design
After the model is obtained, the PID parameter tuning is carried out.
Figure 5. ITAE Calculation Block.
Here we continue to use the PID controller.
ITAE = ∫ t |e(t)| dt    (6)
In this work, the ITAE index acts as the objective function, and two optimisation methods, a Newton search and a Genetic Algorithm, are both carried out. In this case the objective function is ITAE = f(plant model (K, I, D), integral time, set point, disturbance). The software environment is Matlab 6 Release 12 with Simulink 4.0. The Newton search optimisation took 20 minutes and 29 seconds, while the Genetic Algorithm run took 7 minutes and 7 seconds. The basic genetic algorithm is as follows:
1. Create an initial population (usually a randomly generated string).
2. Evaluate all of the individuals (apply some function or formula to the individuals).
3. Select a new population from the old population based on the fitness of the individuals as given by the evaluation function.
4. Apply some genetic operators (mutation & crossover) to members of the population to create new solutions.
5. Evaluate these newly created individuals.
6. Repeat steps 3-6 (one generation) until the termination criterion has been satisfied (usually a certain fixed number of generations).

Figure 6. GA initial populations.

The code generates a population of 10 individuals; the boundaries of the exploration are [1.1, 9] and the evaluation function is "mygademoleval.m". Here we use the Genetic Algorithm for Function Optimisation Toolbox; see the references.
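A compact Python skeleton in the spirit of steps 1-6 is given below; itae() is a hypothetical stand-in for the simulated ITAE evaluation ("mygademoleval.m" in the text), and the selection and crossover operators are simple illustrative choices rather than those of the toolbox.

```python
# A minimal GA: population of 10, genes bounded to [1.1, 9], elitism,
# tournament-style selection, blend crossover and Gaussian mutation.
import random

def itae(gains):                 # hypothetical fitness: lower is better
    return sum((g - 5.0) ** 2 for g in gains)

LO, HI, NGENES, NPOP = 1.1, 9.0, 3, 10
pop = [[random.uniform(LO, HI) for _ in range(NGENES)] for _ in range(NPOP)]
for generation in range(50):
    scored = sorted(pop, key=itae)                    # evaluate + rank
    nxt = scored[:2]                                  # keep the two best
    while len(nxt) < NPOP:
        p1, p2 = random.sample(scored[:5], 2)         # select parents
        child = [(a + b) / 2 + random.gauss(0, 0.3)   # crossover + mutate
                 for a, b in zip(p1, p2)]
        nxt.append([min(HI, max(LO, g)) for g in child])
    pop = nxt
print(min(pop, key=itae))        # best PID gain triple found
```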
6. Discussion and Conclusion
We use a pilot plant to acquire process data and to test controller strategies. A tuning procedure is also proposed, based on optimisation techniques (Genetic Algorithm). In the future, an online tuning method will be developed, tested, and applied to multivariable, multiloop systems.
7. References
Astrom, K.J., Hagglund, T., Huang, C.C. and Ho, W.K., 1993, Automatic tuning and adaptation for PID controllers - a survey, Control Eng. Practice, 1(4), 699-714.
Astrom, K.J. and Hagglund, T., 2001, The future of PID control, Control Eng. Practice, 9, 1163-1175.
Chipperfield, A., Fleming, P., Pohlheim, H. and Fonseca, C., Genetic Algorithm Toolbox User's Guide, 1-3.
Houck, C.R., Joines, J.A. and Kay, M.G., 1999, A Genetic Algorithm for Function Optimisation: A Matlab Implementation.
Tan, K.K., Lee, T.H. and Jiang, X., 2001, On-line relay identification, assessment and tuning of PID controllers, Journal of Process Control, 11, 483-496.
Yuval, D., Genetic Algorithms and Robotics: a heuristic strategy for Optimisation.
Challenges in Controllability Investigations of Chemical Processes
P. Mizsey, M. Emtir, L. Racz(1), A. Lengyel(1), A. Kraslawski(2) and Z. Fonyo
Department of Chemical Engineering, Budapest University of Technology and Economics, H-1521 Budapest, Hungary
(1) MOL Rt. (Hungarian Oil Trust Co.), Szazhalombatta, Hungary
(2) Lappeenranta University of Technology, Lappeenranta, Finland
Abstract
The controllability investigation of any kind of chemical process is an interactive and challenging part of process design or development. The investigation works on several levels and uses different methods. First, the control targets are defined, the set of controlled variables is determined and the set of possible manipulated variables is selected. The proper pairing of the controlled and manipulated variables, that is, the design of the control structure, consists first of the study of steady state control indices and then of the dynamic behaviour of the promising control structures with open and closed control loops. This investigation is presented on the design of energy integrated separation schemes. A new and challenging method is offered by the Rough Set Theory (RS), which has already been successfully used in several areas of artificial intelligence and data analysis, e.g. for the discovery of patterns and dependencies in data. Its use for controllability investigations is a new area in the revision of control structures of existing plants and in retrofit modifications, because it can detect the dependencies among the possible controlled and manipulated variables through the analysis of measured data and helps to fulfil the requirements of the control target. RS theory also indicates the measure of the dependencies among the variables. The RS theory is used here for the improvement of the control structure of a complex chemical process, a fluid catalytic cracking unit (FCC). The analysis of the data of an existing FCC unit shows that the product quality significantly depends on the temperature in the regenerator unit, and its control is necessary. After considering the degrees of freedom, a new control loop is designed for the proper control of the temperature in the regenerator unit; it has been accepted by industry and will be included in the control structure of the FCC plant.
1. Introduction
Controllability investigations are an integral part of process design, and the two mutually influence each other. According to previous studies, theories, and practice, there is a classical way to design control structures (e.g. Mizsey and Fonyo, 1991; Mizsey et al., 1998). It starts already at the short-cut level of process design and accounts for the degrees of freedom for the possible control loops. After defining the control targets, the set of controlled variables and the set of manipulated variables are determined. This selection is based on engineering judgement and heuristics, but the results of exhaustive mathematical modelling are also considered. After defining the sets of controlled and manipulated variables, steady state controllability indices such as the Niederlinski index, relative gain array, Morari resiliency index and condition number are determined (e.g. Luyben, 1990). The evaluation of the
steady state indices gives an indication of the most promising control structures, which are finally tested by dynamic simulation methods before the final selection. This controllability investigation works properly at the design stage of processes if there is a reliable model of the system, and also for the modification of an existing plant's control structure if a reliable model is available. In the case of the revision of existing plants there is, however, another alternative, where the revision and/or improvement of the control structure can be supported with the use of the Rough Set theory (e.g. Pawlak, 1982), which helps to detect dependencies in the system to be controlled. For this activity the RS theory needs measured data of the plant to be investigated. After detecting the dependencies and also their measures, the control structure can be revised and modified.
2. Demonstration of Interaction between Design and Control
The example of the comprehensive design of five energy integrated separation schemes demonstrates the interaction between the process design steps: economic optimisation and controllability investigation. A three-component alcohol mixture is separated in five distillation-based, energy-integrated two-column separation systems: two heat-integrated distillation schemes (forward and backward heat integration), a fully thermally coupled distillation column (also known as the Petlyuk or Kaibel system), and sloppy separation sequences with forward or backward heat integration. The schemes are economically optimised for the total annual cost (TAC), and then the controllability investigation takes place. The results are compared to the non-integrated base case and to each other. The results of the rigorous optimization can be summarized as follows:
- the heat-integrated schemes are always more economical than the conventional distillation schemes,
- the direct sequence with backward heat integration (DQB) shows the maximum TAC savings, 37%,
- the direct sequence with forward heat integration (DQF) shows the smallest TAC savings, 16%,
- the sloppy schemes show TAC savings of 34% for forward heat integration and 33% for backward heat integration,
- the Petlyuk system (SP) shows a 29% TAC saving and the highest utility demand compared to the other energy integrated structures,
- the sloppy schemes with forward (SQF) or backward heat integration (SQB) have the lowest utility demand, but because middle-pressure steam is used, the utility cost will be higher.
Secondly, the optimum schemes are investigated from the controllability point of view. The controlled variables are the product compositions, and the set of manipulated variables is also determined, based on engineering judgement. The possible manipulated variables are the following: distillate 1, reflux flow, distillate 2, reflux flow 2, bottom rate 2, heat duty, side product flow. Ratio control structures are not considered. The steady state indices are determined and compared. The results are shown in Table 1 and they indicate the following:
• serious interactions can be expected for the sloppy schemes (SQF & SQB) and for the Petlyuk system (SP) due to poor RGA values, and also according to the other indices,
• the base case (D) and the heat integrated schemes (DQF and DQB) show less interaction than the sloppy schemes and the Petlyuk system,
• for D, DQF and DQB, the selection of the D1-L2-B2 manipulated variables for the control of the product compositions shows good controllability features.
Table 1. Steady state controllability indices of selected control structures for the economically optimized schemes.
Studied Schemes    NI     MRI    CN      λ11    λ22    λ33
D-(D1-L2-B2)       1.137  0.099  8.890   1.0    0.880  0.880
DQF-(D1-L2-B2)     1.136  0.024  36.32   1.0    0.88   0.88
DQB-(D1-L2-B2)     1.093  0.023  39.660  1.0    0.910  0.910
SP-(D-S-Q)         3.515  0.182  6.890   1.0    0.320  0.280
SP-(L-S-B)         7.438  0.089  14.38   0.130  0.570  0.990
SQF-(D-S-Q)        6.470  0.010  137.4   1.000  0.250  0.150
SQF-(L-S-B)        4.030  0.008  158.1   0.250  0.250  0.998
SQB-(D-S-Q)        5.080  0.038  33.31   0.997  0.470  0.196
SQB-(L-S-B)        1.287  0.022  64.388  0.770  0.827  1.0
Dynamic simulations are carried out at equimolar feed composition and a feed rate of 100 kmol/hr (base case). The disturbances are 100 to 100.5 kmol/hr for the feed rate, and (0.33/0.33/0.33) to (0.32/0.34/0.32) for the feed composition, respectively. First, the schemes are studied without any composition control (open loop) and then with closed composition control loops. It can be concluded:
1. In the case of open composition control loops the schemes, including the non-integrated base case, show quite similar dynamic behaviour, but the sloppy scheme with backward heat integration (SQB) is significantly slower than the others.
2. In the case of closed composition control loops, for the base case (D) and for the heat integrated schemes (DQB and DQF), the D1-L2-B2 set of manipulated variables shows good controllability performance. The heat integration does not influence the dynamic behaviour.
3. The more complex energy integrated structures, the Petlyuk system and the sloppy heat integrated structures, show worse dynamic behaviour (settling time, overshoot) than the base case and the simple heat integrated schemes.
4. SQB has the worst controllability features among all.
5. Forward heat integration schemes (DQF and SQF) prove to be better than backward heat integration schemes (DQB and SQB). This can be due to the stronger interactions that can take place because of the opposite directions of the material and energy flows.
Considering the economic disadvantage of DQF, it is not preferred to DQB. On the contrary, SQF and SQB show similar economic features, but since SQB shows the worst controllability features among all, SQB is not recommended. In this case the controllability features make the decision. The case study solved in the classical way proves that this is an effective methodology; however, the selection of the sets of controlled and manipulated variables is quite easy because the systems are simple.
3. New Challenge in Control: the Rough Set Theory
The Rough Set theory, or Rough Set Data Analysis (RSDA), is widely used for the determination of non-linear relationships in many different areas. Rough set theory is a method of information analysis and especially of reduction of data sets, discovery of data patterns, classification of objects into sets and generation of decision rules, e.g. Pawlak (1982, 2002). Rough set theory does not need any preliminary information about the data, such as a probability distribution (as in probabilistic analysis), basic probability
assignment (as in Dempster-Shafer theory) or membership function (as in fuzzy set theory). It performs an analysis of the properties of the data, allowing for the identification of redundant or irrelevant attributes. In consequence, it enables obtaining simple rules among input and output variables from the database by reducing the redundant attributes while keeping the original degree of consistency. The feature of RSDA that it can discover dependencies among variables can be utilised for controllability investigation. Namely, at the early step of control structure design, the determination of the sets of controlled and manipulated variables is usually based on heuristics, experience and engineering judgement. This synthesis activity works quite straightforwardly if the system to be controlled is simple, well known to the experienced engineer, and the data about the system are reliable. However, in the case of complex systems, the proper selection of the controlled and manipulated variables which fulfil the control targets is not always obvious even for the experienced designer, and some dependencies can remain unnoticed. This synthesis activity can therefore be aided by RSDA to avoid overlooking the dependencies which are needed for the proper control of the complex plant. The RSDA is tested and used here for a complex system, an existing Fluid Catalytic Cracking unit.
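As a small illustration of the dependency detection that RSDA performs, the sketch below computes the rough-set dependency degree: the fraction of objects whose condition-attribute class determines the decision uniquely. The toy decision table is illustrative, not plant data.

```python
# Rough-set dependency degree gamma(C, D): objects in the positive region
# are those whose condition class maps to a single decision value.
from collections import defaultdict

table = [  # (condition attributes), decision
    (("high", "low"), "good"),
    (("high", "low"), "good"),
    (("low", "low"), "bad"),
    (("low", "high"), "good"),
    (("low", "high"), "bad"),   # inconsistent class -> outside POS
]

classes = defaultdict(set)
for cond, dec in table:
    classes[cond].add(dec)
positive = sum(1 for cond, _ in table if len(classes[cond]) == 1)
print(positive / len(table))    # gamma = 0.6 for this toy table
```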
4. Rough Set Theory to Improve the Control of the FCC Unit
The control of the FCC unit is an exhaustively studied topic in the literature as well as in industrial practice. Since FCC units produce large amounts of valuable products, their control is of paramount importance, and there have been several works to improve it, e.g. Pohlenz (1970), Lee et al. (1989), Kurihara (1967), Worldwide Refining Survey (2001) and Advanced Control and Information Systems (2001). For the improvement of the control of the existing FCC plant, the Rough Set Theory is tested and used. The rough set data analysis is realised with the ROSETTA software, a toolkit for analysing tabular data within the framework of rough set theory. ROSETTA (Øhrn and Komorowski, 1997) is designed to support the overall data mining and knowledge discovery process. For the RSDA carried out with ROSETTA, about 140 measured operating points of the industrial FCC unit were collected and analysed. This analysis gives results about the dependencies of an existing plant, which are also suitable for reconsidering the control structure of the plant. The product quality (motor octane number, MON) depends on the properties of the feed, steam, catalyst and air flows. Table 2 contains the values of the measured variables. They are selected as attributes (input data) influencing the decision (output data), the product quality. The first problem when applying rough set theory is to determine the number of data intervals. If the number of intervals is too high, then too many rules (if-then connections among the input and output variables) will be obtained. On the other hand, if the number of intervals is too low, then the set of rules will be too small, and in an extreme case it becomes empty. Therefore the optimal number of intervals is a crucial point in the use of rough sets. After determining the intervals of the variables, the ROSETTA toolkit determines the rules in the form of if-then connections between the variables. In the case of the investigated FCC unit, the rules are checked against the statistical model of the FCC unit; based on the statistical model, 8 rules out of 150 are correct. The system can be controlled if the rules connecting the attributes and the decision are known. This kind of control is similar to fuzzy control, but "rough set control" is not as subjective and complicated as fuzzy control. The different rules can be compared to check whether they influence the same decision. Finally, the right rule can be
selected for control purposes, considering the simplicity of the control structure, its cost and the time of the operation (settling time).

Table 2. Measured points of the FCC unit.
Techn. parameters      Location      Variables             Values
Feed oil               Reactor       Mass flow (t/h)       100.0-203.04
                                     Pressure (bar)        4.39-8.33
                                     Temperature (°C)      60.56-131.39
                                     Sulphur content       0.01-0.12
Steam: Stripper        Reactor       Mass flow (t/h)       1.5-3.6
Steam: Dispersion                    Mass flow (t/h)       2.17-3.80
Steam: Fluffing                      Mass flow (t/h)       0.15-0.60
Steam: Emergency                     Mass flow (t/h)       0.18-1.21
Catalyst               Reactor       Temperature (°C)      657.07-684.05
                                     Catalyst / feed oil   6.24-10.34
                       Regenerator   Coke content (%)      1.5-3.5
Air                    Regenerator   Flow (kNm³/h)         66.01-90.51
                                     Temperature (°C)      160.02-208.18
In the case of the studied FCC unit, it is found that several rules are already used for control purposes. It can also be seen from the results that the temperature of the catalyst is an important attribute, because if it increases within the range of possible operation, the product quality (MON) also increases. The temperature of the catalyst is fixed in the regenerator, where the coke on the catalyst's surface is burned away. The coke is formed in the reactor part of the FCC during the cracking of heavy hydrocarbons. The temperature of the catalyst is not yet controlled in the investigated FCC unit, so a control loop should be designed to improve the operation of the actual control structure. In the first step, the analysis of degrees of freedom is carried out and a possible manipulated variable is found: the coke formation in the reactor can be controlled by the feed flow of the bottom product of the main distillation column (BMC), which contains heavy hydrocarbons. (This main distillation column separates the products of the FCC unit.) This flow is free for this control and is selected as the manipulated variable. Since too high a temperature in the regenerator unit must be avoided, the temperature profile in the regenerator should be followed and the highest temperature selected by a so-called high selector (HS). The recommended new control structure is presented in Figure 1. Afterburning in the dilute phase of the regenerator can be avoided with our recommendation.
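A minimal sketch of the recommended loop logic is given below; the proportional trim, gain value and temperature profile are illustrative only, not the tuning of the actual TC3 loop.

```python
# High selector (HS) plus a proportional trim of the BMC feed flow: if the
# hottest point of the regenerator profile exceeds the set point, the
# heavy feed (and hence coke formation) is reduced. Values are illustrative.
def high_selector(temperature_profile):
    return max(temperature_profile)       # hottest regenerator temperature

def tc3_trim(t_max, t_sp, bmc_flow, kc=0.05):
    """Proportional adjustment of the BMC feed flow."""
    return bmc_flow + kc * (t_sp - t_max)

profile = [655.0, 661.0, 668.0, 664.0]    # temperatures along the profile
print(tc3_trim(high_selector(profile), t_sp=665.0, bmc_flow=4.0))
```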
5. Conclusions
The Rough Set Theory proves to be successful for the improvement of the control structure of an industrial FCC unit, because it can detect dependencies between all of the process variables: the attributes (input data) and the decision (output data), the product quality, which is the control target. For this activity the Rough Set approach needs measured data of the plant. The modification of the existing control structure detected by the Rough Set Theory is recommended and has been accepted by the industrial experts.
Figure 1. FCC unit with control loops and with the new control loop (TC3).
6. List of References
Advanced Control and Information Systems, 2001, Control of FCC unit, Hydrocarbon Processing, September.
Kurihara, H., 1967, Optimal control of FCC processes, PhD Thesis, Mass. Inst. of Techn., Cambridge.
Lee, L.L., Chen, Y.W., Huang, T.N. and Pan, W.Y., 1989, Four-Lump Kinetic Model for Fluid Catalytic Cracking Process, The Canadian Journal of Chem. Engng., 67, 615-619.
Luyben, W.L., 1990, Process Modelling, Simulation and Control for Chemical Engineers, McGraw-Hill.
Mizsey, P. and Fonyo, Z., 1991, Assessing plant operability during process design, Computer-Oriented Process Engineering, Elsevier, 411-416.
Mizsey, P., Hau, N.T., Benko, N., Kalmar, I. and Fonyo, Z., 1998, Process control for energy integrated distillation schemes, Comp. Chem. Eng., 22, S427.
Øhrn, A. and Komorowski, J., 1997, ROSETTA: A Rough Set Toolkit for Analysis of Data, Proc. Third International Joint Conference on Information Sciences, Fifth International Workshop on Rough Sets and Soft Computing (RSSC'97), Durham, NC, USA, March 1-5, Vol. 3, 403-407.
Pawlak, Z., 1982, Rough sets, International Journal of Computer Information Sciences, 11, 341-356.
Pawlak, Z., 2002, Rough sets decision algorithms and Bayes' theorem, European Journal of Operational Research, 136, 181-189.
Pohlenz, J.B., 1970, Oil and Gas Journal, 68(33), 158-165.
Worldwide Refining Survey, 2001, Oil and Gas Journal, Dec. 24.
Analysis of Linear Dynamic Systems of Low Rank
Satu-Pia Reinikainen(1), Agnar Hoskuldsson(2)
(1) Lappeenranta University of Technology, P.O. Box 20, 53851 Lappeenranta, Finland, Email: [email protected]
(2) Technical University of Denmark, IPL, Bldg 358, 2800 Lyngby, Denmark, Email: [email protected]
Abstract
We present here procedures for how stable solutions to linear dynamic systems can be found. Different types of models are considered. The basic idea is to use the H-principle to develop low rank approximations to the solutions. The approximation stops when the prediction ability of the model cannot be improved for the present data. Therefore, the present methods give better prediction results than traditional methods that give exact solutions. The vectors used in the approximations can be used to carry out graphic analysis of the dynamic systems. We show how the score vectors display the low dimensional variation in data, how the loading vectors display the correlation structure, and how the transformation (causal) vectors show the way the variables generate the resulting variation in data. These graphic methods are important in supervising and controlling a process in light of the variation in data.
1. Introduction
Past years have shown great advances in measurement equipment. Sensors and optical instruments are examples of data collection means that have become very popular, and these new instruments will have a great influence on future developments. E.g., many process companies are using NIR (Near Infra-Red) instruments for process control, because an investment in NIR-based control only costs around 10% of traditional process equipment. There are other important aspects of the new ways of process control: the new instruments do not touch the materials, and it is cheap to store and send the data onwards. There is a great need for new methods that can handle data from these modern types of instruments. The reason is that we typically receive a large amount of data that need to be processed, and the data usually show very low rank. E.g., NIR data from a chemical process control gives us a data matrix X = X_{N×K} with K = 1050. The actual rank of these data may be four or five. Typically, the algorithms available in program packages assume that the data have full rank, or they only check for the numerical precision of the algorithms used. When these methods are applied to such data, the solution vectors tend to be unstable and provide unreliable predictions. Looking at the linear least squares method shows the problems involved. Suppose the response variable, y, represents some quality measure. The linear least squares method looks for a solution, b, such that the measure |Xb - y|² is minimized. The exact solution is given by b = (XᵀX)⁻¹Xᵀy. For NIR data the matrix XᵀX will be
1050×1050, but the solution should be based on four or five dimensions. If the number of samples, N, is sufficiently large, we may be able to compute this solution, possibly using extended precision. But the solution will have 1050 values, many of which are large, and it will be useless for prediction purposes. The approach chosen here is to use the H-principle of mathematical modelling (Hoskuldsson, 1996). The basic idea is to carry out the modeling in steps and at each step compute a rank one approximation to the solution. This rank one solution is based on optimizing the balance between the improvement in fit and the associated precision that can be obtained by such an improvement in the solution. Thus, each of the rank one parts is the result of an optimization task involving fit and precision, such that all parts are in a certain sense optimal at the respective step of the analysis.

2. Mathematical Models
Suppose that there are given values of the instrumental variables that have been collected in a matrix X. A standard assumption is that the response variable can be derived linearly from X apart from small random values that are assumed normally distributed. We write this as y = Xβ + ε, or y ~ N(Xβ, σ²I), indicating that the residuals have the same variance, σ². The linear least squares method is concerned with finding the value of β that minimizes the measure of fit, |Xβ - y|² → min. The exact solution, b₁, to this task is given by b₁ = (XᵀX)⁻¹Xᵀy. Sometimes there is a requirement that the solution vector should in some sense be as small as possible. This can be included in the optimization task as minimizing the sum βᵀVβ + |Xβ - y|² → min. The exact solution is b₂ = (XᵀX + V)⁻¹Xᵀy. The matrix V can be the unity matrix, I, a constant times the unity matrix, kI, the covariance matrix of the b's, or some other positive definite matrix. A common choice for V is kI, where the constant k is chosen by some external condition, e.g., the value that gives the smallest leave-one-out prediction errors. This is the popular Ridge Regression method.
When working with dynamic systems we are interested in the changes of the solution vector in time. We shall look closer at the Kalman filter approach to finding the solutions. Let X_t be the instrumental data up to time t, (x_t, y_t) the sample values at time t, and S_t = X_tᵀX_t + V. Then the solution at time t, b_{2,t}, can be written as

b_{2,t} = (X_{t-1}ᵀX_{t-1} + V + x_t x_tᵀ)⁻¹(X_{t-1}ᵀy_{t-1} + x_t y_t) = b_{2,t-1} + k_t(y_t - x_tᵀb_{2,t-1}),

with k_t = S_{t-1}⁻¹x_t/g_t and g_t = 1 + x_tᵀS_{t-1}⁻¹x_t. This follows from the rewriting X_tᵀX_t = X_{t-1}ᵀX_{t-1} + x_t x_tᵀ and X_tᵀy_t = X_{t-1}ᵀy_{t-1} + x_t y_t, and the application of the matrix inversion lemma. This leads to the Kalman filter equations:
1. Sample variance: g_t = 1 + x_tᵀS_{t-1}⁻¹x_t.
2. Kalman gain: k_t = S_{t-1}⁻¹x_t/g_t.
3. Update the solution: b_{2,t} = b_{2,t-1} + k_t(y_t - x_tᵀb_{2,t-1}).
4. Update the inverse: S_t⁻¹ = S_{t-1}⁻¹ - g_t k_t k_tᵀ.
In these equations, at time zero S₀ = V; otherwise the matrix V does not enter the equations. Apart from these equations there may be further ones expressing requirements on the solution vector. When there are many variables, the recursive updating equations tend to give unstable solutions. E.g., in the case of NIR instruments S would be 1050×1050.
= b2,t-i + kt(yt - x j b2,t-i), with kt= St-i'^x/gt, and gt=l+xj St.f^Xt. This follows from the rewriting XjXt= Xt./Xt.i+ Xt Xt^, X/yt = Xt-i^yn + x/yt, and the application of the matrix inversion lemma. This leads to the Kalmanfilterequations 1. Sample variance: gt=l+x7 St.f^Xt. 2. Kalman gain: kt= St.f^Xt/gt. 3. Update the solufion: b2,t = b2,t-i + kt(yt - x7 b2,t-i). 4. Update the inverse: St"^ = SM"^ - gt kt kt^. In these equations at time zero So=V. Otherwise the matrix V does not enter the equations. Apartfiromthese equations there may be some further ones on requirements to the solution vector. When there are many variables the recursive updating equations tend to give unstable solutions. E.g., in the case of NIR instruments S would be
499 1050x1050. Even if we start with a diagonal V, the updating becomes unstable because the difference matrix S-V is typically of rank 3-6 for NIR data. The present approach is concerned with finding stable solution in the case the data show low rank like in the case of NIR data. The algorithm proposed is independent of V. Thus, V can be zero or any other prior choice. The solution is based on the H-principle of mathematical modeling that we shall consider closer.
3. The Basic Algorithm H-principle is a recommendation of how we should carry out the modelling procedure for any mathematical model: 1) Carry out the modelling in steps. You specify how you want to look at the data at this step by formulating how the weights are computed. 2) At each step compute expressions for i) improvement in fit, AFit, and ii) the associated prediction, APrecision 3) Compute the solution that maximizes the productAFit x APrecision 4) In case the computed solution improves the prediction abilities of the model, the solution is accepted. If the solution does not provide this improvement, it stops. 5) The data is adjusted for what has been selected and start again at 1). The H-principle suggests that we should find a weight vector w that gives us a solution of step 3 (Hoskuldsson (1996)). The solution suggested is given by the eigen vector of the leading eigen value to the eigen value problem, X'^YY'^XW = Xvf
In case there is only one response variable, Y=y, there is a closed form expression for w
w = xVlx^y|. The next task is to compute the loading vector, p, as p=Sw/d, where d=w^Sw. The score vector, t, is defined as t=Xw. Besides these vectors we need one type more, the transformation or causal vectors r. It is defined such that p=Sr. These computations are carried out at each step. At the end of the computations the data is adjusted for what has been selected. The algorithm is as follows: 0. Initialize variables. Xo=X, So=S, Yo=Y, EO=IK, B = 0 . For a=l,2,..., K, 1. Find the weight vector Wai solve Xa-i^YY^Xa-iWa = K^a, or Wa = Xa-iV/|Xa-/yl2. Compute scaling constant dai da=Wa^Sa-iWa, loading vector pai Pa=Sa.iWa/da, and score vector ta=Xa.iWa.
3. Transformation vectors r^: ra=Ea.iWa; Adjust transformation matrix: Ea=Ea.i-daraPa^ 4. Compute new solution coefficients B: Ba = Ba-i + dgraqa^, with qa=Y^ta/da. 5. Adjust X:Xa=Xa.i-taPa'^. 6. Adjust S: Sa=Sa.i - da PaPa^.
7. Check if this step has improved the prediction aspect of the model, and if X-a or da are not too small. If it pays to continue, start a new iteration at 1.
500 The results of this algorithm is an expansion of the matrices as follows: X =tiPi^ + t2P2^+...4-tAPA^+...+ tKpJ
=TPl
S = di pi pi^ + d2 P2 P2^ + ..+ dA PAPA^ + ... + dK P K P J S-' = di ri ri^ + d2 r2 rj"^ + ...+ dA rArA^" + ... + dK TKFJ B = di ri qi'' + d2 r2 q2'' + ... + dA FAQA^ + ... + dK TKQK^
= PDP^. = RDR\ = RDQ\
Here the vectors are collected in a matrix, e.g., T=(ti,t2,...,tK). D is a diagonal matrix with da's in the diagonal. The decomposition of S is a rank one reduction, meaning that the rank of say Sa is one less than that of Sa-i. (Follows from SaWa=0). Thus SK will be the zero matrix. The matrix R satisfies R^P=D'\ or ri^pj=5ij/di. It can also be written as (RD''f(?D'')=l or RDP'^=I. The score vectors (ta) are not orthogonal, ti^tj^O for i^j. This algorithm is carried out for each time point t. Note, that if V=0, S=X^X and the algorithm reduces to PLS regression. In that case the score vectors become orthogonal. We can view the algorithm as an approximation, B = s-^X^Y = (diriri^+...)(piti^+...)Y = (riti^+...)Y = (diriqi^4-...). Note that only A terms in the expansions are used. The choice of the weight vector w at each step reflects the covariance that is left. The expansion stops, when there is no covanance left, Xa'^Y=0. When there are many variables, it is often necessary to be careful in finding the weight vector w. A collection of methods has been developed that optimise the choice of w (Reinikainen, Hoskuldsson (2002)). In general it is needed to auto-scale the data, i.e., to centre data and scale to unit variance. The choice of V should reflect the choice and be of same units as the scaled data. For further details see the appendix.
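For the single-response, V = 0 special case (i.e. the PLS limit of the algorithm), the stepwise expansion can be sketched in a few lines of numpy; the synthetic low-rank data and the simple stopping rule used here are illustrative simplifications.

```python
# A numpy sketch of the stepwise expansion above for a single response y,
# with V = 0, returning the rank-A solution B.
import numpy as np

def h_principle(X, y, A):
    Xa, B = X.copy(), np.zeros(X.shape[1])
    E = np.eye(X.shape[1])                 # transformation matrix E_0
    for _ in range(A):
        w = Xa.T @ y
        if np.linalg.norm(w) < 1e-12:      # no covariance left
            break
        w /= np.linalg.norm(w)             # weight vector w_a
        S = Xa.T @ Xa                      # with V = 0, S_a = X_a^T X_a
        d = w @ S @ w                      # scaling constant d_a
        p = S @ w / d                      # loading vector p_a
        t = Xa @ w                         # score vector t_a
        r = E @ w / d                      # transformation vector r_a
        E = E - d * np.outer(r, p)
        q = y @ t / d
        B = B + d * q * r                  # rank-one solution update
        Xa = Xa - np.outer(t, p)           # deflate X
    return B

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 20))  # rank-4 data
y = X @ rng.normal(size=20)
print(np.linalg.norm(X @ h_principle(X, y, A=4) - y))    # near zero
```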
4. Graphics Display
The same types of graphic analysis are used for any choice of the matrix V. Here an example of the graphic analysis based on industrial on-line data is presented. The figures illustrate the results of a static linear PLS model, in which NIR data from an oil refinery are used to model the density of the product. The vectors in the algorithm are displayed graphically to illustrate the structure and variation in data. The basic plots are:
1. t_a versus t_b: The vectors (t_a) decompose the data matrix X. Therefore the plots of t_a versus t_b show us the sample (time) variation in data. Fig. 1 reveals that the process (samples) is changing with time. Arrows in Fig. 1 visualise the drift on the 1st-4th PLS components. The dynamic behaviour can be clearly seen even on the first two score vectors. Therefore, it cannot be expected that the same model will be valid at the beginning of the later time period and at the end of the period. In this case the change should not only concern the solution vector found in dynamic modeling, but the whole model should be changed.

Table 1. R² values of the first four latent variables of the PLS example.
LV    Σ R²(X), %    Σ R²(Y), %
1     34.17         59.77
2     75.83         80.52
3     97.08         82.95
4     98.99         87.48
Figure 1. Score vectors presenting sample (time) variation.
Figure 2. Loadings revealing variables (wave numbers) contribution to the PLS model.
Figure 3. Loading vectors (p), scaling vectors (r) and an example spectrum.
2. p_a versus p_b: The loading vectors (p_a) are generated as p_a = S_{a-1}w_a, where w_a is found by optimising considerations. We look at these plots to see how the variables contribute at the individual steps. Especially in spectral data the changes in data might be small. With Fig. 2 it is easy to identify the wave numbers causing the drift. A spectrum together with the loading vectors is presented in Fig. 3.
3. r_a versus r_b: The transformation vectors (r_a) are generated from p_a = Sr_a. They also satisfy t_a/d_a = Xr_a. Thus, these vectors tell us how the variables contribute to the analysis and how the covariance structure in S has been used. We can also multiply X and r_a element-wise to see which variables contribute most to the score vectors.
5. References
Hoskuldsson, A., 1996, Prediction Methods in Science and Technology, Colourscan, Warsaw, Poland.
Reinikainen, S.-P. and Hoskuldsson, A., 2002, COVPROC Method: Strategy in Modeling Dynamic Systems, Journal of Chemometrics (accepted).
Appendix. Analysis of linear dynamic systems of low rank

Proposition 1. The weight vectors (w_a) are orthogonal to later loading vectors (p_b): p_b^T w_a = 0 for b > a.

Proof. Note that w_a^T p_a = 1 and

  w_a^T S_a = w_a^T (S_{a-1} - d_a p_a p_a^T) = d_a p_a^T - d_a (w_a^T p_a) p_a^T = 0.

Further, p_b = S_{b-1} w_b / d_b. We write S_{b-1} as

  S_{b-1} = S_{b-2} - d_{b-1} p_{b-1} p_{b-1}^T = S_{b-2} (I - w_{b-1} p_{b-1}^T) = ... = S_a U_1.

Here U_1 is some matrix that is not used. This gives

  w_a^T p_b = w_a^T S_{b-1} w_b / d_b = w_a^T S_a U_1 w_b / d_b = 0.

This completes the proof.

The important property of the algorithm is

Proposition 2. The matrices P = (p_1, ..., p_K) and R = (r_1, ..., r_K) satisfy R^T P = D^{-1}.

Proof. The vectors r_a are defined by p_a = S r_a. If a = 1 we get p_1 = S_0 w_1 / d_1 = S w_1 / d_1, or r_1 = w_1 / d_1. This gives

  p_1^T r_1 = p_1^T w_1 / d_1 = w_1^T S w_1 / d_1^2 = 1/d_1.

For a = 2 we get

  p_2 = S_1 w_2 / d_2 = (S_0 - d_1 p_1 p_1^T) w_2 / d_2 = S (I - w_1 p_1^T) w_2 / d_2, or r_2 = (I - w_1 p_1^T) w_2 / d_2.

This gives

  p_1^T r_2 = (p_1^T w_2 - (p_1^T w_1)(p_1^T w_2)) / d_2 = (p_1^T w_2 - p_1^T w_2) / d_2 = 0,
  p_2^T r_2 = (p_2^T w_2 - (p_2^T w_1)(p_1^T w_2)) / d_2 = (p_2^T w_2) / d_2 = 1/d_2,

since p_2^T w_1 = 0 from Proposition 1. For higher values of the indices a and b we proceed in a similar way as in Proposition 1.

Proposition 3. The weight vectors (w_a) are mutually orthogonal: w_b^T w_a = 0 for b ≠ a.

Proof. Suppose that a > b. Note that X_{a-1}^T Y Y^T X_{a-1} w_a = λ_a w_a. It gives

  λ_a w_a^T w_b = w_a^T X_{a-1}^T Y Y^T X_{a-1} w_b.

From the definition of X_{a-1} we get

  X_{a-1} = X_{b-1} - (t_b p_b^T + ... + t_{a-1} p_{a-1}^T).

From Proposition 1 we get

  X_{a-1} w_b = (X_{b-1} - (t_b p_b^T + ... + t_{a-1} p_{a-1}^T)) w_b = X_{b-1} w_b - t_b (p_b^T w_b) = t_b - t_b = 0.

This shows that the weight vectors are mutually orthogonal.
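The three propositions are easy to check numerically with the sketch from Section 4 (the random data used here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
Y = rng.standard_normal((50, 2))
T, P, R, W, D = pls_decompose(X, Y, 4)

PtW = P.T @ W
assert np.allclose(np.tril(PtW, -1), 0)          # Prop. 1: p_b^T w_a = 0 for b > a
assert np.allclose(np.diag(PtW), 1)              # normalisation w_a^T p_a = 1
assert np.allclose(R.T @ P, np.diag(1.0 / D))    # Prop. 2: R^T P = D^{-1}
WtW = W.T @ W
assert np.allclose(WtW, np.diag(np.diag(WtW)))   # Prop. 3: w_b^T w_a = 0 for b != a
```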
Data Based Classification of Roaster Bed Stability
Björn Saxén^a and Jens Nyberg^b
^a Outokumpu Research Oy, P.O. Box 60, FIN-28101 Pori, Finland, bjorn.saxen@outokumpu.com
^b Outokumpu Zinc Oy, P.O. Box 26, FIN-67101 Kokkola, Finland, jens.nyberg@outokumpu.com
Abstract
An on-line application of a self-organizing map (SOM) has been developed for detecting and predicting instability of a fluidised bed furnace. In the application, which has been in use at the roaster of the Outokumpu Kokkola zinc plant for over one year, the SOM is utilised for compressing multi-dimensional process measurement data and for visualising bed stability changes as a path on a two-dimensional map. Instability, which normally causes operational problems and lowered production, has been detected quite reliably. A rule-based system for proposing corrective actions is being developed as an extension to the SOM.
1. Introduction
Roasting of zinc concentrates at the Outokumpu Kokkola plant is carried out in two fluidised bed furnaces. Occasionally, the bed of a furnace moves into instability, which leads to operational problems and lowered production. Reasons, indicators and remedies for bed instability have been investigated during the last few years. Generally, instability is a consequence of changes in the chemical or physical properties of the concentrate feed, but the causal connections are complex. There is a fairly large amount of real-time data available at the roaster, e.g. temperature, flow and pressure measurements, and also less frequent off-line analyses of the chemical composition of the streams. Although a period of instability can be recognised from history data, real-time interpretation of the high-dimensional data is difficult, and there is a need for a data refining and compressing tool. The self-organizing map (SOM) is a method for visualisation, clustering and compression of data (Kohonen, 2001). The SOM can effectively refine multi-dimensional process data, as reported by e.g. Alhoniemi et al. (1999), and has been shown suitable for process monitoring and fault diagnosis also in mineral and metal processing (Rantala et al., 2000, Jamsa-Jounela et al., 2001, Laine et al., 2000). In addition to the detection of roaster bed instability, there is also a need for identification of the underlying reasons. This task requires expert knowledge, since there are many factors to
be considered. Rule-based systems represent a straightforward approach for applying a priori knowledge in supervision and fault detection (Isermann, 1997).
2. The Roasting Process
Roasting is an essential part of a zinc electrowinning process. In the Kokkola zinc plant, the process contains departments for roasting, leaching and purification, electrowinning, and melting and casting. There are two roasting furnaces, both of fluid bed type (Lurgi), with a grid area of 72 m². The mix of zinc concentrates is fed to the furnace by rapid slinger belts, and air is fed from the bottom of the furnace. Around 22 t/h of concentrate and around 42 000 Nm³/h of air are fed to each furnace. The reaction between sulphides in the concentrate and oxygen is exothermic and heat is generated; the furnace temperature is kept at about 920-950 °C by cooling. The products are a solid oxide material, called calcine, and sulphur dioxide gas. The gas, which also contains solids, is led to a waste heat boiler, cyclones and electrostatic precipitators before it is cleaned in a mercury removal step and in a sulphuric acid plant. Along with the roasting, some of the concentrates are directly leached. This enables higher flexibility in the acquisition of concentrates; some concentrates are more suitable for roasting and others for direct leaching.
2.1. Challenges and development
Roasting is in principle a simple process, but there are many influencing variables and sometimes it is very difficult to control the furnace. The main difficulty is that every concentrate behaves differently, because of its specific mineralogy. The move to concentrates with finer grain size influences the furnace behaviour, and impurities like Cu and Pb have a great impact. A high impurity level can lead to sintering of the bed, i.e. molten phases and sulphates are formed. Another problem is that the bed sometimes becomes very fine (no coarsening occurs) and this hinders the fluidisation. To master the process, it is essential to maintain a stable bed with good fluidising properties and good heat transfer. During recent years, many plant test runs have been carried out with the aim of better understanding the roasting mechanism and finding optimal run conditions (Metsarinta et al., 2002). Among the tested parameters are impurity levels (Cu, Pb), concentrate particle size, water injection to different spots, oxygen use, etc. The number of measurements has been increased, which has brought more information about the state of the furnace. New control strategies and advisory systems have been developed by utilising knowledge gained theoretically and through tests, but also by data based studies of the process behaviour.
2.2. Process control
Basic measurements and control loops of one furnace line are shown in Figure 1. The furnace control can roughly be divided into three levels:
1. The Conventional level includes standard controllers for flows, pressures, etc.
2. The Advanced level includes sophisticated use of the large amount of measurement data: fuzzy temperature control by concentrate feed, oxygen enrichment control by O2 feed, and furnace top temperature control by water addition. This level also includes process monitoring by means of the SOM, as well as an advisory system based on expert rules (under development).
3. The Ultimate level implies changing the concentrate mix into a "safe" region, i.e. a composition with coarse particles and few impurities.
Figure 1. Flow chart and basic instrumentation of one furnace line.
3. Data Based Methods To support the investigations on the chemical and physical mechanisms of the roasting process, plant data have been analysed mathematically. This has resulted in tools for process control, especially for bed stability monitoring and management. To verify earlier observations and to serve as a base for the development of control methods, a correlation analysis was carried out with process data from one year of operation. Variables included were measurements of flow, temperature and pressure, origins of the concentrates in the feed mix, chemical analyses of feed and product compositions and grain size distribution of the product. In addition, some calculated quantities used in the furnace operation were also included. The study was carried out using linear correlation analysis and by time series plots of selected variables. Although the bed stability was the main focus of
the analysis, the correlations were analysed in general and no single response (output) variable was selected. Separate analyses were carried out for both furnace lines. Along with the correlation analysis, the SOM was used as a tool for data mining and correlation exploration. The SOM algorithm quantizes the observations into a set of prototype vectors and maps them onto a two-dimensional grid. The updating method forces the prototype vectors of neighbouring map units in the same direction, so that they will represent similar observations. The number of prototype vectors is chosen substantially smaller than the number of observations in the training data. Thus, the algorithm generalizes correlations in the data and clusters the observations, and is thereby well suited to data visualisation.
4. Results
Based on process data studies with the SOM, an on-line tool for detecting unstable furnace behaviour from process measurements was developed. The software was developed and implemented in MATLAB® using version 2.0beta of SOM Toolbox (2002). Models for both furnace lines were made. The measurement variables were concentrate feed, air feed, oxygen feed, oxygen coefficient, water feed, windbox pressure, furnace and boiler temperatures at different spots, boiler steam production, boiler off-gas oxygen content, calcine composition (S²⁻, Na, Si, K, Cu, Pb) and the fraction of fine particles in the calcine. The temperature measurement signals were pre-treated by mean value calculation and a rule-based exclusion of non-representative signals. One-day mean values from around two years of operation were used for training. Observations from process shut-downs were excluded from the data, occasional erroneous values (due to probe or data transfer failures) were labelled as non-available, and the variables were normalised by linear scaling to unit variance. The dimension of the data vector fed to each SOM was 20. The SOM grid size was set to 9x6, i.e. 54 prototype vectors were set to represent around 600 observations in the data set. In the training data, the algorithm clustered most periods of instability close to each other, and a rough classification into stable and unstable areas of the map could be made. The classification was based on the knowledge that low concentrate feed, low windbox pressure and a large fraction of fine particles in the calcine correlate with instability. The SOM component planes in Figure 2 show how these variables are represented in the prototype vectors. It should be noted that although these variables correlate with bed stability, none of them could alone be used as a stability indicator. For the on-line interface, the map units correlating with instability were coloured red, the units close to this area yellow, and the other units green.
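As an illustration of the training set-up described above, the following minimal sketch trains a 9x6 map of prototype vectors on normalised process data. It is written in Python rather than the MATLAB SOM Toolbox actually used, and the decay schedules and initialisation are illustrative assumptions:

```python
import numpy as np

def train_som(data, rows=9, cols=6, epochs=30, lr0=0.5, sigma0=3.0):
    # Sequential SOM training: prototype vectors on a 2-D grid are
    # pulled towards each sample, with a Gaussian neighbourhood that
    # forces neighbouring units to represent similar observations.
    rng = np.random.default_rng(1)
    n, dim = data.shape
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    proto = data[rng.choice(n, rows * cols, replace=False)]  # init from samples
    n_steps = epochs * n
    for step in range(n_steps):
        x = data[rng.integers(n)]
        frac = step / n_steps
        lr = lr0 * (1.0 - frac)                 # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5     # shrinking neighbourhood
        bmu = np.argmin(((proto - x) ** 2).sum(axis=1))   # best-matching unit
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        proto = proto + lr * h[:, None] * (x - proto)
    return proto, grid
```

With around 600 one-day mean vectors of dimension 20, `train_som(data)` returns the 54 prototype vectors on which the stable/unstable classification is drawn.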
Figure 2. Component planes for three important variables in the SOM for furnace 2: windbox pressure (mbar), concentrate feed (t/h) and fraction <71 µm in calcine (%).

In the on-line application, the feed data consists of 8-hour mean values retrieved from the history database at the plant. The interface shows changes in bed stability as a five-day path (5 x 3 observations) of the best-matching unit (BMU) on the map. The BMU is the unit representing the prototype vector with the shortest distance to the input vector. The application also outputs a plot of the quantization error (the Euclidean distance between the BMU and the input vector) for the same period, which can be used as an indicator of model reliability. The on-line SOM tool has been in use for over one year, and has detected bed instability tendencies quite reliably. The BMU path on the map is easy to interpret, and gives a quick generalization of the situation in the furnace. Figure 3 shows the SOM interpretation of the stability of furnace 2 during five days in September 2002. During this period, the bed was moving from instability back to normal behaviour. The quantization error plot in Figure 3 indicates that the explanation of the first observations in the period is unreliable.
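The on-line step itself is simple; a sketch (function name hypothetical) of how each new 8-hour mean vector is matched to the map and how the reliability indicator is obtained:

```python
import numpy as np

def bmu_and_qerror(x, proto):
    # Best-matching unit: the prototype with the shortest Euclidean
    # distance to the input; the distance itself is the quantization
    # error used as a model-reliability indicator.
    dist = np.linalg.norm(proto - x, axis=1)
    bmu = int(np.argmin(dist))
    return bmu, float(dist[bmu])
```

Applying this to the last 15 observations (five days of 8-hour means) yields the BMU path and the quantization error plot of the kind shown in Figure 3.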
Figure 3. SOM visualisation of a bed stability path of furnace 2; the smallest circle represents the match of the first observation and the star shows the latest match. A plot of the quantization error for the same period is given to the right.
5. Conclusions and Further Work
Detection of fluidised bed instability requires multidimensional data and appropriate methods for its analysis. The SOM application at the roaster performs data analysis which, in contrast to a human observer, is systematic and consistent. The application reliably monitors bed stability and gives valuable support for operation. However, most of the process variables clearly correlating with instability show only consequences, and some of them are manipulated variables in control loops. The underlying reasons for a particular instability period may be so nested that they are hard to detect. Hence, the development of a rule-based system for isolating instability reasons and proposing corrective actions has been started. The rules are based on metallurgical and practical process know-how. Known inappropriate combinations of feed composition and process parameters are checked through, and when such a combination is found, the system gives advice on corrective actions. For instance, one rule is: IF calcine Cu > 0.6% AND oxygen coefficient < 1.2 THEN increase oxygen coefficient! Further work will include refining and tuning of the rules based on upcoming process situations.
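A rule of this kind maps directly onto code; a minimal hypothetical sketch of one advisory check (thresholds as quoted in the text, names invented for illustration):

```python
def advise(calcine_cu_pct, oxygen_coeff):
    # One expert rule from the text: high copper in the calcine combined
    # with a low oxygen coefficient is a known inappropriate combination.
    if calcine_cu_pct > 0.6 and oxygen_coeff < 1.2:
        return "Increase oxygen coefficient!"
    return None  # no corrective action suggested by this rule
```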
6. References
Alhoniemi, E., Hollmen, J., Simula, O. and Vesanto, J., 1999, Process Monitoring and Modeling using the Self-Organizing Map, Integrated Computer Aided Engineering, vol. 6, no. 1, pp. 3-14.
Isermann, R., 1997, Supervision, fault-detection and fault-diagnosis methods - an introduction, Control Engineering Practice, vol. 5, no. 5, pp. 639-652.
Jamsa-Jounela, S-L., Kojo, I., Vapaavuori, E., Vermasvuori, M. and Haavisto, S., 2001, Fault Diagnosis System for the Outokumpu Flash Smelting Process, Proceedings of 2001 TMS Annual Meeting, New Orleans, USA, pp. 569-578.
Kohonen, T., 2001, Self-Organizing Maps, volume 30 of Springer Series in Information Sciences, Springer, Berlin, Heidelberg.
Laine, S., Pulkkinen, K. and Jamsa-Jounela, S-L., 2000, On-line Determination of the Concentrate Feed Type at Outokumpu Hitura Mine, Minerals Engineering, vol. 13, no. 8-9, pp. 881-895.
Metsarinta, M-L., Taskinen, P., Jyrkonen, S., Nyberg, J. and Rytioja, A., 2002, Roasting Mechanisms of Impure Zinc Concentrates in Fluidized Beds, accepted for Yazawa International Symposium on Metallurgical and Materials Processing, March 2003, California, USA.
Rantala, A., Virtanen, H., Saloheimo, K. and Jamsa-Jounela, S-L., 2000, Using principal component analysis and self-organizing map to estimate the physical quality of cathode copper, Preprints of IFAC Workshop on Future Trends in Automation in Mineral and Metal Processing, Helsinki, Finland, pp. 373-378.
SOM Toolbox, 2002, http://www.cis.hut.fi/projects/somtoolbox [18 October 2002].
A Two-Layered Optimisation-Based Control Strategy for Multi-Echelon Supply Chain Networks
P. Seferlis and N. F. Giannelos
Chemical Process Engineering Research Institute (CPERI), P.O. Box 361, 57001 Thessaloniki, Greece, email: [email protected], [email protected]
Abstract A new two-layered optimisation-based control approach is developed for multi-product, multi-echelon supply chain networks. The first layer employs simple feedback controllers to maintain inventory levels at all network nodes within pre-specified targets. The feedback controllers are embedded as equality constraints within an optimisation framework that incorporates model predictive control principles for the entire network. The optimisation problem aims at adjusting the resources and decision variables of the entire supply chain network to satisfy the forecasted demands with the least required network operating cost over a specified receding operating horizon. The proposed control strategy is applied to a multi-product supply chain network consisting of four echelons (plants, warehouses, distribution centres, and retailers). Simulated results exhibit good control performance under various disturbance scenarios (stochastic and deterministic demand variation) and transportation time lags.
1. Introduction
A supply chain network is commonly defined as the integrated system encompassing raw material vendors, manufacturing and assembly plants, and distribution centres. The network is characterised by procurement, production, and distribution functions. Leaving aside the procurement function (purchasing of raw materials), the supply chain network becomes a multi-echelon production/distribution system (Figure 1). The operational planning and direct control of the network can in principle be addressed by a variety of methods, including deterministic analytical models, stochastic analytical models, and simulation models, coupled with the desired optimisation objectives and network performance measures (Beamon, 1998; Riddalls et al., 2000). Operating network cost, average inventory level, and customer service level are commonly employed performance measures (Thomas and Griffin, 1996; Perea et al., 2001). In the present work, we focus on the operational planning and control of integrated production/distribution systems under product demand uncertainty. For the purposes of our study and the time scales of interest, a discrete time difference model is developed. The model is applicable to networks of arbitrary structure. To treat demand uncertainty within the deterministic supply chain network model, a receding horizon, model predictive control approach is suggested. The two-level control algorithm relies on a decentralised safety inventory policy, coupled with the overall optimisation-based control approach.
Figure 1. Multi-echelon supply chain network.
2. Supply Chain Model
Let DP denote the set of desired products (or aggregated product families) of the system. These can be manufactured at plants, P, by utilising various resources, RS. The products are subsequently transported to and stored at warehouses, W. Products from warehouses are transported upon customer demand, either to distribution centres, D, or directly to retailers, R. Retailers receive time-varying orders from different customers for different products. Satisfaction of customer demand is the primary target in the supply chain management mechanism. Unsatisfied demand is recorded as back-orders for the next time period. A discrete time difference model is used to describe the supply chain network dynamics. The duration of the base time period depends on the dynamic characteristics of the network. The inventory balance equation, valid for warehouses and distribution centres, is:

  y_{i,k}(t) = y_{i,k}(t-1) + Σ_{k'} x_{i,k',k}(t - L_{k',k}) - Σ_{k''} x_{i,k,k''}(t)    ∀ k ∈ {W,D}, t ∈ T, i ∈ DP    (1)

where y_{i,k} is the inventory of product i stored in node k, and x_{i,k',k} and x_{i,k,k''} denote the amounts of the i-th product transported through routes (k',k) and (k,k''), respectively, where nodes k' supply k and nodes k'' are supplied by k. L_{k',k} denotes the transportation lag for route (k',k), assumed to be an integer multiple of the base time period. For retailer nodes, the inventory balance considers the actual delivery of product i attained, denoted by d_{i,k}:

  y_{i,k}(t) = y_{i,k}(t-1) + Σ_{k'} x_{i,k',k}(t - L_{k',k}) - d_{i,k}(t)    ∀ k ∈ R, t ∈ T, i ∈ DP    (2)

The balance equations for unsatisfied demand (i.e., back-orders) take the form:

  BO_{i,k}(t) = BO_{i,k}(t-1) + R_{i,k}(t) - d_{i,k}(t) - LO_{i,k}(t)    ∀ k ∈ R, t ∈ T, i ∈ DP    (3)

where R_{i,k}(t) denotes the demand for product i at retailer k and time period t, and LO_{i,k}(t) denotes the amount of cancelled back-orders (lost orders). At each node capable of carrying inventory (nodes of type W, D, and R), capacity constraints account for a maximum allowable inventory level:

  Y_k(t) = Σ_i α_i y_{i,k}(t) ≤ V_k^max    ∀ k ∈ {W,D,R}, t ∈ T    (4)

where Y_k denotes the actual inventory of the node, α_i the storage volume factor for each product, and V_k^max the maximum capacity of the node. A maximum allowable transportation capacity, T_{k,k''}^max, is defined for each permissible transportation route within the supply chain network:

  Σ_i β_i x_{i,k,k''}(t) ≤ T_{k,k''}^max    ∀ k ∈ {P,W,D}, t ∈ T    (5)

where β_i denotes the transportation volume factor for each product. For each manufacturing resource RS_j, a maximum level of availability, C_{j,k}, in each plant node is specified:

  Σ_i Σ_{k''} κ_{i,j} x_{i,k,k''}(t) ≤ C_{j,k}    ∀ k ∈ P, t ∈ T, j ∈ RS    (6)

where κ_{i,j} denotes the usage factor of the j-th resource for the i-th product.
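To make the bookkeeping of Eqs. (1)-(3) concrete, here is a minimal sketch of the discrete-time balances for one product and one route. The FIFO pipeline models the integer transportation lag; all names and numbers are illustrative, not the authors' network:

```python
from collections import deque

def simulate_retailer(orders, demand, lag=2, y0=50.0):
    # Discrete-time balances of Eqs. (2)-(3): shipments x(t) arrive
    # lag periods later; unmet demand accumulates as back-orders BO
    # (lost orders LO are taken as zero in this sketch).
    pipe = deque([0.0] * lag)          # goods in transit on the route
    y, bo, hist = y0, 0.0, []
    for x_t, d_t in zip(orders, demand):
        y += pipe.popleft()            # x(t - L) reaches the retailer
        pipe.append(x_t)               # x(t) enters the transportation route
        delivered = min(y, d_t + bo)   # cannot deliver more than the stock
        y -= delivered
        bo += d_t - delivered          # Eq. (3) back-order balance
        hist.append((y, bo))
    return hist
```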
3. Control Strategy for Supply Chain Management Supply chain management is performed within a two-layered approach. The first layer aims at keeping inventory levels around pre-specified targets. A single dedicated controller is used for each inventory node. Disturbances are generated by demand fluctuations at downstream nodes. The second level of control is a model predictive optimisation-based scheme that considers the entire network dynamics, embedding the inventory controllers of the first layer and a stochastic model for demand variation. 3.1. Inventory control Proportional-integral-derivative (PID) controllers are derived for the feedback control of inventory levels. The PID control law in discrete velocity form is given by the following relationship (Marlin, 1995):
  mv_k(t) = mv_k(t-1) + Kc [ (1 + Δt/τI + τD/Δt) e_k(t) - (1 + 2 τD/Δt) e_k(t-1) + (τD/Δt) e_k(t-2) ]    (7)

where mv_k(t) is the value of the manipulated variable for the inventory controller of node k, e_k(t) the deviation of the node inventory from its target at time t, Δt the discrete control interval, Kc the proportional gain, τI the reset time for the integral mode, and τD the reset time for the derivative controller mode. The manipulated variable for the inventory control of each node is the total amount of the products transferred from all supplying nodes to node k:

  mv_k(t) = Σ_i Σ_{k'} x_{i,k',k}(t - L_{k',k})    (8)

The choice of the manipulated variables imposes a constraint on all incoming material to a particular node. The PID controllers are tuned to allow for fast set-point tracking and good disturbance rejection dynamics, taking into consideration the transportation delay between nodes.
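A direct transcription of the velocity form (7), as a sketch with illustrative variable names:

```python
def pid_velocity(mv_prev, e, e1, e2, Kc, tauI, tauD, dt):
    # Eq. (7): the new manipulated variable follows from the previous one
    # and the control errors at t, t-1 and t-2 (the velocity form needs
    # no controller bias term).
    return mv_prev + Kc * ((1 + dt / tauI + tauD / dt) * e
                           - (1 + 2 * tauD / dt) * e1
                           + (tauD / dt) * e2)
```

The returned value is the total incoming flow assigned to the node by Eq. (8); because the controllers are embedded as equality constraints, the optimisation layer then allocates this total among the supplying routes and products.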
3.2. Optimisation-based model predictive control
Supply chain management requires a number of decisions to be determined at every time period. The main objectives of the supply chain network can be summarised as follows: (i) maximise customer satisfaction, and (ii) keep operating supply chain costs low. The first target is attained by the minimisation of back-orders over a period of time, while the second target is achieved by minimising the transportation and inventory (storage) costs associated with the supply chain network. Based on the fact that past and present control actions affect the future response of the system, a receding time horizon is selected. The trajectory of the system is predicted and compared to the desired trajectory. The control actions are then determined from the minimisation of a performance index over the given time horizon, t_h:

  J = Σ_t Σ_{k∈R} Σ_i w_BO BO_{i,k}(t) + Σ_t Σ_{k∈{W,D,R}} Σ_i Σ_{k'} w_{T,i,k',k} x_{i,k',k}(t)    (9)

The performance index, J, includes a term penalizing back-orders at all retailer nodes, and a term accounting for the transportation costs. The weighting factors w_BO and w_T reflect the relative importance of the controlled (back-orders) and manipulated (transportation of products) variables. The overall optimisation-based model predictive controller for the supply chain network takes the following form:

  Min J
  s.t.  supply chain model, Eqs. (1)-(6)
        feedback inventory controllers, Eqs. (7)-(8)
        stochastic disturbance model    (P)
All variables in the supply chain network are assumed to be continuous. This is definitely valid for bulk commodities and products. For unit products, continuous variables can still be utilised, with the addition of a post-processing rounding step to identify neighbouring integer solutions. This approach, though not formally optimal, may be necessary to retain computational tractability in systems of industrial relevance. The computational cost for the solution of the linear programme (P) increases with the size of the receding time horizon, which has to be carefully chosen in order to balance good control performance with robustness to external disturbances. The time horizon should be selected at least as large as the largest transportation delay in the system.
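The structure of problem (P) can be illustrated on a deliberately small instance: one product, one retailer and one route with a one-period lag, solved as an LP with scipy. This is a sketch under these assumptions, not the authors' 650-variable-per-period network model; weights and bounds are invented:

```python
import numpy as np
from scipy.optimize import linprog

def mpc_step(y0, bo0, demand, w_bo=10.0, w_t=1.0, ymax=100.0):
    # Variable layout over horizon H: shipments x_0..x_{H-1},
    # deliveries s_1..s_H, inventories y_1..y_H, back-orders BO_1..BO_H.
    H = len(demand)
    nx = 4 * H
    c = np.concatenate([w_t * np.ones(H),      # transportation cost term
                        np.zeros(2 * H),       # deliveries and inventories
                        w_bo * np.ones(H)])    # back-order penalty term
    A_eq = np.zeros((2 * H, nx))
    b_eq = np.zeros(2 * H)
    for t in range(H):
        # inventory balance: y_{t+1} - y_t - x_t + s_{t+1} = 0 (one-period lag)
        A_eq[t, 2 * H + t] = 1.0
        A_eq[t, t] = -1.0
        A_eq[t, H + t] = 1.0
        if t > 0:
            A_eq[t, 2 * H + t - 1] = -1.0
        else:
            b_eq[t] = y0
        # back-order balance: BO_{t+1} - BO_t + s_{t+1} = d_{t+1}
        A_eq[H + t, 3 * H + t] = 1.0
        A_eq[H + t, H + t] = 1.0
        if t > 0:
            A_eq[H + t, 3 * H + t - 1] = -1.0
            b_eq[H + t] = demand[t]
        else:
            b_eq[H + t] = demand[t] + bo0
    bounds = [(0, None)] * (2 * H) + [(0, ymax)] * H + [(0, None)] * H
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return float(res.x[0])     # apply only the first shipment, then re-solve
```

Calling `mpc_step(y0=10.0, bo0=0.0, demand=[8, 9, 7, 8, 8])` returns the shipment to release in the current period; the horizon then rolls one period forward and the LP is solved again, which is the receding-horizon principle described above.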
4. Simulated Results
A four-echelon supply chain system consisting of two production nodes, two warehouse nodes, four distribution centres, and 16 retailer nodes is studied. All possible connections between immediately successive echelons are permitted. There are a number of low-cost routes between successive echelons that carry the bulk of the supply, and a number of more expensive alternative routes that are used periodically when the cost of accumulated back-orders becomes significant. Five product families are being distributed. The size of the network model is 650 variables per time period, with 358 product quantities along transportation routes and 80 back-order quantities as optimisation variables.
4.1. Deterministic disturbance in demand
The given network can accommodate step changes in the order of 50% for each product family. Retailer inventories resume their set-points within typically 10 time periods, whereas 15 time periods are required in the larger warehouse nodes. The maximum deviation from inventory set-points is kept below 20% (Figure 2).
4.2. Stochastic disturbance in demand
A stationary stochastic model is used to describe the demand variation over the simulated time span. Typical simulation results are reported in Table 1 and Figure 3. Demand satisfaction is optimised at high levels in most cases. The performance index, reflecting network operating costs, increases at higher demand variances and transportation delays. Larger transportation lags require a more conservative controller tuning in order to avoid controller-induced instability. However, there is a compromise between tight inventory control (small inventory variance) and the achievable minimum performance index. Cases 4-5 and 7-8 exemplify the impact of proper receding horizon selection on supply chain network performance. In case 4, a time horizon equal to 3 time periods achieves a higher demand satisfaction rate, at the expense of higher transportation costs, than case 5, where a time horizon equal to 5 is selected. In case 7, the control horizon is equal to the maximum transportation lag in the system (5 time periods), resulting in a relatively high performance index and lower customer satisfaction; by increasing the control horizon to 6 time periods (case 8), a significant improvement is observed. In general, shorter time horizons lead to more aggressive control actions, but larger time horizons may render the control scheme more susceptible to demand variations.
Figure 2. Inventory levels at the warehouse (W), distribution (D) and retailer (R) nodes for a 50% step change in demand for product A.
Figure 3. Inventory levels for case 1.
Table 1. Results of simulated cases.

Case  Transport lags [L_PW, L_WD, L_DR]  Receding horizon t_h  Demand satisfaction, %  Variance in product demand  J/time period
1     [1 1 1]                            5                     99.9                    0.1                         21.86
2     [1 1 1]                            5                     99.6                    0.2                         22.02
3     [1 1 1]                            5                     94.8                    0.5                         24.57
4     [2 1 1]                            3                     99.9                    0.1                         24.30
5     [2 1 1]                            5                     99.7                    0.1                         21.86
6     [4 2 1]                            5                     98.9                    0.1                         26.16
7     [5 2 1]                            5                     90.5                    0.1                         40.42
8     [5 2 1]                            6                     99.7                    0.1                         24.87
9     [5 3 1]                            6                     98.9                    0.1                         26.77
5. Conclusions A two-layered control strategy was described for supply chain management purposes. The strategy combines feedback controllers to account for the fast dynamics at the inventory nodes, while utilising the power of a fully-centralised optimisation-based model predictive controller to achieve an optimal operating policy for the supply chain network over a selected time horizon.
6. References
Beamon, B.M., 1998, Int. J. Production Economics, 55, 281.
Marlin, T.E., 1995, Process Control, McGraw-Hill, New York.
Perea-Lopez, E., Grossmann, I.E., Ydstie, B.E. and Tahmassebi, T., 2001, Ind. Eng. Chem. Res., 40, 3369.
Riddalls, C.E., Bennet, S. and Tipi, N.S., 2000, Int. J. Syst. Sci., 31, 969.
Thomas, D.J. and Griffin, P.M., 1996, Eur. J. Oper. Res., 94, 1.
Dynamic Control of a Petlyuk Column via Proportional-Integral Action with Dynamic Estimation of Uncertainties
Juan Gabriel Segovia-Hernandez^a, Salvador Hernandez^b, Ricardo Femat^c and Arturo Jimenez^a
^a Instituto Tecnologico de Celaya, Depto. de Ingenieria Quimica, Av. Tecnologico y Garcia Cubas s/n, Celaya, Gto., 38010, Mexico, [email protected]
^b Universidad de Guanajuato, Facultad de Quimica, Noria Alta s/n, Guanajuato, Gto., 36050, Mexico.
^c Departamento de Matematicas Aplicadas y Sistemas Computacionales, IPICYT, Apdo. Postal 3-90, 78231, Tangamanga, San Luis Potosi, SLP, Mexico.
Abstract
A three-point control configuration based on a proportional-integral controller with dynamic estimation of unknown disturbances was implemented in a Petlyuk column. The proposed controller comprises three feedback terms: proportional, integral and quadratic actions. The first two terms act in a similar manner to the classical PI control law, while the quadratic term (double integral action) accounts for the dynamic estimation of unknown disturbances. A comparison with the classical PI control law was carried out to analyze the performance of the proposed controller in the face of unknown feed disturbances and set point changes. The results show that the closed-loop response of the Petlyuk column is significantly improved with the proposed controller.
1. Introduction
Since the energy consumption of distillation columns can have a significant influence on overall plant profitability, several strategies have been suggested to improve their energy efficiency. One strategy is the use of thermal coupling, in which the transfer of heat is accomplished by direct contact of material flow between two columns. For the separation of ternary mixtures, the Petlyuk column (also known as the fully thermally coupled distillation column) provides a choice of special interest (Figure 1). The Petlyuk column had not gained interest in the process industries until recent times (Hairston, 1999), even though its concept was established some 50 years ago (Brugma, 1937; Wright, 1949). Savings in both energy consumption and fixed investment can be accomplished through the implementation of such a separation scheme. Theoretical studies have shown that Petlyuk columns can save up to 30% in energy costs compared to conventional schemes (e.g. Petlyuk et al., 1965; Glinos and Malone, 1988). Such results have promoted the development of more formal design methods (Triantafyllou and Smith, 1992; Hernandez and Jimenez, 1999a; Amminudin et al., 2001; Muralikrishna et al., 2002). To promote a stronger potential for industrial implementation, a proper understanding of operation and control aspects is needed to complement the energy savings results. Recent efforts have contributed to the understanding of the dynamic properties of the Petlyuk column (Wolff and Skogestad, 1995; Abdul-Mutalib and Smith, 1998; Hernandez and Jimenez, 1999b; Serra et al., 1999; Jimenez et al., 2001).
Figure 1. Petlyuk column.

The expectation that the dynamic properties of Petlyuk columns may cause control difficulties, compared to the rather well-known behavior of the conventional direct and indirect sequences for the separation of ternary mixtures, has been one of the factors contributing to their lack of industrial implementation. In this work, we analyze the closed-loop behavior of Petlyuk columns when a proportional-integral controller with dynamic estimation of unknown disturbances is implemented (Alvarez-Ramirez et al., 1997). The performance of the integrated column under such a controller is compared to the behavior under a traditional proportional-integral controller. The analysis is based on rigorous dynamic simulations, and two cases are considered: (i) set point tracking and (ii) output regulation under load disturbances in the feed mixture.
2. Design of Petlyuk Columns A base design for the Petlyuk column was first obtained, followed by an optimization procedure to detect the base operating conditions under which the minimum energy consumption for such a design was achieved. The optimization procedure has been described by Hernandez and Jimenez (1999a). An ODE model was formulated with equations for total mass, component mass and energy balances, along with ideal VLE and stage hydraulics relationships. After the model is formulated, the design problem of the Petlyuk column shows five degrees of freedom; three of them are consumed by the implementation of three control loops, and the two additional degrees of freedom are used as search variables to detect the operation with minimum energy consumption. The search variables we used were the flowrates of the liquid and the vapor interconnecting streams (LF and VF, Figure 1).
3. The PI Control with Dynamic Estimation of Uncertainties
The implementation of output feedback control for distillation columns can be configured such that only the liquid composition of the output flowrate is regulated (i.e., uncoupled one-point configuration control; see the second example by Alvarez-Ramirez et al., 1998). In such a configuration, the liquid compositions of the main product streams A, B and C (see Figure 1) were taken as the controlled variables whereas, respectively, the reflux flowrate, the side stream flowrate and the reboiler heat duty were chosen as the manipulated variables. The ideas behind the simulations are (i) to show that the Petlyuk column can be controlled by exploiting a simple control configuration and (ii) to improve the closed-loop performance by implementing a proportional-integral feedback with dynamic estimation of unknown disturbances (also called PII²). The main idea behind the proposed controller is to estimate the input d = d(t) from the system output and, if the estimated value is close to the actual one, to use the estimate in its place within a PI-like control law. The PI control law can be written as the following dynamic system (Luyben, 1990):
  u = -Kc (y - r) - z
  dz/dt = KI (y - r)    (1)
where r stands for the input reference, y denotes the system output and z is the integral of the control error. The constants Kc and KI = Kc/τI (where τI denotes the reset time) stand for the proportional and integral gains, respectively. The estimated value of the input d is computed via the following equations:
  dŷ/dt = -ŷ + Kp u + d̂ + g1 (y - ŷ)    (2)
  dd̂/dt = g2 (y - ŷ)    (3)
where d̂ and ŷ are, respectively, the estimated values of d and y, whereas g1 and g2 are estimation constants, which must be strictly positive to guarantee convergence of the estimation errors e1 = (y - ŷ) and e2 = (d - d̂) to the origin; i.e., if g1, g2 > 0 then (e1, e2) → (0, 0), that is, ŷ → y and d̂ → d, for all time t ≥ t0 ≥ 0 and any initial condition within the physically realizable operation of the column (Femat et al., 1999). Note that equations (1)-(3) are linear; therefore, one can readily obtain a transfer function for the system. Indeed, the transfer function takes the form C(s) = u(s)/e_C(s) = Kc + KI/s + KE/[s(τII s + 1)], where s = ωj, j = (-1)^(1/2), and e_C(s) = y(s) - r(s); KE = KE(g1, g2) and τII = τII(g1, g2) stand for the gain and characteristic time of the dynamic estimation term (namely KE/[s(τII s + 1)]). Note that such a term is quadratic and provides a dynamic estimation of the input d, which can represent load disturbances (regulation problem) or step changes in references (servo-control problem). For further details on the tuning and closed-loop stability analysis of the PII² controller, see Alvarez-Ramirez et al. (1997) and Femat et al. (1999), respectively. The performance of the PII² controller was compared with that of the classical PI control action, which is a widely-used type of controller in the chemical industry. Both controllers were tuned following the criterion of the integral of the absolute error (IAE), such that the values of the control gains (Kc, KI and τI in the case of PI controllers, or Kc, g1 and g2 in the case of the PII²) that provided a minimum value of IAE for a set point change for each separation scheme were detected.
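A minimal sketch of how Eqs. (1)-(3) can be simulated together follows. The Euler discretisation, the parameter values and the exact way the estimate d̂ enters the control law are illustrative assumptions, not the authors' implementation:

```python
def pii2_step(y, r, state, Kc, Ki, g1, g2, Kp=1.0, dt=1e-3):
    # One integration step of the PII^2 controller, Eqs. (1)-(3).
    # state = (z, y_hat, d_hat): integral error and observer states.
    z, y_hat, d_hat = state
    u = -Kc * (y - r) - z - d_hat    # PI action plus disturbance compensation
    z += dt * Ki * (y - r)                                       # Eq. (1)
    y_hat += dt * (-y_hat + Kp * u + d_hat + g1 * (y - y_hat))   # Eq. (2)
    d_hat += dt * g2 * (y - y_hat)   # Eq. (3): the double-integral term
    return u, (z, y_hat, d_hat)
```

With g1 = g2 = 0 and d_hat held at zero, the law collapses to the classical PI controller, which mirrors the PI baseline used for comparison in the simulations.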
4. Separation Objective and Control Goals
The analysis presented in this work is based on the separation problem of three different ternary mixtures with molar compositions (A, B, C) equal to (0.40, 0.20, 0.40) and product purities of 98.7, 98 and 98.6 percent, respectively. The three mixtures considered were n-pentane, n-hexane and n-heptane (mixture 1), n-butane, isopentane and n-pentane (mixture 2), and isobutane, n-butane and n-hexane (mixture 3). Two sets of simulations were carried out: (i) servo control, in which a step was induced as a set point change for each product composition under SISO feedback control at each output flowrate (see Figure 1); and (ii) regulation under load disturbances, in which a feed composition disturbance was induced to test and compare the performance of the proposed controller (a 5% change in the composition of one component with the same total feed flowrate).
5. Dynamic Simulations and Results
The dynamic results are presented on a comparative basis to allow for a better assessment of the controllers' performance. For mixture 1, the IAE values for the responses in the face of feed disturbances show that the PII² controller provides a better behavior than the PI controller. Table 1 shows how the IAE values for the PII² controller are smaller than those for the PI controller. The highest values of IAE, which reflect the most difficult control task, are obtained for the stabilization of the intermediate component. This is probably because the dynamic behavior of the composition of the intermediate component under open loop operation shows an inverse response. However, it should be noted that the most significant improvement of the PII² controller over the PI controller was obtained for the control of the intermediate component (in contrast to the control problem of the lightest and heaviest components).

Table 1. IAE results for mixture 1 under load disturbances at feed compositions.

Component  PII²              PI
A          1.2296 × 10^-     2.2082 × 10^-
B          2.8259 × 10^-     8.7463 × 10^-
C          1.745939 × 10^-   3.1119 × 10^-
A more detailed analysis of the dynamic performance obtained for the lightest (A) and the heaviest (C) components follows. Figure 2 displays the simulation results obtained for the control analysis of component A. When load disturbances were induced, the PII² controller adjusts the composition of product A smoothly, while the PI action shows a significant time period of product quality deterioration (Figure 2a). The control effort, measured through the changes in the control valve positions (Figure 2b), reflects the superior performance of the PII² controller accordingly. When a set point change in the composition of component A was imposed, the controllers showed a fairly similar behavior, as observed in Figures 2c and 2d. Figure 3 shows some results obtained for the control of the heaviest component. A superior behavior of the PII² option is again evident. Although for the set point tracking case (Figure 3c) the use of the PII² controller shows only a slightly better performance than the PI controller, a remarkable improvement is obtained when responses to feed disturbances are considered (Figure 3a). For the case of set point tracking, the responses of the system under the action of either the PII² controller or the traditional PI controller were not significantly different, although the PII² controller generally provided a faster adjustment with fewer oscillations. When the response of the column to feed disturbances was analyzed, the PII² controller provided a remarkable improvement over the use of the PI controller; while in several cases the implementation of the PI controller yielded extremely high settling times, the PII² controller showed an excellent capability to eliminate the feed disturbance quickly and without overshoot problems. As far as control efforts are concerned, the implementation of the PII² controller provided smoother control actions; the variations in control valve positions were minor, in contrast with the results for the classical PI control mode, in which the valves even became saturated (or completely closed) in several of the tests conducted.
Figure 2. Some representative dynamic responses (component A) for the separation of mixture 1: a) response to feed disturbance in composition A; b) control valve response to the feed disturbance; c) response to set point change in composition A; d) control valve response to the set point change.
Figure 3. Some representative dynamic responses (component C) for the separation of mixture 1: a) response to feed disturbance in composition C; b) control valve response to the feed disturbance; c) response to set point change in composition C; d) control valve response to the set point change.
When mixtures 2 and 3 were subjected to the same tests, similar trends in the dynamic responses of the Petlyuk column were obtained. In particular, the PII² controller provided a remarkable performance when load changes in the feed composition were considered. The smooth performance of the PII² controller is induced by its disturbance estimator, which resembles the structure of linear state observers.
6. Conclusions
The control of a Petlyuk column with a proportional-integral controller with dynamic estimation of uncertainties was analyzed. The dynamic behavior under this action was compared to the Petlyuk column performance under a proportional-integral controller. Set point tracking and responses to feed composition disturbances were analyzed. The results obtained for three case studies show that, after optimizing the controller parameters of each control policy, the closed loop behavior under the PII² control mode was significantly better than the responses obtained with a PI controller. The superiority of the PII² control option was particularly noticeable when the column was subjected to feed disturbances. The properties of the PII² controller allow a proper detection of disturbances and a proper corrective action that prevents significant deviations of the controlled output from the desired operation point. In general, the PII² controller has been found to have an excellent potential for the control of the Petlyuk column.
7. References
Abdul-Mutalib, M.I. and Smith, R., 1998, Trans Inst. Chem. Eng., 76, 308.
Alvarez-Ramirez, J., Femat, R. and Barreiro, A., 1997, Ind. Eng. Chem. Res., 36, 3668.
Amminudin, K.A., Smith, R., Thong, D.Y.-C. and Towler, G.P., 2001, Trans Inst. Chem. Eng., 79, 701.
Brugma, A.J., 1937, Dutch Patent No. 41,850, October 15.
Femat, R., Alvarez-Ramirez, J. and Rosales-Torres, M., 1999, Comp. Chem. Eng., 23, 697.
Glinos, K. and Malone, F., 1988, Chem. Eng. Res. Des., 66, 229.
Hairston, D., 1999, Chem. Eng., April, 32.
Hernandez, S. and Jimenez, A., 1999a, Comp. Chem. Eng., 23, 1005.
Hernandez, S. and Jimenez, A., 1999b, Ind. Eng. Chem. Res., 38, 3957.
Jimenez, A., Hernandez, S., Montoy, F.A. and Zavala-Garcia, M., 2001, Ind. Eng. Chem. Res., 40, 3757.
Luyben, W.L., 1990, Process Modeling, Simulation and Control for Chemical Engineers, 2nd Ed., McGraw-Hill, Singapore.
Muralikrishna, K., Madhavan, K.P. and Shah, S.S., 2002, Trans Inst. Chem. Eng., 80, 155.
Petlyuk, F.B., Platonov, V.M. and Slavinskii, D.M., 1965, Int. Chem. Eng., 5(3), 555.
Serra, M., Espuna, A. and Puigjaner, L., 1999, Chem. Eng. Process., 38, 549.
Triantafyllou, C. and Smith, R., 1992, Trans Inst. Chem. Eng., 70, 118.
Wolff, E.A. and Skogestad, S., 1995, Ind. Eng. Chem. Res., 34, 2094.
Wright, R.O., 1949, U.S. Patent 2,471,134, May 24.
8. Acknowledgements Financial support from Conacyt and Concyteg, Mexico, is gratefully acknowledged.
Dynamic Study of Thermally Coupled Distillation Sequences Using Proportional-Integral Controllers
Juan Gabriel Segovia-Hernandez^a, Salvador Hernandez^b, Vicente Rico-Ramirez^a and Arturo Jimenez^a
^a Instituto Tecnologico de Celaya, Departamento de Ingenieria Quimica, Av. Tecnologico y Garcia Cubas s/n, Celaya, Gto., 38010, Mexico.
^b Universidad de Guanajuato, Facultad de Quimica, Noria Alta s/n, Guanajuato, Gto., 36050, Mexico, [email protected]
Abstract A comparative study of the energy requirements and control properties of three thermally coupled distillation schemes and two conventional distillation sequences for the separation of ternary mixtures is presented. The responses to set point changes under closed loop operation with proportional-integral (PI) controllers were obtained. Three composition control loops were used, and for each separation scheme, the parameters of the PI controllers were optimized using the integral of the absolute error criterion. The effects of feed composition and of the ease of separability index were considered. The results indicate that there exist cases in which integrated systems may exhibit better control properties than sequences based on conventional distillation columns.
1. Introduction
The distillation process, the most widely-used separation method in industry, is characterized by its high energy consumption. Alternate arrangements to the conventional distillation column (one input, two outputs) have received noticeable attention in recent years. Through the use of recycle streams between two columns, several thermally coupled distillation systems have been proposed. The three thermally coupled systems that have been analyzed to the greatest extent are the system with a side rectifier (TCDS-SR, Figure 1a), the system with a side stripper (TCDS-SS, Figure 1b), and the fully thermally coupled distillation system (or Petlyuk column, Figure 1c). Several studies have shown that those TCDS schemes can save up to 30% in energy consumption with respect to the direct and indirect sequences based on conventional columns (Tedder and Rudd, 1978; Alatiqi and Luyben, 1985; Glinos and Malone, 1988; Fidkowski and Krolikowski, 1991; Finn, 1993; Yeomans and Grossmann, 2000). Most of these results were obtained through energy consumption calculations at minimum reflux conditions, and they spawned the development of more formal design procedures. Hernandez and Jimenez (1996, 1999a) have reported the use of optimization strategies for TCDS to detect designs with minimum energy consumption. When comparing the energy savings of the integrated schemes, it has been found that in general the Petlyuk system offers better savings than the systems with side columns. However, the complex column configurations that can potentially produce larger energy savings are not commonly used in industrial practice, largely because of control concerns (Dunnebier and Pantelides, 1999). Recent research efforts have been conducted to understand the operational properties of TCDS. The works of Wolff and Skogestad (1995), Abdul-Mutalib and Smith (1998), Hernandez and Jimenez (1999b) and Jimenez et al. (2001) have shown that some of these integrated options are controllable, so that their potential implementation would probably not be at the expense of control problems. In this work, we present an analysis of the closed loop behavior of three TCDS, and compare their responses to those of the conventional direct and indirect distillation sequences. The analysis is based on rigorous dynamic simulations using changes in set points of product compositions.
2. Design procedure The design of the three TCDS under consideration was carried out following the procedure suggested by Hernandez and Jimenez (1996, 1999a). The method provides a tray structure for the integrated systems by a section analogy procedure with respect to the design of a conventional sequence; the TCDS-SR is obtained from the tray arrangements of a direct sequence, the TCDS-SS from an indirect sequence, and the Petlyuk system from a sequence of a prefractionator followed by two binary distillation columns. The degrees of freedom that remain after design specifications (one degree of freedom for the systems with side columns, and two for the Petlyuk system) were used to obtain the operating conditions under which the integrated designs provide minimum energy consumption.
Figure 1. Thermally coupled distillation sequences: (a) TCDS-SR; (b) TCDS-SS; (c) Petlyuk column.

The search procedure provided the optimal values of the interconnecting vapor flowrate (VF) for the TCDS-SR (Figure 1a), the interconnecting liquid flowrate (LF) for the TCDS-SS (Figure 1b), or both streams for the case of the Petlyuk column (Figure 1c). Rigorous simulations, using the dynamic model developed by Hernandez and Jimenez (1996), were conducted to test the designs. The model is based on the total mass balance, component mass balances, equilibrium relationships (assuming ideal VLE), summation constraints, energy balance and stage hydraulics (Francis weir formula). Because of the coupling between the columns, the set of equations must be solved simultaneously.
3. Dynamic Simulations and Case Studies
Although more formal techniques to define the control loops for the integrated columns may be used (for instance, the relative gain array method), we based our selection on practical considerations. Thus, the control of the lightest component was manipulated with the reflux flowrate, the heaviest component with the reboiler heat duty, and the control of the intermediate component was a function of the integrated structure: for the TCDS-SR it was tied to the reflux flowrate of the side rectifier, for the TCDS-SS to the heat duty of the side stripper, and for the Petlyuk column to the product stream flowrate. The closed loop analysis was based on proportional-integral controllers. The parameters of the controllers, the proportional gains (Kc) and the reset times (τI), were optimized for each conventional and integrated scheme following the integral of the absolute error (IAE) criterion. The case studies were selected to reflect different separation difficulties and different contents of the intermediate component in the ternary mixtures. Three mixtures with different values of the ease of separability index (ESI, the ratio of relative volatilities of the split AB to the split BC, as defined by Tedder and Rudd, 1978) were considered. The selected mixtures were n-pentane, n-hexane and n-heptane (M1, ESI = 1.04), n-butane, isopentane and n-pentane (M2, ESI = 1.86), and isobutane, n-butane and n-hexane (M3, ESI = 0.18). To examine the effect of the content of the intermediate component, two types of feed compositions were assumed: one feed with a low content of the intermediate component (mole fractions of A, B, C equal to 0.40, 0.20, 0.40; feed F1) and another with a high content of the intermediate component (A, B, C equal to 0.15, 0.70, 0.15; feed F2). The total feed flowrate for all cases was 45.5 kmol/h. Specified product purities of 98.7, 98 and 98.6 percent for A, B and C, respectively, were assumed.
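The tuning step can be sketched as a brute-force search over the PI parameters, scoring each candidate by the IAE of the closed-loop response to a set point change. The `simulate` callback is a hypothetical placeholder standing in for the rigorous column model:

```python
import numpy as np

def iae(t, y, ysp):
    # Integral of the absolute error by the trapezoidal rule.
    e = np.abs(np.asarray(y) - ysp)
    return float(np.sum(0.5 * (e[1:] + e[:-1]) * np.diff(t)))

def tune_pi(simulate, Kc_grid, tauI_grid, ysp=1.0):
    # Grid search for the (Kc, tauI) pair minimising the IAE for a
    # set point change, mirroring the tuning criterion in the text.
    best_score, best_pair = np.inf, None
    for Kc in Kc_grid:
        for tauI in tauI_grid:
            t, y = simulate(Kc, tauI)      # closed-loop response
            score = iae(t, y, ysp)
            if score < best_score:
                best_score, best_pair = score, (Kc, tauI)
    return best_pair
```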
4. Energy Requirements
The results on energy requirements reflect the optimization procedure carried out on the recycle streams for the three integrated sequences.
4.1. Mixture M1
Table 1 shows the energy requirements of each integrated scheme and of the conventional sequences for mixture M1. The Petlyuk system shows the best potential, offering savings in energy consumption of up to 50% with respect to the conventional distillation sequences. The TCDS-SR and TCDS-SS require between 14 and 20% less energy than the conventional sequences.
4.2. Mixtures M2 and M3
The superior energy efficiency of the Petlyuk column was also observed for mixtures M2 and M3 (Segovia-Hernandez, 2001). In the case of mixture M2, the Petlyuk column can offer savings in energy consumption of up to 15% with respect to the conventional sequences, while the savings achieved by the TCDS-SR and TCDS-SS schemes are in the order of 10%. In the case of mixture M3, the Petlyuk column requires between 40 and 50% less energy, whereas the TCDS-SR and TCDS-SS options offer energy savings of up to 30% with respect to the conventional sequences.

Table 1. Energy requirements (Btu/h) for separating mixture M1.

Feed  Direct sequence  Indirect sequence  TCDS-SR      TCDS-SS      Petlyuk column
F1    3,263,772.2      3,547,190.0        2,521,007.0  2,730,465.2  1,709,474.1
F2    4,127,083.9      4,356,343.8        3,167,085    3,511,610.3  2,142,722.5
5. Dynamic Results
The dynamic analysis was based on individual set point changes for the product composition in each of the three product streams. The three control loops of each conventional and integrated sequence were assumed to operate in closed loop fashion.
5.1. Mixture M1, composition F1
Table 2 shows the IAE values obtained for each composition control loop of the distillation sequences under analysis. It is observed that the Petlyuk column offers the best dynamic behavior, which is reflected in the lowest values of IAE, for the control of the three product streams. The dynamic response of each control loop of the Petlyuk column is displayed in Figure 2, where a comparison can be made to the response obtained with the widely-used direct sequence. One may notice in particular how the direct sequence is unable to control the composition of the intermediate component, while the Petlyuk column provides a smooth response with a relatively short settling time. It is interesting to notice that for this mixture, with an ESI value close to 1 and a low content of the intermediate component in the feed, the Petlyuk column offers the highest energy savings and also shows the best dynamic performance of the five distillation sequences under consideration.
Figure 2. Dynamic responses of the Petlyuk column and the direct sequence: a) component C (Petlyuk); b) component B (Petlyuk); c) component A (Petlyuk); d) component C (direct sequence); e) component B (direct sequence); f) component A (direct sequence).

5.2. Mixture M1, composition F2
When the content of the intermediate component in the feed was raised from 20 to 70 percent, significant changes in the dynamic responses of the distillation systems were observed. The first remark is that the Petlyuk column does not provide the best choice from an operational point of view. A second observation is that the best choice depends on the control loop of primary interest. When the control of the light (A) or the heavy (C) component of the ternary mixture is of primary concern, the TCDS-SS scheme provides the best option, since it offers the lowest IAE values for these control loops.
However, if the control policy calls for the composition of the intermediate (B) component, the indirect sequence shows the best behavior, with the lowest value of IAE. Overall, it may be stated that for this type of mixture the TCDS-SS may offer a good compromise, providing energy savings with respect to the conventional sequences together with good dynamic properties.

Table 2. IAE results for mixture M1, composition F1.

Sequence  Component A        Component B        Component C
Direct    7.92441 × 10^-     5.28568 × 10^-     2.95796 × 10^-
Indirect  4.0076 × 10^-      3.4576 × 10^-      2.64873 × 10^-
TCDS-SR   3.55963 × 10^-     2.78147 × 10^-     7.99529 × 10^-
TCDS-SS   7.69839 × 10^-     8.9876 × 10^-      3.80888 × 10^-
Petlyuk   1.74924 × 10^-     3.42972 × 10^-     2.10607 × 10^-
5.3. Other mixtures The analysis was completed with the consideration of the other four case studies. Some trends were observed. For one thing, the best option depends on the amount of intermediate component. Also, it was found that the best sequence, based on the lAE criterion, for the control of the light component was also the best choice for the control of the heavy component, but a different separation scheme provided the best option for the control of the intermediate component. If the feed contains low amounts of the intermediate component, the Petlyuk column shows the best dynamic behavior for the control of the light and heavy components, while the indirect sequence provides the best responses for the control of the intermediate component. For feed mixtures with high content of the intermediate component, sequences with side columns showed the best responses for the control of light and heavy components, and conventional sequences were better for the control of the intermediate component. The ease of separability index also shows some effect on the topology of the preferred separation scheme when the feed contains a high amount of the intermediate component. For mixtures with ESI higher than one, the systems with two bottom streams (integrated or conventional) show the best dynamic properties, while for mixtures with ESI lower than one, the separation systems with two top distillate streams (TCDS-SR or the direct sequence) provide the best dynamic responses. Table 3 summarizes the optimal options detected from the dynamic analysis for all case studies. The only case in which there was a dominant structure for all control loops was when the feed contained low amounts of the intermediate component and an ESI value of 1, and the Petlyuk column provided the optimal choice in such a case. Table 3. Sequences with best dynamic responses for each control loop.
           Feed with low content of              Feed with high content of
           intermediate component                intermediate component
Mixture    Control of A and C    Control of B    Control of A and C    Control of B
M1         Petlyuk               Petlyuk         TCDS-SS               Indirect
M2         Petlyuk               Indirect        TCDS-SS               Indirect
M3         Petlyuk               Indirect        TCDS-SR               Direct
6. Conclusions
We have conducted a comparison of the energy requirements and the dynamic behavior of five distillation sequences for the separation of ternary mixtures. Three of the sequences considered make use of thermal coupling, and their energy and control properties have been compared to those of the conventional direct and indirect sequences. From energy considerations, the Petlyuk column generally shows the highest savings. The dynamic analysis was based on optimal PI controllers for all sequences, according to the IAE criterion. The results from the dynamic analysis do not show a dominant option, but interesting trends were observed. Two factors seem to affect the optimal choice from dynamic considerations. One is the amount of intermediate component, and the other is the preferred control policy, i.e. which component of the ternary mixture is the most important for operational or marketing purposes. When the control of the lightest or heaviest component is of primary interest, the integrated sequences, interestingly, provide the best options. When the amount of intermediate component is low, the Petlyuk column provided the best dynamic performance; when the amount of intermediate component is high, the integrated sequences with side columns showed the best dynamic results. On the other hand, when the control of the intermediate component is the desired strategy, the energy savings provided by the integrated sequences conflict with their control properties, since the conventional sequences generally offered the best dynamic responses (interestingly, the indirect sequence was the best option most of the time). In summary, although the best operational option is not unique, the results show that there are cases in which integrated sequences not only provide significant energy savings with respect to the conventional sequences, but may also offer some dynamic advantages.
7. Acknowledgements
The authors acknowledge financial support received from Conacyt and from Concyteg, Mexico.
Metastable Control of Cooling Crystallisation

T.T.L. Vu^a, J.A. Hourigan^a, R.W. Sleigh^b, M.H. Ang^c and M.O. Tade^c
^a CAFR, University of Western Sydney, Sydney NSW 1797, Australia
^b Food Science Australia, Sydney NSW 2113, Australia
^c Dept. of Chem. Eng., Curtin University of Technology, Perth WA 6845, Australia
Abstract
The paper studies the metastable control of cooling crystallisation for slow- and fast-growth crystalline compounds which have low solubility at 25°C, high solubility parameters and a detectable metastable zone. Nyvlt's and alternative methods are applied to measure the metastable limits. An optimal control problem is developed in GAMS and solved for the optimal cooling temperature set points. Various cooling and seeding strategies are implemented in a laboratory-scale crystalliser to compare the yields and Crystal Size Distributions (CSD). For both compounds, slow cooling with initial fine seeds is the optimal batch crystallisation strategy, achieving the highest yield and the narrowest range of particle size. The successful laboratory-scale results will lead to further pilot- and industrial-scale investigations.
1. Introduction
Crystallisation is an important purification and separation technique due to its flexible, energy-efficient operating conditions. It can proceed continuously or in batches to produce high-purity products by simply creating supersaturation using either cooling or evaporation. The control of crystallisation processes has been much studied over the last decade in response to the demand for efficient downstream operations and product effectiveness. However, a significant gap between research and industrial implementation still exists, especially in the food industries, because of expensive and complex control strategies and the lack of accurate process measurements and skilled operators. To attract these plants, any proposed improvement in crystal quality control should be reliable, practical and, more importantly, economical enough to be implemented. Vu and Schneider (2002) successfully studied evaporative batch crystallisation based on metastable control. This paper briefly reviews the selection criteria of crystalline substances for a profitable cooling process and the estimation of the metastable zone. A common slow-growth organic compound and a fast-growth inorganic salt are selected based on the mentioned criteria to demonstrate the advantages of cooling control. Using a general population-balance-based mathematical model redefined for a batch cooling crystalliser and growth kinetics found in the literature, the optimal control problem is formulated and solved for the cooling temperature profiles. The keynote is the comparison of different cooling and seeding strategies to select the best one for batch crystallisation. The effect of the initial seed size distribution on the yield is also discussed.
2. Selection of Crystalline Compounds
The selection criteria for a compound used in a cooling process include a low solution concentration W (g compound/100 g water) at 25°C and a high solubility parameter SP. Cooling crystallisation is only profitable if the saturated concentration C* (kmol/m³ solution) satisfies the equation defining the SP (Mersmann, 2001). The temperature T_K is in Kelvin. When the solubility-temperature curve is flat, evaporative crystallisation must be applied.

SP = d(ln C*)/d(ln T_K) > 8   ∴ cooling crystallisation
SP = d(ln C*)/d(ln T_K) < 1   ∴ evaporative crystallisation
An additional selection criterion is a detectable metastable zone width. Every solution has a maximum amount by which it can be supersaturated before becoming unstable. The zone between the solubility curve and the unstable boundary is referred to as the metastable zone. In an initially seeded batch the supersaturation is always maintained within the metastable zone to minimise nucleation, the formation of new unwanted tiny crystals known as fines. These either cause filtration problems or reduce batch yields by blocking or passing through screens. Metastable control can be achieved if the crystalline compound has a detectable metastable zone width, represented by ΔT_met. One of Nyvlt's methods (1985) is used to measure the metastable limit. Saturated solutions of known concentrations containing a few large seeds are cooled down at a steady cooling rate until the first nuclei appear. The difference between the saturation and nucleation temperatures, ΔT_met, represents the metastable zone width at a given concentration. Nyvlt's methods are not applicable to slow-growth compounds at low temperatures. In this case alternative methods, which increase the concentration at a specified temperature instead of reducing the temperature at a given concentration, must be employed. Details of these methods will be discussed in another paper due to space limitations. As the metastable limit and the solubility curves serve as upper and lower constraints, respectively, in a dynamic optimisation problem, they should be estimated beforehand if unavailable in the literature.
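A minimal sketch of how such Nyvlt-type measurements might be post-processed is given below. The measurement values are hypothetical; only the procedure (ΔT_met as the difference between saturation and nucleation temperature) and the log-linear form of the metastable limit, ln W_m = c1 T + c2, used later in Table 1, are taken from the paper.

import numpy as np

# Hypothetical Nyvlt-type measurements: for each saturated solution of known
# concentration W (g/100 g water), the saturation temperature T_sat and the
# temperature T_nuc at which the first nuclei appear during steady cooling.
T_sat = np.array([60.0, 50.0, 40.0, 30.0])   # degC
T_nuc = np.array([51.0, 42.5, 33.5, 24.0])   # degC
W     = np.array([58.0, 41.0, 29.0, 21.0])   # g/100 g water

dT_met = T_sat - T_nuc                       # metastable zone width at each W

# Fit the log-linear metastable limit ln(W_m) = c1*T + c2 (the same form as
# the correlations in Table 1), using the nucleation temperatures.
c1, c2 = np.polyfit(T_nuc, np.log(W), 1)
print("Delta T_met:", dT_met)
print(f"ln W_m = {c1:.4f} T + {c2:.3f}")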
3. Problem Formulation
The process dynamic model of a batch crystalliser is straightforward, fully described by the energy, mass and population balances. However, the dynamics of the crystal size distribution can be ignored if a batch is initially fed with seeds closely sized between two adjacent sieve sizes. General equations and constraints are developed for anhydrous salts. Additional equations are required to describe other transformations, as in the case of hydrates and organic compounds. The subscript f and the superscript * in the following equations denote feed and saturation, respectively. The rates of change are:

mass of water:                                      dx1/dt = …                      (1)
mass of dissolved impurities:                       dx2/dt = …                      (2)
mass of dissolved pure substance:                   dx3/dt = …                      (3)
mass of crystals of pure substance:                 dx4/dt = ρ π N G x5² / 2        (4)
volume-equivalent average diameter of crystals:     dx5/dt = …                      (5)

In equation (4), N is the total number of footing seed crystals and G is the growth rate, generally obtained from equation (6):

G = k_g e^(−E_g/(R T_K)) (S − 1)^g,   S = W / W*   (6)

In equation (6), k_g is the growth rate constant; g relates the growth rate to the supersaturation; E_g is the growth activation energy and R is the ideal gas constant. The main driving force for crystallisation is the concentration of substance in excess of saturation. However, to minimise nucleation, this concentration should not exceed the metastable limit or secondary nucleation threshold (W_m) at a given temperature. These are the main constraints acting on the system: W* ≤ W ≤ W_m. Different objective functions can be set to determine the optimal batch cooling strategy subject to these constraints. In a plant the initial conditions and the criteria to terminate a batch are normally specified. To improve plant performance, the objective function can be defined as the minimum batch operating time. For a crystallisation process design, the target could be the maximum amount of crystals formed or the maximum volume-equivalent average diameter of crystals in a fixed period of time. This will be demonstrated in the following case study.
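As a rough illustration of how equations (4)-(6) and the metastable constraint fit together, the following Python sketch integrates the crystal mass and size states for an assumed linear cooling profile. The activation term (E_g/R = 7735 K) and the solubility correlations follow the (partly reconstructed) Table 1 entries for KH2PO4; the growth prefactor and exponent are illegible in the source and are therefore assumed, as is the simplification that the solution concentration rides along the metastable limit.

import numpy as np
from scipy.integrate import solve_ivp

# Assumed kinetic parameters (prefactor and exponent are hypothetical).
kg, Eg_R, g_exp = 1.0e6, 7735.0, 1.0   # growth constant, Eg/R [K], exponent g
N, rho = 1.0e7, 2338.0                 # number of seed crystals, crystal density [kg/m3]

def W_sat(T):  return np.exp(0.019 * T + 2.73)   # ln W* = c1*T + c2 (Table 1 form)
def W_met(T):  return np.exp(0.015 * T + 2.99)   # metastable limit, same form

def rhs(t, x, T_of_t, W_of_t):
    x4, x5 = x                          # crystal mass [kg], mean diameter [m]
    T = T_of_t(t)                       # cooling temperature set point [degC]
    S = W_of_t(t) / W_sat(T)            # supersaturation ratio, eq. (6)
    G = kg * np.exp(-Eg_R / (T + 273.15)) * max(S - 1.0, 0.0) ** g_exp
    dx4 = rho * np.pi * N * G * x5**2 / 2.0      # eq. (4)
    dx5 = G                                      # eq. (5): seeds grow at rate G
    return [dx4, dx5]

# Linear cooling from 60 to 20 degC over 270 min; the concentration is assumed
# to track the metastable limit (the upper bound of W* <= W <= W_m).
T_prof = lambda t: 60.0 - 40.0 * t / 270.0
W_prof = lambda t: W_met(T_prof(t))
sol = solve_ivp(rhs, (0.0, 270.0), [1e-4, 100e-6], args=(T_prof, W_prof))
print("final crystal mass [kg]:", sol.y[0, -1])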
4. Case Study Description
Not many organic and inorganic compounds possess the three criteria: low solubility at ambient temperature, high SP and significant metastable zone width. Potassium dihydrogen phosphate was selected to represent a fast-growth inorganic salt for comparison with α-lactose monohydrate, a slow-growth organic compound. Their SP profiles and metastable zones are plotted in Figure 1. Growth kinetics and solubility data for KH2PO4 and lactose were found in the literature (Mullin, 1993; Thurlby and Sitnai, 1976; Visser, 1982). Nyvlt's and alternative methods were applied to measure the metastable limits. These are presented in Table 1. Initial seeds were taken from between 75-μm and 150-μm sieves. The seeding weights were calculated based on a theoretical 27-fold increase in mass, or 3-fold increase in average diameter, after the supersaturated solutions were cooled down from 60°C to 20°C.
Figure 1: (a) Solubility parameter profile and (b) metastable zone of KH2PO4 and lactose.

Table 1: Growth kinetics, solubility and metastable limits of KH2PO4 and lactose.

Compound   Solubility                  Metastable limit            Growth rate (μm/min)
KH2PO4     ln W* = 0.019 T + 2.73      ln W_m = 0.015 T + 2.99     G = 9.3×10^(…) e^(−7735/T_K) (S − 1)^(…)
Lactose    ln W* = 0.028 T + 2.389     ln W_m = 0.020 T + 2.99     G = 6.1×10^(…) e^(−11121/T_K) (S − 1)^(…)
5. Results and Discussions
Various cooling temperature and seeding strategies were applied to select the best one for cooling batch crystallisation. The crystals produced were filtered and dried, and the size range was analysed using the Malvern Mastersizer X. The fast-cooling strategy shown in Figure 2 represents the highest possible cooling rate that could be achieved in a laboratory-scale crystalliser using tap water as a coolant. This policy was applied for both compounds with and without initial seeding. The best temperature set-point profiles were obtained using MINOS 5.2 (Vu and Schneider, 2002) to solve the optimal control problem. As these profiles are subject to the secondary nucleation constraint, they must be applied with initial seeding. The slow-cooling profiles are the actual temperature responses recorded inside the crystalliser.
For KH2PO4 the temperature output can follow the set point from 60°C to 47°C; then lagging occurs due to the heat transfer limitation. However, the whole cooling profile lasts only 40 minutes, compared to the 270-minute duration for lactose as plotted in Figure 2. The narrow metastable zone width of KH2PO4 plotted in Figure 1 does not affect the cooling process, since its growth rate is enormously large. In contrast, lactose has a much wider metastable zone but the cooling process is very slow due to its relatively low growth rate, as shown in Table 1.
[Figure 2 panels: cooling temperature set points and responses against time (min), and CSDs of KH2PO4 and lactose against size (micron) for fast-cooling, slow-cooling, seeded and unseeded runs.]
Figure 2: Cooling temperature strategies and CSDs of KH2PO4 and lactose.

Comparing the yields, defined as the weight of crystals produced over the theoretical weight, seeding is significantly better than no seeding. For a slow-growth compound such as lactose, using fine seeds and slow cooling can result in a higher yield than fast cooling. For a fast-growth salt such as KH2PO4 the difference in yields is insignificant, since crystallisation occurs so rapidly that the yields in all cases almost reach 100%. Comparing the crystal size distributions, however, slow cooling produces the narrowest particle size range of KH2PO4, as shown in Figure 2. Lactose cooling crystallisation only possesses this advantage if finer initial seeds, taken from between 45-μm and 75-μm sieves, are used.
6. Conclusions
Controlled cooling within the metastable zone has been studied using KH2PO4 and lactose, representing a fast-growth salt and a slow-growth organic compound. Although these substances possess some similar properties (low solubility at 25°C, high SP and a detectable metastable zone), KH2PO4 has a much narrower metastable zone width than lactose. Using fine seed crystals and slow cooling subject to the secondary nucleation constraint is the best batch strategy for both crystalline compounds regarding yields and CSDs. To achieve higher yields and products with better storage, transportation and free-flowing properties, it is worth estimating the metastable zone in advance and controlling the cooling temperatures subject to these metastable limits through an inexpensive single PI controller. Future work will apply this strategy in pilot- and industrial-scale batch crystallisers using factory mother liquors of salts possessing similar criteria to KH2PO4 and lactose.
7. References
Mersmann, A., 2001, Crystallisation Handbook, Marcel Dekker, New York.
Mullin, J.W., 1993, Crystallisation, Butterworth Heinemann, London.
Nyvlt, J., Sohnel, O., Matuchova, M. and Broul, M., 1985, The Kinetics of Industrial Crystallisation, Elsevier, Amsterdam.
Thurlby, J.A. and Sitnai, O., 1976, Lactose Crystallisation: Investigation of Some Process Alternatives, J. of Food Sci., 41, 38.
Visser, R.A., 1982, Supersaturation of α-Lactose in Aqueous Solutions in Mutarotation Equilibrium, Netherlands Milk and Dairy J., 36, 89.
Vu, T.T.L. and Schneider, P.A., 2002, Improving the Control of an Industrial Sugar Crystalliser: a Dynamic Optimisation Approach, Computer Aided Chemical Engineering 10 (ESCAPE-12), Elsevier, Amsterdam.
8. Acknowledgements
The authors would like to acknowledge AJ Parker CRC for Hydrometallurgy, Food Science Australia, the Centre for Advanced Food Research, UWS and the DRDC for their support of the project.
Regional Knowledge Analysis of Artificial Neural Network Models and a Robust Model Predictive Control Architecture

Chia Huang Yen, Po-Feng Tsai and Shi-Shang Jang
Chemical Engineering Department, National Tsing-Hua University, Hsin Chu, Taiwan
Email: [email protected]
Abstract
Model-based control schemes such as model predictive control depend strongly on the accuracy of the process model. A regional-knowledge index is proposed in this study and applied in the analysis of dynamic artificial neural network models in process control. To tackle the extrapolation problem and assure stability of the control system, we propose to run a neural adaptive controller in parallel with a model predictive controller. A coordinator weights the outputs of these two controllers to make the final control decision. The proposed analysis method and the modified model predictive control architecture have been applied to a neutralization process, and excellent control performance is observed in this highly nonlinear system.
1. Introduction
Since model uncertainty is inevitable, the following two points are essential to guarantee the performance of MPC: (1) identify the uncertainty of the model; (2) increase the robustness of MPC to cope with the model's uncertainty. In the work of Lin and Jang (1998), a systematic approach based on information theory was presented for designing the data set used to train an ANN for the purpose of a complete process model. However, implementation of such designs in industrial circumstances may be very expensive and even impossible. Leonard et al. (1992) proposed a "validity index network"; the idea of using Parzen's estimator to calculate the probability density of the training data is applicable to other kinds of empirical models as well. To accommodate the uncertainty of the model used in MPC, an additional mechanism is necessary, and the neural adaptive controller (NAC) of Krishnapura and Jutan (2000) is worthy of consideration. A neutralization process provided by Palancar et al. (1998) is chosen as the target system.
2. Methodology
2.1. Regional knowledge analysis of artificial neural network models
For a single-input single-output (SISO) dynamic system, any dynamic model can be expressed in the following form:

ŷ_{k+1} = f(y_k, y_{k−1}, …, y_{k−n}, u_k, u_{k−1}, …, u_{k−m})   (1)

where u and y are the input and the measured output, ŷ is the predicted output, k stands for the current time instant, and n and m are the output and input orders. For convenience of statement, we define
φ = (y_k, y_{k−1}, …, y_{k−n}, u_k, u_{k−1}, …, u_{k−m})   (2)

and φ is called an event in the dynamic space; it is the general form of the input pattern to the artificial neural network model. Because y_{k+1} is uniquely determined by the real process, we have the corresponding augmented event as follows:

θ = (y_{k+1}, φ) = (y_{k+1}, y_k, y_{k−1}, …, y_{k−n}, u_k, u_{k−1}, …, u_{k−m})   (3)

Assume the following training data set:

Θ = {θ^i, i = 1, …, N}   (4)

The corresponding set of input patterns is:

Ω = {φ^i = (y_i, y_{i−1}, …, y_{i−n}, u_i, u_{i−1}, …, u_{i−m}), i = 1, …, N}   (5)
The concept of the Parzen-Rosenblatt probability density function (Haykin, 1999) is used and extended as an index to measure the reliability of the model prediction. The Parzen-Rosenblatt density estimate of a new event, φ_new, based on the training data set, Ω, is defined as:

f_Ω(φ_new) = (1 / (N h^{m0})) Σ_{i=1}^{N} K((φ_new − φ^i) / h)   (6)

where the smoothing parameter h is a positive number called the bandwidth, which controls the span size of the kernel function K, and m0 is the dimensionality of the event set Ω. Various kernel functions K are possible; however, both theoretical and practical considerations limit the choice. A well-known and widely used kernel is the multivariate Gaussian distribution:

K(x) = (2π)^{−m0/2} exp(−‖x‖² / 2)   (7)
Once φ_new is close to some φ^i, the corresponding kernel functions give higher values, while those φ^i that are not in the neighborhood give lower values in the summation. The probability density function (6) is denoted as the regional knowledge index.

Figure 1. Architecture of the robust model predictive control (RMPC): the neural adaptive controller (NAC scheme) and the ANN-model-based optimizer (MPC scheme) run in parallel, and a coordinator combines their outputs before they are applied to the process.
2.2. Model predictive control
The proposed robust model predictive control shown in Figure 1 reduces to a standard model predictive control if we set u = u_MPC in the coordinator.

2.3. Neural adaptive control
If we set u = u_NAC in the coordinator, Figure 1 reduces to the neural adaptive controller (Figure 2) of Krishnapura and Jutan (2000).
Figure 2. Neural adaptive controller structure (a controller-augmented network with a hidden node and an output node).
The whole system works by updating all four connecting weights in the network to minimize the deviation E of the process output from its set-point value at the current time instant k:

E_k = ½ (y_{d,k} − y_k)²   (8)

This error signal is generated at the output of the plant, passed backward through the plant to the neural network controller, and minimized with the steepest descent method.

2.4. Coordinator
In the proposed architecture shown in Figure 1, the model predictive controller and the neural adaptive controller run in parallel. A coordinator is designed to make the final decision based on the outputs of the above two controllers. As a preliminary test, the following equation is used to combine the outputs of the MPC and the NAC:

u = ψ u_MPC + (1 − ψ) u_NAC   (9)
where ψ is a model-reliability index that weights the control actions from the model predictive controller and the neural adaptive controller. For simplicity, the following linear form of the decision factor ψ is implemented in this work:

ψ = 0                                for f_Ω(φ_new) ≤ a
ψ = (f_Ω(φ_new) − a) / (b − a)       for a < f_Ω(φ_new) < b   (10)
ψ = 1                                for f_Ω(φ_new) ≥ b

where a and b are constants, so that ψ = 0 for f_Ω(φ_new) < a and ψ = 1 for f_Ω(φ_new) > b, as shown in Figure 3.

Figure 3. ψ vs. f_Ω(φ_new).
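The regional knowledge index and the coordinator can be summarized in a few lines of code. The sketch below implements equations (6), (7), (9) and (10) directly; the bandwidth h, the thresholds a and b, and the example data are arbitrary placeholders, not values from the paper.

import numpy as np

def regional_knowledge_index(phi_new, Phi_train, h):
    """Parzen-Rosenblatt estimate, eq. (6), with the multivariate Gaussian
    kernel of eq. (7); Phi_train is the N x m0 matrix of training events."""
    N, m0 = Phi_train.shape
    d2 = np.sum((Phi_train - phi_new) ** 2, axis=1) / h**2
    K = np.exp(-d2 / 2.0) / (2.0 * np.pi) ** (m0 / 2.0)
    return K.sum() / (N * h**m0)

def psi(f_omega, a, b):
    """Piecewise-linear decision factor of eq. (10)."""
    return float(np.clip((f_omega - a) / (b - a), 0.0, 1.0))

def coordinated_action(u_mpc, u_nac, phi_new, Phi_train, h, a, b):
    """Eq. (9): weight the MPC and NAC outputs by the reliability index."""
    w = psi(regional_knowledge_index(phi_new, Phi_train, h), a, b)
    return w * u_mpc + (1.0 - w) * u_nac

# Hypothetical usage: 500 training events in a 4-dimensional dynamic space.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 4))
u = coordinated_action(1.2, 0.8, np.zeros(4), Phi, h=0.5, a=1e-4, b=1e-2)
print(u)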
3. Example (pH Control)
The simulated pH control system is adopted from Palancar et al. (1998). There are two inlet streams to the continuous stirred tank reactor (CSTR): the acid flow, Q_A, an aqueous solution of acetic acid and propionic acid, and the base flow, Q_B, an aqueous solution of sodium hydroxide. A primary test against a step change of the set point from pH = 7 up to pH = 10 at 700 seconds, and a further test against a sequence of step changes in set point, are performed. The results are depicted in Figures 4, 5 and 6.
Figure 4. RMPC against set point change (solid line: response; dotted line: set point).
Figure 5. Zoom-in of the control actions (MPC control action, NAC control action and the real control action).
Figure 6. RMPC against a sequence of step changes in set point.
4. Conclusion
Artificial neural network models are in general incomplete and inaccurate, which deteriorates the performance of the above control scheme. To solve this problem, regional knowledge analysis is proposed in this study and applied to analyze artificial neural network models in process control. A novel control approach is proposed that combines the neural-model-based MPC technique and the NAC. The regional knowledge index in the coordinator determines the weights by considering the present state of the controlled process. This approach is particularly useful for ANN dynamic models with rough training and low accuracy. Excellent control performance is observed in this highly nonlinear pH system.
5. References
Haykin, S., 1999, Neural Networks: A Comprehensive Foundation, 2nd edition, Prentice Hall International, Inc.
Krishnapura, V.G. and Jutan, A., 2000, Chemical Engineering Science, 55, 3803.
Leonard, J.A., Kramer, M.A. and Ungar, L.H., 1992, Computers & Chemical Engineering, 16, 819.
Lin, J.S. and Jang, S.S., 1998, Ind. Eng. Chem. Res., 37, 3640.
Palancar, M.C., Aragon, J.M., Miguens, J.A. and Torrecilla, J.S., 1998, Ind. Eng. Chem. Res., 37, 2729.
Optimisation of Automotive Catalytic Converter Warm-Up: Tackling by Guidance of Reactor Modelling

J. Ahola^a, J. Kangas^a, T. Maunula^b and J. Tanskanen^a
^a University of Oulu, Department of Process and Environmental Engineering, P.O. Box 4300, FIN-90014 University of Oulu, Finland
^b Kemira Metalkat Oy, P.O. Box 171, FIN-90101 Oulu, Finland
Abstract
In this paper, the ability of the developed reactor model to predict a converter's performance is evaluated against experimental data. The data are obtained from full-scale tests and from European driving cycle vehicle tests. The focus is on the warm-up period of catalytic exhaust gas converters and especially on the prediction of catalyst light-off.
1. Introduction
Understanding the dynamic behaviour of exhaust gas catalytic converters is becoming more important as environmental regulations tighten. A large portion of the total emissions is formed during the first few minutes of a drive, when the catalytic converter is considerably cold and the reactions are relatively slow and kinetically controlled. Thus, it is essential to design the converter in such a way that the warm-up occurs optimally, leading to high performance in the purification of unwanted emissions, such as oxides of nitrogen, hydrocarbons and carbon monoxide. A mathematical model that describes the different physico-chemical phenomena during the warm-up period accurately enough would be irreplaceable for the design process of a catalytic converter. The importance of catalytic converter modeling is well recognised in the current literature (e.g. Koltsakis, Konstantinidis & Stamatelos, 1997; Koltsakis & Stamatelos, 1999; Brandt, 2000; Lacin & Zhuang, 2000; Mukadi & Hayes, 2002). In this paper, the ability of a previously built warm-up model to describe these phenomena is investigated by comparing its predictions to experimental work. Furthermore, the influences of different design constraints are inspected.
2. Model
The purpose of the developed model is to describe the 3-way catalyst warm-up behaviour by predicting time-dependent NOx, THC and CO conversions and temperature profiles. The model has been presented in an earlier paper (Kangas et al., 2002). The converter is assumed to be adiabatic with uniform radial temperature and flow rate distributions. Thus, the profiles of the whole converter are obtained by modeling one channel of the monolith. The channel model equations consist of gas phase mass and energy balances, a solid phase energy balance and a heat transfer model between these two phases. The laminar flow in the small channels is approximated using a plug flow model with axial dispersion. Accumulation of heat and mass in the gas phase has been taken into account. The solid phase model includes the accumulation of heat and the axial heat conductivity. The chemical reactions take place only on the surface of the solid phase. The reactor model has been implemented in the MATLAB® programming language. The system of partial differential equations is converted to ordinary differential equations by applying the numerical method of lines (Schiesser, 1991). The resulting ODEs are solved by MATLAB®'s ode15s function, which is a quasi-constant step size implementation of the NDFs in terms of backward differences (Shampine & Reichelt, 1997).
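For illustration, the following Python sketch applies the numerical method of lines to a strongly simplified single-channel analogue (one species, first-order surface sink, no energy balance) and integrates the resulting ODEs with a stiff BDF solver, SciPy's counterpart of MATLAB's ode15s. All parameter values are arbitrary; this is not the paper's converter model.

import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines discretization of dc/dt = -v dc/dz - k c on nz axial nodes
# with first-order upwind differences (v > 0).
nz, L = 50, 0.12          # axial nodes, channel length [m]
v, k = 2.0, 30.0          # gas velocity [m/s], lumped rate constant [1/s]
dz = L / nz
c_in = 1.0                # inlet concentration (normalized)

def rhs(t, c):
    dcdt = np.empty_like(c)
    dcdt[0] = -v * (c[0] - c_in) / dz - k * c[0]        # inlet node
    dcdt[1:] = -v * (c[1:] - c[:-1]) / dz - k * c[1:]   # upwind interior
    return dcdt

# "BDF" is SciPy's stiff multistep solver; MATLAB's ode15s uses the closely
# related NDF family of backward-difference formulas.
sol = solve_ivp(rhs, (0.0, 0.5), np.zeros(nz), method="BDF")
print("outlet conversion:", 1.0 - sol.y[-1, -1] / c_in)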
3. Experimental
Different types of experimental data have been exploited during the model construction and its performance evaluation. In particular, bench-scale converter experiments using synthetic exhaust gas streams, full-scale experiments where the converter is mounted in the exhaust gas stream of an engine, and European driving cycle (NEDC) vehicle tests have been carried out. An underfloor converter, which is quite far from the engine, has been employed as a demonstrative catalytic aftertreatment system in the vehicle test. Thus, the inlet gas temperature is relatively low, leading to difficulties in passing the tightest emission limits, and modifications of the whole aftertreatment system might be necessary. The effects of structural changes on the performance of the aftertreatment system have been evaluated by testing ten structurally different converters. The most significant characteristics of the converters are shown in Table 1. The warm-up of a converter can be sped up by structural changes such as increasing the proportion of the active component or decreasing the mass to be heated. Thermal mass can simply be decreased by using shorter catalyst monoliths or lower cell densities, but to obtain the same performance other structural changes are needed to replace the loss of geometrical surface area. As another consequence, shorter residence times of the components in the reactor would reduce the attainable conversions, which has to be compensated by increasing the loading of the active component in the substrate. However, the increase in loading is always limited due to the growing costs. The mass of the active components Pd and Rh (7:1), the thickness of the washcoat and the diameters of the converters were kept constant in the ten tested prototypes. These selections give rise to the following features: the prices of the converters are approximately the same; pore diffusion does not vary between the converters; and the inlet gas flow distribution is constant at the converter inlet. In this study fast warm-up has been selected as the most important design criterion. The converters are compared by the time needed to achieve 50 percent conversion in the NEDC test. The criterion is fulfilled only if the conversion continues to rise after it reaches 50 percent; thus, the criterion excludes cases where the 50 percent conversion is only temporarily exceeded. Light-off for catalyst #2 is complicated: in the beginning the conversion oscillates between zero and over 80 percent for several seconds until it stabilises. The results are shown in Table 2.
Table 1. The most significant characteristics of the converters used in the evaluation.

Catalyst                        1      2      3      4      5      6      7      8      9      10
Catalyst volume [dm³]           0.85   0.85   0.85   0.85   0.85   0.85   0.85   0.85   0.5    0.3
Monolith length [mm]            120    120    120    120    120    120    120    120    58     28
Thickness of metal foil [μm]    50     50     30     50     80     30     50     30     50     50
Cell density [cpsi]             260    419    735    705    664    1125   1073   1687   705    705
GSA [m²/dm³]                    2.93   3.59   4.39   4.2    3.95   5.0    4.76   5.0    4.2    4.2
PGM on washcoat mass [%]        0.93   0.85   0.63   0.65   0.69   0.55   0.58   0.47   1.4    2.8
Active component mass [g]       1.35   1.35   1.35   1.35   1.35   1.35   1.35   1.35   1.6    1.7
Washcoat mass [g]               146    159    216    207    195    245    234    285    122    61
Metal foil mass [g]             511    566    493    780    1159   579    908    713    459    230
Inner shell + pins mass [g]     458    458    458    458    458    458    458    458    222    105
Total mass [g]                  1116   1184   1167   1445   1812   1282   1600   1456   802    396
Table 2. The time to achieve 50 percent conversion in the NEDC vehicle test.

Converter #    Time to 50% conversion [s]    Rating
               CO          HC
1              94          90                4
2              93-99       91-95             5
3              80          88                2
4              95          95                7
5              95          96                9
6              89          91                5
7              95          96                9
8              95          95                7
9              80          90                3
10             73          76                1
Prototype converters can also be evaluated based on moderate-cost engine bench tests. A commonly used rating criterion is the light-off temperature, i.e. the temperature at which a certain conversion, typically 50 percent, is achieved. These kinds of engine bench tests have been done for four converters. The light-off temperatures of the selected converters are shown in Table 3.
Table 3. Light-off temperatures of CO, HC and NOx for converters in the engine test.

Converter #    T50,CO [°C]    T50,HC [°C]    T50,NOx [°C]
2              304            313            322
4              300            305            313
6              300            305            316
9              313            320            344
The light-off temperature in the engine bench test does not correlate with the time to light-off in the NEDC tests, which is clearly seen by comparing Tables 2 and 3. Thus, the light-off temperature does not give any direct evidence of fast warm-up. However, the engine bench tests can be exploited in the construction of the converter model. Note that these experiments were carried out with a slow input temperature rise, under which near steady-state conditions prevail.
4. Results
The ability of the model to predict the converter's performance has been evaluated against the experimental data obtained from the full-scale tests, where the converter was mounted in the exhaust stream of an engine, and from the European driving cycle vehicle tests. In this work the values of the reaction kinetic parameters have been estimated using a part of the engine bench data. The obtained model gives good agreement with the measurements and with the unused bench test data. The model has been used to predict the behaviour of the converter during the first few minutes of the NEDC vehicle test. Fast variations in input concentrations are difficult to handle numerically. Especially in so-called fuel-cut conditions, where an extremely lean or rich exhaust gas is suddenly fed into the converter, the solving time of the model increases remarkably and the solution for a near-complete conversion becomes unstable. Simplified inlet stream dynamics have also been studied by simulation. Firstly, a stepwise change from ambient temperature (298 K) to 650 K is assumed while the other inlet stream variables are kept constant. Secondly, a stepwise change to 600 K is made 60 s after a stepwise change from 298 K to 440 K in the inlet gas temperature. This stream temperature dynamics mimics the main features of the temperature dynamics of the NEDC vehicle test. Finally, the inlet stream dynamics of the NEDC test is modified in such a way that the concentrations are kept constant and approximately the same as in the engine bench tests, while the inlet stream temperature and flow rate vary as in the NEDC vehicle test. In Table 4 the rating of the converters based on these simulations is shown. The simulations with simplified inlet dynamics give valuable guidance for the design of the catalytic converter structure. Practically the same warm-up is predicted as that obtained from the NEDC vehicle test. The differences between the times to light-off predicted by the modified NEDC and by the double stepwise inlet stream dynamics and the measured ones are relatively small. The oxygen storage components of the catalysts reduce the effect of concentration variations, which might be the reason why the simulations assuming constant inlet concentrations give good results.
Table 4. Simulated time at which the converter achieves 50 percent conversion with one and two stepwise temperature changes as well as with modified NEDC input dynamics.

Converter #    Stepwise                    Double stepwise             Modified NEDC
               t50,CO [s]   t50,HC [s]     t50,CO [s]   t50,HC [s]     t50,CO [s]   t50,HC [s]
1              21.3         16.8           85           89             93           93
2              17.2         22.0           86           90             93           93
3              20.3         15.5           84           88             89           93
4              20.4         26.6           91           96             95           93
5              27.0         35.2           100          107            106          97
6              22.4         17.0           86           90             93           93
7              22.2         29.4           94           99             99           96
8              19.6         26.2           90           95             96           93
9              15.4         12.0           82           79             84           92
10             9.2          7.3            72           74             74           78
The thermal mass has the most significant influence on the warm-up. Thus, the two heaviest converters (#5 and #7) warm up most slowly, whereas the shortest and lightest converter, #10, has the fastest warm-up. The structures of the next lightest converters, #9 and #3, differ from each other: converter #9 is shorter, but converter #3 was made of thinner metal foil. Both of them have approximately the same overall warm-up behaviour, but the results of converter #3 in particular indicate a disadvantage of a fast thermal response: the converter not only heats up fast but also cools down fast. The impact of this can be seen in the late HC light-off taking place according to the modified NEDC simulation and in the oscillation of the conversion during the NEDC vehicle test. The same effect can be seen when the results of converters #4 and #8 are compared. In the stepwise simulations the lighter converter #8 is better, but in the modified NEDC simulation and in the NEDC vehicle test the converters behave equally well. Once again, the reason for losing the better functioning of converter #8 seems to follow from the larger heat transfer area, which gives a fast response to the inlet temperature variations. In the demonstrative aftertreatment system the inlet gas temperature is in the catalytic light-off region, i.e. the reaction rate is very sensitive to temperature during the warm-up of the catalytic converter. The boosting of exothermic reactions is needed to move onto the higher operation temperatures. Thus, the converters are sensitive to temperature variations and heat transfer rates. In some other applications the inlet gas temperature might be higher, and a temporarily decreasing temperature has a slighter effect on the catalyst light-off. If it is possible to mount the converter closer to the engine, a higher inlet gas temperature naturally follows. Thus, the converter structure should be optimised separately in each application and, if possible, together with the careful selection of the converter position.
5. Conclusions
In this work reactor modeling and experimentation have been combined to study the warm-up period of exhaust gas catalytic converters. The influence of structural modifications may be effectively studied numerically, provided that an accurate model of the catalytic converter is available. Such a model can guide the converter design, provided that the description of the chemical reaction kinetics is tuned for the catalyst at hand. Clearly the thermal mass has the most significant influence on the catalytic converter warm-up. The heat transfer area between the gas and solid phases also has an effect on the warm-up, which is most crucial when the inlet gas temperature is in the catalyst light-off region. The real gas stream has several input variables that change simultaneously in a complicated way, which leads to numerical problems. Thus, the presented simplified alternatives are attractive in the preliminary rating of catalytic converters. In the studied converters a good prediction of warm-up has been obtained even when the concentration variations at the converter inlet have been ignored.
6. References
Brandt, E.P., Wang, Y. & Grizzle, J.W. (2000) Dynamic modelling of a three-way catalyst for SI engine exhaust emission control. IEEE Transactions on Control Systems Technology, 8, 767.
Kangas, J., Ahola, J., Maunula, T., Korpijarvi, J. & Tanskanen, J. (2002) Automotive exhaust gas converter model for warm-up conditions. 17th International Symposium on Chemical Reaction Engineering, Hong Kong, China.
Koltsakis, G., Konstantinidis, P. & Stamatelos, A. (1997) Development and application range of mathematical models for 3-way catalytic converters. Applied Catalysis B: Environmental, 12, 161-191.
Koltsakis, G.C. & Stamatelos, A.M. (1999) Modeling dynamic phenomena in 3-way catalytic converters. Chemical Engineering Science, 54, 4567-4578.
Lacin, F. & Zhuang, M. (2000) Modeling and Simulation of Transient Thermal and Conversion Characteristics for Catalytic Converters. SAE Technical Paper Series 2000-01-0209.
Mukadi, L.S. & Hayes, R.E. (2002) Modelling the three-way catalytic converter with mechanistic kinetics using the Newton-Krylov method on a parallel computer. Computers and Chemical Engineering, 26, 439-455.
Schiesser, W.E. (1991) The Numerical Method of Lines. Academic Press, London.
Shampine, L.F. & Reichelt, M.W. (1997) The MATLAB ODE suite. SIAM Journal on Scientific Computing, 18, 1-22.
Gas-Liquid and Liquid-Liquid System Modeling Using Population Balances for Local Mass Transfer

Ville Alopaeus^a, Kari I. Keskinen^a,b, Jukka Koskinen^a and Joakim Majander^c
^a Neste Engineering Oy, POB 310, FIN-06101 Porvoo, Finland
^b Helsinki University of Technology, Laboratory of Chemical Engineering and Plant Design, POB 6100, FIN-02015 HUT, Finland
^c Enprima Engineering Oy, POB 61, FIN-01601 Vantaa, Finland
Abstract
Gas-liquid and liquid-liquid stirred tank reactors are frequently used in the chemical process industries. The design and operation of such reactors are very often based on empirical design equations and heuristic rules derived from measurements over the whole vessel. This makes it difficult to use a more profound understanding of the process details, especially local phenomena, and the scale-up task often fails when the local phenomena are not taken into account. By using CFD, the fluid flows can be examined more closely. Rigorous submodels can be implemented into commercial CFD codes to calculate local two-phase properties. These models are: population balance equations for the bubble/droplet size distribution, mass transfer calculation, chemical kinetics and thermodynamics. Simulation of a two-phase stirred tank reactor proved to be a reasonable task. The results revealed details of the reactor operation that cannot be observed directly. It is clear that this methodology is applicable to other multiphase process equipment besides reactors.
1. Introduction
The computational fluid dynamics (CFD) approach has become a standard tool for analyzing various situations where fluid flow has an effect on the studied processes. Numerous studies using CFD in the chemical process industry have also been reported. Mostly, they have been simple cases where the system is non-reacting, contains only one phase (liquid or gas), or physical properties are assumed constant. When dealing with multiphase systems, such as gas-liquid or liquid-liquid systems, we must take into account phenomena that are not of importance for one-phase systems. The vapor-liquid or liquid-liquid equilibrium is one of those needed to model the system. In addition, mass and heat transfer between the phases must generally be taken into account. Also, the two-phase characteristics of the fluid flow need to be taken into consideration in the CFD models. Because CFD originates outside the fields of chemical engineering and reaction engineering, CFD program packages as such are normally not particularly well suited for modeling complex chemical reactions or rigorous thermodynamics. Fortunately, the CFD program vendors have noticed this, as they have provided the possibility to include user code with their flow solvers. In some cases, however, this is not quite a straightforward task.
2. Population Balances
In the population balances, the local bubble size distribution is modeled. In practice, this means that the numbers of bubbles of various sizes are counted. The bubble size distribution is discretized into a number of size categories, and the number of bubbles belonging to each size category is counted in each CFD volume element. The dispersed phase is here referred to as bubbles, but it may consist of liquid droplets or solid precipitates as well. The source terms for the bubble numbers are due to breakage and coalescence of bubbles, and mass-transfer-induced size change. Other sources (such as formation of small bubbles through nucleation mechanisms) were neglected in this study. The discretized population balance equation can then be written in the following form:

dY_i/dt = Σ_{j=i+1}^{n} g(a_j) β(a_i, a_j) Y_j − g(a_i) Y_i + ½ Σ_{j=1}^{i−1} F(a_j, a_{i−j}) Y_j Y_{i−j} − Y_i Σ_{j=1}^{n} F(a_i, a_j) Y_j   (1)
Various models for bubble breakage and coalescence rates are presented in the literature. These rates usually depend on physical properties, such as densities, viscosities and surface tension, and on turbulence properties, most commonly the turbulent kinetic energy dissipation rate. To calculate local bubble size distributions, local physical properties and turbulence levels should also be used. This can be done via CFD (Alopaeus et al. 1999, 2002; Keskinen and Majander 2000). For the breakage frequency, the following function was used:

g(a_i) = C₁ ε^{1/3} erfc( √( C₂ σ / (ρ_c ε^{2/3} a_i^{5/3}) + C₃ μ_d / (√(ρ_c ρ_d) ε^{1/3} a_i^{4/3}) ) )   (2)
For the daughter number distribution, β, the beta function is used:

β(a_i, a_j) = 90 (a_i² / a_j³) (a_i³ / a_j³)² (1 − a_i³ / a_j³)²   (3)
For the bubble coalescence rate, the following function is used:

F(a_i, a_j) = C₄ ε^{1/3} (a_i + a_j)² (a_i^{2/3} + a_j^{2/3})^{1/2} λ(a_i, a_j)   (4)
In the population balance equations, the number density of the bubbles is counted. This approach has been used in the simulation of two-phase processes in flowsheet simulators and in the testing of the population balance models. In the CFD code, however, the bubbles are divided into size categories according to mass fractions. Thus an additional interface code is needed between the population balance user subroutines used in a flowsheet simulator and those used in CFD. To calculate local mass transfer rates, the local mass transfer area is obtained from the bubble size distributions. Mass transfer fluxes are calculated in a separate subroutine, and the mass transfer rate is obtained by multiplying the mass transfer area by the mass transfer fluxes.
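A compact sketch of the breakage and coalescence source terms of a discretized population balance is shown below. It assumes a uniform volume grid, so that coalescence of classes j and k deposits into class j+k+1 (0-based); the implementation described in this paper uses mass-fraction categories inside the CFD code, so this is only a schematic of the number-density form.

import numpy as np

def pbe_sources(Y, g, beta, F):
    """Breakage/coalescence source terms of a discretized population balance
    on a uniform volume grid. Y[i] is the number concentration of class i;
    g, beta and F are the breakage frequency (vector), daughter distribution
    (matrix) and symmetric coalescence rate (matrix) on the grid."""
    n = len(Y)
    dY = np.zeros(n)
    for i in range(n):
        dY[i] -= g[i] * Y[i]                               # breakage death
        dY[i] += np.dot(beta[i, i+1:], g[i+1:] * Y[i+1:])  # breakage birth
        dY[i] -= Y[i] * np.dot(F[i, :], Y)                 # coalescence death
        for j in range(i):                                 # coalescence birth:
            k = i - j - 1                                  # classes j and k with
            dY[i] += 0.5 * F[j, k] * Y[j] * Y[k]           # v_j + v_k = v_i
    return dY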
3. Example 1: Gas-Liquid Stirred Tank Reactor
In the first example, a stirred tank reactor with gaseous reactant feed was modeled. In the following figure, a detail of a stirred tank reactor is shown. The reactor is modeled with CFD, and user subroutines are implemented for population balance calculation, local mass transfer flux calculation, chemical kinetics and thermodynamics. All these were solved simultaneously with a CFD code.
Figure 1. Gas volume fraction distribution (left) and Sauter mean bubble diameter (right) in a gas-liquid stirred tank reactor.
4. Example 2: Liquid-Liquid Stirred Tank for Drop Size Measurements
In the second example, a stirred tank with two liquid phases was modeled. The tank was used to measure liquid drop size distributions for fitting of the population balance parameters. The tank was modeled with CFD in order to examine the flow patterns and turbulence levels in the tank.
548
I
95.0 94.4
" '"^^^^'^S ^^^'I'^CT^S^^
93.8 93.2 92.6 92.0 91.4 90.8 90.2
i
89.6 89.0
Figure 2. Sauter mean drop diameters for a liquid-liquid system. Measuring probe is located at the front.
5. Conclusions
CFD has become a standard tool for analyzing flow patterns in various situations related to chemical engineering. In many cases related to multiphase reactors, mass transfer limits the overall chemical reaction. In these cases the accurate calculation of local mass transfer rates is of utmost importance. This is best done with the population balance approach, where local properties are used to model bubble or droplet breakage and coalescence phenomena. It has been proven that these rigorous models, along with other multiphase and chemistry related models, can be implemented in a CFD code and solved simultaneously with the fluid flows.
6. Symbols
Δa           width of droplet class (m)
a            drop diameter (m)
a32          Sauter mean diameter, a32 = Σ a_i³ / Σ a_i² (m)
C₁…C₄        empirical constants
F(a_i, a_j)  binary coalescence rate between droplets a_i and a_j in unit volume (m³ s⁻¹)
g(a)         breakage frequency of drop size a (s⁻¹)
Y_i          number concentration of drop class i (m⁻³)
ε            turbulent energy dissipation (per unit mass) (m² s⁻³)
λ(a_i, a_j)  collision efficiency between bubbles a_i and a_j
μ            viscosity (Pa s)
ρ            density (kg m⁻³)
σ            interfacial tension (N m⁻¹)
7. References
Alopaeus, V., Koskinen, J., Keskinen, K.I. and Majander, J., Simulation of the Population Balances for Liquid-Liquid Systems in a Nonideal Stirred Tank, Part 2: Parameter Fitting and the Use of the Multiblock Model for Dense Dispersions, Chem. Eng. Sci. 57 (2002), pp. 1815-1825.
Alopaeus, V., Koskinen, J. and Keskinen, K.I., Simulation of the Population Balances for Liquid-Liquid Systems in a Nonideal Stirred Tank, Part 1: Description and Qualitative Validation of the Model, Chem. Eng. Sci. 54 (1999), pp. 5887-5899.
Keskinen, K.I. and Majander, J., Combining Complex Chemical Reaction Kinetics Model and Thermophysical Properties into Commercial CFD Codes, presentation at the AIChE 2000 Annual Meeting, Los Angeles, CA.
Robust Optimization of a Reactive Semibatch Distillation Process under Uncertainty

H. Arellano-Garcia, W. Martini, M. Wendt, P. Li, G. Wozny
Institute for Process and Plant Technology, Technische Universitat Berlin, D-10623 Berlin, Germany
Abstract
Deterministic optimization has been the common approach for batch distillation operation in previous studies. Since uncertainties exist, the results obtained by deterministic approaches may carry a high risk of constraint violations. In this work, we propose to use a stochastic optimization approach under chance constraints to address this problem. A new scheme for computing the probabilities and their gradients, applicable to large-scale nonlinear dynamic processes, has been developed and applied to a semibatch reactive distillation process. The kinetic parameters and the tray efficiency are considered to be uncertain. The product purity specifications are to be ensured with chance constraints. A comparison of the stochastic results with the deterministic results is presented to indicate the robustness of the stochastic optimization.
1. Introduction
In the chemical industry, operation policies of batch processes are mostly determined by heuristic rules. In previous studies, deterministic optimization approaches have been used (Low et al., 2002; Li et al., 1998; Arellano-Garcia et al., 2002), using a model with constant parameters. Since the operation policy developed is highly sensitive to the model parameters and boundary conditions, product specifications may often be violated when implementing it in the real plant. For a reactive batch distillation, the chemical reaction kinetic parameters in the Arrhenius equation are usually considered as uncertain parameters, since they are often determined from a limited number of experimental data. The tray efficiency is another uncertain parameter which is important for the batch operation. Furthermore, the amount and composition of the initial charge are also uncertain, since they are mostly product outputs of a previous batch. To achieve a robust and reliable operation policy, a stochastic optimization approach has to be considered. This work is focused on developing robust optimal operation policies for a reactive semibatch distillation process, taking into account the uncertainties of the model parameters. Under the uncertainties, the product specifications are to be satisfied with a predefined confidence level. This leads to a chance constrained dynamic nonlinear optimization problem. Wendt et al. (2002) proposed an approach to nonlinear chance constrained problems, where a monotone relation of one uncertain input to the constrained output is exploited. We extend this approach to solve dynamic nonlinear problems under uncertainty and apply it to the batch distillation optimization problem.
2. Problem Description
We consider an industrial reactive semibatch distillation process. A trans-esterification of two esters and two alcohols takes place in the reboiler. A limited amount of educt alcohol is fed to the reboiler to increase the reaction rate. The product alcohol is distilled from the reboiler to shift the reaction in the product direction. In the main cut period the product alcohol is accumulated with a given purity specification. In the off-cut period, the reaction proceeds to the end of the batch and results in a mixture of the product ester and the educt alcohol in the reboiler. The composition of the educt ester is required to be smaller than a specified value, so that a difficult separation step can be avoided. The aim of the optimization is to minimize the batch time. The independent variables of the problem are the feed flow rate F of the educt alcohol and the reflux ratio R_v. The deterministic and stochastic nonlinear dynamic optimization problems can be formulated as follows:

(P1)  min t_f(F(t), R_v(t), t_m, t_f)
      s.t. the model equation system and
      x̄_D,1 ≥ x̄_D,1^min
      D_1 ≥ D_1^min
      x_A,NST(t_f) ≤ x̄_A,NST
      ∫_0^{t_f} F(t) dt = M_1

(P2)  min t_f(F(t), R_v(t), t_m, t_f)
      s.t. the model equation system and
      Pr{x̄_D,1 ≥ x̄_D,1^min} ≥ α_1
      D_1 ≥ D_1^min
      Pr{x_A,NST(t_f) ≤ x̄_A,NST} ≥ α_2
      ∫_0^{t_f} F(t) dt = M_1

with x̄_D,1 and x_A,NST as the average distillate composition and the bottom purity. To handle the fraction switching time, t_m, and the total batch time, t_f, the lengths of the different time intervals are also regarded as independent variables. D_1 and D_1^min are the total amount of the distillate product and its predefined lower bound; α_1 and α_2 are user-defined probability levels for holding the two specifications. (P1) requires the exact knowledge of all inputs and model parameters. In most prevailing deterministic optimization approaches, the expected (nominal) values of these uncertain quantities are usually employed. This leads to the possibility that some specifications may be violated when applying the a priori optimization results. To restrict the risk of such violations under the uncertainties, the constraints can be ensured with a user-defined probability. Thus, the solution of (P2) leads to a robust optimal operation policy.
3. Deterministic Optimization Results
To implement the deterministic optimization approach, the sequential strategy proposed by Li et al. (1998) is used, where the whole algorithm is divided into an optimization layer, with SQP as a standard NLP solver, and a simulation layer, where all dependent variables are computed through an integration step. The model is a large-scale DAE system which is discretized with collocation on finite elements. The whole batch time is discretized into 30 time intervals. The control variables are set as piecewise constants. The computed trajectories of the control variables for the optimal operation are illustrated in Fig. 1.
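A skeleton of this sequential (two-layer) strategy is sketched below with a deliberately trivial dummy model: the SQP solver (SciPy's SLSQP, a standard SQP implementation) proposes piecewise-constant controls, and the simulation layer integrates the model interval by interval to evaluate the objective and constraint values. The model, bounds and specification value are placeholders, not the process of this paper.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

n_int, t_f = 6, 5.0
t_grid = np.linspace(0.0, t_f, n_int + 1)

def simulate(u):                      # simulation layer (dummy 2-state model)
    x = np.array([1.0, 0.0])
    for k in range(n_int):
        rhs = lambda t, x, uk=u[k]: [-uk * x[0], uk * x[0]]
        x = solve_ivp(rhs, (t_grid[k], t_grid[k + 1]), x).y[:, -1]
    return x

def objective(u):  return -simulate(u)[1]        # maximize accumulated product
def purity(u):     return simulate(u)[1] - 0.6   # inequality spec (>= 0)

res = minimize(objective, 0.5 * np.ones(n_int), method="SLSQP",
               bounds=[(0.0, 1.0)] * n_int,
               constraints=[{"type": "ineq", "fun": purity}])
print(res.x)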
Fig. 1. Optimized policy by the deterministic optimization approach.

The optimal results indicate both the thermal separation and chemical reaction effects. The more product alcohol there is in the entire column, the lower the reflux ratio needed to satisfy the purity restrictions. The slow increase of the reflux ratio during the first three hours is allowed, since a large amount of product alcohol results from the drastic increase of the feed flow of the educt alcohol. However, when the feed flow has reached its maximum value, the reflux ratio needs to increase drastically in order to ensure the distillate purity constraint. The decrease of the reflux ratio can be explained by the time delay between the feed supply of educt alcohol and the resulting formation of product alcohol caused by the chemical reaction. We assume the two kinetic parameters and the tray efficiency have a correlated multivariate normal distribution. Fig. 2 shows their impact on the two constrained outputs under the optimized policy, obtained through Monte Carlo simulation. It can be seen that the risk of violating the purity specification is near 50%.
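Such a Monte Carlo check can be expressed compactly as below. The mean vector, covariance matrix and the run_batch response are illustrative placeholders, since the paper does not report the numerical values of the uncertainty description; only the structure (correlated multivariate normal samples, violation counting) follows the text.

import numpy as np

mean = np.array([42000.0, 15000.0, 0.60])      # k1, k2, tray efficiency (assumed)
cov = np.array([[2.0e6,  5.0e5,  10.0],
                [5.0e5,  4.0e5,   5.0],
                [10.0,   5.0,    4.0e-3]])     # illustrative covariance

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mean, cov, size=1000)

def run_batch(k1, k2, eff):                    # placeholder model response
    return 0.995 + 1e-7 * (k1 - mean[0]) + 0.02 * (eff - mean[2])

purity = np.array([run_batch(*s) for s in samples])
print("violation risk:", np.mean(purity < 0.995))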
Fig. 2. Constrained outputs by optimal policy using deterministic approach.
4. Impacts of the Uncertain Inputs on the Constrained Outputs
To find a proper monotone relation, the impacts of the uncertain parameters on the constrained output variables are analyzed. Using the control trajectories from the deterministic optimization, the constrained outputs are computed by simulation for values around the expected values of the uncertain parameters. The simulations show that the two frequency factors have a strong impact on the output constraints in all regions; they have monotone relations with both restricted output variables.
Fig. 3. Relation between tray efficiency and the constraint output variables. The tray efficiency causes a strongly negative effect on the purity constraints in lower regions while causing a slightly positive effect in upper regions. The relation to both constrained outputs is strongly monotone, as shown in Fig. 3. Therefore, we have an uncertain variable which has a monotone relation to both output constraints, thus the principals of the approach proposed by Wendt et al. (2002) can be extended for solving the dynamic stochastic optimization problem.
5. Dynamic Chance Constrained Optimization The basic idea is the derivation of an equivalent representation of the probability by mapping the output feasible region to a region of the uncertain inputs. For dynamic processes, a more efficient dynamic solver is required to solve dynamic problems with a constraint variable y^^(tf) for a fixed time point tj- and the uncertain parameters occurring throughout the entire operation time with different control parameters u in different time intervals. The procedure of the dynamic solver can be divided into two steps: 1) determination of the reverse projection of the feasible region by the bisectional method and 2) computation of the gradients. The method is based on formulation of the total differential of the model equations g (jc, M, ^) •
dx du
d^^ du
du
Therefore a large-scale system of equations will be generated as follows:
(1)
555
(2)
c.
A /, An
J.
where 7, denotes the Jacobian (^/cbc|) at time interval / and m is the number of time intervals. Q is the gradient (^/a^^)' ^' ^^ i^/dr 1 ^^^ ^' ^^^nifies (3f/9M|.). The Jacobian at the last time interval Jm is adjusted by replacing the constrained variable with ^5. Thus, the desired gradient p S / 1 is included in the last line of the matrix, which denotes the gradients ^x/ \. The unknowns in this equation system, the values for xu, will be computed using Gauss elimination. This method for computing the probabilities and their gradients is one important part of solving dynamic problems.
6. Stochastic Optimization Results The approach is used to solve the batch distillation optimization problem (P2). Fig. 4 shows the resultant optimal policy. Compared to the results of the deterministic approach, the reflux ratio is slightly higher, which is necessary for lowering the risk of violating the purity constraints. This inevitably means that the total batch time has to be a little longer (5.6h) than that of the deterministic approach (5.28h). The deterioration of the objective value is obviously the price for a higher robustness. That the desired robustness is achieved can be easily seen in Fig. 5, the distribution of the constrained output variables are illustrated according to the operation policy obtained by the stochastic approach. Setting the probability level for both chance constraints to the value of 0.9, by the robust optimization policy, the risk of violation is reduced to less than 10%. s
J
K 3
Sjif 0,0
0,5
1,0
1,5
2,0
2,5
3,0
3,5
4,0
4,5
5,0
5,5
Time[h]
Fig. 4. Optimal policy by the stochastic optimization approach.
556
"^5500
40500
43000
45500
Frequency Factor 1
se
gS
8i o S
, •
.•
,
t
^
.
••« *
• . . • . . » , . .^... .*
Q:S
g 15000
16000
Frequency Factor 2
.^
• 15000
16000
Frequency Factor 2
F/^. 5. Constrained outputs by optimal policy using stochastic approach.
7. Conclusions and Acknowledgement A stochastic dynamic optimization approach has been successfully implemented for a reactive semibatch distillation process. The aim is batch time minimization subject to product purity restrictions. A method for computing the probabilities and their gradients is developed to solve the dynamic stochastic optimization problem. The results obtained by the implementation with a higher probability level show that the consideration of uncertainties with chance constraints leads to a trade-off between the objective value and robustness. A comparison of the stochastic results with the deterministic results is made with respect to the objective values and the reliability of satisfying the purity constraints. We thank the Deutsche Forschungsgemeinschaft (DFG) for the financial support under the contract WO 565/12-1.
8. Literature Cited Arellano-Garcia H., Martini, W., Wendt, M., Li, P., Wozny, G. 2002, Improving the Efficiency of Batch Distillation by a New Operation Mode, in: J. Grievink and J. V. Schijndel, Proc. ESCAPE12, Elsevier, 619-624. Li, P., Arellano-Garcia., H., Wozny, G., Renter, E. 1998, Optimization of a Semibatch Distillation Process with Model Validation on the Industrial Site, Ind. Eng. Chem. Res., 37: 1341-1350. Low K.H., Sorensen, E. 2002, Optimal Operation of Extractive Distillation in Different Batch Configurations, AIChE Journal, 48(5), 1034-1050. Wendt, M., Li, P., Wozny, G. 2002, Nonlinear Chance Constrained Process Optimization under Uncertainty. Ind. Eng. Chem. Res., 41, 3621-3629.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
557
Solution of the Population Balance Equation for LiquidLiquid Extraction Columns using a Generalized FixedPivot and Central Difference Schemes Menwer M. Attarakih\ Hans-Jorg Bart^ and Nairn M. Faqir^ ^Kaiserslautern University, Faculty of Mechanical & Process Eng., Institute of Thermal Process Eng., POB 3049, D-67653 Kaiserslautern, Germany. ^University of Jordan, Faculty of Eng. & Technology, Chemical Eng. Department, 11942, Amman, Jordan.
Abstract In this work, the so-called fixed pivot technique is generalized to discretize the full population balance equation describing the hydrodynamics of liquid-liquid extraction columns (LLEC) with respect to droplet diameter. The spatial variable is discretized in a conservative form using a couple of the recently published central difference schemes. These schemes are combined with an implicit time integration method that is essentially noniterative by lagging the nonlinear terms. The combined numerical algorithm is found fast enough for the purpose of simulating the performance of the LLECs.
1. Introduction In liquid-liquid extraction columns droplet population balance models are now being used to describe the hydrodynamics of the dispersed phase as well as the mass transfer. For the hydrodynamics this model accounts for droplet breakage, coalescence, axial dispersion, exit, and entry events. The resulting population balance equations are integro-partial differential equations (IPDE) that rarely have an analytical solution. These IPDEs are usually projected into a system of hyperbolic partial differential equations by discretizing the droplet diameter. This is done using the method of weighted residuals with the evaluation of many double integrals especially when the breakage and coalescence functions are not separable (Mahoney and Ramkrishna, 2002). In the present work we generalized the fixed- pivot technique of Kumar and Ramkrishna (Ramkrishna, 2000) to handle any integral property of the population number density for continuous flow systems. This technique has the advantage of being free of double integrals and hence it is considered computationally efficient. The discretization of the spatial variable is usually accomplished by the finite volume method using upwind differencing schemes. Kronberger et al. (1995) used the vector flux splitting based on the sign of the local droplet velocity; while the second order scheme they used is still dependent on the Riemann approximate solvers. In the present work we utilize the recently developed family of central differencing schemes of Kurganove and Tadmor (2000). These schemes are of first and second order accuracy and having the advantage of being free of characteristic decomposition beyond the CFL (Courant, Friedrichs and Lewy) related local speeds. The time discretization is
558 accomplished using a first order implicit approach that is essentially noniterative by careful lagging of the nonlinear terms. The accuracy of the combined algorithm is tested using a simplified analytical solution of the population balance equation (PBE).
2. The Mathematical Model The PBE describing the behavior of the dispersed phase in a continuous LLEC in terms of the concentration of a general quantity , p(v), could be written as: dp , d[U,p] dt dz
dz
z>.|Va
fuffie,^ S{z-Za) + p{p,v]
(1)
where p= u(v) N(t)f[v;t,z)Sv is the average quantity associated with droplets having a volume between v±dv per unit volume of the dispersion at height z and time t. N(t) is the average total number of droplets per unit volume of the dispersion and f[v;t,z) is the average droplet number density function. Ud is the dispersed phase velocity that is determined by coupling the volume balances of the two phases (Kronberger et al., 1995). Note that/^^^'^Cv) is the normalized number density of the droplets leaving the distributor with an average volume v. The last term on the right hand side is the net rate of droplet generation by coalescence and breakage per unit volume and unit time (Ramkrishna, 2000). In order to complete the specification of the problem, initial and boundary conditions are to be defined. The initial conditions are dependent on the start up situation of the LLEC, however the Danckwert's boundary conditions are utilized for this model by considering the LLEC with an active height H to behave like a closed vessel between O^and H~ (Wilburn, 1964).
3. Discretization of the PBE with Respect to Droplet Diameter The droplet volume (and hence diameter) is discretized using the fixed pivot technique. Consequently it is partitioned into Mx grid points according to the structure: v. = v.^^,2' ^/ -(^i-i/i'^^i+i/i)^^ associated
^ where jc, is called the fixed iih pivot. Let the total quantity
with the population density in the iih interval be defined
as:
/•v+l/2
(pXt^z) =
u{v)n(v,t,z)ov
where the remarkable significant of this quantity will be
•'Vl/2
Utilized in defining the total hold up (when u=v) of the dispersed phase ( 0 = 2^^,). The quantity, p, is now expanded using a point wise sampling of the related function at the pivot points: (the dependence of the discrete variables on t and z will be omitted for sake of simplicity): p(v,^z) = J ^ , ( J ( v - x , )
(2)
1=1
The basic idea in the fixed pivot technique is that when a droplet of volume, v, is
559 produced by either breakage or coalescence it will never coincide exactly with the /th pivot except for linear grid. So, the volume of this droplet will be assigned to the adjacent pivots such that any two moments of order mj and m2 related to the population density are conserved. Following this, Eq.(l) is discretized with respect to droplet volume by integrating its both sides over the /th grid boundaries and making use of Eq.(2) and after some algebraic manipulations we get:
dt
dz
dz
D. ' dz
Qd^^feed)
(3)
A
The discrete source term will be written in a rather compact form by introducing the idea of the interaction matrices for droplet breakage and coalescence which decouples the working variables vector, cp, from the grid structure. Consequently, the interaction matrices are generated only once a time even for time dependent frequencies. Now the discretized PBE (3) is projected onto the droplet diameter coordinate using the identity: un(v,t, z)ov =
un{d,t, z)od that leads to the vector function source term:
p = PL(p-\-(p^ •\y^'^(p]^-qy{co(p\
(4)
Note that the breakage matrix A is upper triangular with elements that are given by:
(5) ^u,i,k = f r /^uid I dl)rr'>(m,,m,,d)Sd^\y'
di.
PSd I
dl)rr{m,,m,,d)dd
di+\i2
where 7^' ^"^ and y^^ are the fractional volumes during droplet assignment satisfying the conservation of any two moments of order my and m2 (Ramkrishna, 2000), andP^{d \ d') = u{d)j3{d \ d')/u(d') is a modified daughter droplet distribution. The /th coalescence interaction matrix ^^'^ depends only on the grid structure with nonzero elements: T^'J = co(d^j,dl,(/>)Y^'j , where cois the coalescence frequency and.
u{d]+dl)
n--Sjj[rr^(m,,m,,d]^di)u{d])u{dl) [^-^3j,^]lrf^(rrh^rn,,d]^dl)
u{d]+dl) u{d])u{dl)
,ifdU
,ifdf
It is worthwhile to note that Sj^k is the kronecker delta and the symbol, •, appearing in Eq.(4) denotes the element by element product between the two given vectors.
560
4. Discretization of the PBE with Respect to Space and Time Eq.(3) represent a system of conservation laws that are coupled through the convective and source terms and is dominated by the convective term for typical values of D^ and Ud encountered in LLECs (Peclet No.^lxlCfH -2xl(fH). Since we are interested in the droplet volume balance to determine the column hold up, we let mi=0 and m2=l corresponding to conserving the zero and the first moments of the number population density. Due the dominance of the convective term it is expected that the hold up profile of each class (^,) will move with time along the column height with a steep front. So, accurate front tracking discretization approaches are to be used such as the nonoscillatory first and second order central difference schemes. Let the /th convective flux be denoted as F^ - U^/p^ and the staggering spatial grid: Zi^^i2 = z, ± Az / 2 and the average cell hold up as (p.i = \'^"^(p^{t,z)SzlISz.
The convective flux then is
discretized in conservative form using the Kurganove and Tadmor (2000) central schemes as follows:
Pi.Mn =
f'((PiMi^M,M /dz)-^F{(p,i,d(p,i Idz)
5,,,,i/2 (
Z
r—
^ij^i -(Pi
M,M dz
M,i 1 (7) dz
where the numerical derivatives appearing in Eq.(7) are reconstructed from the computed cell averages using a minmod-like limiter with adjustable parameter OE[1,2]. The local maximum speeds, S^^^yj' ^^^ evaluated component wise without resource to the full evaluation of the Jacobian df/d(p . This is accomplished by lagging the nonlinear term during the implicit time discretization where the numerical flux becomes locally linear f^^ =U'^(p'^^2it the time level r+7. Note that the above central differencing scheme is of second order accuracy and could be made first order by setting the spatial derivatives in Eq.(7) to zero. The diffusion term appearing in Eq. (3) is approximated using the difference between two adjacent first order derivatives. The time discretization is accomplished using the implicit Euler method where it is made noniterative by carefully lagging the nonlinear parts in the convective and source terms.
5. Numerical Results and Discussion Before we apply the numerical approach developed in this work to the full PBE, we would like to gain some trust in its performance by comparing it with some standard although simplified analytical solution. So, we consider a LLEC with a dispersed phase introduced at the location (Zd>0) to flow freely upward through a stagnant continuous phase. The dispersed phase velocity, Ud, could be taken as the free rising velocity of a single droplet for low values of hold up (Ut) and the axial dispersion of the dispersed phase is neglected as well as the droplet breakage and coalescence. The latter assumption should not be considered as an oversimplification of the problem since the authors have tested the performance of the droplet diameter discretization separately (Attarakih et al., 2003). Accordingly the analytical solution for the system of Eqs.(3) using the Laplace transforms could be simplified in terms of dispersed phase hold up to:
561
(pit, z)=f/r I"'"' p"" id)/u, {d)u t-^-^\Sd
(8)
U,{d)
where u is the unit step function and U^J"^ is the superficial dispersed phase velocity. To completely specify the problem we assume the following for a laboratory scale LLEC://=2.55 m, Zd=0.25 m, Zc=2.25 m, column diameter = 0.15 m, the feed distribution is exponential in terms of number density (e''^'''^), dmin = 0.1 mm, dmax=^fnm, and the volumetric flow rate of the dispersed phase is O.lm^/h. The single droplet terminal velocity is taken: U^ = 0.036J , where d is in mm and the droplet diameters as well as the spatial grids are uniformly constructed. The initial condition is taken as zero (no dispersed phase present initially). All the numerical testes are conducted using a PC processor Pentiumlll with 750 MGH speed and digital visual FORTRAN version 5.0. The Lj error ( l i ( 0 = S £ | (p^r'^iO-^tr'^it)
\ Ad^Az^) is
1=1 /=i
used to test the accuracy of the numerical solution at a given instant of time. In this work , the first and second order central schemes are denoted by KTl and KT2 respectively. Fig. 1 shows the convergence characteristics of the present numerical algorithm when compared with the analytical solution given by Eq.(8). First both KTl and KT2 converge in the senesce of the Lj error as the number of pivots and spatial cells is increased. Second the Lj error of the KT2 method is approximately 40% lower than the KTl method as expected where the minmod parameter, 0 is set equal to 2. Fig. 2 compares the analytical and the numerical solutions using both KTl and KT2 after t=10s. As expected KTl shows numerical dispersion especially around the steep moving front and hence the front is somewhat smeared. However, the KT2 scheme tries to capture the moving front in a better way with a slight increase in the CPU time. The CPU time is compared for both schemes using 500 time steps of length At=0.02s using the implicit Euler method where it is found that KTl needs 5s while KT2 7s. Next we applied the present numerical approach to simulate the full PBE including droplet breakage and coalescence with the following frequencies: T = 1.4x10'd\co = (d^ ^d^)llOO ,j3{d\d) = 6d^/d^, grid of 15x100 cells and U, =0.036J(1-0). Fig. 3 shows the steady state droplet volume distribution calculated using the above specifications and the KTl and KT2 schemes. It is clear that the droplet breakage is dominant and so as the droplets ascend the column the distribution is shifted to the small size range. This results in a nonuniform dispersed phase hold up along the column. The KTl and KT2 schemes produced identical results since the steady state profile is not sharp.
6. Conclusions The PB approach is used to model the complex behavior of the dispersed phase in a general LLEC. The resulting IPDEs are projected into a system of nonlinear and coupled conservation laws by generalizing the fixed pivot technique. These conservation laws were spatially discretized using nonoscillatory central differencing schemes. The implicit time discretization is made noniterative by careful lagging of the nonlinear terms. The extension of the present algorithm to mass transfer is under current development.
562
150 30
No. of pivots
200
No. of spatial cells
Figure 1: Convergence of the central differencing schemes, t=10 s: KTl & KT2.
3
. . .
^
Analytical VFS+FLEX KTl KT2
2.5 ^ 2 1.5
0.5
time =10 8
1 •
-
\
0 Column Height (m)
Figure 2: Comparison between KTl and KT2 methods.
Droplet Diameter (mm)
Column Height (m)
Figure 3: Steady state droplet volume distribution using KTl and KT2 methods.
7. References Attarakih, M.M., Bart, HJ. and Faqir, M.M., 2003, Chem. Engng. Sci, in press. Kronberger, T., Ortner, A., Zulehner, W. and Bart, H.J., 1995, Computers Chem. Engng., 19, S639. Kurganov, A. and Tadmor, E., 2000, J. Comput. Phys., 160, 241. Mahoney, A.W. and Ramkrishna, D., 2002, Chem. Engng. Sci., 57,1107. Ramkrishna, D., 2000, Population Balances, Academic Press, San Diego. Wilburn, N.P., 1964, Ind. and Engng. Chem. Fund., 3, 189.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
563
IdentiHcation of Multicomponent Mass Transfer by Means of an Incremental Approach Andre Bardow, Wolfgang Marquardt* Lehrstuhl fiir Prozesstechnik, RWTH Aachen, D-52056 Aachen, Germany
Abstract In this work, an incremental approach to model identification is introduced. The new method follows the steps of model development in the identification procedure. This reduces uncertainty in the estimation problems. Furthermore, it allows the efficient exploitation of model structure and thereby reduces the computational expense substantially. The proposed method is applied to examples from binary and ternary diffusive mass transfer and its performance is compared to current approaches.
1. Introduction The identification of models for kinetic phenomena in multiphase reactive systems is still an open question. In the common approach to model identification model candidates based on prior knowledge or physical insight are proposed and inserted in the balance equations. Measured data is then used to estimate unknown parameters in the fully specified models. We therefore call this approach simultaneous identification. Model discrimination techniques are finally employed to identify the most suitable model (see e.g. Asprey and Macchietto, 2000). The method suffers from several drawbacks: introducing model candidates into the balance equations may bias the estimation problem (e.g. Walter and Pronzato, 1997); computational cost is high since the whole estimation problem is solved for every possible candidate (e.g. Asprey and Macchietto, 2000); and the methods are usually tailored for point measurements with low-resolution (e.g. Schittkowski, 2002) whereas modem experimental techniques offer data with high spatial and temporal resolution. In this work a general incremental approach to model identification is proposed which minimizes model uncertainty and reduces computational cost while efficiently utilizing data from high-resolution measurement techniques. The approach tries to reflect the inherent model structure in the identification procedure itself The method will be illustrated with examples from diffusive mass transfer.
2. Incremental Modeling and Model Identification 2.1. Model development The key idea of incremental model identification is to mimic model development in model identification. For conciseness we will present the approach for isothermal binary diffusion. The extension to multicomponent mixtures is given in the examples below. * Correspondence should be addressed to W. Marquardt, [email protected]
564 Modeling of diffusion processes usually starts by formulation of the balance equations for each component
Model B:
dci _ dt
3(qw^)
dJi
dz
dz
(1)
where u^ is the volume average velocity, cj the concentration and Ji^ the diffusive flux of species 1. Next, constitutive equations have to be given for the convective and the diffusive flux. In diffusion experiments volume effects can usually be neglected, leading to w^ = 0 (Tyrell and Harris, 1984). Still, the modeler has to specify the diffusive flux. For binary diffusion, all models can be related to Pick's law (Taylor and Krishna, 1993) M o d e l T : y r =-D- 12 ^^^ dz
(2)
where DJ^ is the binary Pick diffusion coefficient. It is a function of composition in concentrated liquid mixtures. Therefore, a further constitutive equation has to be employed. Today, there is still uncertainty about a suitable model, especially for multicomponent mixtures (Taylor and Krishna, 1993). We therefore state the generic relationship Model D: Dl^^
f{x,e)
(3)
where x represents the mole fraction and the vector 0 collects all constant coefficients. If the function/and its parameters are known the model can be solved. 2.2. Model identiHcation The simultaneous approach to identification is computationally expensive and may lead to biased estimates. It neglects the inherent model structure with its sequence of models, each containing further assumptions about the process. In contrast, the incremental approach follows the steps of model development for model identification (Pig. 1). measured data structure of system
I states Xj(z,t) fluxes J(z,t)
model B
balances
model BT
balances
transport law
model BID
balances
transport law
structure for fluxes
structure for coefficients!
~1
coefficients D(z,t)
diffusion coefficient
model structure and parameters for diffusion process
parameters
n
Figure 1: Incremental approach to identification of diffusion models.
565 2.2.7. Model B: Balances The balance equation (1) contains the least uncertainty. Without introducing potentially uncertain constitutive equations, the diffusive flux itself is computed from this equation as a function of space and time
Model B: J^zJ)
=- f ^ ^ ^ ^ z ' J dt
(4)
assuming the boundary {z=0) impermeable. The main difficulty in the evaluation of Eq.(4) is the estimation of the time derivative of the measured concentration. This is an ill-posed problem, i.e. small errors in the data will be amplified. Regularization techniques have to be employed. A smoothing spline is used here (Reinsch, 1967). The diffusive flux estimation requires only the solution of the linear Eq. (4) independent of the number of candidate models. This decoupling of the problem carries fully over to the multicomponent case and reduces the computational expense substantially. 2.2.2. Model BT: Transport laws Different candidate models for the description of the diffusive flux may be introduced. The unknown diffusion coefficients are then computed as function of space and time. In the binary case, the diffusion coefficients are calculated from Eq. (2) as
ModelT: D\^{z,t) = J)^^'']^ . dc^{z,t)ldz
(5)
The spatial derivative is also calculated by the smoothing spline approach. Since transport coefficients have a physical interpretation which results in certain restrictions (e.g. positivity), those models violating any restriction could be discarded already at this stage. 2.2.3. Model BTD: Diffusion coefficients Several m o d e l s / ^ (Eq. (3)) for the diffusion coefficient are assumed. The parameters are estimated using the diffusion coefficients from Eq. (5) and the measured mole fractions. Since both quantities contain errors this is an error-in-variables problem:
^f^[^e\D^2iZi,0-f''{x{z,,U)
+ S^,e'')] + wsSl^
(6)
where J represents the error in the mole fraction and w weights for both types of errors. Here, the efficient solution method by Boggs et al. (1992) is used. Finally, the adequacy of the model candidates is quantified using the a posteriori probability for each model M according to the data Y (Stewart et al., 1998). In the case of unknown variance it can be calculated from p(M|y)-p(M)2-''«'%/2r'"
(7)
566 where the sum SM is the residual sum of squares and PM is the number of independent parameters in model M, To rank the models, probability shares n are calculated as TT^ = p(M I y ) / \ p(M'| Y). Based upon that value the most suitable model is chosen. It should be noted that this criterion does not carry fully over to the error-in-variables case. It is used here based on the heuristic argument that the errors in the measured mole fractions are expected to be much smaller than the values in the estimated diffusion coefficient. The corrected mole fractions are therefore assumed to be errorfree in the model discrimination step.
3. Numerical Example 3.1. Binary diffusion In this simulation study the diffusion coefficient in the mixture toluene-cyclohexane as given by Sanni et al. (1971) is estimated from mole fraction data. The assumed experimental setup was presented by Bardow et al. (2003). A typical experiment gives 60 concentration profiles with a spatial resolution of 400 points, i.e. a total of 24,000 data points. The simulated concentration profiles were corrupted by Gaussian noise with variance o^=70"^ which corresponds to very unfavorable experimental conditions and shows the robustness of the approach. The analysis is carried out as described above. Fig. 2 compares estimated and true flux values. There is a substantial difference for very small times. The estimation is excellent for larger times. This is due to the steep gradients at the beginning of the run which cannot be easily distinguished from measurement noise. Different polynomials were proposed for the diffusion coefficient. Fig. 3 shows the resulting fit from Eq. (6) and gives the probability shares TIM- Model discrimination favors the linear model due to the difficulty of estimating the full concentration dependence from a single experiment. Commonly, more than 10 experiments are used (e.g. Sanni et al, 1971). Therefore, only the linear trend can safely be deduced and the probability shares indicate the need for further experiments since no model gains high values. 1.2 x10"
true —
0.9
1
estimated |
t = 2 min-^c
—— true constant (71 =13.2%) ^ CO
'
linear (7t|j=39.3%)
1r It
M
•^0 6 t = 6 min
0.3
0
2
4.^. . 6 , position [mm]
Figure 2: Estimated diffusive flux.
8
10
0.2
0.4 ^Toluol
iV
0.8
Figure 3: Estimated diffusion coefficients.
567 Table 1: Ternary diffusion coefficient from a single experiment (noise level o^=10'^). coefficient D,/ Djj Djj D27 D22 value in IQ-^m^/s 4.35 ±0.006 1.69 ±0.012 3.56 ±0.006 6.15 ±0.012 error in % -2.0 -7.8 -2.3 -2.3
In order to compare the proposed method to the simultaneous approach we performed a classical parameter estimation for the linear model. It lead to a very similar solution and the residual sums of squares differ only by 0.11%, even though this objective is not directly employed in the incremental approach. Furthermore, the computational time for the simultaneous approach lies in the order of hours due to the distributed nature of the problem and the high measurement resolution whereas the incremental procedure takes only minutes including the fit of all models and the model discrimination. 3.2. Ternary diffusion In ternary mixtures, the diffusive flux of one component is influenced by the concentration gradients of the other components. This requires the use of Generalized Pick's Law leading to a matrix of four Fick diffusion coefficients (Taylor and Krishna, 1993). Current measurement procedures in ternary mixtures require at least two experiments to estimate these coefficient and the cross coefficients are still ill-determined (e.g. van de Ven-Lucassen et al, 1995). In this section the full diffusion coefficient matrix is estimated from a single experiment using the incremental approach. Sample diffusion coefficients are taken from Arnold and Toor (1967) who studied gas systems for which the diffusion coefficients (Eq. 3) are constant. The estimated values are compared to the true solution in Tab. 1. The diffusion matrix can be estimated from a single experiment with good precision using the incremental approach. It should be noted that the four diffusion coefficients are not identifiable from Eq. (5). But the insertion of the constitutive law for the diffusion coefficient (Eq. (3)) into the flux expression allows to overcome this situation. Furthermore, it should be stressed that this estimation problem is very difficult to solve by the simultaneous approach. The Fick matrix is positive definite which is enforced by three inequality constraints (Taylor and Krishna, 1993). In parameter estimation, a sequential approach with an infeasible path optimization routine is often used. This may not be possible since the model cannot be integrated if the matrix is not positive definite. This limitation does not apply to the incremental approach since no solution of the direct problem is required. It could therefore also be used to initialize the simultaneous procedure.
4. Conclusions and Future Work In this work a new incremental approach to model identification was presented which makes use of the inherent model structure. Thereby, uncertainty in each step of the calculation is reduced. It allows furthermore a decoupling of the problems which reduces the computational cost to several minutes even for distributed systems. The approach is especially suited for high-resolution measurements. The data is used to solve an infinite dimensional problem, the estimation of the diffusive flux. This step
568 may be error-prone with low-resolution data. But it could be shown that the method compares well with results from simultaneous identification strategies which may fail completely for ternary mixtures whereas the incremental approach is robust. A stepwise procedure as proposed here may be even more advantageous when it is difficult to formulate suitable candidate models. This was recognized by Tholudur and Ramirez (1999) for the estimation of reaction rates in a protein production model. Based on the estimation of the generalized fluxes data mining techniques may be employed to discover constitutive relationships. This possibility is currently investigated.
5. References Arnold, J.H. and Toor, H.L., 1967, Unsteady diffusion in ternary gas mixtures, AIChEJ. 13,909-914. Asprey, S.P. and Macchietto, S., 2000, Statistical tools in optimal model building, Comput. Chem. Eng., 24, 831-834. Bardow, A., Marquardt, W., Goke, V., Ko6, H.-J. and Lucas, K., 2003, Model-based measurement of diffusion using Raman spectroscopy, AIChE J., in press. Boggs, P.T., Byrd, R.H., Rogers, J.E. and Schnabel, R.B., 1992, User's reference guide for ODRPACK version 2.01 software for weighted orthogonal distance regression. National Institute of Standards and Technology, NISTIR 92-4834. Reinsch, C.H., 1967, Smoothing by spline functions. Num. Math., 10, 177-183. Sanni, S.A., Felland, C.J.D. and Hutchison, H.P., 1971, Diffusion coefficients and densities for binary organic liquid mixtures. J. Chem. Eng. Data, 16,424-427. Schittkowski, K., 2002, EASY-FIT: A software system for data fitting in dynamic systems. Struct. Multidiscip. O., 23(2), 153-169. Stewart, W.E., Shon, Y. and Box, G.E.P., 1998, Discrimination and goodness of fit of multiresponse mechanistic models, AIChE J., 44, 1404-1412. Taylor, R. and Krishna, R., 1993, Multicomponent Mass Transfer, Wiley, New York. Tholudur, A. and Ramirez, W.F., 1999, Neural-network modeling and optimization of induced foreign protein production, AIChE J., 8, 1660-1670. Tyrell, H.J.V. and Harris, K.R., 1984, Diffusion in Liquids, Butterworths, London. van de Ven-Lucassen, I.M.J.J., Kieviet, E.G. and Kerkhof, P.J.A.M., 1995, Fast and convenient implementation of the Taylor dispersion method, J. Chem. Eng. Data, 40,407-411. Walter, E. and Pronzato, L., 1997, Identification of Parametric Models: from Experimental Data, Springer, Berlin.
6. Acknowledgements The authors gratefully acknowledge the financial support of the Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center (SFB) 540 "Model-based Experimental Analysis of Kinetic Phenomena in Fluid Multi-phase Reactive Systems".
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
569
Development of the US EPA's Metal Finishing Facility Pollution Prevention Tool William Barrett, Ph.D., P.E. and Paul Harten, Ph.D. United States Environmental Protection Agency National Risk Management Research Laboratory 26 W Martin Luther King Dr, Cincinnati, Ohio 45268 e-Mail: [email protected]
Abstract Metal Finishing processes are a type of chemical processes and can be modeled using Computer Aided Process Engineering (CAPE). Currently, the United States Environmental Protection Agency is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a pollution prevention software tool for the metal finishing industry. CAPE is the central component of MFFP2T, and the CAPE-OPEN standards were used in MFFP2T development. MFFP2T development is expanding the application of CAPE-OPEN standards beyond the chemical industry to the metal finishing industry. It has a CAPE-OPEN compliant simulation executive to carry out simulations of user-constructed metal finishing processes, and it includes a graphical user interface (GUI) enabling users to build simulations of unique metal finishing processes that closely represent their own metal plating lines. MFFP2T provides only CAPE-OPEN compliant components. Included in MFFP2T is a Thermodynamic Material Object and Property Package to conduct thermodynamic calculations, such as estimation of thermodynamic properties of materials used in the metal plating process, and chemical equilibrium calculations. MFFP2T includes Unit Operations specific to the metal plating industry, both the basic operations (e.g. Alkaline Cleaning, Vapor Degreaser, Cr"^ Plating) and pollution control operations (e.g. Microfiltration, Ion Exchange, Reverse Osmosis). Many of the modules produced are expected to have immediate application in the chemical industry. MFFP2T utilizes the EPA's Waste Reduction (WAR) Algorithm Package to determine the potential environmental impact of materials used in the plating process. All components are developed using the CAPE-OPEN Component Object Module (COM) interface definition language (IDL), and have passed CAPE-OPEN tests available through the CAPE-OPEN Laboratories Network(CO-LaN).
!• Introduction The USEPA recently proposed new regulations of the metal finishing industry that will significantly reduce the amount and concentrations of wastewater that can be discharged. These new regulations are anticipated to have significant economic impact on the industry.
570 Currently, the USEPA is developing a Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a pollution prevention software tool for use by the metal finishing industry. The goal of MFFP2T is to enable a user to evaluate a metal finishing facility's plating line to determine possible methods available to reduce the quantity of waste generated. During the conceptualization process for this tool, it was determined that the tool must be able to simulate the chemical processes occurring within a metal finishing plating line. Processes to be simulated include the plating tanks, rinse tanks, and waste treatment operations, as shown on Figure 1. These operations can all be modeled using chemical engineering techniques such as unit operations and chemical equilibrium. CAPE-OPEN was chosen as the middleware platform for MFFP2T because of the potential for wide use of components developed as part of the MFFP2T project in the general chemical industry. Conversely, MFFP2T would also have the ability to utilize process optimization packages developed for the chemical industry through the same middleware. The clear advantages of these synergies were the driving force behind the selection of the CAPE-OPEN middleware platform. Microsoft's Visual Studio and COM were selected for development of MFFP2T largely because of Microsoft's extensive Active Template Library (ATL). ^l^nt Emission
> ^ A
^ ' ^^ ^»
Air Emission
Air Cleanser Emission
Acid
Air Emission Einission
Y T ^
" ^ " r
" ^ r
Vapor Degreaserl
Alkaline Cleaning
Electro Cleaning
Acid Etch
Spent Solvent Sludges
V V spent Solution Sludges
V V Spent Solution Sludges
Figure 1. Metal Finishing as a Chemical
V V Spent Acid Process. Sludges
Plating ^^ Make-up ^^ E^ ^Solutions ^ " ^ ° " ^ Emission ^^^^ Emissi Emission
T jChromium Plating
V Spent
V
SolutionsSludges
i
Y
r
Rinse
f7^
Used Rinse Sludges
2. CAPE-OPEN Interface Implementation Because the CAPE-OPEN middleware specification was selected for this project, the CAPE-OPEN interfaces needed to be implemented by the various unit operation objects. This section describes the general implementation of these objects. 2.1. Common interfaces The CAPE-OPEN specification recognized the need for handling concepts that may be required of any model object. The common interface specifications provide basic function specifications that are independent of the implementing object; and following basic principles of object-oriented software design, are created once and reused by multiple components. These interfaces include the ICapeldentification interface, the ICapeParameter interface, the ICapeError interfaces, as well as persistence interfaces and collection interfaces. These interfaces are implemented as base classes, and inherited by derived classes such as the thermodynamics or unit packages. The ICapeldentification interface (Belaud, J.P. and Pinol, 2000) is intended for use in interfaces that wish to expose its name and description. As a process simulation may contain several different instances of a particular class, the ICapeldentification interface provides a vehicle for the simulator to distinguish each instance in a user friendly way
571 by providing two properties, the instance name and description. The interface provides get/set functionality for these two properties. In COM implementations, these properties are passes as basic strings, or BSTR. In the case of the current implementation, the interface is implemented by the CCapeldentification class, which is inherited by the derived classes. A collection interface (ICapeCollection) was specified as a CAPE-OPEN common interface because various objects (such as unit operations) need to expose a list of objects (such as ports or parameters) to the simulation executive (Global CAPE-OPEN 2000). The ICapeCollection interface was designed as a read-only collection, that is, the users of ICapeCollection cannot add or remove elements. For this reason, the ICapeCollection is limited and required augmentation through an additional interface that implements element addition and removal functionality. COM also requires additional enumeration functionality to enable Visual Basic and scripting languages the ability to traverse the collection using the enumerator returned by the _NewEnum property and the lEnumVariant interface. The COM collection interface also includes the optional Add and Remove methods, as listed in Table 1. Table 1. Standard Properties and Methods of a COM Collection Object {Shepherd and King (1999). Member Add method Count method Item method _NewEnum property Remove method
Description Adds indicated item to the collection Returns the number of items in the collection Retums the indicated item in the collection Retums an item which supports lEnumVariant Removes the specified item from the collection
Optional Yes No No No Yes
2.2. Notes on the thermodynamic material object and property package implementation The material object implements a set of interfaces, including the ICapeThermoSystem, ICapeThermoPropertyPackage, ICapeThermoMaterialObject, and ICapeThermoMaterialTemplate interfaces (Global CAPE-OPEN, 2002). The ICapeThermoPropertyPackage interface provides methods to access data contained within the thermodynamic property database for pure substances. The thermodynamic property package object consists of a collection of different components available for inclusion in a material object. The ICapeThermoMaterialObject methods are used to obtain and calculate properties associated with a material object. This object must have direct access to the data contained in the property package object to be able to get properties for pure components from the property package. In the current implementation, the material object contains a collection of components exposed through the IComCollection interface described above. Each component contains the following data: concentration, constant properties, and non-constant properties. The material object also contains the temperature and pressure data which completes the definition of the system and mixture properties calculated for the system. The material object's Componentlds and GetComponentConstant methods enumerate the collection and return arrays of the names of the components present in the material object and their properties.
572
2.3. Unit operation implementation template The CAPE-OPEN unit operation specification provides the following five specific interfaces that each compliant unit operation object must implement: ICapeUnit, ICapeUnitEdit, ICapeUnitPort, ICapeUnit Collection^ and ICapeUnitReport. For this implementation of the unit operation template, some of these methods, such as the collection methods are common to all potential unit operation objects, while other methods, such as the edit, calculation and reporting method are likely to be dependent on the specific unit operation being modeled. As such, the unit operation is created as a template class with some methods not implemented. The unit operation template contains the Capeldentification, CapeUnitEdit and CapeUnitReport base classes and is defined as follows: // CCapeUnit template class ATL_NO_VTABLE CCapeUnit : publi c IDispatchlmpl, public CCapeldentification, public CCapeUnitEdit, public CCapeUnitReport, public CCapeUnitCollection
>
In the mixer-splitter implementation, this class template is used as follows: class
ATL_NO_VTABLE CMixerSplitter : public IMixerSplitter, public CCapeUnit
This usage of the CCapeUnit template allows the mixer-splitter object to provide all the necessary interfaces and a specialized dual interface, in this case, IMixerSplitter that can provide additional methods and properties for the mixer. 2.4. MFFP2T simulator models Electroplating occurs by formation of a metal atom from a metal ion, causing the deposition of the atom onto the electrode. In an electroplating tank, electrical voltage is applied, resulting in a current flow and the deposition of metal onto the electrode. The difference between the applied voltage and the cell voltage is called the overpotential (r|). The overpotential results in an electrical current flowing through the cell, plating metal onto the cathode. The electrical current is typically in terms of the area of the electrode, which is called the current density (i). By knowing the current density, the area of electrode, and the number of electrons required to convert the metal ion in solution to metal atoms on the electrode, the plating rate for the metal can be determined.
573 Rinsing is conducted after each step of the plating process prior to the next operation. A thorough rinse is required to prevent dragged out chemicals from one step from contaminating the chemical baths used in the subsequent processes. Contamination of a plating bath may render it unable to carry out its process and would require replacement. Plating, rinsing baths, and parts washers all have the potential for sludge formation as a result of precipitation of slightly soluble compounds in solution. The concentrations at which compounds precipitate is a function of the oxidation state of the metal ions in solution, presence of chelating agents, and solution pH. Therefore, in order to estimate sludge generation rates, the simulator must include solubility, acid/base, redox, and coordination chemistry calculations.
3. Environmental Impact Package and Database The US EPA is currently developing a CAPE-OPEN compliant version of the WAste Reduction (WAR) algorithm (Cabezas, Bare, and Mallick, 1999; and Cabezas, Bare, and Mallick, 1999). The WAR algorithm determines the Potential Environmental Impact (PEI) of the manufacturing process. The PEI considers the human and ecological impacts of discharges from processes, providing a more relevant measure of the actual impacts that simply considering reductions in mass or volume. The WAR package supplements the DIPPR (Design Institute for Physical Property Research) Database of 1,685 chemicals with eight categories of human health and environmental impact data. Data not available for chemicals in this DIPPR database were estimated using molecular methods, such as QSAR (Quantitative Structural Activity Relationships). In order to consider each risk category, the potential environmental impact (^^) for a chemical can be determined by summing the specific environmental impacts of chemical:
where: an
= is the weighting factor for impact category n and ^'^
= the category n specific environmental impact for chemical k
The categorical specific environmental impact for a chemical can be viewed as a ratio of the score of a chemical in the category to the average score of chemicals in that category. For example, the chemicals can be scored based on human toxicity using factors such as the LD50.
4. Discussion and Conclusions Past research and development in emissions and risk characterization has focused on the development of a risk screening tool to target reduction of risk to workers. The 2000 update of the National Metal Finishing Environmental Research and Development Plan (Pacific Environmental Services, Inc., 2000) identified that the emphasis of emissions
574 and risk characterization R&D shifted from emissions controls and other end of pipe approaches to research, development and demonstration of alternative materials, and pollution prevention. Accordingly, MFFP2T development is shifting to create a tool that will consider cost effective pollution prevention solutions. With the goal of the the US EPA's updated metal finishing R&D plan in mind, development has begun on MFFP2T. The capabilities of MFFP2T will include identification of waste generating processes and evaluation of pollution prevention alternatives. Pollution prevention has numerous advantages to the plating facility, including: • Reduced Environmental Risks By reducing the total mass of wastes discharged by the facility, the risks to workers and surrounding communities should be reduced, and • Economic Benefits Reducing the amount of waste produced reduces waste disposal and improves the utilization of chemicals obtained for the process. Profitable pollution prevention (P3) is an approach being embraced within industry to find cost effective technologies and practices for compliance with the regulations. The goals of development of MFFP2T is to provide the user with a program that can easily be used to model the plating process and evaluate the effects of modifications to the plating process on the environmental impact of wastes produced. In order to meet this objective, the model must conduct material balances around each process in the plating line and evaluate potential chemical reactions that may occur within each tank. For example, spent rinse water may be reused at numerous locations within the process. The program will need to evaluate the different reuse possibilities, determine which are feasible and compare the economic aspects of the different reuse options. MFFP2T allows the metal finishing industry to evaluate various process modifications to reduce the quantity and environmental impact of wastes generated. The environmental impact packages developed as part of MFFP2T can be used to analyze similar processes that support the CAPE-OPEN interfaces, expanding the applicability of these environmental impact packages to the chemical and allied industry.
5. References Belaud, J.P. and Pinol, 2000, Open Interface Specification: Identification Common Interface, Global CAPE-OPEN. Global CAPE-OPEN, 2000, Open Interface Specification: Collection Common Interface, Global CAPE-OPEN. Shepherd, G. and King, B., 1999, Inside ATL, Microsoft Press, Redmond. Global CAPE-OPEN 2002, CAPE-OPEN Open Interface Specification Thermodynamic and Physical Properties, GCO-ThermoVersion 1.06, Global CAPE-OPEN. Cabezas, Heriberto, Bare, Jane C , and Mallick, Subir K., 1997, Computers in Chemical Engineering, 21, pp. S305-S310. Cabezas, Heriberto, Bare, Jane C , and Mallick, Subir K.,1999, Computers in Chemical Engineering, Vol. 23, pp. 623-634. Pacific Environmental Services, Inc., 2000, Metal Finishing Environmental R&D Plan: An Update, US Environmental Protection Agency, Cincinnati, OH.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
575
Modelling and Simulation of Kinetics and Operation for the TAME Synthesis by Catalytic Distillation Grigore Bozga\ Gheorghe Bumbac\ Valentin Plesu\ Hie Muja^ and Corneliu Dan Popescu^ ^University Politehnica of Bucharest, Centre for Technology Transfer in the Process Industries, 1, Polizu Street, Building A, Room A134, Sector 1, RO-78126, Bucharest, Romania, Phone/fax: +40-21-2125125, email: [email protected] ^S.N.P. PETROM, INCERP Ploiesti Subsidiary, 291 A, Republicii Blvd., RO-2000 Ploiesti, Romania, Phone: +40-244-198738, Fax: +40-244-198732, email [email protected]
Abstract The work presents a simulation study of a reactive distillation (RD) unit for tert-amylmethyl-ether synthesis using HYSYS™ simulation environment. In order to simulate the RD column this was represented as an ensemble of three components corresponding to the software chemical operation modules available in HYSYS. The calculated iso-amylenes conversion was compared with measured values on a pilot plant RD unit.
1. Introduction Methyl ethers replace lead compounds in gasoline. One of these ethers largely used nowadays, obtained by the etherification of 2-methyl-l-butene (2-MlB) and 2-methyl2-butene (2-M2B) with methanol, is the ^amyl-methyl-ether (TAME). Catalytic distillation, involving reaction and separation to take place in the same unit, is possible and beneficial when volatility of reactants is close and quite different from the volatility of products and when the conversion is limited by chemical equilibrium. Synthesis of TAME from methanol and iso-amylenes fits to this conditions. In the case of TAME synthesis, the reaction is taking place on the surface and/or in the pores of the catalyst grains (cationic exchange resin Amberlyst). TAME is an important additive for ecological gasoline, consequently modelling and simulation of kinetics and operation of TAME synthesis is an important practical problem. Commercial process simulators have scarce means for the simulation of catalytic distillation columns. We propose a solution to adapt the resources of HYSYS™ 2.4.1 simulation environment to solve this problem. A conceptual model generated, for a catalytic distillation column with Sulzer KATAPAK structured packing, containing Amberlyst 15 ion exchange resin. In the reactive distillation column model, three zones are considered: the stripping zone (simulated as the reboiled absorber standard HYSYS™ operation), the reactionseparation zone and the rectifying zone (simulated as refluxed absorber standard HYSYS™ operation). The reaction separation zone is modelled considering the backflow cell model (BCM) with forward flow of the liquid and backward flow of the vapour. The BCM consist of series of five continuous stirred tank reactor units (CSTR)
576 with the same geometry and size. In each cell of the series is assumed to reach the vapour-liquid equilibrium, the increase of conversion being calculated as in a CSTR reactor. The diffusional limitations on the process kinetics inside the catalyst pellets are taken into account. Both stripping and the rectifying zones are represented as noncatalytic packed columns. The simulation is performed using UNIQUAC-UNIFAC model property package to calculate the activity coefficients in the liquid phase. The model simulation results are compared with pilot plant experimental data on TAME synthesis by catalytic distillation. In this paper, we focused on the relevance of the commercial software HYSYS™ for the simulation of catalytic distillation columns. As in the current version of HYSYS'^^ the built-in RD module is not directly suitable for the simulation of the heterogeneous catalytic distillation process, this study is concentrated to develop a model for heterogeneous RD and to implement it in the HYSYS™ simulation environment. The objectives of this work are to develop a suitable simulation module for heterogeneous reactive distillation compatible with HYSYS'^'^ and to apply it to an intermediary scale pilot plant unit. The simulated reactive distillation unit includes a pre-reactor (adiabatic tubular fixed bed reactor) and the reactive distillation column. The advantage of using a pre-reactor for TAME synthesis is to insure a partial transformation of reactants before RD column, thus increasing the throughput of reaction system.
2. Reaction Kinetics and Thermodynamics For methanol/reactive i-amylenes ratios close to the stoichiometric one, the main chemical reactions usually taken into consideration are the following:
+CH^OH CH3 — CH2 — C = C H 2 < I CH3 (2M1B)
= ^ VRI
OCH3 I CH3 — CH2 — C—CH3 I CH3 (TAME)
VR3
CH3 — CH = C —CH3 CH3 (2M2B) Both etherification reactions are exothermic, i.e. the equilibrium isoamylenes conversion to TAME decreases with temperature. The isomerisation reaction at operation temperature (between 60°C and 120°C) favours the 2-M2B formation and this component will have the greatest concentration in the reaction mixture. From kinetic point of view this situation is not advantageous because a higher reactivity amylene (2MIB) is replaced by a lower reactivity one (2-M2B). The two olefins (2-MlB and 2M2B) used in the synthesis are fed as a hydrocarbon mixture resulted as the C5 fraction
from the Fluid Catalytic Cracking (FCC) unit. Table 1 presents the composition of the feed mixture used in the simulated process scheme. Since methanol forms simple and complex azeotropic pairs with almost all the hydrocarbon components, the system shows strongly non-ideal behaviour. The property package used to calculate the liquid activities of the considered components is based on the UNIQUAC-UNIFAC model.

Table 1. Feed composition (mole fractions).
Methanol    0.232584
2M-1B       0.063166
2M-2B       0.121623
TAME        0.000077
n-Pentane   0.065240
i-Pentane   0.369887
1-Pentene   0.028088
Kinetic studies for TAME synthesis were published by Muja et al. (1986), Randriamahefa et al. (1988), Piccoli and Lovisi (1995), Oost and Hoffmann (1996), Rihko et al. (1995, 1997), Sundmacher et al. (1999), etc. In this study we used the following reaction rate expressions published by Oost and Hoffmann, valid for an Amberlyst cation-exchange resin catalyst characterised by an exchange capacity of 5.2 meq H+/g:

rR1 = k1 [ a1/a4 - (1/Ka1)(a3/a4^2) ]   (1)

rR2 = k2 [ a2/a4 - (1/Ka2)(a3/a4^2) ]   (2)

rR3 = k3 [ a1/a4 - (1/Ka3)(a2/a4) ]   (3)

where the subscripts 1, 2, 3 and 4 denote 2M1B, 2M2B, TAME and methanol, respectively.
The temperature dependence of the reaction rate constants is given by Arrhenius-type expressions:

k1(T) = 1.215835*10^13 exp(-13752.9/T)   [kmol/(kg_cat h)]   (4)

k2(T) = 9.401640*10^10 exp(-12488.5/T)   [kmol/(kg_cat h)]   (5)

k3(T) = 2.142878*10^11 exp(-17040.8/T)   [kmol/(kg_cat h)]   (6)
The chemical equilibrium constants of the three reactions are calculated from the relations proposed by Syed et al. (2000):

ln(Ka1) = -39.065 + b1/T + 4.6866 ln(T) + 0.007737 T - 2.635*10^-5 T^2 + 1.547*10^-8 T^3   (7)
ln(Ka2) = -34.798 + b2/T + 3.9168 ln(T) + 0.012937 T - 3.121*10^-5 T^2 + 1.805*10^-8 T^3   (8)

Ka3 = Ka1/Ka2   (9)

where b1 and b2 are the coefficients of the 1/T terms reported by Syed et al. (2000).
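A minimal Python sketch for evaluating Eqs. (4)-(9) is given below; the pre-exponential constants follow the values transcribed above, the coefficients b1 and b2 must be supplied from Syed et al. (2000), and the placeholder values used in the example are hypothetical.

import math

# Arrhenius rate constants, Eqs. (4)-(6); T in K, k in kmol/(kg_cat.h)
def k1(T): return 1.215835e13 * math.exp(-13752.9 / T)
def k2(T): return 9.401640e10 * math.exp(-12488.5 / T)
def k3(T): return 2.142878e11 * math.exp(-17040.8 / T)

# Equilibrium constants, Eqs. (7)-(9); b1, b2 are the 1/T coefficients
# of Syed et al. (2000) and must be supplied by the user.
def ln_Ka1(T, b1):
    return (-39.065 + b1 / T + 4.6866 * math.log(T) + 0.007737 * T
            - 2.635e-5 * T**2 + 1.547e-8 * T**3)

def ln_Ka2(T, b2):
    return (-34.798 + b2 / T + 3.9168 * math.log(T) + 0.012937 * T
            - 3.121e-5 * T**2 + 1.805e-8 * T**3)

def Ka3(T, b1, b2):            # Eq. (9): Ka3 = Ka1 / Ka2
    return math.exp(ln_Ka1(T, b1) - ln_Ka2(T, b2))

for T in (333.15, 363.15):     # 60 and 90 degC
    print(f"T = {T:.2f} K: k1 = {k1(T):.3e}, k2 = {k2(T):.3e}, k3 = {k3(T):.3e}")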
3. Process Flowsheet
Figure 1 presents the TAME synthesis process flowsheet using catalytic distillation. The C5 fraction is mixed with methanol and the resulting stream is fed to the preliminary reactor (IV). The resulting mixture is mixed with a recycled methanol stream and is fed to the catalytic distillation column, below the reaction zone.
Figure 1. Simplified flowsheet for the RD column.

The three zones of the reactive distillation column are represented in the flowsheet as three different units. The stripping zone of the column is simulated as a reboiled absorber, a standard operation in HYSYS(TM). The flow and mixing in the reaction zone are approximated by a backflow cell model (BCM), with forward flow of the liquid and backward flow of the vapour in the reactive part of the RD zone (Roemer and Durbin, 1967). The BCM consists of a series of five perfectly mixed cells of equal size; this number of cells was estimated from RTD data measured on KATAPAK packing (Dima et al., 2003). In each cell the conversion increase was calculated considering a uniform distribution of the catalyst in the cells and vapour-liquid equilibrium, as sketched below. The influence of internal diffusion on the process kinetics was evaluated by integrating the mass balance equations inside the catalyst pellets and calculating the effectiveness factor. The estimated average value of the internal effectiveness factor of the catalyst pellets in the reaction zone of the column was 0.8.
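A minimal sketch of the backflow cell idea described above, assuming a single lumped etherification reaction with a hypothetical rate function, catalyst loading and feed rate; it marches the conversion through five equal CSTR cells, each corrected by the internal effectiveness factor (0.8, as estimated above).

from scipy.optimize import brentq

N_CELLS = 5                # backflow cell model: five equal, perfectly mixed cells
ETA = 0.8                  # internal effectiveness factor of the catalyst pellets
W_CELL = 160.0 / N_CELLS   # kg catalyst per cell (hypothetical total loading)
F_AMYL = 1.2               # kmol/h reactive amylenes fed to the zone (hypothetical)

def rate(x):
    """Hypothetical lumped rate, kmol/(kg_cat.h), vanishing at equilibrium."""
    k, x_eq = 0.05, 0.9
    return k * (1.0 - x / x_eq)

x = 0.0                    # conversion entering the first cell
for cell in range(N_CELLS):
    # CSTR balance per cell: F*(x_out - x_in) = eta * W_cell * r(x_out)
    f = lambda x_out: F_AMYL * (x_out - x) - ETA * W_CELL * rate(x_out)
    x = brentq(f, x, 0.9999)
    print(f"cell {cell + 1}: conversion = {x:.3f}")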
The third part is another pure mass-transfer unit, representing the rectifying zone of the reactive distillation. This zone is simulated as a refluxed absorber, a standard HYSYS(TM) operation.

Pilot plant characteristics
The main characteristics of the simulated pilot plant are the following: volume of catalyst in the pre-reactor 0.2 m^3; volume of catalyst in the column 0.21 m^3; diameter of the catalyst pellets 1 mm; column diameter 0.4 m; rectifying zone packing height 2 m; stripping zone packing height 3.5 m; height of the reactive zone of the column 1.7 m. The pilot plant is provided with instrumentation for the measurement of temperatures, flow rates and pressure. The composition is measured by sampling and laboratory analysis. Due to the limited accuracy of the flow rate instruments, the accuracy of the experimental data is limited: an error of about 10% in the evaluation of the pilot plant iso-amylenes conversion has to be considered.
4. Results and Discussion
Several simulated values of flow rates, temperatures and concentrations are presented in Table 2 for a set of representative operating conditions. The HYSYS(TM) simulation results for the TAME synthesis reactive distillation set-up presented in this work allow the following conclusions to be drawn. From the chemical transformation point of view it is profitable to place the reaction zone as close as possible to the top of the column; however, a minimal separation zone is needed above the reaction zone to separate TAME from the distillate. It is recommended to place the column feed below the reaction zone in order to ensure high reactant concentrations in this zone (the reactants being more volatile than the reaction product). The best structure for the RD column obtained from this simulation study involves 15 theoretical stages. Counting the plates from top to bottom, the best position for the reaction zone is theoretical plates 3 and 4, and the feed plate is plate 5. The optimal reflux ratio is 2, as a result of the trade-off between separation degree and energy saving.

Table 2. Simulation results.

Name                        MEOH     CATCRK   FEED     PROD     Tame prod  Top prod
Vapour fraction             0.0000   0.0000   0.0000   0.0000   0.0000     0.0000
Temperature [C]             65.00    46.78    50.00    78.57    136.8      62.60
Pressure [bar]              6.500    6.500    6.500    6.000    4.000      3.300
Molar flow [kgmole/h]       1.498    4.943    6.441    5.678    0.9020     4.663
Mass flow [kg/h]            48.00    352.3    400.3    400.3    91.33      309.0
Liquid vol. flow [USGPH]    15.94    146.0    161.9    158.9    31.15      127.3
Heat flow [kW]              -97.67   -171.0   -268.7   -268.7   -77.21     -192.6
Mole frac (Methanol)        1.0000   0.0000   0.2326   0.1295   0.0000     0.1335
Mole frac (2M-1-butene)     0.0000   0.0823   0.0632   0.0073   0.0001     0.0094
Mole frac (2M-2-butene)     0.0000   0.1585   0.1216   0.0680   0.0114     0.0558
Mole frac (TAME)            0.0000   0.0001   0.0001   0.1344   0.9708     0.0001
Mole frac (n-Pentane)       0.0000   0.0850   0.0652   0.0740   0.0051     0.0891
Mole frac (i-Pentane)       0.0000   0.4820   0.3699   0.4196   0.0018     0.5105
Mole frac (1-Pentene)       0.0000   0.0366   0.0281   0.0319   0.0003     0.0387
Mole frac (tr2-Pentene)     0.0000   0.1555   0.1193   0.1354   0.0104     0.1628
The measured iso-amylenes conversion values in three runs of the pilot plant, under identical experimental conditions (Table 2), were 70.97%, 81.26% and 78.77%. The simulated value, found for the same working conditions, is 76%. Taking into consideration the accuracy of the measurements, a fairly good agreement is observed between the simulated and experimental conversion values. However, further improvements are needed concerning the kinetic data, the flow model of the reaction zone and the mass transfer limitations around the catalyst pellets.
5. Conclusions
This paper presents a theoretical study of the modelling of reactive distillation column operation in TAME synthesis. The simulation results are in fairly good agreement with experimental data obtained in the pilot plant at SNP PETROM, INCERP Ploiesti subsidiary. The quality of the results is limited by the uncertainty introduced by the phase hydrodynamics in the reaction zone and by the phase equilibrium hypothesis. The authors foresee additional studies in order to better describe the phase hydrodynamics, the mass transfer outside and inside the catalyst pellets, and their influence on process performance.
6. Nomenclature
ai - activity of component i; Kaj - chemical equilibrium constant of reaction j; kj - reaction rate constant; rRj - rate of reaction j; T - absolute temperature, K.
7. References
Dima, R., Soare, G., Bozga, G. and Plesu, V., 2003, to be published.
Doherty, M.F. and Malone, M.F., 2001, Conceptual Design of Distillation Systems, McGraw-Hill.
Muja, I., Goidea, D. and Marculescu, N., 1986, Revista de Chimie, 37, 1047 (in Romanian).
Oost, C. and Hoffmann, U., 1996, Chem. Eng. Sci., 51, 329.
Piccoli, R.L. and Lovisi, H.R., 1995, Ind. Eng. Chem. Res., 34, 510.
Randriamahefa, S. and Gallo, R., 1988, J. Mol. Catal., 49, 85.
Rihko, L.K. and Krause, A.I.O., 1995, Ind. Eng. Chem. Res., 34, 1172.
Rihko, L.K., Kiviranta-Paakkonen, P.K. and Krause, A.I.O., 1997, Ind. Eng. Chem. Res., 36, 614.
Roemer, M.H. and Durbin, L.D., 1967, Ind. Eng. Chem. Fundam., 6, 120.
Sundmacher, K. and Hoffmann, U., 1994, Chem. Eng. Sci., 49, 4443.
Syed, F.H., Egleston, C. and Datta, R., 2000, J. Chem. Eng. Data, 45, 319.
8. Acknowledgement
This work was performed under the Romanian National Research Programme RELANSIN, Project no. 943.
Reduction of a Chemical Kinetic Scheme for Carbon Monoxide-Hydrogen Oxidation
R.B. Brad, M. Fairweather, J.F. Griffiths and A.S. Tomlin
School of Process, Environmental and Materials Engineering and Department of Chemistry, University of Leeds, Leeds LS2 9JT, UK
Abstract
The systems of differential equations that arise from comprehensive elementary reaction schemes are invariably large, stiff and strongly coupled, causing them to be computationally expensive to solve, and thus limiting their use within complex flow or dynamic models. There is hence a need for the systematic reduction of kinetic systems whilst maintaining the important features of the original full schemes. Methods of sensitivity analysis, principal component analysis, the quasi-steady state approximation, and dimension estimation through time-scale analysis are discussed. These methods are successfully used to create a reduced scheme for carbon monoxide-hydrogen oxidation, a system which demonstrates complex oscillatory behaviour in a spatially uniform flow reactor and represents a challenging problem for mechanism reduction.
1. Introduction
The design of modern chemical process reactors is aided by the use of computational fluid dynamics (CFD) to study the chemically reacting, turbulent flows encountered in their operation. At present, the majority of CFD codes used as the basis for design rely on approaches that introduce simplifying assumptions regarding chemical reaction, e.g. fast chemical reaction, which frequently exclude direct kinetic effects from such computations. This in turn severely limits the applicability and accuracy of these models, and hence their overall usefulness. The inclusion of kinetic effects into these prediction methods not only requires approaches that admit such effects within turbulent flow calculations, e.g. transported probability density function or conditional moment closure approaches, but also kinetic mechanisms with computer run times significantly less than those of the full kinetic schemes conventionally used to describe process operations. Methods available for the systematic derivation of reduced schemes from full kinetic descriptions are described below, together with an example of their performance when applied to the oxidation of mixtures of carbon monoxide and hydrogen. These methods are suitable for the reduction of any full kinetic mechanism to a reduced scheme that retains the important behavioural features of the original scheme, but which is computationally more efficient due to a reduction in the number of reactions and species present in the reduced scheme. Carbon monoxide-hydrogen oxidation is addressed since this system demonstrates complex oscillatory behaviour in a continuously stirred tank reactor (CSTR), and represents a challenging problem for
mechanism reduction. Additionally, it is of practical relevance to the oxidation of biofuels.
Figure 1. Numerically generated P/T diagram derived using the full carbon monoxide-hydrogen oxidation scheme with 0.5% hydrogen, showing the steady reaction and the oscillatory/complex oscillatory regions.

An overview of the methods used previously in mechanism reduction is presented in Tomlin et al. (1997). The present work uses a combination of existing methods to produce a carbon monoxide-hydrogen oxidation scheme with fewer reactions and species variables, but which accurately reproduces the dynamics of the full scheme. Local concentration sensitivity analysis was used to identify necessary species from the full scheme, and a principal component analysis of the rate sensitivity matrix was employed to identify redundant reactions. This was followed by application of the quasi-steady state approximation (QSSA) for the fast intermediate species, based on species lifetimes and quasi-steady state errors, and finally, the use of intrinsic low dimensional manifold (ILDM) methods to calculate the mechanism's underlying dimension and to verify the choice of QSSA species. The origin of the full mechanism and its relevance to existing experimental data is described first, followed by descriptions of the reduction methods used. The errors introduced by the reduction and approximation methods are also discussed. Finally, conclusions are drawn about the results, and suggestions made as to how further reductions in computer run times can be achieved.
2. Choice and Verification of Test Scheme
Experimental data (Johnson, 1991) have shown that, at certain temperatures and pressures, the oxidation of carbon monoxide-hydrogen mixtures in a CSTR exhibits a variety of behaviours, including steady reaction, oscillatory and complex oscillatory (enclosed region in Figure 1) regimes. The CO-H2 sub-set of the Leeds methane oxidation scheme (see http://www.chem.leeds.ac.uk/Combustion/methane.htm) was used to successfully model the behaviour of this system over a large region of the pressure-temperature (P/T) diagram, as shown in Figure 1, and at a wide range of relative initial CO, H2 and O2 concentrations. This comprehensive scheme contains 69 reactions and 11 species plus temperature, and displays the complex behaviour observed in the experimental study noted above. The predictions of the scheme were found to be
qualitatively accurate when compared to the experimental results of Johnson (1991), although the oscillatory region was slightly wider and shifted to lower P/T values.
3. Removal of Reactions from the Scheme
The removal of reactions from the scheme was based on a principal component analysis of the rate sensitivity matrix, which considers the local dependence of the rate of formation of a necessary species on the rate parameters. In using this approach the first stage is to identify which species in the scheme are considered necessary for accurate prediction of the chosen important species and features.

3.1. Identification of necessary species
Those species that are considered to be important must be chosen first, and these are likely to include the main reactants and products. From this list of n important species, by use of a sensitivity analysis method, those species that require accurate concentrations in order for the important species to be modelled accurately are identified over a range of conditions and time-points. These species are referred to as necessary species, and include the important species, plus any species found to have an effect on the important species. The measure of the effect of changing the concentration of each species on the rates of production of the important species is defined as

Bi = SUM_{j=1..n} [ (ci/fj) Jji ]^2   (1)
where Jji = dfj/dci is the Jacobian, fj is the rate of production of important species j, and ci is the concentration of species i. If Bi is found to be above a user-defined tolerance then species i becomes a necessary species and it is included in any further summation. The Bi values must then be recalculated iteratively, summing over all necessary species, until no more species are deemed necessary. This produces a list of N > n necessary species that is used in identifying reactions for removal from the starting mechanism.

3.2. Identification of redundant reactions
Having defined a set of N necessary species it is now necessary to define a sub-set of reactions that still produces accurate concentrations of these species and temperature. This is achieved using a principal component analysis (Tomlin et al. (1997) and references therein) of the normalised sensitivity matrix F^T F, where

Fij = (kj/fi)(dfi/dkj) = d(ln fi)/d(ln kj) = (vij Rj)/fi   (2)
and kj is the j-th rate parameter, vij is the stoichiometric coefficient of species i in reaction j, and Rj is the reaction rate. It can be shown that F^T F can be expressed in terms of its eigenvectors and eigenvalues, where the eigenvectors reveal the coupling between reactions and the eigenvalues give the weight of the corresponding eigenvector.
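A minimal sketch of this principal component analysis, assuming a precomputed normalised sensitivity matrix F (rows: necessary-species rates, columns: reactions); reactions whose contributions to all heavily weighted eigenvectors fall below the thresholds are flagged as redundant. The matrix and thresholds here are illustrative, not the values used in the study.

import numpy as np

def redundant_reactions(F, eig_tol=1e-4, vec_tol=0.2):
    """Flag reactions contributing little to the principal components of F^T F.

    F       : (n_species, n_reactions) normalised rate sensitivity matrix
    eig_tol : eigenvalue weight below which a principal component is ignored
    vec_tol : eigenvector contribution below which a reaction is negligible
    """
    A = F.T @ F                          # matrix whose eigensystem is analysed
    lam, U = np.linalg.eigh(A)           # eigenvalues ascending, columns = eigenvectors
    significant = lam > eig_tol * lam.max()
    # a reaction is needed if it contributes strongly to any significant eigenvector
    needed = (np.abs(U[:, significant]) > vec_tol).any(axis=1)
    return np.where(~needed)[0]          # indices of candidate redundant reactions

rng = np.random.default_rng(0)
F = rng.normal(size=(12, 69))            # illustrative stand-in for the real matrix
F[:, 40:] *= 1e-4                        # make some reactions nearly irrelevant
print("redundant candidates:", redundant_reactions(F))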
The user may then define thresholds for the maximum contribution to an eigenvector and for the weight of an eigenvalue, and remove reactions with contributions below the threshold. The thresholds on the eigenvalues and eigenvectors are increased, and any reactions with small contributions removed, until the scheme fails to reproduce the results of the full scheme. In this way a scheme with only 24 reactions was created and, as an indirect effect of removing reactions, one species (HCO) no longer occurs in the mechanism. A sample trajectory is plotted in Figure 2(a), demonstrating how the oscillatory kinetic features of the system are preserved by the reduced scheme. Although there is a phase shift in results derived from the reduced mechanism, a slight shift in ambient temperature of 1.9 K brings the predictions of the reduced mechanism in line with those of the full scheme, as shown in Figure 2(b). Hence, the qualitative behaviour is retained in its entirety, and the consequence of a marked reduction in the scale of the mechanism is only equivalent to a small shift in P/T space.
4. The Quasi-Steady State Approximation
The removal of 65% of the reactions does not, however, reduce the computational time drastically. To do this, a reduction of the number of differential equations involved in the system, or a reduction of the stiffness of the system, is required. This can be achieved either by removing species altogether, as has occurred in the above scheme, or by making approximations to some of the equations. The quasi-steady state approximation (QSSA) works on the assumption that fast-reacting species locally equilibrate within the system. The equations for QSSA species can be approximated by setting dci/dt = fi = 0, and an expression for each such species can be written in terms of the other species; their concentrations are then calculated using an analytical expression or through inner iteration (Peters and Rogg, 1992), which is more efficient than solving a differential equation. An indication as to which species should be chosen for the QSSA is given by considering the instantaneous QSSA error and/or species lifetimes (Turanyi et al., 1993). The instantaneous QSSA error for a single species is defined as dci = fi/Jii, where the reciprocal of Jii is the lifetime of species i.
Figure 2. Comparison of the full scheme, the reduced scheme, and the reduced scheme with QSSA at 15 Torr. Left (a) - 695 K ambient temperature; right (b) - shifted ambient temperatures (full scheme 695 K, reduced scheme 693.1 K, reduced scheme with QSSA 691 K).
If the species lifetime is very small, or if the rate of production of a species is very small, the value of the instantaneous QSSA error will be small, and the species should be considered for application of the QSSA. Considering the QSSA error for each species at a large number of points on the P/T diagram, an ordered list of QSSA candidates was created. It was found that the QSSA could be applied to three species before the scheme started to give spurious results. The dotted trajectory in Figure 2(a) has the QSSA applied to the species H, OH and HO2, and it is evident that the dynamics of the system are still preserved, with a small shift in ambient temperature again bringing the reduced mechanism in line with the full scheme (Figure 2(b)). The resulting scheme contains 24 reactions and 10 species, with the QSSA applied to 3 of these species. This leaves a mixed system of ordinary differential and algebraic equations, with 8 differential equations (7 species and temperature) and 3 algebraic equations, which has reduced stiffness.
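A minimal sketch of the QSSA candidate ranking described above, assuming the species production rates f and the Jacobian J are available at a sampled state; the numerical values below are illustrative only.

import numpy as np

def qssa_candidates(f, J, names):
    """Rank species by instantaneous QSSA error |f_i / J_ii| (small = good candidate)."""
    err = np.abs(f / np.diag(J))         # instantaneous QSSA error per species
    lifetime = 1.0 / np.abs(np.diag(J))  # species lifetime = 1/|J_ii|
    order = np.argsort(err)
    return [(names[i], err[i], lifetime[i]) for i in order]

# illustrative state: three fast radicals and one slow species
names = ["H", "OH", "HO2", "CO"]
f = np.array([1e-9, 5e-10, 2e-9, 1e-3])          # production rates
J = np.diag([-1e5, -5e4, -2e4, -1e-1])           # diagonal of the Jacobian
for name, e, tau in qssa_candidates(f, J, names):
    print(f"{name:4s} QSSA error = {e:.2e}  lifetime = {tau:.2e} s")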
5. Calculating the Dimension of the Underlying System
Due to the existence of "fast" variables we can often assume that the dynamics of a system, after some time, are governed by a reduced set of equations describing the motion on a slow manifold of smaller dimension than the original phase space. Using perturbation methods it is possible to estimate the error induced by assuming that the system moves on such a slow manifold (Tomlin et al., 2001). By choosing a threshold for this error it can be used to calculate the underlying dimension of the system at any point along a reaction trajectory. The perturbation method used transforms the system from the original species variables to a new set of variables zi, called "modes", each corresponding to a system time-scale. The collapse of a mode onto a lower dimensional manifold is indicated by

dzi = (xi^T f)/li

falling below a user-defined threshold value, where xi^T and li are the left eigenvectors and corresponding eigenvalues of the Jacobian. The method can be used to decide when a mode has approached a slower manifold, and hence when the behaviour of the system can be modelled using a lower dimension.
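A minimal sketch of this mode-collapse test, assuming f and the Jacobian J at a point on the trajectory; the local dimension is counted as the number of modes whose displacement from the manifold exceeds the threshold. Inputs are illustrative.

import numpy as np
from scipy.linalg import eig

def local_dimension(f, J, tol=1e-6):
    """Count modes not yet collapsed: |x_i^T f / lambda_i| above tol."""
    lam, XL = eig(J, left=True, right=False)   # lam[i] with left eigenvector XL[:, i]
    dz = (XL.conj().T @ f) / lam               # mode displacements from the manifold
    return int(np.sum(np.abs(dz) > tol))

# illustrative 4-variable system with two fast, collapsed modes
J = np.diag([-1e6, -1e4, -1.0, -0.1])
f = np.array([1e-3, 1e-3, 1e-3, 1e-3])
print("local dimension:", local_dimension(f, J))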
Figure 3. Dimension of the system during the trajectory of the full scheme in Fig. 2.
During the oscillatory behaviour observed in the CSTR, the dimension of the slow manifold is seen to oscillate, with a rapidly varying, high dimension near temperature peaks, and a lower, more stable dimension during periods of lesser activity and lower temperature-dependent reaction rates (Figure 3).
6. Conclusions and Future Work
It has been demonstrated that by using a combination of reduction methods a 69-step, 11-species chemical kinetic description of carbon monoxide-hydrogen oxidation can be systematically reduced in size and computational complexity to a 24-step scheme with 7 species plus 3 QSSA species that retains the complex behaviour of the original mechanism. This has been achieved through formal mathematical techniques, rather than being based on chemical intuition. The QSSA analysis has shown that 8 variables are enough to model this system; however, perturbation techniques suggest that there is scope for the removal of further variables. The identification of the major contributing species through time-scale analysis will aid the choice of variables for future reductions. Although the present work did result in savings in computational time, these were not significant. This is largely due to the complexity of the system under investigation and the extent to which the reactions are coupled. For other, less strongly coupled systems, typical of many chemical engineering applications, the techniques applied have already been demonstrated to produce significant run-time savings. The repro-modelling approach (Turanyi, 1994) has also been used to produce substantial reductions through the fitting of large quantities of kinetic data to polynomial difference equations or look-up tables whose dimensions are based on time-scale separation techniques such as those mentioned above. The use of fitted models saves the computational expense of solving a system of differential equations at every time point. Future work on the present system will use this method, with the choice of variables for analysis being based on existing reduction and time-scale results. Future work will also address the assessment of errors introduced through application of the reduction methods.
7. References
Johnson, B.R., 1991, Non-Linear Dynamics of Combustion Reactions in a Well Stirred Reactor, Ph.D. Thesis, University of Leeds.
Peters, N. and Rogg, B., Eds., 1992, Reduced Kinetic Mechanisms for Applications in Combustion Systems, Springer-Verlag, Berlin.
Tomlin, A.S., Turanyi, T. and Pilling, M.J., 1997, in Low Temperature Combustion and Autoignition, Ed. Pilling, M.J., Elsevier, Amsterdam.
Tomlin, A.S., Whitehouse, L., Lowe, R. and Pilling, M.J., 2001, Faraday Discussions, 118.
Turanyi, T., Tomlin, A.S. and Pilling, M.J., 1993, J. Phys. Chem., 97, 163.
Turanyi, T., 1994, Proc. Twenty-Fifth Symp. (Int.) on Combustion, pp. 949-955.
8. Acknowledgement
The authors wish to thank the EPSRC for their financial support of the work described.
A Procedure for Constructing Optimal Regression Models in Conjunction with a Web-based Stepwise Regression Library
N. Brauner and M. Shacham
School of Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
Chem. Eng. Dept., Ben-Gurion University, Beer-Sheva 84105, Israel
Abstract
The construction of optimal (highest-precision, stable) regression models is considered. A new algorithm is presented which starts by selecting the independent variables to be included in a linear model. If such a model is found inappropriate, increasingly complex, higher-precision models are considered. These are obtained by the addition of nonlinear functions of the independent variables and by transformation of the dependent variable. The proposed algorithm is incorporated in the SROV toolbox (Shacham, M. and N. Brauner, 2002, Computers chem. Engng., in press). Using an example, it is demonstrated that the algorithm generates several optimal models of gradually increasing complexity and precision, from which the user can select the most appropriate model for his needs.
1. Introduction
Analysis, reduction and regression of experimental and process data are critical ingredients of various CAPE activities, such as process design, monitoring and control. The accuracy and reliability of process-related calculations critically depend on the accuracy, validity and stability of the regression models fitted to experimental data. It is usually unknown, a priori, how many explanatory variables (independent variables and/or their functions) should be included in the model. An insufficient number of explanatory variables results in an inaccurate model, where some independent variables that under certain circumstances significantly affect the dependent variable are omitted. On the other hand, the inclusion of too many explanatory terms renders an unstable model (Shacham and Brauner, 1999). Often, transformations (such as the Box and Cox, 1964 "maximum likelihood" transformations) of the dependent and/or some of the independent variables should be applied in order to obtain the most accurate and stable regression model. The presently available stepwise regression programs have several shortcomings for use in CAPE-related computations. They do not search for optimal values of the Box-Cox transformation parameters for the dependent and/or the independent variables, so the search must be conducted manually. They may be highly sensitive to numerical error propagation caused by collinearity among the independent variables and yield inaccurate results without giving any warning concerning the inaccuracy. Most of them
do not consider the accuracy of the available data in determining the number of variables to be included in the model, and thus may yield an unstable regression model. The development of better programs for stepwise regression and data reduction is hindered by the lack of data sets which are large enough and representative of CAPE-related applications, and which can be used both for defining the needs for further developments and for testing the software. In order to address this need, we have started developing a web-based library, which includes data such as physical and thermodynamic properties, process monitoring data and data used for estimating properties from descriptors of molecular structure. Some data sets contain over a hundred independent variables and over 200 data points. The library contains the data sets, including information concerning the experimental error, pertinent references and the optimal models that we have found. In the course of the library development, we have found that, in general, applying stepwise regression and/or Box-Cox transformations separately does not yield all the optimal models. These and additional techniques should be applied in a systematic, procedural manner in order to obtain the best results. In the next section, some basic concepts are reviewed and the proposed algorithm for the selection and identification of an optimal regression model is presented. In section three, the proposed procedure is demonstrated using refinery data that were extensively discussed in the literature (Daniel and Wood, 1980). All the calculations are carried out with a modified version of the SROV program of Shacham and Brauner (2002).
2. Basic Concepts
A standard linear regression model can be written:

y = b0 + b1 x1 + b2 x2 + ... + bn xn + e   (1)

where y is an N-vector of the dependent variable, xj (j = 1, 2, ..., n) are N-vectors of explanatory variables, b0, b1, ..., bn are the model parameters to be estimated and e is an N-vector of stochastic terms (measurement errors). It should be noted that an explanatory variable can represent an independent variable or a function of one or more independent variables. The vector of estimated parameters b = (b0, b1, ..., bn) can be calculated via the least-squares error approach by solving the normal equation:

X^T X b = X^T y   (2)

where X = [1, x1, x2, ..., xn] is an N x (n+1) data matrix and X^T X = A is the normal matrix. This method is rarely used for actual calculations, since it is subject to accelerated propagation of numerical errors in cases of collinearity (see, for example, Brauner and Shacham, 1998). The condition number of the normal matrix, K(A) (the ratio of the absolute values of the maximal to minimal eigenvalues), is used as a convenient measure of the ill-conditioning of the regression problem. Alternative methods for least-squares regression are described by Bjorck (1996). The SROV program combines stepwise regression with QR decomposition to find the optimal regression model. The QR decomposition solves the equation Xb = y by
decomposing X into the product of a matrix Q with orthogonal columns and an upper triangular matrix R. The SROV algorithm generates the Q matrix using the Gram-Schmidt method (see, for example, Bjorck, 1996). Variables are selected to enter the regression model according to their level of correlation with the dependent variable, and they are removed from further consideration when their residual information falls below the noise level. The addition of new variables to the model stops when the residual information of all remaining variables is below their noise level. A detailed description of the SROV algorithm and of the criteria used for variable selection and replacement can be found in Shacham and Brauner (1999, 2002). The quality of the regression model is assessed in view of numerical and graphical information, which includes the model variance, the confidence intervals on the parameter estimates, the linear correlation coefficient, and residual and normal probability plots. The model variance is defined as s^2 = [(y - Y)^T (y - Y)]/v, where y and Y are the measured and calculated vectors of the dependent variable, respectively, v is the number of degrees of freedom (v = N - (k + 1)) and k is the number of independent variables included in the model. The linear correlation coefficient is defined by R^2 = [(Y - ym)^T (Y - ym)]/[(y - ym)^T (y - ym)], where ym is the mean of y. The variance and R^2 are used for comparison between various models, where a regression model that yields a smaller variance and an R^2 value closer to 1 is considered superior. The confidence interval on parameter j is defined by dbj = t(v, a) (s^2 ajj)^(1/2), where ajj is the j-th diagonal element of the inverse of the normal matrix, and t(v, a) is the statistical t-distribution corresponding to v degrees of freedom and a desired confidence level a. A model where one or more of the confidence intervals are greater in absolute value than the associated parameter values (i.e. a parameter value is not significantly different from zero) is considered unstable (or even ill-conditioned), and is therefore usually considered unacceptable. The signal-to-noise ratio indicators used by the SROV program for variable selection usually remove from the model variables associated with insignificant parameter values. However, the removal of the free parameter (b0) is the user's responsibility; this may be required based on theoretical considerations or due to an excessive confidence interval on this parameter. If the distribution of the errors in the residual plot (a plot of y - Y versus Y) is random, so that no clear trend can be identified, the model can be considered an appropriate representation of the data. Otherwise, the use of the Box-Cox transformation and/or the addition of nonlinear functions of the independent variables should be considered. The Box-Cox transformation is a power transformation of the dependent variable, y' = y^L, where the parameter L is selected so as to minimize the variance of the resultant correlation. In order to enable a meaningful comparison of the variances of regression models obtained with different L values, the dependent variable must be standardized. The standardized variables employed for the search are w = K1 (y^L - 1) for L not equal to 0 and w = K2 ln(y) for L = 0, where K2 = (PROD yi)^(1/N) is the geometric mean of the y values and K1 = 1/(L K2^(L-1)). In case the Box-Cox transformation with linear terms of the independent variables does not yield a random distribution of the residuals, adding nonlinear functions of the independent variables should be considered.
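A minimal sketch of these diagnostics, assuming the data matrix X already contains a column of ones; it solves the least-squares problem by QR decomposition (in the spirit of SROV, though SROV's Gram-Schmidt selection logic is not reproduced here) and reports the variance, R^2 and the condition number of the normal matrix. The data are illustrative.

import numpy as np

def qr_regression(X, y):
    """Least-squares fit via QR, with the diagnostics used to judge the model."""
    Q, R = np.linalg.qr(X)                      # X = Q R, Q has orthonormal columns
    beta = np.linalg.solve(R, Q.T @ y)          # solve R beta = Q^T y
    y_hat = X @ beta
    nu = len(y) - X.shape[1]                    # degrees of freedom N - (k+1)
    s2 = (y - y_hat) @ (y - y_hat) / nu         # model variance
    ybar = y.mean()
    r2 = ((y_hat - ybar) @ (y_hat - ybar)) / ((y - ybar) @ (y - ybar))
    kappa = np.linalg.cond(X.T @ X)             # condition number of the normal matrix
    return beta, s2, r2, kappa

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 36)
X = np.column_stack([np.ones(36), x])           # free parameter + one variable
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, 36)
beta, s2, r2, kappa = qr_regression(X, y)
print(beta, s2, r2, kappa)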
It is customary to use a full quadratic model (containing nonlinear functions of the forms xi*xj and xi^2) or a polynomial model (including higher powers of x) as the initial bank of explanatory variables for carrying out the stepwise regression, unless theoretical considerations suggest different functional forms. It is worth noting that quadratic and polynomial models tend to be ill-conditioned if many terms or high powers of the independent variable are included in the model. Ill-conditioning of the model is indicated by a very large value of the condition number of the normal matrix. Such ill-conditioning can be prevented by transformation of the independent variables to the [-1, +1] (or a similar) range. One such transformation, supported by the SROV program, is standardization, where the transformed variable is defined by z = (x - xm)/st.dev(x), xm being the mean of x.
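A minimal sketch of the two transformations just described: the standardized Box-Cox transform of the dependent variable and the standardization of an independent variable. The data are illustrative.

import numpy as np

def boxcox_standardized(y, lam):
    """Standardized Box-Cox transform: variances are comparable across lambda."""
    K2 = np.exp(np.log(y).mean())        # geometric mean of y
    if lam == 0.0:
        return K2 * np.log(y)            # w = K2 ln(y)
    return (y**lam - 1.0) / (lam * K2**(lam - 1.0))   # w = K1 (y^lam - 1)

def standardize(x):
    """Map an independent variable to a well-scaled range: z = (x - mean)/std."""
    return (x - x.mean()) / x.std(ddof=1)

y = np.array([12.0, 35.0, 18.0, 50.0, 27.0])
for lam in (0.0, 0.3, 0.5, 1.0):
    w = boxcox_standardized(y, lam)
    print(f"lambda = {lam:.1f}: var(w) = {w.var(ddof=1):.3f}")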
3. The Procedure for Constructing Optimal Regression Models
For the sake of brevity, only cases where there is no prior information (from theoretical considerations and/or from experience) on nonlinear functions of the independent variables to be included in the regression model are considered. For the dependent variable, the commonly used function ln(y) can be used as a starting point for the search instead of y. Based on the principles outlined in the previous section, the search procedure for the optimal regression model can be outlined as follows (a code sketch is given after the list):
1. The Box-Cox parameter is set at L = 1 for using y as the dependent variable, or at L = 0 for using ln(y). A search for the independent variables to be included in the optimal linear model is carried out using SROV. The search is repeated with b0 set at zero, and the best model is selected according to the variance and R^2 values. The residual plot for the selected model is examined; if the errors are randomly distributed, finish, otherwise proceed to step 2.
2. A search for the Box-Cox parameter value that yields a minimal variance is carried out, using SROV in an inner loop to select the independent variables to be included in the model for each value of L. The residual plot for the selected model is examined; if the errors are randomly distributed, finish, otherwise proceed to step 3.
3. Step 2 is repeated using a quadratic model (in the case of several independent variables) or a polynomial model (one independent variable). The condition number of the normal matrix is checked; if it is not much larger than that of the linear model, finish, otherwise proceed to step 4.
4. Step 3 is repeated using transformed independent variables. Note that the transformation of the variables requires adding a free parameter to the model even if it was omitted in previous steps.
The most appropriate model is selected by comparing the models obtained in steps 1-4 on the basis of the variance, R^2, the residual plots and additional practical considerations (complexity of the model, derivatives, etc.).
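A minimal sketch of the outer Box-Cox search in step 2, assuming a select_variables routine that stands in for SROV's stepwise selection (here crudely approximated by dropping variables whose confidence intervals include zero); all names are illustrative, not the SROV API.

import numpy as np
from scipy.stats import t as t_dist

def fit(X, w):
    beta, *_ = np.linalg.lstsq(X, w, rcond=None)
    resid = w - X @ beta
    nu = len(w) - X.shape[1]
    s2 = resid @ resid / nu
    ci = t_dist.ppf(0.975, nu) * np.sqrt(s2 * np.diag(np.linalg.pinv(X.T @ X)))
    return beta, s2, ci

def select_variables(X, w):
    """Crude stand-in for SROV: drop variables with insignificant parameters."""
    cols = list(range(X.shape[1]))
    while True:
        beta, s2, ci = fit(X[:, cols], w)
        drop = [c for c, b, d in zip(cols, beta, ci) if abs(b) < d]
        if not drop or len(cols) <= 1:
            return cols, s2
        cols.remove(drop[0])

def boxcox_search(X, y, lambdas):
    """Step 2: pick the lambda (and variable subset) minimizing the variance."""
    K2 = np.exp(np.log(y).mean())
    best = None
    for lam in lambdas:
        w = K2 * np.log(y) if lam == 0 else (y**lam - 1) / (lam * K2**(lam - 1))
        cols, s2 = select_variables(X, w)
        if best is None or s2 < best[2]:
            best = (lam, cols, s2)
    return best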
4. Operation of a Petroleum Refining Unit - An Example
This example was first introduced by Gorman and Toman (1966) and has since been extensively used in the statistical literature. The data set contains 36 data points of 10 independent variables and one dependent variable, where each row in the data set represents one day of operation of a petroleum refining unit. The complete data set of this example is given in Daniel and Wood (1980). They have also carried out a stepwise regression analysis of the data set using a linear model that includes a free parameter and the transformation ln(y) for the dependent variable. This corresponds to step 1 of the proposed algorithm with L = 0. The optimal solution obtained for this case by SROV is shown in Table 1. Note that the range of the dependent variable, the parameter values
and the variance correspond to w = K2 ln(y) = 128.68 ln(y). Six out of the ten variables are included in the regression model. In terms of stability, this is a borderline case, since the confidence interval on b0 is very close to the parameter (absolute) value itself. The residual plot for this model is shown in Figure 1, indicating a clear trend in the error distribution: for low values of the dependent variable (w) the residuals tend to be negative, and for high values the residuals are mostly positive. Thus, it is necessary to proceed to the following steps of the proposed algorithm. The optimal results obtained by SROV in the various steps of the algorithm are summarized in Table 2. The insignificant value of b0 obtained in step 1 implies that it can be removed from the model (step 1.2). Indeed, using a linear model with L = 0 but without a free parameter improves the results in several respects: the variance decreases, R^2 gets closer to one, there are only six parameters in the model and all the parameter values are significantly different from zero. The residual plot, however, still indicates an opportunity for further model refinement. Proceeding to step 2, a search for an optimal Box-Cox parameter (L_optimal = 0.3) results in only a marginal reduction of the variance. Introducing a quadratic model in step 3 increases the number of potential explanatory variables to 65. As shown in Table 2, the inclusion of the quadratic terms leads to significant improvements, yielding a stable model with 9 parameters and a variance of about half the value obtained with the previous linear models. Obviously, introducing nonlinear effects of the independent variables may change the optimal value of the Box-Cox parameter. Thus, the search for the optimal value of L is carried out simultaneously with the search for the variables to be included in the model.

Table 1. Optimal regression model for L = 0 (K2 = 128.68), linear model including a free parameter.

Parameter (variable)   Value        Confidence interval
b0                     758.0123     267.1427
b1 (x1)                -11.4787     11.4697
b2 (x3)                -223.1336    175.5181
b3 (x5)                -1078.4844   755.9046
b4 (x6)                136.0463     85.1292
b5 (x8)                9.855        7.5778
b6 (x10)               5.4129       3.6777
no. of parm.           7
variance               1322.05
R^2                    0.851540
K(A)                   5.4228E+05
Figure 1. Residual plot for the optimal regression model: L = 0, linear model including a free parameter.
Table 2. Results summary of the optimal models at the various stages of the algorithm.

Algorithm stage      1.1         1.2         2           3           4
L                    0           0           0.3         0.55        0.618
model                linear      linear      linear      quadratic   quadratic
transformation       no          no          no          no          yes
free parm.           yes         no          no          no          yes
no. of parm.         7           6           6           9           13
variance             1322.05     1190.50     1060.72     588.54      344.51
R^2                  0.85154     0.86266     0.86799     0.94024     0.97098
K(A)                 5.4228E+05  5.7534E+07  5.7534E+07  1.7786E+13  99.2685
However, since the condition number of the resulting quadratic model is greater by several orders of magnitude than those of the linear models, a transformation of the independent variables is advisable (step 4). The optimal model obtained by SROV using the quadratic model with transformed variables includes 12 explanatory variables (13 parameters), where all the parameter values are significantly different from zero. The variance is the smallest of all the models tested and R^2 is the closest to one. The condition number is very small in comparison with those of the other models, so this solution can be considered highly accurate. The linearity of the normal probability plot for this case indicates a normal distribution of the residuals.
5. Conclusions
It has been demonstrated that the proposed algorithm represents considerable progress in the modeling and regression of data, especially in cases where there is no a priori information on the model structure, either from theory or from experience. The algorithm starts by identifying the independent variables of an optimal (lowest-variance, stable) linear model and gradually progresses to models of increasing complexity and precision, as necessary. Along the route, the algorithm generates several optimal models from which the user can select the one that is most appropriate for his needs, while considering the model complexity and precision.
6. References
Bjorck, A., 1996, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA.
Box, G.E.P. and Cox, D.R., 1964, J. of the Royal Statistical Society B, 26, 211.
Brauner, N. and Shacham, M., 1998, J. of Math. and Computers in Simulation, 48, 77.
Daniel, C. and Wood, F.S., 1980, Fitting Equations to Data - Computer Analysis of Multifactor Data, 2nd Ed., John Wiley, New York.
Gorman, J.W. and Toman, R.J., 1966, Technometrics, 8, 27-51.
Shacham, M. and Brauner, N., 1999, Chem. Eng. Process., 38, 477.
Shacham, M. and Brauner, N., 2002, Computers chem. Engng. (in press).
Dynamic Simulation of the Borstar(R) Multistage Olefin Polymerization Process
C. Chatzidoukas, J.D. Perkins, E.N. Pistikopoulos and C. Kiparissides
Department of Chemical Engineering and Chemical Process Engineering Research Institute, Aristotle University of Thessaloniki, PO Box 472, 54006 University City, Thessaloniki, Greece
Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London SW7 2BY, UK
Abstract
This study deals with the development of a dynamic model for an industrial olefin polymerization plant (Borstar(R)). The model captures the dynamic behaviour of the different process units, and accounts for molecular polymer properties and for the thermodynamic properties of polymer mixtures using an advanced equation of state. The model validity is tested against industrial data.
1. Introduction
Polyolefins are commonly produced by solid-catalyzed polymerization systems, which permit the polymerization to proceed at moderate operating conditions (pressure and temperature) in i) liquid-slurry, ii) solution and iii) gas-phase polymerization processes. The commercial success of catalytic polymerization in the polymer industry, in parallel with the market demand for polymers with properties tailored to specific applications, has prompted a continuous step-up in catalyst technology. This progress in catalytic systems has kept step with an analogous development in polymerization reactor systems. Catalyst morphology and its size distribution, in conjunction with reactor residence time and operating characteristics, have a significant impact on the polymer molecular architecture, with a pronounced effect on the final product's mechanical properties and processing characteristics. Therefore, the polymerization process and the reactor system, as well as the catalyst morphology, are carefully chosen based on the desired range of polymer properties. Borstar(R) is an industrial olefin polymerization plant/technology which combines different polymerization processes and reactor units, utilizing an advanced catalytic system. In the present work, a detailed model for the dynamic and steady-state simulation of this industrial plant has been developed. A comprehensive kinetic model for the ethylene-1-butene copolymerization over a two-site catalyst was employed to predict the MWD and CCD in the Borstar(R) process. The Sanchez-Lacombe equation of state (S-L EoS) was employed for the thermodynamic properties of the polymerization system and for the phase equilibrium calculations in the process units.
Corresponding author. Tel.: +30310-99-6212, Fax: +30310-99-6198, E-mail: cypress@alexandros.cperi.certh.gr
2. Polymerization Plant
The Borstar(R) process is an industrial-scale continuous multistage olefin polymerization plant which produces a broad variety of polyethylene (PE) grades (Figure 1). The combination of different polymerization processes, different reactor units and a multisite catalyst contributes to the wide distribution of the final polymer properties. The plant consists of three reactors (two Loop reactors and a fluidised bed reactor (FBR)) of different dimensions and operating conditions, and a Flash separator. Based on the polymerization process implemented in each reactor unit, the plant is divided into two sections: the slurry-phase polymerization section, which provides an attractive environment for a highly crystalline, low molecular weight product, and the gas-phase polymerization section, where a high molecular weight, low crystallinity polymer is favoured. The Loop reactors, which permit operation under high polymer concentration conditions, are employed for the slurry-phase polymerization, while the FBR is used for the gas-phase polymerization. The sequence of plant units is selected to comply with safety and operational rules, considering the path of catalyst fragmentation, the gradual reduction of catalyst activity and the growth of the polymer particles. Specifically, the Loop reactors are located at the beginning of the plant, followed by the FBR, to avoid dissolution of the amorphous polymer in the liquid inventory existing in the Loop, which would make the resin tacky and hard to process (Zacca et al., 1996). The Flash separator between the reactors is necessary to flash the liquid polymer mixture at the output of the Loop to gas/solid conditions before it enters the second section of the plant. The Borstar(R) process utilizes a novel, highly active Ziegler catalytic system, which ensures high productivity in all the reactor units. Fresh catalyst is fed only to the first Loop reactor, which serves as a "prepolymerization reactor" with a short residence time, where the polymerization is carried out under mild operating conditions in terms of monomer concentrations and/or temperature. Hence, the critical phenomena at the beginning of polymerization, such as catalyst fragmentation and sharp temperature rise, are easily handled in this "prepolymerizer", and the polymer microparticles are then transferred to the subsequent reactor units. As the polymer particles are conveyed from the Loop reactors to the FBR, the polymer grows under widely different operating conditions, resulting in a multilayer product with bimodal MW and copolymer composition distributions. To further reduce the hazard of temperature runaway due to the exothermic polymerization reaction, inert diluents (propane in the Loop reactors and N2 in the FBR) are fed to decrease the monomer concentrations and therefore the polymerization rate (Ferrero and Chiovetta, 1990).

3. Mathematical Model
In this work a mesoscale/macroscale level approach to the Borstar(R) plant is attempted, focusing on the average polymer properties and on the dynamic behaviour and control of the process units. To describe the kinetics of ethylene-1-butene copolymerization in the plant, a unified kinetic scheme for the three reactor units, based on a two-site Ziegler catalyst, is employed (Table 1). The symbol P(n,i)^k denotes the concentration of "live" copolymer chains of total length n ending in an "i" monomer unit, formed at the "k"-type catalyst active site; P0^k and Dn^k denote the concentrations of the activated vacant catalyst sites of type "k" and of "dead" copolymer chains of length n produced at the "k"-type active site, respectively.
3. Mathematical Model In this work a mesoscale/macroscale level approach of the Borstar® plant is attempted focusing on the study of average polymer properties, dynamic behaviour and control of process units. To describe the kinetic of ethylene-1-butene copolymerization in the plant a unified kinetic scheme for the three reactor units based on a two-site Ziegler catalyst is employed (Table 1). The symbol F^. denotes the concentration of "live" copolymer chains of total length n ending in an "i" monomer unit, formed at the "k" catalyst active site. PQ^ and D^ denote the concentrations of the activated vacant catalyst sites of
595 Table 1: Kinetic mechanism of olefin copolymerization over a Ziegler catalyst. Description Activation
Reaction
Aluminum Alkyl: Sp *^ + A
^^ ) P ^
Propagation Initiation: Additional units:
pk + [ M . ] - j 4 _ ^ p k . Pn,i + [Mil-
^A n+1.1
Deactivation Spontaneous:
- ! ^ C S + D!S
Chain Transfer Spontaneous: by Hydrogen: by Monomer:
Pt.
'''''" 'PO + DS [H2]^^Mij 111
_k Ir
_v
V
- ^ P O + DS
Figure 1. Schematic representation of the Borstar(R) process.
type "k" and "dead" copolymer chains of length n produced at "k" catalyst active site, respectively. Pseudokinetic rate constants (Hutchinson et al., 1992) are used to simplify the kinetic rate expressions for the multicomponent polymerizations, while the moments of the total number chain length distributions (TNCLDs) for "live" and "dead" copolymer chains are defined (Hatzantonis et al., 2000) to describe the conservation of polymer chains of various lengths. 3.1. Reactor model The reactor units are approximated as continuous stirred tank reactors (CSTRs). This assumption is supported by the good mixing pattern of the Loop reactors and the similar behaviour of FBRs to CSTRs (Chinh, et al, 1996). Bearing in mind that the same kinetic mechanism applies for the three CSTRs, a generalized dynamic model is developed which is properly adapted to every reactor, considering their different operating conditions and that fresh catalyst is fed only to the "prepolymerizer". The model accounts for a two-phase polymerization system, a polymer-rich and a polymerlean phase, which practically contains no polymer. Polymerization is carried out in the first phase only, where monomers and other components are solute in polymer particles. Thermodynamic equilibrium between the two phases is assumed to settle instantaneously. Polymer crystallinity has a pronounced effect on the polymerization since the light components are soluble only in the amorphous fraction of polymer particles. Therefore, an effective volume (Vgf) of polymer phase is defined and polymerization occurs only there. However, both energy balance and components mass balances are written with respect to the overall reactor volume (Vr). The unsteady-state mass balances for monomers (Mi), hydrogen (H2), nitrogen (N2), propane (Prop) and two types of polymer, polymer produced in the current reactor unit (Cpoi) and polymer coming from a previous unit (Cpre) with different molecular properties, are derived as follows:
d(Vr [Mi])/dt = Fin xin,Mi + Frec xv,Mi - Frec0 xv,Mi - Fout xout,Mi - Vef SUM_k R_Mi^k   (1)

d(Vr [H2])/dt = Fin xin,H2 + Frec xv,H2 - Frec0 xv,H2 - Fout xout,H2 - Vef SUM_k R_H2^k   (2)

d(Vr [N2])/dt = Fin xin,N2 + Frec xv,N2 - Frec0 xv,N2 - Fout xout,N2   (3)

d(Vr [Prop])/dt = Fin xin,Prop + Frec xv,Prop - Frec0 xv,Prop - Fout xout,Prop   (4)

d(Vr Cpre)/dt = Fin xin,pol - Fout xout,pol Cpre/(Cpre + Cpol)   (5)

d(Vr Cpol)/dt = Vef SUM_i MW_Mi SUM_k R_Mi^k - Fout xout,pol Cpol/(Cpre + Cpol)   (6)
where R_Mi^k and R_H2^k are the monomer and hydrogen consumption rates at the catalyst active sites of type "k". Similarly, the molar balances for the potential catalyst sites, Sp^k, the vacant active sites, P0^k, and all the other molecular species Y^k (Y^k: the moments l0^k, l1^k, l2^k, m0^k, m1^k, m2^k of the "live" and "dead" TNCLDs) can be derived:

d(Vr Sp^k)/dt = F_Sp,in^k + F_Sp,pre^k + r_Sp^k - Q0 e_ef Sp^k   (7)

d(Vr P0^k)/dt = F_P0,in^k + r_P0^k - Q0 e_ef P0^k   (8)

d(Vr Y^k)/dt = r_Y^k - Q0 e_ef Y^k   (9)
where F_Sp,in^k and F_P0,in^k are the inflows of potential and vacant catalyst sites of type "k", respectively, from previous units, e_ef is the fraction of the effective volume over the reactor volume, and r_Sp^k, r_P0^k and r_Y^k are reaction terms derived according to the kinetic scheme (Table 1). Finally, the dynamic energy balance for the reaction mixture is written as:
_j_ jj_^ _ jj^^^ _ jj^^^ _^ j j
_Q
QQ)
where the terms Hpre, Hin, Hout, H^ec and Hgenr denote the enthalpies of the stream coming from previous units, the total input in each reactor, the product removal stream, the recycle stream and the polymerization reaction, respectively. Q is the cooling rate of the cooling jacket in the Loop reactors. 3.2. Flash separator model A dynamic model for the Flash separator in the plant has been developed. A two-phase mixture is formulated in the flash tank, a polymer-rich (Heavy) and a polymer-lean (Light) gas phase. Thermodynamic equilibrium is assumed to settle instantaneously between the phases and the flash calculations are carried out employing the S-L EoS. It
597 is assumed that no polymerization occurs in the Flash unit and therefore the unsteadystate mass and energy balances are derived:
^ ( ^ v e s s c l M i ) _ T 7 rj F V F Y — *^in^in,i ~ ^out,H^H,i ' *^out,L^L,i
i\V\ ^^ ^^
at - 7 - = FinHin - Fin E (Zin,i^H at i=i
j j - Fout,HHout,H " Fout,LHout,L
^^^^
where Fin, Fout,L and Fout,H, are the flow rates of the input stream from a previous unit, the output Light stream recycled back to the first reactor unit and the output Heavy stream sent to the FBR. Hin, Hout,H» Hout,L and AHvap,i are the enthalpies of Heavy and Light output streams and the vaporization enthalpy of components 'i' in the input stream, respectively. Pressure and level PI feedback controllers are employed in the Flash tank using Fout,L and Fout,H as manipulated variables. 3.3. Thermodynamic model For components solubilities in polymer particles, pressure in the process units and flash calculations in the Flash unit the Sanchez-Lacombe EoS is used. It is appropriate for polymer mixtures and derives from a lattice-fluid model (Kirby & McHugh, 1999): ~2
^
p~^2 + P~ + T~ [ ln(1 - p~) + (1 - 1/r) p~ ] = 0   (13)
4. Simulation Results and Discussion The mathematical model developed in this study for the dynamic simulation of the Borstar® plant contains over two thousands algebraic and over a hundred state variables. The gPROMS® (Process Systems Enterprise, Ltd) dynamic modelling platform was employed with a foreign object Fortran module for the thermodynamic calculations. The model predicting capabilities range from temperature, pressure and phase compositions in the process units, to production rate and molecular properties (i.e. density, MW, polydispersity, etc.) of polymer produced in every unit and at the output of the plant. Comparison of model prediction with industrial steady-state data is used for the qualification of model's validity. From the scaled production rates of the three units presented in Figure 2 it is clear that the polymerization rate in the "prepolymerizer" is almost two orders of magnitude lower than in the main reactor units. Figure 3 presents the scaled crystallinity of polymer in the reactors and the gradual decrease of polymer crystallinity in the FBR as more amorphous polymer is produced by the gas-phase process. Finally, Figures 4 and 5 present the time change of polymer density in the main reactors illustrating the effect of polymer from previous unit on density of the total polymer removed from each reactor.
Figure 2. Polymer production rate (scaled) in the prepolymerizer, Loop reactor and FBR, compared with steady-state industrial data.

Figure 3. Polymer crystallinity (scaled) in the prepolymerizer and FBR.

Figure 4. Polymer density in the Loop reactor (total polymer, polymer produced in this unit, polymer from the previous unit, and steady-state industrial data).

Figure 5. Polymer density in the FBR (total polymer, polymer produced in this unit, polymer from the previous unit, and steady-state industrial data).
5. Conclusions
In this study a comprehensive mathematical model for the dynamic simulation of the Borstar(R) olefin polymerization plant has been presented. The agreement of the model predictions with steady-state plant data is satisfactory, and simulation of several operating points of the plant is feasible. Verification of the dynamic profiles predicted by the model against real dynamic data renders this model a potential tool for the optimisation of the process operation.
6. References Chinh, J.-C, Filippelli, M.C.H., Newton, D. and Power, M.B., 1996, US Patent 5,541,270. Ferrero, M.A., Chiovetta, M.G., 1990, Polym. Plast. Technol. Eng., 29, 263. Hatzantonis, H., Yiannoulakis, H., Yiagopoulos, A. and Kiparissides, C , 2000, Chem. Engng Sci., 55, 3237. Hutchinson, R.A, Chen, CM. and Ray, W.H., 1992, J. Appl. Polym. Sci., 44, 1389. Kirby, C.F., McHugh, M.A., 1999, Chem. Rev., 99, 565. Zacca, J.J., Debling, J.A., Ray, W.H, 1996, Chem. Eng. Science, 51,4859.
7. Acknowledgements The authors gratefully acknowledge the financial support provided for this work by DGXII of EU under the GROWTH Project "PolyPROMS" GlRD-CT-2000-00422.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
599
Agent-Oriented Modelling and Integration of Process Operation Systems Cheng Huanong, Qian Yu, Li Xiuxi, Li Hehua School of Chemical Engineering, South China University of Technology, Guangzhou, P. R. China
Abstract For the past years, many chemical process operations (CAPO) software were designed respectively and didn't collaborate efficiently, which makes it difficult for optimal process operation. In this paper, an agent based modelling method is presented to address this problem, in which elements in the process operation systems are divided into two parts, agents and objects. Based on the developed models, three integration strategies are addressed to implement the integration of the whole process operation systems. Also presented are the accomplishment of the integration strategies and the system architecture.
1. Introduction With the development of information technology and economic globalization, process industries tend to be more networked and digitized. More efforts are concentrated to transform varies of production factors into information and integrate the information to optimize the production process. For the past years, several software and computer tools have been developed to aid the chemical process operations (CAPO). These software and tools are designed respectively and may not collaborate efficiently. So many efforts are focused on the integration of different CAPO to Computer Integrated Process Operation System (CIPOS). Several modeling methodologies have been addressed to the issue. The original modeling methodologies are the structural analysis (SA) (Ross, 1977) and object-oriented (00) (Partridge, 1994) methods. The SA develops a function modeling method of analyzing and communicating the functional perspective of an entire system, which heavily relying on the designer's experience. 0 0 resolves the complexity of the problem domain into abstracted, comprehensible systems, which are arguably closer to our intuitive understanding of the world around us. 0 0 avoids the unreasonable decomposition of the system, which often found in SA. However, one shortcoming the analysts found in 0 0 method is that the system is composed of and dominant by passive objects. In the real world, there are still another categories of active entities such as many operation tasks in the process operation system. So a novel method, agentoriented method, is introduced to remedy the shortage of 0 0 methods. Agent-oriented method decomposes the world into two parts: passive entities (objects) and active entities (agents), which are the foundational components of the system.
600
2. Agent-Oriented Modeling of CIPOS By agent-oriented method, the process operation system is divided into two parts. The objects are equipment, units or processes. Corresponding to the objects, the agents are the engineering tasks on the units or processes. 2.1. Object models Objective entities in the process operation system, such as reactor, distillation column units and processes, are used and conducted by the different operation tasks. Modeling methods for the units and processes objects are first principle rules, statistical regression, artificial neural network, and fuzzy logic relation. However, the mentioned above single models have some shortages unavoidably. To remedy the defects of single models, hybrid models are actively researched recently. One kind of hybrid models (Qi, 1999) combines part of first principle equations with ANN, in which ANN is used to determine parameters of the first principle models. Fuzzy logic approach (Qian, 1999) is used for representing imprecision and approximation of the relationship among process variables. It is successfully incorporated into conventional process simulators. Several efforts (Baffi, 1999) have been made to combine statistical analysis with non-linear regression, which are polynomial, spline function and ANN. 2.2. Operation task models Operation tasks in CIPOS are active entities including real-time simulation, on-line optimization, schedule and fault diagnosis. The tasks are executed by running the corresponding software on the different computers. The exchange and sharing of the information of tasks is carried out by varies software sharing the data and cocalculation. The use of agent to model the operation tasks satisfies the requirements of operation tasks in CIPOS. Agent is an entity of autonomy, communicability, reactivity and pro-activeness (Maguire, 1998). The professional software (Aspen, GAMS, G2 et al.) or customer-built program is used as the agent internal module to treat with their specific domain tasks. At the same time, these agents serve and request with each other to collaborate for the common object. In this paper, a multi-agent system is used (Cheng, 2002), which includes the standard agent architecture, the communication platform of agents, communication language and protocol, and data format of the communication language. The standard agent consists of the internal knowledge base (KB), an executor, a set of function modules and a communication module. In this work. Common Object Request Broker Architecture (CORB A) (Object Management Group, 2002) is used as the conmiunication platform of agents. Knowledge Query Message Language (KQML) is accepted as the language and protocol of the information exchange between agents. We use the philosophy provide by standard for exchange of product data (STEP) to be the data standard of the internal content of KQML. The internal content is expressed in EXPRESS, an information modeling language (Garcia-Flores, 20(X)).
601
3. Implementation of the Process Operation System By agent-oriented method, process models and operation task models are set up for the process operation system. The integration of the process operation system can be classified into three aspects. (1) Integration of object models. (2) Integration of agent models. (3) Support for operator participation. The integration of the object models is to exchange and sharing of the available process model information in different application software, which enhances the efficiency of the process operation and reduce the maintain cost. The integration of agent models is to collaborate of the different operation agents for the common aim. The support for operator means the architecture of CIPOS is open and easy to interact with the operators. 3.1. Integration of object models Integration of object models is implemented by neutral process model, neutral process data and related process rules. Neutral data files are used for some process software with specific input/output data file such as Aspen Plus and Pro II. For a real process unit, however, the physical properties, design parameters and operation parameters are same to the different process models. These parameters are defined, classified and standardized by STEP to facilitate different domain engineers to understand and use. These standardized date files may also be transformed to a specific input/output file under some application protocol, which make the process data exchange between different software effectively (AspenTech, 1998). Besides of exchange of the process data, process models are also shared in different applications. When modeling for a process, modeling engineers may not realize that the needed models already exist. Existing process models may need not modified too much to satisfy the modeling requirements. Further more, modeling engineers do not take it into consideration that models may be used again in another programming team later. So it is important to depict and store the models in the neutral format such as Interface Define Language (IDL) files. By providing different input/output interface for different software, these neutral process models can be transformed into the special program code or software component, such as CORBA and COM. CAPE-OPEN standards (CAPEOPEN Consortium, 2000) are used for the references of the exchanging the process models. Similarly, the rules related to process models can be integrated, which facilitate engineers of different domains to share the process knowledge. For example, a piece of heuristic knowledge, 'the operation pressure of the column is sensitive to temperature of the input stream', is not only useful to the process control, but also important to the process monitoring. In this work we propose a framework of the process model knowledge repository (PMKR) to store and exchange three kinds of process model information: neutral process model, neutral process data and related process rules. When the operation process change, the different process models are modified to adapt to the new environment aided by PMKR. The process model repository is illustrated in figure 1.
602
Physical properties STEP ( ^ ^ D ^ g n paramet^^^^ Operation parameters
f^—"^
[ Neutral^ateJU ^ Neutral Model
M
Process model knowledge repository
Rules
Figure 1. Process model knowledge repository.
3.2. Integration of operation task models The integration of operation task models means the collaboration of multi-agents for a common objective of the whole process operation systems. Collaboration of agents is a procedure in which agents dynamically distribute sources, knowledge and information, negotiate and cooperate with each other to eliminate the confliction among different operation decisions. For the collaboration, it is essential for an agent to know when inform/request whose agent, the content of the inform/request. So a basic prerequisite for an operation agent model is to have enough knowledge and information about the related agent status and the conmion objective. These knowledge and information are stored in the internal database of agent. Therefore, construction of internal knowledge base is the solution for the integration of operation tasks. The construction of agent internal knowledge base is illustrated as follows (Cheng, 2002): First, divided the whole operation system into three parts according to the category of the domain knowledge and the scale of time response. For these three parts, responding agent models are built, named FDD (Fault Detect and Diagnosis) agent, CO (Control and Optimization) agent, and SC (Schedule) agent. By this way, the integration of the operation tasks is reasonably simplified. Second, the idea of the Activity Diagram in the Unified Modeling Language (UML) (Object Management Group, 1999) is used to analyzing the interactivities among these agents. In the diagram, 'process operation' activity is performed by basic process instrument and control system, such as PID control system. Fault detection and diagnosis, APC and optimization, and schedule are executed with FDD agent, the CO agent, and SC agent, respectively. These activities interact by information flows, which transfer in the modes of sequence, loop, and concurrency. For instance, the plant date is transfer to the fault detection and diagnosis. If an abnormality is detected and the cause of the fault is found, the fault information will be send to both schedule and optimization activities. At the same time, the control action is send to the process operation to eliminate the fault. In addition, the schedule result is transfer to the fault detection and diagnosis to avoid mis-warning. Schedule activities receive the fault information, unit optimization results and market data to make out schedule decisions. Based on the activity diagram, it is clear to know activities of agents when production state and market situation change. Each agent knows which agent to communicate, the content of the collaboration and the corresponding decision. The knowledge may be
603 pre-stored in the internal knowledge base of the agent. When the environment change and problem solution, the agent modify the rules or create the new rules by its functional module. 3.3. Participation of operators Process models, operation tasks models and operators are not independent in the process operation. The operation task models (agents) evoke the suitable process models and make decisions on the calculation results under the supervision of the operator. Therefore, the participation of operator is critical to the success of the CIPOS. The procedure of consulting between agents and operators is similar to the interaction of the operation task models. In this situation, the operator can be looked as a special agent. The integration of the operation tasks and operators is also a collaboration resolution. So integration of process models, operation task models and operator is also depended on the internal knowledge base. The required rules can be created by the methods mentioned in the section 3.2.
4. System Architecture of CIPOS Based on the agent-oriented model and integration strategies, we outline the system architecture of the process operation system. In this system architecture, different process models, operation tasks and operators are integrated together, which is depicted in Fig. 2. Process models for real chemical process and units are stored in the neutral model repository. These neutral process models are transformed into executable function modules that can be evoked by the agents. Operation tasks are modeled using agent. They are connected to the process model repository, the plant units, and DCS by CORBA. The commercial software (G2, Aspen, GAMS) can be one of the function modules in the agent. Using KQML, the operation task agents communicate with each other to eliminate conflicts during decisions making. Operators acted as special agents incorporated in the presented system framework. By integrating the process models, operation task agents and operators, the CIPOS is realized. (
FDD Agent ] (
C (
1
Operators
SC Agent 7T^
)
(
Communication Bus
]
5
Obiect Model
CO Agent T^
f
]
>
Plant Units and DCS
Figure 2. System architecture of CIPOS.
604
5. Conclusions Based on agent-oriented analysis and modeling methodology, the process models and operation task models are set up for the process operation system. These models respond to real chemical process directly, which are easy to be understood and adopted by the different domain engineers. So the agent-oriented modeling method remedy the difficulties of transformation from the system analysis to the model construction, then to the software design. The results of simulation and optimization from these models can be used to direct the real chemical process.
6. References AspenTech, 1998, website, http://www.aspentech.com/. Baffi, G., Martin, E.B. and Morris, A.J., 1999, Non-linear projection to latent structures revisited: the quadratic PLS algorithm. Computers and Chemical Engineering, 23, 395. CAPE-OPEN Consortium, 2000, website, www.co-lan.org. Cheng, H.N., Dissertation, Agent-Oriented Analysis, Modeling and Integration of Process Operation Systems, 2002, South China University of Technology. Maguire, P.Z., Struthers, A., Scott, D.M. and Paterson, W.R., 1998, The use of agents both to represent and to implement process engineering models. Computers and Chemical Engineering, 22(Suppl.): S571. Object Management Group, 1999, OMG Unified Modeling Language Specification Object Management Group, 2002, http://www.omg.org/. Partridge, C , 1994, Modeling the real world: Are classes abstractions or objects? Journal of Object-Oriented Programming, 7(7): 29. Qi, H.Y., Zhou, X.G., Liu, L.H. and Yuan, W.K., 1999, A hybrid neural networks-first principals model for fixed-bed reactor. Chemical Engineering Science, 54, 2521. Qian, Y. and Zhang, P.R., 1999, Fuzzy rule-based modeling and simulation of imprecise units and processes, Canadian J. Chemical Engineering, 77(1), 186. Ross, D.T. and Schoman, K.E., 1977, Structured analysis for requirements definition, IEEE Transactions on software Engineering, SE-3, 1:1.
7. Acknowledgements Financial supports from the National Natural Science Foundation of China (No.29976015), China Major Basic Research Development Program (No. G2(XX)0263) are gratefully acknowledged.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
605
Off-line Image Analysis for Froth Flotation of Coal Caglar Citir, Zeki Aktas, Ridvan Berber Department of Chemical Engineering, Faculty of Engineering, Ankara University Tandogan 06100 Ankara, Turkey
Abstract Froth flotation is an effective process for separating sulphur and fine minerals from coal. Such pre-cleaning of coal is necessary in order to reduce the environmental and operational problems in power plants. The separation depends very much on particle surface properties, and the selectivity can be improved by addition of a reagent. Image analysis can be used to determine the amount of reagent, by using the relation between surface properties and froth bubble sizes. This work reports some improvements in the efficiency of the image analysis, and in determination of bubble diameter distribution towards developing froth-based flotation models. Ultimate benefit of the technique would allow a pre-determined reagent addition profile to be identified for controlling the separation process.
1. Introduction As studies show that the performance of froth flotation is affected by the froth structure, its determination plays a vital role for controlling the process (Sadr-Kazemi and Cilliers, 1997; Banford et al., 1998). Some progresses have been made in determining metallurgical parameters that influence surface froth appearance, and in image analysis of flotation froths (Banford, 1996; Holtham and Nguyen, 2002). However, looking at even the recent reports, there seems to be difficulties to overcome. This work intends to develop a pixel tracing technique for off-line image analysis to determine the size distribution of froth, and reports some improvements in this perspective.
2. Image Analysis Technique Images were recorded by a camera over the flotation cell, where Zonguldak bituminous coal was used. Each image as shown in exemplary Image 1 is, then considered to be a matrix of dimensions m x n. Each element or pixel of this matrix is the brightness intensity scaled from 0 to 255 (8 bits/pixel).
Image 1: Original froth image.
606 Froth shape is assumed to be spherical and therefore the only parameter to be determined is the diameter of the froth. The main idea is to determine the edges of the froth regions with minimum error. To achieve this goal, a series of procedures are suggested during the paper. 2.1. Identifying local intensity minima The local minimum of intensity along an image scan line was identified by checking the intensities of the neighboring pixels in four scan lines, two extending from north to south, and from west to east, and others with deflections of 45 degrees as shown in Figure 1. Neighboring pixel groups of any pixel (aij) are defined by Equations 1-4. Any pixel having the minimum value in all its neighboring pixel groups is assigned as an edge point. The process is implemented by using a scan radius of r. While small values of r results in noisy images, larger values reduces the continuity of the borders. The original image (Image 1) is processed by different scan radii from 1 to 4 and related after-process images are shown in Image 2-5. White color represents the edge points in these images. Scan radius is selected as 4, while the continuity of the borders reduces quickly by higher scan radii in most images. To increase the continuity, every 8 neighboring pixels of an edge point are assigned as new edge points. The resultant image of this analysis, which is shown in Image 6, requires further processing i.e. border thinning. Thus, the thickness of the borders will be exactly one pixel. (1) WE = [a. ._^,a. ._,^i,...,a, J,a. .^p...,a.j^^]
(2)
NWSE = K-_,,^_,,a._^^i^^._^^p...,a.^^.,a.^i^^.^p^
(3)
NESW = K_,,^>,,«,_,+ij^,_i V.., «,-,,•, «,>i,^^^^
(4)
Image 3: r = 2.
Image 2: r= 1. NW
N
Image 4: r = 3.
Image 5: r = 4.
NE
sw s SE Figure 1: Scan lines.
Image 6: r = 4 (increased continuity).
607 2.2. Border thinning Thickness of the borders identified by local intensity minima method need to be reduced, however with care not to loose connectivity and continuity of the borders. An iterative algorithm, with a series of conditions, is developed for this procedure to clearly mark the edge points. Each edge point in an image has 8 neighboring pixels, which are numerated from 1 to 8 as shown in Figure 2. Two values N and S, which are used in the conditions, are defined as the number of edge points and the number the transitions of edge-nonedge (vice-versa) points in the ordered sequence of the neighboring pixels, respectively. Every edge point in the image, which satisfies all conditions in the first series are marked/flagged. Once the whole image has been checked, the pixels flagged are removed. The second stage of the procedure is similar to the first but with a different condition series. These two stages are repeated iteratively, where no further pixel satisfies the conditions, in other words may be removed. With the suggested modifications, border thinning process achieved the real skeleton image with all nonedge points deleted. The resultant image is given in Image 7. 2.2.7. First condition series (i)2
•i 1
|6~ 5
Figure 2: Neighboring pixels.
Image 7: Skeleton image.
2.3. Calculation of the bubble diameters Once the image is analysed and the borders are identified, assessing the bubble diameters becomes the next issue. In this respect, Banford (1996) has reported the practice, assumes that the bubble region is the area encircled by the edges. Based on our analysis and experimental results, we propose that the diameter of a bubble is the diameter of the smallest circle encircling the bubble region. Thus, the bubble diameter is the distance of the farthest edge points of this region. However, deviations from ideal lighting and surface homogeneity cause, what we call, "the image noise", and unnecessarily reduce the average diameter. In order to eliminate this difficulty, we
608 suggest that after all diameters are calculated, the bubbles, which have a diameter value less than the variance (Equation 5) of all diameters, should be eliminated. Therefore, the average diameter is calculated by the given Equation 6. The difficulty comes from the image noise and reduces the average diameter value. Therefore, elimination of some bubbles was clearly useful and resulted in more realistic computer images complying with our experimental observations.
-^=E
1 N
(5) (6)
i=l / D; >or /
i=l D: >
3. Experimental The flotation experiments, which have been analysed, were all carried out using Denver flotation equipment. The cell has approximately Idm^ capacity and 600 g of slurry was used in every test. The dimensions of the cell are shown in Figure 3. The cell was located in the base of the frame, directly below the impeller shaft. Experimental details of the flotation tests are summarised in Table 1. Firstly, required amount of distilled water was poured into the cell to prevent the dry coal feed sticking to the bottom and walls of the cell. The 30 g Zonguldak bituminous coal sample to be floated was added to the cell whilst water was being agitated. This operation was performed over a short time period. The slurry was mixed well for 1 minute (wetting time) and then 0.9 mg Triton x-lOO/g coal was added to the slurry as a solution. The slurry containing Triton x-100 was conditioned for further 2 minutes. At the end of this conditioning period, bubbles induced by the agitator passed through the pulp forming a froth on the upper pulp surface. The froth overflowed a weir on the recess side of the cell and was collected in the evaporating dishes at various time intervals until the overflow ceased. Photographs of the froth were taken to determine the changing bubble structure of the top of the froth during the froth flotation process. The photographs were later analysed by image processing. The froth concentrates were analysed for their water, total solids and ash fractions. The tailings were filtered and then dried for subsequent particle size and ash content analysis.
155 mm
60 mm
110mm
1 N 122 mm
Figure 3: The cell used in the froth flotation experiments.
609 Table 1: Some physical properties of the coal and the Denver cell flotation test conditions. Wetting time, min 1.0 -53 Particle size, |Lim Conditioning time, min 2.0 0.88 Moisture content, % pH (natural) 6.8 23.95 Ash content, % Agitator speed, rev/min 1250 30.0 Feed mass, g 29.74 Nominal cell volume, dm^ 1 Dry mass, g Impeller diameter, cm 7.2 7.18 Total ash, g Shaft diameter, cm 4.4 22.55 Coal, dmmf, g Cell cross-sectional area at 570.0 Cell water, g 570.26 top of cell, m^ 0.0201 Total water, g 5.0 Cell cross-sectional area at Pulp density, % 2.1 bottom of cell, m^ 0.0134 Aeration rate, dm^ min'^
4. Results and Discussion The tests were performed in the presence of various initial Triton x-100 loadings. The results obtained from only one test were used in this article. The test was batch and with single stage reagent addition, which was 0.9 mg/g coal. Separation parameters were evaluated in terms of the combustible solid recovery and grade. The grade of accumulated concentrate is defined as G = (1 - Ac/Af), where Ac and Af are the ash contents of the concentrate and the original feed coal, respectively. Figures 4 and 5 show the cumulative grade and the mean bubble diameter as a function of time. The cumulative grade decreases sharply for first 100 s after that follows slowly. As seen in Figure 5, the mean diameter steadily increases with time and reaches to the highest value towards the end of test.
0.75
•
•o
S 0.74 en •% 0.73 iS
•\
1 0.72 O 0.71
' 1 ~~— 1 -
0.70 100
150
200
250
Time, s
Figure 4: Time vs. cumulative grade.
Figure 5: Time vs. mean bubble diameter.
Figure 6, which is plotted by using the fitted functions, is given to represent the relationship between the mean bubble diameter and the cumulative grade. It is clearly seen from the figure that there is a strong relationship between the mean diameter and the purity of the final product. Similar findings were also reported in the previous studies (Sadr-Kazemi and Cilliers, 1997; Banford et al., 1998). As the cumulative grade steeply decreases, a gradual increase in the mean diameter is observed up to 40 mm.
610
20
40
60
80
Mean bubble diameter, mm
Figure 6: Mean bubble diameter vs. cumulative grade.
5. Conclusion The suggested image analysis method is implemented in C with a user interface, and applied on the images taken in different times of a flotation. The mean bubble diameters are calculated for each image and the cumulative grade i.e. the purity of the final product is found experimentally for different times of the flotation. The relationship between the mean diameter and the cumulative grade is represented by the fitted functions; hence for every value of the mean diameter, a cumulative grade can be read. As the suggested technique provides the "fingerprint" of the froth, that is fairly simple to compute, it holds promise for further implementations either. Our efforts are directed to its on-line use for process control purposes.
6. References Banford, A.W., 1996, The Use of Off-line Image Analysis in Assessing The Effect of Various Reagent Addition Strategies on the Performance of Coal Flotation in a Batch Cell, PhD Thesis, The University of Manchester, U.K. Banford, A.W., Aktas, Z. and Woodburn, E.R., 1998, Interpretation of The Effect of Froth Structure on The Performance of Froth Flotation Using Image Analysis, Powder Technology, 98, 61-73. Holtham, P.N., Nguyen, K.K., 2002, On-line Analysis of Froth Surface in Coal and Mineral Flotation Using JKFrothCam, Int. J. of Mineral Processing, 64, 163180. Sadr-Kazemi, N., Cilliers, J.J., 1997, An Image Processing Algorithm for Measurement of Flotation Bubble Size and Shape Distributions, Minerals Engn. 10, 10751083.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
611
Moving Finite Element Method: Applications to Science and Engineering Problems Maria do Carmo Coimbra, Carlos Sereno^ , Alirio Rodrigues Laboratory of Separation and Reaction Engineering School of Engineering, University of Porto Rua Dr. Roberto Frias, s/n, 4200-465, Porto, Portugal
Abstract Many problems in science and engineering are formulated in terms of time-dependent partial differential equations. The first part of this contribution gives an overview of the moving finite element method based on a piecewise higher degree polynomial basis in space. In the second part, applications are presented in order to give a convincing demonstration that the proposed moving finite element method is a powerful tool to compute numerical solution of a large class of ID and 2D problems modelled by timedependent partial differential equations. Numerical results are described which illustrate some important features of the proposed moving finite element method for solving problems in one and two dimensional space domains.
1. Introduction Moving finite element method (MFEM) is a discretization technique on continuously deforming spatial grids introduced by K. Miller and R. Miller (1981a) to deal with timedependent partial differential equations (PDE) involving fine scale phenomena such as moving fronts, pulses and shocks. In the literature (Baines, 1994) several formulations of the moving finite element method using a piecewise linear functions as its finite dimensional approximation are described. In our formulation solutions are calculated using a Galerkin approach with a piecewise higher degree polynomial basis in space (Coimbra, 2002). Numerical results are described which illustrate some important features of the proposed moving finite element method for solving problems in ID and 2D dimensional space domains. In order to clarify the effects of the different method parameters we pay special attention in the analysis of nodes movements and its relations with the choice of the initial grid, the degree of the approximations and other parameters such as penalty constants and tolerance errors. In ID we choose a problem from mathematical biology, which describes an ionic flow across a semi-infinite nerve membrane and a problem from chemical engineering concerning diffusion-convection and reaction in a catalytic particle. In 2D the moving finite element method will be used in the simulation of a fixed bed heat transfer transient model, which includes axial and radial dispersion.
^ Universidade da Beira Interior, Departamento de Matematica, 6200 Covilha, Portugal
612
2. Overview of the MFEM The present paper follows our two earlier contributions, Coimbra (2000 and 2002) where we have presented the formal treatment of moving finite element method with a piecewise higher degree polynomial basis in space. Without loss of generality we will only describe the MFEM in 2D. The mathematical model of a process involving diffusion, reaction and convections in 2D usually consists of an equation of the form
^
=F , ^ H - F , ^ + H
where
U = (UY,U2,..-',U^)
(1)
is Sin 91" valued function depending on (x,y)E
Q^dSi^and
t>0. The matrix Fj, F2 and the vector H may depend on (jc,y),t,—,—. We have dx dy the initial conditions u = Uo(x, y) and Dirichlet or Neumann boundary conditions. The MFEM is a procedure for finding numerical solutions of (1) based on the method of lines. So, the discretization of the PDE is performed in two stages. In the first one the space variables are discretized by finite element allowing the movement of the spatial nodes. The second stage deals with the numerical integration in time of the resulting ordinary differential systems to generate the numerical solution. For this step we use LSODI integrator described by Hindmarsh (1980). To discretize the space domain we consider a triangular mesh. The solution u^ is approximated by U^ ~^^i^m,j
'
where (^1 are the piecewise basis functions at /th node, time dependents through the time dependence of the nodal position, and UJ^j is the value of U^ at Ath interpolation node of the jth triangle of the mesh. The positions of the mesh points are predicted by minimizing the square of the norm of the residual of the approximation in the governing partial differential equations with respect to variations in nodal amplitudes and their positions. This procedure originates a non-linear system of ordinary differential equations. Some of these equations must be overwritten to introduce the boundary conditions. The implementation of the moving finite element method is based on the numerical calculations of all integrals defining the ODE system and the use of penalty functions to prevent the mass-matrix singularities and the grid distortion. These functions do not interfere on the solution and have as an additional effect the regularization of the movement of the nodes.
3. Numerical Examples The use of a code based on MFEM to solve a PDE requires from the user must specify some parameters such as the starting space grid or the time-tolerances for ODE solver. Additional parameters are included to define the degree of the approximations used, to control the distortion of the grid and to prevent the singularities of the mass-matrix. The user must supply the minimal node distance (in ID) or the minimal area for triangles (2D) as well two others parameters in order to define penalty functions. To choose these
613 parameters it is not necessary to have some knowledge of the solution itself. Usually some trial runs are enough. However the user must be aware that the adaptivity of the method is influenced by the choices of those parameters. In all the examples presented here we consider for the time-tolerances for ODE solver, tol = 10~ . The minimum permissible cell width or area is c^ = 10"^. The other values of penalty constants used in all examples are C2 = 10"^ and c^ = 10"^. 3.1. A problem from mathematical biology Our first example problem is a model of reaction-diffusion that provides a model of ionic current flow across a semi-infmite nerve membrane (Verwer, 1981 and Sereno, 1989) and is given by: du ~dt du
—r- + dx
u(u-a){l-u)-v (3)
b(u-cv)
for 0<jc<200 and t>0.
u=u(x,t)
is an electro-chemical potential and v = v(x,t) a
recovery variable that enables the system to return to its rest state. The initial values for u,v are M = v = 0 and the boundary conditions are — (0, t) = — and -— (200, t) = 0, dx 2 dx for t>0. The values of the parameters are: a = 0.139, ^ = 0.008, c = 2.54 and / = 0.45 . / is a constant current applied at the left end of the nerve. Figure 1 show the solutions profiles and nodes movements obtained with an initial grid with 17 points concentrated at left end, cubic approximations in each element.
S 40
60
80
100
120
80
100
120
80
100
100
120
120
Figure 1: Nodes movement and solutions at t=80, t-120, t=160 and t=200.
614 3.2. A diffusion-convection-reaction problem in a catalytic particle Our second example is a problem from chemical engineering concerning diffusionconvection and reaction in a catalytic particle. The model equation described by QuintaFerreira (1988) and Sereno (1989) is given by:
(4)
for 0 < x < 2 and t>0. y = y(x,t) is a normalized concentration, x is the space variable normalized the half thickness of the particle, t is the time variable normalized by the diffusion time constant, A,^ is the intraparticle Peclet number and O is the Thiele modules. The initial value for y is y(x,0) = 0 and the following boundary conditions are imposed for ^ > 0: y(0,t) = 1 and y{2,t) = 1. Simulations are carried out for different values of the parameters X^ and O in order to test the performance of the algorithm. An initial grid with 14 points concentrated at left end were considered and approximations of degree 6 in each element Figure 2 compares the solution profiles at various times, from t=0.001 to t=0.5, for two different values of X^, X^ = 0 , on the left and X^ =10 on the right. Both runs are obtained with 0 = 1 . Notice that the choice of X^ = 0 , figure 2(left), illustrates the behaviour of the reaction-diffusion model in absence of convection. Figure 3 shows the effect of the Thiele modules on concentration profiles with X^ =10.
Figure 2: Effect of the intraparticle Peclet number on concentration profiles,
X^=0,
(left) X^ =10, (right).
Figure 3: Effect of the Thiele modules on concentration profiles, ^ = 3 (right).
^ = 2,(left)
615 3.3. A problem from heat transfer in a fixed bed Our third example is a 2D pseudo-homogeneous model described and studied by Ferreira (2002) concerning the propagation of waves of temperature in a fixed bed. The governing dimensionless equation is .
. .dT dt
1 d^T FCf^ dx^
L d^T dT RPe^ dy^ dx
dT
(4)
yR?Q, dy
where T is the temperature, L the length of the bed, R its radius, x the axial variable along fixed bed, y the radial variable, Pe^^the axial Peclet number, Pe^the radial Peclet number and b,^ the parameter of thermal capacitance. Initial condition is given by r((jc, y),0) = 1 if jc = 0 and r((x, >'),0) = TQ elsewhere. The boundary conditions are: Tmy\t)
= l, —{ily),t)= ax
— {{xfilt) = Omd —({x,l\t) oy oy
= -Bin{x,l),t),Bi
the
thermal Biot number. Figure 4 and 5 show respectively the non-normalized temperatures profiles, and nodes movements obtained with cubic approximations in each triangle, with a starting grid of 60 elements. For this run ^f^=l3, Pe;j=100, Pe,=500, Bi=S and 70= 0.
Figure 4: Temperature profiles for various times.
W7 W)r-
m\z^i2^ 0.2
0.4 ^ 06
08
Figure 5: Nodes Movements for t=0.05, t=land t=4.
0.2
0.4 , 06
08
616
4. Conclusions The numerical results presented in this paper show the capability of the moving finite element method to solve a set of rather diverse time-dependent problems. They show that, even with few nodes in space grid, the adaptivity of the grid allows the obtention of accurate solutions. The effects of the different numerical parameters of the MFEM became clear. Further optimisation of the 2D computational will allow its use in modelling and simulation of industrial processes as one more tools for solving time dependent partial equations, using adaptive space grid with few nodes without damaging the accuracy of the solutions.
5. References Baines, M.J., 1994, Moving Finite Elements, Oxford University Press. Coimbra, Maria do Carmo, Sereno, C. and Rodrigues, A.E., 2002, A moving finite element method for the solution of two-dimensional time-dependent models, Applied Numerical Mathematics, in press, available online 11 June 2002. Coimbra, Maria do Carmo, Sereno, C. and Rodrigues, A.E., 2001, Applications of a moving finite elements method. Chemical Engineering Journal, 84, 23-29. Coimbra, Maria do Carmo, Sereno, C. and Rodrigues, A.E., 2000, Modelling multicomponent adsorption process by a moving finite element method. Journal of Computational and Applied Mathematics, 115, 169-179. Ferreira, L., Castro, J.A, Rodrigues, A.E., 2002, An analytical and experimental study of heat transfer in fixed bed. Int. J. of Heat and Mass Transfer, 45(5), 951-961. Hindmarsh, A.C., 1980, LSODE and LSODI, two initial value ordinary differential equations solvers, ACM-SIGNUM Newslett., 15, 10-15. Miller, K. and Miller, R.N., 1981a, Moving finite elements. Part I SIAM J. Numer. Anal. 18,1019-1032. Miller, K., 1981b, Moving finite elements. Part II SIAM J. Numer. Anal. 18,1033-1057. Quinta-Ferreira, R., 1988, Contribui^ao para o estudo de reactores cataliticos em leito fixo: efeito da convecgao em catalisadores de poros largos e casos de catalisadores bidespersos, PhD Thesis, FEUP, University of Porto (in Portuguese). Sereno, C , 1989, Metodo dos elementos finitos moveis: aplicagoes em engenharia quimica, PhD Thesis, FEUP, University of Porto (in Portuguese). Verwer, J.G., Blom, J.G. and Sanz-Serna, 1981, An adaptive moving grid method for one-dimensional systems of partial differential equations, Journal of Computational Physics, 83(2), 454-486.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
617
Solution Multiplicity in Multicomponent Distillation A Computational Study Nirav M. Dalal^ and Ranjan K. Malik^ Chemical Engineering Department, Indian Institute of Technology-Bombay Powai, Mumbai 400076, India
Abstract Rigorous models of staged distillation processes are formulated by setting up material balance equations, equilibrium relations, summation equations, and enthalpy balance equations (MESH equations). In these models, the extent of nonlinearity may be very severe, particularly for azeotropic and reactive distillation systems. MESH system based mathematical models can thus yield multiple solutions (multiple steady states), a fact which has been observed by many researchers. Multiple steady states (MSS) are not detected automatically by the state-of-the-art commercial simulators, despite the fact that extensive capabilities are available for complex distillation columns and the solution algorithms used are very robust and efficient. Since the implications of multiplicity could be numerous, it is of great importance to detect MSS in distillation by a systematic procedure. The approach proposed in this paper uses a recent algorithm that can track the multiple solutions of nonlinear algebraic equations (NLAE) starting from only one initial point. This optimization-based algorithm is interfaced with the MESH system to solve two case studies, an ideal distillation system with two products, and an azeotropic distillation system. For both the case studies, the proposed methodology looks very promising as it successfully tracks the MSS from a single starting point
1. Introduction Distillation is a widely used and important separation operation in the chemical industries. Rigorous model for the distillation process involves the MESH system of equations. Since the model is nonlinear, it may have more than one (feasible) solution. Simulation packages do not provide any systematic procedure to find multiple solutions. In the absence of complete information on MSS, wrong conclusions might be drawn on various aspects of design, operation, and control. Thus, a need exists for a systematic approach to find all feasible solutions for the distillation process. More than one steady state for the same set of specified variables (output multiplicity) is one of the interesting features of azeotropic distillation. Simple distillation columns with ideal vapor-liquid equilibrium, however, may also show MSS (Jacobsen and Skogestad, 1991). The existence of output multiplicities in distillation were first reported on the ternary ethanol-water-benzene (EWB) system. Earlier simulation-based studies had reported two distinct steady states depending on the starting guesses (Bekiaris et al.. * Presently with Reliance Petroleum Limited, Jamnagar, India. ^ Author to whom all correspondence should be addressed.
618 2000). Magnussen et al. (1979) have presented simulation results for three steady states (two stable and one unstable). The EWB system has been further used in many other studies on solution multiplicities (e.g., Prokopakis and Seider, 1983; Seider and Kovach, 1987; Cairns and Furzer, 1990). It has also been reported that the multiplicity may depend on thermodynamic methods. For example, the multiplicity for the EWB system has been observed with NRTL and UNIQUAC methods but not with Wilson equation (Bekiaris et al., 2000). Several researchers (Prokopakis et al.,1981; Bekiaris and Morari, 1996) have shown the existence of multiple solutions for other chemical systems also by changing either the specifications or the initial guesses. It is thus desirable to track multiple solutions starting from only one initial point, without the tedious task of using different trials. Though homotopy continuation methods address this issue, they do not guarantee locating all possible solutions (Hlavacek et al., 1970; Hlavacek and Seydel, 1987). Interval-Newton methods have also been reported to find regions containing all solutions of NLAEs (Floudas and Maranas, 1995). In this work, the approach is based on convex lower bounding, coupled with a partitioning strategy, which provides guarantee for convergence to all solutions. For this, the algorithm given by Floudas and Maranas (1995) for finding multiple solutions of NLAEs has been adapted and implemented.
2. Motivation The implications of multiplicity on distillation design, synthesis, simulation and control can be critical (Bekiaris and Morari, 1996). Using an acetone-benzene-heptane example, Bekiaris et al. (1993) have illustrated how column profiles may jump from one steady state (99% acetone) to another (93% acetone) for some feed disturbance. Seider and Kovach (1987) performed simulations and experiments on dehydration of sec-butanol and conjectured that the observed erratic column behavior is a consequence of MSS. Moreover, the sensitivity and dynamic characteristics of each steady state may differ and affect column controllability. The existence of MSS raises other problems related with the start-up strategy that would drive the column to the desired steady state. Thus, it is essential to detect all possible steady states of a given column. Since the popular commercial simulators do not have provision to find MSS directly, the aim of this work is to develop a systematic procedure to achieve this. A very recent work in this direction is due to Vadapalli and Seader (2001).
3. Problem Formulation The algorithm of Floudas and Maranas (1995) finds all solutions to a nonlinear system of equations subject to inequality constraints and variable bounds (given below):
hj(x) = OjENE g^x)<0,kENi x'<x<x^ where NE is the set of equalities, Nj the set of inequality constraints, and x the vector of variables. Now, an optimization problem is formulated by adding a slack variable s. Min s
X, s >0
619 subject to:
hjix)-s
4. Verification of the Implemented Algorithm The implemented algorithm is verified with a test problem (Floudas and Maranas, 1995) comprising two nonlinear equations subject to bounds on the two variables. 4xi^ + 4xiX2 + 2x2^ - 42xi - 1 4 = 0 4x2^ + 4xiX2 + 2x,^ - 26xz - 22 = 0
subject to: -5.0<Xi<5.0 and -5.0<X2<5.0 It is observed that the same solutions (Table 1) were obtained with different initial guesses but the order of the solutions depends upon the initial guesses. Table 1: Nine solutions of the test problem Solution 1 2 3 4 5 Xi -3.77931 -0.27084 0.08668 3.38515 X2 -3.28319 -0.92304 2.88425 0.07385
-2.80512 3.13131
6 7 8 -0.12796 3.00 3.58443 -1.95371 2.00 -1.84813
9 -3.07303 -0.08135
After successfully solving a simple system of two nonlinear equations, the algorithm is next applied to a simple distillation problem that does not involve rigorous thermodynamic calculations.
5. Methanol-Propanol Column The problem taken from Jacobsen and Skogestad (1991) is a 9-tray methanol-propanol column, and was initially run with the following assumptions: (a) total condenser; (b) constant molar flow except at feed stage; (c) saturated liquid feed; (d) constant relative volatility; and (e) energy balance is neglected. The results were found in good agreement with those given by Jacobsen and Skogestad (1991). Then, the constant molar flow assumption was removed and the energy balance incorporated (Dalai, 2001). The results presented here are for both molar and mass reflux cases.
620 5.1. Multiple steady states for fixed molar reflux rate Here, the specifications are molar boilup V (fixed at 4.5 kmol/min) and molar reflux L (varied from 4.6 to 4.75 kmol/min). For each set of specifications, multiple solutions are found starting from one initial point using the algorithm. c
0.6
0.6
E
s o CD
4.6
0.0
4.65
4.7
4.75
K/blar Ffef lux Fbte, Ukmol'nin)
B_
4.6 4.65 4.7 4.75 Molar Reflux Rate, L (kmol/min)
Figure 1: Multiple steady states for molar reflux rate Lfor methanol-propanol column. (A) Mole fraction of methanol in bottoms; (B) Distillate rate D The plots in Figure 1 are similar in nature to those given by Jacobsen and Skogestad (1991). Among the three solution branches, the upper and lower solution branches are stable while the middle one is unstable because (3yD /dL)v, known as gain, is negative, where yo is the distillate composition. Although the specifications and equations were identical to those given by Jacobsen and Skogestad (1991), there is a little difference in the range of multiplicity between our results and theirs. The reason for this is unclear. 5.2. Multiple steady states for fixed mass reflux rate The same column is considered with mass reflux rate instead of molar reflux rate as input (specification) along with molar boilup rate. The boilup rate is fixed at 2 kmol/min and the mass reflux rate Lw is varied from 57.0 to 59.5 kg/min. For discrete values of reflux rate, multiple solutions have been found using one initial point. The plots in Figure 2 compare well with those given by Jacobsen and Skogestad (1991).
i "^
0.1-
C
o
1
^•^•
lil % 0.001 • 7 ^ 5
A
57.5
58
58.5
59
59.5
^bss Reflux Rate, l^ (kg/min)
Figure 2: Multiple steady states for mass reflux rate Lwfor methanol-propanol column. (A) Mole fraction of methanol in bottoms; (B) Mole fraction of methanol in distillate.
6. Ethanol-Water-Benzene (EWB) Column The problem considered is of EWB azeotropic distillation and is taken from Magnussen et al. (1979). Several research workers have observed the existence of MSS; however, there is some uncertainty about the exact number of steady states observed for a given column configuration and operating conditions. The column here (Figure 3) has 28 trays including a total condenser (stage 1) and reboiler (stage 28).
621
Distillate F.nfrainpr
Feed (Rate = 100 kmol/hr) Composition (mole fraction) Ethanol: 0.89, Water: 0.11
FPPH
Entrainer (Rate = 45.32 kmol/hr) Composition (mole fraction) Ethanol: 0.22, Benzene: 0.74, Water: 0.04
Figure 3: Schematic Diagram showing EWB column configuration.
The two specifications are reflux ratio (5.47) and distillate rate (63.3 kmol/hr). By ignoring liquid-phase splitting in reflux drum, and with UNIQUAC thermodynamics (binary interaction parameters are used from Bekiaris et al., 1996) in the steady state distillation model, the composition and temperature profiles obtained for two different steady states starting from a single initial point are shown in Figures 4 and 5. The profile for steady state II does not appear smooth because the solvers (FFSQP and NLPQLP) enabled convergence with limited accuracy. Even though presence of a third steady state has been reported in the literature, only two of the three steady states could be tracked. -Ethanol Water . Benzene
1 0.8 0.6
i/
I
OS
•i
0.6 if
/Water - - - - Benzene
6 ^ 0.4 Y^ 0 . 2 ^
0.2
o4-^ 9
13
17
21
\
0
25
9
B
Tray Number
13 17 21 TrayNumber
25
Figure 4: Composition profiles for EWB column for (A) Steady state I (B) Steady state II. 352 n ^348-
— — S t e a d y state 1
f
s 2 344-
-
-
- Steady state II
1
/
y
1 336332-
1
r7
10
13
16
19
22
25
28
Tray Number
Figure 5: Temperature profiles for two steady states.
6.1. Results from Simulation Packages The EWB column was simulated repeatedly using ASPEN PLUS (version 9.3) and CHEMCAD (version 3) with the same set of specifications as above and the same UNIQUAC interaction parameters (from Bekiaris et al., 1996). All runs converged to steady state II (low purity steady state) and not to steady state I. The single steady state located by simulators proves that wrong designs could be developed if the user is not aware of the presence of MSS and/or does not have the facility to obtain the same. Table
622 2 compares the results from the simulations with those obtained from the optimizationbased algorithm implemented in this work for steady state II. Table 2: Comparison of results from algorithm and simulators. Top Composition
: Ethanol : Water : Benzene Bottom Composition : Ethanol : Water : Benzene (lO^xkcal/hr) Condenser Duty (lO^xkcal/hr) Reboiler Duty
Algorithm 0.3691 0.1081 0.5282 0.9228 0.0718 0.0000 -3.4215 3.4062
ASPEN PLUS 0.3682 0.1007 0.5311 0.9216 0.0783 0.0000 -3.4336 3.3522
CHEMCAD 0.3607 0.1081 0.5311 0.9274 0.0726 0.0000 -3.5070 3.4968
7. Conclusions An optimization-based approach has been developed to track multiple solutions in a distillation column starting with only one initial point. For the methanol-propanol column, all steady states were successfully obtained for a range of reflux specifications (both on mass and mole basis) without changing the initial guess. For the EWB column, however, two of the three reported steady states were tracked despite using two different solvers (FFSQP and NLFQLP). Since the methodology implemented in this study has worked very successfully on a range of problems discussed above, it is felt that its performance can be improved further by interfacing a more robust and efficient solver.
8. References Bekiaris, N., Guttinger, T.E, and Morari, M., 2000, AIChE J., 46,5, 955-78. Bekiaris, N., Meski, G.A., Morari, M. 1996, Ind. Eng. Chem. Res. 35(1), 207-227. Bekiaris, N., Meski, G.A., Radu, CM., Morari, M., 1993, Ind. Eng. Chem. Res., 32, 9, 2023-2038. Bekiaris, N., Morari, M., 1996, Ind. Eng. Chem. Res., 35, 11,4264-80. Cairns, B.P., Furzer, I.A., 1990, Ind. Eng. Chem. Res., 29, 7, 1383-95. Dalai, N., 2001. M.Tech. Thesis, IIT, Bombay. Floudas, A., Maranas, C , 1995, Journal of Global Optimization, 7, 2, 143. Hlavacek, V., Kubicek, M., Jelinek, J., 1970, Chem. Engng. Sci., 25, 1441-1461. Hlavacek, V., Seydel, R., 1987, Chem. Engng. Sci., 42, 6, 1281-95. Jacobsen, E.W., Skogestad, S., 1991, AIChE J., 37,4,499-511. Magnussen, T., Michelsen, M.L., Fredenslund, A., 1979, Inst. Chem. Engng. Symp. Ser. No. 56, Third international symposium on distillation, ICE Rugby, England. Prokopakis, G.J., Seider ,W.D., 1983, AIChE J., 29(1), 49. Prokopakis, G.J., Seider, W.D., Ross, B.A., 1981, In Foundations of Computer-aided Chemical Process Design, Mah, R.S.H., Seider W.D., Eds.; AIChE; New York, 239-272. Seider, W.D., Kovach, J.W., 1987, Comput. Chem. Eng., 11, 6, 593-605. Vadapalli, A., Seader, J.D., 2001, Comput. Chem. Eng., 25, 2-3,445-464.
9. Acknowledgement We gratefully acknowledge Prof. K. Schittkowski (Department of Mathematics, University of Bayreuth) and Prof. Andre L. Tits (Institute for Systems Research, University of Maryland) for providing us the NLPQLP and FFSQP solvers respectively.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
623
Multiobjective Optimisation of Fluid Catalytic Cracker Unit Using Genetic Algorithms Dhaval Dave and Nan Zhang Department of Process Integration, UMIST, Manchester, UK, M60 IQD
Abstract A rigorous model is presented for simulating an industrial fluid catalytic cracker unit. Two different kinetic lumping schemes are presented for FCC reactor modelling and the regenerator is modelled as two separate regions: the dense bed and the dilute phase. Kinetic parameters of five lump model were tuned with industrial data. An adapted version of nondominated sorting genetic algorithm (NSGA) was then used to optimise the performance of the unit. Operational insights are developed by using several objective functions and decision variables are obtained for optimal operation. Capabilities of an algorithm are presented by case study using five lump kinetic scheme and objectives considered are maximisation of gasoline production and minimisation of CO emission from regenerator. More detailed ten lump kinetic scheme is proposed for multi objective analysis of recycle slurry flow and CO emission from regenerator. Pareto optimal solutions are obtained and the results are expected to enable the process engineer to gain useful insights to locate compromised operating conditions.
1. Introduction Fluid Catalytic Cracking (FCC) is one of the most important processes in an oil refinery. Its function is to convert heavy hydrocarbon petroleum streams into more valuable, lighter hydrocarbon fractions such as middle distillate, gasoline and liquefied petroleum gas etc. Optimal operation of the process is decisive for overall economic and environmental health. Often several objectives and constraints are involved in the process, and optimisation studies incorporating these conflicting objectives would be invaluable to the process engineer. Numerous papers relating to the FCC process can be found in the published technical literature. They present various aspects of design, kinetics, mathematical modelling and simulation, stability, optimisation and control. The history of the development and commercialisation of catalytic cracking was reconstructed in detail in a review by Avidan and Shinnar (1990). Different workers have discussed the kinetics in the reactor and the regenerator and have modelled these units separately, while others have developed integrated models for the reactor-regenerator system. Several studies have also been carried out on optimisation and optimising control of FCC units. Most of the studies in optimisation of FCCU were limited to single objectives (e.g. profit maximisation, maximisation of conversion, maximisation of individual product production etc.). In the present work, multiobjective optimisation study is performed on an industrial fluid catalytic cracker unit. We use an adapted version of nondominated sorting genetic
algorithm (NSGA) (Deb and Srinivas, 1995), and the objective functions considered in the present study are maximisation of gasoline production and minimisation of CO emission from the regenerator. The objective functions considered for the more detailed kinetic model are maximisation of recycle slurry flow and minimisation of CO emission from the regenerator (and/or maximisation of gasoline production).
2. FCCU Modelling The FCC unit comprises two basic parts: a reactor/riser, in which hydrocarbon cracking reactions occur, and a regenerator, in which the catalyst regains its activity by burning off the coke deposited on it during cracking. A more detailed description of the process is available in Avidan and Shinnar (1990). Recently, Dave and Saraf (2002) reviewed the extensive literature available on modelling of industrial FCC units. Selection of an appropriate lumping scheme was one of the most important issues in this modelling exercise. The ten-lump kinetic scheme developed by Jacob et al. (1976) and the five-lump kinetic model proposed by Ancheyta et al. (1999) were examined closely. The virtue of the more detailed lumping scheme over less detailed models is that its rate constants are independent of feed composition. However, the use of such models is limited by two problems: detailed characterisation of streams is not available on a regular basis, and elaborate kinetic information is scarcely available. Thus, a balance between the kinetic description required and the cost of laboratory analysis often decides the selection of the lumping strategy.
2.1. Five-lump model
Dave and Saraf (2002) modified the original scheme of Ancheyta et al. (1999) by assuming that gasoline and LPG also convert to coke. Figure 1 presents the modified kinetic scheme used in the present work. Since the rate constants for this model depend on feed quality, they were obtained by tuning against the available industrial data. In the present work, the five-lump kinetic model developed by Dave and Saraf (2002) is used for the multiobjective analysis of gasoline production maximisation versus minimisation of CO emission from the regenerator.
2.2. Ten-lump model
The five-lump model is unable to capture feed quality, and hence the impact of the recycle slurry flow rate on the performance of the unit. It was therefore necessary to develop a model based on a detailed characterisation of the feed. In the present study, the ten-lump kinetic scheme (as shown in figure 2) developed by Jacob et al. (1976) is used with some modifications. Since Jacob et al. (1976) lumped coke and gas together, the regenerator model cannot be integrated with the scheme in its original form. In this case, the gas yield was predicted from a correlation available in Gary and Handwerk (1994), making the ten-lump model suitable. This model has also been tuned for an industrial application. Since the hydrodynamics and detailed design were idealised, it was necessary to tune the model to match industrial performance. Dave and Saraf (2002) described the regenerator with a two-region (dense bed and dilute phase) model, essentially following the scheme of Krishna et al. (1985) with some modification. The bulk of the coke combustion occurs in the dense bed; the dilute phase is the region above the dense bed, where 'after-burning' of carbon monoxide and catalyst entrainment are the main effects. The dense bed is modelled as a
well-mixed reactor with regard to the catalyst and as a plug flow reactor for the gas stream, whereas the dilute phase is modelled as a plug flow reactor for both the entrained catalyst particles and the gas stream.
Figure 1 Five-lump kinetic scheme (Dave and Saraf, 2002).
Figure 2 Ten-lump kinetic scheme (Jacob et al., 1976).
The model involves many ordinary differential equations, which are non-stiff and are solved using the Runge-Kutta-Gill method (Gupta, 1995). The integrated reactor and regenerator model was solved using the Newton-Raphson method (Gupta, 1995). The complete set of reactor and regenerator model equations, as well as the values of the associated parameters, are given in Dave and Saraf (2002).
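As an illustration of this solution step, the sketch below integrates a generic five-lump riser model of the type shown in figure 1. All rate constants and the residence time are hypothetical placeholders, not the tuned values of Dave and Saraf (2002); gas oil cracking is taken as second order and the secondary reactions as first order, following common five-lump practice, and scipy's Runge-Kutta integrator stands in for the Runge-Kutta-Gill scheme used in the paper.

```python
# Minimal five-lump riser sketch: gas oil cracks to gasoline, LPG, dry gas
# and coke; gasoline and LPG also crack to coke. All k values are
# hypothetical placeholders (units 1/s on a dimensionless riser length).
import numpy as np
from scipy.integrate import solve_ivp

k = {"go_gl": 0.30, "go_lpg": 0.09, "go_dg": 0.05, "go_ck": 0.04,
     "gl_lpg": 0.03, "gl_ck": 0.01, "lpg_ck": 0.005}

def rates(t, y):
    # y = [gas oil, gasoline, LPG, dry gas, coke] mass fractions
    go, gl, lpg, dg, ck = y
    r_go = -(k["go_gl"] + k["go_lpg"] + k["go_dg"] + k["go_ck"]) * go**2
    r_gl = k["go_gl"] * go**2 - (k["gl_lpg"] + k["gl_ck"]) * gl
    r_lpg = k["go_lpg"] * go**2 + k["gl_lpg"] * gl - k["lpg_ck"] * lpg
    r_dg = k["go_dg"] * go**2
    r_ck = k["go_ck"] * go**2 + k["gl_ck"] * gl + k["lpg_ck"] * lpg
    return [r_go, r_gl, r_lpg, r_dg, r_ck]  # sums to zero: mass conserved

sol = solve_ivp(rates, (0.0, 5.0), [1.0, 0.0, 0.0, 0.0, 0.0], rtol=1e-8)
print("riser outlet yields:", sol.y[:, -1])
```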
3. Multiobjective Optimisation The operation of the FCC unit described above is now optimised. It was assumed that the economics of the FCC unit are affected by F_feed, T_feed, F_cat, T_air and F_air. Furthermore, it is assumed that the regenerator works in full combustion mode and that no CO boiler is present. Many refiners prefer to work in full combustion mode because of the regeneration effect. Many objective functions can be considered in an optimisation study. In the given unit it was imperative to maximise gasoline production, so this was chosen as one of the objectives. Maximisation of gasoline requires more conversion and more throughput, and hence more coke formation. This coke deactivates the catalyst, and it is therefore required to burn off as much coke as possible in the regenerator, which demands a higher amount of air. Burning of coke results in the formation of the pollutants CO and CO2. The regenerator size is fixed, and the capacity of the regenerator feed air blower is limited. To increase the gasoline production beyond a certain extent, a trade-off is generally made by operating the regenerator in partial combustion mode (i.e. emitting more CO through the regenerator). Refiners would also like to increase the recycle slurry flow rate, because unconverted heavy slurry is the lowest-priced stream. Increasing the recycle flow has an adverse impact on coke formation; hence, to maintain conversion at the desired level, a trade-off is made by partial conversion. Based on the above discussion, the objectives involved here are gasoline production maximisation (maximisation of gasoline yield + throughput), minimisation of regenerator air flow rate, minimisation of CO emission and maximisation of recycle slurry flow rate.
Considering all the objectives simultaneously is difficult to analyse, so for simplicity we form three problems, each optimising two objectives simultaneously. The optimisation problems are solved using a genetic algorithm (GA) made suitable for multiobjective problems, known as NSGA. NSGA differs from a traditional GA in the sorting procedure adopted. The potential solutions are sorted twice: first, nondominated solutions are obtained, and these are then sorted according to the number of other chromosomes nearby. Thus, the algorithm is able to find solutions close to the true set of compromised solutions while maintaining diversity in the gene pool. Details of this algorithm are provided by Deb and Srinivas (1995).
3.2. Problem 1 (Maximisation of gasoline yield vs. minimisation of CO emission)
Max F1(X) = Y_conversion
(1)
Y_conversion = sum of the yields of gasoline, LPG, dry gas and coke. It should be noted that, using the five-lump model (Dave and Saraf, 2002), maximisation of conversion is equivalent to maximisation of gasoline yield. Max F2(X) = 10.0 / (1 + X_CO)
(2)
where the decision variables are X = {T_feed, F_cat, F_air, T_air}*. Bounds on the decision variables are specified such that they capture the operation of all industrial FCC units. It should be emphasised that the total feed flow rate F_tot (F_fresh + F_recy) is fixed at 28 kg/s. The constraints other than the model equations are as follows. T_rgn < 1000 K
(3)
C_rgn < 0.001 (kg coke/kg catalyst)
(4)
Constraints on the regenerator temperature and the coke content of the regenerated catalyst were required in order to prevent catalyst deactivation due to thermal damage and excessive coke deposition. The Pareto optimal solutions obtained, along with the decision variables corresponding to each point on the Pareto curve, are given in figure 3.
3.3. Problem 2 (Maximisation of gasoline production vs. minimisation of CO emission)
Max F1(X) = F_tot * Y_gasoline
(5)
Max F2(X) = 10.0 / (1 + X_CO)
(6)
where X = {F_tot, T_feed, F_cat, F_air, T_air}
* T_feed = feed preheat temperature, F_cat = catalyst flow rate, F_air = air flow rate to the regenerator, T_air = regenerator air preheat temperature, F_fresh = fresh feed flow rate, F_recy = recycle feed flow rate, F_tot = total feed flow rate, X_CO = composition of carbon monoxide in the flue gas (vol%)
It should be noted that the five-lump model (Dave and Saraf, 2002) is again used. It was assumed that the feed composition is maintained constant by fixing the recycle ratio. Appropriate bounds on the decision variables are specified, and the constraints for this problem are the same as those listed in problem 1.
3.4. Problem 3 (Maximisation of recycle slurry flow rate vs. maximisation of gasoline production)
Max F1(X) = (F_fresh + F_recy) * Y_gasoline
(7)
Max F2(X) = F_recy
(8)
where X = {F_recy, F_fresh, T_feed, F_cat, F_air, T_air}
The modified ten-lump model is used, since the feed composition is allowed to change in this problem. Appropriate bounds on the decision variables are specified and the constraints of problem 1 are applied. An additional constraint is also specified to keep the CO emission within a limit: X_CO < 2%
(9)
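To make the sorting step that distinguishes NSGA from a plain GA concrete, the sketch below ranks a population by non-domination and then counts neighbours within a sharing radius, mirroring the two-stage sorting described above. It is an illustrative reimplementation, not the exact code of Deb and Srinivas (1995), and the sample objective vectors are hypothetical.

```python
# Non-dominated ranking plus a simple niche count (both objectives maximised).
import numpy as np

def dominates(f, g):
    """True if objective vector f dominates g (maximisation)."""
    f, g = np.asarray(f), np.asarray(g)
    return bool(np.all(f >= g) and np.any(f > g))

def nondominated_rank(F):
    """Rank 0 = Pareto front of the current population, rank 1 the next, etc."""
    F = np.asarray(F, dtype=float)
    ranks = np.full(len(F), -1)
    r, remaining = 0, set(range(len(F)))
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = r
        remaining -= front
        r += 1
    return ranks

def niche_count(F, sigma_share):
    """Number of other solutions within sigma_share in objective space."""
    F = np.asarray(F, dtype=float)
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    return (d < sigma_share).sum(axis=1) - 1

# e.g. F[i] = (gasoline production, 10.0 / (1 + X_CO)) for chromosome i
F = [(60.0, 4.0), (65.0, 3.5), (58.0, 4.2), (64.0, 3.4)]
print(nondominated_rank(F), niche_count(F, sigma_share=2.0))
```

Within NSGA, the first ranking drives selection pressure towards the Pareto front, while the niche count penalises crowded regions and so maintains diversity along the front.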
Because of space limitations, the Pareto optimal solutions obtained for problems 2 and 3 are not shown. The results are expected to enable the process engineer to gain useful insights and to locate compromised operating conditions. In fact, such a procedure can be applied to many other objectives for the FCCU.
Figure 3 Pareto optimal solutions obtained for problem 1 using the five-lump model.
4. Conclusions Two different kinetic lumping models were tuned in order to simulate an industrial FCC unit. Operational insights are developed by performing a multiobjective optimisation study using the nondominated sorting genetic algorithm. Pareto optimal solutions are obtained for the different objective functions and constraints considered, which are expected to help the process engineer locate a favoured solution.
5. References
Ancheyta, J.J., Lopez, I.F., Aguilar, R.E. and Moreno, M.J., 1997, A Strategy for Kinetic Parameter Estimation in the Fluid Catalytic Cracking Process, Ind. Eng. Chem. Res., 36, 5170-5174.
Avidan, A.A. and Shinnar, R., 1990, Development of Catalytic Cracking Technology. A Lesson in Chemical Reactor Design, Ind. Eng. Chem. Res., 29, 931-942.
Dave, D.J. and Saraf, D.N., 2002, A model suitable for rating and optimization of industrial FCC units, selected for publication in Indian Chemical Engineer.
Deb, K. and Srinivas, N., 1995, Multiobjective optimization using nondominated sorting in genetic algorithms, Evol. Comput., 2, 106-114.
Gary, J.H. and Handwerk, G.E., 1993, Petroleum Refining, Technology and Economics, 3rd ed., Marcel Dekker.
Gupta, S.K., 1995, Numerical Methods for Engineers, Wiley Eastern/New Age Intl.
Jacob, S.M., Gross, B., Voltz, S.E. and Weekman, V.M., Jr., 1976, A Lumping and Reaction Scheme for Catalytic Cracking, AIChE J., 22(4), 701-713.
Krishna, A.S. and Parkin, E.S., 1985, Modeling the Regenerator in Commercial Fluid Catalytic Cracking Units, Chem. Eng. Prog., 81(4), 57-62.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
629
Novel Operational Strategy for the Separation of Ternary Mixtures via Cyclic Operation of a Batch Distillation Column with Side Withdrawal D. Demicoli and J. Stichlmair Lehrstuhl für Fluidverfahrenstechnik, Technische Universität München, Boltzmannstr. 15, D-85748, Germany, email: [email protected]
Abstract In this paper we introduce a novel operational policy for the purification of an intermediate boiling component via batch distillation. The novel operational policy is based on feasibility studies of a cyclic distillation column provided with a side withdrawal. The process is validated via computer-based simulations. Furthermore, the effects of the most important process parameters are investigated.
1. Introduction Batch distillation is a very efficient and advantageous unit operation for the separation of multicomponent mixtures into pure components. Due to its flexibility and low capital costs, batch distillation is becoming increasingly important in the fine chemicals and pharmaceutical industries. Nevertheless, there are intrinsic disadvantages associated with conventional batch processes: long batch times, high temperatures in the charge vessel and complex operational strategies. Hence, alternative processes and operating policies which have the potential to overcome these disadvantages are being extensively investigated. Sørensen and Skogestad (1996) compared the operation of regular (fig. 1a) and inverted (fig. 1b) batch distillation columns for the separation of binary mixtures. In a later work, Sørensen and Prenzler (1997) investigated the cyclic, or closed, operation for the separation of binary mixtures. Warter et al. (2002) presented simulations and experimental results for the separation of ternary mixtures in the middle vessel column (fig. 1c); the cyclic operation was applied also in this case. Multicomponent mixtures can be separated in the multi-vessel distillation column, which may also be operated in closed operation (Wittgens et al., 1996). In this paper we introduce a novel process for the separation of ternary mixtures via cyclic operation of a batch distillation column provided with a side withdrawal (fig. 1d). This consists of a distillation column equipped with sump and distillate vessels, to which the charge is loaded at the beginning of the process, and a liquid withdrawal section placed in the middle of the column.
Fig. 1: Different column types — (a) regular (b) inverted (c) middle vessel and (d) novel cyclic batch distillation column with side withdrawal.
2. Feasibility The column shown in figure 1d can be visualised as an inverted batch distillation column placed on top of a regular batch column, the two being connected at the withdrawal stage. Hence, feasibility studies for the regular and inverted batch distillation columns may be applied to the novel process, provided that the concentration of the withdrawal tray lies on the column's profile. Therefore, it is possible to obtain pure intermediate-boiling product b from an infinite column operated at infinite reflux ratios only if the distillate and sump vessels contain the binary mixtures a-b (light-intermediate boilers) and b-c (intermediate-heavy boilers), respectively.
3. Process The charge was initially equally distributed between the sump and distillate vessels. The column was then operated in a sequence of two process steps: a) Closed operation mode. During this step, the light and heavy boilers were accumulated in the distillate and sump vessels, respectively (fig. 2a, b). Hence, the column was operated at total reflux and with no side-product withdrawal until the concentration of the high boiler in the distillate vessel and that of the low boiler in the sump were sufficiently low. b) Open operation mode. During this step the withdrawal stream divided the column into an inverted (top) and a regular (bottom) batch column. Hence, the reflux ratio of the lower column was used to control the heavy boiling impurity c in the withdrawal stream. The reboil ratio of the inverted column was analogously used to control the light boiling impurity.
The internal reflux and reboil ratios are related to the flow rate of the withdrawal stream through the mass balance around the withdrawal stage:

W = L_U - L_L = V * (1/R_B - R_L);   W/V = 1/R_B - R_L   (1)
At the end of the process, the internal reflux ratios were equal to unity and the flow of the withdrawal stream was equal to zero, i.e. R_L = R_B = 1 and W = 0 (fig. 2d).
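A small numerical check of eq. (1) may be helpful; the vapour rate and ratios below are illustrative values, not results of the simulated column.

```python
# Withdrawal-stage mass balance, eq. (1): W = L_U - L_L = V*(1/R_B - R_L).
def withdrawal_flow(V, RB, RL):
    return V * (1.0 / RB - RL)

V = 100.0                                     # vapour flow rate [mol/s]
print(withdrawal_flow(V, RB=1.25, RL=0.70))   # 10.0 mol/s drawn off
print(withdrawal_flow(V, RB=1.0, RL=1.0))     # 0.0: end-of-process condition
```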
Fig. 2: Hold-up in (a) distillate vessel, (b) sump, (c) side product accumulator and (d) internal reflux and reboil ratios.
4. Composition of the Charge To study the effect of the composition of the charge, equal amounts of feeds of different compositions were processed in the same column operated in closed loop. The separation was carried out in the shortest time when the charge was rich in the intermediate boiling component (fig. 3a). Furthermore, even though the duration of the start-up step increased with decreasing concentration of b in the feed, its effect was of minor importance compared with the increase in the duration of the production step (open operation mode). This was due to the fact that both the light/intermediate and the intermediate/heavy separations could, at the beginning of the process, be carried out at low reflux and reboil ratios (fig. 3b) for feeds rich in b. On the other hand, if the charge contained low amounts of the intermediate boiler, both the light/intermediate and the heavy/intermediate separations required high reflux and reboil ratios. This is in agreement with the results obtained by Sørensen and Skogestad (1996) in their
comparative studies on the regular and inverted batch columns. For feeds containing low amounts of b, the recovery dropped significantly; hence, the process time decreased for very low concentrations of b in the charge. Therefore, our investigations were limited to the case in which the feed was much richer in b than in a and c. In such cases the relative content of a and c played a minor role and influenced mainly the duration of the start-up of the process, i.e. the closed operation mode.
Fig. 3: Effect of composition of charge on (a) duration of the process, (b) internal reflux and inverse of the internal reboil ratios.
5. Effects of the Geometric Parameters The geometric parameters of the process were identified as the total number of stages and the position of the withdrawal tray. 5.1. Number of stages The total number of stages was varied while the position of the withdrawal tray was kept in the middle of the column and the composition controllers were placed two stages
below and two stages above the withdrawal tray. The set-points of the two controllers were not varied during this investigation. Hence, the concentration profile around the withdrawal tray was fixed by the two control loops, and the composition of the intermediate boiler was independent of the number of stages. With an increasing number of stages, lower reflux ratios were required to achieve high-purity b; hence, the recovery rate of the intermediate boiler increased and the process time decreased. The concentration of b in the top and sump vessels at which the process became infeasible decreased with increasing number of stages. Hence, the recovery of b, σ_b (and the purity of the light and heavy boilers), increased with increasing number of stages (fig. 4).

Fig. 4: Effect of total number of stages on (a) recovery and (b) purity of the products.

5.2. Position of withdrawal tray
The position of the withdrawal stage determined the relative size of the two column sections. Hence, by shifting the withdrawal tray upwards, the upper column section got smaller and the purity of the light boiling product decreased while that of the heavy boiler increased, and vice versa. Since the control loops fixed the concentration profile around the withdrawal stage, the purity of the intermediate boiler was not affected. On the other hand, as the withdrawal tray was moved away from the middle of the column, the recovery rate of the intermediate boiler decreased.
6. Termination Criteria for the First Process Step Increasing the duration of the first process step reduced the concentration of the light boiler present in the sump of the column, and that of the heavy boiler in the top vessel, at the beginning of the second step (fig. 5b). Hence, with increasing duration of the first process step, the concentration of b in the column at the beginning of the second process step increased. This led to an increased concentration of the middle boiling product and to an increased recovery of the light and heavy boiling products (fig. 5).
Fig. 5: Effect of the duration of the start-up on (a) purity and recovery and (b) moles of c in the distillate vessel at the end of the start-up phase.
7. Set-Point to Composition Controllers The control loop of the upper column section controlled the composition of the low boiler a in the liquid phase two stages above the withdrawal stage, while the lower control loop controlled the composition of the high boiling impurity c two stages below the withdrawal stage. Hence, the concentration of impurities in the withdrawal stream increased with increasing set-points. The duration of the process increased with decreasing set-points, because higher reflux and reboil ratios were required to reach the lower set-points, i.e. high-purity b. Set-points lower than the concentrations reachable at infinite reboil and reflux ratios were infeasible.
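A minimal sketch of one such composition loop is given below: a PI controller raises the lower column's reflux ratio when the heavy impurity c, measured two stages below the withdrawal tray, exceeds its set-point. The gain, reset time and set-point are hypothetical tuning values, not those used in the simulated column.

```python
# Single PI composition loop; an analogous loop would manipulate the
# reboil ratio of the upper (inverted) column using the light impurity a.
class PI:
    def __init__(self, Kc, tau_i, u0, u_min, u_max):
        self.Kc, self.tau_i = Kc, tau_i
        self.u0, self.u_min, self.u_max = u0, u_min, u_max
        self.integral = 0.0

    def step(self, sp, pv, dt):
        e = sp - pv
        self.integral += e * dt
        u = self.u0 + self.Kc * (e + self.integral / self.tau_i)
        return min(max(u, self.u_min), self.u_max)   # clip to valid range

# Negative gain: impurity above set-point (negative error) raises reflux.
reflux_ctrl = PI(Kc=-50.0, tau_i=600.0, u0=0.8, u_min=0.0, u_max=1.0)
RL = reflux_ctrl.step(sp=0.005, pv=0.009, dt=1.0)  # x_c two stages below tray
```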
8. Conclusion In this paper we have introduced a novel operational policy for the purification of an intermediate boiling component via the cyclic operation of a batch distillation column with a side withdrawal. The feasibility of the process was investigated by considering the novel column configuration as an inverted batch distillation column placed over a regular batch column. A novel operating strategy, based on the feasibility studies, was developed and verified by computer-aided simulations. Furthermore, the influence of the most important parameters on the performance of the process was systematically investigated.
9. Notation
a    Low boiling component
b    Intermediate boiling component
c    High boiling component
B    Bottom fraction; flow rate of bottom product [mol/s]
D    Distillate fraction; flow rate of distillate product [mol/s]
M    Middle vessel fraction
SP   Side product accumulator
L    Liquid flow rate [mol/s]
V    Vapour flow rate [mol/s]
W    Flow rate of withdrawal stream [mol/s]
R_L  Reflux ratio
R_B  Reboil ratio
x    Molar fraction
σ    Recovery [mol/mol]
σ̇    Recovery rate [mol/s]
10. References
Sørensen, E. and Skogestad, S., 1996, Comparison of regular and inverted batch distillation, Chem. Engng. Sci., Vol. 51, No. 22, 4949-4962.
Sørensen, E. and Prenzler, M., 1997, A cyclic operating policy for batch distillation: theory and practice, Comp. Chem. Engng., Vol. 21, Suppl., S1215-S1220.
Warter, M., Demicoli, D. and Stichlmair, J., 2002, Batch distillation of zeotropic mixtures in a column with a middle vessel, Comp. Aided Chem. Engng., Vol. 10, 385-390.
Wittgens, B., Litto, R., Sørensen, E. and Skogestad, S., 1996, Total reflux operation of multivessel batch distillation, Comp. Chem. Engng., Vol. 20, Suppl., S1041-S1046.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
635
Modelling and Optimisation of a Semibatch Polymerisation Process
Ludwig Dietzsch*, Ina Fischer
BTU Cottbus, Lehrstuhl Prozesssystemtechnik, Postfach 101344, D-03013 Cottbus
Stephan Machefer
BTU Cottbus, Lehrstuhl Chemische Reaktionstechnik, D-03013 Cottbus
Hans-Joachim Ladwig
BASF Schwarzheide GmbH, D-01986 Schwarzheide
Abstract This paper focuses on the modelling and optimisation of an industrial semibatch distillation process with a polymerisation reaction taking place in the reboiler. The dynamic model presented here is implemented in CHEMCAD and validated with experimental data from the industrial plant. An approach to optimise the economic performance of the semibatch process is discussed. As a result of the work so far, the control structures and operating policies were improved, and it was shown that further optimisation is worthwhile.
1. Introduction Because of the increasing trend toward the production of low-volume/high-cost materials, batch and semibatch processes are becoming more and more important. In today's competitive markets, this implies the need for consistently high quality and improved performance. Over the last few years there has been growing interest in techniques for the determination of optimal operating policies for batch processes. Dynamic simulation has become a widely used tool in analysis, optimisation, control structure selection and controller design. Some of the most recent work has been concerned with the mathematical optimisation of batch process performance (Li, 1998; Li et al., 1998). In this paper an industrial semibatch polymerisation process is considered. In order to guarantee the product quality, carefully controlled reaction conditions are necessary. The general aim of this work is to ascertain optimal state and control profiles and to develop a model-based control scheme. As a first step, this paper introduces the dynamic model, which is validated with experimental data, and describes the optimisation approach. A further aim of the work is to assess the capabilities of the commercial flowsheet simulator CHEMCAD in the optimisation of the performance of semibatch polymerisation processes. Finally, the formulation of the mathematical optimisation problem, solution strategies and their implementation in CHEMCAD are discussed.
* To whom correspondence is to be addressed. Fax: ++49 355 691130. Phone: ++49 355 691119. E-mail: [email protected]
2. Process Description The industrial process (Figure 1) consists of a reactor (acting as the reboiler), a packed column, a total condenser and two distillate vessels. The polymer is manufactured through reversible linear polycondensation, or step-growth polymerisation. The overall reaction can be characterised by the following scheme: dialcohol (A) + dicarboxylic acid (B) ⇌ polyester (P) + water (C). The actual reaction mechanism is much more complex (Section 3.1) and leads to a polymer chain length distribution. The polyesterification is an exothermic reaction. At the beginning, dialcohol and dicarboxylic acid are charged to the reactor. Then the reactor is heated up to the operating temperature. A further amount of dialcohol is fed to the reactor during the batch. Water is distilled from the reboiler, and an excess of dialcohol is used to shift the reaction equilibrium to the product side. In the first period the pressure is kept constant and the distillate, nearly pure water, is accumulated in the first vessel. As the reaction progresses, it becomes more difficult to remove the condensate. Hence, the pressure is reduced in the second period to evaporate the remaining water. The concentration of dialcohol in the distillate increases, and in this period the distillate is accumulated in the second vessel. The end of the batch is reached when the product shows the required acid value, carboxyl number and viscosity. Temperatures, pressures and flow rates are measured on-line (Figure 1). Furthermore, the reaction is followed by on-line determination of viscosity, acid value and carboxyl number. The water content in the liquid polymer is found by off-line analysis. The major costs arise from the raw materials and the hourly costs of energy and wages. Thus a reduction of the batch time and of the loss of dialcohol through the distillate is desirable. In addition, stable operation and less varying batch times are to be achieved by better control.
Figure 1. The semibatch process.
3. Modelling and Simulation
3.1. Rigorous modelling
The model is built in CHEMCAD with the additions CC-DColumn and CC-Reacs. Different control loops are implemented. The characterisation of the complex kinetics of the polymerisation reaction is a very important part of the modelling.

Reaction kinetics. According to Flory (1937, 1939, 1940), self-catalysed polyesterifications follow third-order kinetics with a second-order dependence on the carboxyl group concentration and a first-order dependence on the hydroxyl group concentration. Experimental verifications show deviations for conversions less than 80%; the reaction then follows second-order kinetics with a first-order dependence on both the carboxyl group and the hydroxyl group concentration, indicating a bimolecular reaction between one carboxyl group and one hydroxyl group. A simplified approach is chosen for the dynamic CHEMCAD model. Following Flory's investigations, the polyesterification is described by a consecutive-parallel reaction scheme:

(I)   ν_A^I · A + ν_B^I · B  ⇌  ν_O^I · O + ν_C^I · C      (1)

(II)  ν_O^II · O + ν_A^II · A  ⇌  ν_P^II · P + ν_C^II · C   (2)
Two model components are introduced: an oligomer (O) as intermediate product and the polymer (P) as final product. The chain length distribution is not considered in the model. The oligomer and the polymer are characterised by their average molecular weight and chain length. The following rate equations are considered for the polyesterification:
χ^I = k_f^I · [A]^(α_A^I) · [B]^(α_B^I) - k_r^I · [O]^(α_O^I) · [C]^(α_C^I)      (3)

χ^II = k_f^II · [O]^(α_O^II) · [A]^(α_A^II) - k_r^II · [P]^(α_P^II) · [C]^(α_C^II)   (4)
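As an illustration, the sketch below evaluates the rate expressions (3)-(4) with Arrhenius rate constants, k = k0·exp(-Ea/(R·T)). All pre-exponential factors, activation energies and reaction orders are hypothetical placeholders; the tuned values of the CHEMCAD model are not reproduced here.

```python
# Rate expressions (3)-(4) with Arrhenius rate constants (all values mock).
import math

R = 8.314  # J/(mol K)

def arrhenius(k0, Ea, T):
    return k0 * math.exp(-Ea / (R * T))

def rate_I(c, T, alpha=1.0):
    # chi_I = kf_I*[A]^a*[B]^a - kr_I*[O]^a*[C]^a, eq. (3)
    kf = arrhenius(k0=5.0e6, Ea=7.5e4, T=T)
    kr = arrhenius(k0=1.0e5, Ea=8.0e4, T=T)
    return kf * c["A"]**alpha * c["B"]**alpha - kr * c["O"]**alpha * c["C"]**alpha

def rate_II(c, T, alpha=1.0):
    # chi_II = kf_II*[O]^a*[A]^a - kr_II*[P]^a*[C]^a, eq. (4)
    kf = arrhenius(k0=2.0e6, Ea=7.8e4, T=T)
    kr = arrhenius(k0=5.0e4, Ea=8.2e4, T=T)
    return kf * c["O"]**alpha * c["A"]**alpha - kr * c["P"]**alpha * c["C"]**alpha

c = {"A": 2.0, "B": 1.5, "O": 0.3, "C": 0.8, "P": 0.1}  # mol/kg
print(rate_I(c, T=473.15), rate_II(c, T=473.15))
```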
The kinetic parameters of the polyesterification were determined from the literature (Beigzadeh and Sajjadi, 1995; Chen and Hsiao, 1981; Chen and Wu, 1982; Kuo and Chen, 1989) and from process data.

Thermodynamic model. The vapour liquid equilibrium is described by the NRTL equation, considering a non-ideal liquid phase. The NRTL parameters for the system dialcohol/water are taken from Gmehling (1991, 1998). Because of missing experimental data for carboxylic
acid systems and, of course, for the model components (the oligomer and the polymer), UNIFAC with different modifications (Gmehling and Wittig, 2002; Larsen and Rasmussen, 1987; Torres-Marchal and Cantalino, 1986) is used to predict the vapour liquid equilibrium and to determine NRTL parameters from it. Since there are considerable differences between the prediction methods (Figures 2-3), this choice has an important effect on the simulation results.
Figure 2. Vapour liquid equilibrium oligomer-dialcohol at 5 kPa.
Figure 3. Vapour liquid equilibrium oligomer-dialcohol at 101 kPa.
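For reference, the binary NRTL activity coefficient expressions used for pairs such as dialcohol/water are sketched below. The tau and alpha values shown are hypothetical illustration values, not the fitted DECHEMA parameters cited above.

```python
# Binary NRTL activity coefficients (standard two-parameter form).
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    x2 = 1.0 - x1
    G12, G21 = math.exp(-alpha * tau12), math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

g1, g2 = nrtl_binary(x1=0.3, tau12=1.2, tau21=0.8)   # hypothetical taus
```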
3.2. Simulation results The model was validated with experimental data from the industrial site. Figures 4-5 show selected simulation results in comparison with the measured profiles. The results are satisfactory.
Figure 4. Simulated and measured concentration profiles in the reactor.
Figure 5. Simulated and measured amount of distillate.
The dynamic model is employed to analyse the batch performance, to investigate different control loops and to determine the potential for improvement. Suggestions for improving the control structures and the operating policies can be derived from the
dynamic simulation, which leads to better performance and to shorter and less varying batch times.
4. Optimisation Approach The objective of the optimisation is to minimise the batch time. Feed and reflux ratio profiles are considered as decision variables within the optimisation problem. Constraints to be taken into account are the product specifications (acid value, carboxyl number, viscosity, water content), the feed amount and the limitation of the feed flow rate. The model DAEs are discretised and the resulting algebraic system is optimised with an NLP algorithm (e.g. an SQP solver). The objective function and the constraints can be defined as VBA macros and then be computed by CHEMCAD. Present work is concerned with the implementation of the optimisation algorithm.
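A skeleton of such a batch-time minimisation is sketched below with scipy's SLSQP solver. Here simulate_batch() is a mock stand-in for the CHEMCAD flowsheet evaluation, and all numbers (bounds, specification limits, the fixed feed amount) are hypothetical; only the problem structure follows the approach described above.

```python
# Batch time minimisation under product-spec constraints (all values mock).
import numpy as np
from scipy.optimize import minimize

def simulate_batch(z):
    """z = [batch time, feed rates on 4 intervals]; returns mock specs."""
    t_batch, feed = z[0], np.asarray(z[1:])
    acid_value = 120.0 / t_batch + 0.5 * feed.sum()   # placeholder model
    water_content = 5.0 / t_batch                     # placeholder model
    return acid_value, water_content

def objective(z):
    return z[0]                                       # minimise batch time

def spec_constraints(z):
    acid_value, water = simulate_batch(z)
    return [8.0 - acid_value, 0.5 - water]            # g(z) >= 0

cons = [{"type": "ineq", "fun": spec_constraints},
        {"type": "eq", "fun": lambda z: np.sum(z[1:]) - 4.0}]  # feed amount
res = minimize(objective, np.array([30.0, 1.0, 1.0, 1.0, 1.0]),
               method="SLSQP",
               bounds=[(5.0, 40.0)] + [(0.0, 2.0)] * 4,
               constraints=cons)
```

In the actual implementation, the objective and constraint functions would call the CHEMCAD model through its macro interface instead of the placeholder expressions.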
5. Conclusions In this paper a dynamic model for a semibatch polymerisation process was presented. It was validated with experimental data from the industrial site and used for simulating the process. The simulation results show that the model can adequately describe the process, and it therefore constitutes the basis for the optimisation. The flowsheet simulator CHEMCAD has proved an efficient and powerful tool for the modelling, simulation and optimisation of semibatch polymerisation processes. Through findings gained from the dynamic simulation, the batch operating time has already been shortened and its variation reduced, and so the economic performance of the industrial process was improved. A mathematical optimisation approach is now being implemented to determine optimal operating policies. Future work will deal with the implementation of the optimal trajectories considering disturbances of the process, as well as with on-line optimisation.
6. Nomenclature
[...] = concentration, mole/kg
A = dialcohol
B = dicarboxylic acid
C = condensate (water)
O = oligomer
P = polymer
k = reaction rate constant
t = time, s
Greek letters
α = reaction order
χ = reaction rate, mole/(kg·s)
ν = stoichiometric coefficient
Subscripts
f = forward reaction
r = reverse reaction
Superscripts
I = first reaction, equation (1)
II = second reaction, equation (2)
7. References
Beigzadeh, D. and Sajjadi, S., 1995, J. Polym. Sci. Part A: Polym. Chem., 33, 1505.
Chen, S.A. and Hsiao, J.C., 1981, J. Polym. Sci. Part A: Polym. Chem., 19, 3123.
Chen, S.A. and Wu, K.C., 1982, J. Polym. Sci. Part A: Polym. Chem., 20, 1819.
Flory, P.J., 1937, JACS, 59, 466.
Flory, P.J., 1939, JACS, 61, 3334.
Flory, P.J., 1940, JACS, 62, 2261.
Gmehling, J., 1991, Vapor Liquid Equilibrium Data Collection, Vol. 1: Aqueous Organic Systems, Chemistry Data Series, DECHEMA, Frankfurt.
Gmehling, J., 1998, Vapor Liquid Equilibrium Data Collection, Vol. 1a: Aqueous Organic Systems, Chemistry Data Series, DECHEMA, Frankfurt.
Gmehling, J. and Wittig, R., 2002, Ind. Eng. Chem. Res., 28, 445.
Kuo, C.T. and Chen, A.S., 1989, J. Polym. Sci. Part A: Polym. Chem., 27, 2793.
Larsen, B.L. and Rasmussen, P., 1987, Ind. Eng. Chem. Res., 26 (11), 2274.
Li, P., Garcia, H.A., Wozny, G. and Reuter, E., 1998, Ind. Eng. Chem. Res., 37 (4), 1341.
Li, P., 1998, Entwicklung optimaler Führungsstrategien für Batch-Destillationsprozesse, VDI Verlag, Düsseldorf.
Logsdon, J.S. and Biegler, L.T., 1993, Ind. Eng. Chem. Res., 32 (4), 692.
Reid, R., Prausnitz, J.M. and Poling, B.E., 1987, The Properties of Gases and Liquids, McGraw-Hill, New York.
Torres-Marchal, C. and Cantalino, A.L., 1986, Fluid Phase Equilibria, 29, 69.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
641
A Global Approach for the Optimisation of Batch Reaction-Separation Processes
S. Elgue^a, M. Cabassud^a, L. Prat^a, J.M. Le Lann^a, J. Cezerac^b
^a Laboratoire de Génie Chimique, UMR 5503, CNRS/INPT(ENSIACET)/UPS, 5 rue Paulin Talabot, B.P. 1301, 31106 Toulouse Cedex 1, France
^b Sanofi-Synthelabo, 45 Chemin de Meteline, B.P. 15, 04201 Sisteron Cedex, France
Abstract Optimisation of fine chemistry syntheses is often restricted to a dissociated approach to the process, consisting of the separate determination of the optimal conditions of each operating step. In this paper, a global approach to synthesis optimisation is presented. Focusing on the propylene glycol synthesis, this study highlights the benefits and the limits of the proposed methodology compared with a classical one.
1. Introduction The synthesis of fine chemicals or pharmaceuticals, widely carried out in batch processes, implies many successive reaction and separation steps. Thus, synthesis optimisation is often restricted to the determination of the optimal operating conditions of each step separately. This approach is based on the use of reliable optimisation tools and has motivated the development of various optimal control studies in reaction and distillation (Toulouse, 1999; Furlonge, 2000). Nevertheless, such an approach does not necessarily lead to the optimal conditions for the global synthesis. For instance, optimising the conversion of a reaction for which separation between the desired product and the by-products is more difficult than with the reactants will involve an important operating cost, due to further difficulties in the separation scheme. Thus, the necessity to integrate all the process steps simultaneously in a single global optimisation approach clearly appears. Recent advances in dynamic simulation and optimisation have been exploited to accomplish this goal, and optimisation works based on a global approach have recently appeared in the literature (Wajge and Reklaitis, 1999). These works, because of the global process configuration (e.g. reactive distillation processes), the modelling simplifications and the optimisation procedure, do not allow the benefits linked to a global approach to be grasped. The purpose of the present study lies in the comparison between a classical and a global optimisation approach, by means of a global synthesis optimisation framework. Applied to a standard reaction-separation synthesis for propylene glycol production, this comparison emphasises the characteristics of each approach.
2. Optimisation Framework The present work is based on the use of an optimisation framework dedicated to the optimal control of global syntheses (Elgue, 2001). This framework combines an accurate simulation tool with an efficient optimisation method. Because of the step-by-step structure of global syntheses, the simulation tool is based on a hybrid model: the continuous part represents the behaviour of the batch equipment, and the discontinuous one the train of the different steps occurring during the synthesis. A non-linear programming (NLP) technique is used to solve the problems resulting from synthesis optimisation. This NLP approach involves transforming the general optimal control problem, which is of infinite dimension (the control variables are time-dependent), into a finite-dimensional NLP problem by means of control vector parameterisation. According to this parameterisation technique, the control variables are restricted to a predefined form of temporal variation, often referred to as a basis function: Lagrange polynomials (piecewise constant, piecewise linear) or exponential-based functions. A successive quadratic programming method is then applied to solve the resultant NLP.
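The sketch below illustrates what control vector parameterisation means in practice: a time-dependent control u(t) is restricted to a basis function whose few parameters become the NLP decision variables. Shapes, parameter counts and values are illustrative only.

```python
# Two example basis functions for control vector parameterisation.
import numpy as np

def piecewise_constant(t, levels, t_final):
    """u(t) held constant on len(levels) equal time intervals."""
    levels = np.asarray(levels)
    i = np.minimum((np.asarray(t) / t_final * len(levels)).astype(int),
                   len(levels) - 1)
    return levels[i]

def exponential_basis(t, u0, u_inf, tau):
    """Smooth exponential transition from u0 towards u_inf."""
    return u_inf + (u0 - u_inf) * np.exp(-np.asarray(t) / tau)

t = np.linspace(0.0, 100.0, 5)
print(piecewise_constant(t, levels=[20.0, 35.0, 50.0, 50.0, 65.0],
                         t_final=100.0))
print(exponential_basis(t, u0=0.2, u_inf=0.9, tau=25.0))
```

With either basis, the NLP solver only sees the small parameter vector (the interval levels, or u0, u_inf and tau), which is what makes the infinite-dimensional optimal control problem finite-dimensional.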
3. Propylene Glycol Production
Industrially, propylene glycol is obtained by hydration of propylene oxide to glycol. In addition to the monoglycol, smaller amounts of di- and triglycols are produced as by-products, according to the following reaction scheme:

C3H6O + H2O → C3H8O2
C3H6O + C3H8O2 → C6H14O3      (1)
C3H6O + C6H14O3 → C9H20O4
Water is supplied in large excess in order to favour propylene glycol production. The reaction is catalysed by sulfuric acid and takes place at room temperature. In order to dilute the feed and to keep the propylene oxide soluble in water, methanol is also added. The reaction is carried out in a 5-litre stirred, jacketed glass reactor. The initial conditions described by Furusawa et al. (1969) have been applied: an equivolumic feed mixture of propylene oxide and methanol is added to the reactor, initially charged with water and sulfuric acid, for a propylene oxide concentration of 2.15 mol/L. In agreement with previous works reported in the literature, the kinetic parameters of the reaction, modelled by an Arrhenius law, are summarised in table 1.
Table 1: Kinetic model of propylene glycol formation.

Reaction | Pre-exponential factor (L·mol⁻¹·s⁻¹) | Activation energy (kcal·mol⁻¹) | Heat of reaction (kcal·mol⁻¹)
1        | 1.22 10^                             | 18.0                           | -20.52
2        | 1.39 10^^                            | 21.1                           | -27.01
3        | 9.09 10^^                            | 23.8                           | -25.81
Table 2: Components' separation characteristics.

Component                  | Bubble point | Component                        | Bubble point
Propylene oxide (reactant) | 34 °C        | Propylene glycol (product)       | 182 °C
Methanol (solvent)         | 65 °C        | Dipropylene glycol (by-product)  | 233 °C
Water (reactant)           | 100 °C       | Tripropylene glycol (by-product) | 271 °C
According to the components' bubble points (table 2), the distillation involves the separation of the methanol and the remaining reactants (for the most part water) from the reaction mixture. Propylene glycol and the by-products are then recovered from the boiler. The overhead batch distillation column consists of a packed column of 50 cm in length and 10 cm in diameter. A condenser equipped with a complex controlled reflux device completes the process. A heat transfer fluid supplies the reactor jacket, with a temperature varying from 10 to 170°C according to the operating steps.
4. Reaction Optimisation Optimal control of a reaction generally involves two contradictory criteria: the operating time and the conversion. In this paper, the study amounts to the determination of the optimal profiles of temperature and reactant addition for an operating time criterion, with an acid conversion constraint set at 95.5%. In the context of an industrial reactor, the temperature considered is the heat transfer fluid temperature. Two different optimal control problems have been studied, with or without a production constraint on the by-products amount (by-products amount below 3.5% of the total production). In these problems, the temperature profile of the heat transfer fluid is discretised into five identical time intervals, and a piecewise constant parameterisation of the temperature has been adopted. The reactant addition flow rate has also been discretised into five intervals, but only the last four have the same size; the time of the first interval and the value of the piecewise constant then constitute the optimisation variables of the feed flow rate. The results associated with an optimal reaction carried out with a by-products constraint are given in figure 1.
Figure 1: optimal reaction profiles, without and with the by-products constraint (reactor temperature, heat transfer fluid temperature and addition of reactants).

Optimisation results show that a batch addition of the reactant constitutes the best feed profile. Given the reaction scheme (consecutive-competitive), these profiles could be determined a priori (Burghardt and Skrzypek, 1974). The configuration of the
activation energies implies that a temperature increase favours the production of by-products. Thus, it appears that in the optimal profiles the temperature is maintained low, particularly with a by-products constraint. At the end of the reaction step, the reactor temperature increases in order to achieve the conversion constraint (95.5%).
5. Distillation Optimisation Optimal control of a batch distillation column consists in the determination of a suitable reflux policy with respect to a particular objective function (e.g. profit) and set of constraints. For the purpose of the present work, the optimisation problem is defined with an operating time objective function and purity constraints set on the recovery ratio (90%) and on the propylene glycol final purity (80% molar). Different basis functions have been adopted for the control vector parameterisation of the problem: piecewise constant, piecewise linear and hyperbolic tangent functions. Optimal reflux profiles are determined with the final conditions of the previous optimal reactions as initial conditions. The optimal profiles of the resultant distillations are presented in figure 2.
Figure 2: optimal reflux profiles (piecewise constant, piecewise linear and hyperbolic tangent parameterisations).

The determination of the optimal reflux profiles leads to equivalent results for propylene glycol production with or without the by-products constraint. In fact, because the reaction step is carried out with a large water excess, the compositions at the beginning of the separation step are almost identical, independently of the considered production.
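One possible realisation of the hyperbolic tangent reflux parameterisation compared in figure 2 is sketched below: four parameters (initial and final reflux, switching time and steepness) become the optimisation variables. The numbers used are illustrative, not the optimised values.

```python
# Hyperbolic tangent reflux profile with four tunable parameters.
import numpy as np

def reflux_tanh(t, r0, r1, t_switch, steepness):
    return r0 + 0.5 * (r1 - r0) * (1.0 + np.tanh(steepness * (t - t_switch)))

t = np.linspace(0.0, 180.0, 7)   # time in minutes
print(reflux_tanh(t, r0=0.1, r1=0.95, t_switch=90.0, steepness=0.05))
```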
6. Global Synthesis Optimisation In this part of the study, optimisation of the production is carried out according to a global approach: the reaction step and the successive distillation step are considered simultaneously in the evaluation of the optimal operating conditions. In order to compare these results with those of the classical approach, an operating time criterion has been chosen. Thus, the optimisation problem lies in the minimisation of the operating time required for the propylene glycol synthesis. As in the previous optimisations, two kinds of production have been studied: a production with yield and purity constraints, and a production with an additional by-products constraint. In order to compare the different approaches, the same constraints have been adopted. During the optimisation study, the influence of different reflux policies has been evaluated. As in distillation, and for the same reasons, variation of the reflux policy leads
to the same optimal reflux profile and does not affect the other optimal operating conditions. Nevertheless, hyperbolic tangent profiles appear better, allowing a slight reduction of the operating time. Consequently, only optimal reflux profiles based on the hyperbolic tangent function are shown in the graphical representation of the results (figure 3).
Figure 3: optimal profiles for global synthesis, without and with the by-products constraint (heat transfer fluid temperature, propylene glycol composition and reflux ratio).
7. Comparison of Optimisation Approaches The different optimisation approaches are compared in terms of optimal control variables (figure 4) and in terms of criteria (table 4). The propylene glycol synthesis involves three control variables: reactant introduction, heat transfer fluid temperature and reflux policy. Whatever the optimisation approach, a batch profile always constitutes the optimal solution for reactant introduction. Hence, the comparison of control variables is reduced to the comparison of temperature and reflux policy.
Figure 4: Comparison of the different approaches for synthesis optimisation (classical vs. global approach).

For the global approach, the optimal temperature profile of the heat transfer fluid involves a shorter pure reaction step. The beginning of distillation (maximal value of the heat transfer fluid temperature) occurs earlier, before the total conversion of propylene oxide, revealing a coupling between the reaction and the separation steps. Optimal solutions based on a classical approach, because of the dissociated consideration of the operating steps, cannot integrate such a coupling. This coupling involves a particular temperature profile during the pure reaction step, consisting initially of an increased reaction
temperature and then of enhanced cooling, in order to compensate for the by-product formation during the beginning of the separation and to obtain the desired final production.

Table 4: Comparison of operating times according to the optimisation approach.

Synthesis                     | Approach  | Reaction   | Separation | Total      | Gain
Without by-product constraint | Classical | 1 h 04 min | 3 h 02 min | 4 h 06 min |
                              | Global    |            |            | 3 h 30 min | 14%
With by-product constraint    | Classical | 1 h 34 min | 3 h 02 min | 4 h 36 min |
                              | Global    |            |            | 4 h 11 min | 9%
The global approach improves the determination of the optimal operating conditions, resulting in an operating time reduction of 9 to 14%. This reduction appears as a consequence of the coupling between the reaction and separation steps, resulting in a faster reaction and an earlier separation. Nevertheless, the addition of constraints, by restricting the influence of the coupling, reduces the possible benefits. Consequently, the global optimisation approach appears attractive for highly coupled syntheses and offers only small improvements in highly constrained synthesis problems.
8. Conclusion The developed framework presents the capability of solving optimal control problems by a global as well as a classical approach. Its application to a propylene glycol synthesis provides a relevant comparison, leading to strategic conclusions about the advantages of the global approach. Thus, outside cases of global processes (e.g. reactive distillation) in which a global approach constitutes the only optimisation alternative, the global approach appears more favourable for processes with a high degree of freedom. The introduction of constraints in the optimisation problem restricts the process freedom and reduces the influence of a global approach. For highly constrained processes, unfortunately the majority of fine chemistry processes, the global approach tends towards a classical dissociated approach and then provides few improvements. Nevertheless, in such cases our framework offers the advantage of an estimation of these improvements.
9. References
Burghardt, A. and Skrzypek, J., 1974, Chem. Eng. Sci., 29, 1311.
Elgue, S., Cabassud, M., Prat, L., Le Lann, J.M., Casamatta, G. and Cezerac, J., 2001, In Proceedings of ESCAPE 11, Kolding, Denmark, 1127.
Furlonge, H.I., 2000, PhD Thesis, Imperial College, London.
Furusawa, T., Nishiura, H. and Miyauchi, T., 1969, J. Chem. Eng. Japan, 2, 1, 95.
Toulouse, C., 1999, PhD Thesis, I.N.P. Toulouse, Toulouse.
Wajge, R.M. and Reklaitis, G.V., 1999, Chem. Eng. J., 75, 57.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
647
Computational Modelling of Packed Bed Systems N. Gopinathan, M. Fairweather and X. Jia Leeds Institute of Particle Science and Engineering, University of Leeds, Leeds LS2 9JT, UK
Abstract Understanding the way particles pack within beds, and fluids flow through them, is of interest to a wide range of unit operations. This paper describes a novel digitisation approach that is capable of predicting the way in which particles of any shape pack into a container of any geometry. Prediction of fluid flow through such beds is also demonstrated through the use of lattice Boltzmann simulations. Both methods are shown to be capable of providing detailed information on the structure of, and flow within, packed beds, with predictions being in reasonable agreement with available data. Overall, the combination of techniques provides a powerful modelling capability for packed beds that is of value to the improved design of a wide range of unit operations.
1. Introduction The use of particulate bed systems is common within the chemical industry, where they are employed in processes such as absorption, heat recovery, solids handling, distillation and heterogeneous catalytic reaction. A typical matrix bed will contain a range of particle sizes, and in industrial beds the particles can be loaded using a variety of methods that can result in loose or dense packing. The nature of the packing is governed by the particle mass and shape, the loading method and the subsequent operating conditions of the bed. The universally accepted parameter used to represent bed structure is the mean voidage. This parameter does not, however, furnish any information about a bed's local structural properties, and therefore voidage information in the axial, radial and angular directions is also required. These local values are again affected by a number of parameters, such as the particle and tube diameter, the packing method and the particle size distribution through the bed, as well as the bed inlet and outlet conditions. Plug flow is commonly assumed to be the case in most packed beds. Whereas this may be the case at the centre of a large packed bed, it is certainly not so in the vicinity of the container walls and close to the ends of the bed. Considerable flow mal-distribution can occur close to walls, since large void spaces, and hence flow channels, exist in their vicinity that offer less resistance when compared to the centre of the bed. The flow distribution through a bed is therefore closely linked to the bed's structure. A great deal of work (e.g. McGreavy et al., 1986) has been undertaken to quantify the structural properties of packed beds, and the transport processes through them, to permit the design of more efficient unit operations. This work has generally resulted in empirical correlations that, despite their usefulness, are limited by the range of data used
in their formulation. In particular, correlations are generally only available for regular-shaped packing materials with a limited number of particle sizes. This paper describes novel approaches to predicting the structure of such beds and the fluid flow through them. Particle packing is described using a digital approach (Jia and Williams, 2001) that avoids many of the difficulties suffered by conventional packing models. The key innovation is the digitisation of both particle shapes and the packing space, with this approach being applicable to the packing of particles of any shape and size in a container of any geometry. It is also capable of simulating physical phenomena such as size segregation, and the influence of vibration. Fluid flow through packed beds is often simulated using conventional finite-volume, computational fluid dynamic (CFD) techniques. Such simulations have again, however, been limited to simple packing materials (e.g. Taylor et al., 2000) due to the difficulties encountered in representing large numbers of complex-shaped objects. The present work takes an alternative approach by using lattice Boltzmann modelling (Wolf-Gladrow, 2000), which is able to deal with fluid flow through porous structures more readily than conventional Navier-Stokes equation solvers.
2. Particle Packing Model In a digital computer image everything is pixel based. An object, no matter how complex in shape, is represented by pixels. The same applies to a volume rendering in three dimensions (3D). This pixelation (in 2D) or voxelation (in 3D) of objects, and of the space between them, is the basis of the packing algorithm. An arbitrary shape is now simply a coherent collection of pixels, with the space mapped onto a grid. In a 2D implementation a shape may be digitised in two ways. Simple, analytical shapes, for which library support is provided by software development tools (e.g. Visual C++), are created and digitised directly in the computer. More complex shapes are imported as bitmap images, with each image containing a single shape or object. The bitmap may be obtained by scanning the object directly, or its photographic image, or created using drawing packages (e.g. AutoCAD). Extension to 3D is achieved using similar approaches. Since the packing space or container is digitised and represented in the same way as the particles, using a container with a complex geometry presents no additional difficulties. Moreover, just as a particle may be added at any time during packing, the container may be introduced or changed at any time. Particles are allowed to move randomly in a simulation, one grid at a time, on a square lattice. In 2D, there are 8 possible directions (4 orthogonal and 4 diagonal) to choose from, all with equal probability; in 3D the number is 26 (6 orthogonal and 20 diagonal). It is convenient to treat diagonal moves as composed of two orthogonal moves. For example, a move in the upper-left direction can be thought of as an upward move followed by a left move. In order to encourage particles to settle down, the upward component of a move is only accepted with a so-called rebounding probability. The result is a directional and diffusive motion of the particles, rather like a random-walk-based sedimentation model. This diffusive movement helps the particles to effectively penetrate and explore every available packing space. Since particles reside and move on a grid, collision and overlap detection is a simple matter of detecting whether two objects occupy the same site(s) at a given time, rather
than having to compute and test intersections between objects, which is usually the most computationally expensive part of particle simulations. Since a particle moves only one grid site at a time, the overlap detection procedure ensures that the particle will not jump over, or enter the hollow part of, another particle during packing. It also allows solid particles to be represented by their outlines, which substantially speeds up the packing process since fewer pixels per particle need to be processed at each move. It is possible, and perhaps even necessary if the effects of the actual particle interactions are to be considered, to move a particle more than one grid site at a time. In this case, care must be taken to prevent particles jumping over or into other particles. Although the packing algorithm does not explicitly involve physical forces, some effects of physical interactions can be simulated. Since particles are allowed to move sideways as well as up and down, even after they form part of the packing, the effects of high-frequency, small-amplitude vibrations can be simulated. Vertical vibration is controlled by the rebounding probability, where a value of 0 means no vertical vibration and a value of 1 means an equal opportunity for the particles to move up and down (equivalent to the particles having no opportunity to settle down and form a packing). Typically, a value between 0.2 and 0.5 is used. The computer program which embodies these techniques is known as DigiPac, further details of which may be found elsewhere (Jia and Williams, 2001).
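To make the move rules above concrete, the following is a minimal sketch of a single packing move on a 2D occupancy grid. It only illustrates the algorithm as described here; the data structures, function names and the exact form of the rebounding rule are our assumptions, not the DigiPac source.

    import random

    # One trial move of the digital packing algorithm (illustrative sketch).
    ORTHO_AND_DIAG = [(-1, 0), (1, 0), (0, -1), (0, 1),
                      (-1, -1), (1, -1), (-1, 1), (1, 1)]

    def try_move(occupied, particle, p_rebound=0.3):
        """particle: set of (x, y) pixels; occupied: all blocked sites
        (walls plus every particle, including this one). y grows upwards."""
        dx, dy = random.choice(ORTHO_AND_DIAG)
        # The upward component of a move is only accepted with the
        # rebounding probability, which drives the particles to settle.
        if dy > 0 and random.random() >= p_rebound:
            dy = 0
        if (dx, dy) == (0, 0):
            return particle
        target = {(x + dx, y + dy) for (x, y) in particle}
        # Overlap detection is just a set-intersection test on grid sites.
        if (target - particle) & occupied:
            return particle                      # move rejected: collision
        occupied -= particle
        occupied |= target
        return target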
3. Lattice Boltzmann Model
Conventional CFD methods use the continuum assumption to allow fluid behaviour to be modelled at a macroscopic level, with the behaviour of individual molecules or fluid particles not being considered. Other techniques are, however, available which simulate or mimic fluid flow by solving equations for a distribution of fluid particles that are then allowed to move and collide with each other and with solid surfaces. This provides a microscopic description of fluid particles which, when averaged, recovers the macroscopic information provided by continuum solutions. Original developments in this area stem from the work of Frisch et al. (1986), who employed the technique of lattice gas hydrodynamics, in which the fluid is modelled as a cellular automaton and the flow represented by the motion of particles on a lattice. More numerically efficient variants of this method, such as the lattice Boltzmann approach (McNamara and Zanetti, 1988), were subsequently developed. Commercial codes which use lattice-based approaches are available, e.g. PowerFLOW, and this particular code was used in the present work. Based on discrete forms of the kinetic theory equations, this code employs an approach that is an extension of lattice gas and lattice Boltzmann methods in which particles exist at discrete locations in space, and are allowed to move in given directions at particular speeds over discrete time intervals. The particles reside on a cubic lattice composed of voxels, and move from one voxel to another at each time step. Solid surfaces are accommodated through the use of surface elements, and arbitrary surface shapes can be represented. Particle advection, and particle-particle and particle-surface interactions, are all considered at a microscopic level to simulate fluid behaviour in a way which ensures conservation of mass, momentum and energy, and which recovers solutions of the continuum flow
equations for mean fluid properties. In the present work this model was employed to predict the laminar flow of fluid through a packed bed. Experimental validation of such techniques has been performed, e.g. by Mantle et al. (2001) using velocity data from multi-nuclear magnetic resonance imaging. The main advantage of using these methods in the current application is that they can easily be interfaced with the packing technique, since the basis of the two techniques is similar.

Figure 1. (a) Comparison of predicted and measured mean voidage values for a range of particle-to-tube diameter ratios; (b) scatter diagram of the same data.
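As an indication of what a lattice-based scheme involves, the sketch below implements one collision-streaming step of a textbook D2Q9 lattice Boltzmann (BGK) model with bounce-back at solid sites. It is a generic minimal example written for this description, not the PowerFLOW algorithm; the relaxation time and data layout are arbitrary choices.

    import numpy as np

    w = np.array([4/9] + [1/9]*4 + [1/36]*4)             # lattice weights
    e = np.array([(0,0),(1,0),(0,1),(-1,0),(0,-1),
                  (1,1),(-1,1),(-1,-1),(1,-1)])           # lattice velocities
    opposite = [0, 3, 4, 1, 2, 7, 8, 5, 6]                # for bounce-back

    def step(f, solid, tau=0.8):
        """f: (9, ny, nx) distributions (assumed positive); solid: boolean
        mask marking particle sites."""
        rho = f.sum(axis=0)
        u = np.tensordot(e.T, f, axes=1) / rho            # (2, ny, nx)
        # BGK collision towards the local equilibrium distribution.
        for i in range(9):
            eu = e[i, 0]*u[0] + e[i, 1]*u[1]
            feq = w[i]*rho*(1 + 3*eu + 4.5*eu**2 - 1.5*(u[0]**2 + u[1]**2))
            f[i] += -(f[i] - feq)/tau
        # Bounce-back: reverse the distributions on solid sites.
        f[:, solid] = f[opposite][:, solid]
        # Streaming: shift each distribution along its lattice velocity.
        for i in range(9):
            f[i] = np.roll(np.roll(f[i], e[i, 0], axis=1), e[i, 1], axis=0)
        return f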
4. Results and Discussion
Validation of the particle packing algorithm was performed by comparing simulation results with experimental data obtained by McGreavy et al. (1986). In these simulations, spheres of 16 mm diameter were packed into cylindrical tubes with differing diameters. The resulting packing densities are plotted in Figure 1. It can be seen that, over the range of particle-to-tube diameter ratios considered, predicted mean void fractions compare well with experimental values, with predictions reproducing the trend to a constant value at high diameter ratios. The peak value is also faithfully reproduced. This peak occurs due to enhanced voidage near the tube walls when the particles are only slightly smaller than the diameter of the tube, circumstances in which the natural packing of the particles is disturbed by the wall, resulting in a larger voidage. Some over-prediction of the data is, however, evident. Deviations at lower particle-to-tube diameter ratios can be explained, to some extent, by errors inherent in the digitisation of small-diameter particles. In addition, however, the experiments employed bed compression, which makes exact prediction difficult, particularly for larger particle-to-tube diameter ratios. Spatial distributions of voidage are required if the reliable prediction of transport parameters is to be performed. Figure 2(a) shows axially averaged values for the solid fraction of particles encountered over the cross-section of a bed with a particle-to-tube diameter ratio of 9.5. Results for the radial voidage, and solid fraction, on a line located approximately half way along the bed are given in Figure 2(b). Although not shown, these results compare well with data, and illustrate how the packing model can be used to predict local voidage values within a bed. In particular, the influence of the wall in
producing high voidages in its proximity, and the way voidage values oscillate towards a near-constant value in the centre of relatively large beds, are faithfully reproduced.

Figure 2. (a) Solid fraction distribution within a packed bed (packing density along the diameter of the tube); (b) influence of the wall (voidage versus distance from the wall, mm).

Lattice Boltzmann simulations were also performed on a packed bed considered by McGreavy et al. (1986). These considered a cylindrical tube of 50 mm diameter that was packed with 16 mm diameter spherical particles, with a flow rate of 4 litres per minute through the bed. The simulation, which was made for water flow at ambient conditions, employed a 50x50x220 matrix, with approximately 25k iterations being required to achieve steady-state flow through the bed. Figure 3 shows the predicted velocity of liquid through the tube along two lines across its diameter, one at the approximate centre of the bed and one at its exit. These results show high velocities at those locations where high voidage exists and channelling of flow between particles occurs, and demonstrate the ability of the simulations to provide detailed velocity information within a bed. These results are in qualitative agreement with the experimental data which are also given in the figures, although it should be noted that exact agreement with such single-line, instantaneous data cannot be achieved due to the differing distributions of particles within the real and simulated beds. Averaged data, which are not available for the test case considered, are necessary to allow a more meaningful comparison with predicted results. Lastly, Figure 4 gives predicted flows at various cross-sections inside the tube. In this figure, the black areas indicate the presence of particles, and pixels between the particles indicate local flow velocities (with bright pixels showing high velocities).
Figure 3. Velocity profiles (versus distance from wall, m): (a) inside the tube, at 140 mm, and (b) at the exit, 220 mm from the top of the bed.
Figure 4. Velocity distributions in the bed: left - at the entrance (30 mm), middle - inside the bed (120 mm), and right - near the exit (210 mm from the top of the bed).
In qualitative agreement with experimental findings, it is observed that velocities at the tube entrance are higher than those at its exit, with variations in velocities at each cross-section occurring in line with packing density. Channelling effects are again seen, particularly in near-wall regions where voidage values are high. Overall, these results demonstrate that lattice Boltzmann models can be used to provide a detailed understanding of flow behaviour in packed beds that is of value in bed design.
5. Conclusions
A new approach to simulating the packing of particles in beds using a digitisation technique has been described and demonstrated to agree well with available experimental data. Lattice Boltzmann simulations have also been conducted for flow through packed beds, with results being in qualitative agreement with data. Overall, this combination of techniques provides a powerful modelling capability for packed bed systems that is of value to the improved design of a wide range of unit operations. Further work will concern a more systematic validation of the techniques described, including the gathering of the averaged data needed to enable realistic comparisons.
6. References
Frisch, U., Hasslacher, B. and Pomeau, Y., 1986, Phys. Rev. Lett., 56, 1505.
Jia, X. and Williams, R.A., 2001, Powder Tech., 120, 175.
Mantle, M.D., Sederman, A.J. and Gladden, L.F., 2001, Chem. Eng. Sci., 56, 523.
McGreavy, C., Foumeny, E.A. and Javed, K.H., 1986, Chem. Eng. Sci., 41, 787.
McNamara, G.R. and Zanetti, G., 1988, Phys. Rev. Lett., 61, 2332.
Taylor, K., Smith, A., Ross, S. and Smith, M., 2000, Phoenics Jl. CFD and Applns., 13, 399.
Wolf-Gladrow, D.A., 2000, Lattice-Gas Cellular Automata and Lattice Boltzmann Models, Springer-Verlag, Berlin.
7. Acknowledgements
MF and XJ would like to thank BNFL, and NG the Keyworth Institute of the University of Leeds, for their financial support of the work described. The fluid flow predictions employed in this paper were obtained using the Exa Corporation code PowerFLOW.
Application of Molecular Simulation in the Gibbs Ensemble to Predict the Liquid-Vapor Equilibrium Curve of Acetonitrile
Mohammed Hadj-Kali¹, Vincent Gerbaud¹, Xavier Joulia¹, Anne Boutin², Philippe Ungerer², Claude Mijoule³, Jerome Roques³
¹ Laboratoire de Genie Chimique, UMR CNRS 5503, BP 1301, 5 Rue Paulin Talabot, 31106 Toulouse Cedex 1, France, E-mail: [email protected]
² Laboratoire de Chimie Physique, UMR CNRS 9611, Bat. 349, Universite de Paris Sud, 91405 Orsay, France, E-mail: [email protected]
³ Centre Interuniversitaire de Recherche et d'Ingenierie des Materiaux, UMR CNRS 5085, ENSIACET, 118 Rte de Narbonne, 31077 Toulouse Cedex 4, France, E-mail: [email protected]
Abstract
The Lennard-Jones (LJ) parameters of the nitrile group (CN) have been optimized on the basis of selected experimental properties of acetonitrile, following the method proposed by Ungerer (2000). The resulting parameters have been used to determine, by performing Monte Carlo simulations in the Gibbs ensemble, both the liquid-vapor coexistence curve of acetonitrile and its critical parameters.
1. Introduction
Phase equilibrium knowledge is important for the design and simulation of many separation processes, such as distillation and extraction. Several engineering and thermodynamic models, like activity coefficient models and equations of state, are commonly used to generate coexistence data at conditions for which experimental data are missing. Unfortunately, the predictive value of such models is limited, as they often rely on experimental mixture data near the conditions of interest. Moreover, the discrepancy between the number of compounds referenced in the Chemical Abstracts (some millions) and the number of compounds with properties in physico-chemical data banks (some thousands) favours the advent of new methods to predict physico-chemical properties. Molecular simulation has emerged as a complementary tool (Allen and Tildesley, 1987, p. 5). It links the microscopic details of a system (atomic masses, energetic interactions, molecular distributions, etc.) to macroscopic properties of experimental interest (physical state, transport coefficients, equilibrium properties, etc.). Besides its academic interest, molecular simulation is technologically useful when experiments are impossible. Molecular simulation is based on statistical thermodynamics concepts. Its success in predicting thermodynamic properties, and in improving our understanding of complex systems, depends on the availability of efficient simulation algorithms and accurate force field models. Whereas Molecular Dynamics and Monte Carlo techniques have reached practical maturity over the years, reliable force field development is still the limiting step for prediction accuracy. Addressing this latter challenge, the present work focuses on the development of LJ parameters for the nitrile group (CN). The same methodology as used by Ungerer (2000) has been adopted.
2. Statistical Thermodynamics and Ensembles
When measuring a macroscopic property X, the value obtained is not a constant but rather an average over the chaotic motions and collisions of a large number of molecules, occurring on a characteristic time scale far shorter than the measurement time (Prausnitz et al., 1999, p. 754). In other words, macroscopic properties are time averages over all possible quantum states that a system may assume during the measurement. The object of statistical thermodynamics is to calculate these time averages as a function of molecular properties. But, because of the complex time evolution of thermodynamic properties for a large number of molecules, Gibbs suggested replacing the time average by an ensemble average and postulated that "the time average of a dynamic property¹ of a real system is equal to the ensemble average of that property". A statistical ensemble is a (mental or virtual) collection of a very large number of systems, each constructed to be a replica, on a thermodynamic (macroscopic) level, of the real thermodynamic system of interest. Among the usual ensembles, the isolated microcanonical ensemble with fixed N, V, E is useful for theoretical discussion. For more practical applications, however, non-isolated systems are considered, like the canonical ensemble in which N, V and T are fixed (McQuarrie, 1976, p. 37). Other standard ensembles exist, such as the Gibbs ensemble employed here, which is intended for phase equilibrium calculations and is described below.
3. Monte Carlo and Molecular Dynamics

Figure 1. Monte Carlo and Molecular Dynamics methods.
Figure 2. Gibbs Ensemble methodology.

To get a significant average value of the macroscopic properties, one must generate a great number of mutually consistent configurations. For this, two techniques are commonly used, namely Monte Carlo and Molecular Dynamics (see Figure 1) (Fuchs et al., 1997). Molecular Dynamics is used to calculate time-dependent quantities such as transport coefficients, and generates successive configurations by solving the Newton equations of motion. The Monte Carlo technique is used for equilibrium properties and generates configurations of the system at random.
¹ A dynamic property (such as pressure) is one that fluctuates in time, in contrast to a static property (e.g. mass), which is constant in time.
It is often combined with other methods, such as the configurational bias Monte Carlo method (Mooij et al., 1992), to explore the phase space more efficiently.
4. The Gibbs Ensemble Monte Carlo Method
In order to compute phase equilibria, we have used the Gibbs Ensemble Monte Carlo (GEMC) method proposed by Panagiotopoulos (1987). A full development of its statistical mechanics basis was given by Frenkel and Smit (1996, p. 184). Figure 2 illustrates the principle of this technique. The simulation is performed in two microscopic regions within the bulk phases, away from the interface. The thermodynamic requirements for phase coexistence are that each region should be in internal equilibrium, and that the temperature, pressure and chemical potentials of all components should be the same in the two regions. The temperature of the system in the Gibbs ensemble is specified in advance; the remaining three conditions are satisfied respectively by three types of Monte Carlo moves, namely (i) random displacement of molecules by translation and rotation within each region, (ii) random fluctuations in the volume of the two regions to satisfy mechanical equilibrium, and (iii) random transfers of particles between regions.
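The acceptance rules for these move types follow the standard Gibbs-ensemble forms given by Frenkel and Smit (1996); the sketch below states them in executable form. It is a generic illustration only; the function names and argument conventions are ours, not those of any particular simulation package.

    import math, random

    def accept(delta, beta):
        """Metropolis test for an internal displacement with energy change delta."""
        return delta <= 0 or random.random() < math.exp(-beta*delta)

    def accept_transfer(beta, dU1, dU2, N1, V1, N2, V2):
        """Transfer of one particle from box 1 (N1, V1) to box 2 (N2, V2);
        dU1, dU2 are the energy changes in the two boxes."""
        arg = (N1*V2) / ((N2 + 1)*V1) * math.exp(-beta*(dU1 + dU2))
        return random.random() < min(1.0, arg)

    def accept_volume(beta, dU1, dU2, N1, V1, V1new, N2, V2, V2new):
        """Coupled volume change at constant total volume V1 + V2."""
        arg = ((V1new/V1)**N1) * ((V2new/V2)**N2) * math.exp(-beta*(dU1 + dU2))
        return random.random() < min(1.0, arg)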
5. Molecular Mechanics: Force Field Model
Many of the thermodynamic systems that we would like to study are unfortunately too large to be treated by quantum mechanical methods, which deal with the electrons of the system and offer the advantage of being non-parametric. With force field models, also known as molecular mechanics, the electronic motions are ignored: the atoms or groups of atoms are replaced by beads and the bonds by springs. This reduces considerably the number of variables and enables us to use the laws of classical mechanics. The molecular modelling force fields in use today can be interpreted in terms of a relatively simple four-component functional form of the intra- and intermolecular forces within the system (Leach, 1996, p. 132):
V(r^N) = Σ_bonds (k_i/2)(l_i − l_i,0)² + Σ_angles (k_i/2)(θ_i − θ_i,0)² + Σ_torsions (V_n/2)(1 + cos(nω − γ))
         + Σ_{i=1}^{N} Σ_{j=i+1}^{N} [ 4ε_ij ( (σ_ij/r_ij)¹² − (σ_ij/r_ij)⁶ ) + q_i q_j / (4πε₀ r_ij) ]    (1)
V(r^N) is the potential energy, which is a function of the positions of the N particles. The first three terms are described using a harmonic potential and are related to intramolecular interactions: bond stretching, bending and torsion. The fourth contribution is the non-bonded term, in which the Coulomb potential appears for electrostatic interactions and a Lennard-Jones (LJ) potential for Van der Waals interactions. More sophisticated force fields may have additional terms, but invariably contain these four components. Many force fields for molecular simulations have been developed over the past decades; the bonded interaction parameters are obtained by fitting to ab initio data. Partial charges are used for describing electrostatic interactions. In recent years, the Ewald
summation technique for handling the long-range character of the electrostatic interactions has become popular, but it incurs a large computational cost. Van der Waals non-bonded potential parameters are obtained by fitting some experimentally observed properties. At present, the limited capacity of computers prevents us from efficiently using all-atom descriptions (AA potentials), in which each atom is represented by a separate Lennard-Jones centre. Consequently, several authors have proposed united atom (UA) potentials, such as the NERD potential (Nath et al., 1998) or the TraPPE model (Martin and Siepmann, 1998), where a group like CH2 or CH3 is represented by a single force centre; this approach was pioneered in the early work of Jorgensen et al. (1984) with the OPLS model. While each force centre is located on the carbon in the more classic UA potentials, it is shifted in the Anisotropic United Atoms (AUA) potential proposed by Toxvaerd (1990) for n-alkanes, so that it is placed between the carbon and the hydrogen atoms of the related group. Here, the AUA formalism is used.
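As a concrete illustration of the non-bonded term of Eq. (1), the snippet below evaluates the LJ-plus-Coulomb energy between the sites of a configuration. It is a didactic sketch only: the Lorentz-Berthelot combining rules, the unit conventions, and the neglect of intramolecular exclusions are our assumptions, not a statement about the force field used in this work.

    import numpy as np

    def nonbonded(pos, q, eps, sig):
        """pos: (N, 3) site coordinates; q, eps, sig: per-site parameters.
        Intramolecular exclusions are ignored for brevity."""
        U = 0.0
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                r = np.linalg.norm(pos[i] - pos[j])
                e = np.sqrt(eps[i]*eps[j])          # Lorentz-Berthelot rule
                s = 0.5*(sig[i] + sig[j])
                U += 4*e*((s/r)**12 - (s/r)**6) + q[i]*q[j]/r
        return U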
6. Quantum Mechanical Calculations
The molecular calculations are performed in the framework of density functional theory using the deMon program package (St-Amant, 1990; 1991). Non-local gradient corrections are added for the exchange and correlation terms. A full geometry optimization is performed using a conjugate gradient method. Results for the geometry of acetonitrile agree with the experimental data cited by Goldstein et al. (1996). The partial charge distribution needed in the electrostatic term is calculated through a MEP population analysis of the electrostatic potential surface. This population analysis splits charges according to the polarity and Van der Waals diameter of each atom, and is better than Mulliken's, which splits charges according to the Van der Waals diameter alone. However, it leads to larger values that significantly reduce the transfer rate and the convergence speed during the GEMC simulation, because of the larger repulsive electrostatic contribution.
7. The Optimisation of Lennard-Jones Parameters
These parameters have been optimized on the basis of selected experimental data for acetonitrile (C2H3N). We decomposed this molecule into three force centres: the CH3 group, described by the AUA4 parameters obtained by Ungerer (2000), and the carbon and nitrogen atoms, whose LJ parameters remain to be optimised. We used the optimization method proposed by Ungerer (2000), which consists of minimizing the following dimensionless error function:
F(ε_C, ε_N, σ_C, σ_N) = (1/N) Σ_i (X_i^mod − X_i^exp)² / s_i²    (2)
where s_i is the estimated statistical uncertainty on the computed variable X_i^mod, while X_i^exp is the associated experimental measurement (either ln(P_sat), Δh_vap or ρ_liq). F is considered as a function of the four parameters to optimize, namely the interaction energies of the carbon and nitrogen atoms, ε_C and ε_N, and their molecular diameters, σ_C and σ_N, which represent the parameters of the LJ potential (see Equation (1)). This methodology leads to the results listed in Table 1.
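In outline, the fitting loop pairs this error function with a derivative-free optimizer, since each evaluation of X_i^mod requires a full GEMC run. The sketch below shows the shape of such a loop; gemc_properties is a hypothetical stand-in for the simulation, and the starting values merely echo the magnitudes of Table 1.

    import numpy as np
    from scipy.optimize import minimize

    def gemc_properties(params):        # stub standing in for a GEMC run
        raise NotImplementedError("run GEMC, return [ln_Psat, dHvap, rho_liq]")

    def error_function(params, X_exp, s):
        X_mod = np.asarray(gemc_properties(params))
        return np.mean(((X_mod - X_exp) / s)**2)          # Eq. (2)

    # Derivative-free search, starting from values of the order of Table 1:
    # minimize(error_function, [50.0, 65.0, 3.5, 3.3], args=(X_exp, s),
    #          method="Nelder-Mead")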
These parameters were used to determine the liquid-vapor coexistence curve of acetonitrile, shown below (Figure 4), by performing GEMC simulations at several temperatures. In this curve, we plot the variation of the temperature versus the density in the vapor and liquid phases. We compared our values to both the experimental data obtained by Francesconi et al. (1975) and those provided by Warowny (1994). The agreement is good, with an overall mean deviation below 2%.
Figure 4. Liquid-vapor equilibrium curve of acetonitrile (temperature, K, versus density, kg.m-3; legend: Francesconi 75, Warowny 94, our work, calculated critical point).
8. Critical Point Estimation

Figure 5. GEMC simulation: density in the two boxes far from and close to the critical point (Tr = 0.80, 0.94 and 0.98).

Table 1. Optimized LJ parameters for the CN group.
  εC/kB (K)    εN/kB (K)    σC (Å)     σN (Å)
  50.677       65.470       3.5043     3.3077

Table 2. Estimated and experimental critical parameters of acetonitrile.
               Tc (K)     ρc (kg.m-3)
  Experiment   545.50     237.10
  Simulation   546.94     242.70

The driving force for a GEMC simulation to remain in a state with two stable regions of different density is the free energy penalty for the formation of interfaces within each of the two regions. When the system is away from the critical point of the phase transition under study (see Figure 5a), the equilibrium densities and compositions of the coexisting phases can be simply determined by averaging the observations after equilibration. As one approaches the critical point more closely, the penalty for the formation of interfaces becomes small, and there are frequent exchanges of the identity of the two boxes, as shown in Figure 5c. Therefore, the highest temperature at which coexistence can be observed is not a proper estimate of the critical temperature of the system. To estimate the critical parameters, we adopted the method proposed by Frenkel and Smit (1996, p. 197) and used by Smit et al. (1995) for n-alkane molecules. The results obtained, compared with the experimental values published by Kratzke and Müller (1985), are shown in Table 2 and reported on Figure 4.
9. Conclusion
We have briefly reviewed some concepts of molecular simulation techniques that are widely used in the biological and physical chemistry fields but remain less well known in chemical engineering applications. In this work, the Lennard-Jones parameters of the nitrile group -CN were optimised for the acetonitrile molecule and used with a MEP charge population analysis. Compared with experimental data, the results obtained show good agreement and confirm the potential of this method for exploring macroscopic systems. The next step will be to extend the work to other nitriles for which data are insufficient, and to explore the transferability of the model developed for one nitrile molecule to other nitriles.
10. References
Allen, M.P., Tildesley, D.J., 1987, Computer Simulation of Liquids, Oxford.
Francesconi, A.Z., Franck, E.U., Lentz, H., 1975, Ber. Bunsen-Ges. Phys. Chem. 79, 897-901 (in German).
Frenkel, D., Smit, B., 1996, Understanding Molecular Simulation, San Diego.
Fuchs, A., Boutin, A., Rousseau, B., 1997, Entropie 33 (208), 5-12.
Goldstein, E., Buyong, M.A., Lii, J.H., Allinger, N.L., 1996, J. Phys. Org. Chem. 9, 191.
Jorgensen, W.L., Madura, J.D., Swenson, C.J., 1984, J. Am. Chem. Soc. 106, 6638.
Kratzke, H., Müller, S., 1985, J. Chem. Therm. 17, 151-158.
Leach, A.R., 1996, Molecular Modelling, Addison Wesley Longman, Harlow.
Martin, M.G., Siepmann, J.I., 1998, J. Phys. Chem. B 102, 2569-2577.
McQuarrie, D.A., 1976, Statistical Mechanics, Harper Collins Publishers, New York.
Mooij, G.C.A.M., Frenkel, D., Smit, B., 1992, J. Phys. Condens. Matter 4, L255-L259.
Nath, S.K., Escobedo, F.A., de Pablo, J.J., 1998, J. Chem. Phys. 108, 9905-9911.
Panagiotopoulos, A.Z., 1987, Mol. Phys. 61, 813-826.
Prausnitz, J.M., Lichtenthaler, R.N., de Azevedo, E.G., 1999, Molecular Thermodynamics, Prentice Hall International, Upper Saddle River.
Smit, B., Karaboni, S., Siepmann, J.I., 1995, J. Chem. Phys. 102 (5), 2126-2140.
St-Amant, A., Salahub, D.R., 1990, Chem. Phys. Letters 169, 387.
St-Amant, A., 1991, Ph.D. Thesis, University of Montreal.
Toxvaerd, S., 1990, J. Chem. Phys. 93, 4290.
Ungerer, P., 2000, J. Chem. Phys. 112 (12), 5499-5510.
Warowny, W., 1994, J. Chem. Eng. Data 39, 275-280.
Simulation of Supported Liquid Membranes in Hollow Fibre Configuration
Ian C. Hallas and Eva Sørensen¹
Centre for Process Systems Engineering, Department of Chemical Engineering, UCL (University College London), Torrington Place, London WC1E 7JE, U.K.
Abstract
Supported liquid membranes offer excellent selectivity for use in gas separation. The transport of CO2 through an aqueous diethanolamine solution held within a hollow fibre membrane is modelled in this paper. Comparison with flat-sheet models demonstrates that radial geometry has to be taken into account in a hollow fibre model. The model was used to simulate CO2 separation in membrane contactors, and the results were compared with experimental data. The discrepancy between the results and the experimental data is thought to be due to the conditions within the membrane contactors, which are far from ideal.
1. Introduction
A membrane is a semi-permeable barrier used to separate one or more components from a liquid or gaseous mixture. A membrane separates a feed stream into two streams: one passing through the membrane (the permeate), the other retained on the feed side (the retentate). Membranes are either porous or dense. In general, porous membranes give high permeation rates but are not selective enough to separate gases. Supported liquid membranes (SLMs) are a hybrid of the two membrane forms. They consist of a porous substrate that has been immersed in a carrier solution, which is then held within the pores by capillary forces. The solution reacts with the target species within the feed gas and increases the rate of adsorption and diffusion through the membrane. This results in high selectivity for the target species and is known as facilitated transport. A number of studies have been made into the use of SLMs for CO2 separation, and the most recent works have investigated amine-based carrier systems. Meldon et al. (1986) used mono-, di- and tri-ethanolamine in polyethylene glycol solution, whilst Davis and Sandall (1993) compared the performance of diethanolamine (DEA) and diisopropanolamine as CO2 carriers. Guha et al. (1990) and Teramoto et al. (1996) also investigated the use of aqueous DEA in SLMs for CO2 separation and published models for predicting the flux of CO2 through flat-sheet membranes. Both models were derived in the same way but different solution methods were used. The main objectives of this paper are: 1) to extend a model for facilitated transport through SLMs, previously presented in the literature for flat-sheet membranes, to allow for transport through hollow fibre membranes; 2) to compare the flat sheet and hollow fibre models and show that the radial geometry must be taken into account when considering hollow fibre units; 3) to demonstrate, by comparison with experimental data, that even with radial variations, the model is still too simple to accurately describe the transport through a hollow fibre unit; and 4) to suggest how the model can be improved.
¹ Corresponding author: Tel: +44 20 7679 3802, e-mail: [email protected]
2. Supported Liquid Membrane Models
For commercial applications, flat sheet membranes are rarely used due to poor space efficiency. Many industrial systems make use of hollow fibre devices in order to maximise the surface area per unit volume. In this work, a model is developed to predict the separation of CO2 from a feed gas stream by a hollow fibre SLM.
2.1. Flat sheet membrane models
The reaction of CO2 with this type of amine (DEA) occurs in two stages (Guha et al., 1990):

CO2 (a) + R2NH (b) <=> R2NCOO- (e) + H+    (1)
R2NH (b) + H+ <=> R2NH2+ (f)               (2)
CO2 diffuses through the SLM as both un-reacted CO2 and carbamate. The sum of these diffusion rates is the facilitated transport rate, predicted using Fick's law:

Na,T = -Da (dCa/dx)|(0,L) - De (dCe/dx)|(0,L)    (3)

The assumptions of electro-neutrality throughout the membrane and of equal diffusivities of all amine species lead to:

Na,T = -Da (dCa/dx)|(0,L) + (1/2) Db (dCb/dx)|(0,L)    (4)

To find the concentration gradients at the boundaries, the mass balance equations must be solved. The steady state balances for the CO2 and amine through a flat membrane are:

Da d²Ca/dx² = ωa     (5)
Db d²Cb/dx² = 2ωa    (6)

The reaction rate of CO2 is given by the following expression (Teramoto et al., 1996):

ωa = [k1 Ca Cb - k1 (Cb,T - Cb)² / (4 Keq Cb)] / [1 + k2/(k3 Cb)]    (7)

Guha et al. (1990) solved the mass balance equations numerically using a finite difference method. They assumed that the second reaction step occurred so rapidly that it had no effect on the overall rate of reaction. Hence the equations solved were:

Da d²Ca/dx² = (1/2) Db d²Cb/dx² = k1 Ca Cb - k1 (Cb,T - Cb)² / (4 Keq Cb)    (8)
Teramoto et al. (1996) did include the effects of the second reaction step but used a different solution approach, simplifying the equations by assuming a negligible amine concentration gradient through the membrane. This assumption reduced the problem to a set of linear algebraic equations that could be solved explicitly. Teramoto et al. (1996) compared the predictions obtained using both methods with experimental data and found that their approach gave the most accurate results. The Guha model tended to over-estimate the CO2 flux, and it was cited that the reasons for this were the omission of the second reaction step and the manner in which the authors treated the membrane tortuosity τ (a dimensionless value that expresses to what extent the membrane pores deviate from regularly spaced and sized linear channels). Multiplying the membrane thickness by some function of τ results in an effective diffusion path along which the concentration
profiles are determined. Guha et al. (1990) used the relationship τL as the effective diffusion path length, but Teramoto et al. (1996) put forward an argument for using the relationship τ²L. In this paper, we argue that an accurate simulation of a hollow fibre SLM should incorporate radial geometry in the mass balance equations, as the fibre radius can have a significant effect upon the permeation rates, as will indeed be demonstrated. Because Teramoto's approach is not suitable for radial systems, the full mass balance equations must be solved.
2.2. The fibre membrane model
The Guha model discussed earlier was derived for flat sheet membranes. As hollow fibre modules are used in this study, the model must be modified to allow for a radial geometry. In the following, it is assumed that a feed gas containing CO2 is passed through the fibre bores. A counter-current sweep gas flowing on the outside of the fibre removes the CO2 permeate, thereby maintaining the concentration gradient across the fibre. The steady state mass balance equations derived for the model used in this work are:

Da (d²Ca/dr²) + (Da/r)(dCa/dr) = k1 [Ca Cb - (Cb,T - Cb)² / (4 Keq Cb)] / [1 + k2/(k3 Cb)]     (9)
Db (d²Cb/dr²) + (Db/r)(dCb/dr) = 2 k1 [Ca Cb - (Cb,T - Cb)² / (4 Keq Cb)] / [1 + k2/(k3 Cb)]   (10)
with the boundary conditions:

At r = Ri:  Ca = Ha Pa,f and dCb/dr = 0
At r = Ro:  Ca = Ha Pa,p and dCb/dr = 0

Facilitation factor:  Φ(z) = (1/2) Db [Cb(Ro,z) - Cb(Ri,z)] / (Da [Ca(Ri,z) - Ca(Ro,z)])    (11)
CO2 flux:  Na,T(z) = Da (1 + Φ(z)) [Ca(Ri,z) - Ca(Ro,z)] / (τ (Ro - Ri))                    (12)
The feed and sweep gas streams are modelled by the following, where FT and Fa are the total feed flow and the feed CO2 flow, and ST and Sa the total sweep flow and the sweep CO2 flow:

dFT/dz = dFa/dz = -2πεRo Na,T(z)    (13)
dST/dz = dSa/dz = 2πεRo Na,T(z)     (14)
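Equations (9)-(12) form a two-point boundary value problem in r at each axial position. The sketch below shows how such a problem can be set up with a generic BVP solver; it is purely illustrative. The paper's own calculations were performed in gPROMS, and since several parameter exponents are illegible in the printed tables, all numerical values below are placeholders that merely echo the quoted mantissas.

    import numpy as np
    from scipy.integrate import solve_bvp

    Da, Db = 5.0e-6, 2.1e-6                    # diffusivities, cm2/s (assumed)
    k1, Keq, k2k3 = 4.0e6, 4.165e5, 1.18e-6    # rate data (assumed magnitudes)
    CbT = 4.0e-3                               # total amine, mol/cm3 (4 mol/dm3)
    Ha, Pf, Pp = 4.2e-7, 20.73, 1.08           # solubility (assumed), cmHg
    Ri, Ro = 0.0100, 0.0140                    # fibre radii, cm (Table 1)

    def rate(Ca, Cb):                          # Eq. (7)-type rate expression
        return k1*(Ca*Cb - (CbT - Cb)**2/(4*Keq*Cb)) / (1 + k2k3/Cb)

    def odes(r, y):                            # y = [Ca, dCa/dr, Cb, dCb/dr]
        Ca, dCa, Cb, dCb = y
        w = rate(Ca, Cb)
        return np.vstack([dCa, w/Da - dCa/r, dCb, 2*w/Db - dCb/r])

    def bc(ya, yb):                            # Henry's law + zero amine flux
        return np.array([ya[0] - Ha*Pf, yb[0] - Ha*Pp, ya[3], yb[3]])

    r = np.linspace(Ri, Ro, 50)
    y0 = np.vstack([np.linspace(Ha*Pf, Ha*Pp, r.size), np.zeros(r.size),
                    np.full(r.size, CbT), np.zeros(r.size)])
    sol = solve_bvp(odes, bc, r, y0)
    flux = -Da*sol.y[1][0]                     # -Da dCa/dr at the inner wall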
3. Results
The models presented in this paper have been solved using gPROMS (PSEnterprise Ltd., 2001), with the distributed equations discretised using the backward finite difference method.
3.1. Comparing the accuracy of the flat-sheet models
The flat sheet and hollow fibre models presented in the previous section were used to predict how the flux varies with the feed CO2 partial pressure, and the results are shown in Fig. 1. In addition to the basic Guha model, variations were also tested that incorporated the second reaction step and the effective diffusion path relationship put forward by Teramoto et al. (1996). Experimental data obtained by Guha et al. (1990) are shown for comparison. The results indicate that including the effects of the second reaction step in the Guha model increases its accuracy, and contradict the theory of Teramoto et al. (1996) that τ²L should be used as the effective diffusion path. For this reason, in this work we include the effects of the second reaction step but keep the effective diffusion path as τL.
3.2. The impact of the fibre radius
Next, the CO2 flux through membrane fibres with different sized inner radii was predicted (Eqs. 9-12). The use of a radial geometry model accounts for the difference in membrane surface area between the outside and inside of the fibre, and for the effect upon the membrane concentration profiles.
Figure 1. CO2 permeation rates through an SLM (flux versus feed CO2 partial pressure Pa,f, cmHg). Carrier: aqueous DEA at 1.94 mol.dm-3. Key: experimental data (Guha et al.); Guha model; Teramoto model; Guha model incl. 2nd reaction step only; Guha model incl. 2nd reaction step and τ²L as diffusional path.
Figure 2. The impact of fibre radius on CO2 flux (flux versus fibre inner radius, μm). Key: membrane wall thickness = 25 μm; 50 μm; 100 μm.
This effect is less significant if the membrane thickness is small in relation to the fibre diameter, as shown in Fig. 2. However, as the fibre diameter decreases, the effects of radial geometry become more pronounced. The fluxes were all predicted for the same conditions (feed and sweep CO2 pressures of 20.73 and 1.08 cmHg, respectively) through an SLM of aqueous DEA at 1.94 mol.dm-3. The tortuosity was set at 5. The broken lines show the fluxes obtained using the flat sheet models under the same conditions. This shows that the radius of the fibre must be approximately 1000 times greater than the wall thickness before the effects of radial geometry become negligible. In commercial hollow fibre units, the ratio of fibre radius to wall thickness is typically between 2 and 5. This illustrates that radial geometry must be taken into account when simulating hollow fibres, as the use of an equivalent flat sheet model could over-estimate the CO2 flux by as much as 25%.
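The order of magnitude of this bias can be checked with a back-of-envelope calculation that ignores the reaction and considers pure diffusion only: per unit inner surface area, a flat sheet of thickness L passes D·ΔC/L, while the annular wall passes D·ΔC/(Ri·ln(Ro/Ri)). The numbers below are illustrative.

    import math

    def radial_over_flat(Ri, L):
        """Ratio of radial to flat-sheet diffusive flux at the inner wall."""
        Ro = Ri + L
        return (L) / (Ri*math.log(Ro/Ri)) ** -1 / L if False else \
               Ri*math.log(Ro/Ri) / L

    print(radial_over_flat(100.0, 40.0))   # ~0.84: flat sheet ~19 % too high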
4. Simulating a Hollow Fibre Contactor
A case study is considered which requires the CO2 content of a 10 l/min feed stream to be reduced from 5.0% to 0.5% by volume. Standard medical dialysis contactors are used for this process, the details of which are shown in Table 1. With air as the sweep gas, our experimental work has shown that the optimum flowrate is 500 cm3.s-1 per m2 of membrane area, flowing counter-currently to the feed gas. Ambient air usually contains about 0.03% CO2 by volume. The carrier solution used is aqueous DEA at a concentration of 4 mol.dm-3. The physical property data corresponding to this solution are shown in Table 2.
Table 1. Contactor properties.
  Membrane thickness, L           40 μm
  Inner fibre radius, Ri          100 μm
  Fibre length, Y                 24.3 cm
  Total area per contactor        1.3 m2
  Membrane tortuosity, τ          2.5
  Membrane porosity, ε            0.5

Table 2. Physical property data (Teramoto et al.).
  Solubility constant, Ha          4.2*10^  mol.cm-3.cmHg-1
  Forward rate constant, k1        4.0*10^  cm3.mol-1.s-1
  Equilibrium constant, Keq        4.165*10^
  2nd reaction parameter, k2/k3    1.18*10^ mol.cm-3
  CO2 diffusivity, Da              5.0*10^- cm2.s-1
  Amine diffusivity, Db            2.08*10^ cm2.s-1
4.1. Results and Discussion
The optimum (minimum) membrane area for the process described above was found by solving Eqs. 9-14 using gPROMS. The model estimated that 9.08 m2 would be needed, provided by 7 contactor units arranged in parallel. Fig. 4 illustrates how the CO2 content of the feed stream changes as it moves through the contactors, whilst Fig. 5 shows the effect that this has upon the CO2 concentration within the membrane. However, experimental results obtained for these gas flowrates found that 15 contactor units, providing a total area of 19.5 m2, were needed to reduce the CO2 concentration to 0.5%, indicating an error in the model estimate of over 50%. The approach of the model assumes that all of the membrane surface area present in the unit is used for CO2 separation, but this does not appear to be the case. As the fibres become wet, they cling together and form a compact bundle that the sweep gas cannot penetrate. Towards the centre of this bundle, the CO2 concentration surrounding these un-swept fibres will remain high. Also, channels form between the bundle and the contactor shell, allowing more of the sweep gas to bypass the fibres completely. These are the main reasons why the model significantly under-estimates the area needed.
Figure 4. CO2 content of the feed/retentate along the fibre length.
Figure 5. CO2 concentration profile along the fibre length.
5. Conclusions
A simple model for predicting the flux of CO2 through hollow fibre supported liquid membranes has been presented. It has been shown that radial geometry must be considered in order to accurately simulate the flux of the gas through the walls of a hollow fibre, and that a flat sheet model is not able to capture this. However, because of
the conditions within the contacting units, the hollow fibre contactor model actually under-estimates the area needed to perform a specific duty. To improve the accuracy, it is necessary to include the effects of wetted fibre bundles and uneven sweep gas distribution, and this will be considered in our future work.
6. Nomenclature
C      Concentration, mol.cm-3
D      Diffusivity, cm2.s-1
F      Feed gas flowrate, cm3.s-1
H      Solubility constant, mol.cm-3.cmHg-1
k1     Forward rate constant, cm3.mol-1.s-1
k2/k3  Reaction rate parameter, mol.cm-3
Keq    Equilibrium constant
L      Membrane thickness, cm
N      Flux through the membrane, mol.cm-2.s-1
P      Partial pressure, cmHg
r      Radial distance through membrane, cm
Ri     Inner radius of membrane fibre, cm
Ro     Outer radius of membrane fibre, cm
x      Distance through a flat membrane, cm
Y      Fibre length, cm
z      Distance along the membrane fibre, cm
Greek symbols
ε      Membrane porosity
τ      Membrane tortuosity
ω      Reaction rate, mol.cm-3.s-1
Φ      Facilitation factor
Subscripts
a      CO2
b      DEA
e      Carbamate
f      Feed stream
p      Permeate stream
T      Total
7. References
Davis, R.A., Sandall, O.C., 1993, CO2/CH4 Separation by Facilitated Transport in Amine-Polyethylene Glycol Mixtures. AIChE J., 39 (7).
Guha, A., Majumdar, S., Sirkar, K.K., 1990, Facilitated Transport of CO2 through an Immobilized Liquid Membrane of Aqueous Diethanolamine. Ind. Eng. Chem. Res., 29.
Meldon, J., Paboojian, A., Rajangam, G., 1986, Selective CO2 Permeation in Immobilized Liquid Membranes. AIChE Symp. Series, 248.
PSEnterprise Ltd, 2001, gPROMS Advanced User Guide.
Teramoto, M., Nakai, K., Ohnishi, N., Huang, Q., Watari, T., Matsuyama, H., 1996, Facilitated Transport of Carbon Dioxide through Supported Liquid Membranes of Aqueous Amine Solutions. Ind. Eng. Chem. Res., 35.
On the Principles of Thermodynamic Modeling
Tore Haug-Warberg*
Department of Chemical Engineering, Norwegian University of Science and Technology, NTNU, N-7491 Trondheim, Norway
Abstract
Applied thermodynamics is to a large extent about heterogeneous phase equilibria and, quite naturally, much effort has been put into the development of more accurate phase models. On the other hand, surprisingly little work has focused on the consistency of thermodynamic frameworks made from independent model contributions. For this purpose a set of syntactic and semantic modeling rules must be established prior to the computer implementation. These rules should be stated in an application-independent manner, i.e. there should be no need for a dedicated (commercial) program interface. One possibility is to define the rules implicitly by virtue of operator overloading, and in this paper it is shown that three algebraic operators (+, · and ^) suffice to describe thermodynamic frameworks of arbitrary complexity. The use of operators, as opposed to a dedicated program interface, has the advantage that complex frameworks can be described on the basis of thermodynamic reasoning, and without knowledge of any implementation details. This makes model maintenance, exportation and documentation easier. Examples based on a Helmholtz energy equation of state, and a Gibbs energy model with separate activity coefficient models for each of the binaries, are discussed.
1. Introduction
Thermodynamic state models are normally written as explicit functions in either T, p, Ni or T, V, Ni, but in the lack of a universal phase description the model may require contributions from several (empirical) state functions (equations of state, activity models, Cp-functions, etc.). Given a combined model of this kind, the computation of thermodynamic phase and reaction equilibrium is, at least in principle, routine, but it remains a practical issue how to construct a consistent thermodynamic framework, in particular when dealing with general computer interfaces. One example is the calculation of H = U + pV. Everything works smoothly as long as the variables are taken from the same equation of state, but not so if e.g. V is replaced by a value taken from an external correlation. The user must then decide to use either H or U + pV in the calculations. In this paper we shall employ algebraic operators in the definition of thermodynamic frameworks. Nothing new is added to the theory of equilibrium thermodynamics, but it is noteworthy that only three operators (+, · and ^) are needed to specify frameworks
666 of arbitrary complexity. Other authors has addressed similar problems using symbolic algebra, see Rudnyi & Voronin (1995) and Castier (1999), and CAPE-open interfaces (Braunschweig, Pantelides, Britt & Sama 2000). A more traditional software interface approach has been described by Uhlemann (1995, 1996). The programming was done in Ruby, confer Thomas & Hunt (2001). The Ruby language was preferred in competition with Matlab, Maple, C++, Python and Perl for its elegant iterator idiom and clean class abstraction, but it must be admitted that it is not the most wide-spread computer language in the world, and neither is it very suitable for numerical computations. The Ruby code is therefore used only to verify the syntax and the semantics of the framework (and for data base handling). The outcome of the description is an object X which can be linearized^ into XML-format, and as such be exported to a calculation engine written in C++. The linearization is reversible, i.e. a copy of the Ruby-object can be made by parsing a previously spawned XML-document; X.to_xml <=> IO.from_xml(X.to_xml).to_xml where Wis the standard Ruby class for handling input-output things, and fromjxml is a user defined method for parsing the XML-document (spawned by to_xml). Other exportations are also possible, e.g. to Matlab or BTEX, but these are (of course) irreversible.
2. Algebraic expressions From elementary algebra we are famiUar with the expression: A 4- ^ • C ' " where a number of alternative interpretations are available, e.g. A, B,C,m e I, or A, B,C eR and m el, or A,B e E"^'" and C e W""""" eRmdm el. The same algebraic syntax can be used to define thermodynamic frameworks as well, but what really counts in this context is the operator precedence ^ > • > -f, and not the original operator behavior. In fact, to stress that we do not talk about a true mathematical algebra, the operators are deUberately given new fancy names Chain (-f). Patch (•) and Tell ( ^) in order to avoid confusion with their arithmetic counterparts. Using these operators a thermodynamic function can be realized as a node in a connected acyclic graph (tree). The + operator is both conmiutative and associative while the • operator is associative and distributive:
A+B=B+A A + (Bi + B2) = A + Bi + B2 Bi^Ci^Ci^B Bi<{Ci^D) =
(1) Bi.Ci^D
5 • (Ci + C2) = 5 • Ci + 5 • C2 It is also necessary to use () to resolve ambiguous calculation paths and [ ] to identify operand hsts, but these operators are not central to the thermodynamic understanding of the problem. This is as far as the syntax goes. The semantics is controlled by the left operands of the binary expressions and these objects must know exactly which classes are "friend" classes with respect to + and • operations. They must also know how to arrange ^Translation of an object's representation into another format without using code jumps.
667 the component lists in order to make a consistent thermodynamic model framework. The goveming construction rules are, B.class e A.chain B.cmp = A.cmp
A-{-B=f>
(2)
Cf.class e B.patch \/i e {1,2,... ,n} Ci.class = Cj.class \/i,Je{l,2,...,n} U Ci.cmp = B.cmp
^•[Ci,C2,...,C„]
^
(3)
Ci.cmp = 21 B.cmp
i=l,n
i=l,k
where class is a built-in Ruby function returning the class name of the object, and chain and patch are user defined methods retuming a set of friend classes for the H- and • operations respectively. Finally, cmp is a user defined method which returns the component set of the object. The + operator adds the results calculated by object 5 to a similar data structure in object A. Example: A = ideal and B = excess properties. The • operator spawns the results calculated by object C, into a dissimilar data structure in object B. This allows for structural changes in the calculated results, but the left operand B must of course know exactly how to perform the patch. Example: B = Helmholtz energy and C, = standard state of component /. The power operator ^ will be used to modify the behavior of a particular function, e.g. to specify which of the many existing m (cy^z)-correlations to be used in a cubic EOS. Note the difference between = (set operation) and = (list operation) as used in the last two comparisons. The first test ensures that the union of the component sets of Ci=i^„ equals the component set of B, while the second test ensures that each component is repeated k e {1,2, n} times in the cumulated list. For pure component standard states k = 1, for geometric activity models like e.g. the Kohler expression (Bertrand, Acree & Burchfield 1983), A: = 2, andfinallyfor a full mixture contribution k = n.
3. Results Two examples are provided to illustrate the benefits of algebraic description of thermodynamic frameworks. The first example is based on a traditional cubic equation of state approach: jmix A
_
JO T
I I
jig
I
jSRK,m_soaoe
(4)
7^
t^i (/J ^^^ '^^ - ^/J T^^^) + Z^^ (^/^^ - ^^°) n
(5)
668 The annotations refer to a collection of thermodynamic classes needed for the semantics control: Helmholtz energy {A)y standard state (T), ideal gas (J), equation-of-state residual (7^), heat capacity integral (C), and enthalpy and entropy reference (7i). The number of these classes is (perhaps disappointingly) large, but one must realize that thermodynamics is an old subject and that some of the conventions made over the past 100 years are quite inadequate for computer implementation, hi addition to the physically reasoned classes, one extra class *S is needed to faciUtate a general interface to the Gibbs energy and Helmholtz energy objects (and maybe grand canonical potential objects, etc.). This root class represents a mathematical quantity called a surface which is responsible for wrapping the canonical state variables of Gibbs and Helmholtz energy objects into a completely generic thermodynamic state function. Similarly, an extra class M is used to provide a layer between the Helmholtz class and the model classes (the need for this class depends somewhat on the implementation details). The algebraic representation of the Helmholtz model can now be written as, Si.Aic{TicCicH-{-Mi.I
+ Mi.R'")
(6)
where each symbol S, A,... represents an instance of the classes 5 , A • • • (the function modifier '" tells srk to use Soave's w-factor correlation). The syntax of the expression is easily verified, but it is harder to prove that the semantics is correct as well. This would require: a) a sound mathematical basis for the algebra, or, b) a full discussion of all the participating classes. It can be appreciated, however, that provided the semantics is under control the algebraic expression is a compact representation of the Helmholtz model. Note by the way that one further simplification is possible if 7Z is implemented as a chain "friend" of I: Si.Ai.{TicCicH+{Mic{l-i-
R'")))
(7)
A grand overview of the model structure can be obtained by viewing the expressions as data trees (Eq.6 to the left and Eq.7 to the right): •
A
•
T
•
C
•
H
S
M
•
/
•
j^m
+ M
i^ A
i^
T
i<
C
•
/
•
H
+
+ )
M
(8)
+ R^
Here, the chains are arranged vertically and the patches horizontally. This makes it easier to remember that the chains add information at the same level of complexity (standard state plus equation of state contributions), while the patches add information at an increasing level of complexity when going from left to right (the surface being more general than the Helmholtz function, which is more general than either the standard state or the equations of state, and so on). The two algebraic expressions are completely equivalent in
669 the sense that the computed results will be the same, but the actual calculation paths will of course differ. At this point it is of interest to see what the Ruby code looks Hke. Analytical conciseness does not guarantee a simple computer code, but in this case the code snippet is seen to be very close to the analytical expression: mix = ['Nitrogen','Oxygen','Argon'] gas = Surface.new(mix) * ( Helmholtz.new(mix) * ( StandardState.new(mix) * ( MuT_cp.new(mix, :poly3 , ' ig') * ( MuT_hs.new(mix, :hOsO,'ig') ) ) + EquationOfState.new(mix) * ( ModTVN_ideal.new(mix,:idealgas) + ModTVN.new(mix,:srk).tell(:m_soave) ) ) ) Here, new is the standard Ruby constructor (overloaded in each class). The function names/7o/y5, hOsO, idealgas, srk and msoave do all refer to actual model implementations, while ig is a phase tag used for the data base binding. If no function is specified in the constructor then anonymous will be called. The role of this function (which is also overloaded in each class) is simply to patch the calculation results from the underlying nodes without adding any private functionality. Finally, the ^ operator is not implemented as such but rather as a call to method tell (again due to implementation details). The second example is made more complex to show how the patch operator can be employed to obtain quite sophisticated model arrangements: .
. = ^ ^ + ^^+^kohleA^
i=2,3
-^Pi
1,2
'^1,3
'^2,3
)
W
i=2,3 V
Z^^ (/J^^'^^^ - ^/J '-f'")+z^' i^f^i - ^^n (10) c
n
The algebraic equivalent is

S · G · (T · [P2·C2·H2, P3·C3·H3, P1·C1·H1] + M · I + M · E^kohler · [E12, E13, E23])    (11)
where Pi is an instance of the Poynting integral class P applied to component i, and Eij an instance of the excess Gibbs energy class E applied to the components i and j. Similar interpretations hold for instances of the heat capacity integral class C and the standard state class H. Note that P1 lacks a concrete implementation, but that it is still needed in order to fulfill the semantics. However, since no function name is specified, the properties of water will be calculated without a Poynting contribution (the anonymous function takes care of the data propagation).
4. Discussion
The possibility of using algebraic operators to build thermodynamically consistent model frameworks has been addressed. By defining a thermodynamic class hierarchy in Ruby, where each class knows about its "friend" classes and how to deal with their calculation results, it is shown that a set of rules can be established to make the framework description both flexible and consistent. The outcome of the description is a Ruby object that can be linearized into XML format before it is sent to a calculation engine written in C++. A second possibility (not illustrated) is the exportation of Matlab code. In this case a standalone Matlab function is produced which encapsulates the entire framework, including all the model parameters, into one black-box state function returning the function value, the gradient and the Hessian of both Gibbs energy and Helmholtz energy models. A third possibility (under development) is the exportation of LaTeX code for self-documenting model description.
5. References
Bertrand, G.L., Acree, Jr., W.E. & Burchfield, T.E. (1983), 'Thermochemical excess properties of multicomponent systems: Representation and estimation from binary mixing data', J. Solution Chem. 12(5), 327-346.
Braunschweig, B.L., Pantelides, C.C., Britt, H.I. & Sama, S. (2000), 'Process modeling: The promise of open software architectures', Chem. Eng. Prog. 96(9), 65-76.
Castier, M. (1999), 'Automatic implementation of thermodynamic models using computer algebra', Comput. Chem. Eng. 23(9), 1229-1245.
Rudnyi, E.B. & Voronin, G.F. (1995), 'Classes and objects of chemical thermodynamics in object-oriented programming. 1. A class of analytical functions of temperature and pressure', CALPHAD 19(2), 189-206.
Thomas, D. & Hunt, A. (2001), Programming Ruby: The Pragmatic Programmer's Guide, Addison-Wesley, Boston.
Uhlemann, J. (1995), 'An object-oriented environment for queries and analysis of thermodynamic properties of compounds and mixtures', Comput. Chem. Eng. 19(Suppl.), 715-720.
Uhlemann, J. (1996), 'An object oriented software environment for accessing and analysing thermodynamic pure substance and mixture data', Chem-Ing-Tech 68(6), 695-698.
Short-Term Scheduling in Batch Plants: A Generic Approach with Evolutionary Computation
Jukka Heinonen and Frank Pettersson
[email protected], [email protected]
Heat Engineering Laboratory, Department of Chemical Engineering, Åbo Akademi University, Biskopsgatan 8, FIN-20500 Åbo, Finland
Abstract
This paper presents a program that uses a graphical user interface through which an end-user can describe a batch process using simple building blocks and then develop schedules for it. By allowing the end-user to build schematics of the target process hardware and write product recipes, we can evade the all too common need for a software vendor to build specifically targeted releases for different batch processes; processes that might use common batch-specific hardware which is merely connected and/or used in a different manner. Using genetic algorithms as an instrument, a software package is presented that is able to target and solve large scheduling problems.
1. Introduction
In the chemical engineering industry, for example, manufacturers are faced with ever growing demands on profitability. Using multi-purpose batch mode production is one effort to introduce more profit through increased process flexibility. Expanding and renewing processes through new hardware acquisitions is another common solution, yet it is of utmost importance to use existing hardware as efficiently as possible. That is where efficient scheduling and utilization of the available hardware enter. Scheduling problems occurring in batch mode production have been studied extensively. Methods based on mixed integer linear programming (e.g. Kondili et al. 1993), in either pure or hybrid MILP models, are popular, but these lose their edge on NP-hard problems. Since an industrial-sized scheduling problem typically exhibits an exponentially growing number of candidate solutions, this leaves such approaches somewhat wanting. Probabilistic optimisation and scheduling techniques like simulated annealing (e.g. Kirkpatrick et al. 1983) or genetic algorithms (e.g. Goldberg 1999) are good alternatives, and in this paper the genetic algorithm is the preferred approach to tackling large scale multi-purpose batch processes and their scheduling.
2. Methods
Figure 1 shows an overview of the scheduling procedure. There are three main steps: specifying the available hardware, telling the system how it is used when manufacturing various products, and then generating the schedule and presenting it to the user. The initial states of the hardware can be entered, i.e. storage tank levels, reactor states etc.; alternatively, default (empty) values can be used. An interesting question is whether the program can be made to automatically get the initial values from, for instance, an automation system or a database located on a LAN server. At present, however, the values are input by hand.
[Figure 1 block diagram: HARDWARE SPECIFICATION → HARDWARE DATABASE; RECIPE SPECIFICATION → RECIPE DATABASE; INITIAL VALUES FOR HARDWARE → EVOLUTIONARY COMPUTATIONS PHASE ↔ SIMULATION PHASE]
Figure 1: Overview of the scheduling procedure.

2.1. Hardware specification
The first step is to specify the hardware found at the manufacturing plant. This is done by using a graphical user interface and a simple select & drop mechanism. An object is selected and placed on the drawing board. By opening an "object properties" page one can further refine and shape the object. For instance, if the user wants to specify a storage tank, the object is given a volume and preferred safety limits for the cistern profile. Next, a general method of input for the cistern is defined (on-demand, continuous feed etc.), which helps the simulation part of the program to understand the behaviour of the cistern. If several similar cisterns are needed, it is a simple matter to select the defined cistern and make copies of it. The underlying mechanisms are completely object-oriented, so the copies inherit all properties of the parent. A principle behind the program is to keep the number of building blocks small. At this stage, typical hardware included in the program comprises reactors, storage tanks, concentrators, mixers, splitters and filters. The development continues, so more hardware will be included in the future. The specified hardware can be saved and reloaded from a database for future use.

2.2. Specifying a recipe
The second step is to specify how different products are manufactured using the declared hardware. The resulting model is called a "recipe", and tells us how much to take of the necessary substances, where to find them, what to do with them and how long process steps in e.g. a reactor will take. Eventual recycles and feeds are also defined at
this stage. The input procedure for a recipe is through a GUI, with the needed information entered through drop-down menus, edit boxes and other common controls. A recipe is written for each different product manufactured. Different recipes are collected in a database for quick access and modification. Having written a recipe, the user can get a connection-based view of the hardware needed for the defined product and verify that the recipe is written correctly. When planning a schedule, the objective function must be specified. That is, in a multiproduct plant, do we want to meet some production criteria like "produce 10 units of A, 25 units of B and minimise makespan or setup times", or do we want to make sure all customer orders are manufactured and delivered within given due-dates, i.e. to prioritize production slightly differently depending upon due-date penalties.

2.3. Building a schedule
With all the necessary information present (including initial states for the hardware), a schedule can be calculated. The mechanism is divided into an evolutionary computation phase, where the order of events in the system is sought, and a simulation phase, where, using current recipes and information encoded in the chromosomes, the cistern profiles and reactor states are simulated. With information from the simulation step, chromosome fitnesses can be calculated. These phases continue to loop until an end criterion is met. The criterion might be e.g. "loop until the system converges", "loop for a specific time", "loop a specific number of generations", or the user can simply stop the calculations, check the resulting schedules and continue the calculations if they are found lacking. During runtime, the state of the current population is presented to the user by means of average population fitness curves as well as fitness distribution plots, along with various other helpful information.

2.4. The GA engine
From previous trials (Heinonen and Pettersson 2002), the permutation encoding approach was found to be suitable. Exponential ranking was chosen as the selection method, i.e. an exponential expression is used in calculating the parental selection probabilities. Individuals with larger fitness values receive a proportionally larger probability of being selected as parents. The following expression by Blickle (1995) is used to calculate the selection probabilities:
p_i = c^(N-i) / Σ_{j=1}^{N} c^(N-j)

where p_i is the selection probability for chromosome i (ranked so that the best chromosome has rank N). The number of chromosomes in the population is N. Note that 0 < c < 1, and the closer c is to 0 the higher the "exponentiality" of the method and the higher the selective pressure. A c-value of 0.65 is used here. After trials, two-point crossover was chosen as the crossover operation with a crossover rate of 60%. Mutation was kept at 1%.
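As a concrete illustration, a minimal Python sketch of this selection scheme follows (an illustration, not the authors' implementation; it assumes the population can be sorted worst-to-best so that the best chromosome holds rank N):

import random

def exponential_ranking_probabilities(n, c=0.65):
    # Blickle (1995) exponential ranking: rank 1 is the worst chromosome,
    # rank n the best, and p_i = c**(n - i) / sum_j c**(n - j).
    # With 0 < c < 1, a smaller c concentrates probability on the top ranks.
    weights = [c ** (n - i) for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def select_parent(population, fitness):
    # Roulette-wheel draw over the population ranked worst-to-best.
    ranked = sorted(population, key=fitness)
    probs = exponential_ranking_probabilities(len(ranked))
    r, acc = random.random(), 0.0
    for individual, p in zip(ranked, probs):
        acc += p
        if r <= acc:
            return individual
    return ranked[-1]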
Since some multiproduct batch plants can be very large and contain many different products, a question that arises is calculation time. The genetic algorithm is very suitable for distributed networking, and a mechanism is to be tested where a client-server structure can be used to distribute the simulation workload. The calculation clients can run on any NT-operated PC, and the main idea is that the PC user can allocate how much of their CPU time is used for client-side calculations. Alternatively, more efficient chromosome structures might allow for scheduling of some very large problems.
3. Example Problem
Figure 2 shows a process with two different continuous raw material feeds into storage tanks (note that the picture shows only the specified hardware, not their connections or process flow, which is defined by the recipe). Two concentrators concentrate the chosen material into a batch-operated reactor battery. A concentrator can only accept one type of material stream until the concentration task is completed. The reactors are made up of a prereactor and a mainreactor which form a pair. Processing times in the reactors vary, depending upon the chosen raw materials. Once a prereactor has finished, the batch is transferred to the mainreactor where it undergoes further processing. After a finished reaction the batch is separated (thus yielding product) and some of the unused material is transported back as a recycle through a continuously operating reactivation step. The separation step can only accept one batch at any given time, and the time of separation varies depending upon which type of raw material was used in the batch. The reactors have different volumes and configurations and the concentrators have different capacities. The storage tanks are given minimum and maximum safety limits. In this particular example, 4 reactor pairs are locked to use only raw material A, and 6 pairs to use only material B. Cleaning a used reactor takes 30 minutes; no other setup times occur. There are no due-dates, since the objective is to maximise production while fulfilling the hardware constraints. The scheduling task is to calculate a one-month schedule, manufacturing as much product from the different feeds as possible without violating any cistern safety limits.
[Figure 2 screenshot: storage tank A (25.00 m³, constant flow), storage tank B (50.00 m³, constant flow) and storage tank C (35.00 m³, on-demand), each with 90%/10% safety limits; concentrators 1-3 (dry-content converters); the reactor battery; and the reactivation and separation steps (splitters).]

Figure 2: A screenshot of the example process specified using the available building blocks.
4. Results
In Figure 3, a resulting Gantt chart can be seen. The concentrators, reactors, storage tanks and the separation step all have their separate schedules, which can be viewed, saved and/or printed. Utilization figures are presented, which makes identification of process bottlenecks easier. Figure 4 shows the cistern profiles for the storage tanks, and as can be seen, they stay well between the defined safety limits of 10% and 90%. Computational time was around 15 minutes on a 600 MHz PC for the complete one-month schedule.
Figure 3: A resulting one-month schedule for the reactor pairs in the example process. Similar charts are constructed for every piece of hardware used.
Figure 4: Cistern profiles for the three storage tanks in the example process.
5. Conclusions
The results look promising. Due to the GUI and the existence of simple building blocks, it is easy to define a variety of common batch-type processes and construct schedules that give a good solution relatively fast. With some further tuning and the introduction of more hardware, this has the potential to become a very helpful tool for scheduling purposes and/or bottleneck identification. In future work, some classic scheduling examples from the literature are to be used for benchmarking purposes.
6. References
Blickle, T. and Thiele, L., 1995, A Comparison of Selection Schemes Used in Genetic Algorithms, TIK-Report No. 11, ETH Zurich.
Goldberg, D.E., 1999, Genetic Algorithms in Search, Optimization & Machine Learning, Addison Wesley, California.
Heinonen, J. and Pettersson, F., 2002, Scheduling a batch process with a genetic algorithm: comparing different selection methods and crossover operations, submitted to European Journal of Operational Research.
Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P., 1983, Science 220, 671-680.
Kondili, E., Pantelides, C.C. and Sargent, R.W.H., 1993, Comput. Chem. Eng. 17, 211.
7. Acknowledgment The financial support from TEKES, the Finnish Technology Agency, is gratefully acknowledged.
Model of Burden Distribution in Operating Blast Furnaces
Jan Hinnela and Henrik Saxen
Faculty of Chemical Engineering, Abo Akademi University, Biskopsgatan 8, FIN-20500 Abo, Finland
E-mail: [email protected], [email protected]
Abstract
A model for estimation of the burden distribution is developed for blast furnaces with bell-type charging equipment. The model uses a radar measurement of the local burden level, combined with equations for the dump volume and the repose angle of the charged material, which are solved by least squares. The burden surface is described by two half-lines intersecting at the point where the trajectory of the falling dump hits the burden surface. The model is tuned and applied to data from a Finnish blast furnace, and the results are in general agreement with findings reported in the literature.
1. Introduction
The blast furnace is the main process in the world used to produce iron for primary steel production. Preprocessed ores, in the form of sinter or pellets, are charged with coke in alternate layers into the furnace top, and the iron-bearing burden is reduced and smelted by the hot gases formed in the combustion that is maintained lower down in the furnace by injecting hot blast (preheated air). The ore melting and coke combustion and gasification make the bed descend slowly, and a new layer is charged as the stock descends below a level set-point. The radial distribution of the burden materials plays a significant role in the operation of the process (Poveromo 1995, 1996). The coke and ore layers exert considerably different resistance to gas flow, so variations in the layer thicknesses in the radial direction affect the gas flow distribution and the pressure loss in the dry part of the process, thereby affecting both thermal and chemical phenomena in the shaft. The distribution of the coke layers also determines the size of the "coke windows" in the softening-melting (cohesive) zone, through which the gas enters from the lower regions (Omori 1987). Devices have been developed for estimation of the burden distribution, either by contact (Iizuka et al. 1980, Iwamura et al. 1982, Kajiwara et al. 1983) or non-contact (Iwamura et al. 1982, Mawhinney et al. 1993, Grisse 1991) techniques, but such equipment is very expensive and generally requires considerable maintenance effort. Models for indirect estimation of the burden distribution have been proposed (Iizuka et al. 1980, Kajiwara et al. 1983, Nicolle et al. 1987, Itaya et al. 1982, Nikus 2001, Saxen et al. 1998, Hinnela and Saxen 2001). This paper presents a model that estimates the burden distribution from a single radar measurement of the vertical level of the burden surface (stock level) in combination with conditions for the shape of the stock line and the volume of the charged dump.
2. The Model and Its Numerical Solution
Inspired by the findings from burden distribution experiments reported in the literature and by the approach made by Kajiwara et al. (1983), the surface of the charged dump is approximated by two main linear segments (cf. Fig. 1)

z_i^a(r) = a_{1,i} r + a_{2,i},   r ≤ r_{c,i}   (1)
z_i^a(r) = a_{3,i} r + a_{4,i},   r_{c,i} < r ≤ R_T   (2)

where z is the vertical distance from a reference level (often taken to be the lowest position of the edge of the large bell) to the stock line, r_{c,i} is the radial coordinate of a (possible) crest, R_T is the throat radius of the furnace, and i is the running number of the dump. If the stock level is known at the moment when the dump is charged, one may solve the falling trajectory (Omori 1987) of the burden and use the intersection of the trajectory with the burden surface as the radial coordinate of the crest. In case movable armors are applied, the trajectory after hitting the armors can be calculated from the armor position and angle, assuming a certain deflection from the plates.
Fig. 1 Schematic picture of a charged layer with a crest at r = r_{c,i} and a bending point of the layer surface at r = r_b, where the angle of the surface is reduced. The burden surface before charging is depicted by the thick solid line.
In order to determine the parameters a_i = (a_{1,i}, a_{2,i}, a_{3,i}, a_{4,i})^T of eqs. (1) and (2), the following equations are stated: Eq. (3) expresses continuity of the stock line at r = r_{c,i}, and eq. (4) that the radar (subscript r) measurement, z_r = z(r_r), should be satisfied after the dump has entered (superscript a). The volume of the charged dump should satisfy eq. (5), where superscript p denotes the conditions prior to the dump, m is the mass and ρ is the bulk density of the material. Finally, the angle of the burden surface given by the center-side half-line should agree with a given value, α, determined by the angle of repose of the material (cf. eq. (6)).

a_{1,i} r_{c,i} + a_{2,i} = a_{3,i} r_{c,i} + a_{4,i}   (3)

z_i^a(r_r) = z_{r,i}^a   (4)

V_i = m_i / ρ_i = ∫_0^{2π} ∫_0^{R_T} (z_i^p(r) − z_i^a(r)) r dr dθ = 2π ∫_0^{R_T} (z_i^p(r) − z_i^a(r)) r dr   (5)

α_i = −atan(a_{3,i})   (6)

The level and shape of the burden surface prior to the dump, z_i^p(r), can be determined from the previous solution, z_{i−1}^a(r), and the local stock level measured by the radar after the previous dump, z_{r,i−1}^a, under certain assumptions (Saxen and Hinnela 2002). In burden distribution experiments the surfaces of the dumps on the wall side of the crest and the coke-layer surfaces in the very central part of the furnace have been found to exhibit angles clearly smaller than the corresponding repose angles. The former observation has been considered by adding condition (7), where β is a constant, while the latter has been accomplished by modifying eq. (2) by an additional condition (8), where 0 < ξ < 1 and r_b < r_{c,i} (cf. Fig. 1). Finally, to deal with the possibility that a heavier material is charged on top of a lighter material in the same dump, as is done to implement center-coke charging (Uenaka et al. 1988), the crest of the first material (coke) is shifted by a quantity, Δr_c, towards the furnace center (cf. eq. (9)).

β = atan(a_{1,i})   (7)

z_i^a(r) = ξ a_{3,i} r + ã_{4,i}   if r < r_b   (8)

r_c' = r_c − Δr_c   (9)

The model has been implemented in Matlab, solving equations (4)-(8) by least squares but treating the condition of continuity at the crest (eq. (3)) as an equality constraint. Initial guesses, a_i^(0), of the unknown variables are readily obtained from the vertical level of the stock measured by the radar and from the desired surface angles (Saxen and Hinnela 2002).
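A minimal Python sketch of this solution step follows (an illustration under stated assumptions, not the authors' Matlab code; the crest radius r_c, throat radius R_T, radar position r_r and reading z_r, dump volume V, prior surface z_prev and target angles alpha and beta are assumed given):

import numpy as np
from scipy.optimize import least_squares

def surface(a1, a2, a3, r, r_c):
    # Two half-lines meeting at the crest; a4 follows from the equality
    # constraint (3), which eliminates one unknown.
    a4 = a1 * r_c + a2 - a3 * r_c
    return np.where(r <= r_c, a1 * r + a2, a3 * r + a4)

def residuals(p, r_c, R_T, r_r, z_r, V, z_prev, alpha, beta):
    a1, a2, a3 = p
    r = np.linspace(0.0, R_T, 200)
    z_a = surface(a1, a2, a3, r, r_c)
    vol = 2.0 * np.pi * np.trapz((z_prev(r) - z_a) * r, r)   # eq. (5)
    return np.array([
        surface(a1, a2, a3, np.array([r_r]), r_c)[0] - z_r,  # eq. (4)
        vol - V,                                             # eq. (5)
        -np.arctan(a3) - alpha,                              # eq. (6)
        np.arctan(a1) - beta,                                # eq. (7)
    ])

# sol = least_squares(residuals, x0=np.zeros(3),
#                     args=(r_c, R_T, r_r, z_r, V, z_prev, alpha, beta))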
3. Tuning and Results
The model has been applied to data from blast furnace No. 1 of Rautaruukki Steel in Raahe, Finland. This medium-sized bell-top furnace is equipped with movable armors with ten possible positions (MA = 1, ..., 10), has a throat radius of R_T = 3.15 m and a radar that measures the burden level 0.6 m from the furnace throat wall. The furnace burden consists of sinter (S), pellets (P) and coke (C). The data evaluated are from two distinct periods where the furnace was operated with ten-dump charging sequences. The average burden-layer thicknesses, Δz_r, estimated from the radar signals, as well as the main characteristics of the charging programs, are given in Table 1.
3.1. Tuning
Before the model can be applied, the values of some open model parameters have to be set. In accordance with findings reported in the literature (Kajiwara et al. 1983, Michelsson 1995) the change in inclination of the coke surface was set as ξ = 0.3 (cf. eq. (8)), and the effect of the radial position of the bend point, r_b, was determined by minimization of the square sum of residuals over an 80-dump data set from Period 1 (cf. Table 1). The results showed that the error was very insensitive to the position of the bending point, with a flat minimum at r_b = 0.8 m, in general agreement with the findings of other authors (Kajiwara et al. 1983, Michelsson 1995). The effect of the coke-push parameter of eq. (9) was studied by a similar procedure applied on a uniform 200-dump data segment from Period 2, where two coke dumps in the sequence were center-charged. The analysis showed minimum error at Δr_c = 0.3 m.

Table 1. Main characteristics of the charging programs of the two periods.

       --------- Period 1 ---------   --------- Period 2 ---------
Dump   Type  MA   V/m³   Δz_r/m       Type  MA   V/m³   Δz_r/m
1      S      4   9.56   0.429        S      2   11.3   0.397
2      P     10   3.15   0.216        C+P   10   5.73   0.158
3      C      2   13.0   0.515        C      7   11.4   0.495
4      P+S    6   13.0   0.317        P+S    2   14.1   0.442
5      C      2   13.0   0.471        C      2   14.0   0.497
6      S      7   9.56   0.324        S      2   11.5   0.358
7      P     10   3.14   0.247        C+P   10   5.69   0.144
8      C      2   13.0   0.483        C      7   11.6   0.518
9      P+S    5   13.0   0.359        P+S    2   14.2   0.445
10     C      2   13.0   0.520        C      2   14.0   0.456

3.2. Evaluation
In the analysis of the model it was found that it was able to produce a reasonable description of the layered structure of the burden in the shaft. Figure 2 shows the estimated burden distributions in the upper part of the shaft for segments of Periods 1 and 2, where coke is depicted by light, sinter by gray and pellets by dark layers. In estimating the descent of the charged layers in the shaft, a procedure described in Saxen and Hinnela (2002) was applied. The radial distributions of the layer thicknesses, presented in the lower panels of the figure, illustrate that the charging program of Period 1 enhances peripheral gas flow by putting more coke towards the wall (e.g., to clean the wall from accretions), while the charging program of Period 2 corresponds to those applied at normal efficient furnace operation. In the latter period, the center-charged coke dumps are seen to give rise to a pronounced "chimney" in the central part of the shaft. This distribution is in general agreement with the results reported in the literature (Kajiwara et al. 1983).
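As a minimal illustration of the tuning procedure of Section 3.1, the following Python sketch performs the grid search (illustrative only; run_model is a hypothetical stand-in for an evaluation of the burden model for one dump):

import numpy as np

def tune_bend_point(dumps, r_b_grid, run_model):
    # run_model(dump, r_b) is assumed given: it evaluates the burden model
    # for one dump and returns the predicted radar level.
    errors = [sum((run_model(d, r_b) - d["z_radar"]) ** 2 for d in dumps)
              for r_b in r_b_grid]
    k = int(np.argmin(errors))
    return r_b_grid[k], errors[k]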
Fig. 2 Estimated distribution of the burden layers in the shaft (upper panels) and the corresponding distribution of the materials (lower panels) for Period 1 (left) and Period 2 (right). Light regions refer to coke, gray to sinter and dark to pellets.

4. Discussion and Conclusions
A model for studying the formation of burden layers in the ironmaking blast furnace has been developed on the basis of a single-point measurement of the stock level by radar. The model, which, furthermore, makes use of geometrical conditions of the problem at hand, has been kept conceptually simple so that it can be applied to track the burden distribution in operating blast furnaces. The model has been tuned to data from a Finnish blast furnace by adjusting a set of model parameters, and the results of the tuned model are in general agreement with results presented in the literature and with findings from pilot-scale models. In the future, the model will be evaluated on more extensive data sets and the relatively large variation in the measured layer thicknesses from sequence to sequence will be analyzed. An off-line version of the model will be used in the design of novel charging programs.
5. References
Grisse, H.J., 1991, Steel Times International, Nov. 1991, 612.
Hinnela, J. and Saxen, H., 2001, ISIJ International 41, 142.
Hinnela, J., Saxen, H. and Pettersson, F., 2002, Modeling of the Blast Furnace Burden Distribution by Evolving Neural Networks, submitted manuscript.
Iizuka, M., Kajikawa, S., Nakatani, G., Wakimoto, K. and Matsumura, K., 1980, Nippon Kokan Technical Report, Overseas 30, 13.
Itaya, H., Aratani, F., Kani, A. and Kiyohara, S., 1982, Rev. Met., CIT 79, 443.
Iwamura, T., Sakimura, H., Maki, Y., Kawai, T. and Asano, Y., 1982, Trans. ISIJ 22, 764.
Kajiwara, Y., Jimbo, T. and Sakai, T., 1983, Trans. ISIJ 23, 1045.
Mawhinney, D.D., Presser, A. and Koselke, T.G., 1993, Proceedings of Ironmaking Conf., ISS, Vol. 52, 563.
Michelsson, M., 1995, "Investigation of the effect of movable armors and the falling trajectory of coke", Internal report 9501, Fundia Wire Oy Ab, Koverhar, Finland (in Swedish).
Nicolle, R., Thirion, C., Pochopien, P. and Le Scour, M., 1987, Proceedings of Ironmaking Conf., ISS, Vol. 46, 217.
Nikus, M., 2001, A set of models for on-line estimation of burden and gas distribution in the blast furnace, Doctoral dissertation, Heat Engineering Laboratory, Abo Akademi University, Finland.
Omori, Y. (Ed.), 1987, Blast Furnace Phenomena and Modelling, The Iron and Steel Institute of Japan, Elsevier, London, UK.
Poveromo, J.J., 1995-1996, Burden distribution fundamentals, Iron & Steelmaker 22-23, No. 5/95-3/96.
Saxen, H., Nikus, M. and Hinnela, J., 1998, Steel Research 69, 406.
Saxen, H. and Hinnela, J., 2002, Model for burden distribution tracking in the blast furnace, accepted for Mineral Processing and Extractive Metallurgy Review, Special Issue on Advanced Iron Making.
Uenaka, T., Miyatani, H., Hori, R., Noma, F., Shimizu, M., Kimura, Y. and Inaba, S., 1988, Proceedings of Ironmaking Conf., ISS, Vol. 47, 589.
Environmental Impact Minimisation through Material Substitution: A Multi-Objective Optimisation Approach A. Hugo, C. Ciumei, A. Buxton and E.N. Pistikopoulos * Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London, SW7 2BY, U.K.
Abstract
This paper presents an approach for identifying improved business and environmental performance of a process design. The design task is formulated as a multi-objective optimisation problem where consideration is given not only to the traditional economic criteria, but also to the multiple environmental concerns. Finally, using plant-wide substitution of alternative materials as an example, it is shown how potential step-change improvements in both life cycle environmental impact and process economics can be achieved.
1. Introduction
Over the past decade, the awareness created within the chemical processing industry to design production facilities that operate within acceptable environmental standards has increased considerably. In their comprehensive review of approaches that include environmental problems as part of the process design task, Cano-Ruiz and McRae (1998) identify the need to formulate problems with environmental concerns as design objectives rather than constraints on operations. They argue that the traditional approaches where pollutant flows in waste streams are bounded by regulatory limits do not capture the underlying environmental concerns. Instead designers should balance environmental objectives against the economic ones and should formulate suitable objective functions that accurately represent the environmental performance of the process. Subsequently, the design task is often formulated as a multi-objective optimisation problem where the trade-off between environmental and economic criteria can be explored. Although the value of multi-objective optimisation to environmental process design has been widely recognised, most of the applications only consider the process in isolation from its supply chain. The majority of approaches also use so-called burden or physical indicators based upon the mass of waste and emissions released instead of quantifying the potential environmental impact of pollutants. Within this context Life Cycle Assessment (LCA) provides a more holistic approach for quantifying environmental

* To whom correspondence should be addressed. Tel.: (44) (0) 20 7594 6620, Fax: (44) (0) 20 7594 6606, E-mail: [email protected]
performance measures (BS EN ISO, 2000). Accordingly, the Methodology for Environmental Impact Minimization (MEIM) was developed to incorporate the environmental impact assessment principles of LCA into a formal multi-objective process optimisation framework (Pistikopoulos et al., 1994). Since its original formulation the methodology has also been successfully extended to include molecular design techniques for the design of substitute solvents (Pistikopoulos and Stefanis, 1998; Buxton et al., 1999). In this paper, the MEIM is revisited and applied to an illustrative example to highlight its potential for identifying step-change improvements in both process economics and life cycle environmental performance.
2. Illustrative Example - Process Description
The industrial manufacturing facility for the production of 152 tons per day of crude acrylic acid (95% pure on a weight basis) using propylene, air and steam as raw materials acts as the illustrative example (Figure 1). The process consists of a reactor followed by a cooler, an absorption column (using demineralised water as absorbent), a solvent extraction column (using di-isopropyl ether as the base case solvent) and a distillation column for solvent recovery. Each unit is modelled using established design equations and assumptions as presented in various chemical engineering process design handbooks.
Figure 1. Illustrative Example Process Flow Diagram.
3. System Boundary and Inventory Analysis
The first step in the methodology is the expansion of the foreground system (the acrylic acid plant) to include up-stream/input processes within the boundaries of the background system. This allows the traditional waste minimization techniques to be extended by considering a more complete description of the environmental impact of the process. Since a constant production rate is used as a design basis, downstream (product use) stages are not accounted for and the study is, therefore, a "cradle-to-gate" assessment. The scope of the life-cycle study considers the following processes associated with the production of acrylic acid:

• Propylene production and delivery
• Steam generation
• Thermal energy generation
• Electricity generation.
For the chosen background processes, emissions inventories are taken from standard literature sources. The mass and energy balance relationships of the various unit operations provide the emissions inventory for the acrylic acid plant (foreground system). Grouping all of these wastes from both the fore- and background systems together into a global waste vector represents the total environmental burden of the entire life cycle. This burden vector typically consists of a large number of polluting species and transformation of the pollutant mass flows into corresponding environmental impacts allows the vector to be aggregated into a more manageable system of lower dimensionality.
4. Impact Assessment - The Eco-Indicator 99 Methodology
Although a number of methods exist for performing the transformation of burdens into impacts, very few account for the temporal and spatial behaviour of pollutants. Instead of considering only the point-source conditions of each emission, it is necessary to consider a larger region. One recently developed method that addresses this issue is the Eco-Indicator 99 (Pre Consultants, 2000). A damage-oriented approach is adopted to assess the adverse environmental effects on a European scale according to three main damage categories: Human Health, Ecosystem Quality and Resources. The result is the classification of the impacts according to 12 indicators - some of which are concerns common to most impact assessment methods. From the complete set of 12 indicators, the following 9 were found to be most relevant to chemical process design:

• Damage to Human Health: Carcinogenic, Respiratory Effects (Organic), Respiratory Effects (Inorganic), Climate Change, Ozone Layer Depletion
• Damage to Ecosystem Quality: Ecotoxic Emissions, Acidification and Eutrophication
• Damage to Resources: Extraction of Minerals, Extraction of Fossil Fuels.
Next, the damage indicators are put on a common basis through normalisation and aggregated into the three main categories. Finally, a score of relative importance is assigned to each category to arrive at the single Eco-Indicator 99 value. The developers of the Eco-Indicator 99 method stress that this step of impact category valuation is naturally a subjective exercise that largely depends on cultural perspectives. Three different value systems are, therefore, presented according to social types.
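A minimal Python sketch of this aggregation step follows (illustrative only; the damage values, normalisation references and weights below are hypothetical, since the published Eco-Indicator 99 factors depend on the chosen cultural perspective):

# Illustrative damage scores per functional unit, one per main category.
damages = {"human_health": 1.2e-4, "ecosystem_quality": 3.5e-2, "resources": 8.1}

# Hypothetical normalisation references and weights.
normalisation = {"human_health": 1.5e-2, "ecosystem_quality": 5.1e3, "resources": 8.4e3}
weights = {"human_health": 0.4, "ecosystem_quality": 0.4, "resources": 0.2}

eco_indicator = sum(weights[k] * damages[k] / normalisation[k] for k in damages)
print(f"Eco-Indicator 99 score: {eco_indicator:.3g} points")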
5. Multi-Objective Process Design Model
In the most general case, the environmentally conscious process design problem addressed by the MEIM involves both continuous and discrete decision variables and can be formulated as a multi-objective Mixed Integer Nonlinear Programming (moMINLP) problem. For the illustrative example though, the topology of the production process is known and no discrete decisions are considered. The problem, therefore, reduces to a multi-objective Nonlinear Programming (moNLP) problem:

min_x { f_1(x) = Total Annualised Cost, f_2(x) = Eco-Indicator 99 }
s.t. h(x) = 0
The goal of multi-objective optimisation is to obtain the set of efficient solutions (non-inferior or Pareto optimal solutions). By definition, a point is said to be efficient when it is not possible to move feasibly so as to decrease one objective without increasing at least one other objective. In the MEIM, this set of efficient solutions is obtained by reformulating the original multi-objective optimisation problem as a parametric programming problem. Discretisation of the parameter space into sufficiently small intervals then allows the application of the ε-constraint method (Miettinen, 1999). However, since the ε-constraint method can neither guarantee feasibility nor efficiency, both of these conditions need to be verified once a complete set of solutions has been obtained. An algorithm for detecting efficiency based upon the definition of domination sets is, therefore, included as a final processing step.
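A minimal Python sketch of the ε-constraint sweep follows (an illustration, not the MEIM implementation; f1, f2 and h stand in for the actual cost, Eco-Indicator and process-model functions):

from scipy.optimize import minimize

def pareto_by_eps_constraint(f1, f2, h, x0, eps_grid):
    # Minimise cost f1 subject to the model equations h(x) = 0 while the
    # environmental objective f2 is bounded by the swept parameter eps.
    front = []
    for eps in eps_grid:
        cons = ({"type": "eq", "fun": h},
                {"type": "ineq", "fun": lambda x, e=eps: e - f2(x)})
        res = minimize(f1, x0, constraints=cons)
        if res.success:
            front.append((f1(res.x), f2(res.x)))
    # Dominated points must still be filtered out afterwards, since the
    # method alone guarantees neither feasibility nor efficiency.
    return front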
6. Base Case Results
As the set of efficient solutions highlights (Figure 2), a conflict exists between a design achieving minimum cost and a design achieving minimum environmental impact. It shows that no improvement in the environmental performance can be achieved unless the economic performance is sacrificed and more money is invested. The only way to improve both the economic and environmental performance is to structurally modify the process topology through equipment and/or material substitutions. Owing to the high environmental burden resulting from the operation of the liquid-liquid extractor and its downstream distillation column, substitution of the organic solvent offers an ideal opportunity for modifying the process in search of step-change improvements. The task is, therefore, to identify potential candidates that can be used as substitutes for the existing solvent, di-isopropyl ether (DIPE).
7. Single Solvent Identification - Separation Task Level
A wealth of techniques exists for designing molecules with desired characteristics. Despite the plethora of approaches, most focus on designing materials with the required processing properties without explicitly considering the plant-wide implications. In an attempt to address this limitation, Buxton (2002) proposed that only with an expanded process boundary can the selection of materials lead to consistent cost-optimal and environmentally benign improvements. Within this context, he developed a procedure for designing solvents using process and plant-wide environmental objectives.
Binary variables are used to represent the occurrence of molecular structural groups (e.g. -CH3, -CHO, -OH ...) found in the group contribution correlations. This allows molecules to be generated according to a set of structural and chemical feasibility constraints. In addition, a variety of pure component physical and environmental property prediction equations, non-ideal multi-component vapour-liquid equilibrium equations (UNIFAC), process operational constraints and an aggregated process model form part of the overall procedure. Finally, the solvent identification task is solved as a mixed integer non-linear programming (MINLP) problem (Buxton et al., 1999). Applying the procedure to the illustrative example results in 43 structurally feasible solvents being identified as possible substitutes. 21 of these had to be excluded, though, owing to the existence of an azeotrope in the extract mixture, while another 9 failed to achieve the desired level of extraction. Analysing the 13 remaining feasible candidates, it is interesting to note that the two best performing solvents, diethyl ether and methyl isopropyl ether, are structurally very similar to the base case solvent, di-isopropyl ether. For illustrative purposes, the solvent with the fourth best performance during the identification task, n-propyl acetate (NPA), is taken into the expanded process boundary to verify its behaviour in terms of the multiple performance criteria.
Figure 2. Base Case Efficient Set of Solutions (DIPE as Solvent).
8. Plant-Wide Verification - Step-Change Improvements
Substituting the existing solvent with one of the potential candidates, NPA, and re-solving the multi-objective process design problem results in a shift in the set of efficient solutions, as shown in Figure 3. Changing the solvent used in the liquid-liquid extractor from DIPE to NPA can potentially result in an optimal plant-wide design requiring 14.8% less in total annual cost while simultaneously achieving a 9.3% reduction in environmental impact.
9. Conclusions
A multi-objective optimisation methodology is presented for the design of a chemical plant with consideration being given not only to the business incentives, but also to the environmental concerns. Application to an illustrative example showed how the substitution of alternative materials can potentially shift the set of trade-off solutions, resulting in step-change improvements in both the economic and life cycle environmental performance.
[Plot omitted: total annualised cost versus f2 = Eco-Indicator 99 [Points/hr], showing the base case and NPA Pareto curves.]

Figure 3. Shifting the Pareto Curve through Solvent Substitution.
10. References
BS EN ISO, 2000, ISO 14040, British Standards Institution, London.
Buxton, A., 2002, Solvent Blend and Reaction Route Design for Environmental Impact Minimization, PhD Thesis, Imperial College, London.
Buxton, A., Livingston, A.G. and Pistikopoulos, E.N., 1999, AIChE J., 45(4), 817-843.
Cano-Ruiz, J.A. and McRae, G.J., 1998, Annu. Rev. Energy Environ., 23, 499-536.
Miettinen, K.M., 1999, Nonlinear Multiobjective Optimization, Kluwer, Netherlands.
Pistikopoulos, E.N. and Stefanis, S.K., 1998, Comp. & Chem. Eng., 22(6), 717-733.
Pistikopoulos, E.N., Stefanis, S.K. and Livingston, A.G., 1994, AIChE Symposium Series, 90(303), 139-150.
Pre Consultants, 2000, The Eco-Indicator 99, 2nd ed., Amersfoort, Netherlands.
11. Acknowledgements The authors are grateful for the financial and administrative support of the British Council, the Commonwealth Scholarship Commission and BP p.l.c.
Genetic Algorithms as an Optimisation Tool for the Rotary Kiln Incineration Process
Inglez de Souza, E.T.; Maciel Filho, R.; Victorino, I.R.S.
State University of Campinas - Chemical Engineering College, Campinas, SP, Brazil
e-mail: [email protected]
Abstract
Since the incineration process is of great importance in the chemical industries, a number of developments have been published. This work can be categorised in two different aspects: control strategy and steady-state optimisation. The former acts after the process parameters have been properly set, with the mission of keeping the variables around a set point. On the other hand, steady-state optimisation seeks the parameters that are used in control strategies. In many real processes, especially with rotary kiln equipment, the above mentioned considerations do not work in harmony. In this gap the present paper directs its efforts towards achieving process integration in real time, at least at a simple level, amalgamating steady-state optimisation and a control strategy to enhance process performance. The main tool used as the optimiser is a genetic algorithm, and the control strategy has been formulated with the same concept as predictive control.
1. Introduction
The main goal of this work is to offer new options for enhancing the performance of such processes. One seeks to optimise temperature profiles along the combustion chamber according to pre-established conditions. Discrete internal and outlet flow positions have been computed in the objective function for further optimisation, the results of which were used with an on-line optimiser and in a predictive control strategy. The incineration process consists of a rotating cylinder modelled by a 2D deterministic model, simulating heat transfer, including radiation (Sparrow and Cess, 1970), mass transfer, and species generation and consumption due to combustion (Inglez de Souza, 2000; Tomas, 1998). Since it has not been easy to use the entire deterministic model in association with a genetic algorithm because of its expensive computation time, a factorial design was proposed. The referred statistical tool has been applied to obtain a reduced model for the optimisation procedure. Temperature has been calculated with a 2^4 factorial design, and optimisation is based on a pre-determined spatial profile along the central axis of the kiln. In this study the phenomenological model operates as if it were the real plant, while the statistically derived one acts with the on-line genetic algorithm optimiser to produce the best profile, suitably setting the inlet flows. Elaborated with the same approach, a predictive control strategy has been formulated. The central idea remains identical: an analytical expression obtained with the above-mentioned technique foresees the set points. The control structure receives information from the process and sends it to the controller, which manipulates the operational parameters in order to minimise the deviation of the controlled output variable from the set point. Not only the optimiser but also the control strategy has been optimised with a genetic algorithm model, which is described below. Fundamental theory about factorial design can be found in Bruns et al. (1995). Based on genetic and evolutionary principles, GAs work by repeatedly modifying a population of artificial structures (chromosomes) through the application of selection, crossover, and mutation operators. The evaluation of the optimisation happens with the adaptation (fitness) function, which is represented by the objective function of the problem in study (involving the mathematical model of the analysed system), and determines the process performance.
2. Methods
2.1. Factorial design
As noted above, a factorial design was applied. The deterministic mathematical model has been used as if it were the real plant. The basic idea is to obtain an analytical expression, statistically representative of the plant, to foresee the plant outputs. Having chosen the desired output with its respective input influences, a cluster of 16 data points has been collected according to the following scheme.

Table 1. Factorial design for 4 input variables - 2^4 scheme.

Sample      1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16
Variable 1  -  +  -  +  -  +  -  +  -  +  -  +  -  +  -  +
Variable 2  -  -  +  +  -  -  +  +  -  -  +  +  -  -  +  +
Variable 3  -  -  -  -  +  +  +  +  -  -  -  -  +  +  +  +
Variable 4  -  -  -  -  -  -  -  -  +  +  +  +  +  +  +  +
The symbol "+" means that the output is calculated with the studied variable at its upper reference, while "-" is the opposite: the variable is set at its lower reference. The lower and upper limits are fixed according to the region of interest, because this range of values determines the region in which it is possible to apply the model. Equation (1) represents the factorial design mathematical model.
y = ȳ + a1·x1 + a2·x2 + a3·x3 + a4·x4 + a5·x1·x2 + a6·x1·x3 + ... + a_n1·x1·x2·x3 + a_n2·x1·x2·x4 + ... + a_n3·x1·x2·x3·x4   (1)

y - Output variable. ȳ - Mean output. x1, x2, x3, x4 - Input variables.

In the above equation, all variable influences and interactions among them have been considered. These parameters plus the mean value are calculated with the following equations. In order to make the further procedure clear, an extra column has to be added to Table 1, containing the "y" values. For calculation purposes, and from now on, these values are labelled (y1, y2, ..., y16).

ȳ = (1/16) Σ_{i=1}^{16} y_i   (2)
The remaining parameters (a1, a2, ..., a_n3) are calculated as functions of their influences on the output result. If one is computing the influence of variable 1, the signs of the first column are used in their entirety; likewise, for variables 2, 3 and 4 the columns 2, 3 and 4 are fully adopted. Equations (3) and (4) elucidate this.

a_j = (1/8) Σ_{i=1}^{16} sign(x_j)_i · y_i ,   j = 1, ..., n3   (3)

sign(x) = +1 if "+",   sign(x) = −1 if "−"   (4)
If the parameter is being calculated for terms with more than one variable, then a sign analysis must be carried out. A "+" with "+" combination, as well as "−" with "−", results in a "+" overall sign. Opposite signs, "+" with "−", result in a "−" overall sign. An example calculation for a1 leads to:
a1 = (−y1 + y2 − y3 + y4 − y5 + y6 − ... − y15 + y16) / 8   (5)
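A minimal Python sketch of eqs. (1)-(5) follows (an illustration, not the authors' code; y holds the 16 collected outputs in the standard run order of Table 1):

import numpy as np
from itertools import product

def factorial_effects(y):
    # y: the 16 collected outputs, in the standard run order of Table 1.
    y = np.asarray(y, dtype=float)
    levels = np.array(list(product([-1, 1], repeat=4)))[:, ::-1]
    # Column 0 now alternates -,+,-,+,... so effect([0]) reproduces eq. (5).
    y_mean = y.mean()                               # eq. (2)
    def effect(cols):
        signs = np.prod(levels[:, cols], axis=1)    # sign rule of eq. (4)
        return float(signs @ y) / 8.0               # eq. (3)
    a1 = effect([0])            # main effect of variable 1, cf. eq. (5)
    a12 = effect([0, 1])        # x1*x2 interaction
    a1234 = effect([0, 1, 2, 3])
    return y_mean, a1, a12, a1234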
2.2. Genetic algorithms
The results, which will be presented further on, have been optimised with a binary-coded genetic algorithm. This kind of representation codifies the variables as a sequence of "0"s and "1"s, named a chromosome, that expresses a real value. A population, over which the
chromosomes are distributed, evolves towards an optimum established by the process configuration. A binary chromosome is presented in equation (6).

(0 0 1 1 0 0 ... 0 0 1)   (6)
First of all, a selection algorithm is necessary to proceed with the genetic algorithm. In this work a tournament has been used. This technique compares, in a random fashion, the chromosomes until the best-fitted organism has been selected. The other genetic operators, besides selection, are crossover and mutation. A crossover operation, as performed in this work, interchanges genetic information, which is "crossed over" at a random point of the chromosome. This genetic operation is executed according to equation (7).

v1 = (0011001000 | 11001)    v1' = (0011001000 | 01011)
v2 = (1111001000 | 01011)    v2' = (1111001000 | 11001)   (7)
The last operation, mutation, is performed in the same way as crossover. Along the chromosome, a gene (binary digit) is selected in proportion to the mutation probability. The gene then has its value mutated, "0" to "1", or vice versa.
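A minimal Python sketch of these two operators follows (illustrative only; the chromosomes of eq. (7) are used as the example):

import random

def crossover(parent1, parent2):
    # Single random cut point; the chromosome tails are swapped, as in eq. (7).
    cut = random.randint(1, len(parent1) - 1)
    return parent1[:cut] + parent2[cut:], parent2[:cut] + parent1[cut:]

def mutate(chromosome, p_mut=0.01):
    # Each gene flips with the mutation probability (1% in this work).
    return [1 - g if random.random() < p_mut else g for g in chromosome]

v1 = [0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1]
v2 = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1]
c1, c2 = crossover(v1, v2)
c1 = mutate(c1)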
3. Results
The operational parameters used to evaluate the proposed technique are depicted in Table 2.

Table 2. Operational parameters used in the simulations.
Zones number: 10
Conductivity: 0.61 W/m/K (insulation)
Solid residence time: 1,800 s
Internal and external diameters: 1.40/1.25 m (primary), 1.40/1.25 m (secondary)
Kiln slope: 2.0 degrees
Chamber distance: 12.0 m (primary)
Rotational velocity: 0.033 rpm
Inlet flow rates (kg/s): residue 0.20, primary air 0.85, secondary air 3.65, fuel 0.10
Inlet temperatures (K): residue 303, primary air 333, secondary air 333, fuel 333
Reference and ambient temperatures: 298 K
The objective functions used to generate the results are quite simple in essence. Every prescribed internal profile compared with the factorial design prediction model gives a deviation error. The sum of these deviations completes the objective function.

e = y_m − y_p   (8)

e - Deviation error. y_m - Output from the fundamental mathematical model. y_p - Predicted output from the factorial design model.

In this case, the outputs are represented by the combustion chamber temperature profile. The number of discrete selected points is three. Thus, the objective function is represented as follows:

f = Σ_{i=1}^{3} (T_i^ref − T_i^p) / T_i^ref   (9)
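A minimal Python sketch of eq. (9) under the reconstruction above (the temperature values are purely illustrative):

def objective(T_ref, T_pred):
    # Eq. (9): deviations between the desired profile and the reduced-model
    # prediction at the three discrete points, normalised by the reference
    # values (an absolute or squared deviation may equally well be used).
    return sum((tr - tp) / tr for tr, tp in zip(T_ref, T_pred))

f = objective([1200.0, 1150.0, 1100.0], [1185.0, 1160.0, 1095.0])  # illustrative K values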
T^ref - Desired spatial profile.

The final part of this work is devoted to a predictive control strategy. It makes use of a factorial design equation in order to predict the exit gas temperature. Then, after detecting a perturbation, the genetic algorithm is called to minimise its effect by calculating the new manipulated variable value. In Figure 1 this strategy is presented.

[Figure 1 scheme: fuel, air and residue feeds enter the rotary kiln; a thermocouple on the outlet gas stream feeds the genetic algorithm controller (GA.C), which adjusts the feed.]
Figure 1. Control Strategy Scheme.

The analysed input variables are, respectively: primary air, secondary air, fuel and solid residue. Their non-optimised initial values are: 0.85 kg/s, 2.65 kg/s, 0.10 kg/s and 0.20 kg/s. The upper and lower bounds are +10% and -10%. After the genetic algorithm computations a new operational parameter profile has been achieved, which is shown in Table 3.
Table 3. Optimised operational parameters.
Parameter              New value
Primary air (kg/s)     0.77
Secondary air (kg/s)   2.385
Fuel (kg/s)            0.0925
Solid residue (kg/s)   0.183

In control situations, equation (9) is reduced to just one discrete point, the controlled variable. In this case the objective function is set by the perturbation, as well as by the non-controlled variables, and the genetic algorithm predictive control is performed. Table 4 presents a series of simulations for some common perturbations and the new calculated values of the manipulated variable. The reference values used for these calculations are those presented prior to the steady-state optimisation. The controlled variable is the temperature, while the manipulated one is the fuel flow.

Table 4. Updated values of the manipulated variable after step perturbations.
Variable        Step order (%)   Numerical processing time (s)   Updated manipulated variable (kg/s)
Solid residue   +10              4                               0.0968
Secondary air   +10              4                               0.102
Secondary air   -5               4                               0.0943
Solid residue   -10              4                               0.0970
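A minimal Python sketch of one such control step follows (illustrative only; predict_T stands in for the factorial-design reduced model, and a crude random search stands in for the GA engine):

import random

def ga_minimise(loss, bounds, n_iter=200):
    # Stand-in for the GA engine: a crude random search over the bounds,
    # used here only to make the sketch executable.
    lo, hi = bounds
    return min((lo + random.random() * (hi - lo) for _ in range(n_iter)), key=loss)

def control_step(predict_T, set_point, perturbation, bounds):
    # predict_T is the factorial-design reduced model (assumed given): it
    # returns the exit gas temperature for a candidate fuel flow under the
    # detected perturbation.
    def loss(fuel):
        return abs(predict_T(fuel, perturbation) - set_point)
    return ga_minimise(loss, bounds)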
4. Conclusions
The results have shown that the proposed procedure, coupling a factorial-design reduced model with the genetic algorithm as an optimisation tool, is very suitable for dealing with real-time process integration. It has been possible to drive the process to the idealised conditions, never dismissing actual and practical constraints, while keeping the process under tight control.
5. References
Bruns et al., 1995, Planejamento e Otimização de Experimentos, UNICAMP, Campinas.
Inglez de Souza, E.T., 2000, Mathematical Modelling, Control and Rotary Kiln Process Incineration with Post Combustor, M.Sc. Dissertation, Chemical Engineering College, UNICAMP, Campinas.
Michalewicz, Z., 1996, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag.
Michalewicz, Z. and Schoenauer, M., 1996, Evolutionary Algorithms for Constrained Parameter Optimisation Problems, Evolutionary Computation, 4(1), 1-32.
Tomas, E., 1998, Mathematical modelling and simulation of a rotary kiln incinerator for hazardous wastes in steady and non-steady state, Ph.D. Thesis, Chemical Engineering College, UNICAMP, Campinas.
Sparrow, E.M. and Cess, R.D., 1970, Radiation Heat Transfer, Brooks/Cole.
Dynamic Simulation of an Ammonia Synthesis Reactor
N. Kasiri, A.R. Hosseini, M. Moghadam
CAPE Lab, Chem. Eng. Dept., Iran Univ. of Sci. & Tech., Narmak, Tehran, Iran, 16844
[email protected]
Abstract
In the ammonia synthesis process, as in most other processes, the main processing unit, and the one that attracts most attention from a control point of view, is the reactor. In the ammonia production plant there exist two reactors: the methanation reactor and the synthesis reactor. The methanation reactor is of much less importance, as little reaction takes place in it, while the synthesis reactor is of utmost importance. In this paper a four-bed ammonia synthesis catalytic reactor is first simulated in a dynamic environment. The simulation is then used to analyze the effect of sudden changes in feed pressure and variations in the feed distribution over the different beds on process parameters such as temperature, pressure, flow rate and concentration throughout the process.
1. Introduction
Automatic and computer aided control of process plants has been in practice for many years. This is due to the positive effects of computer control on a production line from a production engineer's point of view. It enables on-line, fast and precise control of processes, and it is valued even more where only a fine controlling practice can be effective and possible. Automatic control of processes has been advancing with the technologies being used. The main tool on which computer aided process control is based is the dynamic model of the plant. A controlling practice can only be as effective in application as the model used for prediction is precise and exact. With a dynamic model, the behavior of a process in time is predicted as a result of a wanted or an unwanted change in a process parameter. A dynamic process model may also be used for other purposes, such as the design of start-up and shut-down procedures. With all this in mind, the ammonia synthesis reactor has been simulated here and the simulation used for an in-depth analysis of the process.
2. Kinetics of the Ammonia Synthesis Reaction
The ammonia synthesis reaction is an equilibrium reaction between hydrogen, nitrogen and ammonia, in the presence of magnetic iron oxide. The conversion is a function of pressure, temperature and the reactant ratio. Raising the reaction pressure increases the equilibrium conversion, thereby increasing the heat of reaction produced inside the catalyst bed. This causes an increase in temperature, resulting in a rise in space velocity. This results in a reduction in the reactor volume required compared to a reactor operating at lower pressure. The equilibrium reaction is defined by:

(1/2) N2 + (3/2) H2 ⇌ NH3,   ΔH(130°C) = −11.0 kcal,   ΔH(290°C) = −13.3 kcal   (1)
The reaction is exothermic and the equilibrium constant is given by:

Kp = p_NH3 / (p_N2^(1/2) × p_H2^(3/2))   (2)
To describe the kinetics of this reaction in the gas phase, the equation type is selected to be equilibrium and reversible, and its stoichiometry is specified using standard equations. The equilibrium and rate constants as functions of temperature are given by:

Ln K = 2.6899 − 5.5192×10^(−5) T + 2001.6/T − 2.6911 Ln T,   k = 2.45×10^(...) exp(−1.635×10^(...)/RT)   (3)

where K is the equilibrium constant and k is the rate constant.
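A minimal Python sketch of eq. (2) follows (the partial pressures used are purely illustrative):

import math

def kp(p_nh3, p_n2, p_h2):
    # Eq. (2): the exponents 1/2 and 3/2 follow from the
    # 1/2 N2 + 3/2 H2 <=> NH3 stoichiometry.
    return p_nh3 / (math.sqrt(p_n2) * p_h2 ** 1.5)

print(kp(0.15, 0.21, 0.64))  # illustrative partial pressures (bar)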
3. Dynamic Simulation of the Ammonia Synthesis Reactor
As stated before, a four-bed catalytic fixed bed ammonia synthesis reactor has been simulated here in a process simulator environment. The choice of four fixed beds is to ensure the simulation is capable of providing an analysis tool for as complicated a reactor as exists in this process. The simulation is initially developed in a steady-state environment. To do so, the SRK thermodynamic package is used. The equations and the related constants are defined as expressed before. The instrumentation and control devices and the related parameters were then designed and installed on each of the streams and equipment items. The simulation developed thus far is transferred into the dynamic environment. Fig. 1 demonstrates how the four-bed reactor was simulated. As observed, the main feed gas is divided into two streams, one passing through the control valve (VLV-100) and the other bypassing it. The two join and enter the shell side of the preheater exchanger, entering the top of bed 1 after being heated up. The rest of the feed is sub-divided into four quenching streams. Each of the quench gas streams passes through a separate control valve and is fed to the top of each of the four catalytic beds to control process operation. The final product exiting bed 4 is passed through the tube side of the feed preheater.
Fig. 1: The reactor mechanism simulated in the simulation environment.
4. Results and Discussion
Using the dynamic model developed, some cases of interest are studied. They are reviewed below.

4.1. Case 1: The effect of sudden changes in feed pressure on process parameters
As observed in Fig. 2, a sudden increase in feed pressure of 5 bar initially results in an increase in the feed flow rate passing through the 5 control valves. The feed flow rate increase through each of the valves is proportional to the initial flow passing through each under steady-state conditions. The controllers close the valves gradually to control the flow rate passing through, resulting in the gradual reduction in the flow rates demonstrated in Fig. 2. A trend opposite to that described above is demonstrated in Fig. 3 for the case of reducing the feed pressure by 5 bar. The changes in feed pressure affect the reactor pressure, and as the reaction takes place in the gas phase, the ammonia produced is affected. The 5 bar increase causes an increase in the reactor pressure, raising the reaction rate and therefore the ammonia production rate, as demonstrated in Fig. 4. The controlling effect of the valves again reduces the molar ammonia flow rate. The opposite effect is observed in Fig. 5 for the case of a sudden reduction in feed pressure of 5 bar. The smaller change shown in Fig. 5 compared to Fig. 4 indicates that the reaction is less affected by pressure reductions than by pressure increases. One of the process parameters of importance in controlling the reactor operation is the temperature of the streams leaving each of the four catalytic beds. In this simulation the exit temperatures of the four beds are recorded and the plots are shown in Fig. 6 and Fig. 7. The temperature variations are very slight and therefore not of much significance in their effect on the overall operation. As expected, the bed exit temperatures show an increase for the 5 bar pressure increase in the feed and a reduction for the reverse case. The larger temperature changes shown at the bed 1 exit are due to the fact that a larger portion of the reaction takes place in this bed compared to the other 3 beds.
[Figures 2 and 3: mass flow rates of the feed streams St. 2, St. 5, St. 8, St. 11 and St. 14 plotted against time (hours).]
Fig. 2: The effect of a 5 bar pressure increase on the mass flow rate of the feed entering the reactor beds.
Fig. 3: The effect of a 5 bar pressure decrease on the mass flow rate of the feed entering the reactor beds.
[Figures 4-7: ammonia molar flow rates and bed exit stream temperatures plotted against time (hours).]
Fig. 4: The effect of a 5 bar pressure increase on ammonia production.
Fig. 5: The effect of a 5 bar pressure decrease on ammonia production.
Fig. 6: The slight effect of a 5 bar pressure increase on the temperature of the streams leaving the 4 beds.
Fig. 7: The slight effect of a 5 bar pressure decrease on the temperature of the streams leaving the 4 beds.
4.2. Case 2: The effect of the feed distribution over the four beds on process parameters
Since an unlimited number of variations can be conceived when different percentages of feed distribution over each of the four catalyst beds are considered, a change of feed to one bed may or may not be compensated for by variations of the flow rates to the other 3 beds. Two such cases studied by the simulation are described below.
4.2.1. Case 2-1: Reduction of feed rate to bed 1 and a proportional compensating increase of feed rate to the other 3 beds
In this case study the feed rate fed to bed 1 (the preheated flow) is reduced, and the exact amount of the reduction is compensated by increasing the feed rate to the other three beds. The percentage distribution is given in table 1.

Table 1: Feed distribution variation over the 4 catalytic beds for case 2-1 (mass distribution of feed, %).

                 Bed 1 (preheated)  Bed 1 (no preheat)  Bed 2 (no preheat)  Bed 3 (no preheat)  Bed 4 (no preheat)
Before change    49.52              16.82               12.7                13.68               7.18
After change     45.15              17.87               14.06               14.46               8.435
Fig. 8 demonstrates the feed variations. The variations are carried out using the valve openings, and because of this an initial overshoot is observed in the increase of stream 2 and the decreases of the other four streams. Changes in the amount of valve opening result in flow rate variations, and therefore in pressure changes that in turn affect the flow rates until the intended settings are achieved. Fig. 9 shows the temperature variations caused as a result. This is partially due to the lower overall temperature of the main feed entering bed 1, as well as the larger quenching effect of the non-preheated feed entering the other beds. The overall reduction in reactor temperature, as expected, causes better reaction propagation and therefore more ammonia produced, as demonstrated in fig. 10.
[Figures 8 and 9: feed mass flow rates and bed exit stream temperatures (streams St. 6, St. 10, St. 12, St. 15) plotted against time (hours).]
Fig. 8: Variation of the feed entering the 4 beds over time according to case 2-1.
Fig. 9: The variation of the temperature of the streams leaving the 4 beds according to case 2-1.
Fig. 10: Variation of ammonia produced over time as a result of the changes in case 2-1.
4.2.2. Case 2-2: Increase in feed rate to bed 3 and a proportional, compensating decrease in the other 4 feed streams
The feed rate to bed 3 is drastically increased from 13.7% of the total feed to 21.85%. The increase is made up for by reductions in the other streams. The amounts of the changes are given in table 2. The reduced feed rates to beds 1 and 2 result in an increase in the temperature of the streams leaving the two beds. This is shown in fig. 11. The reverse is observed in the same plot for the temperature of bed 3, due to the increase of the feed
rate to this bed. The temperature of the stream leaving bed 4 is affected in two opposing ways: the reduction in the temperature of the stream leaving bed 3 causes a reduction in the temperature of the bed 4 product, as opposed to the reduction of the fresh feed entering bed 4, which causes an increase in the temperature of the bed 4 product. The net effect is a reduction in the bed 4 product temperature. This is demonstrated in fig. 11. As the temperature variations are not significant, the production rates do not vary significantly; only a slight increase in overall ammonia production is observed, as shown in fig. 12.

Table 2: Feed distribution variation over the 4 catalytic beds for case 2-2 (mass distribution of feed, %).

                 Bed 1 (preheated)  Bed 1 (no preheat)  Bed 2 (no preheat)  Bed 3 (no preheat)  Bed 4 (no preheat)
Before change    49.53              16.89               12.7                13.7                7.16
After change     45.2               15.67               11.24               21.85               6.02
Fig. 11: The variation of the temperature of the streams leaving the 4 beds according to case 2-2.
Fig. 12: Variation of ammonia produced over time as a result of the changes in case 2-2.
Reaction Modeling Suite: A Rational, Intelligent and Automated Framework for Modeling Surface Reactions and Catalyst Design
Santhoji Katare, James Caruthers, W. Nicholas Delgass, Venkat Venkatasubramanian
Center for Integrated Materials-to-Product Design (CIMProD), School of Chemical Engineering, Purdue University, West Lafayette, IN, USA - 47907
email: [email protected]
Abstract The continuing development of high throughput experiments (HTE) in the field of catalysis has dramatically increased the amount of data that can be collected in relatively short periods of time. The key questions in the current scenario are (1) even when HTE can afford "Edisonian" discovery, how can the increasing amounts of data be converted to knowledge that will guide the next search in the vast design space that encompasses catalytic materials? and (2) how can HTE data lead to fundamental understanding? In order to address these questions, we propose a catalyst design architecture that involves (1) a forward model to predict the performance of a given material structure and (2) a genetic algorithm based inverse problem that uses the forward model to search the descriptor space for a material that meets specific design objectives. We have developed a rational, automated knowledge extraction (KE) engine to aid the forward model building process. The Reaction Modeling Suite (RMS; U.S. patent pending) is a set of tools based on artificial intelligence and optimization techniques that enables the expert to initiate the kinetic modeling sequence in a simple reaction chemistry language. The software then interprets this information into a reaction sequence, writes the appropriate equations, optimizes the model parameters while keeping them in physically and chemically allowed bounds, and does statistical analysis of the results. These steps have been demonstrated for propane aromatization on HZSM-5. A successful forward model has considerable value in its own right, but its power is dramatically leveraged by the inverse model, which forecasts successful specific catalyst formulations. This potential to truly design catalysts is the return on the investment in model building.
1. Introduction
Materials Design can be defined as the rational framework, with associated tools, for determining the optimum material and/or formulation to meet a given set of design objectives. To extract basic understanding from HTE data and to search through the large space of materials, we propose a Materials Design framework that has two
components: (i) a forward model that relates the chemical composition and/or high level descriptors of the composition to the performance of the material in the application of interest and (ii) an inverse model that relates the performance to the desired chemical composition or formulation. Although solution of the inverse problem is often the primary technological objective, rational vs. Edisonian design methods require the availability of good, robust forward models, and the development of good forward models will require in-depth knowledge of the material system of interest. The objective of this paper is to examine the development of computer-based systems for the particular case of Catalyst Design - systems that can begin to take full advantage of the rate of data generation offered by High Throughput Experiments (HTE). We will first present a brief overview of KE and review the state-of-the-art. Subsequently, we will describe our work-to-date in developing a computer-based KE engine for catalyst development, followed by an example. Finally, we will provide a brief summary.
[Figure 1 schematic: material composition and operating conditions are mapped by a predictive model to material performance (forward problem); design works back from the target performance to the composition (inverse problem).]
Figure 1. Schematic of the forward and inverse problems in Materials Design.
2. Overview of Knowledge Extraction (KE)
In order for quantitative HTE to reach its full potential, KE must occur at a rate that is comparable to the ever increasing rate of data generation. The key concept is that the computer-based KE engine should approach data and models in a manner similar to that used by the human expert. It is important to realize that data is information, not knowledge, and while multi-color and 3D visualization may allow one to better observe the data, the real objective is to capture the knowledge content of the data in a form that allows for continuous accumulation of knowledge. It is our hypothesis that a computer-based KE engine working in concert with a human expert is the combination that is needed. In the proposed KE framework, both HTE data and chemistry rules are the starting point: models are automatically generated from rules, the parameters of the model are optimized by comparing the predictions of the model with the data, and finally the features of the predictions and data are compared. The most comprehensive approach to date for catalytic performance is the MicroKinetic (MK) approach of Dumesic and coworkers (1993). MK analysis is a systematic approach for heterogeneous catalysis that uses a wide range of experimental and theoretical
information to test various model hypotheses. It has provided considerable insight into the fundamental behavior of catalytic systems; however, in its current form Knowledge Extraction via MK analysis is inexorably tied to a level of human intervention that is incommensurate with HTE and the increasing complexity of fundamental models. Efforts on computer-based generation of large scale reaction mechanisms include the representation of reaction species and pathways using matrices (Green et al., 2001), structure-oriented lumping (Quann and Jaffe, 1996), and optimization and statistical analysis (Tomlin, 1997). Of particular interest are the work of Mavrovouniotis (1998) in developing a compiler for generating chemical reactions from chemical rules, a graph theoretic method to identify candidate mechanisms (Fan et al., 2002) and recent work by Koza (2001) using genetic algorithms to discover potential reaction pathways. Artificial Neural Networks have been developed to explain catalytic performance on the basis of structure and other descriptors (Hattori, 1995). Recently Baerns and coworkers (Wolf and Baerns, 2000) demonstrated an evolutionary approach for combinatorial selection and optimization of catalytic materials using genetic algorithms. The objectives of the methods described in this paragraph are to model data and find new catalytic materials; however, these AI approaches do not directly address how an improved understanding is to be developed.
3. Reaction Modeling Suite (RMS)
We will now discuss the details of the RMS - a set of tools for generating model predictions of kinetic performance from chemistry rules and experimental data. As shown in Figure 2, this set of operations includes (i) translation of chemistry/catalyst rules such as basic reaction pathways, postulated groupings of species with similar reactivities, Polanyi relationships, etc., written in near-English language form, to a computer compatible syntax, (ii) generation of the appropriate algebraic and/or differential equations (DAEs) consistent with the rules, (iii) solution of the DAEs with minimal user intervention, including nonlinear optimization to determine the sets of model parameters that can best fit the data either via least-squares or in terms of features, and (iv) statistical analysis of the various fits. Details about KE are available elsewhere (Caruthers et al., 2002). The compiler translates the pseudo-English language chemistry rules into a reaction network, which is then parsed into a mathematical model using the law of mass action kinetics. The resultant model typically is nonlinear and the data are often incomplete or noisy. As a result, there may be many parameter sets for a single model and/or multiple sets of rules that can describe the data. Consequently, the analysis process must be extremely fast, robust and inclusive if it is to keep up with the speed of HTE. We have recently developed a genetic algorithm (GA)-based optimization procedure (Katare et al., 2002) that is able to locate large numbers of local minima that are nearly indistinguishable from the global minimum at a rate that is at least two orders of magnitude faster than alternative search procedures (Esposito and Floudas, 2000). This allows a more complete evaluation of all the parameter sets consistent with a given data set, rather than just choosing between the first several parameter sets that fit the data.
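The compile-and-generate step can be made concrete with a minimal sketch; this is illustrative only — the rule syntax, species names and rate constants below are hypothetical, not the patented RMS implementation:

```python
import numpy as np

# Minimal sketch of a "chemistry compiler": parse textual reaction rules into
# a mass-action ODE right-hand side. Species and rate constants are made up.
reactions = [
    ("A + B -> C", 1.5),      # (rule, rate constant), hypothetical values
    ("C -> A + B", 0.3),
]
species = ["A", "B", "C"]
idx = {s: i for i, s in enumerate(species)}

def compile_network(rules):
    net = []
    for rule, k in rules:
        lhs, rhs_side = [side.split("+") for side in rule.split("->")]
        net.append(([idx[s.strip()] for s in lhs],
                    [idx[s.strip()] for s in rhs_side], k))
    return net

def rhs(c, net):
    dc = np.zeros_like(c)
    for react, prod, k in net:
        rate = k * np.prod(c[react])       # law of mass action
        for i in react:
            dc[i] -= rate
        for i in prod:
            dc[i] += rate
    return dc

net = compile_network(reactions)
print(rhs(np.array([1.0, 1.0, 0.0]), net))   # -> [-1.5, -1.5, 1.5]
```

The resulting rate function could then be handed to a stiff integrator and wrapped in a parameter-estimation loop (e.g. a GA over the rate constants), which is the role the Parameter Optimizer plays in Figure 2.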
[Figure 2 schematic: chemistry rules and performance curves (e.g. concentration vs. time) feed the Reaction Modeling Suite, whose tools are the Chemistry Compiler, Equation Generator, Parameter Optimizer and Statistical Analyzer.]
Figure 2. Schematic of the various tools in the Reaction Modeling Suite.
Application of the RMS is illustrated for the problem of propane aromatization on a zeolite catalyst, H-ZSM-5. A number of kinetic models have been proposed for aromatization of alkanes over H-ZSM-5 (Lukyanov, 1995); however, a model with predictive capabilities continues to remain a challenge. Our kinetic model is based on a reaction scheme involving adsorption, desorption, protolysis, dehydrogenation, hydride transfer, β-scission, oligomerization, aromatization and alkylation reactions. The proposed set of reaction 'rules' generates a model with 31 gas phase species, 29 surface species, and 302 reaction steps. To reduce the number of parameters involved, the reactions were categorized into 34 families, and all reactions in a particular family were assumed to have the same rate constant or a set of rate constants that are a specific function of the carbon number of the species. Each reaction family is parameterized in terms of either a rate constant or an equilibrium constant, and the carbon number dependence within a family is considered in terms of the Polanyi relation. Transition state theory has been used to estimate bounds on the preexponential factors, and literature values have been used to bound the activation energies and provide interrelationships between various reaction families, reducing the number of parameters to 14. The proposed model assumes that the reactions of neutral surface alkoxy species take place through carbenium/carbonium ion transition states. Details of the model will be communicated in a future publication. The results of the GA-based hybrid search methodology described above are presented in Figure 3, where the data are taken from Lukyanov et al. (1995). We also located 32 additional local minima that are almost as good with respect to the sum of squared error criterion. It is important to know if there are multiple local minima, each with its own parameter set, since different minima can have different physical implications. The key motivation for developing a forward kinetic model is to use it for searching for materials in the context of the inverse problem as described in Figure 1. The current model was used with a genetic algorithm based inverse solution strategy to search for a catalyst material that maximizes the aromatics yield. Figure 4 shows the model parameters for the original catalyst with 7% aromatics yield (first bar) used to validate the model and those of the optimum catalyst, which corresponds to a 68% aromatics yield (second bar). The details about the model parameters are available elsewhere (Caruthers et al., 2002). The resultant catalyst has significantly larger values of the rate constants for oligomerization and dehydrogenation. The former is responsible for reactions that increase the length of the olefin chains which eventually go to aromatics, and the latter enables the formation of olefins from paraffins, thus feeding the process of aromatization. The crucial link of mapping the zeolite structural descriptors such as the Si/Al ratio and the catalyst pore size and shape to the reaction rate constants would enable this framework to search for these descriptors and hence truly design catalysts.
Figure 3. Results from the kinetic model (x-axis: space time ×10^ [hr]). The dots show the data from Lukyanov et al. (1995) and the solid lines show the model predictions.
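For reference, the Polanyi (Evans-Polanyi) relation used for the carbon-number dependence within a family can be written generically as (our notation, not necessarily the authors' exact parameterization):

$$E_{a,i} = E_0 + \alpha\,\Delta H_{r,i}, \qquad k_i = A\,\exp\!\left(-\frac{E_{a,i}}{R\,T}\right),$$

so that each family carries only $A$, $E_0$ and $\alpha$ as parameters, while the species-specific heat of reaction $\Delta H_{r,i}$ varies with the carbon number.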
Figure 4. Optimum catalyst found by the genetic algorithm based inverse search; the x-axis spans model parameters 1-13.
4. Summary
The availability of huge amounts of data through HTE techniques has opened new avenues for "combinatorial modeling" that enables knowledge extraction from data. Large search spaces of materials cannot be searched without the availability of models for guidance. For this approach to be effective, model building cannot be the limiting step. The Reaction Modeling Suite affords a rational, automated framework that allows the expert to initiate the modeling sequence in a simple reaction chemistry language. The software then interprets this information into a reaction sequence, writes the appropriate equations, optimizes the parameters while keeping them in physically and chemically allowed bounds, and does statistical analysis. These steps have been fully demonstrated in the propane aromatization example. The hybrid algorithm for parameter estimation is of particular note, since it enables a thorough search of the parameter space with a dramatic improvement in speed over other methods. A successful forward model has considerable value, but its power is dramatically leveraged by the inverse model, which forecasts catalyst formulations. This potential to design catalysts is the return on the investment in model building.
5. References
Caruthers, J.M., Lauterbach, J.A., Thomson, K.T., Venkatasubramanian, V., Snively, C.M., Bhan, A., Katare, S. and Oskarsdottir, G., In Understanding Catalysis from a Fundamental Perspective: Past, Present, and Future, Bell, A., M. Che and W.N. Delgass, Eds., Invited Paper for the 40th Anniversary Issue of the J. Catal. (In Press).
Dumesic, J.A., Rudd, D.F., Aparicio, L.M., Rekoske, J.E. and Trevino, A.A., 1993, The Microkinetics of Heterogeneous Catalysis, ACS, Washington DC.
Esposito, W.R. and Floudas, C.A., 2000, Ind. Eng. Chem. Res., 39, 1291.
Fan, L.T., Bertok, B. and Friedler, F., 2002, Comp. and Chem., 26, 3, 265.
Green, W.H., Barton, P.I., Bhattacharjee, B., Matheu, D.M., Schwer, D.A., Song, J., Sumathi, R., Carstensen, H-H., Dean, A.M. and Grenda, J.M., 2001, Ind. Eng. Chem. Res., 40, 5362.
Hattori, T. and Kito, K., 1995, Catal. Today, 23, 4, 347.
Katare, S., Bhan, A., Delgass, W.N., Caruthers, J.M. and Venkatasubramanian, V., AIChE J. (Submitted).
Koza, J.R., Mydlowec, W., Lanza, G., Yu, J. and Keane, M.A., 2000, Reverse Engineering and Automatic Synthesis of Metabolic Pathways from Observed Data Using Genetic Programming, SMI-2000-0851.
Lukyanov, D.B., Gnep, N.S. and Guisnet, M.S., 1995, Ind. Eng. Chem. Res., 34, 516.
Mavrovouniotis, M.L. and Prickett, S.E., 1998, Know. Based Sys., 10, 199.
Quann, R.J. and Jaffe, S.B., 1996, Chem. Engg. Sci., 51, 10, 1615.
Tomlin, A.S., Turanyi, T. and Pilling, M.J., 1997, In Mathematical Tools for the Construction, Investigation and Reduction of Combustion Mechanisms (Pilling, M.J., Ed.), 35, 293.
Wolf, D., Buyevskaya, O.V. and Baerns, M., 2000, App. Catal. A: Gen., 200, 63.
Computer Aided Prediction of Thermal Hazard for Decomposition Processes
Yong Ha Kim, Mi Jung Ryu, Euijin Han, Seong-Pil Kwon* and En Sup Yoon
School of Chemical Engineering, Seoul National University, Seoul, 151-742, Korea
*Institute of Chemical Processes, Seoul National University, Seoul, 151-742, Korea
E-mail: [email protected]
Abstract Peroxide compounds are usually very reactive and flammable. They have caused many catastrophic accidents around the world because of their reactive potential. Conventionally, the risk of such reactive chemicals has been assessed by experiments with precision instruments such as the DSC (differential scanning calorimeter), ARC (accelerating rate calorimeter), etc., but these require substantial cost and effort and involve exposure to danger. To overcome this, a computer aided prediction method using the group contribution method was used in this study. Some essential thermodynamic properties of the chemicals were evaluated by this method, and then the adiabatic temperature rise for each decomposition step of the peroxide compound was obtained, which can be a good index of the hazardousness of the reaction. The results were close to other experimental and simulation data from the references.
1. Introduction
With a view to risk analysis and safety management, it is important to predict the thermal hazards of new materials. Peroxide compounds, one of the hazardous chemical groups, have been the cause of many tremendous accidents in the chemical process industries. The values of the adiabatic temperature rise (ΔT_ad) of each of their side reactions can be good indices of thermal hazard, since they represent the potential of a violent decomposition process. The thermodynamic properties that are the basis for calculating ΔT_ad are conventionally obtained from experiments for correctness (Mannan et al., 2001). But from the hazard identification aspect, these experiments can be hazardous, as well as requiring much time and cost. Computer aided prediction using the group contribution method, therefore, can be a good alternative; it can be readily applied to materials whose properties are unknown. In this study, the computer aided group contribution method was used to obtain values of the enthalpy and Gibbs free energy of the methyl ethyl ketone peroxide (MEKPO) decomposition processes. The feasibility of each decomposition reaction set was studied, and then ΔT_ad for the feasible decomposition processes was obtained with Δ_fH and C_p from the group contribution method.
2. Computer Aided Prediction of Chemical Hazards
Reactive chemicals can release large and dangerous amounts of energy under certain conditions. They can lead to unwanted side reactions that differ from the routine mainly in the rate at which they progress. Unfortunately it is often not appreciated that a reactive chemical hazard is seldom a unique characteristic of the chemical or the process itself, but is highly dependent on the process conditions and mode of operation. Accordingly, the identification of a reactive hazard requires the detailed evaluation of both the properties of the substances used and the operating conditions (Mannan et al., 2001). Essential information for the evaluation of chemical hazards consists of process chemistry mechanisms and parameters. Evaluating this information is not an easy task. Laboratory testing using experimental equipment such as the accelerating rate calorimeter (ARC) and the differential scanning calorimeter (DSC) has been the conventional approach to evaluating chemical hazards. This approach is practical for simple systems, but may not be applicable to more complex systems. Because of the large number of chemical compounds and different reaction scenarios, evaluation can be very expensive and time consuming. Moreover, for a complex system, experimental procedures will provide an overall evaluation of system thermodynamics and kinetics data but will not explain reaction pathways. In this way, obtaining all of the information about the conditions (causing side reactions) of a chemical is impossible, let alone for new materials. Therefore, another approach is necessary to predict thermal hazards; computer aided prediction of thermodynamic properties can be a good alternative.
3. Group Contribution Method
The group contribution method combines accuracy, simplicity and wide applicability. The thermodynamic properties of a chemical are functions of structurally dependent parameters, which are determined by summing the number frequency of each functional group (Joback, 1984; Poling et al., 2001). The group contribution method by Joback divides the functional groups more easily than other group contribution methods and can express various types of materials. The groups are divided into non-ring, ring, halogen, oxygen, nitrogen and sulfur groups. Among the properties, the standard state enthalpy of formation, the standard state Gibbs energy and the polynomial coefficients for the standard heat capacity are necessary. Joback's method is generally accurate for the formation properties. It is very accurate for organic compounds and easier to use than other methods such as Constantinou & Gani's, Benson's, and so forth (Poling et al., 2001). That is why it is convenient to encode and build into an expert system. The equations of the group contribution method are as follows:

$$\Delta H_f^\circ = 68.29 + \sum_k N_k\,(\Delta H_{f,k}) \qquad (1)$$

$$\Delta G_f^\circ = 53.88 + \sum_k N_k\,(\Delta G_{f,k}) \qquad (2)$$
$$C_p^\circ = \left(\sum_k N_k\,C_{pA,k} - 37.93\right) + \left(\sum_k N_k\,C_{pB,k} + 0.210\right)T + \left(\sum_k N_k\,C_{pC,k} - 3.91\times10^{-4}\right)T^2 + \left(\sum_k N_k\,C_{pD,k} + 2.06\times10^{-7}\right)T^3 \qquad (3)$$

where N_k is the number of groups of type k in the molecule, F_k is the contribution of the group labeled k to the specified property f, and T is the temperature in Kelvin.
4. Procedure of Thermal Hazard Prediction
[Figure 1 schematic: identification of the possible pathways; exclusion of infeasible pathways via the computer aided group contribution method; acquisition of the adiabatic temperature rise (ΔT_ad).]
Figure 1. Procedure of Thermal Hazard Prediction.
First of all, all possible reaction pathways are identified from the thermodynamic properties of the reactants and products. Available information on similar systems may be used to build this set of possible pathways. Experimental information about the products formed and the subsequent chemistry is another basis for building this set of possible pathways. Secondly, infeasible and non-hazardous reaction pathways among all the possible pathways are excluded. The smaller the value of the standard state Gibbs free energy of reaction, Δ_rG°, the higher the feasibility of the reaction (Alberty, 1987); the energy potential (heat of reaction) of each reaction is screened as well. ΔH_r and C_p can be calculated from the enthalpy of formation, Δ_fH, of each component. Where these thermodynamic properties are unknown, the group contribution method is used in this step. Finally, this analysis allows calculation of the adiabatic temperature increase (ΔT_ad) from the standard state enthalpy Δ_fH and the heat capacity C_p of the reaction. In order to assess the thermal hazard of the reaction pathways, the adiabatic temperature rise has to be calculated as follows (Mannan et al., 2001):

$$\Delta T_{ad} = \frac{\Delta H_r}{m\,C_p} \qquad (4)$$

where ΔH_r is the heat of reaction, m is the mass of the reactant substance, and C_p is the specific heat of the reaction mixture.
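A minimal sketch of this screening step, combining Eq. (1) and Eq. (4); the group counts and contribution values below are illustrative placeholders, not the actual Joback table entries:

```python
# Minimal sketch of the screening workflow. The group-contribution values
# here are illustrative placeholders, not the actual Joback table entries.
DHF_GROUP = {"-CH3": -76.0, ">CH2": -21.0, "-O-": -132.0, "-OH": -208.0}  # kJ/mol

def dhf_joback(groups):
    """Eq. (1): dHf0 = 68.29 + sum_k N_k * dHf_k  [kJ/mol]."""
    return 68.29 + sum(n * DHF_GROUP[g] for g, n in groups.items())

def dT_adiabatic(dH_r, m, cp):
    """Eq. (4): dT_ad = dH_r / (m * cp); dH_r in J, m in g, cp in J/(g K)."""
    return dH_r / (m * cp)

# Example: a hypothetical exothermic decomposition releasing 334 kJ per
# 100 g of mixture with cp = 2.0 J/(g K) gives dT_ad = 1670 K.
print(dhf_joback({"-CH3": 2, "-O-": 2, "-OH": 2}))
print(dT_adiabatic(334e3, 100.0, 2.0))
```

With Δ_rG° from Eq. (2) used as the feasibility filter, such a routine can be run over every candidate pathway of a decomposition network before any calorimetric experiment is planned.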
5. Case Study: MEKPO Decomposition Processes
[Figure 2 schematic: for each of the 7 types of MEKPO (and MEK), candidate decomposition reactions are screened and the optimum reaction pathway (maximum feasibility) is selected.]
Figure 2. Procedure of optimum reaction pathway selection.
Methyl ethyl ketone peroxide (MEKPO) is a typical example of a highly reactive chemical. It is used as a catalyst for the room temperature curing of unsaturated polyester resins and as an initiator for polymerization reactions. It is manufactured by the oxidation of methyl ethyl ketone (MEK) with hydrogen peroxide (H2O2) (Liaw et al., 2000).
The MEKPO process has three steps. The first step is the oxidation of MEK. The second step is the decomposition reaction of hydrogen peroxide (H2O2). The third step is the decomposition of MEKPO. Only the first step is a desired reaction; the second and third are undesired side reactions. The major decomposition products from MEKPO are carbon dioxide (CO2), water (H2O), acetic acid (C2H4O2), formic acid (CH2O2) and MEK (C4H8O). In general, MEKPO exists as a mixture of seven different types as follows: 10 wt% C4H10O4, 45 wt% C8H18O6, 12 wt% C12H26O8, 5 wt% C16H34O10, 2 wt% C20H42O12, 1 wt% C24H50O14 and 25 wt% C12H24O6 (cyclic trimer) (Milas and Golubovic, 1959). Due to their complex structure, some of them have a higher potential of reactive hazard, and the various types can lead to parallel decomposition reactions. In order to assess the risk of a runaway reaction in the MEKPO process, the adiabatic temperature rise has to be calculated with equation (4) (Mannan et al., 2001). The procedure of optimum reaction selection is shown in Figure 2.
6. Results
Table 1. Adiabatic temperature rise at the MEKPO decomposition step.

Reactant      Thermal inertia    ΔH [kJ/gmol]    ΔT_ad [K]
100% MEKPO    1.0                -333.9          1782.5
50% MEKPO     1.0                -167.0          891.3
50% MEKPO     4.2                -39.8           212.2
25% MEKPO     1.0                -83.5           445.6
25% MEKPO     2.9                -28.8           153.7
Table 2. Comparison with experimental data by Liaw et al. (2000).

Step                   Reactant     Thermal inertia    ΔT_ad [K] (this study)    ΔT_ad [K] (reference)
MEKPO oxidation        50% MEK      2.9                195                       210
H2O2 decomposition     50% H2O2     3.4                196                       183
H2O2 decomposition     15% H2O2     3.4                59                        42
MEKPO decomposition    50% MEKPO    4.2                212                       219
MEKPO decomposition    25% MEKPO    2.9                154                       188
With the optimum reaction pathways shown in Figure 2, adiabatic temperature rises were obtained at various conditions (MEKPO mixture composition ratio, concentrations of reactants, thermal inertia, etc.). The adiabatic temperature rises at the MEKPO decomposition step are shown in Table 1. Studies on the adiabatic temperature rise of MEKPO are still rare. Thus, to compare the results of this study with reference data, the simulation conditions were assumed to be the same as those of Liaw et al. (2000), in which the adiabatic temperature rises of MEKPO were obtained from experiments. The results are shown in Table 2.
7. Conclusion
Adiabatic temperature rise values were obtained in this study as an index for the thermal hazard prediction of MEKPO. The feasible reactions at every MEKPO decomposition step were identified from the possible reaction clusters by obtaining the Gibbs free energy of reaction. For each feasible reaction, the enthalpy of reaction, heat capacity values and adiabatic temperature rise were assessed; thermal inertia and the MEKPO mixture composition ratio were also considered. Adiabatic temperature rise values for each reaction condition were easily obtained, and this shows that the approach taken in this study can be a good methodology for obtaining both qualitative and quantitative risk assessment results for hazardous undesirable reactions. The results were compared with the experimental and simulation data from the reference, and the errors were within a reasonable range.
8. References
Alberty, R.A., 1987, Physical Chemistry, 7th ed., John Wiley & Sons, New York.
Joback, K.G., 1984, A Unified Approach to Physical Property Estimation Using Multivariate Statistical Techniques, S.M. Thesis, MIT, Cambridge.
Liaw, H.J., Yur, C.C. and Lin, Y.F., 2000, Journal of Loss Prevention in the Process Industries, 13, 499.
Mannan, M.S., Rogers, W.J. and Aldeeb, A., 2001, Proc. of HAZARDS XVI, Manchester, U.K., 41.
Milas, N.A. and Golubovic, A., 1959, Journal of the American Chemical Society, 81, 5824.
Poling, B.E., Prausnitz, J.M. and O'Connell, J.P., 2001, The Properties of Gases and Liquids, 5th ed., McGraw-Hill, New York.
9. Acknowledgement
The "Brain Korea 21 Project" of the Ministry of Education & Human Resource Development has supported this study.
Experimental and Theoretical Studies of the TAME Synthesis by Reactive Distillation
Markus Klöker (1), Eugeny Kenig (1), Andrzej Górak (1), Kazimierz Fraczek (2), Wieslaw Salacki (2), Witold Orlikowski (2)
(1) University of Dortmund, Chemical Engineering Department, Dortmund, Germany
(2) Research and Development Centre for the Refinery Industry, Plock, Poland
Email: [email protected]. Fax: +49 231 755-3035
Abstract The heterogeneously catalysed synthesis of TAME (tert-amyl methyl ether) via reactive distillation is investigated experimentally and theoretically. The structured catalytic packing Montz MULTIPAK®-2 is used in the catalytic section of a 200 mm diameter pilot scale column with a total packing height of 4 meters. Simulations with a developed rate-based model covering 11 components and 4 chemical reactions are in good agreement with the experimental data. The simulation studies show the influence of the reflux ratio on conversion and selectivity.
1. Introduction
Reactive separation is a novel technology that combines chemical reaction and product separation in a single apparatus. Depending on the applied separation method, reactive distillation, reactive extraction, reactive absorption and other combined processes can be distinguished. The most popular in the petrochemical industry are catalytic distillation processes (CD), e.g. selective hydrogenation of benzene, diolefins and acetylenes; desulfurization of fluid catalytic cracking (FCC) gasoline, jet and diesel fuels; aromatics alkylation; paraffin isomerisation and dimerisation. One of the most important CD processes is the production of tertiary ethers, which are widely used as ecologically friendly additives for motor fuels. Currently, more than 100 units are in operation using CD to produce MTBE, TAME and ETBE. The major advantages of CD in ether production are the capital cost reduction and the lowering of energy costs due to the utilisation of the reaction heat (more than 20%). Moreover, conversion is increased due to the removal of products via distillation (25% for TAME), and the product selectivity is improved. The production of ethers via CD can also benefit from increased catalyst lifetime due to the reduction of hot spots and the removal of fouling substances from the catalyst. There are several possibilities for immobilising the solid catalyst in industrial CD columns on the basis of trays, random and structured packings. A survey of available catalytic column internals is presented by Taylor and Krishna (2000). In this paper the structured packing Montz MULTIPAK®-2, filled with the catalyst Amberlyst 35 WET, is applied to the TAME synthesis from light gasoline from the FCC process.
2. Chemical System
The light gasoline of an FCC unit was used as the source of the isoamylene fraction. Crude gasoline contains 12 wt% of active isoamylenes and about 1 wt% of dienes. The isoamylene fraction was obtained by distillation and diene hydrogenation of the light gasoline. The final content of isoamylenes in the feed was in the range 19-21 wt% and the concentration of dienes less than 0.01 wt%. The number of components identified by gas chromatography exceeds 90 species. The methanol feed contains more than 99.9 wt% of pure methanol and water in the range 0.015-0.045 wt%. The reaction scheme for the production of TAME from these reagents is as follows:

2-Me-1-butene + MeOH <-> TAME                      (1)
2-Me-2-butene + MeOH <-> TAME                      (2)
2-Me-1-butene <-> 2-Me-2-butene                    (3)
2-Me-1-butene + 2-Me-2-butene -> C10H20            (4)
2 x 2-Me-1-butene -> C10H20                        (5)
2 x 2-Me-2-butene -> C10H20                        (6)
2 x MeOH -> CH3OCH3 + H2O                          (7)
2-Me-1-butene + H2O <-> tert-amyl alcohol          (8)
2-Me-2-butene + H2O <-> tert-amyl alcohol          (9)
tert-amyl alcohol + MeOH <-> TAME + H2O            (10)
The system (1)-(10) is rather complicated for analysis and simulation. Several authors observed multiple steady states in CD columns (Sundmacher et al., 1999; Mohl et al., 1999). For some reactions, kinetic data are missing. Reactions 1, 2 and 3 are the desired ones. Dimerisation of the reactive isoamylenes (reactions 4, 5 and 6) is usually treated as unwanted in the open literature; however, at commercial scale it poses no difficulty because of the high octane number and low volatility of the dimers. Dimethyl ether (reaction 7) also has a high octane number, but because of its large vapour pressure it is inapplicable as a gasoline component. The major problem is related to water, which is introduced with the methanol feed and formed during methanol dimerisation. Water adsorbs on the etherification catalyst more strongly than methanol and can inhibit the desired reactions, and for this reason the water content in the reactants should not exceed 0.1% (Piccoli and Lovisi, 1995). It was observed that even small amounts of water in the distillate result in two liquid phases after cooling. Phase splitting is not a problem at industrial scale, and is even desirable if the non-converted methanol is extracted with the water and the methanol is afterwards distilled and recycled.
3. Reactive Distillation Experiments
A simplified sketch of the pilot plant installation is shown in Fig. 1. The basic equipment comprises a pre-reactor and a catalytic distillation column equipped with a skew-mounted total condenser and a vertical evaporator. All units, packings, tanks and piping are made from stainless steel. The CD column (diameter 200 mm) has three sections:
- the rectifying section at the top of the column, equipped with 1 meter of 12x12x0.4 mm Bialecki rings;
- the catalytic section, equipped with 1 meter of the structured catalytic packing MULTIPAK®-2 with 41 vol% of Amberlyst 35 WET in the catalyst bags;
- the stripping section at the bottom of the column, filled with two meters of the same Bialecki rings as in the rectifying section.
All parts of the column are insulated with an 80 mm layer of mineral wool. The pressure at the top of the column is adjusted by the cooling water temperature in the condenser.
Figure 1: A sketch of the pilot plant installation.
The flow rates of the isoamylene fraction, methanol and TAME fraction are measured using tensiometer balances, and the reflux and distillate flow rates using a Coriolis-type flowmeter. The samples are analysed off line. Process data, such as temperatures, flows, levels and pressures, are stored and visualised by a monitoring system based on the microprocessor controller MSC68. This provides a way to identify steady states in the column. Several operational parameters have been varied and their influence on the process performance has been analysed. In particular, the feed rate has been changed in the range between 25 and 45 kg/h, the molar feed ratio of methanol/isoamylenes between 0.9 and 1.45, and the reflux ratio between 1 and 2.
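Steady-state identification from such logged signals can be sketched, for example, with a simple moving-window slope test; the window length and threshold below are arbitrary choices, not the criteria actually used at the pilot plant:

```python
import numpy as np

# Minimal sketch: flag steady state when the fitted linear trend of a
# monitored signal over a moving window is negligible. Parameters arbitrary.
def at_steady_state(signal, dt=10.0, window=30, tol=1e-3):
    """signal: list of samples taken every dt seconds."""
    if len(signal) < window:
        return False
    y = np.asarray(signal[-window:])
    t = np.arange(window) * dt
    slope = np.polyfit(t, y, 1)[0]              # trend over the window
    return abs(slope) < tol * max(abs(y.mean()), 1.0)  # relative criterion
```

Applying such a test to every stored temperature, flow and pressure channel simultaneously gives a practical, automatable steady-state flag for selecting evaluation periods.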
4. Rate-based Modelling
Traditional equilibrium stage models and efficiency approaches are often inadequate for reactive separation processes. In multicomponent mixtures, diffusion interactions can lead to unusual phenomena, and it is even possible to observe mass transport of a component in the direction opposite to its own driving force - the so-called reverse diffusion (Taylor and Krishna, 1993). For multicomponent systems, the stage efficiencies are different for different components and may range from −∞ to +∞. To avoid possible qualitative errors in the parameter estimation, it is necessary to model
separations taking account of the actual mass transfer rates (Taylor and Krishna, 1993; Noeres et al., 2002). Therefore, in this work a more physically consistent way is used, by which a direct account of the process kinetics is realised. This approach to the description of a column stage is known as the rate-based approach and implies that the actual rates of multicomponent mass transport, heat transport and chemical reactions are considered immediately in the equations governing the stage phenomena. Mass transfer at the vapour-liquid interface is described via the well known two-film model. Multicomponent diffusion in the films is covered by the Maxwell-Stefan equations (Hirschfelder et al., 1964). In the rate-based approach, the influence of the process hydrodynamics is taken into account by applying correlations for mass transfer coefficients, specific contact area, liquid hold-up and pressure drop. Chemical reactions are accounted for in the bulk phases and, if relevant, in the film regions as well. The relevant models for the reactive distillation column and peripherals have been developed and implemented in the simulation environment ASPEN Custom Modeler™. The simulations of the heterogeneously catalysed synthesis of TAME have included 11 components. The species and their boiling points at the operating pressure of 4 bar are listed in Table 1. The key components of the inert fractions of the feed have been used to represent the hydrocarbon fractions (see Table 1). The VLE is described by the UNIQUAC model, with the Redlich-Kwong equation of state. In the simulations, four reactions are considered: the main reactions (1)-(3) and the formation of dimethyl ether (7).

Table 1: Selected components and boiling points (T_b) at 4 bar.

Component                        Representing              T_b [°C]
Dimethyl ether                   -                         12.0
n-Butane                         inert C4 components       41.9
i-Pentane                        inert C5 components       74.6
2-methyl-1-butene (2M1B)         -                         77.3
2-methyl-2-butene (2M2B)         -                         85.7
Methanol                         -                         104.1
2-methyl-pentane                 inert C6-C8 components    110.8
TAME                             -                         140.2
Water                            -                         143.7
2,3-dimethyl-2-methoxy butane    higher ethers             156.1
n-Decane                         inert C9+ components      237.2

The kinetics of reactions (1)-(3) have been determined for Amberlyst 35 as part of the European research project INTINT (Intelligent Column Internals for Reactive Separations, see http://www.cpi.umist.ac.uk/intint/). Kinetics for reaction (7) have been measured by Kiviranta-Paakkonen et al. (1998) for Amberlyst 16. Since the properties of Amberlyst 16 and Amberlyst 35 are similar, the validity of these kinetics is assumed. According to Oost and Hoffmann (1995), the dimerisation of isoamylenes is the main side reaction at low methanol concentrations. However, no kinetic data are available for the description of reactions (4) to (6), and therefore dimerisation is not considered in this work. Subawalla and Fair (1999) pointed out that the hydration reactions of the isoamylenes (reactions (8) and (9)) are strongly equilibrium limited and water must be present in large excess to form significant amounts of tert-amyl alcohol. Thus, the reactions (8) to (10) are neglected.
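For reference, for an ideal gas film the Maxwell-Stefan equations underlying this film model take the standard form (cf. Taylor and Krishna, 1993; notation simplified here):

$$\frac{dx_i}{dz} \;=\; \sum_{j=1,\; j\neq i}^{n} \frac{x_i\,N_j - x_j\,N_i}{c_t\,D_{ij}}, \qquad i = 1,\dots,n-1,$$

where $x_i$ are mole fractions, $N_i$ the molar fluxes, $c_t$ the total molar concentration and $D_{ij}$ the Maxwell-Stefan diffusivities; only $n-1$ of the equations are independent, and for a binary mixture they reduce to Fick's law.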
5. Model Validation
A series of simulations has been performed with the developed model for validation purposes. Figures 2 and 3 show the simulated temperature and concentration profiles (lines, solid symbols) of the main components for two experiments, together with the respective experimental values (empty symbols).
[Figure 2 plots (experiment 13): temperature and mass fraction profiles of TAME, the C4 fraction, 2M2B+2M1B, the C5 fraction and methanol over the packing height [m].]
Figure 2: Temperature and concentration profiles (experiment 13).
In experiment 13, the reflux ratio is 2 and the molar feed ratio methanol/isoamylenes is 0.9, with a total feed rate of 22.8 kg/h. The simulated profiles agree well with the corresponding experimental data. However, for this experiment, the TAME mass fraction in the bottom is slightly overpredicted, while the calculated bottom temperature is lower. This can be explained by the dimer and oligomer formation from isoamylene, which is not considered in the model.
Figure 3: Temperature and concentration profiles (experiment 15).
In experiment 15, the reflux ratio is 1 and the molar feed ratio methanol/isoamylenes is 1.1, with a total feed rate of 22.5 kg/h. In this case, fewer high boiling by-products are formed and consequently the agreement between the computed and experimental data is very good.

6. Simulation Studies
The influence of the reflux ratio on conversion and on the formation of the by-product dimethyl ether has been studied (see Figure 4). All other operating parameters are chosen according to the conditions in experiment 15. For lower reflux ratios (<1.5), the selectivity for TAME with respect to methanol is lower due to the increased formation of dimethyl ether. In addition, with increasing reflux ratio the conversion is increased, since the recycle of non-converted methanol to the column is increased.
718 of dimethyl ether. In addition, with increasing reflux ratios, the conversion is increased, since the recycle of non-converted methanol to the column is increased.
-*——"••
^
':'
"
^
^
^
J
I 0.3 I X
I
0.2 S
--o-S(TAMeMeOH) —tr-Y. (Isoamytene) - o- X (Methanol)
1I
Reflux ratio [kg/kg]
Figure 4: Influence of the reflux ratio on conversion and selectivity.
For the selected molar feed specifications and operating conditions, both conversion and selectivity remain almost constant for reflux ratios greater than 2.
7. Conclusions
Both experimental and theoretical studies on the synthesis of TAME via reactive distillation have been performed. A rigorous model including 11 components and 4 reactions has been developed. The agreement between the simulations and the experiments is satisfactory. The simulation studies show a significant influence of the reflux ratio on the conversion and on the selectivity for TAME related to methanol.
8. References
Hirschfelder, J.O., Curtiss, C.F. and Bird, R.B., 1964, Molecular Theory of Gases and Liquids, Wiley, New York.
Kiviranta-Paakkonen, P., Struckmann (nee Rihko), L.K., Linnekoski, J.A. and Krause, A.O.I., 1998, Ind. Eng. Chem. Res. 37, 18-24.
Mohl, K., Kienle, A., Gilles, E., Rapmund, P., Sundmacher, K. and Hoffmann, U., 1999, Chem. Eng. Sci. 54, 1029-1043.
Noeres, C., Kenig, E.Y. and Gorak, A., 2002, Chem. Eng. Process, in print.
Oost, C. and Hoffmann, U., 1995, Chem. Eng. Technol. 18, 203-209.
Piccoli, R.L. and Lovisi, H.R., 1995, Ind. Eng. Res. 34, 510-515.
Subawalla, H. and Fair, J.R., 1999, Ind. Eng. Chem. Res. 38, 3696-3709.
Sundmacher, K., Uhde, G. and Hoffmann, U., 1999, Chem. Eng. Sci. 54, 2839-2847.
Taylor, R. and Krishna, R., 1993, Multicomponent Mass Transfer, Wiley, New York.
Taylor, R. and Krishna, R., 2000, Chem. Eng. Sci. 55, 5183-5229.
9. Acknowledgement The financial support of the European Commission (Contract No. GIRD CT1999 00048) and of the Swiss Federal Office for Education and Science (decree: 99.0724) is highly appreciated.
Oscillatory Behaviour in Mathematical Model of TWC with Microkinetics and Internal Diffusion
Petr Koci (1,3), Milos Marek (1,3), Milan Kubicek (2,3,*)
(1) Department of Chemical Engineering, (2) Department of Mathematics, (3) Center for Nonlinear Dynamics of Chemical and Biological Systems, Prague Institute of Chemical Technology, Technicka 5, CZ-166 28 Prague, Czech Republic
Abstract A mathematical model of a three-way catalytic converter (TWC) has been developed. It includes mass balances in the bulk gas, mass transfer to the porous catalyst, diffusion in the porous structure and simultaneous reactions described by a complex microkinetic scheme of 31 reaction steps for 8 gas components (CO, O2, C2H4, C2H2, NO, NO2, N2O and CO2) and a number of surface reaction intermediates. Enthalpy balances for the gas and solid phase are also included. The method of lines has been used for the transformation of the set of partial differential equations (PDEs) to a large and stiff system of ordinary differential equations (ODEs). Multiple steady and oscillatory states (simple and doubly-periodic) and complex spatiotemporal patterns have been found for a certain range of operation parameters. The methodology of studies of such systems with complex dynamic patterns is briefly introduced and the undesired behaviour of the used integrator is discussed.
1. Introduction
Legal limits for the emissions of the main pollutants in automobile exhaust gases are becoming more and more strict. The development of new and advanced catalytic converters demands not only experimental work, but also extensive and detailed modelling and simulation studies. The models become more complex when all the important physical and chemical phenomena are considered. Particularly the use of non-stationary kinetic models (microkinetics) with surface deposition of reaction components (Jirat et al., 1999, e.g.) and the incorporation of diffusion effects in the porous catalyst structure lead to a large system of partial differential equations. Recently, non-stationary experiments on the microkinetics of typical reactions on Pt/Ce/γ-Al2O3 three-way catalyst were performed (Harmsen et al., 2000, 2001a,b). Their results were evaluated in the form of detailed reaction schemes. Mukadi and Hayes (2002) used these kinetic expressions in the model of a TWC monolith converter. They pointed to the significance of the effects of diffusion in the catalytic washcoat on the performance of the monolith.
* Corresponding author. Tel: +420-22435 3104, fax: +420-23333 7335. Email address: milan.kubicek@vscht.cz (Milan Kubicek).
The simplest experimental arrangement for the study of internal diffusion effects is a well stirred reactor (CSTR, short monolith, or recirculation reactor) containing catalytic washcoat, where internal diffusion and reactions take place simultaneously. In such a well stirred reactor, spatially independent concentration and temperature in the bulk gas phase can be considered. Due to the small thickness of the washcoat layer (typically tens of µm) it may be assumed that no temperature gradients occur within the washcoat (Wanker et al., 2000).
2. Model
A spatially pseudo-1D, heterogeneous model of a Pt/Ce/γ-Al2O3 TWC monolith has been developed. It accounts for a well mixed bulk gas and a thin layer of porous catalytic washcoat, where diffusion, surface deposition and reactions occur. The model is represented by the following ODEs and PDEs. Mass balances are considered in the bulk gas (1), in the washcoat pores (2) and on the catalyst surface (3-4); enthalpy balances are written for the bulk gas (5) and the washcoat layer (6); the appropriate boundary conditions are in the form of (7):
$$\frac{dc_j}{dt} = u\,(c_j^{in} - c_j) - \frac{k_c\,a}{\varepsilon^g}\left(c_j - c_j^w\big|_{r=\delta}\right) \qquad (1)$$

$$\varepsilon^w\,\frac{\partial c_j^w}{\partial t} = D_j^{eff}\,\frac{\partial^2 c_j^w}{\partial r^2} + \rho^w\,(1-\varepsilon^w)\sum_k \nu_{kj}\,R_k \qquad (2)$$

$$L_{NM}\,\frac{\partial \Theta_m}{\partial t} = \sum_k \nu_{km}\,R_k \qquad (3)$$

$$L_{OSC}\,\frac{\partial \zeta_p}{\partial t} = \sum_k \nu_{kp}\,R_k, \qquad L_{sup}\,\frac{\partial \chi_q}{\partial t} = \sum_k \nu_{kq}\,R_k \qquad (4)$$

$$\rho^g c_p^g\,\frac{dT^g}{dt} = u\,\rho^g c_p^g\,(T^{in} - T^g) - k_h\,a\,(T^g - T^s) \qquad (5)$$

$$\rho^s c_p^s\,\frac{\partial T^s}{\partial t} = k_h\,a\,(T^g - T^s) + \rho^w\,(1-\varepsilon^w)\sum_k (-\Delta H_k)\,R_k \qquad (6)$$

$$\frac{\partial c_j^w}{\partial r}\bigg|_{r=0} = 0, \qquad D_j^{eff}\,\frac{\partial c_j^w}{\partial r}\bigg|_{r=\delta} = k_c\left(c_j - c_j^w\big|_{r=\delta}\right) \qquad (7)$$
Here c_j and c_j^w are the concentrations in the bulk gas and the washcoat pores respectively, u is the space velocity, the ε's are porosities, the ρ's are densities, g and w are indices for gas and washcoat respectively, k_c and k_h are the mass and heat transfer coefficients respectively, a is the density of the external surface area and r ∈ ⟨0; δ⟩ is the spatial coordinate in the washcoat layer of thickness δ. The symbols Θ_m, ζ_p and χ_q denote the fractions of the related components deposited on the platinum (*), ceria (s) and γ-Al2O3 (γ) surface sites, with the capacities L_NM, L_OSC and L_sup, respectively. The R_j are the reaction rates defined in Table 1 - a complex set of reaction steps for the important components of automobile exhaust gases is employed. The method of lines has been used for the transformation of the complex system of PDEs (2-4) to a large system of ODEs (semidiscretisation on the set of N+1 equidistant spatial grid points r_i = ih, i = 0...N). Thus altogether 29N+41 ODEs approximate the model with the complete microkinetic scheme given in Table 1.
Table 1. TWC microkinetic scheme used in the model.

No.  Reaction step                              Kinetic expression
1    CO + * <-> CO*                             R1 = k1 L_NM c_CO Θ* − k-1 L_NM Θ_CO*
2    O2 + * -> O2*                              R2 = k2 L_NM c_O2 Θ*
3    O2* + * -> 2 O*                            R3 = R2
4    CO* + O* -> CO2 + 2*                       R4 = k4 L_NM Θ_CO* Θ_O*
5    CO + O* <-> OCO*                           R5 = k5 L_NM c_CO Θ_O* − k-5 L_NM Θ_OCO*
6    OCO* -> CO2 + *                            R6 = k6 L_NM Θ_OCO*
7    O2 + s -> O2^s                             R7 = k7 L_OSC c_O2 ζ_s
8    O2^s + s -> 2 O^s                          R8 = R7
9    CO* + O^s -> CO2 + * + s                   R9 = k9 L_NM Θ_CO* ζ_O
10   CO2 + γ <-> CO2^γ                          R10 = k10 L_sup c_CO2 χ_γ − k-10 L_sup χ_CO2

11   C2H2 + * <-> C2H2*                         R11 = k11 L_NM c_C2H2 Θ* − k-11 L_NM Θ_C2H2*
12   C2H2 + 2* <-> C2H2**                       R12 = k12 L_NM c_C2H2 Θ*^2 − k-12 L_NM Θ_C2H2**
13   C2H2* + 3 O* -> 2 CO* + H2O + 2*           R13 = k13 L_NM Θ_C2H2* Θ_O*^3
14   C2H2** + 3 O* -> 2 CO* + H2O + 4*          R14 = k14 L_NM Θ_C2H2** Θ_O*^3
15   C2H2 + O* <-> C2H2O*                       R15 = k15 L_NM c_C2H2 Θ_O* − k-15 L_NM Θ_C2H2O*
16   C2H2O* + 2 O* -> 2 CO* + H2O + *           R16 = k16 L_NM Θ_C2H2O* Θ_O*^2
17   C2H2* + 3 O^s -> 2 CO* + H2O + 3 s         R17 = k17 L_NM Θ_C2H2* ζ_O^3

18   C2H4 + 2* <-> C2H4**                       R18 = k18 L_NM c_C2H4 Θ*^2 − k-18 L_NM Θ_C2H4**
19   C2H4** <-> C2H4* + *                       R19 = k19 L_NM Θ_C2H4** − k-19 L_NM Θ_C2H4* Θ*
20   C2H4** + 6 O* -> 2 CO2 + 2 H2O + 8*        R20 = k20 L_NM Θ_C2H4** Θ_O*^6
21   C2H4* + 6 O* -> 2 CO2 + 2 H2O + 7*         R21 = k21 L_NM Θ_C2H4* Θ_O*^6
22   C2H4 + O* <-> C2H4O*                       R22 = k22 L_NM c_C2H4 Θ_O* − k-22 L_NM Θ_C2H4O*
23   C2H4O* + 5 O* -> 2 CO2 + 2 H2O + 6*        R23 = k23 L_NM Θ_C2H4O* Θ_O*^5

24   NO + * <-> NO*                             R24 = k24 L_NM c_NO Θ* − k-24 L_NM Θ_NO*
25   NO* + * -> N* + O*                         R25 = k25 L_NM Θ_NO* Θ*
26   NO* + N* -> N2O* + *                       R26 = k26 L_NM Θ_NO* Θ_N*
27   N2O* -> N2O + *                            R27 = k27 L_NM Θ_N2O*
28   N2O* -> N2 + O*                            R28 = k28 L_NM Θ_N2O*
29   N* + N* -> N2 + 2*                         R29 = k29 L_NM Θ_N*^2
30   NO + O* <-> NO2*                           R30 = k30 L_NM c_NO Θ_O* − k-30 L_NM Θ_NO2*
31   NO2* <-> NO2 + *                           R31 = k31 L_NM Θ_NO2* − k-31 L_NM c_NO2 Θ*

The reaction subsystems for CO, C2H2, C2H4 and NOx are separated by blank lines. For values of the kinetic parameters cf. Harmsen et al. (2000, 2001a,b) and Mukadi and Hayes (2002).
The LSODE (Hindmarsh, 1983) implicit integration method for stiff systems with internally generated full Jacobian (mf=22) has been used for the dynamic simulations.
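A minimal sketch of this semidiscretisation for a single washcoat component (central differences on N+1 grid points, flux boundary conditions as in Eq. (7)); the first-order rate law and all parameter values are illustrative stand-ins, not the 31-step scheme of Table 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch for one component in the washcoat layer:
# eps * dc/dt = Deff * d2c/dr2 - k1 * c  on N+1 equidistant grid points.
N, delta = 30, 20e-6                 # grid intervals, washcoat thickness [m]
h = delta / N
Deff, eps, k1 = 6e-10, 0.4, 5.0      # illustrative parameter values
kc, cg = 0.1, 1.0                    # film mass transfer coeff., bulk conc.

def rhs(t, c):
    dc = np.empty_like(c)
    # interior points: central second difference
    dc[1:-1] = (Deff * (c[2:] - 2*c[1:-1] + c[:-2]) / h**2 - k1*c[1:-1]) / eps
    # r = 0: symmetry, dc/dr = 0 (ghost-point mirror)
    dc[0] = (Deff * 2*(c[1] - c[0]) / h**2 - k1*c[0]) / eps
    # r = delta: flux continuity  Deff * dc/dr = kc * (cg - c)
    ghost = c[-2] + 2*h*kc*(cg - c[-1]) / Deff
    dc[-1] = (Deff * (ghost - 2*c[-1] + c[-2]) / h**2 - k1*c[-1]) / eps
    return dc

sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(N + 1), method="LSODA")
```

With the full scheme, each grid point carries 29 state variables (gas-phase and surface species), which is how the 29N+41 ODE count quoted above arises.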
Table 2. Inlet gas composition used in the simulations (balance N2).

CO     1.22 % (vol.)      C2H4    380 ppm (vol.)    NO     1130 ppm (vol.)
O2     0.6-0.8 % (vol.)   C2H2    280 ppm (vol.)    CO2    12.2 % (vol.)
3. Results
The existence of multiple steady states, observed when simulating the ignition/extinction of the CO oxidation reaction on Pt/γ-Al2O3 catalyst (Eqs. 1-6 in Table 1) by increasing/decreasing the inlet gas temperature, is illustrated in Fig. 1, left. Isothermal and adiabatic courses of the reactions are compared - the hysteresis region is wider and shifted to lower temperature for the adiabatic case, as it reflects the temperature rise and heat capacity of the reactor. However, the multiplicity is preserved in the isothermal case; hence it follows from the used kinetic scheme. Fig. 1 on the right represents spatial profiles of the surface coverages of CO oxidation intermediates - in this case a non-monotonous, abrupt change from zero to full coverage occurs in the center of the washcoat. In addition to multiple steady states, the existence of oscillations of various types has also been observed in the model. Continuation methods (Kubicek and Marek, 1983) can be used to locate the positions of limit points (multiple solutions), Hopf bifurcation points (origin of oscillations) and period doubling bifurcation points. Fig. 2 shows an example of the results of such computations, using the continuation software CONT (Kohout et al., 2002) applied to the isothermal system for CO oxidation on Pt/Ce/γ-Al2O3 catalyst (Eqs. 1-10 in Table 1) with no internal diffusion resistance.
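For orientation, the condition located by such continuations at a Hopf point can be stated generically (standard bifurcation theory, in our notation rather than the paper's):

$$f(x^*, p) = 0, \qquad \lambda_{1,2}\big(J(x^*, p)\big) = \pm\, i\omega, \quad \omega > 0,$$

i.e. a steady state $x^*$ whose Jacobian $J$ has a purely imaginary complex-conjugate eigenvalue pair; a branch of periodic solutions with angular frequency near $\omega$ emanates from the steady state as the parameter $p$ crosses the critical value.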
Figure 1. Hysteresis of CO conversion. CO oxidation by O2 on Pt/γ-Al2O3 catalyst, δ=20 µm, D_eff=6x10^-7 m^2.s^-1, L_NM=50 mol.m^-3, y_O2^in=0.61 %. • Left: Outlet CO concentrations for the temperature ramp ±1 K/s. • Right: Concentration profiles of the components on catalytic Pt-centers; steady state with higher CO conversion (cf. isothermal T-down curve) for T^in=450 K.
Figure 2. Dependence of the solution (y_CO^out) on the inlet concentration of oxygen in CO oxidation, obtained by the continuation. sSS - stable steady state, uSS - unstable steady state, sP - stable periodic oscillations (minimum and maximum values), Hopf BP - Hopf bifurcation point; unstable periodic solutions are not presented. T=630 K (isothermal), δ=20 µm, L_NM=50 mol.m^-3, L_OSC=2 mol.m^-3, no diffusional resistance in the washcoat.
Figure 3. CO oxidation by O2 - evolution diagrams in the inlet concentration of oxygen. The inlet concentration of O2 changes with a constant rate of ±10⁻… %/s. D_eff=6×10⁻⁷ m².s⁻¹; other parameters are taken from Figure 2.
Figure 4. Complex oscillatory behaviour in TWC operation. Left: Outlet HC concentration. Right: Spatiotemporal profile of C2H2** surface concentration. T=630 K (isothermal), δ=20 μm, D_eff=6×10⁻⁷ m².s⁻¹, L_NM=50 mol.m⁻³, L_OSC=100 mol.m⁻³, inlet concentrations are given in Table 2, y_O2,in=0.74 %.
The corresponding evolution diagram for the distributed system with a finite diffusion coefficient is given in Fig. 3. Comparison of Figs. 2 and 3 confirms again that the observed nonlinear phenomena follow from the used kinetic scheme and that the introduction of internal diffusion effects only modifies the behaviour - we can observe the alternating existence of single and period-doubled oscillations. More complex oscillations have been found when the full TWC microkinetic model (Eqs. 1-31 in Table 1) has been used in the computations, cf. Fig. 4. The complex spatiotemporal pattern of the oxidation intermediate C2H2** (Fig. 4, right) illustrates that the oscillations result from the composition of two periodic processes with different time constants. For another set of parameters the coexistence of doubly periodic oscillations with stable and apparently unstable steady states has been found (cf. Fig. 5). Even though the LSODE stiff integrator (Hindmarsh, 1983) has been successfully employed in the solution of approx. 10^… ODEs, in some cases the unstable steady state has been stabilised by the implicit integrator, particularly when the default value for the maximum time-step (h_max) has been used (cf. Fig. 5, right, and Fig. 3, bottom). Hence it is necessary to give care to the control of the step size used; otherwise false conclusions on the stability of steady states can be reached.
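The step-size caveat can be illustrated with a toy system (our assumption-laden sketch, not the authors' computations): an implicit solver started near an unstable steady state is run once with the default maximum step and once with a bounded max_step, and the resulting amplitudes are compared before drawing any stability conclusion.

# Sketch: an unstable focus inside a stable limit cycle. A loosely stepped
# implicit solver may creep along the unstable steady state; bounding
# max_step helps the oscillation to be resolved. Always compare both runs.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    r2 = y[0]**2 + y[1]**2
    return [y[0] * (0.05 - r2) - y[1],
            y[1] * (0.05 - r2) + y[0]]

y0 = [1e-6, 0.0]                       # start near the unstable steady state
loose = solve_ivp(rhs, (0, 500), y0, method="BDF", rtol=1e-3)
tight = solve_ivp(rhs, (0, 500), y0, method="BDF", rtol=1e-3, max_step=0.5)
print("amplitude, default max_step:", np.abs(loose.y[0]).max())
print("amplitude, max_step = 0.5  :", np.abs(tight.y[0]).max())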
Figure 5. Dynamic simulation of TWC operation. T=630 K (isothermal), δ=10 μm, D_eff=2×10⁻⁷ m².s⁻¹, L_NM=50 mol.m⁻³, L_OSC=1000 mol.m⁻³, inlet concentrations are given in Table 2, y_O2,in=0.74 %. sSS - integrator reaches the stable steady state (θ…=0.9), s2P - integrator reaches the stable doubly-periodic state (θ…=0, rtol=1×10⁻…, h_max=… s), uSS - integrator stabilizes the apparently unstable steady state (θ…=0, rtol=1×10⁻…, h_max implicit). Left: Outlet CO concentration. Right: Detail of s2P and uSS differentiation.
4. Conclusions
The results of simulations of the TWC model with microkinetics and diffusion resistance within the washcoat enable the interpretation of the dynamics of surface coverages and overall reactions, and can serve for the improvement of the washcoat design. It has been found that not only multiple steady states (hysteresis) but also various types of periodic and complex spatiotemporal concentration patterns can exist in the monolith. A thorough analysis of bifurcations and transitions among the existing patterns is a numerically demanding task due to the dimension of the problem.
Acknowledgements
This work has been supported by the grants of the Czech Grant Agency 104/02/0339 and 201/02/0844. The authors would like to express their gratitude to Vladislav Nevoral for the help with the computations of Fig. 2.
5. References
Harmsen, J.M., J.H. Hoebink & J.C. Schouten, 2000, Ind. Eng. Chem. Res., 39, 599.
Harmsen, J.M., J.H. Hoebink & J.C. Schouten, 2001a, Chem. Eng. Sci., 56, 2019.
Harmsen, J.M., J.H. Hoebink & J.C. Schouten, 2001b, Catal. Letters, 71, 81.
Hindmarsh, A.C., 1983, ODEPACK, A Systematized Collection of ODE Solvers, in: Scientific Computing, Eds. R.S. Stepleman et al., North-Holland, Amsterdam.
Jirat, J., M. Kubicek & M. Marek, 1999, Catal. Today, 53, 583.
Kohout, M., I. Schreiber & M. Kubicek, 2002, Computers Chem. Eng., 26, 517.
Kubicek, M. & M. Marek, 1983, Computational Methods in Bifurcation Theory and Dissipative Structures, Springer Verlag, New York.
Mukadi, L.S. & R.E. Hayes, 2002, Computers Chem. Eng., 26, 439.
Wanker, R., H. Raupenstrauch & G. Staudinger, 2000, Chem. Eng. Sci., 55, 4709.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Methods of Analysis of Complex Dynamics in Reaction-Diffusion-Convection Models
Martin Kohout, Tereza Vanickova, Igor Schreiber, Milan Kubicek
Department of Chemical Engineering, Department of Mathematics, and Center for Nonlinear Dynamics of Chemical and Biological Systems, Prague Institute of Chemical Technology, Technicka 5, CZ-166 28 Prague, Czech Republic
Abstract We employ a method of numerical continuation which has been earlier developed into a software tool for analysis of spatiotemporal patterns emerging in systems with simultaneous reaction, diffusion and convection. As an example, we take a catalytic cross-flow tubular reactor with first order exothermic reaction kinetics. The analysis begins with determining stability and bifurcations of steady states and periodic oscillations in the corresponding homogeneous system. This information is then used to infer the existence of travelling waves which occur due to reaction and diffusion. We focus on waves with constant velocity and examine in some detail the effects of convection on the front waves which are associated with bistability in the reaction-diffusion system. A numerical method for accurate location and continuation of front and pulse waves via a boundary value problem for homo/heteroclinic orbits is used to determine variation of the front waves with convection velocity and some other system parameters. We find that two different front waves can coexist and move in opposite directions in the reactor. Also, the waves can be reflected and switched on the boundaries which leads to zig-zag spatiotemporal patterns.
* Corresponding author. Email addresses: [email protected] (Martin Kohout), [email protected] (Tereza Vanickova), [email protected] (Igor Schreiber), [email protected] (Milan Kubicek).
1. Introduction
Occurrence of wave solutions in spatially extended systems is caused by an interplay of nonlinear internal dynamics combined with mass or energy transport (Kapral & Showalter (1995)). Travelling solitary pulses and front waves represent the simplest cases. Fronts are associated with bistability in the reaction-diffusion system, and solitary pulse waves occur in excitable media. Near the boundary of a parameter domain where the waves appear, more complex patterns may emerge due to loss of stability and bifurcations of the pulse/front waves (Krishnan et al. (1999)). Here we examine the case when multiple front waves appear and give rise to zig-zag spatiotemporal patterns. The dynamics of moving spatiotemporal patterns is modelled and analyzed in a tubular reactor with a continuous supply of reactants along the reactor (e.g. via a semipermeable membrane). Such an arrangement has been termed a cross-flow reactor. The chosen kinetics
correspond to a simple thermokinetic reaction in a packed-bed tubular reactor (Nekhamkina et al. (2000); Sheintuch & Nekhamkina (2001)). The waves with constant velocity in an unbounded system can be studied by employing a moving coordinate transformation and solving a boundary value problem (Kubicek & Schreiber (1998); Kohout et al. (2000, 2001)) to obtain either pulse or front waves. By applying a continuation method (Kubicek (1976); Kubicek & Marek (1983)) included in a software tool for nonlinear analysis (Kohout et al. (2002); Marek & Schreiber (1995)), the parameter dependence of the front/pulse waves can be examined. Below we discuss a particular case of coexisting stable front waves and calculate two-parameter bifurcation diagrams determining wave velocity vs. physical parameter relations. These results are then related to spatiotemporal patterns obtained by directly solving the partial differential equations which describe a bounded system. This system gives rise to an alternating pattern of fronts moving back and forth in the reactor. At the boundaries one type of front is transformed into the other one and is reflected back into the reactor. Such a pattern exists for the reaction-diffusion system (with Neumann boundary conditions) as well as for the reaction-diffusion-convection system (with Danckwerts boundary conditions). The observed zig-zag dynamics is of both theoretical and practical interest for the operation of chemical reactors.
2. Model
The simplest description of a catalytic-bed tubular reactor assumes that concentration and temperature gradients between the fluid and solid phases are absent. For a first-order exothermic reaction kinetics in a cross-flow reactor the mass and enthalpy balances in the fluid and solid phases can be merged and nondimensionalized to provide the following two equations:

∂x/∂τ = −v ∂x/∂ξ + f₁(x, y),   (1)
Le ∂y/∂τ = −v ∂y/∂ξ + d ∂²y/∂ξ² + f₂(x, y),   (2)
where x and y denote conversion and dimensionless temperature, respectively, τ and ξ are dimensionless time and axial coordinate, resp., and the terms

f₁(x, y) = Da (1 − x) exp[y/(1 + y/γ)] − α_x (x − x_w),   (3)
f₂(x, y) = B Da (1 − x) exp[y/(1 + y/γ)] − α_y (y − y_w),   (4)
include consumption of mass and generation of heat by chemical reaction and mass/heat exchange across the walls of the reactor. Here v and d represent dimensionless convection velocity and dimensionless thermal diffusivity, resp. (axial mass dispersion is neglected); B is the dimensionless reaction enthalpy, Da is the Damköhler number and Le is the Lewis number. We have avoided the conventional introduction of the Peclet number since we would like to examine the effects of convection and thermal diffusion separately. Otherwise, the scaling of the variables and the definitions of the dimensionless quantities are conventional, see e.g. (Nekhamkina et al. (2000)).
For the bounded flow system of (dimensionless) reactor length L the Danckwerts boundary conditions apply:

ξ=0: x = 0, d ∂y/∂ξ = v y;   ξ=L: ∂x/∂ξ = 0, ∂y/∂ξ = 0.   (5)
For the system with no convective flow we use the Neumann boundary conditions:

ξ=0: ∂x/∂ξ = 0, ∂y/∂ξ = 0;   ξ=L: ∂x/∂ξ = 0, ∂y/∂ξ = 0.   (6)
Travelling waves with a constant velocity u on an unbounded interval can be studied upon the coordinate transformation ζ = ξ − uτ, which brings the partial differential equations (1) and (2) into the ordinary differential system

dx/dζ = f₁(x, y)/(v − u),   (7)
dy/dζ = y_p,   (8)
dy_p/dζ = −(1/d)[(u Le − v) y_p + f₂(x, y)].   (9)
A special solution of Eqs. (7),(8),(9) which connects two different steady states (a heteroclinic solution) then represents a front wave and a solution doubly asymptotic to a single steady state (a homoclinic orbit) represents a pulse wave.
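The travelling-wave system lends itself to a compact numerical transcription. The following Python sketch (ours, not the authors' software CONT) evaluates the right-hand sides of Eqs. (7)-(9) with the kinetic terms (3)-(4); the parameter values are those quoted in Section 3, and the wave velocity u is a free parameter to be refined by the boundary value solver.

# Right-hand sides of the travelling-wave ODEs (7)-(9), using the kinetic
# terms (3)-(4) as reconstructed above; parameters from Section 3.
import numpy as np

Da, B, gamma = 0.04, 10.0, 1000.0
alpha_x, alpha_y = 0.5, 0.99
x_w = y_w = 0.0
d, Le, v = 1.0, 1.0, 0.0

def f1(x, y):
    return Da * (1 - x) * np.exp(y / (1 + y / gamma)) - alpha_x * (x - x_w)

def f2(x, y):
    return B * Da * (1 - x) * np.exp(y / (1 + y / gamma)) - alpha_y * (y - y_w)

def wave_rhs(zeta, s, u):
    x, y, yp = s
    dx = f1(x, y) / (v - u)                     # Eq. (7); singular at u = v
    dyp = -((u * Le - v) * yp + f2(x, y)) / d   # Eq. (9)
    return [dx, yp, dyp]                        # Eq. (8): dy/dzeta = y_p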
3. Results
By applying a bifurcation analysis to the homogeneous system associated with Eqs. (1) and (2) (i.e., dropping the diffusion and convection terms), we find that for fixed Da = 0.04, B = 10.0, α_x = 0.5, γ = 1000.0, x_w = y_w = 0, Le = 1.0 and varying α_y the system has two stable steady states for 0.871 < α_y < 1.034, where the lower boundary is defined by a saddle-node bifurcation and the upper boundary is identified with a Hopf bifurcation point which destabilizes the high-conversion steady state. This indicates that within this interval a front wave may exist, while above the Hopf point a pulse wave is expected since the system becomes excitable. If α_y is chosen close to the upper bound of the existence of the front wave, then we find that two different stable front waves exist in the system: (1) a very steep front moving from the high-conversion ("upper") steady state to the low-conversion ("lower") steady state, and (2) a less steep front which moves in either direction, depending on the convection velocity. A space-time plot of conversion in Fig. 1a-d shows these two fronts in a bounded flow system as the convection velocity v is increased. For small v (Fig. 1a), the steep front spreads initially along the flow at a fast rate, reaches the end of the reactor and converts into the less steep front, which is reflected back and moves against the flow at a slower rate. This front reaches the entrance point of the reactor and annihilates. For a larger v (Fig. 1b), the fast steep front is again reflected at the end, but the less steep counter-front moves very slowly. When the less steep front is exposed to a still higher flow (Fig. 1c), it begins to move along the flow, from the high-conversion state to the low-conversion state. Finally, for sufficiently large v (Fig. 1d), the initially high-conversion state in the reactor near the inlet becomes converted into a low-conversion state by a less steep front which moves along the flow.
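The homogeneous-system analysis described above can be sketched numerically: steady states satisfy f₁ = f₂ = 0 and their stability follows from the Jacobian eigenvalues. The sketch below reuses f₁, f₂ and the parameters from the previous listing; the initial guesses for the three branches are illustrative assumptions.

# Steady states of the homogeneous system and a crude stability check;
# a full continuation/bifurcation analysis (as in CONT) goes beyond this.
import numpy as np
from scipy.optimize import fsolve

def jacobian(x, y, eps=1e-7):
    J = np.empty((2, 2))
    for j, dv in enumerate([(eps, 0.0), (0.0, eps)]):
        J[0, j] = (f1(x + dv[0], y + dv[1]) - f1(x, y)) / eps
        J[1, j] = (f2(x + dv[0], y + dv[1]) - f2(x, y)) / eps
    J[1, :] /= Le                               # Le dy/dtau = f2
    return J

for guess in ([0.05, 0.5], [0.5, 3.0], [0.95, 7.0]):  # assumed branch guesses
    x, y = fsolve(lambda s: [f1(*s), f2(*s)], guess)
    eig = np.linalg.eigvals(jacobian(x, y))
    print(f"x={x:.3f}  y={y:.3f}  stable={np.all(eig.real < 0)}")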
Fig. 1. Space-time plot of front waves - effects of convection velocity v: a) v=0.1, b) v=0.5, c) v=0.9, d) v=1.117; fixed parameters: Da=0.04, d=1.0, Le=1.0, α_y=0.99. Here and in the following figures the shade bar indicates the level of conversion.
It is the less steep front which is of interest here because of its sensitivity to the convection velocity v. Therefore we have used the moving wave coordinate transformation and studied the dependence of the wave velocity u on v by means of the continuation techniques. The result is shown in Fig. 2 for several values of the Lewis number, which is a measure of the heat capacity of the bed. The dynamics shown in Fig. 1 correspond to Le = 1. Because of translational symmetry, in this case the dependence is simply a straight line with slope equal to one, and u can be negative or positive, depending on v, as observed in Fig. 2. In fact, there is also a reflection symmetry, and since the line does not pass through the origin, there is a pair of less steep waves for each v. For Le < 1 (the lower limit for Le as defined here is the porosity of the bed, ε ≈ 0.4) the dependence is highly nonlinear with a
Fig. 2. Wave velocity u vs. convection velocity v for different values of Lewis number Le; fixed parameters: Da=0.04, d=1.0, α_y=0.99. Fig. 3. Wave velocity u vs. Lewis number Le for different values of convection velocity v; fixed parameters: Da=0.04, d=1.0, α_y=0.99.
range of multiply defined waves near v = 0. On the other hand, for Le > 1 the curve is still nonlinear but much flatter. These curves are symmetric about the origin. The effect of variation of Le is indicated in Fig. 3; u becomes smaller with increasing Le. There is a discontinuity on the curves for nonzero v, which occurs when u = v, because the transformed equations (7),(8),(9) become singular at this point. The space-time plots in Fig. 4a,b show a fast and a slow motion of the fronts, respectively.
Fig. 4. Space-time plot of front waves - effects of Lewis number Le: a) Le=0.95, b) Le=2.0; fixed parameters: Da=0.04, d=1.0, v=0.0, α_y=0.99.
The most interesting dynamical feature displayed by the coexisting front waves is the reflection/conversion at the boundaries. This feature is not predicted by the solutions of Eqs. (7),(8),(9) since they relate to an unbounded system. At the reflection point a steep wave is converted to the less steep one or vice versa. In this way a zig-zag pattern of waves moving back and forth is formed. This can happen for both non-flow (Fig. 5) and flow (Fig. 6) systems. Fig. 5a shows waves rebounding at both boundaries; upon a slight variation in a parameter other than v the zig-zag pattern persists (as already indicated in Fig. 4a for Le = 0.95), but a larger variation causes the less steep (and slower) wave to become unstable, and a narrow zig-zag pattern near the reactor end results, see Fig. 5b (here variation of α_y is used).
Fig. 5. Spatiotemporal zig-zag structures - effects of parameter α_y: a) α_y=0.99, b) α_y=0.984421875; fixed parameters: Da=0.04, d=1.0, Le=1.0, v=0.0.
In a flow system, the zig-zag pattern is even more robust: as α_y is varied, the waves may bounce at the boundaries (Fig. 6a), or the less steep (but faster) wave passes for a while through the system and then, because of an intrinsic instability, converts to a steeper front rebounding at the inlet (Fig. 6b).
Fig. 6. Spatiotemporal zig-zag structures - effects of parameter α_y: a) α_y=0.99, b) α_y=0.9825625; fixed parameters: Da=0.04, d=1.0, Le=1.0, v=3.0.
4. Conclusions
We have shown that multiple travelling front waves can occur in a reaction-diffusion-convection system. These waves can be studied in an unbounded system by using a wave transformation and solving a special boundary value problem with the use of continuation methods. These results provide various parameter dependences of the velocity of the wave. Moreover, in a bounded system the waves move back and forth through the system and form remarkable zig-zag patterns.
Acknowledgments: This work has been supported by fund MSM 223400007 of the Czech Ministry of Education and grant No. 201/02/0844 of the Czech Grant Agency.
5. References
Kapral, R. & Showalter, K. (1995). Chemical Waves and Patterns. Dordrecht: Kluwer Academic Publishers.
Kohout, M., Schreiber, I. & Kubicek, M. (2000). Equadiff 99, B. Fiedler et al., eds., Vol. 2. Singapore: World Scientific, 1090-1092.
Kohout, M., Schreiber, I. & Kubicek, M. (2001). ZAMM 81, Suppl. 3, 615-616.
Kohout, M., Schreiber, I. & Kubicek, M. (2002). Comp. and Chem. Eng. 26, 517-527.
Krishnan, J., Kevrekidis, I.G., Or-Guil, M., Zimmermann, M.G. & Bar, M. (1999). Comput. Method. Appl. M. 170, 253-275.
Kubicek, M. (1976). ACM Trans. Math. Software 2, 98-107.
Kubicek, M. & Marek, M. (1983). Computational Methods in Bifurcation Theory and Dissipative Structures. New York: Springer Verlag.
Kubicek, M. & Schreiber, I. (1998). ZAMM 78, Suppl. 3, 981-982.
Marek, M. & Schreiber, I. (1995). Chaotic Behaviour of Deterministic Dissipative Systems. Cambridge: Cambridge University Press.
Nekhamkina, O., Rubinstein, B.Y. & Sheintuch, M. (2000). AIChE Journal 46 (8), 1632-1640.
Sheintuch, M. & Nekhamkina, O. (2001). Cat. Today 70, 369-382.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Modelling and Identification of the Feed Preparation Process of a Copper Flash Smelter
Mikko Korpi, Hannu Toivonen and Bjorn Saxen
Outokumpu Research Oy, P.O. Box 60, FIN-28101 Pori, Finland.
Process Control Laboratory, Department of Chemical Engineering, Abo Akademi, Biskopsgatan 8, FIN-20500 Abo, Finland.
Abstract
In this paper the feed preparation process of the copper flash smelter at the Outokumpu Harjavalta plant is studied from a control theoretic perspective. The aim of the study is to identify a dynamic process model from experimental data and to compare different model structures. A sampling campaign was arranged to provide data for identification. The process was modelled with an adaptive ARX model and with a blending tanks model where the process units were modelled as first-order systems. A Kalman filter was used to estimate the process state. The Kalman filter was the most efficient algorithm for predicting the process output, and it has been successfully applied online to the process.
1. Introduction The smelting of copper concentrates at Outokumpu Harjavalta plant is carried out in a flash smelting furnace. The copper smelting process is very sensitive to variations in the furnace feed. In custom smelters, like in Harjavalta, tens of different concentrates are treated. It has become clear that the variations in the feed mixture composition cause undesirable fluctuations in the matte percent, i.e. the copper content in tapped matte, which is closely related to the oxidation grade in the flash smelting furnace. Stochastic modelling has been applied successfully to similar blending processes (Westerlund et al. 1980). Thus, it was well motivated to analyse the dynamic behaviour of the feed mixture preparation process (Korpi 2000).
2. Process Description
The feed mixture of the copper flash furnace consists of different copper concentrates, copper-containing precipitates and residues, silica flux, slag concentrate and flue dust. The chemical compositions vary considerably from concentrate to concentrate, and the mineralogy of the residues and slag concentrate differs strongly from that of the concentrates. In order to ensure stable oxidation reactions and stable temperatures in the flash smelting furnace, it is important to prepare the feed mixture so that the concentration fluctuations of the main components are small. The blending of the concentrates takes place by discharging day bins with different concentrate mixtures onto a collecting belt conveyor. The concentrate mixture is sieved
and belt conveyed to two parallel steam dryers. Also an uncontrolled side flow of slag concentrate is added to the dryers. Dried concentrate falls into two dry concentrate silos below the dryers. The silos work as a buffer and the mixture also blends in the silos. From these silos the concentrate is transferred pneumatically to a third silo before being fed to the flash furnace through a loss-in-weight silo. A typical feed rate is some 60-100 tons of feed mixture per hour. Figure 1 presents schematically the preparation of the feed mixture for the copper flash furnace.
Figure 1. The preparation of the feed mixture for a copper flash furnace. Samples for identification were collected at A and B.
3. Sampling Campaign
To achieve good data for identification one should excite the process sufficiently. However, in this study it was not allowed to excite the process in any way. One was also interested in the size and causes of the natural variation in the chemical composition of the feed mixture. Therefore, a two-week data collecting campaign was arranged at the plant to identify a dynamic model between the input concentration from the day bins and the output concentration to the flash furnace. During the campaign one sample an hour was collected from the input (A in Figure 1) and the output (B in Figure 1). It was assumed that the feed to the furnace was homogeneous after blending in the dry concentrate silos. In contrast, the concentrate mixture composition to the drying was known to be inhomogeneous and rapidly varying as a function of time. Thus, on each round, the sample from the belt conveyor before drying was collected over fifteen minutes to filter the fastest variations. Samples from the slag concentrate side flow were
taken four times a day. The mass fractions of nine components were analysed from each sample with an XRF analyser. Linear data reconciliation was used to estimate the flow rates from the measurements. The process conditions vary strongly, as can be seen from Figure 2, where some of the silo masses and the mass flow rate to the flash furnace are presented. Due to the large variations in the process dynamics, it is supposed that a time-invariant model cannot describe the process properly.
Figure 2. The masses in silos 1 and 2 (left plot) and the mass flow to the furnace (right plot).
4. Process Modelling and Identification
4.1. Time varying ARX-model
The first modelling approach was adaptive prediction based on a time-varying ARX model

y_k = φ_k Θ_k + e_k
Θ_{k+1} = Θ_k + w_k   (1)

where y_k is the output concentration vector, φ_k = [y_{k-1} y_{k-2} y_{k-3} u_{k-1} u_{k-2} u_{k-3}] is a measurement matrix of past outputs y_{k-i} and inputs u_{k-i}, Θ_k is the parameter vector, and e_k and w_k are white noise sequences. Three input parameters and three output parameters were used in the model. The input concentration u was naturally calculated from the concentrate and slag concentrate concentrations weighted according to their mass flows. The ARX-parameters Θ_k were estimated with the Kalman filter algorithm (Soderstrom and Stoica, 1989). As there were nine analysed mass elements that all went through the same dynamic mixing process, the estimation of the ARX-parameters was quite accurate. There was always variation in some of the elements, and so the fact that one was not allowed to excite the input variables was not so critical. The ability to predict the output composition one hour forward in time was thought to be quite good.
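A minimal sketch of this estimation scheme follows (single output component shown; the noise variances q and r are assumed tuning values, not those used in the study).

# Kalman-filter estimation of the ARX parameters of Eq. (1), with theta
# modelled as a random walk (cf. Soderstrom & Stoica, 1989).
import numpy as np

n_par = 6                        # 3 past outputs + 3 past inputs
theta = np.zeros(n_par)          # parameter estimate
P = np.eye(n_par) * 10.0         # parameter covariance
q, r = 1e-4, 1e-2                # assumed random-walk / measurement variances

def arx_update(theta, P, phi, y_meas):
    """One prediction/correction step for y_k = phi_k theta_k + e_k."""
    P = P + q * np.eye(n_par)                # theta_{k+1} = theta_k + w_k
    y_pred = phi @ theta
    S = phi @ P @ phi + r                    # innovation variance (scalar)
    K = P @ phi / S                          # Kalman gain
    theta = theta + K * (y_meas - y_pred)
    P = P - np.outer(K, phi @ P)
    return theta, P, y_pred

# usage with logged data:
# phi_k = [y_{k-1}, y_{k-2}, y_{k-3}, u_{k-1}, u_{k-2}, u_{k-3}]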
4.2. Blending tanks model
The ARX-model showed that the blending in the process is very efficient. As there are measurements that are related to the residence times in the dry concentrate silos, and the blending in the silos is efficient, it was natural to derive a deterministic model with an assumption of ideal mixing in the dry concentrate silos and the steam dryers (Korpi 2000). The model is represented as a state space model with six times nine states, corresponding to the nine compositions in each of six units. Luckily, the calculation of the model parameters is relatively simple because of the batch-wise function of the process: The loss-in-weight silo is filled about every tenth minute from the previous silo, after which concentrate is pneumatically transported to this silo from the silos after the dryers. These two silos are continuously filled with the concentrate from the dryers, and the mass in the steam dryers can be assumed to be constant. The steam dryers and silo units are modelled as first-order systems

x_dryer,k+1 = a_1,k x_dryer,k + (1 − a_1,k) x_input,k   (2a)
x_silo,k+1 = a_12,k x_dryer,k + a_2,k x_silo,k + (1 − a_2,k − a_12,k) x_input,k   (2b)
where the dynamics are calculated using the measured silo masses and flow rates:

a_1,k = m_dryer / (m_dryer + m_input,k),
a_12,k = (1 − a_1,k) m_input,k / (m_silo,k + m_input,k)   and
a_2,k = m_silo,k / (m_silo,k + m_input,k).
A simulation step for the loss-in-weight silo and the third silo can be written as

x_silo,k+1 = (m_silo,k x_silo,k + m_in,k x_in,k) / (m_silo,k + m_in,k).   (2c)

In equation (2), m_i is the mass and x_i is the concentration in unit i.
4.2.1. State estimation using a Kalman filter
The simulation from the input composition with the blending tanks model gave a surprisingly good result. The next step was to use a model-based filtering algorithm. The natural choice for this linear case was the discrete Kalman filter. As the analysed input compositions of concentrate and slag concentrate are noisy, these input sequences were included in the model as drifting processes. Also the slag concentrate settler was included in the model as a first-order system. The Kalman filter applied to the blending tanks model gave very good predictions of the feed mixture composition. Also the effect of the sampling interval on the ability to predict the feed mixture composition was studied with the Kalman filter. It was noticed that a significant improvement would be achieved in estimation capacity if the furnace feed mixture composition were analysed more often than the normal practice, which is one sample every fourth hour.
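One simulation step of Eqs. (2a)-(2c), as reconstructed above, can be sketched as follows; the masses, flows and compositions are placeholder values, and the nine analysed components are carried as a vector.

# Blending tanks model step: dryer and silo as first-order mixers with
# mass-dependent coefficients (Eqs. 2a-2b), plus ideal batch mixing (Eq. 2c).
import numpy as np

def blending_step(x_input, x_dryer, x_silo, m_dryer, m_silo, m_input):
    a1 = m_dryer / (m_dryer + m_input)                  # dryer hold-up
    a2 = m_silo / (m_silo + m_input)                    # silo hold-up
    a12 = (1.0 - a1) * m_input / (m_silo + m_input)     # dryer -> silo transfer
    x_dryer_new = a1 * x_dryer + (1.0 - a1) * x_input           # (2a)
    x_silo_new = (a12 * x_dryer + a2 * x_silo
                  + (1.0 - a2 - a12) * x_input)                 # (2b)
    return x_dryer_new, x_silo_new

def ideal_mix(x_silo, m_silo, x_in, m_in):
    # Loss-in-weight / third silo, Eq. (2c): ideal mixing of two batches
    return (m_silo * x_silo + m_in * x_in) / (m_silo + m_in)

x_in = np.array([0.012, 0.30, 0.25] + [0.05] * 6)   # placeholder composition
x_dry, x_sil = blending_step(x_in, 0.9 * x_in, 1.1 * x_in,
                             m_dryer=40.0, m_silo=100.0, m_input=15.0)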
4.2.2. Kalman filtering for uncertain dynamics
Although the Kalman filter gave a satisfying result, it was also of interest to investigate whether an extended filtering algorithm, where the model parameters are considered uncertain, would improve the estimation. The origin of the thought was that the effective mass taking part in the mixing in a silo is probably smaller than the weighed silo mass, and there are also uncertainties in the mass flow measurements, which lead to uncertainty in the model parameters. The state space model for a system with uncertain model parameters can be represented as

x_{k+1} = (A_k + Σ_i α_{k,i} A_{k,i}) x_k + ξ_k
y_k = (C_k + Σ_j η_{k,j} C_{k,j}) x_k + v_k   (3)

where α_{k,i}, η_{k,j}, ξ_k and v_k are uncorrelated white noise sequences. The optimal state estimator for the process (3) is presented in (Pakshin, 1978).
4.3. Comparison of the models
The models were compared by their ability to predict the feed mixture composition one hour forward in time. The normalised estimation error variances of the one-hour-ahead prediction of arsenic content with the time-invariant ARX, adaptive ARX, Kalman filter and the extended filtering algorithm are 0.116, 0.110, 0.105 and 0.105, respectively. The normalised simulation error covariance with the blending tanks model for arsenic is 0.227. Generally the Kalman filter gave about 5 % smaller error covariance than the adaptive ARX model. The extended filtering algorithm, which takes into account the parameter uncertainties, improved the estimation, but the improvement was thought to be insignificant (less than 1 %). The estimations are presented in Figure 3.
5. Conclusions
In this study it has been shown that the feed mixture preparation process can be modelled satisfactorily with a blending tanks model, where the process units are modelled as first-order systems. The Kalman filter was the most efficient algorithm in estimating the process state. Also prediction with the ARX models gave satisfying results. More frequent sampling would improve the prediction significantly. Acquiring an online analyser at the smelter to analyse the feed mixture composition is highly recommended. Until now two online applications of the blending tanks model have been implemented at the smelter: a simulation model estimating the slag concentrate mass fraction in the feed mixture, and a Kalman filter estimation of the feed mixture composition. Both models include a shared data reconciliation part, which estimates the masses and mass flows in the process, and a shared calculation of the state space model parameters according to the blending tanks model. Both models also predict the process behaviour forward in time.
Figure 3. One hour forward prediction of arsenic content in the feed mixture: (*) analysis, (—) time-invariant ARX, (—) adaptive ARX, (...) Kalman filter applied to the blending tanks model.
6. Acknowledgements The authors are indebted to Outokumpu Harjavalta Metals Oy for the permission to publish the above results.
7. References
Korpi, M., 2000, Modelling and Control of the Feed Preparation Process of a Copper Flash Smelter. Master's thesis, Dept. of Chemical Engineering, Abo Akademi, Turku, Finland.
Pakshin, P.V., 1978, State Estimation and Control Synthesis for Discrete Linear Systems with Additive and Multiplicative Noise. Avtomatika i Telemekhanika, No. 4, 75-85.
Soderstrom, T., Stoica, P., 1989, System Identification. Prentice Hall, Cambridge.
Westerlund, T., Toivonen, H., Nyman, K.-E., 1980, Stochastic Modelling and Self-Tuning Control of a Continuous Cement Raw Material Mixing System. Modelling, Identification and Control 1, 17-37.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
CFD Modelling of Drag Reduction Effects in Pipe Flows
Jukka Koskinen, Timo Pattikangas, Mikko Manninen, Ville Alopaeus, Kari I. Keskinen, Karl Koskinen and Joakim Majander
Neste Engineering Oy, POB 310, FIN-06101 Porvoo, Finland
Helsinki University of Technology, Laboratory of Chemical Engineering and Plant Design, POB 6100, FIN-02015 HUT, Finland
Enprima Engineering Oy, POB 61, FIN-01601 Vantaa, Finland
VTT Processes, POB 1606, FIN-02044 VTT, Finland
Abstract
The effect of drag reducers on the turbulence is modelled with computational fluid dynamics (CFD) by using a two-layer turbulence model. In the laminar buffer layer, the one-equation model of Hassid and Poreh (1975) is used to describe the enhanced dissipation of turbulence caused by drag reducers. The standard k-ε model is applied in the fully turbulent regions. The flow conditions necessary to elongate the polymer, the drag reduction efficiency of polymers of different apparent molar masses and their degradation kinetics have been measured. These data have been used in the model development.
1. Introduction
Turbulent drag reduction by polymer additives is the most spectacular effect at the interface of fluid dynamics and long-chain-polymer physics. One of the most important applications of long-chain polymers, i.e. drag reducing agents (DRA), is long oil pipelines, in which remarkable savings in pumping costs and an increase in pumping capacity can be achieved. The pressure loss in a flow through a pipe may be reduced by up to 70 %. Jovanovic and Durst have shown that the long-chain polymers enhance the damping of transverse fluctuations close to the pipe wall. The enhanced anisotropic nature of turbulence close to the wall is caused by the elongation mechanism of polymer chains due to shear forces. The elongation of a polymer chain in an extensional flow follows the elongation of the fluid element if the strain rate exceeds a critical value. The drag reduction effects are related to the elongation and relaxation properties of polymers and depend on polymer characteristics like apparent molar mass, molar mass distribution and chain structure, as well as the polymer concentration and shear stresses. Furthermore, fully stretched polymer chains can be degraded by high shear stresses, leading to a loss of drag reduction effects. The damping of turbulence should be based on anisotropic turbulence models, where the damping effects are described in each direction by polymer drag reduction properties. Attempts to model the DR effect have been made by Hassid and Poreh (1975) with a one-equation turbulence model, Poreh and Hassid (1977) and Patterson et al. (1977) with a modified k-ε model, and Sureshkumar et al. (1997) by direct numerical simulation. In this paper the effect of drag reducers on the turbulence has been modelled with a two-layer turbulence model employing the one-equation model of Hassid and Poreh (1975) in the near-wall region and the standard k-ε model in the fully turbulent regions. The flow conditions necessary to elongate the polymer, the DR efficiency of polymers of different apparent molar masses and their degradation kinetics were measured using a rotating shear viscometer, and the results were used in the model development. The developed DR model is incorporated into the commercial CFD code STAR-CD (Computational Dynamics, 1999) for simulating local DR effects in pipe flow.
2. CFD Model Description
The one-equation model of Hassid and Poreh (1975) consists of the standard transport equation for the turbulent kinetic energy and an algebraic equation for the dissipation. The equation for ε in the STAR-CD implementation is

ε = (k^(3/2)/l) (C_D1 + C_D2/Re_l),   (1)

where f_μ = 1 − exp(−Re_l/A_μ), Re_l = √k l/ν, and l = κy. In the two-layer concept, the one-equation model is applied when f_μ < 0.95, corresponding to Re_l ≈ 100. The standard k-ε model is used in fully turbulent regions. The model constants have the following values: A_μ = 34.48, C_D1 = C_μ^(3/4) = 0.164, C_D2 = 0.336. The von Karman constant κ was chosen as the adjustable parameter because it directly determines the slope of the velocity profile in the logarithmic layer. Hassid and Poreh (1975) used a different form of the length scale l and A_μ as the model parameter in their attempt to model the DR effect.
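A direct transcription of these near-wall relations follows (our sketch of the reconstruction above; the state values passed in are placeholders, and the kinematic viscosity is the 2 cSt solvent value quoted in Section 3.2).

# Near-wall one-equation relations of Eq. (1): length scale, turbulence
# Reynolds number, dissipation and damping function for given k and y.
import numpy as np

C_D1, C_D2, A_mu = 0.164, 0.336, 34.48   # model constants quoted above

def near_wall_epsilon(k, y, nu, kappa=0.419):
    l = kappa * y                         # l = kappa * y
    Re_l = np.sqrt(k) * l / nu            # turbulence Reynolds number
    eps = k**1.5 / l * (C_D1 + C_D2 / Re_l)
    f_mu = 1.0 - np.exp(-Re_l / A_mu)     # damping function
    return eps, f_mu                      # one-eq. model used while f_mu < 0.95

eps, f_mu = near_wall_epsilon(k=0.01, y=1e-4, nu=2e-6)   # placeholder state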
3. Adjustment of CFD Model Parameters against Measurements of Drag Reduction and Degradation Effects
3.1. Description of measuring device
The measurement apparatus, shown in Fig. 1, is equipped with a rotating cylinder of diameter about 100 mm and baffles around the vessel. The rotation speed of the cylinder can be adjusted within a range of 0...3200 rpm. The vessel stands on two bottom discs with an annular ball bearing in between, allowing frictionless movement of the upper disc. The drag reduction effects can be accurately measured under turbulent conditions by measuring the torque force of the bottom plate against the balance sensor. The drag reduction in the measurement apparatus is defined as DR% = 100(1 − T_w/T_w0), where T_w and T_w0 are the measured torque values with and without DRA, respectively.
Fig. 1. Drag Reduction measurement apparatus. Dimensions in millimetres.
3.2. Measurements and results
Drag reduction effects were measured for different long-chain polymers of apparent molar mass within the range of 10⁶...30·10⁶ g/mol dissolved in hydrocarbon solution. The density of the hydrocarbon solvent was 900 kg/m³ and its kinematic viscosity 2 cSt at room temperature. The results and conditions of the DR effect are shown in Figures 2 and 3. The following conclusions are drawn. The DR effect of the polymer is activated at a cylinder rotation speed of 1500 rpm. The effect is enhanced as the rotation speed and the shear force close to the cylinder increase. The DR effect increases as the active concentration of DRA is increased. The effect increases linearly between 2 and 6 ppm, decaying at higher active bulk concentrations. The maximum drag reduction of 35 % is found at a bulk concentration of 8 ppm. High molar mass polymers are more effective drag reducers in hydrocarbon solution than lower ones. The minimum apparent molar mass of 2·10⁶ g/mol still shows some DR effects in hydrocarbon solution. Degradation effects start at a rotation speed of 2000 rpm. The degradation is very fast at rotation speeds higher than 2700 rpm.
Fig. 2. Drag reduction without degradation effects of the DRA polymers at effective polymer bulk concentrations of 2, 4, 6 and 8 ppm (left panel: DR% vs. rotation speed for M=30·10⁶ g/mol; right panel: DR% = f(M, c) at 2700 rpm).
Polymer is degraded by shear forces down to a specific molar mass distribution. The degradation increases at rotation speeds higher than 2700 rpm. The polymer still shows DR effects after degradation, depending on its apparent molar mass and concentration.
Fig. 3. Degradation effects at a bulk concentration of 8 ppm and rotation speeds of 2000, 2700 and 3100 rpm; M₀ = 14·10⁶ g/mol.
3.3. Fitting of model parameters against measured data
As shown above, in principle only one adjustable parameter is needed for the Hassid-Poreh model. This parameter κ depends on the chain length and concentration of the DR polymer and on the local wall shear stress. The dependence is of the form κ = f(c, M, τ_w), where c is the active concentration of DRA, M is the local apparent molar mass of DRA and τ_w is the local wall shear stress. Parameters are fitted against measured data and simulation results according to the following procedure:
1. κ is determined from the relation κ = f(DR%) shown in Fig. 5(a) in section 4.2.
2. Parameters are fitted against the measured DR data.
3. Parameters are fitted against CFD simulations and more accurate parameters are obtained.
The model for the local apparent molar mass of DRA is M = f(M₀, γ̇, t'), where M₀ is the initial apparent molar mass, γ̇ is the second invariant of the deformation tensor and t' is the retention time.
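Steps 1-2 of this procedure can be sketched as follows; the tabulated (κ, DR%) pairs standing in for the simulated curve of Fig. 5(a) and most of the measured points are placeholders (only the 35 % maximum at 8 ppm is taken from the measurements above).

# Invert the simulated DR%(kappa) relation to obtain kappa for each
# measured DR%, then regress kappa against the DRA concentration.
import numpy as np

kappa_grid = np.array([0.15, 0.20, 0.25, 0.30, 0.35, 0.419])
dr_grid = np.array([45.0, 35.0, 25.0, 16.0, 8.0, 0.0])   # assumed CFD curve

def kappa_from_dr(dr_measured):
    # interpolate the monotone DR%(kappa) relation in reverse
    order = np.argsort(dr_grid)
    return np.interp(dr_measured, dr_grid[order], kappa_grid[order])

c_ppm = np.array([2.0, 4.0, 6.0, 8.0])            # measured concentrations
dr_meas = np.array([12.0, 22.0, 30.0, 35.0])      # DR% at 2700 rpm (assumed,
                                                  # except the 35 % maximum)
kappa_fit = kappa_from_dr(dr_meas)
slope, intercept = np.polyfit(c_ppm, kappa_fit, 1)  # first-pass kappa = f(c)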
4. Results of CFD Simulations
4.1. Simulations of drag reduction effects in pipe flows
Mizushina and Usui (1977) have measured velocity profiles and longitudinal turbulent intensity profiles in drag reducing pipe flow. These data were used as the first validation data for the CFD modelling. Experimental results at Reynolds numbers around 12000 at four DRA concentrations were considered. Simulations of the pipe flow were carried out in order to find the values of κ which give the best fits to the measured velocity profiles u⁺ = u/u_τ vs. y⁺ = u_τ y/ν. The calculated results are shown in Fig. 4(a) together with the experimental data. A good agreement can be achieved by adjusting a single parameter. Figure 4(b) shows the comparison of measured and simulated dimensionless turbulent kinetic energy, k⁺ = k/u_τ². In the experiment, only the longitudinal fluctuating velocity
component was measured, and the kinetic energy was estimated from u'², with the transverse components taken as one half of the longitudinal fluctuation. The comparison indicates a good overall agreement. The drag reduction is also predicted well in the simulations. The predicted (measured) DR percentages are 36 (35), 56 (51) and 64 (65) for the concentrations 20, 50 and 100 ppm, respectively.
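The wall-unit profiles of Fig. 4(a) can be reproduced in outline from the two reference laws quoted in the figure legend; the additive constant in the drag-reduced log law is an assumption.

# Standard log law and Virk's maximum drag reduction asymptote (from the
# figure legend), plus a drag-reduced log law with fitted kappa.
import numpy as np

y_plus = np.logspace(1, 3, 50)

u_standard = 2.4 * np.log(y_plus) + 5.5     # Newtonian log law (kappa = 0.419)
u_virk = 11.7 * np.log(y_plus) - 17.0       # Virk's ultimate profile

def log_law(y_plus, kappa, B=5.5):
    # smaller kappa -> steeper profile, higher u+ (assumed constant B)
    return (1.0 / kappa) * np.log(y_plus) + B

u_50ppm = log_law(y_plus, kappa=0.18)       # kappa fitted for 50 ppm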
Fig. 4. Comparison of CFD simulations and experimental results of Mizushina and Usui (1977). (a) Velocity profiles with κ values 0.419 (0 ppm, Re=12500), 0.25 (20 ppm, Re=11500), 0.18 (50 ppm, Re=12400) and 0.15 (100 ppm, Re=13200); the standard log law u⁺ = 2.4 ln(y⁺) + 5.5 and Virk's asymptote u⁺ = 11.7 ln(y⁺) − 17 are included for reference. (b) Profiles of dimensionless turbulent kinetic energy (0 ppm, Re=10900; 20 ppm, Re=11500; 50 ppm, Re=12400; 100 ppm, Re=13200). Computed results are shown with lines: solid line 0 ppm, dash-dotted line 20 ppm, dotted line 50 ppm, and dashed line 100 ppm.
4.2. Simulations of drag reduction effects in measurement apparatus
CFD simulations of the measurement apparatus were carried out using a computational mesh with a near-wall refinement on the surface of the rotating cylinder. The simulations reported here were done without the additional baffle at the bottom of the tank (see Fig. 1). Computations with different κ values were first conducted in order to find out the dependence of the drag reduction on κ. The drag reduction was calculated from the torque on the rotating cylinder due to wall shear forces. The prediction of the drag reduction as a function of κ is shown in Fig. 5(a) for 2700 rpm. This result can be used to associate the relevant physical parameters to the model. Fig. 5(b) illustrates how this is accomplished for the DRA concentration. Using the measured drag reduction data for the given speed of rotation and DRA apparent molar mass, the parameter κ can be plotted as a function of the DRA concentration.
Fig. 5. (a) Dependence of the drag reduction in the measurement apparatus as a function of the parameter κ. (b) Fit parameter κ as a function of the DRA concentration.
5. Conclusions
An extensive set of measurements with a special apparatus was carried out to determine the dependence of the drag reduction on the DRA concentration, DRA apparent molar mass and speed of rotation. The results showed a clear onset of the DR effect at a cylinder rotation speed of 1500 rpm and enhanced drag reduction as the rotation speed is increased. The DR effect was found to increase with increasing DRA concentration, having a maximum of about 35 % at c = 8 ppm. Similarly, a larger apparent molar mass of the polymer produced an increased drag reduction. Degradation of the DRA polymer was also measured. The two-layer turbulence model with only one adjustable parameter, the von Karman constant κ, was found to be promising in describing the DR effect, although it does not include the anisotropy of the turbulence. Validation calculations against measurements in pipe flow show a good agreement. First simulations of the measurement apparatus demonstrate that the experimental results can be used to infer the dependence of κ on the relevant physical parameters.
6. References
Computational Dynamics, 1999, STAR-CD Version 3.10 User Guide.
Hassid, S. and Poreh, M., 1975, A turbulent energy model for flows with drag reduction, Trans. ASME, 97 (2), 234.
Mizushina, T. and Usui, H., 1977, Reduction of eddy diffusion for momentum and heat in viscoelastic fluid flow in a circular tube, Phys. Fluids 20 (10), S100.
Patterson, G.K., Chosnek, J. and Zakin, J.L., 1977, Turbulence structure in drag reducing polymer solutions, Phys. Fluids 20 (10), S89.
Poreh, M. and Hassid, S., 1977, Mean velocity and turbulent energy closures for flows with drag reduction, Phys. Fluids 20 (10), S193.
Sureshkumar, R., Beris, A.N. and Handler, R.A., 1997, Direct numerical simulation of the turbulent channel flow of a polymer solution, Phys. Fluids 9 (3), 743.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Modelling and Simulation of a Combined Membrane/Distillation Process
Peter Kreis, Andrzej Gorak
University of Dortmund, Chemical Engineering Department, Dortmund, Germany
Email: [email protected], Fax: +49 231 755-3035
Abstract This theoretical study is focused on the process combination of a distillation column and a pervaporation unit located in the side stream of the column. This hybrid membrane process can be applied for the separation of azeotropic mixtures such as acetone, isopropanol and water. Water is removed from the side stream of the column by pervaporation, while pure acetone and isopropanol are obtained at the top and bottom of the column. Detailed simulation studies show the influence of decisive structural parameters like side stream rate and recycle position as well as operational parameters like reflux ratio and mass flow on concentration profiles, membrane area and product compositions.
1. Introduction
Distillation is still the most common unit operation for separating liquid mixtures in the chemical and petroleum industry, because the treatment of large product streams and high purities is possible with a simple process design. Despite this, the separation of azeotropic mixtures into pure components requires complex distillation steps and/or the use of an entrainer. Industrially applied processes are azeotropic, extractive or pressure swing distillation (Stichlmair and Fair, 1998). Another sophisticated method for the separation of binary or multicomponent azeotropic mixtures is the hybrid membrane process, consisting of a distillation column and a membrane unit. The effects of synergy for such an integrated process were investigated in recent theoretical studies, e.g. for the dehydration of alcohols (Sommer et al., 2002), for the production of fuel additives such as MTBE (Hommerich and Rautenbach, 1998b) or for the separation of non-ideal ternary alcohol/water mixtures (Kuppinger et al., 2000), (Brusis et al., 2001). Process integration allows for a significant reduction of equipment and operational costs as well as considerable energy savings compared to conventional distillation processes. Despite all these advantages, membrane separation is not yet established in the chemical industry due to low permeate fluxes, short membrane lifetimes and the lack of a general design methodology and detailed process know-how. As recent studies show promising progress in the development of reliable high-flux membranes, it is very likely that such hybrid processes will be applied on an industrial scale in the near future. In this work the separation of the ternary mixture of acetone, isopropanol and water using a hybrid membrane process is studied. This non-ideal mixture with a minimum-
boiling azeotrope between isopropanol and water occurs in the production of acetone via isopropanol (Turton et al., 1998).
Figure 1: Principle of pervaporation (PV, p_feed > p_sat) and vapour permeation (VP, p_feed ≤ p_sat).
2. Pervaporation
Besides high selectivity and compact design, pervaporation (PV) and vapour permeation (VP) facilitate simple integration into existing processes. Therefore both membrane processes are very suitable for hybrid processes. The principles of pervaporation and vapour permeation are very similar. Volatile components are separated by a non-porous membrane due to different sorption and diffusion behaviour. Consequently the separation is not limited by the vapour-liquid equilibrium, which is the main advantage compared to common mass transfer processes. The driving force is the gradient of the chemical potential, which is generated by lowering the partial pressure of the most permeating component on the permeate side. Usually this is achieved by applying vacuum and/or an inert sweeping gas. The main difference between PV and VP is that the feed in VP is supplied as vapour, whereas in PV the feed components change their aggregate state from liquid to vapour while permeating through the membrane. The energy to vaporise the permeate is provided by the liquid feed stream; therefore the liquid stream exits the membrane module at a decreased temperature. A characteristic parameter of membrane processes is the permeability. In general the permeability P_i is proportional to the diffusivity D_i,Memb and solubility S_i,Memb of each component in the membrane material:

P_i,Memb = D_i,Memb · S_i,Memb   (1)
The parameters, and consequently the efficiency of PV, strongly depend on the properties of the membrane material. Common membrane materials are various dense polymers and microporous inorganic membranes (zeolites, silica, ...), either with hydrophilic or organophilic character. Furthermore, composite membranes offer the possibility to combine different materials for the dense active layer and the porous support layer. Besides the membrane material, fluid hydrodynamics influences the efficiency of separation. The pressure drop, especially on the permeate side, reduces the driving force of the most permeating components.
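As an illustration of this driving force, a simple solution-diffusion flux estimate follows (our sketch with placeholder values; the paper's detailed flux models are richer than this).

# Illustrative pervaporation flux: partial-pressure difference across the
# membrane, lowered on the permeate side by vacuum. All values are assumed.
def pervap_flux(P_i, x_i, gamma_i, p_sat_i, y_i, p_perm):
    """Flux of component i: J_i = P_i * (feed fugacity - permeate fugacity)."""
    driving_force = x_i * gamma_i * p_sat_i - y_i * p_perm
    return P_i * max(driving_force, 0.0)

# water through a hydrophilic membrane at assumed conditions
J_w = pervap_flux(P_i=2e-6,       # permeability, mol/(m2 s Pa), assumed
                  x_i=0.10, gamma_i=3.5, p_sat_i=20e3,   # feed side, Pa
                  y_i=0.95, p_perm=2e3)                  # permeate side, Pa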
3. Hybrid Membrane/Distillation Processes
Depending on the thermodynamic properties of the mixture, the hybrid process offers multiple configuration options for combining the membrane module and the distillation column. A large number of separation stages and high reflux ratios are necessary to fractionate close-boiling components using conventional distillation processes. For the separation of such mixtures the membrane is located in the side stream of the column (fig. 2a). Both streams, permeate and retentate, are fed back to the column. Due to its higher separation efficiency the membrane assists the separation in the column. This leads to a significant reduction of column stages.
Figure 2: Hybrid membrane process to separate a) close boiling, b) binary azeotropic and c) multicomponent mixtures (Hommerich, 1998a).
Most investigations are focused on the separation of non-ideal binary mixtures, e.g. the purification of ethanol or isopropanol. The main purpose of the membrane unit is to overcome the azeotropic point of the top product (fig. 2b). A further enrichment up to the desired product purity can be achieved either with the membrane unit or with a second column. The objective of this study is to investigate the process configuration illustrated in figure 2c. Therefore the dehydration of the ternary mixture acetone, isopropanol and water into pure components in one distillation column combined with a hydrophilic membrane unit located in the side stream of the column is analysed. The water-depleted retentate from the permeation zone is returned to the column while the permeate is removed from the process. In this configuration, the operating conditions for the membrane separation are more suitable because the side stream can be placed near the maximum concentration of the most permeating component, which leads to an increased driving force and consequently to smaller membrane areas.
4. Modelling
For a fundamental understanding of the hybrid process it is necessary to describe the interactions between the two different unit operations with appropriate models. For basic parameter studies the equilibrium stage model for distillation and a short-cut model for membrane separation are sufficient. The models are well established and the model parameters are quite accessible. This combination gives a first survey of the influence of structural and operational parameters on the concentration profiles in the column and on the maximum amount of water which can be removed.
Figure 3: Hybrid process in the simulation environment Aspen Custom Modeler™.
On the other hand, the definition of a feasible operating region using short-cut models is not possible. The prediction of the mass transfer in membranes is the decisive factor of the entire hybrid process. The resulting permeate fluxes, and consequently the membrane area, are very important parameters for estimating the economic potential and the feasibility of the entire hybrid process. Therefore detailed models for the membrane unit with a semi-empirical and physical background are developed in this work to characterise the membrane separation step. The flexible model structure enables the choice of different modelling approaches for the permeabilities. Among them, a short-cut approach with constant permeabilities of each component, a temperature dependence of the permeabilities represented by the Arrhenius equation, and extended model approaches (Hommerich, 1997), (Meyer-Blumenroth, 1996) are implemented to utilise different membrane materials, e.g. inorganic zeolites or glassy and swelling polymeric membranes. Feed and permeate pressure drop, temperature loss due to permeate vaporisation, and phenomena like concentration and temperature polarisation can be taken into account. Additionally, different configurations like lumen and shell feed or co- and counter-current flow are possible. Furthermore, a rate-based model for distillation (Kloker et al., 2002) is used to perform detailed process studies of the integrated process. The relevant models for the distillation column, membrane separation and peripherals are implemented in the simulation environment Aspen Custom Modeler™ (Fig. 3).
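The Arrhenius-type permeability option mentioned above amounts to the following one-liner; the pre-exponential factor and activation energy are placeholder values for a hydrophilic polymer layer, not fitted parameters from this work.

# Temperature-dependent permeability via the Arrhenius equation.
import numpy as np

def permeability_arrhenius(T, P0=1.0e-4, E_act=35e3):
    """P(T) = P0 * exp(-E_act / (R T)); P0 and E_act are assumed values."""
    R = 8.314                     # J/(mol K)
    return P0 * np.exp(-E_act / (R * T))

P_80C = permeability_arrhenius(353.15)   # permeability at 80 degC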
5. Simulation Studies
The following assumptions are made for the theoretical studies with the developed model: the column diameter is 50 mm and the column is equipped with 5 meters of the structured packing Sulzer BX. The feed contains 14.1 weight percent of water, 8.4 weight percent of isopropanol and 77.5 weight percent of acetone. The feed enters the column at 3 m with a mass flow of 2 kg/h. The approach of Meyer-Blumenroth is chosen to take into account the swelling behaviour of the PVA membrane material. Pressure drop on the lumen and shell sides is considered. The necessary model parameters were determined in lab-scale pervaporation experiments. Figure 4 illustrates the strong
influence of the distillate to feed ratio on the concentration profiles in the column. The reboiler heat duty is 1200 Watt and the mass flow of the side stream is set to 4 kg/h. At low distillate to feed ratios (Fig. 4, left), almost pure acetone is present in the distillate; however, the amount of acetone in the side stream is rather large. Therefore the permeate fluxes are small and the membrane area is not sufficient to remove the total amount of feed water entering the column. If the distillate to feed ratio is increased, the mole fraction of acetone in the side stream can be decreased significantly (Fig. 4, right). The mole fraction of water in the side stream is then high enough, and the membrane enables the removal of almost the total amount of feed water. Figure 5 shows the influence of heat duty and side stream mass flow on the required membrane area for the removal of 97.5% of the water entering the column. The distillate flow is 1.56 kg/h. The reference membrane area is marked in the diagram. The operational parameters are taken from the conditions mentioned above. The side stream is set to 4 kg/h and the reboiler heat duty is 1200 Watt. With increasing heat duty the suitable operating region of the hybrid process increases, because high reflux ratios improve the entire separation and the liquid and vapour loads in the column are increased. The water concentration in the stripping section and in the side stream is shifted to higher mole fractions. This leads to higher transmembrane fluxes and consequently smaller membrane areas. The mass flow of the side stream strongly influences the required membrane area.
Figure 4: Liquid column profile of distillation at different D/F ratios.
By increasing the side stream mass flow, the concentration of water in the membrane feed and the water concentration difference between membrane feed and retentate generally decrease. In the case that swelling membrane materials like PVA are applied, it is crucial that at low water concentrations the swelling of the membrane, and consequently the membrane flux, decrease significantly. At moderate mass flows (approx. 3-4 kg/h) small membrane areas are sufficient to reach the desired water removal. At low side stream rates the average permeate fluxes in the module increase, but if the mass flow is raised further, lower average permeate fluxes are obtained due to the phenomena described above.
6. Conclusions
A flexible and robust model of pervaporation and vapour permeation with different modelling depths was developed in the simulation environment Aspen Custom Modeler™. Lab-scale experiments were performed to determine the model parameters of the membrane separation.
Figure 5: Required membrane area to remove 97.5% of the water entering the column.
The membrane model is able to describe the mass transfer through membranes and takes into account the specific effects of different membrane materials. Simulation studies with the non-equilibrium model for distillation and the semi-empirical membrane model illustrate the influence of the mass flow of the side stream and the heating energy on the required membrane area. Both parameters have a major effect on the membrane area. Rigorous models for both unit operations are necessary to perform detailed process studies of the integrated process, because all physical effects have to be taken into account, especially for the membrane separation.
7. References
Brusis, D., Stichlmair, J. and Kuppinger, F.F. (2001), Chemie Ingenieur Technik, 73, 624.
Hommerich, U. (1998a), Ph.D. Thesis, RWTH Aachen, Germany.
Hommerich, U. and Rautenbach, R. (1998b), J. of Membrane Science, 146, 53-64.
Kloeker, M., Kenig, E.Y., Gorak, A., Markusse, P., Kwant, G., Goetze, L. and Moritz, P. (2002), In Proc. Int. Conf. "Distillation and Absorption", Baden-Baden, Germany.
Kuppinger, F.-F., Meier, R. and Dussel, R. (2000), Chemie Ingenieur Technik, 72, 333-338.
Meyer-Blumenroth, U. (1989), Ph.D. Thesis, RWTH Aachen, Germany.
Sommer, S., Klinkhammer, B. and Melin, T. (2002), Desalination, 149, 15-21.
Stichlmair, J. and Fair, J.R. (1998), Distillation - Principles and Practice, Wiley-VCH, New York.
Turton, R., Bailie, R.C., Whiting, W.B. and Shaeiwitz, J.A. (1998), Analysis, Synthesis and Design of Chemical Processes, Prentice Hall PTR, New Jersey.
8. Acknowledgement
We are grateful to the Max-Buchner Forschungsstiftung of DECHEMA for the financial support of this research.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Consequences of On-Line Optimization in Highly Nonlinear Chemical Processes
Daniel J. Lacks
Department of Chemical Engineering, Tulane University, New Orleans, LA 70118
Abstract
The effects of on-line optimization, wherein process set-points are adjusted in response to changing economic conditions in order to maximize the profit, are investigated for a simple chemical process. The present results show that running a process with on-line optimization can lead to the process operating at a lower profit than if no optimization were used at all. This effect is not simply that the on-line optimizer fails to find other, higher profit maxima as prices change, but rather that on-line optimization can lead to lower-profit performance at the initial set of prices. The lower-profit performance will occur if price fluctuations cause the profit maximum at which the process operates to disappear at some point in the fluctuation.
1. Introduction Large chemical processes are complex systems that can be operated under a range of conditions described by variables such as flow rates, temperatures, pressures, etc. The economic profit derived from a process is a function of these operating condition variables, and the profit depends parametrically on the prices of the products, raw materials, and utilities (e.g., steam, water, electricity, fuel, etc.). The profit can be a nonlinear function of the operating condition variables, and there may be many local maxima, minima and saddle points of the profit in the operating condition variable space. The profit function in the operating condition variable space can be called a "profit landscape" to emphasize the possibility of many local maxima, minima and saddle points. The optimum operation of the process occurs at the conditions corresponding to the global maximum of the profit landscape. The profit landscape changes with time, due to changes in the prices of raw materials, products, and utilities, and on-line optimization can be used to periodically adjust the operating conditions to follow the profit maximum (e.g., Biegler et al., 1997). On-line optimization is based on local (rather than global) optimization, due to the computational intensity of global optimization procedures and the need to carry out the on-line optimization quickly. This paper addresses consequences of the local optimization nature of on-line optimization on process performance.
2. Methods
A simple chemical process is considered in this paper, based on the Haverly pooling problem (Haverly, 1978; Adhya et al., 1999).
Figure 1. Liquid-liquid extraction stage to remove a contaminant with initial concentration y0 from the stream V. The streams leaving the stage are in equilibrium, and y = mx.
The Haverly pooling problem considers the mixing of crude oil streams of varying sulfur content. The value of the crude oil depends on the sulfur content, and the Haverly economic parameters are given in Table 1. The Haverly pooling problem considers the optimal mixing of input crude oil streams to yield higher-valued crude oil product streams. Note that the possibility of mixing crude oil streams to yield higher-value product streams is due to the nonlinear dependence of the value of the stream on sulfur content. Our modification of the Haverly pooling model is as follows. We consider the increase in value of a single input stream due to the removal of the contaminant by liquid-liquid extraction. Of course, liquid-liquid extraction cannot be used to remove sulfur from a crude oil stream, and so the present model is not a direct extension of the Haverly pooling model. However, we use the Haverly economic parameters given in Table 1 (with linear interpolations for streams with contaminant concentrations intermediate of those in Table 1, and linear extrapolations for contaminant concentrations outside the range given in Table 1).

Table 1. Value of stream as a function of the contaminant concentration.

  y        Value ($/unit)
  0.010        16
  0.015        15
  0.020        10
  0.025         9
  0.030         6

As shown in Figure 1, a contaminant of concentration y0 in a stream of flowrate V is removed with a single liquid-liquid extraction stage. The extract liquid is initially free of the contaminant, and is used at the flowrate L. The contaminant concentration is assumed to be low enough that Henry's law is followed; i.e., the equilibrium concentration of the contaminant in the original stream (y) is related to the equilibrium concentration of the contaminant in the extract stream (x) by

y = mx    (1)
where m is the Henry's law constant. The mass balance over the liquid-liquid extraction stage,

V(y0 − y) = Lx    (2)
can be combined with the Henry's law equation and rearranged to give the contaminant concentration exiting the liquid-liquid extraction stage, as a function of the extract liquid flowrate,

y(L) = y0 / (L/(mV) + 1)    (3)
We examine the profit made using this liquid-liquid extraction method, as a function of the extract liquid flowrate. The extract liquid is purchased at the price a per unit volume. The profit (on a per unit basis of the input stream with flowrate V) is given by

Profit = Value[y(L)] − Value(y0) − aL/V    (4)
where the stream values as a function of y are given in Table 1. The present study investigates this model with m = 2 and an initial contaminant concentration y0 = 0.04 (for which the stream value is zero).
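The model above is fully specified by Table 1 and Eqs. (1)-(4), so the profit landscape can be reproduced in a few lines. The following Python sketch (the function names are ours; the linear interpolation/extrapolation of the stream value follows the text) evaluates the profit as a function of L/V:

```python
import numpy as np

# Table 1: stream value [$/unit] vs. contaminant concentration y
y_tab = np.array([0.010, 0.015, 0.020, 0.025, 0.030])
v_tab = np.array([16.0, 15.0, 10.0, 9.0, 6.0])

def value(y):
    """Stream value: linear interpolation inside the table,
    linear extrapolation (reusing the end-segment slopes) outside."""
    if y < y_tab[0]:
        return v_tab[0] + (v_tab[1] - v_tab[0]) / (y_tab[1] - y_tab[0]) * (y - y_tab[0])
    if y > y_tab[-1]:
        return v_tab[-1] + (v_tab[-1] - v_tab[-2]) / (y_tab[-1] - y_tab[-2]) * (y - y_tab[-1])
    return float(np.interp(y, y_tab, v_tab))

m, y0 = 2.0, 0.04   # Henry's law constant and inlet concentration from the text

def profit(LV, a):
    """Eq. (4), with L/V as the single decision variable."""
    y = y0 / (LV / m + 1.0)          # Eq. (3)
    return value(y) - value(y0) - a * LV
```

As a consistency check, at a = 2 this gives value(0.04) = 0 and, at L/V = 3.33, y ≈ 0.015 as stated in the Results below.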
3. Results
We first investigate the optimal operation of the process with the price of the extract liquid at a = 2. The profit as a function of the extract flow rate L is shown in Figure 2. The globally optimum operating condition at a = 2 occurs at L/V = 3.33, and corresponds to a decrease of the contaminant concentration to y = 0.015. The effects of continuous on-line optimization are addressed as the price parameter a increases continuously, beginning with operation at the global maximum at a = 2. These simulations are carried out by repeatedly changing the value of a in very small increments, with a local optimization following each increment. The results for the profit as a function of a are shown in Figure 3. For a < 5 the profit decreases continuously as a increases, but the profit increases discontinuously at a = 5. This discontinuous increase in profit implies that a steepest ascent path suddenly becomes available, which leads the on-line optimizer to an alternate (and higher) profit maximum. In other words, the local profit maximum at which the process is operating suddenly disappears, as shown by the profit landscapes in Figure 4.
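A minimal sketch of this incremental procedure, reusing value() and profit() from the sketch above: the price a is swept in small steps and a local optimizer is restarted each time from the previous set-point. The optimizer choice, step count and bounds are our own assumptions, and the exact price at which the discontinuous jump occurs depends on such numerical details:

```python
import numpy as np
from scipy.optimize import minimize

def sweep(alphas, LV0):
    """Follow the *local* profit maximum as the price changes in small
    increments -- a simple stand-in for continuous on-line optimization."""
    LV, path = LV0, []
    for a in alphas:
        # local optimization started from the previous operating point
        res = minimize(lambda x: -profit(x[0], a), x0=[LV],
                       bounds=[(0.0, 20.0)])
        LV = float(res.x[0])
        path.append((a, LV, profit(LV, a)))
    return path

up = sweep(np.linspace(2.0, 6.0, 201), LV0=3.33)          # price rises ...
down = sweep(np.linspace(6.0, 2.0, 201), LV0=up[-1][1])   # ... and returns
# down[-1] ends at a different, lower-profit local maximum than the start,
# illustrating the irreversibility discussed in the text.
```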
Figure 2: Profit as a function of the extract liquid flowrate L/V, for a = 2.0.
Figure 3: Profit obtained using on-line optimization, as a function of the cost a of the extract liquid. The initial state is the circle, and the final state is the square.
The on-line optimization results as a returns continuously to its initial value are also shown in Figure 3. Even though a returns to its initial value, the on-line optimization procedure does not return operation to the initial operating conditions. Rather, on-line optimization coupled with this fluctuation in a process parameter causes the process to operate at a lower-profit local maximum of the profit landscape. This irreversible effect is attributable to the disappearance of a profit maximum combined with local optimization, and is evident in the schematic shown in Figure 4.
4. Discussion
Based on these results, we identify two detrimental consequences of on-line optimization that can arise: (1) Discontinuous changes in process set-points can occur even when process parameters change continuously and the on-line optimization is carried out frequently; these discontinuous set-point changes can upset process stability. (2) On-line optimization can cause a process to operate at lower-profit conditions after process parameters fluctuate; i.e., higher-profit conditions would be obtained in the absence of on-line optimization. Both of these effects are due to disappearances of local maxima of the profit landscape caused by changes of economic parameters, combined with local optimization. The operation at low-profit conditions following parameter fluctuations combined with on-line optimization is analogous to phenomena involving glassy materials under stress, as illustrated with the landscape paradigm. In particular, experiments have shown that aging (i.e., the slow transformation to more stable structures, which is a global
Figure 4: Changes in the profit landscape in response to changes in the price of the extract liquid.
optimization process) in polymer glasses can be reversed by the application of stress (Struik, 1978), and that a cycle of compression and decompression changes the ambient (i.e., aged) open-framework structure of silica glass to a denser but less stable structure (Grimsditch, 1984). Analogies with biological evolution also become evident when biological evolution is described in terms of a fitness landscape, which represents the fitness for survival as a function of genotype (Wright, 1932). For example, regressive evolution (i.e., evolution to a less fit state) can result from fluctuations in the environment, in the same way that on-line optimization can cause a process to operate at lower-profit conditions after process parameters fluctuate (Lacks, 2001).
5. Acknowledgment Funding for this project was provided by the National Science Foundation (DMR-0080191).
6. References
Adhya, N., Tawarmalani, M. & Sahinidis, N.V. (1999) A Lagrangian approach to the pooling problem. Ind. Eng. Chem. Res. 38, 1956-1972.
Biegler, L.T., Grossmann, I.E. & Westerberg, A.W. (1997) Systematic methods of chemical process design. Prentice-Hall, New Jersey.
Grimsditch, M. (1984) Polymorphism in amorphous SiO2. Phys. Rev. Lett. 52, 2379-2382.
Haverly, C.A. (1978) Studies of the behavior of recursion for the pooling problem. ACM SIGMAP Bull., 25, 29-32.
Lacks, D.J. (1998) Localized mechanical instabilities and structural transformations in silica glass under high pressure. Phys. Rev. Lett. 80, 5385-5388.
Lacks, D.J. (2000) First-order amorphous-amorphous transformation in silica. Phys. Rev. Lett. 84, 4629-4632.
Lacks, D.J. (2001) Regressive biological evolution due to environmental change. J. Theoretical Biology 209, 487-491.
Struik, L.C.E. (1978) "Physical aging in amorphous polymers and other materials", Elsevier, Amsterdam.
Utz, M., Debenedetti, P.G. & Stillinger, F.H. (2000) Atomistic simulation of aging and rejuvenation in glasses. Phys. Rev. Lett. 84, 1471-1474.
Wright, S. (1932) The roles of mutation, inbreeding, crossbreeding, and selection in evolution. Proc. 6th Int. Cong. on Genetics, 1, 356-366.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Construction of Minimal Models for Control Purposes
R. Lakner^1, K.M. Hangos^1,2, I.T. Cameron^3
^1 Dept. of Computer Science, University of Veszprem, Veszprem, Hungary
^2 Computer and Automation Research Institute, Budapest, Hungary
^3 Dept. of Chemical Engineering, The University of Queensland, Brisbane, Australia
Abstract
Minimal representations are known to have no redundant elements, and are therefore of great importance. Based on the notions of performance and quality indices and measures for process systems, the paper proposes conditions for a process model to be minimal in a set of functionally equivalent models with respect to a quality norm. Existing procedures to obtain minimal process models for a given modelling goal are discussed and generalized. The notions and procedures are illustrated and compared on a simple case study, the example of a simple nonlinear fermentation process model.
1. Introduction
Minimal representations in any discipline are of great importance from both a theoretical and a practical point of view. They are known to have no redundant elements, and that is why they are easier to handle and to analyse for characteristic model properties. Lumped process models are the most important and widespread class of process models for control and diagnostic applications. The majority of CAPM tools and dynamic process simulators deal only with lumped process models; therefore we also restrict ourselves to this case. The notion and properties of, and the transformation to, minimal models are well developed and understood in the area of linear and nonlinear system theory (Kailath, 1980 and Isidori, 1995). Moreover, a wide class of lumped process models can also be transformed into the form of nonlinear state-space models. Therefore, the case of nonlinear state-space models is used as a basic case for the notion and construction of minimal models. This is then extended to the more complicated case of general lumped process models.
2. Process Model Indices and the Modelling Goal
A process model is jointly determined by the process system it describes and by its modelling goal (Hangos and Cameron, 2001). In order to develop a formal description of the modelling goal of a process system, the notion of model indices should first be defined.
Performance indices
Let us denote the set of all admissible models of a process system by M, containing all the models we consider. A performance index χ is a real number which is defined for every member model in M, that is

χ : M → ℝ    (1)
Modelling goal
The modelling goal is assumed to be given in terms of performance indices { χ1, ..., χn } by setting acceptance limits for each of them in the form of inequalities

χi^min ≤ χi(M) ≤ χi^max,  i = 1, ..., n    (2)
Performance indices for state-space models
Model-based control is considered as the modelling goal for this type of model, where the requirements for a process system to satisfy the modelling goal are naturally given in terms of properties and/or parameters of a desired input-output behaviour.
3. Equivalent Process Models
The above formal specification of the modelling goal can be used to define the notion of equivalent process models as follows.
Functionally equivalent models
When a set of admissible models is given together with a modelling goal defined in terms of performance indices as in Eq. (2), there is usually more than one model which can satisfy the modelling goal. Such models form a set of functionally equivalent models, defined as

F_χ = { M | M ∈ M, χi^min ≤ χi(M) ≤ χi^max, i = 1, ..., n }    (3)

Algebraically equivalent models
A set of functionally equivalent process models can be algebraically equivalent, when one can transform any member of the set to any other one using algebraic transformations. This, however, is not always the case. An algebraically transformed model is the same from the process engineering point of view. Such algebraically equivalent models form a model class.
Invariance of indices with respect to algebraic transformations
It is then natural to require that a model index χ be invariant with respect to algebraic transformations, so that it does not change its index value if one applies an algebraic transformation.
Canonical forms
It is very useful to define and use a canonical form of a process model class, and to represent each class with the member which is in canonical form. The canonical form of a process model class is a member having each model element in the form which has a clear engineering meaning (Hangos and Cameron, 2001).
Both functionally and algebraically equivalent models
The above properties of equivalent state-space models raise the question: "What are the conditions for functionally equivalent process models that make them algebraically equivalent?". It follows from the definitions that such models may be obtained in the case of a modelling goal specified
• with performance indices being invariant with respect to algebraic transformations,
• in terms of strict equalities as in Eq. (2).
It means that the above requirements are necessary but not sufficient conditions for functionally equivalent models being also algebraically equivalent.
4. Ordering of Process Models
If one wants to compare functionally equivalent models with respect to their simplicity, a suitable norm of simplicity should first be defined. One can then relate or order process models using this norm, as is shown in this section.
Quality (simplicity) indices
Similarly to the case of performance indices, one can define other real-valued indices which characterize the quality or the simplicity of a set of functionally equivalent process models, such that a quality index ξ is defined for every member in F_χ:

ξ : F_χ → ℝ    (4)
Quality norm
Given a set of quality indices { ξ1, ..., ξm } on a set of functionally equivalent process models F_χ, we can form a vector ξ(M) ∈ ℝ^m with the quality index values as its entries, such that:

ξ(M) = [ ξ1(M) ... ξm(M) ]^T    (5)

The quality norm ν_ξ is defined as a vector norm on the vector space ℝ^m with the usual norm properties.
Ordering of process models
Based on the above notion of quality norms, one can easily compare two functionally equivalent process models M1 and M2 using a quality norm ν_ξ with the quality indices ξ as follows

M1 =_ξ M2  if and only if  ν_ξ(ξ(M1)) = ν_ξ(ξ(M2))    (6)
M1 <_ξ M2  if and only if  ν_ξ(ξ(M1)) < ν_ξ(ξ(M2))    (7)
It is important to note that two functionally equivalent process models M1 and M2 of equal quality with respect to some quality indices may or may not be algebraically equivalent. This depends on the invariance properties of the performance and quality indices with respect to algebraic transformations, and also on the equality-inequality constraints in the formulation of the modelling goal.
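The set membership of Eq. (3) and the ordering of Eqs. (6)-(7) translate directly into code. The sketch below assumes the performance and quality indices are supplied as vector-valued functions chi and xi; this framing is ours, not something prescribed by the paper:

```python
import numpy as np

def functionally_equivalent(models, chi, lo, hi):
    """Keep the models whose performance index vectors chi(M) lie
    within the acceptance limits [lo, hi] of Eq. (2) -- this is the
    set F_chi of Eq. (3)."""
    return [M for M in models
            if np.all(lo <= chi(M)) and np.all(chi(M) <= hi)]

def order_by_quality(models, xi, norm=np.linalg.norm):
    """Order functionally equivalent models by the quality norm of the
    quality index vector xi(M), as in Eqs. (6)-(7)."""
    return sorted(models, key=lambda M: norm(xi(M)))
```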
5. Minimal Process Models
In the case of algebraically different but functionally equivalent process models, we aim at finding the "simplest" possible process model, a so-called minimal model, for the given modelling goal. It follows from the ordering of process models that minimal models depend on the selection of the quality indices and their quality norm. Moreover, minimal models may or may not be algebraically equivalent.
Minimality for nonlinear state-space models
Lumped process models with differential index 1 can be described by input-affine state-space models, where the notion of minimality is related to the number of state variables necessary and sufficient to produce a given input-output behaviour. This means that one naturally uses the single quality index ξ = dim x = n for both linear and nonlinear state-space models, and orders the functionally equivalent models accordingly. Minimal models are known to be jointly controllable and observable (Isidori, 1995). It is also important that minimal models are not unique, but form a set of algebraically equivalent models related by state transformations.
Minimality for lumped dynamic process models
The above minimality notion is generalized for process models satisfying the same modelling goal described in terms of performance indices of the underlying process system. The ordering of functionally equivalent process models serves as a basis for defining minimal models with the minimum value of their quality norm.
Given a set of functionally equivalent process models F_χ together with a quality norm ν_ξ and the quality indices ξ, M_min is a minimal model in F_χ with respect to the quality indices ξ if and only if there is no other element M' in F_χ such that M' <_ξ M_min, that is

ν_ξ(ξ(M')) < ν_ξ(ξ(M_min))    (8)
Note that there can be more than one minimal model in a set F_χ. Hence minimal models are not unique, and they might not be algebraically equivalent.
6. Construction of Minimal Models
The definition of the minimality of process models assumes that we have a set of process models from which we determine the minimal one(s). There are, in principle, two different ways of obtaining minimal models satisfying a prescribed modelling goal:
• model reduction starting from a non-minimal model satisfying the modelling goal,
• model construction for the given modelling goal.
Model reduction of nonlinear state-space models
Non-minimal input-affine state-space models can be transformed to a minimal realization form by applying a suitable nonlinear state transformation followed by state elimination. Such a model reduction is based on finding quantities which are constant along any trajectory in the state-space. These can be termed "hidden conserved quantities" (Szederkenyi et al., 2002). The first step of the model reduction is to carry out nonlinear controllability and/or observability analysis to check if the model to be reduced is jointly controllable and observable (and thus minimal). If this is not the case, then the hidden conserved quantities can be determined from the result of this analysis by solving sets of nonlinear partial differential equations (see Isidori, 1995).
Generalized model reduction for process models
Unlike linear or nonlinear state-space models satisfying a prescribed precise input-output behaviour and with the number of state variables as a quality norm, process models in DAE form with a general modelling goal and quality norm do not have necessary and sufficient conditions for being minimal. Therefore there is no simple check of minimality and no special model reduction procedure based thereon. A possible way of obtaining minimal models is then to reduce a functionally satisfying model in order to obtain a reduced model with a lower quality norm. A generalized model reduction procedure can then be proposed, which requires a set of functionally equivalent models satisfying the prescribed performance indices and a quality norm. One can then reduce any member of the set of functionally equivalent models by performing a sensitivity analysis (Hangos and Cameron, 2001) on the model with respect to any of its model elements, to find out if the omission of the model element leaves the performance indices unchanged and reduces the quality norm. If this is the case, then the model element can be omitted and thus the model has been reduced (a sketch of this reduction loop is given at the end of this section). Although the above generalized model reduction procedure is conceptually simple, it requires an exhaustive search over the set of model elements, which makes the model reduction potentially computationally hard.
Minimal models constructed by an incremental modelling procedure
An alternative way of obtaining provably minimal models is to use a suitable general incremental modelling procedure, which starts from provably minimal models and enriches
them gradually until the specified modelling goal is reached. The seven-step modelling procedure reported in (Hangos and Cameron, 2001) offers a systematic incremental way of obtaining minimal process models with respect to the number of differential variables in the model. The minimality of the generated dynamic model is ensured by the following procedure elements:
1. The mass balance subset of the model consists of the total mass balance and (K-1) component mass balances for every balance volume, where K is the number of components. The algebraic dependence between the masses is described by an appropriate closure constraint.
2. There is only one allowable expression to bound each of the algebraic variables.
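The generalized reduction procedure referred to above can be sketched as follows, assuming a hypothetical model interface with a without(element) method (our own construct). A full exhaustive search over all subsets of model elements would be combinatorial, so this greedy single pass is a deliberate simplification:

```python
def reduce_model(model, elements, performance_ok, quality_norm):
    """Greedy form of the generalized model reduction: try to omit each
    model element in turn; keep the omission if the performance indices
    stay within their limits and the quality norm decreases."""
    reduced = model
    for e in list(elements):
        candidate = reduced.without(e)   # hypothetical interface
        if performance_ok(candidate) and \
           quality_norm(candidate) < quality_norm(reduced):
            reduced = candidate          # omission accepted
    return reduced
```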
7. Case Study: A Simple Isothermal Fed-Batch Fermenter
Consider a simple isothermal fed-batch fermenter with a single bio-reaction, where biomass (with concentration X) is produced using some substrate (with concentration S). Assume that only substrate is fed into the reactor, with concentration S_F. If constant physico-chemical properties and the presence of an inert solvent are assumed, then the lumped dynamic model of a simple fed-batch fermenter is as follows:

dX/dt = k(S) X − (F/V) X    (E1)
dS/dt = −(1/Y) k(S) X + (F/V) (S_F − S)    (E2)
dV/dt = F    (E3)
k(S) = k0 S / (k1 + S + k2 S²)    (E4)
where three components (an inert solvent, the substrate and the biomass), together with a general chemical reaction of the form "1/Y S → X", are supposed. The feed flowrate is denoted by F and the volume by V. Further assume that the reactor is operated in such a way that it starts from an initial volume V0 and feed is fed to it until a prescribed final volume V_M is reached. A possible choice for the modelling goal would be to estimate the reaction time τ_M and the final concentrations of the biomass X_M and substrate S_M with a given precision, that is χ1 = τ_M, χ2 = X_M, χ3 = S_M, and the modelling goal is given in terms of inequalities as in Eq. (2).
Quality norm
The number of differential variables (or states) in the model is chosen as the quality index.
Model reduction
The fermenter model in equations (E1)-(E4) has been analyzed by nonlinear techniques and found not to be reachable (Szederkenyi et al., 2002). The constant (hidden conserved) quantity generating the model reduction was found to be:

λ(X, S, V) = V (−(1/Y) X − S + S_F) = V S_F − V S − (1/Y) V X    (E5)
It can be seen that the terms in the above equation are dimensionally homogeneous, all being component masses. In order to discover a process engineering interpretation of the above quantity, we observe that the volume V is time-dependent and proportional to the total amount of material fed into the reactor starting from the initial time t0. Therefore the first term V S_F is proportional to the total amount of substrate fed into the reactor, the second is the total amount of substrate V S currently present, and the third term is the total amount of substrate (1/Y) V X which has been transformed into
biomass up to the current time. Thus this hidden conserved quantity is a total mass balance for the components which are involved in the reaction.
Incremental model building
Our intelligent model editor (Lakner et al., 1999) obtains the minimal model by asking for the model elements (balance volumes, extensive quantities and transport mechanisms) and automatically constructs the balance equations in their extensive forms. The intensive-extensive relationships and the reaction rate equations must be defined as additional constitutive equations. Besides these relations, the closure constraint among the total mass and the component masses, namely M_I = M − M_S − M_X, is also produced automatically. This equation describes a relationship among the extensive (and naturally among the intensive) quantities and serves to make the resultant model minimal. The model equations constructed by the model editor could easily be transformed to obtain the model equations (E1)-(E3) by substituting the extensive quantities from their extensive-intensive relations and rearranging the balance equations into intensive forms. However, it is important to note that it is not possible to discover the non-minimality of the above model and make the necessary corrections automatically using the incremental seven-step modelling procedure alone. The reason for this lies in the fact that the non-minimality of the above fermenter model is caused jointly by a combination of modelling assumptions and the selection of input variables, which is generally outside the scope of the general seven-step modelling procedure.
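The hidden conserved quantity (E5) identified above is easy to verify numerically. The sketch below integrates (E1)-(E4) with illustrative parameter values (our own choices, not taken from the paper) and checks that λ stays constant along the trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions for this sketch only)
k0, k1, k2 = 1.0, 0.03, 0.5     # kinetic constants in (E4)
Y, SF, F = 0.5, 10.0, 0.1       # yield, feed concentration, feed rate

def rhs(t, u):
    X, S, V = u
    k = k0 * S / (k1 + S + k2 * S**2)        # (E4)
    dX = k * X - F / V * X                   # (E1)
    dS = -k * X / Y + F / V * (SF - S)       # (E2)
    dV = F                                   # (E3)
    return [dX, dS, dV]

sol = solve_ivp(rhs, (0.0, 20.0), [0.1, 5.0, 1.0], rtol=1e-8)
X, S, V = sol.y
lam = V * (SF - S - X / Y)     # hidden conserved quantity (E5)
print(np.ptp(lam))             # spread is ~0: lambda is constant in time
```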
8. Conclusion and Future Work Based on the notions of performance and quality indices and measures for process systems, the paper proposes conditions for a process model being minimal in a set of functionally equivalent models with respect to a quality norm. Existing procedures to obtain minimal process models for a given modelling goal are discussed and generalized. The notions and procedures are illustrated and compared on a simple case study, that of a simple nonlinear fermentation process model.
9. References
Hangos, K.M., Cameron, I.T., 2001, Process Modelling and Model Analysis. Academic Press.
Isidori, A., 1995, Nonlinear Control Systems. Springer.
Kailath, Th., 1980, Linear Systems. Prentice Hall.
Lakner, R., Hangos, K.M., Cameron, I.T., 1999, An Assumption-Driven Case-Specific Model Editor. Computers and Chemical Engineering, 23, S695-S698.
Szederkenyi, G., Kovacs, M., Hangos, K.M., 2002, Reachability of Nonlinear Fed-batch Fermentation Processes. International Journal of Robust and Nonlinear Control, 12, in print.
10. Acknowledgement This work has been supported by the Hungarian National Research Fund through grant T032479 which is gratefully acknowledged.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Miniplant - Effective Tool in Process Development and Design Petri Lievo\ Matts Almark\ Veli-Matti Purola^, Antti Pyhalahti^, Juhani Aittamaa^ ^ Helsinki University of Technology, Laboratory of Chemical Engineering and Plant Design, P.O.B. 6100, FIN-02015 HUT, Espoo, Finland ^ Neste Engineering Oy, P.O.B. 310, FIN-06101, Porvoo, Finland
Abstract Time and cost of process development can be significantly reduced through the use of state-of-the-art process modelling together with lab experiments and miniplant validation. Using this combination it is often possible to significantly reduce the need of the traditional pilot-plant stage in the development of a new process. A miniplant has been built at Helsinki University of Technology (HUT). It is a flexible system consisting of reactors, separation units, recycles, necessary pumps, piping and automation. The system has been used in the verification of mathematical process models and in the development of and the NExOCTANE dimerization process. Several runs have been performed and the targets were successfully achieved.
1. Introduction
Process development is a long and expensive process. Time is often the decisive factor in whether a new process or product will be profitable or not. Profits from new products are often highest when they are introduced into the market, and with competition the prices will decrease and reduce the profitability. The most time-consuming stage is making experiments at various scales. Traditionally, process development is done in consecutive steps and disciplines. If the experiments at the different scales could be performed simultaneously, or at least overlapping each other, the development would be faster. This also means a quicker return on investment. Extensive use of simulation and mathematical modelling in the process development phase makes it possible to study the economics and the critical technical features of the plant already at a preliminary stage. This gives a predictive tool to guide the development work and to plan the experiments. Understanding the relative importance of various phenomena also enables optimal allocation of the development resources, thus shortening and focusing the development phase and saving total cost. However, there still exist many uncertainties that cannot be modelled reliably. That is why some experiments with the whole process configuration are often required. These experiments can also be used to tune and to verify the mathematical model of the whole plant. As a tool for these requirements, a new miniplant was designed and built at Helsinki University of Technology. The equipment represents the main features of a complete process unit, including reactors, separation units, instrumentation, control system and recycles. The facility has been used to study the new NExOCTANE process and to test
the mathematical models built for that process, and to find the limits and defects of the models.

2. Miniplant Concept
2.1. Process modelling
The development and design of new processes are today extensively based on mathematical modelling and simulation. The entire process is modelled by combining individual unit operation models. Several flowsheeting programs (Aspen+®, HySim®, ChemCAD®, PRO/II® etc.) have already found their place in engineering disciplines. For the development of the process model, the various physical and chemical phenomena occurring in each process unit have to be known. The most important phenomena in chemical production plants are reaction kinetics, heat and mass transfer, phase equilibrium and hydraulics. Reactors require kinetic and mass transfer models that describe how local concentrations and temperature affect the reaction rates. Separation processes require knowledge of phase equilibria (vapour/liquid, liquid/liquid) and knowledge of heat and mass transfer. Very often some specific knowledge of the equipment is also required, like the capacities and efficiencies of distillation trays. The parameters for these phenomena models are obtained from literature, estimation methods or experiments with special equipment designed for these purposes. Parameter estimation routines are sometimes available in the flowsheeting programs mentioned above, but more often specific programs are used. At the Laboratory of Chemical Engineering at HUT there are facilities for VLE measurements, which have also been used to retrieve parameters that are of interest in the dimerization process. The Laboratory of Industrial Chemistry at HUT has studied reaction kinetics, and several kinetic models have been developed. The process models have been developed using the flowsheeting program Flowbat.
2.2. Miniplant
It is important to remember that simulation results are only as good as the mathematical models on which they are based. However, there are still many items that are neglected or treated insufficiently in mathematical models, such as ageing and deactivation of the catalyst and the stability of the products under actual production conditions. Many commercial flowsheeting programs still rely totally on idealistic behaviour. Many programs have only a very limited number of reactor types, like tube and CSTR. Common multiphase reactors, where mass transfer phenomena also play an important role, are missing. Idealized separation models are also common. The effect of recycles and the build-up of impurities in the system, as well as the effect of impurities in the feed, can be difficult to predict. These effects might be left entirely unnoticed in lab-scale experiments, which are usually done in quite ideal conditions using pure components, or the concentrations of impurities are so small that they are difficult to detect. One significant use of a miniplant is the study of recycles. The build-up of an unwanted component that might go unnoticed in laboratory reactor experiments can be found in a miniplant test run, provided that all intended recycles are present, and that the process is allowed to reach a steady state (Worz, 1995).
A miniplant is defined as a scale-down prototype of a good commercial plant technology, often at the smallest scale that can still generate reliable design information. The design should be a characteristic slice of the production unit, and the miniplant should be operated on actual plant feed under typical plant constraints whenever possible (Robbins, 1979). The individual units of a miniplant are not necessarily scale-down copies of the units intended for production, since the main target is not the scale-up of the equipment but verification that the process configuration works. The miniplant consists of laboratory-scale units that represent the main features of the full production unit, including recycles and instrumentation, analysis and control. The equipment must have appropriate flexibility in operating conditions (for example, temperature and pressure range), and it has to allow changes in the process configuration (feed point location, recycle, etc.) for the investigation of various process alternatives. The miniplant also gives knowledge about the operability and the sensitivity of the system, and it can be used to simulate some special situations that might occur in the actual process (start-up, shut-down). It is also applicable in finding the key control parameters and variables in the development of a control scheme for a process. A miniplant can also be inherently safer than a traditional pilot plant. The amount of hazardous materials and the actual inventory in the system can be kept smaller, which reduces the potential damage in the event of unexpected problems with a new process. Smaller amounts also simplify their storage and transportation (Hendershot, 2000; Behr et al., 2000).
3. HUT Miniplant
3.1. Project
A three-year project to build and utilize a miniplant at the Chemical Engineering laboratory was started in autumn 2000. The whole facility was designed and constructed during the first year. The design was done in close co-operation with the Fortum pilot hall. The HUT miniplant is a process where reaction and separation units are closely connected; thus it is applicable to a large variety of chemical processes from all chemical sectors. The construction and building work was completed in May 2001, and the commissioning and test runs were done during the summer of 2001. The Ex-protected area of the HUT miniplant is shown in Figure 1.
3.2. Facility
To ensure flexibility, the sizes of the reaction and separation units and the feed and side-draw points in the column are changeable. The construction idea was to couple all equipment with quick connectors, so that all devices are easy to reconnect when needed. The main facility consists of separation units connected to a series of chemical reactors. The reactors can be used as traditional prereactors followed by separation and recycle, or as side reactors closely connected to the column. The design pressure range is 0-20 bar(g) and the design temperature range is from -30°C up to 200°C. The capacity range is 5-20 litres per day.
Figure 1: Miniplant at Helsinki University of Technology.
The reactors are fixed-bed tubular reactors. The maximum length of the catalyst bed is 1300 mm, but the bed length is freely adjustable. The flow direction is either upwards or downwards. The diameter of the reactor is 15 mm (ID), and in the middle there is a 6 mm (OD) pipe for temperature measurement probes. The temperatures of the reactors are adjusted with oil/water jackets. Separation units are made from modular parts, which are connected with flanged joints. The maximum length of the packing section in the column is 2500 mm. Metal springs (Ø 4 mm) are used as packing material in the 38 mm (ID) tubes. There is no heat compensation in the column. The heat losses in the column have been measured, and the results were used in the model verification.
3.3. Instrumentation and automation
Six temperature probes can be placed inside each reactor, and 13 probes can be placed inside the column. There are also 4 scales (0-50 kg, 0-100 kg) with an accuracy of 0.01 kg for measuring inlet and outlet flow rates. At the moment, circulation flows are measured with rotameters. All measurements are connected to the Mitsubishi MELSEC logic and monitoring software to view and save the data. For safety reasons there is also a locking system in the automation and a separate alarm system.
3.4. Analysis
The process flows are analyzed with a gas chromatograph. Samples from higher pressures are collected in sample cylinders. Flow is directed through the sample cylinders to get realistic samples.
3.5. Operability
The pressurization of the facility is done with nitrogen, and the pressure is maintained and adjusted with a combination of a relief valve and a needle valve. At the moment it is possible to operate with a configuration where four sections are at different pressures at the same time. Pressure control using relief valves has been more precise than expected, almost as good as with normal pressure adjustment systems.
4. Research Results
The process model was developed with the in-house software VLEfit, Kinfit and Flowbat (Aittamaa, 2002), since these provide a seamless link from the parameter fitting of lab experiments to process optimisation, and also contain some detailed process models that do not exist in commercial simulators. These programs were used extensively during the conceptual design phase. The miniplant has been in operation since autumn 2001. Several longer runs have been performed, in which the miniplant was operated in accordance with the NExOCTANE (Sloan, 2000) process, and the results were used to verify the accuracy of the process models and for further development of the process concept.
4.1. NExOCTANE runs
Two 10-day runs have been made to study the NExOCTANE dimerization process in cooperation with Fortum. The target of the first run was also to test the whole facility and its operability. The test runs were used to verify certain critical parts of the design made for a world-scale isooctane plant. The design itself was based mainly on laboratory-scale experiments, mathematical models and computer simulation. However, although the modern models are very useful, it was still necessary to verify that all reactions and separations take place as designed when all components of the industrial feed are present. For this purpose the scale of the devices is not critical, provided that the essential unit operations work properly and that the correct concentrations, temperatures and pressures are realized in the unit. The HUT miniplant was used for this verification step. A dynamic process model was built with PROsimulator (Hammarstrom, 2002) based on the detailed static model, and it was used to study the control and start-up strategy. One test run in the miniplant was devoted solely to defining the start-up and catalyst activation procedure. As a result of the runs it was noticed that the miniplant facility is operational in spite of the many manual adjustments. Of course, that also means that the system has to be staffed with skilful personnel at all times during operation. On the other hand, manual control also means that the operator is aware of the changes in the process, which is beneficial when studying new process configurations. The predefined conversion and selectivity criteria and operational targets of the NExOCTANE process were successfully achieved in both runs. Valuable process knowledge was obtained. Detailed technical models at Fortum and some academic models at HUT have been compared and validated against the miniplant results. The scale-up step from a miniplant to a world-scale unit is immense, but modern modelling tools make it possible. As proof of this, a 530000 t/y isooctane plant is currently in operation, constructed without building any semitechnical-scale pilot plant.
4.2. Packing efficiency measurement runs
During the summer of 2002 the efficiencies of the miniplant columns and the heat losses were verified. The efficiency of the packing used was tested with a cyclohexane - n-hexane mixture. Binary composition, pressure and reboiler duty were the adjusted parameters. The number of ideal plates per metre was measured to be from 15 to 20, which gives an approximate height of 5 - 6.5 cm for one ideal plate. Heat losses were evaluated for
the reboiler, condenser and column parts separately, and the measured results were compared to those calculated from general heat transfer correlations. The correspondence was found to be good.
4.3. General results
The basic design of the system is flexible enough, and it is quite easy to change the equipment for different purposes.
5. Conclusions
Detailed understanding of thermodynamics, phase equilibrium, mass transfer and reaction kinetics enables the construction of rigorous models of chemical production plants. These models, however, contain many uncertainties regarding the full-scale unit, which can be studied and validated with experiments in a laboratory-scale miniplant production unit. Combining rigorous mathematical modelling and the results obtained from the miniplant forms a new process design methodology. This combination will significantly reduce and focus the experiments in large and expensive pilot units. The new concept leads to an economical and flexible way to validate the rigorous model of the production unit already at laboratory scale, while still having the benefits of continuous operation. The results of the experimental runs and the use of the computational model will help in targeting larger-scale test runs when needed. The final result is that the time and cost of new process development can be significantly reduced, and several process alternatives can be flexibly studied, resulting in better processes in all respects. It has been shown that it is possible to run a fairly complex chemical process containing several recycles reliably at a very small scale.
6. References
Aittamaa, J., http://www.hut.fi/Units/ChemEng/software.html, 31 October 2002.
Behr, A., Ebberts, W., Wiese, N., Miniplants - A Contribution to Inherent Safety?, Chem.-Ing.-Tech., 72 (2000) 1157-1166 (in German).
Hammarstrom, L., http://www.nesteengineering.com/document.asp?path=1488;1495;1522;4471;4472;4474, 31 January 2002.
Hendershot, D.C., Process Minimization: Making Plants Safer, Chem. Eng. Prog., 96(1) (2000) 35-40.
Robbins, L.M., The Miniplant Concept, Chem. Eng. Prog., 75(9) (1979) 45-48.
Sloan, H.D., Birkhoff, E.G., Gilbert, M.F., Nurminen, M., Pyhalahti, A., Isooctane Production from C4's as an Alternative to MTBE, Presented at NPRA 2000 Annual Meeting, March 26-28, 2000, San Antonio.
Worz, O., Process Development Via a Miniplant, Chem. Eng. Proc., 34 (1995) 261-268.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
A Generalized Adsorption Rate Model Based on the Limiting-Component Constraint in Ion-Exchange Chromatographic Separation for Multicomponent Systems
Abstract In species exchange processes (e.g., ion-exchange chromatography column), conventional adsorption rate models describe mass transfer (or exchange) between phases, assuming the existence of a counterpart species. In contrast, the adsorption models may not be useful in an inert enviroimient (or inactive zone) where adsorption/desorption carmot take place because of lack of counterpart species. In packed-bed chromatography described as a distributed dynamic system, a wide range of concentrations including zero-concentrations can be distributed over the column length and the concentration profiles move with time. Hence, the moving active and inactive zones are mixed over the column length. If a conventional adsorption rate model is employed in the inactive zone, computational solutions show that such a model can lead to unphysical negative concentrations. This study aims to develop a model such that conventional LDF (linear driving force) type models are extended to inactive zones without loosing their generality. Based on a limiting component constraint, an exchange probability kernel is developed for multi-component systems. The LDF-type model with the kernel is continuous with time and axial direction. Two tuning parameters such as concentration layer thickness and fimction change rate at the threshold point are needed for the probability kernels, which are not sensitive to problems considered.
1. Introduction One of the most widely used adsorption rate model is the so-called Linear Driving Force (LDF) approximation [Carta & Lewus, 2000]. It is not surprising that the models are applicable to the situation where adsorption/desorption can take place, i.e., the active zone where both adsorbents and desorbents are present. In contrast, the conventional rate models are normally not applicable to the inactive zones where adsorption/desorption cannot occur due to lack of a limiting-component. Such a rate ^ To whom correspondence is to be addressed, email: lim(a).kt.dtu.dk, tel: +45 4525 2804, fax: +45 4593 2906 ^ email: [email protected]. tel: +45 4525 2800
768 model usually leads to negative concentrations during numerical solution, when counter-species (i.e., a limiting component) is exhausted. In chromatographic separation processes, the active and inactive zones can be intermixed over columns and move with time. Since it is not easy to distinguish between the active and inactive zones, it is necessary to develop a generalized rate model which is applicable to the whole range of concentrations over both the active and inactive zones. The new model must be a continuous function in time, so that a DAE time integrator can be used. In the rate model to be proposed, a thin buffer layer (the so-called concentration thickness) is estabhshed between the two zones. Using exchange probability functions derived from Limiting-Component Constraints (LCC), it is achieved that adsorption/desorption normally can take place over the buffer layer but adsorption/desorption is almost impossible under the buffer layer. Lim & Jorgensen (2002) proposed the generalized adsorption rate model for binary systems. In this study, the generalized adsorption rate model involving two exchange possibility kernels (i.e., sum and product kernels) is extended to multi-components systems. For the possibility kernels, two tuning parameters, (i.e., the concentration thickness and function change rate at the threshold point) are required. These tuning parameters are found to be insensitive to the problems considered. For illustration, the new model is compared with a conventional one in a SMB (Simulated Moving Bed) ion-exchange process. In the numerical solution, negative concentrations in the inactive zone are efficiently restricted.
2. Rate Model Based on Limiting Component Constraints (LCC) The constraint is deduced from the mass conservation law. Definition: For species exchange separation problems where the net flux of adsorption and desorption must be equal to null, adsorption (or desorption) on the solid particle takes place only under the existence of counterpart species in the other phase. For multi-component systems, assuming concentrations in the solid phase are not negative (nj>0), the constraint can be interpreted as follows: Condition I: When Cj>0 for all components, adsorption and desorption are always possible for all component. rj=k(n*-nj)
Condition 11: When Ck=0 and Cj^^k^O, adsorption of j-component on particles is possible but k-component adsorption is impossible. rj=k( n* -nj), rk=k( nl -nk)<0 Condition III: When Cj=0 for all components, adsorption and desorption are impossible. rj=0, for l<j0 at any moment and position. A general rate model must satisfy the four conditions simultaneously over the whole chromatographic column. The conventional rate models involving the adsorption isotherms (or equilibrium equations) satisfy the conditions I/II without any modification. However, the conditions III/IV are not guaranteed. Using the probability
769 kernel of a sigmoid-type (Lim & J0rgensen, 2002), a generalized rate model is developed to fulfill the two last conditions. An exchange probability kernel that satisfies conditions I/II/III is called the sum kernel ((|)sum) and a kernel that satisfies conditions I/II/IVis called the product kernel ((t)product)2.1. Sum kernel A logical sum of [Condition I n Condition II n Condition III] results in a kernel containing the concentration sum of all components. The sum kernel inspired by sigmoidal function which allows a rapid change at a threshold point but continuously is: 1+e
(1)
Ncomp
e = a. 2(Ci-53/2),N,„„p>2 Under conditions I and II, the probability kernel, (t)sum~l- In the case of condition III, 4^sum~0- Thus, the sum kernel acts, only when all components are limiting-components simultaneously. 2.2. Product kernel A logical sum of [Condition I n Condition II n Condition IV] results in a kernel containing the concentration products of (CiCj). The product kernel is defined with a negative concentration layer (-6j) that is the same magnitude as the above concentration thickness (6j) but of opposite sign. t,
φ_product = 1 / (1 + e^(−θ))    (2)
θ = α · min_{N_comp ≥ j > i ≥ 1} [ (C_i + δ_i/2)(C_j + δ_j/2) ]
where m_max is the number of combinations selecting two components among N_comp without order:

m_max = N_comp (N_comp − 1) / 2    (3)

The above kernel fulfills conditions I/II/IV with tolerable negative concentrations of −δ_i and −δ_j. The positive thickness (δ_j) for the sum kernel is used in order to make sure that r_j^general → 0 as C_j → 0 for all j (condition III). The negative thickness (−δ_i and −δ_j) for the product kernel is to guarantee φ_product > 0 at C_j = 0 and C_k≠j > 0 (condition II).
2.3. Generalized rate model
To satisfy all four conditions at the same time, the exchange probability kernel must be a logical sum of the sum and product kernels. For species exchange problems, a generalized adsorption rate equation is therefore expressed as follows:
~~ T sum ' T product * ^ j ' J ~" ^ • • • ^^ comp ~ ^
general _ _ Ncom ~
Ncomp-1 , Y " -general ^ j
(4) ^ ^
Here, the parameters (α and δ_j) must be determined properly, so that they are not sensitive to the problems considered. For instance, we use in our numerical study δ_j = max(C_j,feed, j = 1...N_comp) × 10⁻⁴. Once the concentration thickness is determined, the sigmoid parameter α is calculated on the basis of an expected value (φ₀) at C_j = δ_j. If a value φ ≈ φ₀ (with 0 ≪ φ₀ < 1) is expected at C_j = δ_j, then α > 2 ln(φ₀/(1 − φ₀)) / δ_j. For a graphical illustration of the three kernels (i.e., sum kernel, product kernel, and sum-product kernel), consider a binary system with components A and B. Fig. 1 shows the exchange probabilities of the kernels as functions of the concentrations of A and B. Here, δ = 0.2 and α = 100 are used. If conventional rate models are used without any modification, the probability of adsorption is equal to unity everywhere, as shown in Fig. 1(a). For the sum and product kernels, some probabilities are not null beyond the physical boundaries (C_A ≥ 0 and C_B ≥ 0). When the two kernels are combined, the exchange probabilities have effective values only within the physical boundaries, and have intermediate values only within the tolerable buffer layer, −δ < C_A(=C_B) < δ.
Fig. 1. Exchange probability values (φ) as functions of the concentrations of A and B, according to the kernels (δ = 0.2, α = 100).
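A direct transcription of the kernels (1)-(4) as reconstructed above; the function names and vectorisation details are ours, and the generalized rate is shown here for a single component's LDF driving force:

```python
import numpy as np

def phi_sum(C, delta, alpha):
    """Sum kernel, Eq. (1): ~0 only when all concentrations vanish."""
    theta = alpha * np.sum(C - delta / 2.0)
    return 1.0 / (1.0 + np.exp(-theta))

def phi_product(C, delta, alpha):
    """Product kernel, Eq. (2): minimum over all unordered pairs (i, j)."""
    n = len(C)
    pairs = [(C[i] + delta[i] / 2.0) * (C[j] + delta[j] / 2.0)
             for i in range(n) for j in range(i + 1, n)]
    return 1.0 / (1.0 + np.exp(-alpha * min(pairs)))

def rate_general(C, n_j, n_eq_j, k, delta, alpha):
    """Eq. (4): conventional LDF rate damped by both kernels."""
    phi = phi_sum(C, delta, alpha) * phi_product(C, delta, alpha)
    return phi * k * (n_eq_j - n_j)

# Binary case of Fig. 1: the combined kernel is ~1 in the active region
# and ~0 once a concentration dips below the tolerable layer -delta.
delta, alpha = np.array([0.2, 0.2]), 100.0
print(rate_general(np.array([1.0, 1.0]), 0.3, 0.8, 5.0, delta, alpha))
print(rate_general(np.array([0.0, 0.0]), 0.3, 0.8, 5.0, delta, alpha))
```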
3. Numerical Study
For an ion-exchange SMB process with three components (N_comp = 3), consider 15 columns involving 5 sections: wash water (3 columns), production (3 columns), slip water I (4 columns), regeneration (4 columns) and slip water II (1 column). See Lim & Jorgensen (2002) for the column configuration and operating conditions. A cation exchange between the counter-ions A²⁺, C⁺ and B⁺ takes place in the columns:

A²⁺(aq) + C⁺(aq) + 3 BR(resin) ⇌ AR₂(resin) + CR(resin) + 3 B⁺(aq)    (5)
At the beginning of the first shifting, the concentrations of the liquid/solid phases for the 15 columns are initialized as follows:

C_A(0, z_i,l) = C_B(0, z_i,l) = C_C(0, z_i,l) = 0, for i = 1...N_z, l = 1...N_c
n_A(0, z_i,l) = 0.24 n_T, n_B(0, z_i,l) = 0.75 n_T, n_C(0, z_i,l) = 0.01 n_T, for i = 1...N_z, l = 1...N_c

where N_z is the number of meshes per column, N_c the number of columns and n_T the resin capacity. The feed concentration (C_j), adsorption coefficient (k_j) and axial dispersion coefficient (D_ax,j) are given for each component j. We consider a thermodynamic equilibrium equation for the electrolyte solution [Marcussen, 1985], where an extended Debye-Hückel model is combined with the Wilson VLE relation to predict the equilibrium concentrations between the mobile and stationary phases [Smith & Woodburn, 1978]. As a preliminary test, the governing equation (i.e., a convection-diffusion-reaction PDE) is solved using an ODE time integrator for the binary system including only the A and B components. Fig. 2 shows the effects of the three kernels on the liquid concentrations. Abnormal concentration profiles are found in the inactive zones for the conventional rate model (Fig. 2a) and for that with the sum kernel (Fig. 2b). The product kernel plays an effective role for multi-component systems. For the three-component system, numerical results are shown in Fig. 3, using the generalized rate model proposed in this study. At the last shifting (the 31st), concentration profiles are depicted over the columns for three different times within the cycle time (T = 5 min). The conventional rate model combined with the sum-product kernel works well even in the inactive zones.
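For reference, the initial state above can be set up as plain arrays; the number of meshes per column is an assumed value in this sketch:

```python
import numpy as np

Nc, Nz, nT = 15, 50, 1.0      # columns, meshes per column (assumed), resin capacity
C = np.zeros((3, Nz, Nc))     # liquid concentrations of A, B, C all start at zero
n = np.empty((3, Nz, Nc))     # solid loadings per the stated initial distribution
n[0], n[1], n[2] = 0.24 * nT, 0.75 * nT, 0.01 * nT
```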
[Fig. 2 panels (a)-(d) — (a) original rate model, ..., (d) sum×product kernel — plot liquid concentration against column number N.]
Fig. 2. Liquid concentration profiles for the binary system according to the kernels in the 3 production columns after nine shiftings (dashed line: C_A, solid line: C_B).
[Fig. 3 panels plot concentration against column number (1-15) at three times within the cycle; panel (c): t = 5.0 min.]
Fig. 3. Liquid concentration distribution for the ternary system over the 15 columns within one cycle time at the 31st shifting (circle: C_A, solid line: C_B, cross: C_C).
4. Conclusion
For ion-exchange packed-bed chromatographic adsorption problems, a generalized adsorption rate model is proposed for multi-component systems without losing the generality of conventional rate models. The new model with the exchange probability kernels can describe both active and inactive zones of the chromatographic column. The time-continuous kernels based on the LCC are developed in two respects: 1) the adsorption rate becomes zero when adsorbable species are not present in the liquid phase, 2) concentrations do not fall below zero. The sum kernel addresses the former and the product kernel the latter situation. Consequently, this model is considered a concentration-dependent rate model. The generalized rate model satisfying the LCC yields reliable results in which negative concentrations are controlled within 1% of the maximum concentration. The new model will be useful for simulating the start-up of chromatographic processes before the cyclic steady state is reached, since active and inactive zones coexist in the chromatographic columns during start-up. Furthermore, the new model is needed for an SMB operation involving a washing step to rinse the column.
5. References
Carta, G. and Lewus, R.K., Adsorption, 6 (2000), 5-13.
Lim, Y.I. and Jorgensen, S.B., J. Chromatogr. A, 2002, submitted.
Marcussen, L., Superfos-DTU internal report, Dept. of Chemical Engineering, DTU, Denmark, 1985.
Smith, R.P. and Woodburn, E.T., AIChE J., 24 (1978), 577-587.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Comparison of Various Flow Visualisation Techniques in a Gas-Liquid Mixed Tank Tatu Miettinen, Marko Laakkonen, Juhani Aittamaa Laboratory of Chemical Engineering, Helsinki University of Technology P.O. Box 6100, FIN-02015 HUT Finland, email: [email protected], [email protected],[email protected]
Abstract
Computational Fluid Dynamic (CFD) and mechanistic models of gas-liquid flow and mass transfer at turbulent conditions are useful for studying local inhomogeneities and operation conditions of gas-liquid stirred tanks. They are also applicable as scale-up and design tools for gas-liquid stirred tank reactors and other gas-liquid contacting devices with greater confidence compared to purely heuristic design methods. Experiments are needed for the development and the verification of these models. Various flow visualisation techniques have been utilised to obtain experimental results on local gas hold-ups and bubble size distributions (BSD) in a gas-liquid mixed tank. Particle Image Velocimetry (PIV), Phase Doppler Anemometry (PDA), Capillary suction probe (CSP), High-speed video imaging (HSVI) and Electrical Resistance Tomography (ERT) techniques have been applied. The applicability of the various techniques depends on the location of the measurement, the physical properties of the gas-liquid flow, the gas hold-up and the size of the tank. Local characteristics of the gas-liquid flow have been measured for air-water dispersion in a baffled 13.8 dm³ mixed tank at various gas feeds and impeller rotational speeds. BSDs have been measured in the tank using the CSP, PIV and PDA techniques. CSP, PIV and ERT have been used for the determination of local gas hold-ups. HSVI has been applied for the visualisation of the breakage, the coalescence and the shapes of the bubbles. Results from the applied techniques have been compared with each other, and their advantages, disadvantages and limitations are discussed.
1. Introduction
Gas-liquid mixed tanks are used for various operations in industrial practice. The design of gas-liquid mixing units and reactors is still done by empirical correlations, which are usually valid only for specific components, mixing conditions and geometries. Computational Fluid Dynamic (CFD) techniques have been used successfully for single-phase flow, but gas-liquid flow calculations are still tedious for computers. Therefore, simpler and more accurate multiphase models are needed. In order to verify multiphase CFD calculations and to fit unknown parameters in the multiphase models, experimental local bubble size distributions and flow patterns are needed. Bubble breakage and coalescence functions can be fitted against the local, time-averaged BSDs. In this way, a generalised model for the mass transfer area that includes the dependence on the local dissipation of mixing energy and the physical properties of the dispersion can be developed.
2. Visualisation Techniques
Capillary suction probe technique (CSP)
The CSP technique (Barigou and Greaves, 1992; Genenger and Lohrengel, 1992) is a single-point invasive method, which has been used to measure bubble size distributions (BSD) and gas volume fractions (Tabera et al., 1990). In the photoelectric suction probe technique, bubbles are sucked through a capillary where they are transformed into cylindrical slugs of equivalent volume. The measuring probe, which encloses the capillary, consists of lamps and phototransistors. The electrical resistance of a phototransistor changes every time a bubble passes the sensor. The sizes of the bubbles are calculated from the distance between the detectors, the times between changes in the resistance of consecutive detectors and the diameter of the capillary (see the sketch at the end of this section). The CSP technique is useful for opaque and dense dispersions that are beyond the applicability of most optical techniques. Probes are also inexpensive relative to most optical methods. CSP does not apply to very small vessels, since the continuous sample stream reduces the volume of dispersion and disturbs the flow pattern. Furthermore, bubbles might break on colliding with the funnel-shaped edge of the capillary, causing error in the BSDs.
Electrical Impedance Tomography (EIT)
During the last years tomography has attracted intensive research for characterising multiphase flows (Fransolet et al., 2001). EIT is a non-invasive technique that applies to opaque dispersions. In EIT experiments, resistivities are measured between electrodes that cover part of the walls of the vessel. The continuous phase must be conductive, and the difference in conductivity between the continuous phase and the dispersed phase must be distinct. The resistivity distributions are reconstructed to produce three-dimensional images of the resistivity field. Tomography techniques are relatively slow compared to the time scale of the flow in a mixed tank, so they are not suitable for the determination of BSDs.
Phase Doppler Anemometry (PDA)
Laser Doppler Velocimetry (LDV) (Joshi et al., 2001) and PDA (Schafer et al., 2000) are optical techniques that have been used to determine BSDs, gas hold-up and flow patterns. Detectors observe the Doppler shift and phase difference when bubbles pass through the volume of the intersection of two laser beams. The Doppler shift is related to the velocities of the bubbles and the phase difference to their sizes.
Particle Image Velocimetry (PIV)
PIV (Deen et al., 2002) also takes advantage of laser light. A pulsing laser light scatters from the bubbles and illuminates part of the dispersion. The illuminated volume is imaged using CCD digital cameras. The local displacements of bubbles between two laser pulses are measured from the recorded pictures. Displacement vectors can also be measured for the liquid phase by adding scattering particles. Therefore PIV can be used to determine local BSDs and relative velocities (slip) between the dispersed and continuous phases simultaneously. Furthermore, the gas hold-up is obtained from the PIV results when the depth and area of the PIV pictures are known.
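The slug-to-bubble conversion used by the CSP technique reduces to a few lines; the variable names and the example numbers below are illustrative assumptions, not values from the cited instruments:

```python
import math

def bubble_diameter(detector_spacing, transit_time, slug_duration, capillary_d):
    """Equivalent-sphere bubble diameter from capillary-probe signals.

    detector_spacing: distance between the two phototransistors [m]
    transit_time:     time for the slug front to travel that distance [s]
    slug_duration:    time the slug takes to pass one detector [s]
    capillary_d:      inner diameter of the capillary [m]
    """
    velocity = detector_spacing / transit_time        # slug velocity
    slug_length = velocity * slug_duration            # cylindrical slug
    volume = math.pi * capillary_d**2 / 4.0 * slug_length
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)    # equivalent sphere

# Example: 5 mm detector spacing, 10 ms transit, 40 ms slug, 1.2 mm capillary
print(bubble_diameter(5e-3, 10e-3, 40e-3, 1.2e-3))    # ~3.5e-3 m
```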
Imaging techniques
Planar imaging techniques like conventional photography and high-speed video imaging (HSVI) (Takahashi et al., 1992) have been used to visualise multiphase flows in mixed tanks. Observations of the mechanisms of bubble breakage, coalescence and wake effects can be used for the development of mechanistic bubble functions (Takahashi and Nienow, 1993). HSVI requires plenty of well-directed light. PDA, PIV and HSVI are non-invasive optical techniques, which apply only to transparent solutions. High concentrations of bubbles hamper the visibility of the measurement volume and attenuate the intensity of the light. Because of this, optical techniques can be used only to investigate small vessels at low bubble concentrations.

Table 1. Comparison of the various techniques.

Method | Applicability                  | Bubble size   | Gas hold-up   | Physical properties          | Other notes
CSP    | BSD, gas hold-up               | >0.5 mm       | Less than 25% | Low viscosity of dispersion  | Inexpensive, simple
PDA    | BSD, gas hold-up, flow pattern | 30 µm-1.2 mm  | Less than 5%  | Transparent dispersion       | Modelling & calibration needed
EIT    | Gas hold-up                    | -             | 0-99%         | Conductive continuous phase  | Calibration experiments needed
PIV    | BSD, gas hold-up, flow pattern | 0.1-8.5 mm    | Less than 4%  | Transparent dispersion       | Expensive, tedious data processing
HSVI   | Visualisation                  | Not limited   | Less than 1%  | Transparent dispersion       | -
3. Experimental
Experiments were carried out in a flat-bottomed cylindrical glass vessel (0.0138 m³) equipped with a four-bladed radial impeller and four baffles. Gas was fed through a 0.66 mm (inner diameter) single-tube nozzle located in the middle of the vessel, 30 mm from the bottom of the tank (Figure 1). Experiments were carried out for the air-tap water system at atmospheric pressure and a room temperature of 22 °C. A surface tension of 69 mN/m was measured with a Sigma 70 tensiometer. Gassing rates and stirring speeds were varied between 0.1-1.0 dm³/min and 300-600 rpm. The locations of the experiments are presented in Figure 1. The locations of the experiments with the various techniques do not coincide: it was impossible to carry out measurements under the impeller with the capillary, and with PDA the bubbles were too large everywhere except near the impeller. The baffles also set some restrictions on the PDA and PIV techniques.
Figure 1. Dimensions of the stirred tank and locations of the experiments in the tank.
4. Results and Discussion
4.1. Bubble size distributions
BSDs were calculated from 1000 to 5000 bubbles per measured location by CSP and PDA. 4000 to 70000 bubbles were used in the PIV experiments, depending on the mixing conditions and the location. In the PIV technique, the smallest detectable bubble size was 0.10 mm due to the spatial resolution of the CCD camera, and the largest observed bubbles were approximately 8.5 mm. With PDA, bubbles from 0.03 mm to 1.3 mm were observed. The inner diameter of the capillary was 1.2 mm, and the detected bubbles ranged from 0.8 mm to 6 mm. Smaller bubbles were out of range, since they did not form slugs inside the capillary. On the other hand, larger bubbles were not observed, because they did not exist or because they broke into smaller ones during sampling. The overall volume of bubbles in one experiment was determined by collecting the bubbles into a measuring burette supplied with a pressure meter. BSDs were calibrated using the total volume of the collected bubbles and the pressure difference between the burette and atmospheric pressure. Capillary and PDA results were in close agreement. The peaks of the BSDs were around 1 mm, which is close to the limit of both techniques, so it was not possible to see the overall bubble size range either with the capillary or with PDA. The same peaks were close to 0.2 mm in the PIV experiments, which differs considerably from the results with the capillary and PDA. It was observed later that some of the large bubbles were identified as groups of small bubbles; this seems to be the reason for the deviation.
4.2. Gas hold-ups
The local bubble concentration, i.e. the gas hold-up, is related to the ability of the bubbles to coalesce. Therefore, local gas hold-ups are required for the development of the models. Local gas hold-ups were determined at positions A-F with the PIV and capillary techniques. Position A could not be accessed with the capillary due to the impeller. PIV gas hold-ups were determined from the depth, width and height of the PIV pictures. The width and the height of the PIV pictures were determined by the optical settings of the camera. The depth of the illuminated plane in the dispersion was obtained from calibration experiments with a bubble gel. Sensitivity analysis indicated that the local gas hold-up determined from the PIV results is relatively insensitive to the depth of the illuminated
plane. In the capillary technique, gas hold-ups were measured from the volume of the sucked gas and liquid. A problem of this method is the selection of a sampling rate that gives correct local gas hold-ups. Isokinetic sampling is reached when the sampling rate of bubbles is equal to the arrival rate of bubbles at the tip zone of the capillary (Greaves and Kobbacy, 1984). Larger gas hold-ups were obtained with the capillary than with PIV. The local PIV gas hold-ups were certainly too small due to problems in the bubble identification algorithm. Another reason for the differences between PIV and the capillary might be a false sampling rate of the capillary. The problem of isokinetic sampling arises partially from the fact that bubbles rise relative to the liquid due to buoyancy. Since the capillary probe was located vertically in the dispersion, the gas hold-up became overestimated in the experiments. The absolute gas hold-up values in the performed experiments were low, and therefore the absolute differences between the values obtained with the capillary and the PIV technique are relatively small. Gas hold-ups were determined from the EIT reconstructions by using the resistivity distribution of the continuous phase as a reference. Gas-liquid resistivity distributions were compared to the reference and three-dimensional images were formed. If the resistivity is assumed to depend linearly on the gas hold-up, relative differences in gas hold-up between the various locations are obtained from the EIT results. Actually, the relation between the conductivity and the gas hold-up is slightly non-linear (Mwambela and Johansen, 2001), and therefore calibration experiments are needed to determine gas volume density distributions. Due to the fluctuating nature of gas-liquid flow, some abnormal resistivity distributions were obtained with EIT, and the averaging of several experiments is necessary to get accurate resistivity fields from the mixed tank. Abnormal resistivity distributions were also found at boundaries such as the liquid surface and the bottom of the tank.
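A minimal sketch of the PIV hold-up estimate described above, assuming spherical bubble volumes and a calibrated depth of the illuminated plane (all numbers are placeholders):

```python
import math

def piv_gas_holdup(bubble_diams_m, width_m, height_m, depth_m):
    """Local gas hold-up from one PIV image: total bubble volume divided by
    the measurement volume (width x height x calibrated illuminated depth)."""
    v_gas = sum(math.pi / 6.0 * d**3 for d in bubble_diams_m)
    return v_gas / (width_m * height_m * depth_m)

# e.g. three bubbles detected in a 50 x 40 mm window with a 5 mm depth
print(piv_gas_holdup([1e-3, 0.5e-3, 2e-3], 0.05, 0.04, 0.005))
```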
5. Conclusions
The applicability of various flow visualisation techniques was tested in a mixed tank. CSP was used to measure bubble size distributions and gas hold-ups; in order to provide meaningful data, the calibration, the suction speed and the size of the capillary have to be determined carefully. To carry out experiments with the PIV technique, a reliable image-processing algorithm is needed to recognise bubbles from the images. PDA was observed to detect a very narrow bubble size range, from 0.03 mm to 1.3 mm, which limited its applicability to the vicinity of the impeller. EIT is a promising technique for the determination of gas volume density distributions in a mixed tank, but due to the fluctuating nature of gas-liquid flow the averaging of several experiments seems to be necessary to obtain reliable resistivity distributions in the vessel. To obtain the relation between the gas hold-up and the resistivity, calibration experiments are needed. The imaging of gas-liquid flow was useful for the detection of phenomena that were not observed with the other experimental techniques. Imaging also revealed the complexity of two-phase flow, which partially explains the differences in the results obtained with the applied techniques. Every technique has its limitations and disadvantages, and therefore the visualisation of multiphase flow in stirred tanks remains a challenging task. Further research and improvements of the measurement techniques are therefore needed.
6. References
Barigou, M., Greaves, M., Bubble-size distributions in a mechanically agitated gas-liquid contactor, Chem. Eng. Sci. 47 (1992), 2009-2025.
Deen, N.G., Westerweel, J., Delnoij, E., Two-phase PIV in bubbly flows: Status and trends, Chem. Eng. Technol. 25 (2002), 97-101.
Fransolet, E., Crine, M., L'Homme, G., Toye, D., Marchot, P., Analysis of electrical resistance tomography measurements obtained on a bubble column, Meas. Sci. Technol. 12 (2001), 1055-1060.
Genenger, B., Lohrengel, B., Measuring device for gas/liquid flow, Chem. Eng. Proc. 31 (1992), 87-96.
Greaves, M., Kobbacy, K.A.H., Measurement of bubble size distribution in turbulent gas-liquid dispersions, Chem. Eng. Res. Des. 62 (1984), 3-12.
Joshi, J.B., Kulkarni, A.A., Kumar, V.R., Kulkarni, B.D., Simultaneous measurement of hold-up profiles and interfacial area using LDA in bubble columns: predictions by multiresolution analysis and comparison with experiments, Chem. Eng. Sci. 56 (2001), 6437-6445.
Mwambela, A.J., Johansen, G.A., Multiphase flow component volume fraction measurement: experimental evaluation of entropic thresholding methods using an electrical capacitance tomography system, Meas. Sci. Technol. 12 (2001), 1092-1101.
Schafer, M., Wachter, P., Durst, F., Experimental investigation of local bubble size distributions in stirred vessels using Phase Doppler Anemometry, 10th European Conference on Mixing, 2000, 205-212.
Tabera, J., Local gas hold-up measurement in stirred fermenters. I. Description of the measurement apparatus and screening of variables, Biotechnol. Tech. 4(5) (1990), 299-304.
Takahashi, K., McManamey, W.J., Nienow, A.W., Bubble size distributions in impeller region in a gas-sparged vessel agitated by a Rushton turbine, J. Chem. Eng. Jpn. 25(4) (1992), 427-432.
Takahashi, K., Nienow, A.W., Bubble sizes and coalescence rates in an aerated vessel agitated by a Rushton turbine, J. Chem. Eng. Jpn. 26(5) (1993), 536-542.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
A Hybrid Optimization Technique for Improvement of P-Recovery in a Pellet Reactor L. Montastruc, C. Azzaro-Pantel, A. Davin, L. Pibouleau, M. Cabassud, S. Domenech Laboratoire de Genie Chimique- UMR CNRS/INP/UPS 5503 ENSIACET, 118 route de Narbonne 31077 TOULOUSE Cedex , France Mail: [email protected]
Abstract
Emphasis in recent years has been placed on improving processes which lead to enhanced phosphate recovery. This paper studies the precipitation features of calcium phosphate in a fluidized bed reactor in a concentration range between 50 and 4 mg/L and establishes the conditions for optimum phosphate removal efficiency. For this purpose, a hybrid optimization technique based on Simulated Annealing (SA) and Quadratic Programming (QP) is used to optimize the efficiency of the pellet reactor. The efficiency is computed by coupling a simple agglomeration model with a combination of elementary systems representing basic ideal flow patterns (perfectly mixed flow, plug flow, ...). More precisely, the superstructure represents the hydrodynamic conditions in the fluidized bed. The "kinetic" constant is obtained for each combination. The two levels of the resolution procedure are as follows: at the upper level, SA generates different combinations and, at the lower level, the set of parameters is identified by a QP method for each combination. The observed results show that a simple combination of ideal flow patterns suffices for the pellet reactor modeling, which seems promising for future control.
1. Introduction
Phosphorus recovery from wastewater accords with the demands of sustainable development of the phosphate industry and with stringent environmental quality standards. In this context, the past decade has seen a number of engineering solutions aiming to address phosphorus recovery from wastewater by precipitation of calcium phosphates in a recyclable form (Morse et al., 1998). An advanced alternative is to apply the so-called pellet reactor (Seckler, 1994). The purpose of the study presented in this paper is to develop a methodology based on modeling for optimizing the efficiency of the pellet reactor. The article is divided into four main sections: first, the process is briefly described; then, the basic principles of the modeling are recalled; third, the hybrid optimization strategy is presented; finally, typical results are discussed and analyzed.
2. Process Description The process is based on the precipitation of calcium phosphate obtained by mixing a phosphate solution with calcium ions and a base. More precisely, it involves a fluidized
bed of sand continuously fed with aqueous solutions. Calcium phosphate precipitates upon the surface of the sand grains. At the same time, small particles, i.e. "fines", leave the bed with the remaining phosphate not recovered in the reactor. A layer of agglomerated fines is observed at the upper zone of the fluidized bed. The modeling of the fines production involved amorphous calcium phosphate (ACP) for the higher pH values and both ACP and DCPD (DiCalcium Phosphate Dihydrate) for the lower pH values tested, as suggested elsewhere (Montastruc et al., 2002b). Both total and dissolved concentrations of phosphorus, pH and the temperature were measured at the outlet stream. In order to measure the dissolved concentrations, the upper outlet stream was filtered immediately over a 0.45 µm filter. The sample of total phosphorus was pretreated with HCl in order to dissolve any suspended solid. The phosphate removal efficiency (η) of the reactor and the conversion of phosphate from the liquid to the solid phase (X) are defined as:

η = (W_P,in − W_P,tot)/W_P,in   (1)
X = (W_P,in − W_P,sol)/W_P,in   (2)

where W_P,in represents the flowrate of the phosphorus component at the reactor inlet, W_P,tot gives the total flowrate of phosphorus, both dissolved and as fines, at the reactor outlet, and W_P,sol is the flowrate of dissolved P at the reactor top outlet. If η_agg is the agglomeration rate, that is, the ratio between the phosphorus in the bed and in the inlet stream, the following relation can be deduced:

η = η_agg X   (3)
The phosphate-covered grains are removed from the bottom of the bed and replaced intermittently by fresh sand grains. In most studies reported in the literature (Morse et al., 1998), the phosphate removal efficiency of a single-pass reactor, even at industrial scale, is only of the order of 50%. Let us recall that the pellet reactor efficiency depends not only on the pH but also on the hydrodynamic conditions (Montastruc et al., 2002a).
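Taking the definitions reconstructed in Eqs. (1)-(2), the three performance measures combine as in Eq. (3); a small sketch with purely illustrative flowrates:

```python
def efficiency(W_P_in, W_P_tot, W_P_sol):
    """Removal efficiency, liquid-to-solid conversion and agglomeration
    rate per Eqs. (1)-(3)."""
    eta = 1.0 - W_P_tot / W_P_in   # P retained in the bed
    X = 1.0 - W_P_sol / W_P_in     # P converted to solid (bed + fines)
    eta_agg = eta / X              # fraction of the solids kept in the bed
    return eta, X, eta_agg

eta, X, eta_agg = efficiency(W_P_in=100.0, W_P_tot=45.0, W_P_sol=10.0)
assert abs(eta - eta_agg * X) < 1e-12   # consistency with Eq. (3)
print(eta, X, eta_agg)                  # 0.55, 0.90, ~0.61
```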
3. Modeling Principles
Two models are used successively to compute the reactor efficiency. In the first level (see Figure 1), the thermochemical model determines the quantity of phosphate in both the liquid and solid phases as a function of pH value, temperature and calcium concentration. Moreover, this model quantifies the amounts of ACP and DCPD produced as a function of the initial conditions (Montastruc et al., 2002b). The second step involves an agglomeration model requiring (Mullin, 1993) the density of the calcium phosphates which have precipitated in the pellet reactor and also the fines diameter. Moreover, the agglomeration rate depends on the hydrodynamic conditions, particularly the eddy sizes. These values are difficult to obtain and require many assumptions which are difficult to verify in practice.
[Figure 1 block diagram: the influent [P], pH, T and [Ca] feed a thermodynamical model for precipitation, whose outputs [P]solid and [P]liquid, together with the sand amount and the flow rate Q, feed a reactor network model giving the effluent [P]grain, [P]fines and [P]total.]
Figure 1. Principles of pellet reactor modeling.
4. Reactor Network Model
To solve the problem, another alternative is used to compute the pellet reactor efficiency: the identification of the pellet reactor as a reactor network involving a combination of elementary systems representing basic ideal flow patterns (perfectly mixed flows, plug flows, ...) (see Figure 2). The combination of elementary systems is described by a superstructure (Floquet et al., 1989). This superstructure contains 4 perfectly mixed flows arranged in series, 2 plug flows, 1 by-pass, 2 dead volumes and 1 recycling flow, and represents the different flow arrangements (integer variables) that are likely to take place in the fluidized bed. Let us recall that more than four perfectly mixed flows in series produce the same effect as a plug flow. The precipitation phenomenon is seen as agglomeration, which is represented by Smoluchowski's equation (Mullin, 1993):

dN_i/dt = −k N_i N_j   (i = fines, j = grains)   (4)

which can easily be rewritten as:

dC_i/dt = −K C_i N_j   (5)

N is the particle concentration (m⁻³) and C the mass concentration (mg/m³); K and k represent kinetic constants (m³·s⁻¹). The grain number concentration N_j is obtained from the bed porosity:

N_j = (1 − ε)/((4/3) π r_j³)   (6)
The bed porosity ε is calculated with a modified Kozeny-Carman equation:

ε³/(1 − ε) = 130 (v_sup υ ρ_l)/(g (2r_j)² (ρ_s − ρ_l))   (7)
where r_j is the grain radius, v_sup the superficial velocity (m/s), υ the kinematic viscosity (m²/s) and ρ the density (kg/m³). The continuous variables are the "kinetic" constant (K), the flowrates and the reactor volumes. The goal is to obtain the same combination for different flowrate conditions in the pellet reactor. In fact, the superstructure represents the hydrodynamic conditions in the fluidized bed. The "kinetic" constant is obtained for each combination. The problem solution then depends only on the flowrate and the sand amount. A summary of the global methodology used to compute the efficiency is presented in Figure 1.
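A sketch of the elementary pieces of this model: Eq. (7) solved for the porosity by bisection, and Eqs. (5)-(6) integrated analytically for a single perfectly mixed cell with constant N_j. All numerical values are illustrative assumptions; the superstructure wiring and the identified K of the paper are not reproduced:

```python
import math

def bed_porosity(v_sup, nu, r_grain, rho_l, rho_s, g=9.81):
    """Solve the modified Kozeny-Carman relation (7),
    eps^3/(1-eps) = 130*v_sup*nu*rho_l/(g*(2r)^2*(rho_s-rho_l)),
    for eps by bisection (assumes the root lies in [0.3, 0.999])."""
    rhs = 130.0 * v_sup * nu * rho_l / (g * (2 * r_grain)**2 * (rho_s - rho_l))
    lo, hi = 0.3, 0.999
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid**3 / (1.0 - mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fines_decay(C0, K, eps, r_grain, t):
    """Integrate dC/dt = -K*C*Nj (Eq. 5) for constant Nj,
    with Nj, the grain number density, taken from Eq. (6)."""
    Nj = (1.0 - eps) / (4.0 / 3.0 * math.pi * r_grain**3)
    return C0 * math.exp(-K * Nj * t)

eps = bed_porosity(v_sup=0.02, nu=1e-6, r_grain=1.5e-4,
                   rho_l=1000.0, rho_s=2600.0)
print(eps, fines_decay(C0=50.0, K=1e-12, eps=eps, r_grain=1.5e-4, t=60.0))
```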
Figure 2. Superstructure detail.
5. System Resolution
At the upper level of the procedure, the scheduling of the basic structures is first optimized by a Simulated Annealing (SA) algorithm. The dynamic management of the different constraints generated by the structures induced by the stochastic simulated annealing algorithm is then solved by Quadratic Programming (QP) (QP package from the IMSL library). At the lower level, the set of parameters is identified for a given structure by QP. The objective function for QP is the minimization of the squared distance between the experimental and the computed points. The simulated annealing procedure mimics the physical annealing of solids, that is, the slow cooling of a molten substance, which redistributes the arrangement of the crystals (Kirkpatrick et al., 1983). In a rapid cooling or quenching, the final result would be a metastable structure with higher internal energy. The rearrangements of crystals follow probabilistic rules. In the annealing of solids, the goal is to reach atomic configurations that minimize the internal energy. In SA, the aim is to generate feasible solutions of an optimization problem with a given objective function. As careful annealing leads to the lowest internal energy state, the SA procedure can lead to a global minimum. As rapid cooling generates a higher-energy metastable state, the SA procedure avoids being trapped in a local minimum. The Simulated Annealing algorithm implemented in this study involves the classical procedures. For SA, the criterion is based on minimization of the QP function with a penalty term proportional to the complexity of the tested structure. The SA parameters are the length of the cooling stage (Nsa), the initial structure and the temperature-reducing factor (α). The usual values for Nsa are between 8 and 2 times the chromosome length, whereas for α the values are between 0.7 and 0.95. The Nsa and α values used throughout this study are 7 and 0.7, respectively.
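The two-level scheme can be sketched as follows. The binary encoding of the structure and SciPy's nnls (standing in for the IMSL QP package) are stand-in assumptions; only the overall pattern — SA proposing structures, an inner quadratic fit scoring them, plus a complexity penalty — follows the description above:

```python
import numpy as np
from scipy.optimize import nnls

def inner_fit(structure, A_full, y):
    """Lower level: fit the continuous parameters of the active elements
    (non-negative least squares, standing in for the QP step)."""
    active = np.flatnonzero(structure)
    if active.size == 0:
        return np.inf
    _, resid = nnls(A_full[:, active], y)
    return resid**2

def simulated_annealing(A_full, y, penalty=1e-3, n_iter=500,
                        T0=1.0, alpha=0.7, rng=np.random.default_rng(0)):
    """Upper level: SA over binary structure vectors; the criterion is the
    fit residual plus a penalty proportional to structure complexity."""
    n = A_full.shape[1]
    s = rng.integers(0, 2, n)
    cost = inner_fit(s, A_full, y) + penalty * s.sum()
    T = T0
    for it in range(n_iter):
        cand = s.copy()
        cand[rng.integers(n)] ^= 1            # toggle one element in/out
        c = inner_fit(cand, A_full, y) + penalty * cand.sum()
        if c < cost or rng.random() < np.exp((cost - c) / T):
            s, cost = cand, c
        if (it + 1) % 50 == 0:
            T *= alpha                        # end of a cooling stage
    return s, cost
```

In the paper the lower-level problem is a genuine QP over the element flowrates and volumes; the linear library used here merely keeps the sketch self-contained.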
6. Results and Discussion
In this study, two cases are presented as a function of different penalty terms for two values of the total flowrate of the solution to be treated.

Table 1. Comparison between the experimental results and the modeling results.

                                  Case 1    Case 2
Penalty term                      10⁴       0
Experimental η_agg, 50 L/h        0.742     0.742
Experimental η_agg, 90 L/h        0.523     0.523
Total reactor volume, 50 L/h      1.9 L     1.9 L
Total reactor volume, 90 L/h      1.3 L     1.3 L
Modeling η_agg, 50 L/h            0.7396    0.7423
Modeling η_agg, 90 L/h            0.5242    0.5231
Error                             0.2%      0.01%
Kinetic constant                  4830      4451
The results obtained show that the identified combination differs as a function of the penalty term. On the one hand, it is interesting to notice that if the penalty term is very low or equal to zero, the resulting error is also low, but the combination is more complicated than the one obtained with a higher penalty term (Table 1). On the other hand, the simpler combination induces a larger error between the computed and experimental results, thus suggesting that the method is sensitive to the required precision. For 100 runs of SA, the CPU time is the same for the two cases, i.e. 7 min (4.2 s for each SA run) on a PC architecture.
Figure 3. The best combination obtained for two values of the penalty term (left: Case 1, right: Case 2).
7. Conclusions
In this paper, a hybrid optimization technique combining Simulated Annealing and a QP method has been developed for the identification of a reactor network which represents the pellet reactor for P-recovery, viewed as a mixed integer programming problem. Two levels are involved: at the upper level, SA generates different combinations and, at the lower level, the set of parameters is identified by a QP method. The results show that, for the two values of the total flowrate of the solution to be treated, a simple combination of ideal flow patterns is found, which seems promising for the future control of the process.
8. References
Floquet, P., Pibouleau, L., Domenech, S., 1989, Identification de modeles par une methode d'optimisation en variables mixtes, Entropie, Vol. 151, pp. 28-36.
Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P., 1983, Optimization by simulated annealing, Science, Vol. 220, pp. 671-680.
Montastruc, L., Azzaro-Pantel, C., Cabassud, M., Biscans, B., 2002a, Calcium phosphate precipitation in a pellet reactor, 15th International Symposium on Industrial Crystallization, Sorrento (Italy), 15-18 September.
Montastruc, L., Azzaro-Pantel, C., Biscans, B., Cabassud, M., Domenech, S., 2002b, A thermochemical approach for calcium phosphate precipitation modeling in a pellet reactor, accepted for publication in Chemical Engineering Journal.
Morse, G.K., Brett, S.W., Guy, J.A., Lester, J.N., 1998, Review: Phosphorus removal and recovery technologies, The Science of the Total Environment, Vol. 212, pp. 69-81.
Mullin, J.W., 1993, Crystallization, Third Edition, Butterworth-Heinemann.
Seckler, M.M., Bruinsma, O.S.L., van Rosmalen, G.M., 1996, Phosphate removal in a fluidized bed - 2. Process optimization, Water Research, Vol. 30, No. 7, pp. 1589-1596.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Modelling of Crystal Growth in Multicomponent Solutions Yuko Mori, Jaakko Partanen, Marjatta Louhi-Kultanen and Juha Kallas Department of Chemical Technology, Lappeenranta University of Technology, P.O.Box 20, FIN-53851 Lappeenranta, Finland
Abstract
A crystal growth model was derived from the Maxwell-Stefan equations for the diffusion-controlled growth regime. As a model system, the ternary potassium dihydrogen phosphate (crystallizing substance) - water (solvent) - urea (foreign substance) system was employed. A thermodynamic model for the present system was successfully derived by the Pitzer method and allowed the calculation of the activity coefficients of each component. The resulting activity-based driving force on each component and other solution properties (mass transfer coefficient, concentration of each component and solution density) were introduced into the Maxwell-Stefan equations, and the crystal growth rates were determined by solving them. The model was evaluated with single crystal growth measurements in which the urea concentration, supersaturation level and solution velocity were varied. The results showed that the experimental and predicted growth rates are in acceptable agreement.
1. Introduction
In industrial crystallization processes crystals are usually grown in a multicomponent system. The crystal growth rate in a multicomponent system may differ significantly from that in a pure binary system; thus, it is worth understanding the growth kinetics in multicomponent solutions where foreign substances, other than the crystallizing substance and the solvent, are present. In general, the growth process is described by the diffusion layer model, in which growth units diffuse to the crystal-solution interface (mass transfer) and are then incorporated into the crystal lattice (surface integration) (Myerson, 1993). According to this model, the slowest step in the growth process determines the crystal growth rate. If mass transfer is the controlling resistance, the crystal growth rate can be determined from the mass transfer process alone. In the present study, a crystal growth model for multicomponent solutions was derived on the basis of the Maxwell-Stefan equations (Wesselingh and Krishna, 2000). The model was applied to the growth process from the ternary potassium dihydrogen phosphate (KDP) - water - urea system, with KDP as the crystallizing substance and urea as the foreign substance. In this study relatively high urea concentrations were employed in order to emphasise the diffusion of the urea species. The non-ideal properties of the multicomponent solutions in the model were estimated by applying a simple thermodynamic model to the system: the Pitzer method was used to model the activity coefficients of the KDP solute and the urea molecule, with the parameters estimated from binary and ternary equilibrium data. The resulting activity-based driving force on each component and other solution properties (mass transfer coefficient, concentration of each component and solution density) were introduced into the Maxwell-Stefan equations, and the crystal growth rates were determined by solving them.
In addition, growth experiments on a single KDP crystal were carried out to verify the growth rate model.
2. Crystal Growth Model Using Maxwell-Stefan Equations
Let us consider a KDP crystal exposed to a supersaturated solution of KDP in aqueous urea. Due to the gradient of chemical potential, the KDP species diffuse to the crystal surface and integrate into the crystal lattice, and the crystal grows. At the same time, the urea and water species diffuse whenever their chemical potential gradients exceed the friction with the other components. Figure 1 describes the concentration profiles of the above components adjacent to a growing crystal. Here the film theory was applied; thus, the concentration and activity gradients are linear in the film. When the steady state is achieved, in the case of a rapid reaction, x_1i is nearly x_1β; in this study it was assumed that x_1i equals x_1β. The ternary diffusion is generalized using the linearized Maxwell-Stefan equations (Wesselingh and Krishna, 2000). The difference equations of mass transport of each component are:

KDP (1):  Δa₁/ā₁ = (x̄₂N₁ − x̄₁N₂)/(k_{1,2} c̄) + (x̄₃N₁ − x̄₁N₃)/(k_{1,3} c̄)   (1)

Urea (2):  Δa₂/ā₂ = (x̄₁N₂ − x̄₂N₁)/(k_{1,2} c̄) + (x̄₃N₂ − x̄₂N₃)/(k_{2,3} c̄)   (2)

Water (3):  Δa₃/ā₃ = (x̄₁N₃ − x̄₃N₁)/(k_{1,3} c̄) + (x̄₂N₃ − x̄₃N₂)/(k_{2,3} c̄)   (3)
where x̄_i and ā_i are the average mole fraction [-] and activity [mol/kg-solvent] of component i, respectively, c̄ the average solution concentration [mol/m³], N_i the flux of component i [mol/m² s] and k_{i,j} the mass transfer coefficient between components i and j. It should be remarked that only two of the above equations are independent. Apart from extremely high growth rates, inclusion of water and urea species into a KDP crystal does not take place. Thus, the following bootstrap relations are obtained:

N₂ = N₃ = 0   (4)

After applying eq. (4) to eqs. (1) and (2) and rearranging, eqs. (1) and (2) become:

2(a_1α − a_1β)/(a_1α + a_1β) = [(x_2α + x_2β)/k_{1,2} + (x_3α + x_3β)/k_{1,3}] N₁/(c_α + c_β)   (5)

2(a_2α − a_2β)/(a_2α + a_2β) = −(x_2α + x_2β) N₁/[k_{1,2}(c_α + c_β)]   (6)

Additionally, the following constraints are satisfied:

x_1α + x_2α + x_3α = 1   (7a)
x_1β + x_2β + x_3β = 1   (7b)

Each activity was calculated from its concentration (see section 2.1). Thus, if x_1α, x_1β and x_2α are given, the unknown variables reduce to two, x_2β and N₁, which can be solved for using eqs. (5) and (6). When N₁ is determined, the growth rate of a KDP crystal follows from the relation:

G = N₁/c_c   (8)

where G is the growth rate [m/s] and c_c the crystal molar density [mol/m³].

Figure 1. The concentration profile of each component adjacent to a growing crystal (KDP (1), urea (2) and H₂O (3); bulk solution on the α side, crystal-solution interface i, boundary layer of thickness Δz).

2.1. Calculation method of activities
The activity is defined by the activity coefficient γ and the molality-based concentration m [mol solute/kg solvent] as:

a = γ m   (9)
The activity coefficients of the KDP ions and the urea molecule in the KDP-water-urea system were modelled using the Pitzer method. The equations for the activity coefficients γ of the KDP ions (K⁺ and the dihydrogen phosphate anion, A⁻) and of the urea molecule in the ternary KDP - urea - water system can be written as:

ln(γ_K⁺) = f^γ + 2B_{K,A} m_K + f(B′) + 2λ_{urea,K⁺} m_urea   (10)

ln(γ_A⁻) = f^γ + 2B_{K,A} m_A + f(B′) + 2λ_{urea,A⁻} m_urea   (11)

ln(γ_urea) = 2λ_{urea,K⁺} m_K + 2λ_{urea,A⁻} m_A + 2λ_{urea,urea} m_urea   (12)

where λ_{urea,K⁺ or A⁻} is the ion-molecule interaction coefficient and λ_{urea,urea} the molecule-molecule interaction coefficient, respectively. The quantities in eqs. (10) and (11) are described in the literature (Covington and Ferra, 1994). Using solubility data for the binary KDP-water system and for the ternary KDP-water-urea system, (λ_{urea,K⁺} + λ_{urea,A⁻}) was estimated to be 0.017122 (Mori et al., 2002). The mean ionic activity coefficient of KDP is calculated as:

ln(γ_±) = f^γ + B_{K,A}(m_K⁺ + m_A⁻) + f(B′) + 0.034244 m_urea   (13)
On the other hand, the coefficient λ_{urea,urea} was estimated by the isotonic method (Scatchard, 1938). The equilibrium state of aqueous solutions of potassium chloride and urea is expressed as:

2 m_KCl φ_KCl = m_urea φ_urea   (14)

where φ is the osmotic coefficient. φ_KCl was calculated by the Pitzer equation for the osmotic coefficient (Pitzer and Mayorga, 1973) and φ_urea was derived from the Gibbs-Duhem equation as:

φ_urea = 1 + λ_{urea,urea} m_urea   (15)

After introducing the equilibrium concentration data, the calculated φ_KCl and eq. (15) into eq. (14), the error of eq. (14) was minimised by the least squares method with respect to λ_{urea,urea}. The estimated value of λ_{urea,urea} is −0.02117. Finally, the activity coefficient of urea is calculated as:

ln(γ_urea) = 0.034244 m_KDP − 0.04234 m_urea   (16)
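With the two fitted parameters, the urea part of the model reduces to two short functions; the Pitzer terms f^γ, B_{K,A} and f(B′) needed for the electrolyte in Eq. (13) are omitted, so this sketch covers only Eqs. (9) and (16):

```python
import math

LAMBDA_SUM = 0.017122   # lambda_urea,K+ + lambda_urea,A-  (fitted value)
LAMBDA_UU = -0.02117    # lambda_urea,urea                 (fitted value)

def gamma_urea(m_kdp, m_urea):
    """Eq. (16): ln(gamma_urea) = 0.034244*m_KDP - 0.04234*m_urea."""
    return math.exp(2 * LAMBDA_SUM * m_kdp + 2 * LAMBDA_UU * m_urea)

def activity_urea(m_kdp, m_urea):
    # Activity a = gamma * m, Eq. (9).
    return gamma_urea(m_kdp, m_urea) * m_urea

print(activity_urea(m_kdp=1.5, m_urea=5.0))  # illustrative molalities only
```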
2.2. Estimation of mass transfer coefficient
The Maxwell-Stefan equations contain the mass transfer coefficients. In the film theory the mass transfer coefficient is obtained by:

k_{i,j} = Đ_{i,j}/Δz   (17)

where Đ_{i,j} is the Maxwell-Stefan diffusion coefficient for the component pair i and j [m²/s] and Δz the boundary layer thickness [m]. The ternary diffusion coefficients depend strongly on the solution concentration. In order to calculate accurate mass transfer coefficients, experimental diffusion coefficient data at the concentrations and temperatures of interest are necessary. However, since such data are not available at the concentrations and temperature used in the present study, it was assumed that the ternary diffusion coefficients were equal to the binary diffusion coefficients. The binary diffusion coefficients of the KDP-water pairs and the urea-water pairs were taken from the literature (Mullin and Amatavivadhana, 1967; Cussler, 1997). The values were transformed into Maxwell-Stefan diffusivities using the thermodynamic correction factor. The Maxwell-Stefan diffusivity of the KDP-urea pairs in the ternary system was approximated by the limiting diffusivity, since the mole fraction of water is close to 1, and estimated by the following model proposed by Wesselingh and Krishna (2000):

Đ_{1,2}^{x₃→1} = (Đ_{1,3}^{x₃→1} Đ_{2,3}^{x₃→1})^{1/2}   (18)
where Đ_{1,3}^{x₃→1} and Đ_{2,3}^{x₃→1} are the Maxwell-Stefan binary diffusivities of the KDP-water and urea-water pairs at infinite dilution, respectively. The boundary layer thickness is a function of the flow conditions only, and it was determined from the growth rate experiments in the binary system at different solution velocities.
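Once the activities (Section 2.1) and the mass transfer coefficients (Section 2.2) are available, Eqs. (5)-(6) with the constraints (7) form a two-equation nonlinear system in x_2β and N₁. A sketch using SciPy; the ideal-solution activity stub and every numerical input are placeholders rather than the paper's data:

```python
from scipy.optimize import fsolve

def ms_residuals(u, x1a, x1b, x2a, k12, k13, ca, cb, act):
    """Residuals of Eqs. (5)-(6); u = (x2b, N1).
    act(x1, x2) must return the activities (a1, a2) of KDP and urea."""
    x2b, N1 = u
    x3a = 1.0 - x1a - x2a           # constraint (7a)
    x3b = 1.0 - x1b - x2b           # constraint (7b)
    a1a, a2a = act(x1a, x2a)
    a1b, a2b = act(x1b, x2b)
    r1 = (2.0 * (a1a - a1b) / (a1a + a1b)
          - ((x2a + x2b) / k12 + (x3a + x3b) / k13) * N1 / (ca + cb))
    r2 = (2.0 * (a2a - a2b) / (a2a + a2b)
          + (x2a + x2b) * N1 / (k12 * (ca + cb)))
    return [r1, r2]

act = lambda x1, x2: (x1, x2)       # placeholder ideal-solution activities
x2b, N1 = fsolve(ms_residuals, x0=[0.05, 1e-4],
                 args=(0.030, 0.028, 0.050, 1e-4, 2e-4, 5.5e4, 5.5e4, act))
G = N1 / 1.7e4   # Eq. (8); KDP crystal, ~2338 kg/m3 over 0.136 kg/mol
print(x2b, N1, G)
```

In the paper the activity callback would be the Pitzer model above; the placeholder inputs only make the sketch runnable.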
3. Experimental Procedure
Growth rate measurements were performed in a growth cell on a single KDP crystal. The experimental setup consists of a 1-litre jacketed vessel, a peristaltic pump and two heat exchangers in addition to the growth cell. An optical microscope equipped with a digital camera was mounted above the cell and allowed taking images of the growing crystal at regular intervals. Saturated solutions of KDP in water and in urea solutions of 1.0, 2.5 and 5.0 m were prepared in the vessel. Two levels of activity-based supersaturation, Δa/a = 0.022 and 0.037, were employed for all solutions. Two additional levels of Δa/a, 0.045 and 0.052, were applied only to the pure solution. Before each run, the solution temperature was increased to 50 °C. The solution was pumped to the flow cell through a glass heat exchanger, by which it was cooled to the crystallization temperature of 30 °C. After passing the flow cell, the solution returned to the mother liquor vessel via a heating jacket in which it was heated back up to 50 °C. Supersaturation and the solution velocity were kept constant during each run. The solution velocity was varied from 0.00165 m/s to 0.05 m/s. When the solution had reached thermal equilibrium, a seed crystal with dimensions of about 2.5×2.0×1.0 mm³ was introduced into the cell. After the operating conditions had stabilised, the first image of the growing crystal was registered, and subsequent images were taken at 10-minute intervals over a 6 h period. All the images of the crystals were analysed using the image analysis software AnalySIS in order to determine the normal growth rates of the (101) face.
4. Results and Discussion
4.1. Growth rate of KDP in the binary system at different flow velocities
The growth rate of a KDP crystal in a pure solution was measured as a function of the flow velocity at constant activity-based supersaturation, Δa/a = 0.037. At each condition a steady level of growth was achieved during the experiments. The experimental results indicate that the growth process is diffusion-controlled at flow velocities lower than 0.033 m/s. Thus, the flow velocity v = 0.005 m/s was chosen for the growth measurements in the ternary system. The mass transfer coefficients were determined by applying the binary Maxwell-Stefan equations to the measured growth rates, and subsequently the boundary layer thickness was obtained at the different flow velocities.
4.2. Growth rate of KDP in the ternary system at different supersaturations and urea concentrations
The growth rate of a KDP crystal in the ternary KDP-water-urea system was measured at different activity-based supersaturations and urea concentrations. For the urea concentration of 1.0 m, the growth rate reached a steady level. However, for urea concentrations of 2.5 m and 5.0 m, the crystal growth first stabilized at the same level as the growth in a pure solution at the same supersaturation. After a period of time, the growth declined slowly and stabilized at a second steady level; finally, after some time, the growth slowly levelled down. It is interpreted that at the first stage urea enhanced the growth of the KDP crystal (Kuznetsov et al., 1998). However, it is difficult to discuss the role of urea in the initial growth promotion in the diffusion-controlled process studied here. The observed behaviour at the final stage can be understood as the diffusion coefficient decreasing due to aging of the solution. This phenomenon was also observed in the KDP-water and KDP-1.0 m urea systems, and the effect of solution aging was more significant as the urea concentration increased. The second steady level of growth was taken as the growth rate in the present system. Figure 2 shows the growth rates computed from the Maxwell-Stefan equations (5)-(6) in the ternary system compared with the experimental data. In Fig. 2 the computed values accord with the experiments reasonably well. The deviation might be decreased when the concentration dependence of the diffusivity is taken into account.
[Figure 2 plots growth rate (0 to 8×10⁻⁸ m/s) against supersaturation (0.00-0.10) for pure KDP and for KDP + 1.0 m, 2.5 m and 5.0 m urea.]
Figure 2. Growth rate computed from Maxwell-Stefan equations in the ternary system (lines) compared to the experimental data (symbols).
5. Conclusions
In the present study the diffusion-controlled growth process in the ternary system was modelled by the Maxwell-Stefan equations, and estimation methods for the required parameters were shown. The model was evaluated against single crystal growth measurements in the ternary system. The results showed that the experimental and predicted growth rates were in acceptable agreement.
6. References
Covington, A.K. and Ferra, M.I.A., 1994, A Pitzer mixed electrolyte solution theory approach to assignment of pH to standard buffer solutions, J. Solution Chem., 23, 1.
Cussler, E.L., 1997, Diffusion: Mass transfer in fluid systems, 2nd ed., Cambridge University Press, Cambridge.
Kuznetsov, V.A., Okhrimenko, T.M. and Rak, M., 1998, Growth promoting effect of organic impurities on growth kinetics of KAP and KDP crystals, J. Crystal Growth, 193, 164.
Mori, Y., Partanen, J., Louhi-Kultanen, M. and Kallas, J., 2002, The influence of urea on the solubility and crystal growth of potassium dihydrogen phosphate, Proceedings of ISIC 15, September 15-18, Italy, 1, 353.
Mullin, J.W. and Amatavivadhana, A., 1967, Growth kinetics of ammonium- and potassium-dihydrogen phosphate crystals, J. Appl. Chem., 17, 151.
Myerson, A.S., 1993, Handbook of Industrial Crystallization, 1st ed., Butterworth-Heinemann, Stoneham.
Pitzer, K.S. and Mayorga, G., 1973, Thermodynamics of electrolytes. II. Activity and osmotic coefficients for strong electrolytes with one or both ions univalent, J. Phys. Chem., 77(19), 2300.
Scatchard, G., Hamer, W.J. and Wood, S.E., 1938, Isotonic solutions. I. The chemical potential of water in aqueous solutions of sodium chloride, sulfuric acid, sucrose, urea and glycerol at 25°, J. Am. Chem. Soc., 60, 3061.
Wesselingh, J.A. and Krishna, R., 2000, Mass Transfer in Multicomponent Mixtures, 1st ed., Delft University Press, Delft.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Towards the Atomistic Description of Equilibrium-Based Separation Processes. 1. Isothermal Stirred-Tank Adsorber J. P. B. Mota Departamento de Quimica, Centro de Quimica Fina e Biotecnologia, Faculdade de Ciencias e Tecnologia, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal
Abstract A new molecular simulation technique is developed to solve the perturbation equations for a multicomponent, isothermal stirred-tank adsorber under equilibrium controlled conditions. The method is a hybrid between the Gibbs ensemble and Grand Canonical Monte Carlo methods, coupled to macroscopic material balances. The bulk and adsorbed phases are simulated as two separate boxes, but the former is not actually modelled. To the best of our knowledge, this is the first attempt to predict the macroscopic behavior of an adsorption process from knowledge of the intermolecular forces by combining atomistic and continuum modelling into a single computational tool.
1. Introduction Process modelling is a key enabling technology for process development and design, equipment sizing and rating, and process optimization. However, its success is critically dependent upon accurate descriptions of thermodynamic properties and phase behavior. Molecular simulation has now developed to the point where it can be useful for quantitative prediction of those properties (Council for Chemical Research, 1999). Although there are several molecular simulation methodologies currently available, bridging techniques, i.e. computational methods used to bridge the range of spatial and temporal scales, are still largely underdeveloped. Here, we present a new molecular simulation method that bridges the range of spatial scales, from atomistic to macroscale, and apply it to solve the perturbation equations for a multicomponent, isothermal stirred-tank adsorber under equilibrium controlled conditions.
2. Problem Formulation
Consider an isothermal stirred-tank adsorber under equilibrium-controlled conditions. ε is the bulk porosity (volumetric fraction of the adsorber filled with fluid phase), ε_p is the porosity of the adsorbent, F_i ≥ 0 is the amount of component i added to the adsorber in the inlet stream, and W_i ≥ 0 is the corresponding amount removed in the outlet stream; both F_i and W_i represent amounts scaled with respect to the adsorber volume. The differential material balance for the ith component of an m-component mixture in the adsorber yields

ε dc_i + (1 − ε) ε_p dq_i = dF_i − dW_i   (1)

where c_i and q_i are the concentrations in the fluid and adsorbed phases, respectively. Since the fluid phase is assumed to be perfectly mixed,

dW_i = y_i dW = c_i dG   (2)

where y_i is the mole fraction of component i in the fluid phase and dG is the differential volume of fluid (at the conditions prevailing in the adsorber) removed in the outlet stream, scaled by the adsorber volume. Substitution of Eq. (2) into Eq. (1) gives

ε dc_i + (1 − ε) ε_p dq_i = dF_i − c_i dG   (3)

When Eq. (3) is integrated from state n − 1 to state n, the following material balance is obtained:

ε Δc_i^(n) + (1 − ε) ε_p Δq_i^(n) = ΔF_i^(n) − c̄_i^(n−1/2) ΔG^(n),   Δφ^(n) ≡ φ^(n) − φ^(n−1)   (4)

In Eq. (4) the superscript denotes the state at which the variable is evaluated and

c̄_i^(n−1/2) = (1/ΔG^(n)) ∫₀^{ΔG^(n)} c_i dG   (5)

is the average concentration of component i in the volume ΔG^(n) of fluid removed in the outlet stream. If ΔG^(n) is small enough, then a first-order implicit approximation for Eq. (5) holds,

c̄_i^(n−1/2) = c_i^(n) + O[ΔG^(n)]   (6)

and Eq. (4) can be approximated as

[ε + ΔG^(n)] c_i^(n) + (1 − ε) ε_p q_i^(n) = ε c_i^(n−1) + (1 − ε) ε_p q_i^(n−1) + ΔF_i^(n)   (7)

Given that the inlet value ΔF_i^(n) is an input parameter, the terms on the r.h.s. of Eq. (7) are known quantities. To simplify the notation, the r.h.s. of Eq. (7) is condensed into a single parameter denoted by w_i and the superscripts are dropped. Eq. (7) can be written in this shorthand notation as

(ε + ΔG) c_i + (1 − ε) ε_p q_i = w_i   (8)

This equation requires a closure condition, which consists of fixing the value of either ΔG or the pressure P at the new state. Here we show that Eq. (8), together with the conditions of thermodynamic equilibrium for an isothermal adsorption system (equality of chemical potentials between the two phases¹), can be solved using the Gibbs ensemble Monte Carlo (GEMC) method in the modified form presented in the next section.
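The right-hand side w_i of Eq. (8) is purely macroscopic bookkeeping between successive states; a minimal sketch (NumPy), with ε and ε_p taken from the application example of Section 4 and a per-step feed amount suggested by note (a) of Table 1:

```python
import numpy as np

def rhs_w(c_prev, q_prev, dF, eps=0.45, eps_p=0.6):
    """w_i = eps*c_i^(n-1) + (1-eps)*eps_p*q_i^(n-1) + dF_i^(n), Eqs. (7)-(8)."""
    return eps * c_prev + (1.0 - eps) * eps_p * q_prev + dF

# Ternary CH4/C2H6/H2 feed of Section 4 (30/50/20 mol-%), 400 mol/m3 added
dF = 400.0 * np.array([0.30, 0.50, 0.20])
w = rhs_w(c_prev=np.zeros(3), q_prev=np.zeros(3), dF=dF)
print(w)   # target constants handed to the GEMC step for the new state
```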
3. Simulation Method
In the GEMC method (Panagiotopoulos, 1987) the two phases are simulated as two separate boxes, thereby avoiding the problems associated with the direct simulation of the interface between the two phases. The system temperature is specified in advance, and the number of molecules of each species i in the adsorbed phase, N_iP, and in the bulk, N_iB, may vary according to the constraint N_iB + N_iP = N_i, where N_i is fixed. If Eq. (8) is rewritten in terms of N_iB and N_iP, the following expression is obtained:

N_iB + N_iP = C_i = [N_Av V_P / ((1 − ε) ε_p)] w_i   (9)

where N_Av is Avogadro's number and V_P is the volume of the box simulating the adsorbed phase. In Eq. (9) the value of C_i is expressed as a function of V_P instead of the volume V_B of the box simulating the bulk fluid. The reason for this is that V_P is always fixed, whereas, as we shall show below, V_B must be allowed to fluctuate during the simulation when the pressure is an input parameter. Obviously, for Eq. (9) to be valid the values of V_B and V_P must be chosen in accordance with the relative dimensions of the physical problem, i.e.

V_B/V_P = (ε + ΔG)/((1 − ε) ε_p)   (10)

For instance, with ε = 0.45, ε_p = 0.6 (the values used in Section 4) and ΔG = 0, Eq. (10) gives V_B/V_P = 0.45/(0.55 × 0.6) ≈ 1.36.

¹ When one of the phases is an adsorbed phase, equality of pressure is no longer a condition of thermodynamic equilibrium. This is because the pressure within the pore of an adsorbent is tensorial in nature, whereas in the bulk fluid the pressure is a scalar.
Since the GEMC method inherently conserves the total number of molecules of each species, Eq. (9) is automatically satisfied by every sampled configuration provided that each C_i turns out to be an integer number. This is why GEMC is the natural ensemble to use when solving Eq. (9). Unfortunately, in general it is not possible to size V_B and V_P according to Eq. (10) so that each C_i is an integer number. To overcome this problem, Eq. (9) is satisfied statistically by allowing N_i to fluctuate around the target value C_i so that the ensemble average gives

⟨N_i⟩ = C_i   (11)
This approach is different from that employed in a conventional GEMC simulation, where N_i is fixed. When ΔG is an input parameter, the sizes of the two simulation boxes are fixed and their volumes are related by Eq. (10). On the other hand, when the pressure of the bulk fluid is imposed, the volume V_B must be allowed to fluctuate during the simulation so that on average the fluid contained within it is at the desired pressure. Once the ensemble average ⟨V_B⟩ is determined, the value of ΔG follows from Eq. (10):

ΔG = (1 − ε) ε_p ⟨V_B⟩/V_P − ε   (12)

It is shown in detail elsewhere (Mota, 2002) that if an equation of state for the fluid phase is known, the bulk box does not have to be explicitly modelled: computations on the bulk box amount to just updating the value of N_iB as the configuration changes. Thermodynamic equilibrium between the two boxes is achieved by allowing them to exchange particles and by changing the internal configuration of volume V_P. The probability of acceptance of the latter moves (molecule displacement, rotation, or conformational change) is the same as for a conventional canonical simulation:

min{1, exp(−β ΔU)}   (13)
where β = 1/(k_B T), with k_B the Boltzmann constant, and ΔU is the internal energy change resulting from the configurational move. The transfer of particles between the two boxes enforces equality of chemical potentials. The probabilities of accepting a trial move in which a molecule of type i is transferred to or from volume V_P are, respectively,

acc(N_iP → N_iP + 1; N_iB → N_iB − 1) = min{1, [β V_P f_i(N_B, 0)/(N_iP + 1)] exp(−β[U(s^{N_P+1}) − U(s^{N_P})])}   (14)

acc(N_iP → N_iP − 1; N_iB → N_iB + 1) = min{1, [N_iP/(β V_P f_i(N_B, 1))] exp(−β[U(s^{N_P−1}) − U(s^{N_P})])}   (15)

where U(s^{N_P±1}) is the internal energy of configuration s^{N_P±1} in volume V_P, N_B = [N_1B, ..., N_mB], and f_i(N_B, k) is the fugacity of species i in a gas mixture at temperature T and mole-fraction composition

(N_1B/(N_B + k), ..., (N_iB + k)/(N_B + k), ..., N_mB/(N_B + k))   (16)
These acceptance rules imply that a box is first chosen with equal probability, then a species is chosen with a fixed (but otherwise arbitrary) probability, and finally a molecule of that species is randomly selected for transfer to the other box. How the equation of state is actually employed to compute f_i depends on the type of problem being solved. If ΔG is an input parameter, V_B is fixed throughout the simulation and the gas mixture is further specified by its number density ρ_{N_B+k} = (N_B + k)/V_B. If, on the other hand, the pressure is fixed, its value defines the state of the mixture. The statistical mechanical basis for Eqs. (14) and (15) is discussed elsewhere (Mota, 2002). All that remains to complete the simulation procedure is to generate trial configurations whose statistical average obeys Eq. (11). Let us consider how to do this. First, note that the maximum number of molecules of species i that may be present in the simulation system without exceeding the material balance given by Eq. (9) is obtained by truncating C_i to an integer number, which we denote by int(C_i). The remainder δ_i (0 ≤ δ_i < 1), which must be added to int(C_i) to get C_i, is

δ_i = C_i − int(C_i)   (17)

To get the best statistics, N_i = N_iB + N_iP must fluctuate with the smallest amplitude around the target value C_i, which is the case when N_i can take only the two integer values int(C_i) or int(C_i) + 1. It is straightforward to derive that for Eq. (11) to hold, the probability density of finding the system in one of the two configurations must be

N{N_i → int(C_i)} ∝ 1 − δ_i,   N{N_i → int(C_i) + 1} ∝ δ_i   (18)
In order to sample this probability distribution, a new type of trial move must be performed, which consists of an attempt to change the system to a configuration with int(C_i) or int(C_i)+1 particles. This move should not be confused with the particle-exchange move given by Eqs. (14) and (15); here, a particle is added to or removed from the system according to the probability given by Eq. (18). It is highly recommended that the box for insertion/removal of the molecule always be the bulk box (except for the rare cases in which N_iB becomes zero). This choice is most suited for adsorption from the gas phase where, in general, the bulk phase is much less dense than the adsorbed phase and, therefore, more permeable to particle insertions. Furthermore, given that the bulk box is not actually modelled, the molecule insertion/removal amounts to just updating the value of N_iB.
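To make the sampling of Eq. (18) concrete, the following minimal Python sketch illustrates one possible implementation of this trial move. The function and variable names are hypothetical and the energy evaluation is left as a stub, so this is an illustrative sketch of the scheme described above rather than the authors' actual code.

```python
import math
import random

def fluctuation_move(i, N, C, beta, trial_energy_change):
    """Attempt to switch species i between int(C_i) and int(C_i)+1 total
    molecules so that <N_i> = C_i, as required by Eq. (11).

    N -- dict: current total number of molecules of each species
    C -- dict: target (generally non-integer) values C_i from Eq. (9)
    trial_energy_change -- stub returning the energy change of the
        insertion/removal (zero if only the bulk box, described by an
        equation of state, is touched)
    """
    n_lo = int(C[i])           # int(C_i)
    delta = C[i] - n_lo        # remainder delta_i of Eq. (17)
    if delta <= 0.0:           # C_i integer: N_i stays fixed, no move needed
        return

    # Propose the *other* of the two allowed states int(C_i), int(C_i)+1.
    proposed = n_lo + 1 if N[i] == n_lo else n_lo

    # Ratio of the target probabilities of Eq. (18):
    # p(int(C_i)+1) / p(int(C_i)) = delta_i / (1 - delta_i).
    if proposed == n_lo + 1:
        ratio = delta / (1.0 - delta)
    else:
        ratio = (1.0 - delta) / delta

    dU = trial_energy_change(i, proposed - N[i])
    if random.random() < min(1.0, ratio * math.exp(-beta * dU)):
        N[i] = proposed        # accept: add/remove one molecule (bulk box)
```

A Metropolis acceptance on this two-state distribution gives the ensemble average n_lo(1 - delta_i) + (n_lo + 1) delta_i = C_i, which is exactly Eq. (11).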
4. Application Example
Due to lack of space, the few results presented here are primarily intended to demonstrate the applicability of the proposed method. The pore space of the adsorbent is assumed to consist of slit-shaped pores of width 15 Å, with parameters chosen to model activated carbon. The porosity values are fixed at \epsilon = 0.45 and \epsilon_p = 0.6. The feed stream is a ternary gas mixture of CH4 (30%) / C2H6 (50%) / H2 (20%). The vapor-phase fugacities were computed from the virial equation truncated after the second virial coefficient, using coefficients taken from Reid et al. (1987).
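As an illustration of this fugacity calculation, the sketch below evaluates mixture fugacities from the pressure-explicit virial equation truncated at the second coefficient, for which ln phi_i = (2 sum_j y_j B_ij - B_m) P / (R T) with B_m = sum_ij y_i y_j B_ij. The numerical B_ij values are placeholders, not the coefficients actually used in the paper.

```python
import math

R = 8.314  # J mol^-1 K^-1

def fugacities(y, B, P, T):
    """Species fugacities f_i = y_i * phi_i * P from the virial equation
    truncated at the second coefficient (pressure-explicit form).

    y -- mole fractions; B -- matrix of second virial cross coefficients
    B_ij in m^3/mol; P in Pa; T in K.
    """
    n = len(y)
    B_m = sum(y[i] * y[j] * B[i][j] for i in range(n) for j in range(n))
    f = []
    for i in range(n):
        ln_phi = (2.0 * sum(y[j] * B[i][j] for j in range(n)) - B_m) * P / (R * T)
        f.append(y[i] * math.exp(ln_phi) * P)
    return f

# Placeholder coefficients (illustrative only, not the values from
# Reid et al., 1987); order: CH4, C2H6, H2, roughly near-ambient T.
B = [[-4.2e-5, -7.5e-5, 1.5e-5],
     [-7.5e-5, -1.8e-4, 1.0e-5],
     [ 1.5e-5,  1.0e-5, 1.4e-5]]
print(fugacities([0.3, 0.5, 0.2], B, 10e5, 300.0))
```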
Table 1: Input parameters and output variables employed in each phase of the simulated operation of the adsorber.

Phase | Input parameters                              | Output variables | # steps
I     | \Delta A_G^{(n)} = 0; \Delta F^{(n)} (a)      | P^{(n)}          | 10
II    | P^{(n)} = 10 bar                              | A_G^{(n)}        | 36
III   | \Delta F^{(n)} = 0; \Delta A_G^{(n)} = 100/c^{(n-1)} (b) | P^{(n)} | 21

Notes: c = \sum_i C_i; (a) nearly equivalent to setting \Delta F^{(n)} = 400 mol/m^3; (b) nearly equivalent to setting \Delta W^{(n)} = 90 mol/m^3.
CH4 and C2H6 were modelled using the TraPPE united-atom potential (Martin and Siepmann, 1998). The Lennard-Jones parameters for H2 were taken from Turner et al. (2001). The potential cutoff was set at 14 Å, with no long-range corrections applied. The interactions with the carbon walls were accounted for using the 10-4-3 Steele potential (Steele, 1974), with the usual parameters for carbon. The simulations were equilibrated for 10^4 Monte Carlo cycles, where each cycle consists of N = N_B + N_P attempts to change the internal configuration of volume V_P (equally partitioned between translations, rotations and conformational changes) and N/3 attempts to transfer molecules between boxes. Each particle-transfer attempt was followed by a trial move to adjust the total number of molecules of that type according to Eq. (18). The production periods consisted of 3 x 10^4 Monte Carlo cycles. Standard deviations of the ensemble averages were computed by breaking the production runs into five blocks.
The simulation reported here consists of the following sequential operating procedure applied to an initially evacuated adsorber: (I) charge up to P = 10 bar, (II) constant-pressure feed, and (III) discharge down to P = 1.25 bar. This example encompasses the major steps of every cyclic batch adsorption process for gas separation in which regeneration of the bed is accomplished by reducing the pressure at essentially constant temperature, as is the case in pressure swing adsorption. The input parameters and output variables for each phase are listed in Table 1. Figure 1 shows the simulated pressure profile plotted as a function of either F or W for each phase. The corresponding gas-phase mole fraction profiles are plotted in Figure 2.
Figure 1: Simulated pressure profile as a function of 10^-3 x F' (charge), 10^-3 x F'' (feed) and 10^-3 x W''' (withdrawal), all in mol/m^3. (I) charge; (II) constant-pressure feed; (III) discharge.
Figure 2: Simulated gas-phase mole fraction profiles. (●) CH4, (▲) C2H6, (■) H2.

During charge the adsorbed phase is enriched in C2H6 and CH4, which are the more strongly adsorbed components, leaving most of the H2 in the gas phase. Then, feed is introduced at constant pressure until the composition of the adsorbed phase is in equilibrium with that of the feed stream. When this state is attained, the adsorber can be regarded as fully saturated, since there is no further accumulation in the adsorber and the composition of the product stream is equal to that of the feed. During discharge, the adsorber is depressurized to give a product stream rich in the more strongly adsorbed component (C2H6).
5. Conclusions
The theoretical approach presented here represents a successful attempt to develop an ab initio, or first-principles, computational methodology to predict the macroscopic behavior of an adsorption process from knowledge of the intermolecular forces and the structural characteristics of the adsorbent. The method is not restricted to adsorption processes and is equally applicable, for example, to vapor-liquid equilibria if the adsorption box is replaced by a box simulating the bulk liquid. In this case the simulation would be very much like the traditional flash calculation with an imposed operating temperature. We are currently extending the methodology to handle the more general case of a nonisothermal system. The use of molecular simulation techniques, such as the one presented here, could grow in importance, gradually supplanting many empirical constitutive models that are used in process-scale calculations today.
Acknowledgement. Financial support for this work has been provided by the European Commission under contract ENK6-CT2000-00053.
6. References
Panagiotopoulos, A.Z., 1987, Molec. Phys. 61, 813.
Mota, J.P.B., 2002, J. Chem. Phys. (submitted).
Reid, R.C., J.M. Prausnitz and B.E. Poling, 1987, The Properties of Gases and Liquids, 4th ed., McGraw-Hill, Singapore.
Steele, W.A., 1974, The Interaction of Gases with Solid Surfaces, Pergamon, Oxford.
Martin, M.G. and J.I. Siepmann, 1998, J. Phys. Chem. B 102, 2569.
Council for Chemical Research (CCR), 1999, Technological Roadmap for Computational Chemistry (posted on the internet site of the CCR).
Turner, C.H., J.K. Johnson and K.E. Gubbins, 2001, J. Chem. Phys. 114, 1851.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Dynamic Modelling of an Adsorption Storage Tank Using a Hybrid Approach Combining Computational Fluid Dynamics and Process Simulation
J. P. B. Mota,^a A. J. S. Rodrigo,^a I. A. A. C. Esteves,^a M. Rostam-Abadi^b,c
^a Departamento de Química, Centro de Química Fina e Biotecnologia, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal
^b Department of Civil and Environmental Engineering, University of Illinois, Urbana, IL 61801, USA
^c Illinois State Geological Survey, Champaign, IL 61820, USA
Abstract This paper reports the successful integration of a commercial CFD package within the process simulator of an onboard adsorption storage tank for methane fuelled vehicles. The two solvers run as independent processes and work with non-overlapping portions of the computational domain. The codes exchange data at the boundary interface of their domains to ensure continuity of the solution and its gradient. A software interface dynamically suspends or activates each process as necessary, and is responsible for data exchange and process synchronization. The hybrid computational tool is able to accurately simulate and optimize the discharge dynamics of a new tank design.
1. Introduction
Computational Fluid Dynamics (CFD) and Process Simulation are important tools for the design and optimization of chemical processes (Bezzo et al., 2000). CFD is a particularly powerful tool for the study of fluid dynamics and heat transfer with a detailed account of complex equipment geometry. Nonetheless, despite many recent improvements, CFD's ability to describe the physics or solve the underlying numerical problems in several application areas is still limited. Adsorption technology is an application area where the numerical methods employed in most CFD packages are inadequate to solve the strongly coupled nonlinearities introduced by the presence of the adsorbed phase. Here, we report the successful integration of a commercial CFD package (FLUENT 5.4, by Fluent Inc.) within the process simulator of an onboard adsorption storage tank for methane-fuelled vehicles. The combined tool accurately simulates and optimizes the discharge dynamics of a new tank design incorporating an external heating jacket.
Although the discharge of an on-board methane adsorption storage tank is typically a slow process, under realistic discharge conditions the consumed heat of desorption is only partially compensated by the wall thermal capacity and by the heat transferred from the surrounding environment (Chang and Talu, 1996; Mota, 1999). It is also unfeasible to operate the reservoir under sub-atmospheric pressure, since excessive compression hardware would be necessary to extract and boost the gas pressure. As a result, the reservoir cools during discharge, which leads to a lower net deliverable capacity than that for isothermal operation, because more gas is retained in storage at depletion pressure. By partially transferring heat from the hot exhaust gas downstream of the combustion engine to the adsorbent bed using a heating jacket, the average bed temperature is increased, thereby reducing the residual amount of gas left in storage.
Figure 1. Experimental exhaust-gas temperature versus engine speed (rpm) for the Renault engine (Shiells et al., 1989): diesel and compressed CH4.
Figure 2. Schematic drawing of an on-board jacketed reservoir for methane storage by adsorption.
As shown in Figure 1, the exhaust gas leaves the combustion chamber at very high temperature. Although much of its enthalpy is lost to the environment in the exhaust tube, in a well-designed system exhaust gas at relatively high temperature is readily available to exchange energy with the tank. There are many shell-and-tube heat exchanger designs that would be efficient at transferring heat to the adsorbent bed. Here, we have selected the simplest configuration that does not require internal modification of a conventional tank. The prototype design is illustrated in Figure 2. It consists of jacketing the storage tank so that heat is transferred to it by forced convection from the exhaust gas as it flows along the annular space of the jacket. The jacket eliminates the need for internal coils, yet gives a better overall heat-transfer coefficient than external coils.
2. Simulation Method
Our process simulation tool ANGTANK (Mota et al., 1997, 2001) and FLUENT address different regions of the adsorption tank, i.e., the computational domains employed by the two codes do not overlap. ANGTANK models the nonisothermal adsorption dynamics in a cylindrical packed bed of adsorbent, whereas FLUENT models the hydrodynamics and heat transfer of the exhaust gas flowing in the annular space of the jacket and also takes into account heat transfer in the cylinder wall. Both codes employ two-dimensional axially-symmetric cylindrical coordinates. The different regions of the computational domain are depicted in Figure 3.
ANGTANK employs the method-of-lines approach (Schiesser, 1991) to solve the conservation equations taking into account the adsorption dynamics. The spatial derivatives are discretized using the control-volume method (Patankar, 1980) and converted into a large system of differential-algebraic equations (DAEs), to which an efficient stiff DAE solver is applied. In the early stages of the project the DAE system was solved using DASSL (Brenan et al., 1989), but it was later replaced by the more advanced DAEPACK numeric component DSL48S (Tolsma and Barton, 2000). DSL48S contains a number of extensions to DASSL, including exploitation of sparsity using the Harwell library MA48 routines and an efficient staggered-corrector sensitivity algorithm. Furthermore, the additional information required by DSL48S (notably the sparse Jacobian matrix, analytical partial derivatives with respect to model parameters, and sparsity information) is generated automatically with other DAEPACK components.

Figure 3. Distribution of the computational domain among the two solvers: (a) physical domain in two-dimensional axially-symmetric cylindrical coordinates (exhaust gas inlet, methane inlet/outlet, jacket, internal wall); (b) computational domain handled by ANGTANK; (c) computational domain handled by FLUENT.

To render the hybrid solution procedure computationally more efficient, the numerical grids employed by FLUENT and ANGTANK do not have to match at their interface. To allow for this flexibility, data is exchanged between grids using an interpolation procedure that is consistent with the control-volume-based technique employed by both solvers. The two codes run as independent processes and communicate through shared memory. The software interface, which has been implemented as a user-defined function in FLUENT, dynamically suspends or activates each process as necessary, and is responsible for data exchange and process synchronization. This strategy leads to an optimum allocation of CPU usage.
The two codes interact with each other as follows. Suppose that we wish to advance the solution from the current time t_n to time t_{n+1}. At the start of the step FLUENT is active whereas ANGTANK is suspended. Before computing the new solution, FLUENT first updates its boundary conditions. To do this, it provides ANGTANK with the wall-temperature profile data, T_w^{(n)}. These data are defined on the boundary interface between the two computational domains, which is identified as 'internal wall' in Figure 3. Once this has been done, FLUENT is suspended and ANGTANK is reactivated. The latter interpolates the data from FLUENT's grid to its own grid. It then advances the solution in the adsorbent bed from t_n to t_{n+1} using the newly received T_w^{(n)} data. Before being suspended again, ANGTANK computes the packed-bed-side wall heat flux data, -n_w · k_e ∇T^{(n+1)}, and interpolates them from its grid to FLUENT's grid. It also updates the value of the exhaust gas flow rate, which is another input to FLUENT, and then sends the new boundary data to FLUENT. Now, the CFD code can compute the solution at t_{n+1}. The data exchange between ANGTANK and FLUENT ensures continuity of temperature and heat flux along the outer surface of the cylinder wall, and is the mechanism by which the two solutions are synchronized. An algorithm describing the software interface is provided in Figure 4; the function calls given there are specific to the Windows 2000 operating system and handle the notification of events across process boundaries.
Figure 4. Algorithm of the software interface used to manage data exchange and synchronization of ANGTANK and FLUENT. The two processes alternate as follows: FLUENT stores the face-center values of T_w^{(n)} in shared memory and signals the event hTwOK, then blocks on hHFwOK; ANGTANK, released by hTwOK, retrieves T_w^{(n)}, interpolates it from FLUENT's grid to its local grid, advances the solution inside the tank from t_n to t_{n+1}, computes the face-center values of -n_w · k_e ∇T^{(n+1)}, interpolates them back to FLUENT's grid, stores them in shared memory and signals hHFwOK; FLUENT then advances the solution in the annular space from t_n to t_{n+1}, sets n -> n+1, and the cycle repeats (WaitForSingleObject/SetEvent calls handle the cross-process notification).
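The original interface relies on Win32 events; as a rough, self-contained analogue, the sketch below reproduces the same alternating hand-off with Python's multiprocessing primitives. Everything here (names, the dummy solvers, the shared-array sizes) is illustrative and not taken from ANGTANK or FLUENT.

```python
from multiprocessing import Process, Event, Array

def cfd_side(tw_ok, hf_ok, T_w, q_w, n_steps):
    """Plays the role of FLUENT: publishes wall temperatures, waits for fluxes."""
    for n in range(n_steps):
        for i in range(len(T_w)):
            T_w[i] = 353.0           # dummy wall-temperature profile T_w^(n)
        tw_ok.set()                  # analogue of SetEvent(hTwOK)
        hf_ok.wait(); hf_ok.clear()  # analogue of WaitForSingleObject(hHFwOK)
        # ...advance annular-space solution from t_n to t_{n+1} using q_w...

def bed_side(tw_ok, hf_ok, T_w, q_w, n_steps):
    """Plays the role of ANGTANK: consumes temperatures, returns heat fluxes."""
    for n in range(n_steps):
        tw_ok.wait(); tw_ok.clear()
        # ...interpolate T_w to local grid, advance bed solution t_n -> t_{n+1}...
        for i in range(len(q_w)):
            q_w[i] = -120.0          # dummy wall heat flux -n_w . k_e grad(T)
        hf_ok.set()

if __name__ == "__main__":
    tw_ok, hf_ok = Event(), Event()
    T_w, q_w = Array('d', 50), Array('d', 50)   # shared-memory buffers
    p1 = Process(target=cfd_side, args=(tw_ok, hf_ok, T_w, q_w, 10))
    p2 = Process(target=bed_side, args=(tw_ok, hf_ok, T_w, q_w, 10))
    p1.start(); p2.start(); p1.join(); p2.join()
```

The event pair enforces the strict alternation described in the text: neither process can advance a time step until the other has deposited its boundary data in shared memory.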
3. Results and Discussion
The hybrid solution procedure described in the previous section is computationally more demanding than one that does not rely on the CFD package to predict the heat transfer from the exhaust gas. In fact, this simpler approach was adopted in the early stages of the project: the heat transfer process was modelled using a mean heat transfer coefficient estimated from correlations for convective heat transfer in annuli. However, it was soon realized that this method has a high degree of uncertainty when the heat transfer process takes place under unsteady-state conditions and when the thermal entry length spreads over an appreciable extent of the domain. These conditions are always met in the application under study.
The heat capacity, viscosity, and flow rate of the exhaust gas can be related to the methane discharge flow rate F by a simple combustion model. Fuel (methane) and oxidant (air) are presumed to combine in a single step to form a product:

CH4 + a(2 O2 + 7.52 N2) = CO2 + 2 H2O + 2(a - 1) O2 + 7.52a N2, \qquad (1)

where a = 1.2 is the air-fuel ratio. Assuming that this model is a good approximation to the real combustion process, the Reynolds number for flow of the exhaust gas in the annular space of the jacket is given by

Re_e = \frac{M_{CO2} + 2 M_{H2O} + 2(a-1) M_{O2} + 7.52a\, M_{N2}}{\pi (R + e_w)\, \mu}\; F, \qquad (2)

where R is the cylinder radius, e_w is the wall thickness, \mu is the exhaust-gas viscosity, and M_{CO2}, ..., M_{N2} are the molecular weights. Equation (2) shows that Re_e is independent of the thickness of the annular space of the jacket, and further analysis reveals that under normal discharge conditions the flow in the jacket is laminar. The fact that turbulence modelling is unnecessary reinforces our confidence in the accuracy of the heat transfer data obtained using FLUENT.
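As a quick numerical check of Eq. (2), the sketch below evaluates Re_e for one assumed operating point; the flow rate, dimensions and viscosity are illustrative guesses, not the paper's design values.

```python
import math

# Molecular weights in kg/mol
M = {"CO2": 0.04401, "H2O": 0.01802, "O2": 0.032, "N2": 0.02802}

def reynolds_exhaust(F, R, e_w, mu, a=1.2):
    """Eq. (2): Reynolds number of the exhaust gas in the jacket annulus.
    F  -- methane discharge flow rate in mol/s
    R  -- cylinder radius in m; e_w -- wall thickness in m
    mu -- exhaust-gas viscosity in Pa.s; a -- air-fuel ratio
    """
    mass_per_mol_ch4 = (M["CO2"] + 2 * M["H2O"]
                        + 2 * (a - 1) * M["O2"] + 7.52 * a * M["N2"])
    return mass_per_mol_ch4 * F / (math.pi * (R + e_w) * mu)

# Illustrative values: slow multi-hour discharge, F ~ 0.02 mol/s,
# R = 0.10 m, e_w = 5 mm, mu ~ 3e-5 Pa.s at exhaust temperature.
print(reynolds_exhaust(F=0.02, R=0.10, e_w=0.005, mu=3e-5))  # ~700, laminar
```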
The hybrid computational tool has been successfully employed to size, simulate and optimize the new tank design. As an illustration of the results obtained, we compare in Figure 5 the temperature field for a conventional storage cylinder (left) with that for the new design incorporating a jacket (right).

Figure 5. Comparison of the temperature field for a conventional cylinder (left) and that for a jacketed tank (right) with the same geometry, during a three-hour discharge.

The exhaust gas inlet temperature is 80°C. At the end of the discharge the mean temperature in the standard cylinder has dropped by 27°C, whereas in the jacketed cylinder it has increased by 8°C above its initial value. Given that the jacket takes up space that in a conventional storage tank can be filled with adsorbent, the performance of the proposed prototype should be compared with those of two standard tanks: one having the same volume as the prototype and the other having the same weight. This leads to two different values of the dynamic efficiency, respectively η_V and η_w, for the same exhaust gas inlet temperature. These two cases are suitable benchmarks for mobile applications in which the limiting constraint is, respectively, volume (η_V) or weight (η_w) of storage.
Figure 6 shows the influence of discharge duration on the exhaust temperature required for increasing the net deliverable capacity of the storage cylinder to the isothermal performance level (η = 1). The results presented are for the two benchmark cases and refer to two different values, e_c = 2 mm and e_c = 5 mm, of the thickness of the annular space of the jacket. As expected, higher exhaust temperatures are necessary to attain the equivalent of isothermal performance when the comparison is made on a volume basis than on a weight basis. If the discharge duration is increased, which is equivalent to decreasing the fuel discharge flow rate, then isothermal performance can be reached with lower exhaust temperatures. Decreasing e_c improves the performance of the storage cylinder because heat transfer to the carbon bed is enhanced. This increase in performance is more pronounced when the comparison between prototype and regular cylinder is made on a volume basis.
Figure 6. Required exhaust temperature to attain isothermal performance (η = 1) as a function of discharge duration (h), for e_c = 5 mm and e_c = 2 mm, on a weight basis and on a volume basis. R/L = 10/74; e_w = 5 mm; thermal conductivity of the adsorbent bed k_e = 2 x 10^-3 W cm^-1 K^-1.

The energy demand of a city vehicle equipped with 3 cylinders like the one considered in this work and travelling at cruising speed gives a discharge duration of about 3 hours. Figure 6 shows that, in this case, the required exhaust temperatures to attain the isothermal performance level are in a perfectly feasible range (80-100°C).
4. Conclusions The case study presented here shows that computational fluid dynamics and process simulation technologies are highly complementary, and that there are clear benefits to be gained from a close integration of the two.
5. References
Bezzo, F., S. Macchietto and C.C. Pantelides, 2000, Comp. Chem. Eng. 24, 653.
Brenan, K.E., S.L. Campbell and L.R. Petzold, 1989, Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, Elsevier, New York.
Chang, K.J. and O. Talu, 1996, App. Thermal Eng. 16, 359.
Mota, J.P.B., E. Saatdjian, D. Tondeur and A.E. Rodrigues, 1997, Comp. Chem. Eng. 21, 387.
Mota, J.P.B., 1999, AIChE J. 45, 986.
Mota, J.P.B., A.E. Rodrigues, E. Saatdjian and D. Tondeur, 2001, in Activated Carbon Compendium: Dynamics of Natural Gas Adsorption Storage Systems Employing Activated Carbon, Ed. H. Marsh, Elsevier, Amsterdam.
Patankar, S.V., 1980, Numerical Heat Transfer and Fluid Flow, McGraw-Hill, New York.
Schiesser, W.E., 1991, The Numerical Method of Lines Integration of Partial Differential Equations, Academic Press, San Diego.
Shiells, W., P. Garcia, S. Chanchaona, J.S. McFeaters and R.R. Raine, 1989, SAE paper No. 892137.
Tolsma, J.E. and P.I. Barton, 2000, Ind. Eng. Chem. Res. 39, 1826.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Online HAZOP Analysis for Abnormal Event Management of Batch Process Fangping Mu, Venkat Venkatasubramanian* Laboratory for Intelligent Process Systems, School of Chemical Engineering Purdue University, West Lafayette, IN 47907, USA
Abstract Hazard and operability analysis (HAZOP) is a widely used process hazard analysis method for batch processes. However, even though HAZOP analysis considers various potential accident scenarios and produces results that contain the causes, consequences and operator options for these scenarios, these are not generally available or used when those emergencies occur in the plant. In this work, we describe an approach that integrates multivariate statistical process monitoring and HAZOP analysis for abnormal event management. The framework includes three major parts: process monitoring and fault detection based on multiway principal component analysis, automated online HAZOP analysis module and a coordinator. A case study is given to illustrate the features of the system.
1. Introduction
Batch and semi-batch processes play an important role in the chemical industry. They are widely used in the production of many chemicals such as biochemicals, pharmaceuticals, polymers and specialty chemicals. A variety of approaches to a safe batch process have been developed. Process Hazard Analysis (PHA) and Abnormal Event Management (AEM) are two different, but related, methods that are used by the chemical industry to improve the design and operation of a process. Hazard and operability (HAZOP) analysis is a widely used PHA method. AEM involves diagnosing the abnormal causal origins of adverse consequences, while PHA deals with reasoning about adverse consequences from abnormal causes. When an abnormal event occurs during plant operation, the operator needs to find the root cause of the abnormality. Since a design-stage safety analysis methodology, such as HAZOP analysis, overlaps with many of the issues faced by monitoring and diagnostic systems, it seems reasonable to expect some re-use of information. Heino et al. (1994) provided a HAZOP documentation tool to store safety analysis results and make the results relevant to the monitored situation available to operators. Dash and Venkatasubramanian (2000) proposed a framework that uses the offline HAZOP results of the automatic HAZOP tool HAZOPExpert in the assessment of abnormal events. In all of these works, off-line HAZOP results are used in the assessment of abnormal events. This approach has two main drawbacks. Firstly, it suffers from problems related to the management and updating of HAZOP results when the plant is changed.
Corresponding author. Tel: 1-765-494-0734. E-mail: [email protected]
Secondly, the worst-case scenario is considered during offline HAZOP analysis. During on-line application, when an abnormal event occurs, many on-line measurements are available, and these measurements can be used to adapt the hazard models for efficient abnormal event management. The approach based on off-line generated HAZOP results ignores online measurements.
In this work, we describe an approach to integrate multivariate statistical process monitoring and online HAZOP analysis for abnormal event management of batch processes. The framework consists of three main parts: process monitoring and fault detection, an automated online HAZOP analysis module, and a coordinator. Multiway PCA is used for batch process monitoring and fault detection. When an abnormal event is detected, a signal-to-symbol transformation technique based on variable contributions is used to convert quantitative sensor readings into qualitative states. Online HAZOP analysis is based on PHASuite, an automated HAZOP analysis tool, to identify the potential causes, adverse consequences and potential operator options for the identified abnormal event.
2. Multiway Principal Component Analysis (PCA) for Batch Process Monitoring
2.1. Multiway PCA (MPCA)
Monitoring and control are crucial tasks in the operation of a batch process. Multivariate Statistical Process Monitoring (MSPM) methods, such as multiway PCA, have become popular in recent years for monitoring batch processes. The data from a historical database of past successful batch runs generate a three-dimensional array X(I x J x K), where I is the number of batches, J is the number of variables and K is the number of sampling times in a given batch. The array X is unfolded and properly scaled to a two-dimensional matrix X(I x JK). PCA is applied to generate the scores T, loading matrix P and residuals E as X = T P^T + E (Nomikos and MacGregor, 1994). This model can also be used to monitor process performance online. At each sample instance during the batch operation, X_new(J x K) is constructed by using all the data collected up to the current time, and the remaining part of X_new is filled up assuming that the future deviations from the mean trajectories will remain, for the rest of the batch duration, at their current values. X_new is scaled and unfolded to x_new(1 x JK). The scores and residuals are generated as

t_new = x_new P, \qquad e_new = x_new - t_new P^T.
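To make the unfolding and the on-line fill-in rule concrete, here is a minimal NumPy sketch, including the two monitoring statistics discussed next; the scaling choice, the number of components and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def fit_mpca(X3, R):
    """Fit an MPCA model from I successful batches.
    X3 -- array of shape (I, J, K): batches x variables x sampling times."""
    I, J, K = X3.shape
    X = X3.reshape(I, J * K)                  # batch-wise unfolding
    mean, std = X.mean(axis=0), X.std(axis=0) + 1e-12
    Xs = (X - mean) / std                     # auto-scaling
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:R].T                              # loadings, shape (JK, R)
    S = np.cov(Xs @ P, rowvar=False)          # score covariance for T^2
    return mean, std, P, S

def monitor(x_obs, J, K, mean, std, P, S):
    """On-line T^2 and SPE for a batch observed up to time k = x_obs.shape[1].
    Future deviations are frozen at their current values (fill-in rule)."""
    k = x_obs.shape[1]
    dev = (x_obs - mean.reshape(J, K)[:, :k]) / std.reshape(J, K)[:, :k]
    dev = np.hstack([dev, np.tile(dev[:, -1:], (1, K - k))])
    x = dev.reshape(1, J * K)
    t = x @ P                                 # scores t_new = x_new P
    e = x - t @ P.T                           # residuals
    T2 = (t @ np.linalg.solve(S, t.T)).item()
    SPE = (e @ e.T).item()
    return T2, SPE
```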
Two statistics, namely the T^2- and SPE-statistics, are used for batch process monitoring. The T^2-statistic is calculated from the scores, while the SPE-statistic is computed from the residuals. When an abnormal situation is detected by the MPCA model, contribution plots (Nomikos, 1996) can be used to determine the variable(s) that are no longer consistent with normal operating conditions. The contribution of process variables to the T^2-statistic can be negative, which can be confusing. In this paper, we propose a new definition of the variable contribution to the T^2-statistic which avoids the negativity problem. Given that

T^2 = t^T S^{-1} t = \| S^{-1/2} t \|^2 = \| S^{-1/2} P^T x \|^2 = \Big\| \sum_{j=1}^{J} S^{-1/2} P^T_{jK \times R}\, x_{jK} \Big\|^2,

we can define the contribution of variable j to the T^2-statistic as

Con_j^{T^2} = \| S^{-1/2} P^T_{jK \times R}\, x_{jK} \|^2.

Using Box's approximation (Box, 1954), its confidence limit can be estimated as Con_{j,\alpha}^{T^2} = g_j \chi^2_\alpha(h_j), where g_j = trace(b^2)/trace(b), h_j = {trace(b)}^2/trace(b^2) and b = cov(X_{jK}) P_{jK \times R} S^{-1} P^T_{jK \times R}; X is the data set used to obtain the model. At time instance k, the contribution of variable j to the SPE-statistic can be defined as Con_{kj}^{SPE} = e((k-1)J + j)^2, and its confidence limit can be calculated from the normal operating data as Con_{kj,\alpha}^{SPE} = (v_{kj}/(2 m_{kj})) \chi^2_\alpha(2 m_{kj}^2/v_{kj}), where m_{kj} and v_{kj} are the mean and variance of the contribution of variable j to the SPE obtained for the data set used for the model developed at time instant k, and \alpha is the significance level.
2.2. Signal-to-symbol transformation
A knowledge-based system, such as PHASuite, takes its inputs as qualitative deviation values such as 'high', 'low' and 'normal'. We can transform signal measurements into symbolic information based on the variable contributions and the shift direction of each process variable at the current sample. If the T^2-statistic indicates the process to be out of limits at time interval k, the qualitative state of process variable j can be set as

Q_{kj}^{T^2} = high, if Con_{kj}^{T^2} > Con_{kj,\alpha}^{T^2} and x_{kj} > 0; low, if Con_{kj}^{T^2} > Con_{kj,\alpha}^{T^2} and x_{kj} < 0; normal, otherwise.

If the SPE-statistic is out of limits at time interval k, the qualitative state Q_{kj}^{SPE} of the process variables can be set similarly. If both the T^2- and SPE-statistics are out of limits, we can combine them as
Q_{kj} = high, if Q_{kj}^{T^2} = high or Q_{kj}^{SPE} = high; low, if Q_{kj}^{T^2} = low or Q_{kj}^{SPE} = low; normal, otherwise.
Note that it is not possible to have Q_{kj}^{T^2} = high while Q_{kj}^{SPE} = low, or Q_{kj}^{T^2} = low while Q_{kj}^{SPE} = high, according to the above definition.
2.3. Multistage batch processes
Many industrial batch processes are operated in multiple stages. The batch recipe defines the different stages of a batch process. For example, for a batch reaction, the first stage can be a heating stage and the second a holding stage. Usually the correlation structures of the batch variables are different for different stages. For multistage batches, it is natural to use different models for the different stages in order to achieve better results. In this work, separate MPCA models for each stage are used. For online monitoring, one needs to shift from one model to the other when one stage ends and the next stage begins.
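Returning to the signal-to-symbol rule of Section 2.2, a compact sketch of it is given below; the contribution values and limits are assumed to come from routines such as those in the MPCA sketch above, and the argument names are hypothetical.

```python
def qualitative_state(con_t2, lim_t2, con_spe, lim_spe, x_dev,
                      t2_out, spe_out):
    """Combine the T^2- and SPE-based qualitative states of one variable.
    con_*/lim_* -- contribution and its confidence limit; x_dev -- deviation
    of the variable from its mean trajectory; *_out -- statistic out of limit?
    """
    def state(con, lim, active):
        if active and con > lim:
            return 'high' if x_dev > 0 else 'low'
        return 'normal'

    q_t2 = state(con_t2, lim_t2, t2_out)
    q_spe = state(con_spe, lim_spe, spe_out)
    if 'high' in (q_t2, q_spe):
        return 'high'
    if 'low' in (q_t2, q_spe):
        return 'low'
    return 'normal'

# Example: SPE flags the variable and its deviation is negative -> 'low'
print(qualitative_state(0.1, 0.5, 3.2, 1.8, -0.7, False, True))
```

Because both branches use the sign of the same deviation x_dev, the contradictory combination ruled out in the text (one statistic saying 'high' and the other 'low') cannot occur.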
3. Online HAZOP Analysis
3.1. PHASuite, an integrated system for automated HAZOP analysis
PHASuite is an integrated system consisting of HAZOPExpert, a model-based, object-oriented, intelligent system for automating HAZOP analysis for continuous processes; BHE, a model-based intelligent system for automating HAZOP analysis for batch processes based on HAZOPExpert; and iTOPs, an intelligent tool for procedure synthesis. In this system, colored Petri nets are chosen to represent the HAZOP analysis as well as batch and continuous chemical processes. Operation-centered analysis and equipment-centered analysis are integrated through abstraction of the process into two levels based on functional representation. Causal relationships between process variables are captured in signed directed graph models for operations and equipment. Rules for local causes and consequences are associated with digraph nodes. Propagation within and between digraphs provides the potential causes and consequences for a given deviation. PHASuite has been successfully tested on a number of processes from chemical and pharmaceutical companies (Zhao, 2002).
Figure 1. Software components of the proposed online HAZOP analysis system.

3.2. Online HAZOP analysis module
Based on PHASuite, this module provides the capability to reason about the potential causes and consequences of an abnormal event identified by the process monitoring and fault detection module. For online HAZOP analysis, digraph nodes are classified as measured or unmeasured according to the sensor settings. When the process monitoring and fault detection module detects an abnormal event, the qualitative states of the measured digraph nodes are determined based on the signal-to-symbol transformation. Starting from each measured process variable, if the state of the variable is not 'normal', the simulation engine qualitatively propagates backward/forward from the corresponding digraph node to determine the states of the unmeasured digraph nodes for causes/consequences. The propagation is depth-first. The backward search is to detect the causes for
the abnormal situation, while the forward search generates potential consequences. After all the measured process variables have been scanned, the rules for causes and consequences are applied to each digraph node to generate potential causes and consequences for the deviations detected. This is a conservative design choice that favors completeness at the expense of poor resolution. Pure qualitative reasoning can generate ambiguities and possibly many infeasible situations. Quantitative filtering can be used to filter out some of these infeasible situations. When an abnormal event is detected, process sensors provide quantitative information that can be used for this filtering. The quantitative information collected by the sensors is sent to the online HAZOP analysis module to set the states of the corresponding process variables and is used for filtering when the online HAZOP analysis results are generated.
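The backward/forward propagation over a signed directed graph can be pictured with the minimal depth-first sketch below; the graph encoding and node names are invented for illustration and are far simpler than the colored-Petri-net representation used in PHASuite.

```python
def propagate(graph, node, state, visited=None):
    """Depth-first qualitative propagation on a signed digraph.
    graph -- dict: node -> list of (neighbor, sign) arcs, sign in {+1, -1};
    following arcs backward yields candidate causes, forward yields
    consequences. state -- +1 for 'high', -1 for 'low'."""
    if visited is None:
        visited = {}
    visited[node] = state
    for nbr, sign in graph.get(node, []):
        if nbr not in visited:
            propagate(graph, nbr, state * sign, visited)
    return visited

# Toy backward graph: reactor temperature depends positively on the
# jacket heat-transfer coefficient, which depends negatively on fouling.
backward = {
    'T_reactor': [('U_jacket', +1), ('T_ambient', +1)],
    'U_jacket': [('fouling', -1), ('agitator_speed', +1)],
}
causes = propagate(backward, 'T_reactor', -1)   # T_reactor is 'low'
print({n: ('high' if s > 0 else 'low') for n, s in causes.items()})
```

Run on this toy graph, a 'low' reactor temperature propagates to 'high' fouling and 'low' agitator speed, mirroring the kind of cause list shown later in Table 1.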
4. Integrated Framework for AEM Using HAZOP Analysis
The overall structure of the proposed framework is shown in Figure 1. A client-server structure is used to design the system, with PHASuite built as a server and the process monitoring module as a client. Therefore, PHASuite can be used offline or online depending on the situation. The complete system has been developed in C++ running under the Windows operating system. Object-oriented programming techniques were used for the development of the system.
5. Illustrative Example
This example involves a two-stage jacketed exothermic batch chemical reactor based on a model published by Luyben (1990). The reaction system involves two consecutive first-order reactions A -> B -> C. The product that we want to make is component B. The batch duration is 300 min, and the safe startup time is 100 min. Measurements of eight variables are taken every 2 minutes. By introducing typical variations in the initial conditions and reactor conditions, 50 normal batches, which define the normal-operating-condition data, are simulated.
5.1. Results
According to the batch recipe, this process is operated in different stages. The first stage is a heating stage and the second is a holding stage. Usually the variations in the correlation structure of the batch variables are different for different stages. Figure 2 gives the variance captured for the whole process by 5 principal components. The two stages are clearly visible, and we can define the first 100 minutes as the heating stage and the next 200 minutes as the hold stage. Two multiway PCA models are built, for the heating and holding stages separately.
Case 1: Fouling of the reactor walls
This fault is introduced from the beginning of the batch. The T^2-statistic, which is not shown here, cannot detect the fault. Figure 3 shows the SPE-statistic with its 95% and 99% control limits for the heating stage. The SPE-statistic identifies the fault at 12 minutes. At that time, the variable contribution plot for SPE is shown in Figure 4.
Variable 3, which is the reactor temperature, shows the major contribution to the abnormal event. Its qualitative state is set to 'low' based on the signal-to-symbol transformation formula, and the qualitative states of all other measured variables are 'normal'. Online HAZOP analysis is performed and the results are given in Table 1.
Figure 2. Cumulative percent of explained variance.
Figure 3. SPE-statistic for heating stage.
Figure 4. Variable contributions to the SPE-statistic at sample 6.
Table 1. Online HAZOP analysis results.

Deviation: Low temperature.
Causes: 1) agitator operated at low speed; 2) fouling-induced low heat transfer coefficient; 3) cold weather, external heat sink, or lagging loss.
Consequences: 1) incomplete reaction.
6. Conclusions
This paper presents a framework for integrating multivariate statistical process monitoring and PHASuite, an automated HAZOP analysis tool, for abnormal event management of batch processes. Multiway PCA is used for batch process monitoring and fault detection. After an abnormal event is detected, a signal-to-symbol transformation technique based on contribution plots is used to translate signal measurements into symbolic information, which is input to PHASuite. PHASuite is then used to identify the potential causes, adverse consequences and potential operator options for the abnormal event.
7. References
Box, G.E.P., 1954, The Annals of Mathematical Statistics 25, 290-302.
Heino, P., Karvonen, I., Pettersen, T., Wennersten, R. and Andersen, T., 1994, Reliability Engineering & System Safety 44(3), 335-343.
Dash, S. and Venkatasubramanian, V., 2000, Proc. ESCAPE, Florence, Italy, 775-780.
Luyben, W.L., 1990, Process Modeling, Simulation and Control for Chemical Engineers, McGraw-Hill, New York.
Nomikos, P. and MacGregor, J.F., 1994, AIChE Journal 40(8), 1361-1375.
Nomikos, P., 1996, ISA Transactions 35, 259-266.
Zhao, C., 2002, Knowledge Engineering Framework for Automated HAZOP Analysis, PhD Thesis, Purdue University.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Analysis of Combustion Processes Using Computational Fluid Dynamics - A Tool and Its Application
Christian Mueller, Anders Brink and Mikko Hupa
Åbo Akademi Process Chemistry Group, Åbo Akademi University, 20500 Åbo/Turku, Finland
Abstract
Numerical simulation by means of Computational Fluid Dynamics (CFD) has developed over recent years into a valuable design tool in engineering science. Initially applied mainly to fluid dynamic questions, it is nowadays capable of predicting in detail the conditions in various complex technical processes. State-of-the-art commercial CFD codes are almost always set up as multi-purpose tools suitable for a wide variety of applications, from the automotive industry to chemical processes and power generation. However, since they are not highly specialized in all possible fields of application, CFD codes should rather be seen as collections of basic models that can be compiled and extended into individual tools for special investigations than as readily applicable tools. In power generation, CFD is extensively used for the simulation of combustion processes in systems like utility boilers, industrial furnaces and gas turbines. The purpose of these simulations is to analyze the processes, to optimize them with regard to efficiency and safety, and to develop novel techniques. Since combustion processes have long been a target for CFD software, the standard models available in the codes are also of high quality as long as the modelling of conventional combustion systems is concerned. However, as soon as characteristics of novel combustion systems or fuels, or detailed effects within a certain process, are of interest, the limits of these standard models are reached easily. At this point, extension of the standard models with process-specific knowledge is required. This paper presents some of the opportunities CFD offers when applied to analyse different combustion systems. The practical examples presented are ash deposition predictions on heat exchanger surfaces and walls in a bubbling fluidised bed furnace and detailed nitrogen oxide emission predictions for the same furnace type. Furthermore, the extension of a standard model using process-specific data is presented for the fuel conversion process in a black liquor recovery furnace.
1. Introduction
Computational Fluid Dynamics (CFD) has grown over the years from a plain mathematical description of simple mass and heat transfer problems into a powerful simulation tool applicable in almost any technical branch. It is nowadays commonly accepted as a research tool, and its potential for industrial design and development work has been discovered. From the various opportunities this tool offers, two are
outstanding: firstly, the possibility to predict physical and chemical phenomena in technical systems that cannot easily be evaluated with experimental techniques, like the processes in industrial furnaces; and secondly, the cost efficiency and speed with which insight into these processes is obtained compared to experimental procedures. The latter becomes especially obvious when parametric studies on different conditions towards the optimum solution are performed.
1.1. Combustion system analysis using computational fluid dynamics
Detailed analysis of combustion processes, especially large-scale industrial ones, is a complicated matter due to the high temperatures of up to 2000 K, which produce an extremely unfriendly environment for experimental investigations. For such processes, numerical simulation by means of CFD is an excellent alternative investigation method. As long as the combustion of standard fuels in pulverised-fuel-fired or fluidised bed units is concerned, current multi-purpose CFD software gives a very good insight into the process. The turbulent flow field, the conversion of particles and gaseous species, and the heat transfer are well described by standard models and allow an accurate description of the phenomena, e.g., in a process furnace or the combustion chamber of a power boiler (Knaus et al., 2001). However, next to these general phenomena that are most relevant for the overall design of the combustion process, more specific aspects become interesting when processes need to be optimised for certain operational conditions. Here the focus may be, e.g., on low emission levels for certain species, which requires a substantial improvement of the chemical approaches currently available in most CFD codes. On the other hand, the purpose of the investigation may be an increase in boiler availability, taking into consideration alternative fuels, design characteristics and the resulting operational effects. An even bigger challenge is the adjustment of existing CFD codes to novel combustion processes and fuels that involve new physical and chemical phenomena. For those cases, established modelling approaches need to be significantly extended.
2. Computational Fluid Dynamics in Combustion Processes - Examples of Problem-Specific Modelling Approaches
Hereafter, three examples are presented for the application of CFD to analyse advanced combustion systems. Each of the examples covers a specific technical problem and shows how standard CFD models need to be adjusted to address individual questions. The first example deals with the increase of boiler availability due to reduced ash deposition on furnace walls and superheater surfaces. The second one addresses the question of reduced nitrogen oxide emissions from a bubbling fluidised bed combustor, and the last example presents a novel model for black liquor droplet combustion.
2.1. Ash deposition
A novel trend in boiler operation is the use of alternative fuels like biomasses and biomass mixtures instead of fossil fuels. Biomass is known to lead to ashes with a wide melting range starting at low temperatures, and therefore ash-related operational problems rank very high on the list of reasons leading to a significant reduction of boiler availability. Ash-related problems strongly depend on fuel-specific aspects such
as mineral matter distribution in the fuel, aspects specific to the combustion technique used, as well as design aspects unique to the combustion chamber of any operating unit. The overall goal in biomass-combustion-related research is therefore the prediction of potential operational problems originating from the fuel and oxidiser entering the combustion chamber and of those problems originating from the design of individual furnaces. Hence, an advanced ash behaviour prediction tool for biomass combustion in fluidised bed combustors has been developed, combining computational fluid dynamics (CFD) calculations with chemical fractionation analysis and multi-component, multi-phase equilibrium calculations (Mueller et al., 2002a). From the advanced fuel analysis the ash-forming elements of the fuel are identified, their melting behaviour is calculated under furnace conditions, and a stickiness criterion as a function of ash particle temperature is defined for each individual fuel. In the CFD calculations this stickiness criterion is utilised by checking the particle temperature at its impaction on a wall or superheater surface. If the particle temperature is above the stickiness criterion, the ash particle sticks to the wall and the location is recorded as a location of possible deposition. If, on the other hand, the particle temperature is below the criterion, the particle rebounds back into the furnace and continues its flight. Figure 1 shows a deposition map for the back wall of a bubbling fluidised bed freeboard. The coloured dots show the locations of particle hits at the specified temperature on the wall and clearly indicate the areas of possible deposition in this furnace. The picture on the left of the figure shows the deposit situation in the real furnace and serves as validation for the applicability of the tool.
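The wall-impact test described above reduces to a few lines of code; the sketch below is an assumed, simplified form of it, with a made-up fuel-specific stickiness temperature and particle record.

```python
STICKY_T = 1150.0  # K; illustrative fuel-specific stickiness criterion

def handle_wall_impact(particle, deposition_map):
    """Deposit the ash particle if it impacts above the stickiness
    temperature; otherwise rebound it back into the furnace."""
    if particle['T'] >= STICKY_T:
        deposition_map.append((particle['x'], particle['y'], particle['T']))
        return 'deposited'
    particle['v'] = [-particle['v'][0], particle['v'][1]]  # simple rebound
    return 'rebounded'

deposits = []
p = {'T': 1210.0, 'x': 0.4, 'y': 2.1, 'v': [1.5, -0.2]}
print(handle_wall_impact(p, deposits), deposits)
```

In a real CFD particle-tracking loop this check would be called at every wall-face intersection, and the accumulated deposition_map would yield maps like the one in Figure 1.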
Figure 1. Visual validation of the ash deposit prediction in the freeboard of a bubbling fluidised bed furnace (legend: grid; air inlets; particle temperature classes 1050-1150 K, 1150-1250 K and 1250-1350 K).

2.2. Nitrogen oxide (NOx) emissions
Nitrogen oxides are mainly formed through three paths. In the fuel-N path, nitrogen-containing species in the fuel can form NO or N2. The two other paths involve the fixation of N2 from the air. One of these is the well-known thermal-NO path, where radicals react at high temperatures with N2 to form NO. The other is the so-called prompt-NOx path, where hydrocarbon radicals react with N2. For most of these paths global reaction models are available (Mitchell and Tarbell, 1982; De Soete, 1974; Bowman, 1979), which can also be found in most current CFD codes. If a certain path is
dominating the formation of NOx, it might be possible to use these standard models for quantitative NOx predictions. In general, however, the only description that is detailed enough to guarantee high-quality predictions is one based on a detailed reaction mechanism. For a simple hydrocarbon such a mechanism typically consists of more than 50 species and several hundred reversible reactions. Unfortunately, there are only a few turbulence-chemistry interaction models that can account for such a mechanism. One such model is the Eddy Dissipation Concept (EDC) by Magnussen (1989). Here, results for NOx emissions from a peat-fired bubbling fluidized bed furnace are presented, obtained using a skeletal mechanism, i.e., a mechanism with only the most relevant reactions of a detailed mechanism retained, together with the EDC. However, before the simulations can be started, a number of processes present in the full boiler need to be described or simplified for the model. For example, at present, calculation of the dense bubbling bed is not possible, or too time consuming. Hence, the computational domain focuses on the freeboard region and starts above the bed surface. Another difficulty is the accurate modelling of the fuel supply. In the present case the fuel is peat. It is assumed that 90% of the peat is pyrolysed in flight before arriving at the bed (Lundmark, 2002). The remaining 10% of the fuel is assumed to be fully oxidized when entering the freeboard from the bed surface. At present, there are no detailed models available to determine the composition of the pyrolysis gas with respect to nitrogen-containing species. The values have to be assigned based on experience and, naturally, also on the nitrogen content of the fuel. The same uncertainty exists for the determination of the composition of the main pyrolysis gas. In this case the simplification has been made that the pyrolysis gas consists of CH4 and H2O only, while retaining approximately the right heating value as well as the flue gas composition.
Figure 2. (a) Left: outline of the grid used in the CFD simulation; (b) right: NO mass fraction.
Figure 2a shows the outline of the grid used in the CFD calculation. From the figure it can be seen that there are a number of different inlets. In detail, these are six fuel inlets, four start-up burners, six secondary air openings, four coal burners and six tertiary air openings. Some of these openings are divided into an inner non-swirling part and an outer swirling part. In the present case, data for the air supply can be taken directly from the operating system. Figure 2b shows the calculated NO mass fraction. According to the measurements for the present case, the NO level is 160 mg/m^3. This corresponds to a mass fraction of approximately 1.3 x 10^-4 NO. In the calculation, the predicted NO levels are almost twice as high. However, taking the uncertainties in the composition of the pyrolysis gas as well as of the primary gas coming from the bed into consideration, the agreement is satisfactory. Earlier attempts to achieve this agreement in a similar case with standard models have failed (Brink et al., 2001).
2.3. Black liquor combustion
The black liquor combustion process is unique from the process as well as from the fuel point of view. It starts with the generation of droplets while spraying the liquor into the furnace, continues with the thermal conversion of the droplets and the burnout of the char carbon in flight and on a char bed at the bottom of the furnace, and ends with the recovery of the chemical compounds contained in the liquor. This sequence makes it obvious that the quality of an overall simulation of the process strongly depends on an accurate droplet combustion model. However, the description of the droplet conversion is a challenging task due to the special characteristics of the fuel. These are its high water content, ranging up to 40%, and the almost even split of the solid part of the fuel into combustible species and low-melting inorganic compounds originating from the pulping process. In addition to this unique fuel composition, the burning behaviour of black liquor is strongly liquor dependent and is characterised by significant liquor-specific swelling of the droplet during devolatilisation.
Figure 3. Experimental setup with muffle furnace, quartz glass reactor, video system and online gas analysers. The plots show the change in diameter during conversion of a 2.47 mm droplet at 900 °C in 3% oxygen: comparison of experimental data (left) and modelling results (right) (Mueller et al., 2002b).
Starting from an earlier work by Frederick and Hupa (1993), a new simplified black liquor droplet model has been developed to replace the standard droplet model in CFD simulations of black liquor recovery furnaces. Liquor-specific input data obtained from single-droplet experiments are incorporated into the new droplet model. The model is implemented in a commercial CFD code, and simulations are performed in an environment that represents well the experimental setup of the single-droplet furnace (Figure 3). In this way, model expressions for droplet swelling during devolatilisation and carbon release curves during devolatilisation and char carbon conversion can be validated. After this validation procedure the model can be used for full-scale recovery furnace simulations.
3. Conclusions
Multi-purpose CFD codes are nowadays a frequently used and well-accepted tool in academia and industry. The available standard codes must already be regarded as powerful tools that can be successfully applied to various technical disciplines, including combustion processes. In this field, at present, the real value of CFD calculations lies in predicting the trends that occur when operational conditions are changed. This statement holds for the ash deposition predictions presented above as well as for the NOx emission predictions, and is validated for both cases with experimental data. In the future, however, the real power of CFD codes lies in the possibility to extend and adjust them with process-specific data into tailor-made tools applicable to individual technical problems and specific questions. The successfully developed and validated simplified black liquor droplet combustion model presented in this paper supports this assessment.
4. References
Brink, A., Boström, S., Kilpinen, P. and Hupa, M., 2001, The IFRF Combustion Journal, ISSN 1562-479X, Article Number 200107.
Bowman, C.T., 1979, Prog. Energ. Combust. Sci., Student ed., Vol. 1, p. 35.
De Soete, G.G., 1974, 15th Symp. (Int.) on Combustion, p. 1093.
Frederick, W.J. and Hupa, M., 1993, Report 93-3, Combustion and Materials Chemistry Team, Åbo Akademi University, Turku, Finland.
Knaus, H., Schnell, U. and Hein, K.R.G., 2001, Prog. in Comput. Fluid Dynamics, Vol. 1, No. 4, pp. 194-207.
Lundmark, D., 2002, Diploma Thesis, Åbo Akademi University, Turku, Finland.
Magnussen, B.F., 1989, 18th Int. Congress on Combustion Engines, Tianjin, China.
Mitchell, J.W. and Tarbell, J.M., 1982, AIChE J. 28(2), p. 302.
Mueller, C., Skrifvars, B.-J., Backman, R. and Hupa, M., 2002a, Progress in Computational Fluid Dynamics, to appear.
Mueller, C., Eklund, K., Forssén, M. and Hupa, M., 2002b, Finnish-Swedish Flame Days, Vaasa, Finland.
5. Acknowledgement
This work has been supported by the Academy of Finland as a part of the Åbo Akademi Process Chemistry Group, a National Centre of Excellence.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Modelling of the Free Radical Polymerization of Styrene with Benzoyl Peroxide as Initiator
K. Novakovic, E.B. Martin and A.J. Morris
Centre for Process Analytics and Control Technology, University of Newcastle, Newcastle upon Tyne, NE1 7RU, England
[email protected]; [email protected]; [email protected]
Abstract This paper demonstrates, through the use of a polymerization example, how mechanistic models can be built and used prior to carrying out an experimental study. Using knowledge available from the literature, it is shown that parameter ranges can be calculated within which comparable experimental results can be expected. The system chosen was the free radical polymerization of styrene with benzoyl peroxide as initiator. This polymer-initiator system was selected since a model was not already available in the literature. The model was developed in the programming language gPROMS and was validated using data obtained from a laboratory batch polymerization.
1. Introduction
The traditional approach to the modelling of any chemical or biochemical process, such as polymerization, is to first undertake experimental work and then to estimate the model parameters from the data (e.g. Villermaux and Blavier, 1984; Lewin, 1996; Ghosh, Gupta et al., 1998; Krajnc, Poljansek et al., 2001). This paper proposes an alternative approach. It demonstrates how useful information can be gained from building a mechanistic model that can be used to influence the experimental study. Once the initial conditions and/or the ranges in which the conditions are expected to lie for the experimental study have been identified, theoretical modelling can be performed. By using knowledge that is available from the literature, in this case for a polymerization process, it is shown that the parameter ranges can be predicted within which comparable results can be expected. In this way, a better understanding of the relationship between the operating conditions of the reactors and the quality of the polymer produced can be established prior to carrying out laboratory experiments. In this article, the term polymer quality is defined as the set of structural characteristics of the macromolecules, such as the number-average and weight-average molecular weights and the polydispersity (the ratio of the weight-average to the number-average molecular weight). In this study the overall kinetics of chain polymerization (Odian, 1991) were used, with the steady-state assumption applied to eliminate the total concentration of all free radicals. In addition, the overall rate of monomer growth in the polymerization mixture and the number-average and weight-average molecular weights were calculated using the first- and second-order moments for dead polymers (Villermaux and Blavier, 1984). Modifications were made to account for the assumptions relating to possible transfers and to deal with the termination mechanism, for which it was not possible to determine whether it would occur through disproportionation or coupling. The modelling of the free radical
816 polymerization of styrene with benzoyl peroxide as the initiator was selected as the demonstrator process which was then validated using laboratory data. The prediction of conversion, the range in which the values for number average and weight average molecular masses and the values for polydispersity are expected to lie are presented. In addition a comparison of the model results with the experimental data for the chosen polymer-initiator system is described. Finally the influence of benzoyl peroxide as the initiator in the polymerization of styrene can be compared with other initiator influences such as azo-bis-isobutyronitrile (AIBN) and bis-4-t-butylcyclohexyl peroxydicarbonate (Perkadox 16) reported by other researches (Villermaux and Blavier 1984). The nomenclature for all relationships in the following three sections is given in the 'Nomenclature' chapter.
2. Modelling Isothermal Batch Polymerization The polymerization of unsaturated monomer, in this case styrene, by chain polymerization is first discussed. The mechanism consisting of initiation, linear propagation and termination by combination and/or disproportion, as presented in many textbooks (Odian 1991), is adopted in this study. Thus based on the defined mechanism, the rate of decomposition of initiator can be presented as:
dt The rate of monomer disappearance, which is synonymous with the rate of polymerization, is given as the sum of the rates of initiation and propagation. Since the number of monomer molecules reacting in the initiation step is much less than the number involved in the propagation step, the initiation step could be neglected and the rate of monomer disappearance can be set equal to the rate of propagation. In addition since the rate constants for all the polymerization steps are the same (Odian 1991), the rate of propagation can be defined as: Rp = kp'M*
(2)
M
Equation (2) is not usable because it contains a term for the total concentration of all free radicals M* This quantity is difficult to measure. Thus to eliminate Af from the analysis, the assumption of steady state is made, i.e. the concentration of radicals increases initially but almost instantaneously attains a constant, steady-state value. This means that the rate of initiation and termination of the radicals are equal. According to this, the quasi-steady concentration of free radicals is given by:
V
'
J
The kinetic chain length can then be calculated according to the following equation: /?„ ~ Ri
k„MC ~2-f-kj-A
(4)
817 Processes in which macromolecules are produced are termination by coupling and/or disproportion, and transfer to other molecule, i.e. monomer. In this case it is assumed that there is no transfer to other molecules. Since it is not known which termination mechanism (coupling or disproportion) is in the majority, and since only one termination rate constant can be calculated, two extreme cases are considered. The calculated termination rate constant will be assumed to be equal to the termination rate constant by coupling and the termination rate constant by disproportion, as presented below. ^ = krC' dt ^
and
^ = 2.krC' dt ^
^'^
The extent (conversion) of the reaction is calculated according to: X
{M^-M)
(6)
M
with the overall rate of monomer growth in the polymerization mixture (Villermaux and Blavier 1984) being given by:
dt
P
Assuming no transfer to monomer is present and because only one termination rate constant can be provided, two cases are considered: •
Termination occurs only by coupling (8)
dt
•
<
\
I
Termination occurs only by disproportion (9)
dt
>
"t
/
The Number Average molecular weight can be represented as: M„=m(fi, •/')/P = m n ,
(10)
and the Weight Average molecular weight is given by: M^:=m(il2P)/(ii'iP)=m\X2/Hi
(11)
Polydispersity is then calculated as: PD = M^/M„
(12)
818 The expressions for the kinetic rate constants and the value for initiator efficiency, /, that are appropriate for styrene polymerization with benzoyl peroxide as initiator have been taken from the literature (Biesenberger and Sebastian 1983), (Berger and Meyerhoff 1989), (Buback 1995), (Moad and Solomon 1995). 1-2
^^^6.378-10—exp(_29700//?r)
(13)
f=0.8
kp =10 7.630 exp(-7740//?r)
(14)
kt =1.255 10^ •exp(-1675//?r))
(15)
Values for k^, kp and kf were calculated according to the temperature to be set in the batch reactor.
3. Comparison of Model and Experimental Results The proposed model was then validated with data obtained from a laboratory batch polymerization reactor (Boodhoo 1999). The polymerization system consisted of styrene as monomer in an initial concentration of 7.28 mol/dm^, benzoyl peroxide as initiator in an initial concentration of 5.1-10"^ moUdw? and toluene as solvent in an initial concentration of 1.567 mol/dm^. Batch temperature was set at 90°C and agitation speed was 500 rpm. The model results in the case of termination only by coupling, and termination only by disproportion, are compared with experimental results. The results are presented in Figs. 1, 2, 3 and 4. BU-n
30000 a
70-
. . » "
60-
fso. 0
fi
2 40 B
^
20
»•'
| |
20000
° ° D D
0
&^ ^ r 15000
9
^
A
A. A A A A
1^
e
10-
25000
— "^
e
C30O
I
< ffl 10000
B
01i—
^ 20
_ ,
40
,
1
1
1
,
,
60
80
100
120
140
160
I
5000
^
0
)
50
Time (min) o Experiment
o Model Kt=Ktc
A Model Kt=Ktd
Fig. 1. Conversion (model and experimental results) as a function of polymerization time.
100
150
Time (min) 0 Experiment
o Model Kl=Ktc
AModelKt=Ktd
Fig. 2. Number average molecular weight (model and experimental results) as a function ofpolymerization time.
819
20
40
60
80
100
120
140
0
20
0 Experiment
a Model Kt=Ktc
40
60
80
100
120
140
160
Time (mIn)
Time (min) A Model Kt=Ktd
Fig. 3. Weight average molecular weight (model and experimental results) as a function ofpolymerization time.
0 Experiment
o Model Kt=Ktc
A Model Kt=Ktd
Fig. 4. Polydispersity (model and experimental results) as a function of polymerization time.
4. Discussions and Conclusions As can be seen from Fig. 1, conversion is well predicted by the proposed model. The model results are in agreement with the experimental results within a confidence interval of ±5%. Fig. 2 presents the results for number average molecular weight as a function of time. As can be seen, the experimental data set lies, for the whole of the polymerization process, between the two extreme cases, termination only by coupling and termination only by disproportion. At the beginning of the polymerization process, the experimental data lies exactly between the two extreme cases but after 60 minutes of polymerization, the experimental data fluctuates toward the termination only by coupling and reaches this extreme mechanism after 140 minutes from the beginning of the polymerization. Fig. 3 presents the weight average molecular weight as a function of polymerization time. At the beginning of the polymerization process, for the first 50 minutes of the process, the main mechanism of termination is by disproportion. Between 50 and 120 minutes from the beginning of the polymerization, the experimental results again lie between the two extreme mechanisms of termination modelled and as the reaction approaches the last stage, after 120 minutes, the main mechanism becomes termination by coupling. Experimental results for polydispersity are more likely to agree with coupling as the only mechanism of termination. This can be seen from Fig. 4. Comparing the influence of benzoyl peroxide (BPO) as the initiator in the polymerization of styrene with other initiator influences, such as azo-bisisobutyronitrile (AIBN) and bis-4-t-butylcyclohexyl peroxydicarbonate (Perkadox 16) as reported by other researches (Villermaux and Blavier 1984), it can be concluded that the results achieved with BPO as initiator have the same trends as when AIBN is used as initiator The pre-experimental modelling approach proposed can be used to provide initial predictions of conversion and to help determine the interval in which the molecular weights will occur. This could be very useful in future experiments since the model is able to provide an indication as to what to expect under certain experimental conditions. However to be able to predict more accurate molecular weights, it would be necessary to determine both termination rate constants.
820
5. Nomenclature A - Initiator concentration, mol/dm^ C - Quasi-steady concentration of free radicals / - Initiator efficiency kd - Initiator decomposition rate constant, s'^ kp - Propagation rate constant, dm^-mol'^s*^ kt - Termination rate constant, dm^mol'^s"^ ktc - Termination by combination rate constant, dm^mol'^s"^ ktd Termination by disproportion rate constant, dm^mol'^s'^ L - Kinetic chain length M - Monomer concentration, mol/dm^ Af - Concentration of free radicals, mol/dm^ m - Monomer molecular weight, g/mol Mo - Monomer concentration at the beginning of polymerization, moMdw?
Mn - Number Average molecular weight, g/mol Mw - Weight Average molecular weight, g/mol jLij - First order moment for dead polymer jLi2 - Second order moment for dead polymer P - Macromolecule cone, mol/dm^ PD - Polydispersity R - Universal gas constant, 1.986 cal/molK /?j - Rate of initiation, mol/dm^ Rp - Rate of propagation, mol/dm^ 7 - Temperature in reactor, K X - Extent of reaction
6. References Berger, K.C. and Meyerhoff, G., 1989. Propagation and Termination Constants in Freeradical polymerization. Polymer Handbook. Immergut. New York, WileyIntercsience: II/67-II/79. Biesenberger, J.A. and Sebastian, D.H. 1983. Principles of Polymerization Engineering. New York, John Wiley. Boodhoo, K.V.K. 1999. Spinning Disc Reactor for Polymerization of Styrene. Chemical and Process Engineering. Newcastle upon Tyne, University of Newcastle, Buback, M.E.A., 1995. Critically Evaluated Rate Coefficients for Free-radical Polymerization. I Propagation Rate Coefficient for Styrene. Macromol. Chem. Phys. 196: 3267-3280. Ghosh, P., Gupta, K.S., Saraf, D.N., 1998. An Experimental Study on Bulk and Solution Polymerization of Methyl Methacrylate with Responses to Step Changes in Temperature. Chemical Engineering Journal 70: 25-35. Krajnc, M., Poljansek, J., Golob, J., 2001. Kinetic Modeling of Methyl Metacrylate Free-Radical Polymerization Initiated by Tetraphenyl Biphosphine. Polymer 42: 4153-4162. Lewin, D.R., 1996. Modelling and Control of an Industrial PVC Suspension Polymerization Reactor. Computers Chem Engng 20: S865-S870. Moad, G. and Solomon, D.H., 1995. Chemistry of Free-Radical Polymerization, Oxford-Elsevier Science. Odian, G.G., 1991. Priciples of Polymerization. New York, John Wiley & Sons. Villermaux, J. and Blavier, L., 1984. Free Radical Polymerization Engineering - II Modeling of Homogeneous Polymerization of Styrene in a Batch Reactor, Influence of Initiator. Chemical Engineering Science 39(1): 101-110.
7. Acknowledgements KN would like to thank the UK ORS Scheme and CPACT for providing funding for her PhD studies.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
821
Combining First Principles Modelling and Artificial Neural Networks: a General Framework R. Oliveira Department of Chemistry - Centre for Fine Chemistry and Biotechnology Faculty of Sciences and Tecnology, Universidade Nova de Lisboa, P-2829-516 Caparica, Portugal, Tel: +351-21-2948303, Fax: +351-21-2948385, E-mail: [email protected]
Abstract In this work a general hybrid model structure for stirred-tank bioreactors is proposed. The general structure combines first principles modelling with artificial neural networks: the bioreactor system is described by a set of mass balance equations, and the cell population system is represented by an adjustable mixture of neural network and mechanistic representations. The identification of unknown parameters from measurements is studied in detail. The sensitivities equations are derived for the general model enabling the analytical calculation of the Jacobian Matrix. The identification procedure consists of a least squares optimisation that employs a large-scale Sequential Quadratic Programming (SQP) algorithm. The methodology is outlined with simulation studies.
1. Introduction Hybrid modelling has been recognised as a valuable methodology for increasing the benefit/cost ratio of bioprocess modelling (Schubert et al. (1994), Preusting et al. (1996)). The main design concept is that the a priori mechanistic knowledge is not viewed as the only relevant source of knowledge, but also other sources, like heuristics or information hidden in databases, are considered as valuable complementary (not alternative) resources for model development. The application of hybrid modelling to chemical and biochemical reactors has been exemplified in several works. The most widely adopted hybrid structure is based on the mass balance equations, like in the traditional first principles approach, but the reaction kinetics are modelled with artificial neural networks (ANNs) (Psichogios and Ungar (1992), Schubert et al (1994), Montague and Morris (1994), Feyo de Azevedo et al. (1997), Chen et al. (2000)). Unfortunately, even for such simple hybrid structures, there are many theoretical issues, such as identifiability and stability, that are not well characterised. In fact, most of the reported studies are eminently problem-oriented. In the current work, some theoretical aspects related with stability and identifiability in hybrid modelling are studied. The problem is tackled by formulating a general dynamic hybrid structure valid for a wide class of problems. The resulting dynamical system is then studied in a systems engineering perspective. The methodology is outlined for the
822 secreted protein production process described in Park and Ramirez (1989) with simulation studies.
2. Theoretical developments 2.1. General dynamic hybrid model As discussed previously a principal design issue in hybrid modelling is that it should allow to incorporate several different sources of knowledge. The first step in the present study is to define a flexible system structure that allows to incorporate different forms of knowledge, but also simple in the sense that one must be able to characterise it in terms of identifiability and stability or other important properties. With this main concern the following system structure is proposed:
dc — = KH(c)p-Dc + u at p = N(c,W)
(la) (lb)
where c is a vector of n concentrations, K a nxm yield coefficients matrix, H(c) a mXr matrix of known kinetic expressions, p(c) a vector of r unknown kinetic functions, D is the dilution rate, u is a vector of input volumetric rates, N(-) is a network function and W a vector of nw parameters. The main idea is to insert all the a piori first-principles knowledge in Eq. (la) whereas all other sources of knowledge are inserted in Eq. (lb). The Eq. (la) is the general dynamical model proposed by Bastin and Dochain (1990). The Eq. (lb) states that the term p is computed by a network function. This network function refers to connectionist systems in general; not only the usual neutral networks but also Fuzzy or statistic networks may be considered. With this mathematical formalism, first priority is given to mechanistic knowledge, while other types of knowledge may also be activated in the model through Eq. (lb). Three important properties of system (1) should be pointed out: i) the representation of the kinetic rates in Eq. (1) is rather generic both for chemical as well as for biological reaction catalysis (e.g, Bastin and Dochain (1990), Dochain et al. (1991)), ii) the framework introduced by this expression enables to use other modelling techniques for establishing p. Instead of a single neural network, m neural nets, a fuzzy system or several combinations thereof are possible, iii) provided that all functions in matrix N(c,W) are continuous, differentiable and bounded, the Bounded Input Bounded Output (BIBO) stability results presented in Bastin and Dochain (1990) also apply to system (1), and also very importantly, parameter sensitivities may be computed. 2.2. Identification Equation (lb) establishes a parametric (or semi-parametric) non-linear relationship between p and c where a set of nw parameters W are involved. These parameters must be identified from measurements. Irrespective of the type of relationship defined in Eqs (lb), the goal of the identification procedure is to obtain the parameter vector W that minimises the deviation between the model and real process outputs. The real process reaction kinetics cannot be measured directly; only the concentrations can be measured using adequate measuring devices. By definition, the reaction rates can be calculated using Eq. (la). In practice, only a partition of r equations is required
823
p = [K,H(c)ff-^+Z)c„-u, t
at
(2)
where index a denotes a partition of r state variables of Eq. (lb). From Eq. (2) an important condition for the identifiability of model (1) arises. Model (1) is identifiable if and only if the rxr matrix KaH(c) is non-singular. Two possible strategies may be adopted. Method I is a two-steps procedure: in the first step the unknown kinetics are estimated for instance using Eq. (2). In the second step an optimisation algorithm minimises the errors between the estimated and modelled reaction rates .^The application of Method I is exemplified in Chen et al (2000). The main drawback of this methodology is that the concentrations are often measured off-line with high sampling times yielding inaccurate reaction rates estimates. Method II is more common in the context of hybrid modelling and consists in minimising the deviation between the measured and estimated concentrations:
argmin|E = l x t / - < ^ / M c ; - c j | w [ 2j-^i J
(3)
This method requires that the model equations (1) are integrated numerically between measurements. The numeric integration may be time consuming especially when many measurements are available. Psichogios and Ungar (1992) applied this strategy for training ANNs embedded in mass balance equations. They suggested to employ the sensitivities method for calculating parameter gradients. The evaluation of gradients is less time consuming than the numeric alternative. For the particular case of hybrid model (1), the sensitivities equations may also be derived provided that functions N(c,W) are continuous and differentiable. The differentiation of E in W results in the following Eq.: dE
^fdE^fdc]
1 ^
^
Jdc)
^ = 2:\-^\\^\ =-^S-2e,S| —I 1=1 acj,|awj,. p ^ '^awj,
(4)
with ei=(c'i -Ci). The matrix 3c/3W must be computed for each measured point. This can be accomplished through the sensitivities equations. The sensitivity equations are obtained by differentiating Eqs. (la-b) in W. After some manipulations the following equations are obtained:
dt
A]..^.Bw,.,. = K,„p|.K.|-.,.,B.Kp^,
The set of equations (5) must be integrated simultaneously with Eqs. (la-b). The initial condition for Eq. (5) should be (9p/9W)t=o=0 because the initial value of state variables is independent of parameters W.
824
3. Results and discussion The model described in Park and Ramirez (1988) for fed-batch production of a recombinant protein will serve as an example to outline the proposed methods. The mass balance equations take the following state space format 1 0 0] X 7.3 0 0 o 0 1 0 0 0 0 ij
0
x 0
0
0 (P,-PJ
X juiS) S-So fpiS) -D\
(6a)
Pt
P^
being X the biomass concentration, S the glucose concentration, Pt the total protein concentration, P^ the secreted protein concentration, D is the dilution rate (D=F/V being F the input feed rate and V the medium volume inside the bioreactor) and So the substrate concentration in the input stream. The true kinetic expressions are the following
JU(S) =
4J5ju(s) 21.875 Se -55 ^ fp(S) = , 0(5) = 5-HO.I 0.12-hju(s) (S + 0A){S + 62.5)
(6b)
Two simulations were carried out with process simulation time of 16 h. The sampling times were 1 min for on-line measurements (F and V) and 15 min for off-line measurements (X, S, Pt and Pm ). The two resulting datasets had 960 data records. In order to excite the process and to obtain wide variations in S, the feed rate was the control output of a glucose on-off controller varying between 0.01-0.2 L/h and glucose between 10-0.1 g/L. The glucose concentration in the inlet feed was So=40 g/L. The initial X and S were chosen randomly from the uniform distribution in the intervals 0-2 g/L and 0-0.5 g/L, respectively. The initial concentrations for total and secreted protein were Pt(0)=0 and Pm(0)=0 respectively. Gaussian errors were added to X,S, Pt and P^ with standard deviations of 0.25, 0.25,0.025 and 0.025, respectively An hybrid model was developed considering that both the mass balance equations (Eq. (6a)) are known. The only part of the process that is considered to be unknown, in a mechanistic sense, is the 3 kinetic Eqs. (6b). As such, the matrix of known kinetic expressions was H=diag([X, X, (Pt - Pm)])- The 3 unknown rate expressions were modelled with a BP neural network with one input (glucose concentration), 8 hidden nodes and 3 outputs. As such the hybrid model consists on Eq. (6a) and the additional equation: [ ^ , / ^ , o ] ' = J/ag t , , , / , , ^ , , , 0 ^ 3 , ] ) s ( W 2 s(Wi s(S) +
(7)
being Wi,B2, W2, B2 parameter matrices associated with connections between nodes in the neural net, and s(.) the sigmoid function. The parameter vector W represents a vectored form of matrices Wi,B2, W2, B2 and comprises in this case 42 scalar parameters.
825
(a)
(b)
Figure 1. Hybrid model simulation results for the test dataset. (a) biomass (b) secreted protein. Full lines represent measured values; dashed lines represent modelling results.
The first study was to identify the parameter vector W using method I. It was impossible to obtain good estimates of the kinetic rates because the data was too noisy and the sampling time was not sufficiently low for resolving the process dynamics. The same unsatisfactory results were obtained with splines least-squares fitting with euler discretisation and middle point discretisation. The off-line sampling and the high dynamical behaviour precludes the application of method I. The results obtained with method II were however very promising. The algorithm employed was a large scale SQP optimisation. Only one dataset was used for identification. The simulation results for the test dataset (not used for parameter identification) are plotted in Figs. (la-b). The mean square error for the test dataset was 5xlO'^.(with concentrations scaled to their average value). The prediction capabilities of this model, as measured by the test dataset modelling errors, seam to be rather satisfactory. In Fig. (2a-b) the identified kinetic functions are plotted together with the true curves (Eqs. (6b)) as functions of S. In this particular example, only one process experiment was sufficient to identify the specific (Fig. 2a) total protein production rate (not shown in the picture). In the case of the specific growth rate a more careful analysis of Fig (2a) shows that the modelling accuracy degrades for glucose concentrations higher than 10 g/L. The reason for this fact is that there are no measurements available in this concentration range, as may be seen in Fig. (lb). In the case of specific total protein
826
(a) •g 0,05 8-0,00 10 glucose (g/L) _6,00n 45,00 o ^24 , 0 0 ^ f 3,00-1
(b)
§)2.00
taie
I 1,00 0)
identified
0,00 10
15
20
glucose (g/L)
Figure 2. Kinetics modelling results: (a) specific growth rate, (b) protein secretion rate. Full lines represent the true kinetics and dashed-lines the modelling results. production rate, the modelling results are not good for very low glucose concentrations because only few measurements are available in this range. In contrast with Fig. (2a), Fig. (2b) shows that the modelling results for the protein secretion rate 0(S) are very bad. It was verified (not shown in the pictures) that the known kinetic function h33=(FtPm) is most of the time very small or even zero. This fact renders (S) unidentifiable because one cannot invert h33. Still, the multiplication of both functions h33x(S) is identifiable and the identification algorithm managed to produce good secreted protein prediction results (Fig. Ic).
4. References Bastin, G. and Dochain, D., 1990, Elsevier, Amsterdam. Chen, L., Bernard, O., Bastin, G., Angelov, P., 2(X)0, Control Eng. Practice, 8, 821-827. Dochain, D., Perrier, M., Ydstie, B.E., 1991, Chem. Eng. Sci., 47,4167-4178. Feyo de Azevedo, S., Dahm, B., Oliveira, F.R., 1997, Comp. chem. Engn., 21, 751-756. Montague, G., Morris, J., 1994, Trends BiotechnoL, 12, 312-324. Park, S. and Ramirez, W.F., 1988, AIChE Journal, 34(9), 1550-1558. Preusting, H., Noordover, J., Simutis, R., Lubbert, A., 1996, Chimia, 50(9), 416-417. Psichogios, D.D. and Ungar, L.H., 1992, AIChE Journal, 38(10), 1499-1511. Schubert, J., Simutis, R., Doors, M., Havlik, I. and Lubbert, A., 1994, Chem. Eng. TechnoL, 17,10-20.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
827
Classifying and Proposing Phase Equilibrium Methods with Trained Kohonen Neural Network S. Oreski\ J. Zupan^and P. Glavic^ ^Faculty of Chemistry and Chemical Engineering, PO Box 219, SI-2000 Maribor, Slovenia, emails: [email protected], [email protected] ^National Institute of Chemistry, Hajdrihova 19, PO Box 30, SI-IOOO Ljubljana, Slovenia, email: [email protected]
Abstract The Kohonen neural networks were chosen to prepare a relevant model for fast selection of the most suitable phase equilibrium method(s) to be used in efficient vaporliquid chemical process design and simulation. They were trained to classify the objects of the study (the known physical properties and parameters of samples) into none, one or more possible classes (possible methods of phase equilibrium) and to estimate the reliability of the proposed classes (adequacy of different methods of phase equilibrium). Out of the several ones the Kohonen network architecture yielding the best separation of clusters was chosen. Besides the main Kohonen map, maps of physical properties and parameters, and phase equilibrium probability maps were obtained with horizontal intersections of the neural network. A proposition of phase equilibrium methods is represented with the trained neural network.
1. Introduction A proper selection of phase equilibrium methods is critical factor for efficient process design and simulation. But among many different phase equilibrium methods it is very difficult to choose the most appropriate ones. Therefore much effort has been made to build an advisory system for appropriate phase equilibrium methods selection. In the past the advisory systems were expert systems, which were advising the engineers through the sequence of questions and answers, i.e. CONPHYDE (Baiiares-Alcantara et al., 1985), TMS (Nielsen et al., 1991) and PHYP (Oreski and Glavic, 1997). For the same purpose the artificial neural network can be used. When trained, neural networks are able of quick response. The obtained results are better or at least of the same quality as results gained with other methods. The additional advantage of the neural networks is that they give results also when they are not possible to obtain with classical methods. In chemical engineering and chemical industry the diversity and number of neural network applications has increased dramatically in the last few years. The neural networks are used in fault detection, diagnosis, process identification and control, process design and simulation. The applications have been discussed by Bulsari (1995) and Renotte et al. (2001). The neural networks are also used as criterion functions for optimisation with known mathematical model and unknown process parameters (Dong et al., 1996). In the field of phase equilibria, the neural networks are used in
828 vapor/liquid equilibrium prediction. The neural network applications represent a part of (Alvarez et. al., 1999) or a complete (Sharma et al., 1999, Buenz et al., 1999) vapor/liquid or physical property predictive tools. Except in our work (Oreski and Glavic, 2001 and 2002), in the field of phase equilibria the artificial neural networks have been used for prediction only and not for classification so far.
2. Method When determing the neural network model, which will be able to solve the classification problem, four main characteristics of the problem were exposed: A large number of data exists, represented with objects consisting of diverse combinations of physical properties and parameters. The domain of phase equilibrium methods is not covered with all mathematically possible combinations of physical properties and parameters. A classification is to be made by neural networks. The reliability of the phase equilibrium methods proposed must be estimated. According to the nature of the problem we were trying to solve, the Kohonen neural network was employed among several different neural networks as one with the most appropriate architecture and learning strategy. Kohonen neural network In the application the Kohonen network is based on a single layer of neurons arranged in a two-dimensional plane. A matrix presentation of the network is chosen, because the matrix description shows very clear relation between the input data and the planes (Figure 1).
Figure 1: A matrix representation of two-dimensional Kohonen neural network layout. The aim of Kohonen learning is to map similar signals to similar neuron positions. The learning procedure is unsupervised competitive learning where in each cycle the neuron c is to be found with the output most similar to the input signal:
^(x,-w.fl
c
j = l,2,...,n
(1)
In each cycle, also the weights of the neuron c are corrected to improve its response for the same input in the next cycle. And the weights of all neurons in the neighborhood of the c-th neuron are corrected by a fraction that decreases with increasing topological distance from c:
829 (new) W.. = W
f+,(f)a(^,,)(x,-wf>)
(2)
The next object is input and the process repeated (Zupan and Gasteiger, 1999).
3. Research Results 3.1. Data preprocessing The combinations of physical properties and parameters briefly represent different chemical processes. They describe chemical bonds, structure of the components, working conditions, further calculations desired, accuracy of the methods, simplicity and speed of calculations, data availability and exact definition of phase equilibrium methods applicability in vapor-liquid and liquid-liquid regions. The combinations of phase equilibrium methods represent one or more phase equilibrium methods that are appropriate for designing and simulating such chemical processes. Fifteen ones are chosen that are usually used in practice: Soave-Redlich-Kwong, Peng-Robinson, Benedict-Webb-Rubin, Starling, LeeKesler-Plocker and virial equations of state, Margules-1, Margules-2, van Laar, Wilson, NRTL, UNIQUAC, ASOG, UNIFAC, and regular solution activity coefficient methods. The data were collected from experts and literature and expressed with more than 7000 data objects of form X(y, xi, ...xu). Variables jc, are representing eleven different physical properties and parameters, variable y is representing one appropriate phase equilibrium method attached out of fifteen possible. With preprocessing procedure 4228 learning objects were constructed from data objects in a form of 46-dimensional vectors Xs(Xsu Xs2, ..-, Xs46)j haviug a distributed presentation of all variables jc^, (first 31 variables representing 11 different physical properties and parameters, and last 15 variables representing a target vector of all fifteen phase equilibrium methods). 3.2. Training of Kohonen networks and resulting maps According to the number of learning objects several Kohonen neural networks of sizes from 50x50 to 70x70 with 46 weights vv,, on neuron j at different number of epochs were trained. Out of them the 70x70 neural network trained with all learning objects through 900 epochs was chosen for further analysis as neural nertwork architecture yielding the best separation of clusters. The main Kohonen map of the neural network consists of about 1800 evenly distributed grouped labels ' V , 'S', 'L', T, indicating different regions (vapor, vapor-liquid, liquid and liquid-liquid), and empty spaces (Figure 2). Labeled neurons were activated by one or more learning objects, empty spaces were not activated by any of them. With horizontal intersections of the trained neural network, 46 single maps of physical properties and probability maps of phase equilibrium methods were obtained. The first 31 maps are representing physical properties (Figure 3 represent map for physical property temperature). The last 15 maps are representing probability phase equilibrium maps (Figure 4 represent probability map for the UNIFAC method). When inspecting the main Kohonen map and all 46 maps with overlapping, a transparent and expected correlation was found among them.
830 ILLLLLL LL LL L LL LLL LLLLL LL LL LLLL LLl L l l SSSS SSI L Lllll L Z L LL L L L L l l I ILL LL L L L LL L L L L SI L L L L LL LLL L L LL 1 1 L SSS S SI IL L L L L L L L l l LL 1 1 L SS SI XL L L L LLL L LL L Lll 1 S S X IL LLLLLL LL LLLL 111 LL S S SSI I L L L LLLLLL 1 111111 1 ! L L S S I IS LLL LLL 111 11 ] L S SS SSI ] IS SSS SS L L LLL 11 11 ]L L L L SSSS S I L L 1 IS S S S LL L 111 1 11 1 S S S SS LI S I 8S SS SSS L LI 1 1 1 1L L L 1 S S S LLL 11 S S IS S S L LL 1 1 1 11 L LL 1 SS S L 11 IS S SS SS SSS S L L 11 1 11 1 S SS S L I I S SS S SSSS 1 1 1 1 1 SS S LL LLI ISS S SSSSSS SS S ILL L 1 LL L! S SS I I S S LL LL LL 1 LL LLLL SSS SS 11 LL LI ISS S S S S SS L L L 11 LL L L IS S S S SS S SS L 1 L 11 L L S S SS SS 1 1 1 L I IS S S S SS S SS LL 1 L LL S S S SS S S L LL I IS S S SS LL 111 L S S S S SS S SS 1 1 LI IS S S SSS SS SS 111 1 LLL S SS «BXB a a S B B B i , ! S B B S B S S S S 1 I 47 I S S S S S 1 1 1 S S S S S S S S S S LIL LI 46 IVV V V V V S S S S S 1 1 1 SSS S SS S S S S S LI 45 IV V V S S S 1 1 1 SS S SS S SSS S S LL L LI 44 IV V V VV SSS SSS L 1 S S S S SS S S S S L l l I 43 I V V 8 S S L L S SSS SSS SS L L l ILI 42 IV V VV V S SS S L SS SS S SSSSS SS S L I I 41 IV V V VV S S S L L SS S S S S 1 LL LL LI 40 IV V V S SSSS S L LI S SSS SSS SSS SS S 8 S 1 L LI 39 IVV V V V S S S L S S S S S S S L I 38 I VV V V V V L LI SS S SSSS 8 8 S 8 SSS 111 37 IV VV V V L 8 S 88 88 8 S 8 8 8888 I 3 6 I V V V V V V V V S S S S S S S S S S S S S S S SSI 3 5 I V V V V VV S S S 888 8 8 S 88 8 888 88 81 34 IV V V V V VV VVV 8 88 8 8 8 8 SS S S 8 SSSS SSI 33 IV VV V VV V V VV V V SS 8 88 888 8 8 8 8 SSSSI 32 I V V VV V VV V V 8 888 8888 8 S 8 8 88 I 31 IV V V V V V VVV SS 8 8 8 8 8 888 88 88 S 8 SSS 8 S S I 30 IV VV VV V V V V VVV 8888 8 8 SSSSS 8 888 S8I 29 I V V V V V V V V V V 888888 8 88 88 8 8 S S S S I 28 IVV V VV V V V VVV 888 8 88 88 8 8 8 8 SSI 27 I V V VVVV VV V VV 8 SSS 8 8 S S S 888 8 I 26 IVV V VVV V V V V V V 888 SS 8 8 8 8 8 SSI 25 IV V VVV V V V V V V V V V V S S 8 S S S S S S SSS S I 24 IVV V V V V V V V V V V V VVV S S S S S 88 8 8 881 23 IV V V V V VVVV V V V V V S S S S S S S S S I 22 IV VV VV V V VV V V V V V V 888 S SSS 88 8 SSS 8 8 SSI 21 IV VV V VVV V V V V V VVV VVV 8 88 8 SSS S I 20 IV V V V V V V V V V V VV 8 8 S SS 8 88 SS SSI 19 IV V VV VV V V V VVVV VVVVV VVV S S S S 8 S I 18 I V V VV V VV V V V VV 8 8 8 8 8 8 SS 8 8 SSI 17 IV V V V V WWW VV V V V V V V SSSS S 888 SSSS S I 16 IV V VV V V VVV V V VVV VVV 8 8 8 S 88 S SSI 15 IV V VVV VVV W W V V V V V V S S 8888 I 14 IV V V V V V V V VVV VV V VV 8 S 8 8 88 8 S SSI 13 I V VV V VV VVV VV V V V VVV V 8 8 8 SSSSSSS 8 I 12 IV V V V V V V V V V V V 88 8 8 8 88SSSS8I 11 IV V V VVVV V V V VVVV V V V V V V V 8 8 8 8 S S S I 10 IV V V V V V VV VV V VV V V 8 8 SSSS 8 I 9 I V VVV VV V VV VV VVV V VVV V VV 8 LL L 8 SSS S I 8 IVV V V V V V V L L L L L I 7 IV VV V V V V VV VVV VV VVVV VVV LL LL LL L L LL LLL 8 8 8 SI 6 I V V V V V V V V V VVV VVVV V V L L LLLL L 8 8 I 5 IV VV V V VVV VVV VV V VV V LL L L LL LLL L 8 8 SI 4 IV V VV V V VVVVV V VVV V V LL LL L L LL I 3 IV V V VVVVVVV V V VVVVV V LL L LL LLL LL SSSSI 2 I V V V V V V V V V V V V V V L L l L L I 1 IVVVVVV V V VV V V VV V VV V V VV V V LL LLLLLL L L L LLLL LL 8 SSI + + 1234567890123456789012345678901234567890123456789012345678901234567890 I
Figure 2: The main Kohonen map of 70x70 neural network trained through 900 epochs. iiiiiii 11 11 1 11 111 11111 11 11 1111 1111 1 m i l 1 t 1 1 2222 221 I l l l l l l l l l l 1 1 1 I 111 11 1 1 1 11 1 1 1 1 1 1 1 21 I I 1 1 1 11 111 1 1 11 1 11 1 t 1 222 2 21 II 1 1 1 1 1 1 1 1 1 11 11 1 11 ] 11 22 21 II 1 1 1 111 1 11 1 111 ] 1 2 2 Z 11 m i l l 11 1111 111 11 1 1 1 112 2 221 I 1 1 1 m m 1 m m i i i i 1 1 2 2 I 12 m 111 111 11 1 1 1 ] 2 22 221 12 2 2 2 22 1 1 1 1 1 11 11 1 1 1 1 ] 2222 2 I 12 2 2 2 11 1 111 1 11 1 1 1 1 2 2 2 22 11 I 22 22 222 1 11 1 " 2 2 2 111 11 1 11 1 11 1 2 2 22 2 1 11 12 2 22 22 222 2 22 2 1 I I 2 22 2 22 2 11 111 122 2 222222 2 22 I _ _ _ 11 1 1 1 1 1 11 1 1 1 1 2 2 2 222 22 11 11 11 122 2 2 2 2 22 1 1 1 1 1 11 1 1 22 2 2 2 22 2 11 11 12 2 2 2 22 2 22 1 1 1 11 1 1 2 22 2 2 22 1 1 11 12 2 2 2 22 2 22 11 1 1 11 2 2 2 22 22 111 1 I 12 2 2 22 1 1 111 1 2 2 2 2 22 2 2 _ _ _ _ 12 2 2 222 22 22 111 1 111 2 22 2 2 22 2 22 1 1 II 12 2 2 2 2 22 1 1 2 2 22 2 2 2 2 2 1 Z I 2 2 2 2 2 1 1 1 2 2 2 2 2 2 2 22 2 11 1 IZ Z33 3 3 3 3 2 2 22 2 1 1 1 222 2 22 2 2 2 2 2 IZ Z3 3 3 2 2 2 1 1 1 22 2 22 2 222 2 2 11 1 IZ Z3 3 3 33 222 2 22 1 1 2 2 2 2 22 2 2 22 l l l Z Z 3 3 2 2 2 1 1 2 222 222 22 1 1 1 IIZ Z3 3 33 3 2 22 2 1 22 22 2 22222 22 2 1 1 Z Z3 3 3 33 2 2 2 1 1 22 2 2 2 2 1 11 11 IZ 13 3 3 2 2222 2 1 11 2 222 222 222 22 2 2 2 1 1 II 133 3 4 3 2 2 2 1 2 2 2 2 22 2 1 I I 33 4 3 3 3 1 11 22 2 2222 2 2 2 2 222 111 14 44 3 3 1 2 2 22 22 2 2 2 2 2222 I 14 44 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 22 2 22 221 1 4 4 3 3 3 3 2 2 2 222 2 2 2 22 2 222 22 21 15 4 4 3 3 33 333 2 22 2 2 2 2 22 2 2 2 2222 221 15 55 4 33 3 3 33 3 3 22 2 22 222 2 2 2 2 22221 I 5 4 33 3 33 3 3 2 222 2222 2 2 2 2 22 I 15 5 4 4 4 3 3 3 3 22 2 2 2 2 2 222 22 2 2 2 2 222 2 221 15 55 44 4 4 4 3 333 2222 2 2 22222 2 222 221 I 5 5 5 4 44 4 4 4 3 222222 2 22 22 2 2 2 2 2 2 1 155 5 44 4 4 4 333 222 2 2 2 22 2 2 2 2 221 I 5 5 5544 44 4 33 2 222 2 2 2 2 2 222 2 I 144 5 555 5 5 4 4 4 3 222 22 2 2 2 2 2 221 14 5 555 5 5 5 5 5 5 44 4 4 2 2 2 22 22 22 2 2 2 2 I 1 4 4 55 5 5 5 5 5 5 4 4 3 333 2 2 22 2 22 2 2 2 21 14 4 5 5 5 5555 5 4 4 4 4 2 22 2 22 2 2 2 I 14 44 55 5 5 55 5 4 4 4 3 3 222 2 222 22 2 2 2 2 22 221 14 44 5 555 5 5 5 4 4 444 333 2 22 2 222 2 I 14 4 4 5 5 5 5 5 5 5 4 33 22 2 22 2 22 22 221 14 4 44 55 5 5 5 5544 33333 333 2 2 2 2 2 2 I I 4 4 55 5 55 5 4 3 33 2 2 2 2 22 2 2 2 2 221 14 4 4 4 5 555555 55 5 4 4 3 3 3 2222 2 222 2222 2 I 15 4 44 5 5 555 4 4 333 333 2 2 2 2 22 2 221 15 4 444 455 55 55 4 4 3 3 3 3 2 2 2222 I 15 5 4 4 4 5 5 5 4 4 4 33 3 33 2 2 2 2 22 2 2 221 Z 5 44 4 55 555 44 4 3 3 333 3 2 2 2 2222222 2 I 13 4 5 5 4 4 4 55 4 3 3 22 2 2 2 2222222Z 13 4 5 5 44 4 S 4 4 3 3 3 3 33 3 3 3 33 2 2 2 2 2 22 I 13 4 5 5 5 5 44 55 3 33 3 3 2 2 2222 2 I 1 4 555 44 5 44 33 3 3 3 3 333 3 33 2 11 1 2 2 2 2 21 144 4 5 4 4 3 3 l l l l l I 14 44 5 5 5 5 44 433 33 3333 333 11 1 1 1 1 1 1 1 1 1 1 1 22 2 21 I 4 44 5 5 55 4 4 333 3333 3 3 1 1 1 1 1 1 1 2 2 1 13 4 4 4 5 554 444 33 3 33 3 11 1 1 11 1 1 1 1 2 2 21 13 4 44 5 5 55544 4 444 3 3 11 11 1 1 11 I 13 4 5 5555544 4 3 33333 3 11 1 11 1 1 1 1 1 22221 13 3 4 4 4 5 55 5 4 3 33 3 1 1 1 1 1 Z Z 3 3 3 3 3 4 4 4 45 4 5 5 5 4 3 3 3 3 3 3 3 3 11 1 1 1 1 1 1 1 1 1 1 1 1 1 11 2 22Z ^1234567890123456789012345678901^456789012345678901^
\
_53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1
:
Figure 3: Map representing physical property temperature. Label 1 indicates T
831 9 9 9999
99 9
991
9 991
9 99 9 9 9 9 9 9 9 9 9 9 9 999 99 99 99
11 71 7
99 899 9
99999 7
7
999
6 988 28 78738 1 1632 1 3 41241 1 167
8 41
9
7
9
8 81
9 9
99 9 9
699 9 9998 8 9 99
9 9 99 99 89 9 8999 99 8 9999 9 98 898 1
1234567890123456789012345678901234567890123456789012345678901234567890
Figure 4: Probability map for UNIFAC method. Labels have values between 1 and 9 (9 means the method is very appropriate for the given combination of physical properties and parameters, 1 means the method is not a very good choice).
4. Kohonen Neural Network as Consulting System Presenting neurons in a form of columns, values of trained weights were obtained at individual neurons, showing the applicability of the methods for certain combinations of physical properties and parameters. On Figure 5 two randomly chosen neurons are represented. The meaning of their trained weights can be read according to the definition of domain. The neuron 3562 on position (51,62) was activated with at least one learning object and has label T , indicating the liquid-liquid region (Figure 5a)). iieuron(53,64) iieiiron(51,62) a)|@(fi)(g)(5)(g)(g)®®®(g)®(g)®(g)(a)(§)(g)@(g)®®®®@(D®
—-1
ttilliiiiiiiiiitiliiittillttiliittiiliiiiiiiit Figure 5: Two random neurons of the selected trained network. The neuron is active for polar organic components at low working pressure and temperatures lower then Tb. Phase splitting is expected in liquid region. The methods must be accurate but slow, appropriate to simulate liquid-liquid equilibrium. The neural network has found the methods UNIQUAC (^3562,42=8), NRTL (^3552,43=7) and UNIFAC (W3562,45=8) as the most appropriate ones. The neuron 3774 on position (53,64) was not activated with any of objects and has no label (Figure 5b)). Anyway the
832 neural network self learned that UNIFAC method had to be appropriate also for similar polar organic components at medium pressure in phase splitted liquid regions. Because of the lack of data about chemical system the method was found to be as the only one adequate to the solving problem but with the highest certainity (w3774,45=9).
5. Conclusions Using Kohonen unsupervised learning it was possible to classify phase equilibrium methods on the basis of different combinations of physical properties and parameters. The trained neural network can estimate the reliability of appropriate phase equilibrium methods. It can be used similarly as expert system. Because of all weights trained it gives results also in situations for which it was not learned - there exist more than 3000 unlabeled neurons with weights trained. This is an advantage over classical expert systems which, in the best case, can only warn the user against unsolvable situations.
6. References Alvarez, E., Riverol, C , Correa, J.M. and Navaza, J.M. 1999, Design of combined mixing rule for the prediction of vapor-liquid equilibria using neural networks, Ind. Eng. Chem. Res. 38, 1706. Baiiares-Alcantara, R., Westerberg, A.W. and Rychener, M.D. 1985, Development of an expert system for physical property predictions, Comp. Chem. Engng. 9, 127. Buenz, A.P., Braun, B. and Janowsky, R. 1999, Quantitative structure-property relationships and neural networks: Correlation and prediction of physical properties of pure components and mixtures from molecular structure. Fluid Phase Equilibria 158-160, 367. Bulsari, A.B., Ed., 1995, Neural networks for chemical engineers, Elsevier, Amsterdam. Dong, D., McAvoy, J. and Zafiriou, E. 1996, Batch-to-batch optimization using neural network models, Ind. Eng. Chem. Res. 7, 2269. Nielsen, J.M., Gani, R. and O'Connell, J.P. 1991, Computer-oriented process engineering: TMS: A knowledge based expert system for thermodynamic model selection and application, Eds. L. Puigjaner and A. Espuna, Elsevier, Amsterdam. Oreski, S. and Glavic, P. 1997, A knowledge based system for selection of phase equilibria models. Hung. J. Ind. Chem., 25, 161. Oreski, S., Zupan, J. and Glavic, P., 2001, Neural network classification of phase equilibrium methods, Chem. Biochem. Eng. Q. 15, 3. Oreski, S., Zupan, J. and Glavic, P. 2002, Artificial neural network classification of phase equilibrium methods - Part 2, Chem. Biochem. Eng. Q. 16,41. Renotte C , Vande, W.A., Bogaerts, P. and Remy, M. 2001, Neural network applications in non-linear modeling of (bio) chemical processes. Measurement and Control, 34, 197. Sharma, R., Singhal, D., Ghosh, R. and Dwivedi, A. 1999, Potentional applications of artificial neural networks to thermodynamics: vapor-liquid equilibrium predictions, Comp. Chem. Engng., 23, 385. Zupan, J. and Gasteiger, J. 1999, Neural networks in chemistry and drug design, Wiley - VCH, Weinheim.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
833
An Initialisation Algorithm to Solve Systems of Nonlinear Equations Arising from Process Simulation Problems Jorge R. Paloschi Cambridge - UK [email protected]
Abstract An algorithm is presented that, given an initial guess for the solution to a set of algebraic nonlinear equations, produces a new estimate (expected to be better). The algorithm finds a partition of the domain into two sets of variables- an initialised (i.e., fixed) set and a "solved for" set. The dimension of the former is much smaller than the latter, and the variables of the "solved for" set can be found by solving reduced problems with initial values given for those variables in the initialised set. Four examples are presented on which a Newton solver fails to find the solution while it converges when the new initial point produced by the algorithm is used. The Aspen Custom Modeler™ conmiercial simulator is used to pose the examples.
1. Introduction Posing a new problem in an equation-oriented process simulation program presents the difficulty of providing initial estimates for unknown variables. Providing good initialization is a challenge when solving simulation or optimisation problems in steadysate or dynamic mode. In the latter, this is related to the initialisation of the ordinary differential equation (ODE) system. When solving very large problems (i.e., more than 100000 equations) it is not practical to initialise all variables. Commercial simulators have built-in heuristics to help in tackling this problem. More modern ones, built around an object-oriented paradigm, allow very sophisticated ways of initialising variables. Nevertheless, for very large problems, finding a good initial point is still an important challenge. Different solution approaches can be used to converge problems when a good initial point is not available. These include Homotopy or Continuation techniques (e.g., Allgower et al., 1990), Interval solvers (e.g., Neumaier, 1990) and Global Optimisation (e.g., Maranas and Floudas, 1995). Instead of using a different solution method, we propose to find a better initial point by using the same equations that have to be solved as guidance. The objective of this paper is to present an algorithm that, from an initial estimate given by the user, obtains a better one which can then be fed into a numerical solver. Furthermore, the algorithm only needs the initialisation of a small subset of the original variables. That is, we will not be presenting a new solver but a tool that can be used in conjunction with a numerical solver to either solve problems faster or even solve them at all. It is basically a tool to improve robustness.
834 We highlight some details of the proposed algorithm in Section 2. In Section 3, we present results obtained by applying the algorithm in Section 2 to four problems of different complexity, posed using a commercial simulator. Finally, we provide our conclusions in Section 4.
2. Proposed Algorithm Assume we want to solve a system of equations f(x)=0
(1)
where x G Q, a bounded subset of 91°, and f: iQ^Sl*^. The aim of the algorithm is to split Q into Q = Qi X Q2, and the range of f(x) into Oi x O2 of same dimensions as Qi and ^2- The split is done in such a way that the variables in Q2 are assigned to the equations in O2 (in an unique way) and can then be solved for if the variables in Qi have fixed values. Therefore, from a given initial point XQ in Qi, a new initial point XQ can be found. The partition Qi is the set of initialised variables, while Q2 is the set of "solved for" variables. The aim is to minimize the dimension of Qi, i.e., the number of variables that need to be initialised. All the variables in Q2 are obtained by solving an optimisation problem. 2.1. Algorithm description The algorithm consists of building the sets Qi and ^2^ starting from empty ones, as well as constructing a new initial point XQ to replace a given XQ. An equation is chosen, from those not yet entered in either Oi or O2 , and then, from all the variables occurring in that equation, one is chosen as a variable to solve for, fixing as many variables as necessary in order to have just one remaining in the chosen equation: i = 0 , Xo' = Xo , Qi = ^2 = ^1 = ^2 =0 While (i0) Xa^^^"^ = Optimize (equi) if (X2^^°"^ found successfully) vari goes into Q2, ^Q.'^i goes into O2 Y
XQ
else
'i _ -
^ isol X2
else vari goes into ^i, equi goes into Oi eqUi goes into Oi
When the algorithm finishes, the subset Q is split into Qi x Q2, and the vector XQ contains the new initial point with all the components in Qi being taken from XQ, while those of Q2 are obtained with the algorithm. At the point XQ , f(x) is such that all components in O2 are small, within the given tolerance used in the algorithm. In what follows, we will use the notation Q12 = Q- (Qi u ^2)-
835 In the remaining of this section, we describe each step of the algorithm in some detail. 2.2. Algorithm details 2.1.1. SortAndSelectEquation step This step of the algorithm consists of selecting the next equation to be used. Since the aim is to minimize the number of variables that need to be fixed, ideally, we should choose an equation containing only one variable. Therefore, all remaining equations (i.e., from those not yet in Oi or O2) are sorted according to the number of variables in ^12, and the one with the minimum of them is chosen. In addition, how variables occur in the equations is also taken into account (i.e., whether the variables occur linearly or nonlinearly). The process is similar to a Forward Triangularization process, where the equations are weighted according to the number of variables in Q12 as well as how do they occur in the equations. 2.1.2. FindVarlnEqu step Given an equation index, from all the variables belonging to it in the set Q12, the one which appears in most other equations is chosen as output variable and moved from Q12 into ^2. and this variable index is returned by the algorithm. If more variables remain in Q12 belonging to the given equation, they are all moved to Qi, i.e., they will be variables to be fixed for the initialisation. If no variables are available, 0 is returned. 2.1.3. Optimize step The optimization step consists of solving the following problem for a given component i off(x):
min
E
{x
-XQ
)
\
f.(x) = 0,xea where x^ is the component X2^^\ Because the nature of the algorithm, the variables occuring in the equation fi(x) which are not x^ have all been solved for (or fixed) already. Notice that if fi(x)=0 has a solution in Q with x''=Xo'^, V k^j, then the solution is trivial. This is exploited by trying it first (i.e., using a one variable solver), and in our experience it succeeds in most cases. Only in case of failure the optimization problem is solved. The optimization problem (2) is not expensive since it only has one nonlinear constraint and the optimization variables are only those appearing in equation i.
3. Examples of Use The commercial simulator Aspen Custom Modeler^^ (ACM) has been used to pose various examples for testing the proposed algorithm. The examples are chosen because they are difficult to solve from their initial points. In all the examples, the standard Newton solver available in the simulator fails to find the solution, while it converges with the new initial point found by the algorithm.
836 In this section we present first some implementation details and then results obtained using four examples of different sizes in ACM. 3.1. Implementation details The system (2) is optimized using the successive reduced quadratic programming (SRQP) solver available in ACM. Before calling SRQP, a first attempt is made to solve the single variable nonlinear problem fi(x)=0 by using an unpublished algorithm of our own which will fmd a solution, if a feasible one exists.The SRQP solver and singlevariable algebraic solver are implemented following an Object Oriented design and make use of the Aspen Open Solvers (AOS) interfaces described in Paloschi et al (2000). The use of the AOS interfaces makes it very easy to plug in different solvers. In ACM, a Block Triangular Decomposition is applied to the nonlinear equation system before the solution process. The algorithm described in this paper is used to solve the nonlinear blocks arising after the decomposition, hence these will be smaller than the original problem. 3.2. Examples 3.2.1. Small heat transfer example The first example is a small set of five heat transfer equations in the only nonlinear block found in the simuation problem: cp_cw * fcwin * rho_cw * (tcwin - tcwout) + Q = 0 Q + rate*vol*heatreaction + rho*cp*fout*tout - rho * cp * fin * tin = 0 Q - (u * area * (tout - tcwout)) = 0 rate - kO * exp( - (Eact / Rgas) / (273.15 + tout)) * sqr{cAout) = 0 rate * vol+ (fout * cAout - fin * cAin) = 0
The five variables in bold are those solved for in this block, the others are either fixed or solved in previous blocks. The following are the occurrence matrices for this problem, as well as the initial points used and the solution (the L indicates linearity, while NL indicates nonlinearity). The one in the left is the original that is input to the algorithm, the one on the right is the one produced by the algorithm. cAout
n"' 2
Q
L L L
3 4 5
NL L
Uo
0.02
rate tcwout tout
L L L NL L
0
0
25
5 4 2 3 1
L L L 25
xo
rate
tout
NL L
L L L
rc"
8.73
Solution 8.74
Q
tcwout cAout 1
"T" NL L L L
L L
377.97 1.3E+06 349.8
0.02 1
427.9
0.0141
0
427.9
As it can be seen, after applying the permutations obtained by the proposed algorithm, the resultant occurrence matrix is lower triangular (if the greyed-out column(s) corresponding to the variables in Q.\ are not considered). The algorithm partitions the
837 problem in such a way that cAout is the only variable in Qi. The bounds for this variable are [0,20] and has an initial value of 0.02 while the solution is 0.010367. Notice that equations 2, 3 and 5 are linear in the variables that are solved for, but equation 4 is nonlinear in the assigned variable. In all cases there is no need to solve the optimization problem because the equations have solutions with the available values of the variables, in particular equation 4 with the values of rate and cAout. The new initial point provided by the algorithm is very different, and the variable values, while well off from the solution, are consistent with equations 2 to 5. 3.2.2. Heat exchanger network example The second example is a small heat exchanger network described in detail in Paloschi(1997). It is decomposed in five nonlinear groups, the largest being of size five. In this case, the dimension of Qi is also one in the five decomposed blocks. In this example, and for all the five nonlinear blocks solved after the decomposition, only one equation requires to solve the optimization problem because the equation does not have a solution at the given variable values. With the new initial point found with the proposals in this paper, the Newton solver succeeds, while it fails with the original initial point. 3.2.3. SSMeth example The next example is a small flowsheet containing a reactor, a separation and a recycle. It can be found as SSMeth in the examples provided with the ACM distribution. There are seven components in each process stream. After the decomposition, the size of the nonlinear block to which the proposed algorithm is applied is 61 variables. The algorithm produces a partition where Qi has five variables. The Newton solver in ACM fails to find the solution using a bad initial point provided. The new initial point provided by the algorithm, when used with the Newton solver, converges to the solution. In this case, we need to solve an optimization problem containing six equations. All the other 50 equations in Q.2 are solved successfully by the single variable algebraic solver. 3.2.4. WaterHammer The last example, WaterHammer, is the largest of all and is also taken from the set of examples provided with the ACM distribution. One of the process models involves a PDE, hence part of the equations are the product of discretizations. After the decomposition, a nonlinear block of 706 variables is left to solve. A bad initial point is chosen where all variables are set to the value l.e-6 (a very bad initial point). The algorithm produces a set Q.i with 22 variables, and by initializing these with l.e-4 a new initial point is obtained, from which the Newton solver converges. The value of the initialized variables at the solution are in the interval [2.0,3.0]. In this case, from the 684 equations that need to be solved, only six need to solve the optimization problem. In all the other 678 equations, the single-variable solver finds the solution.
4. Conclusions
We have presented an algorithm that can be used to obtain better initial points for solving simulation problems. The algorithm finds a partition of the problem variables into a set of initialised variables and its complement, the set of variables to be solved for. The size of the former set, i.e. the number of variables that have to be initialised, is minimized. The variables that are solved for are obtained by solving either a single-variable algebraic equation or an optimisation problem with just one constraint. The latter case happens only in a very small proportion of the equations in all the tested examples. The proposed algorithm is tested using four examples of varying complexity from a commercial simulation package. In all cases, the Newton solver available in the package fails to solve the problem using the bad initial point given. The algorithm produces partitions that require only a few variables to be initialised, and obtains new initial points from which the Newton solver converges.
5. References
Allgower, E. and Georg, K., 1990, Numerical Continuation Methods: An Introduction, Springer-Verlag.
Maranas, C. and Floudas, C., 1995, Finding all solutions of nonlinearly constrained systems of equations, J. Global Optim., 7, 143.
Neumaier, A., 1990, Interval Methods for Systems of Equations, Cambridge University Press.
Paloschi, J., 1997, Bounded homotopies to solve systems of sparse algebraic nonlinear equations, Comput. Chem. Engng, 21, 531-541.
Paloschi, J., Laing, M. and Zitney, S., 2000, Open Solvers Interfaces for Process Simulation, paper presented at the AIChE Annual Meeting.
Modelling Cells Reaction Kinetics with Artificial Neural Networks: A Comparison of Three Network Architectures
J. Peres¹, R. Oliveira² and S. Feyo de Azevedo¹
1 - Department of Chemical Engineering, Institute for Systems and Robotics, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
2 - Department of Chemistry, Centre for Fine Chemistry and Biotechnology, Faculty of Sciences and Technology, Universidade Nova de Lisboa, P-2829-516 Caparica, Portugal
Abstract
The present work compares three neural network architectures for modelling reaction kinetics in biological systems: the Mixture of Experts (ME) network, the Backpropagation (BP) network and the Radial Basis Function (RBF) network. The methods are outlined for the case of the growth kinetics of the Saccharomyces cerevisiae yeast. The S. cerevisiae yeast is able to grow through 3 different pathways. The main results show that a ME network with 3 linear expert modules was able to discriminate between the 3 pathways. The network was trained with the Expectation Maximisation method. A Gaussian gating system produced three input space partitions, one for each of the pathways. The 3 expert modules developed expertise in describing the kinetics of each of the pathways.
1. Introduction
The application of Artificial Neural Networks (ANNs) for modelling the reaction kinetics in biological systems has been exemplified in many works (e.g. Schubert et al. (1994), Montague and Morris (1994), Feyo de Azevedo et al. (1997)). Conventional BP networks and RBF networks are the most employed architectures. One important issue related to the nature of the cell system is the fact that cells may process substrates through different metabolic pathways. This is the case of diauxic growth on two carbon sources, or the case of aerobic/anaerobic growth depending on the presence or absence of dissolved oxygen in the medium. For example, S. cerevisiae can grow through three different metabolic pathways for exploiting energy and basic material sources and is able to switch between a respiratory metabolic state and a reductive metabolic state (Sonnleitner and Kappeli (1986)). BP and RBF networks have some limitations for approximating discontinuous input-output systems. BP networks tend to exhibit erratic behaviour around discontinuities (Haykin, 1994). RBF networks are well suited to local mappings but suffer from generalisation and intensive computation problems, especially for the resolution of fine details. There are strong reasons to believe that modular network architectures may be advantageous for modelling reaction kinetics in biological systems. A modular network architecture consists of two or more (small) network modules mediated by a so-called gating network which decides
how to combine their outputs to form the final output of the system. The learning of such networks is based on the principle of divide-and-conquer, i.e., the network modules compete to learn the training patterns. This type of architecture performs task decomposition in the sense that it learns to partition a task into two or more functionally independent tasks and allocates distinct networks to learn each task (Jacobs et al., 1991). Microorganism reaction kinetics are ruled by a rather complex network of metabolic reactions that can be viewed as being composed of a set of interconnected modules representing different pathways: glycolysis, TCA cycle, etc. Hence a modular network structure is hypothetically highly compatible with the internal structure of the system 'cell reaction kinetics'. A second relevant point in favour of modular networks is that they fit discontinuous input-output systems better (Haykin, 1994). These features indicate that this type of network could be advantageous for modelling the reaction kinetics. This paper compares three types of networks: the ME, BP and RBF networks. The S. cerevisiae yeast serves as an example to illustrate the application of the networks. The main objective of this study is to verify if modular network architectures, which are supposed to be able to perform task decomposition, are able to discriminate between reaction pathways in complex biological reaction schemes.
2. Methods
The Mixture of Experts (ME) network developed by Jacobs and Jordan (1991) was adopted in this work. The ME architecture consists of a set of k expert networks and one gating network (Fig. 1). The task of each expert i is to approximate a function f_i: x → y over a region of the input space. The task of the gating network is to assign an expert network to each input vector x. The final output y is a linear combination of the expert networks. The interesting property of this network is that it is able to learn to partition a task into two or more functionally independent tasks and to allocate distinct networks to learn each task. The training of the ME network may be performed using a maximum likelihood parameter estimator. For the class of nonlinear regression problems (which is our case) the objective is to map a set of training patterns {x, d}.
Figure 1. Block diagram of a 'mixture of experts' network; the outputs of the expert networks are mediated by a gating network.
The goal of the learning algorithm is to model the probability distribution of {x, d}. The output vector of each expert can be interpreted as a parameter of a conditional target distribution. In the case of a Gaussian distribution, the probability of a desired target d of dimension q, given the input x of dimension p and given the expert i, is
P(d|x,i) = \frac{1}{(2\pi)^{q/2}} \exp\left(-\frac{1}{2}\|d - y_i\|^2\right)   (1)

The expert outputs y_i correspond in this case to the conditional mean of the desired response d, given the input vector x and given that the ith expert network is used, y_i = E[d|x,i]. The outputs of the gating network, g_i, are interpreted as the conditional probability P(i|x) of picking the expert i given the input x. The probability of the desired target given the input x is thus
P(d|x) = \sum_i P(i|x)\,P(d|x,i) = \sum_i g_i \frac{1}{(2\pi)^{q/2}} \exp\left(-\frac{1}{2}\|d - y_i\|^2\right)   (2)
The learning algorithm for this architecture, in the light of the probabilistic interpretation made so far, can be viewed as a maximum likelihood parameter estimation problem. The criterion for estimating the synaptic weights w_i of each expert i and the synaptic weights a of the gating network is to maximise the density function of Eq. (2). Usually it is preferable to use the natural logarithm of P(d|x), the logarithm being a monotonically increasing function of its argument. Over the set of training patterns, and after some manipulation, the maximum likelihood function l(x,w,a) is
l(x,w,a) = \sum_t \ln \sum_i g_i(x_t,a_i) \exp\left(-\frac{1}{2}\|d_t - y_i(x_t,w_i)\|^2\right)   (3)
where w = [w_1, w_2, ..., w_k]^T and a = [a_1, a_2, ..., a_k]^T are the vectors of weights of the expert networks and of the gating network, respectively. The expert modules may be linear, y_i = W_i x, or nonlinear functions, for instance a small BP network. The gating network outputs have a probabilistic interpretation and must obey two constraints: all g_i must be positive and they must sum to one for each x. The gating network may be defined by a set of k 'softmax' processing units (Jacobs and Jordan (1991)):
g_i = \exp(u_i) \Big/ \sum_j \exp(u_j), \quad i = 1, ..., k   (4)
where u_i is a linear combination of the input vector x and the connection weights a_i, u_i = a_i x. The softmax functions normally provide a 'soft' partition of the input space.
The learning algorithm must update the synaptic weights w_i of all expert networks and the weights a_i of the gating network in order to maximise function (3). Jacobs and Jordan (1991) applied a gradient ascent weight-updating algorithm where the weights w_i and a_i are updated simultaneously. Jordan and Jacobs (1994) applied the Expectation Maximisation (EM) algorithm for training the network, which proved to converge much faster than the gradient ascent algorithm.
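To make the training criterion concrete, the following sketch implements the forward pass of a small ME network with linear experts and a softmax gate, and evaluates the log-likelihood of Eq. (3). It is an illustrative reimplementation under the stated assumptions (unit-variance Gaussian experts, scalar targets), not the authors' code.

```python
import numpy as np

def softmax(u):
    u = u - u.max()                      # numerical stabilization
    e = np.exp(u)
    return e / e.sum()

def me_forward(x, W, A):
    """Forward pass of a mixture of k linear experts.

    W: (k, p) expert weights, so expert i predicts y_i = W[i] @ x
    A: (k, p) gating weights, u_i = A[i] @ x and g = softmax(u)  (Eq. 4)
    Returns the mixture output sum_i g_i * y_i and the gate values.
    """
    y = W @ x                            # expert outputs, shape (k,)
    g = softmax(A @ x)                   # gating probabilities
    return g @ y, g

def log_likelihood(X, D, W, A):
    """Log-likelihood of Eq. (3) over training pairs (x_t, d_t)."""
    ll = 0.0
    for x, d in zip(X, D):
        y = W @ x
        g = softmax(A @ x)
        ll += np.log((g * np.exp(-0.5 * (d - y) ** 2)).sum())
    return ll
```

Maximising this function, whether by gradient ascent on (W, A) or by EM, is what produces the hard assignment of experts to input-space regions discussed in the case studies.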
3. Results and Discussion
Case Study 1: Model of the Specific Growth Rate by Blackman
In this simple example the objective is to approximate the Blackman model for the specific growth rate (\mu) as a function of the substrate concentration (S):

\mu(S) = \mu_M \frac{S}{K_M} \;\text{ for } S \le K_M, \qquad \mu(S) = \mu_M \;\text{ for } S > K_M   (5)
where \mu_M and K_M are two kinetic parameters. Eq. (5) has a discontinuity at S = K_M; the objective of this study is to assess the behaviour of the networks when dealing with such discontinuous models. Eq. (5) was used to generate data, with K_M = 0.2 g/l and \mu_M = 0.17 h⁻¹, for glucose concentrations ranging between 0 and 1 g/l with intervals of 0.002 g/l. A ME network with 2 linear experts and a softmax gating network was trained on this data with the gradient ascent method. The total number of parameters was 8, which is the minimum number possible. The training algorithm converged very easily and rapidly, yielding a final mean square error of 9.7x10'^. The results are shown in Fig. 2. The ME network was able to partition the input space at the discontinuity, as expected, and each of the experts was assigned to one or the other partition. One can notice a small curvature around the discontinuity because the 'softmax'
functions produce a soft partition of the input space.

Figure 2. Approximation results of the ME network to the Blackman model: (a) specific growth rate; (b) gating network outputs g1 (solid line) and g2 (dashed line).

A BP network with one hidden layer, sigmoid activation functions and with 7 and 10 parameters produced a mean
square error of 5.5x10"^ and 5.7x10'^ respectively. The performance of the BP net with the same number of parameters is quite similar to that of the ME network. The RBF network with 8 and 31 parameters produced an error of 3.6x10"^ and 3.1x10"^ respectively. For describing the fine detail around the discontinuity, the RBF net requires many more parameters than the other two networks.

Case Study 2: S. cerevisiae cultivation process
The S. cerevisiae cells can metabolise glucose via two pathways under aerobic conditions, oxidatively and/or reductively, with ethanol being the end product of the reductive pathway. The cells are able to use ethanol as a second substrate (the phenomenon of diauxic growth), but ethanol can be metabolised oxidatively only. The 3 metabolic pathways may be stated by the following macroscopic reactions:

S + NH3 + O2 → X + CO2 + H2O (\mu_{os})   (R1) - oxidative glucose uptake
S + NH3 → X + E + CO2 + H2O (\mu_{rs})   (R2) - reductive glucose uptake
E + NH3 + O2 → X + CO2 + H2O (\mu_{oe})   (R3) - oxidative ethanol uptake
where S is glucose, X is biomass and E is ethanol. \mu_{os}, \mu_{rs} and \mu_{oe} are three specific growth rates associated with each pathway. Sonnleitner and Kappeli (1986) proposed a kinetic model, assuming this reaction mechanism, based on the bottleneck concept. The key concept in the bottleneck model is that there is a maximum rate for the oxidative glucose and ethanol uptakes, which is governed by the yeast's maximum respiratory capacity. The cells cannot grow simultaneously through pathways 2 and 3. Growth switches between pathways 2 and 3 depending on the available respiratory capacity (which depends on the concentration of dissolved oxygen) and on the actual glucose uptake rate (which is dependent on the glucose concentration in the medium). The total growth rate is the sum of the three growth rates related to the 3 pathways. The main goal in this case study is to model the specific growth rate and to verify if the ME network is able to detect the switch between pathways 2 and 3. Three batches were simulated with constant feed rates of 0.05, 0.5 and 2.5 l/h. Data of the total growth rate as a function of glucose concentration and ethanol concentration (we assumed that oxygen was never a limiting substrate) was collected with sampling intervals of 0.1 h. The total number of points used for training was 78. This data was used to train and compare the 3 networks. The results obtained with the ME network with 3 linear experts (9 parameters) are plotted in Figs. 3a-b. The gating network employed was a Gaussian network and the training algorithm was the EM algorithm. The mean square error obtained was 1.6x10"^. The interesting point to be noticed in this example is that the ME was able to discriminate between the 3 possible combinations of reactions. A BP network with 9 parameters produced a mean square error of the same order of magnitude (1.38x10'^), indicating that there is no apparent advantage in using a ME network in this example. The results produced by a RBF network with 9 parameters are worse, as in the previous case study. The mean square error obtained was 1.7x10'^.
Figure 3. Results for one batch: (a) specific growth rate estimates with a ME with 3 experts (9 parameters), measured values (dots) and estimated values (solid line), with the active reaction combinations (R1)+(R2), (R1) and (R1)+(R3) indicated; (b) Gaussian gating network outputs g1 (dotted line), g2 (solid line) and g3 (dashed line).
4. Conclusions
This work studied the application of modular networks for modelling reaction kinetics in biological processes. The study was restricted to the very simple ME architecture with linear expert modules. The main results showed that the ME was able to perform task decomposition, in the sense that it could decompose the input space into three partitions that in reality correspond to 3 different growth pathways. In terms of modelling errors, it was shown that the ME did not represent an advantage in relation to the BP network, at least for the 2 simple case studies presented. However, it is expected that for more complex multidimensional problems the performance of the ME network would be better. Work is currently being done in that direction.
5. References
Feyo de Azevedo, S., Dahm, B., Oliveira, F.R., 1997, Computers Chem. Engng, 21, Suppl., 751-756.
Haykin, S., 1994, Neural Networks: A Comprehensive Foundation, Prentice Hall.
Jacobs, R.A., Jordan, M.I., 1991, in: Advances in Neural Information Processing Systems 3, R.P. Lippmann, J.E. Moody, D.S. Touretzky, Eds., 767-773, Morgan Kaufmann, San Mateo, CA.
Jacobs, R.A., Jordan, M.I., Barto, A.G., 1991, Cognitive Science, 15, 219-250.
Jordan, M.I., Jacobs, R.A., 1994, Neural Computation, 6, 181-214.
Montague, G., Morris, J., 1994, Trends Biotechnol., 12, 312-324.
Schubert, J., Simutis, R., Dors, M., Havlik, I., Lübbert, A., 1994, Chem. Eng. Technol., 17, 10-20.
Sonnleitner, B., Kappeli, O., 1986, Biotech. Bioeng., 28, 927-937.
Object-Oriented Components for Dynamic Hybrid Simulation of a Reactive Distillation Process
J. Perret, R. Thery, G. Hetreux, J.M. LeLann
Laboratoire de Genie Chimique, UMR CNRS 5503, BP 1301, 5 Rue Paulin Talabot, 31106 Toulouse cedex 1, France, email: [email protected], [email protected], [email protected], [email protected]
Abstract PrODHyS (Process Object Dynamic Hybrid Simulator) is an object-oriented environment for the hybrid dynamic simulation of chemical processes. The aim of this paper is to introduce a small part of the main reusable object components available in this library and to illustrate its potentialities through the modeling and the simulation of a reactive distillation column.
1. General Description of the Simulation Platform PrODHyS
1.1. Objective of PrODHyS
The development of commercial software such as Aspen Dynamics or ProsimBatch proves the interest of manufacturers in the use of dynamic simulation for process design and operation. However, some specific process cases may lead end users to develop dedicated simulators of their own. In this context, a library of common building blocks which allows a modular modeling and an equation-oriented simulation of processes seems to be relevant and useful. Providing these software components is the main objective of PrODHyS. Developed in our research department, it results from the unification of works performed over several years in this domain (Jourda et al., 1996; Sargousse, 1999; Moyse, 2000; Hetreux et al., 2002; Perret et al., 2002).
1.2. Object-Oriented Approach
PrODHyS is based on an object-oriented approach which emerges as an efficient and concrete response to extensibility, reusability and software quality requirements. Each elementary entity is defined as an abstract object which has to be derived via object mechanisms (inheritance, aggregation, genericity, etc.) in order to build more complex elements.

Figure 1. Structure of the PrODHyS library.
Currently, this library is made up of more than 1000 classes distributed into six main packages (Figure 1).
1.3. ODPTPN formalism (Object Differential Predicate-Transition Petri Net)
An original feature of PrODHyS is the hybrid modeling of each device. Various formalisms are proposed in the literature (Sibertin-Blanc, 1985; Champagnat et al., 1998; Zaytoon, 2001). The ODPTPN formalism is a mixed approach which combines continuous and discrete models in the same structure and offers a great level of abstraction. The association of Predicate-Transition Petri nets with differential algebraic equation (DAE) systems is particularly well fitted to describe dynamic and behavioural aspects. Places with a DAE system are called hybrid places and are drawn with two concentric circles. The kernel of the simulator is split up into 3 units: a discrete solver, a continuous solver and a simulation manager which manages the interactions between the solvers. Figure 2 shows the simulation cycle.
Figure 2. Simulation cycle: the discrete solver solves the discrete part of the model; the simulation manager concatenates the DAE systems of all marked hybrid places into the global continuous model, together with the conditions of all transitions located below a marked place; the continuous solver integrates the global DAE system until a condition is satisfied or a stable state is reached.

The integration of object concepts is attractive to describe static and structural aspects. Classes generate entities that encapsulate both attributes (such as continuous variables) and methods which handle these data (including equations). In PrODHyS, objects are introduced into Petri nets just as Petri nets are introduced inside objects (Figure 3). Tokens are typed and carry various kinds of objects. The individualization and definition of classes for tokens make the model more compact without information loss. Hence, an ODPTPN is also characterized by typed arcs (that ensure model coherence) and transitions with conditions (expressions over the attributes and methods of allowed tokens) and actions (instantiate or destroy tokens, execute allowed token methods).
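The coupling just described, a typed token carrying its own equation contribution that is concatenated with that of the hybrid place it marks, can be illustrated with the following schematic classes. The names are invented for illustration; PrODHyS itself is a C++/UML library and its actual class design is far richer.

```python
class Token:
    """A typed token carrying model variables and its own equations."""
    def __init__(self, name, equations):
        self.name = name
        self.equations = list(equations)   # residual functions f(t, x, xdot)

class HybridPlace:
    """A Petri-net place with an attached DAE system."""
    def __init__(self, name, equations):
        self.name = name
        self.equations = list(equations)
        self.tokens = []

    def marked(self):
        return bool(self.tokens)

def global_dae(places):
    """Concatenate the DAE systems of all marked hybrid places and of
    the tokens marking them, as the simulation manager does before
    handing the global system to the continuous solver."""
    eqs = []
    for p in places:
        if p.marked():
            eqs.extend(p.equations)
            for tok in p.tokens:
                eqs.extend(tok.equations)
    return eqs
```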
2. Considered Application: Reactive Distillation
2.1. Interest
Several authors have shown the industrial benefit of developing chemical engineering equipment based on the integration of different functions in a single device (Taylor and Krishna, 2000; Stankiewicz and Moulijn, 2000). Multifunctional reactors are processes that combine reaction with other operations, such as heat exchange or separation, in order to enhance chemical conversion. Reactive distillation is certainly one of the most significant examples. It presents several benefits (such as reduced energy consumption, overcoming the limitation of the thermodynamic chemical equilibrium, limitation of side reactions and decreased waste production) and it is applied in various areas of chemical engineering: esterification reactions such as the production of ethyl acetate (the product chosen in this paper), and etherification reactions. However, the difficulty of forecasting the effects caused by the interaction between reaction and vapor-liquid equilibrium makes this kind of operation particularly complex to design and control.

Table 1. Phenomena that may occur in reactive distillation.
• Phenomena that take place: reaction in the liquid and/or vapor phase; equilibrium reactions; homogeneous or heterogeneous catalytic reactions; appearance of a second liquid phase.
• Column configuration / technological choices: reactive and non-reactive zones in the same column; zones with different plate technologies; multiple feeds.
• Models: theoretical stage model; rate-based model; model for liquid-vapor equilibria.

2.2. Required functionalities
Table 1 sums up the functionalities a dynamic simulator of reactive distillation processes should offer. It enumerates the physical phenomena that may occur, the technological choices a designer could make, and the different models commonly used to represent a reactive distillation process. These points are not exhaustive; the constant evolution of reactive distillation processes and the combinatorics involved in the various structural choices of the process require an open and flexible environment. In this context, the use of an object-oriented software component library such as PrODHyS appears to be the best trade-off between coding evolutivity and development time.
3. Modeling of a General Device
3.1. Material
In PrODHyS, the model of material is loosely coupled with the device (association relationship). Indeed, material is a ReactivePhaseSystem-type token (Figure 4). When this token marks a hybrid place of the associated device, the resulting continuous model is built by concatenating the DAE system of the actual material state with the DAE system of the actual device state.

Figure 4. Simplified modeling of material.

Figure 5. ElementaryDeviceOut with 3 material output ports.
3.2. Specification of a device
In PrODHyS, each constitutive device of a flowsheet is defined according to three axes:
• Its topology: A device can be elementary or made up of other devices structured in a hierarchical way. The device topology describes its structure and formalizes exchanges between devices through elements named ports. A port corresponds to a data sharing between the inner part of the device and the outside (environment or another device). This mechanism avoids "connection" equations such as F1out = F2out. Two kinds of port are defined: communication ports and transport ports (material and energy).
• Its model: The description of the model according to the ODPTPN formalism requires two steps: the first one consists of defining the set of possible states of the process as well as the transitions and events responsible for its evolution; the second one consists of defining the mathematical models (one for each hybrid place) which describe the continuous behavior of the device.
• Its configuration: The application of a configuration completes the definition of a device. For example, it specifies values for geometric parameters, the choice of physical models, the initial marking, etc.
3.3. Elementary device
Elementary devices are general reusable elements used to model complex devices by inheritance or composition. At this level, only material and energy balance equations are taken into account. In consequence, the states of such devices only depend on the activity of the input and output flows. As ports provide the corresponding flow variables, they are defined as tokens in order to build the most general element. A device's Petri net includes as many port tokens as the device owns ports. When a port token marks a hybrid place of a device, the associated flow variable is taken into account in the DAE system. The management of these tokens is done either with internal transitions (in the case of a specialized device) or with command places (when the port status depends on the evolution of another device). In order to preserve model consistency, the activation of an output port has to activate in chain the input port of the connected device; a sketch of this mechanism is given below. Figure 5 shows an example of an elementary device found in PrODHyS.
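The following schematic rendering of the port mechanism shows the two behaviours just described: a port's flow variable is counted in the balance only while the port is active, and activating an output port activates the connected input port in chain. The class and attribute names are hypothetical.

```python
class Port:
    """A transport port; holds the shared flow variable."""
    def __init__(self, name):
        self.name = name
        self.flow = 0.0
        self.active = False
        self.connected_to = None           # input port of the next device

    def activate(self):
        self.active = True
        if self.connected_to is not None:
            self.connected_to.active = True   # chain activation

class ElementaryDevice:
    """Material balance of a holdup with an arbitrary number of ports.
    Only ports whose token marks the hybrid place contribute."""
    def __init__(self, inlets, outlets):
        self.inlets, self.outlets = inlets, outlets

    def holdup_balance(self):
        f_in = sum(p.flow for p in self.inlets if p.active)
        f_out = sum(p.flow for p in self.outlets if p.active)
        return f_in - f_out                # dM/dt for the continuous solver
```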
4. Object Modeling of the Reactive Distillation Column
From a conceptual point of view, a column is made up of a condenser, a reboiler (the association of a tank and an energy feed), a flow divider and a column body (made up of several sections owning a set of packing stages or plate stages). Figure 6 shows this possible decomposition. All the resulting elements are built with classes which define elementary devices. For example, the reboiler is a composite device. It is made up of two devices: first, a storageTank device defining a closed tank; second, an EnergyFeed which is a specialization of an ElementaryDevice. Note that within the PrODHyS environment, the flux induced by a potential gradient is positive by convention when it goes from an output port to an input port.
Figure 6. Column decomposition.

So, the reboiler and the condenser only differ by the sign of the heat duty. The reboiler behavior is described by a Petri net which manages the Petri nets of its devices. Figure 7 illustrates the connection between devices and a recipe.
5. Simulation Results
The esterification of acetic acid and ethanol into ethyl acetate is a commonly studied model in reactive distillation. To obtain ethyl acetate at a higher purity than the binary azeotrope ethanol-ethyl acetate, a column with two feed plates and a strong excess of acetic acid are usually required. Excess acid allows breaking the binary azeotrope which tends to form at the top of the column. Figure 8 presents the configuration of the column and the adopted kinetic law. The thermodynamic model for the liquid phase relies on the WILSON activity coefficient model. An initial system (298 K) made up of acetic acid, ethanol and water is introduced into the reboiler and then heated. The column fills up stage by stage, starting from the bottom up. After a 1h30 total-reflux operation, the feeds are introduced. The reboiler fills up until it reaches the required holdup. Then, the reflux rate is set to a finite value (R = 10). Figure 8 presents the evolution of the ethyl acetate composition
Figure 7. A composite device (reboiler) with a recipe.
with time for five plates as well as for the condenser and the reboiler. During this operation, a yield of about 67% is obtained, and the ester purity at the distillate (92%) is widely higher than in the azeotrope (54%).

Figure 8. Ethyl acetate production: ethyl acetate mole fraction versus time (0 to 8000 s) at the distillate, stages 2, 3, 4, 12 and 15, and the bottom; the adopted kinetic law is AcAc + EtOH <=> AcEt + Water with r = k1·C_AcAc·C_EtOH, k1 of Arrhenius form (r in mol·l⁻¹·h⁻¹; R = 2 cal·mol⁻¹·K⁻¹).
6. Conclusion
The design of PrODHyS follows an industrial software development process based on the use of UML and C++. Currently, the library is successfully used to simulate several kinds of devices. The object philosophy offers a natural approach to process modeling, and the ODPTPN formalism eases the management of complex dynamic hybrid simulation.
7. References
Champagnat, R., Esteban, P., Pingaud, H., Valette, R., 1998, Modeling and simulation of a hybrid system through Pr-Tr PN-DAE model, ADPM'98, 131-137, Reims.
Hetreux, G., Thery, R., Perret, J., Lelann, J.M., Joulia, X., 2002, Bibliotheque orientee-objet pour la simulation dynamique des procedes: architecture et mise en oeuvre, SIMO, Toulouse, France.
Jourda, L., Joulia, X., Koehret, B., 1996, Introducing ATOM, the Applied Thermodynamic Object-Oriented Model, Computers & Chemical Engineering, 20A, S157-S164.
Moyse, A., 2000, Odysseo, plate-forme orientee-objet pour la simulation dynamique des procedes, PhD thesis, INP, Toulouse, France.
Perret, J., Hetreux, G., LeLann, J.M., 2002, Composants orientes-objets pour la simulation dynamique de procedes: aspects hybrides, SIMO, Toulouse, France.
Sargousse, A., 1999, Noyau numerique oriente-objet dedie a la simulation des systemes dynamiques hybrides, PhD thesis, INP, Toulouse, France.
Sibertin-Blanc, C., 1985, High-level Petri nets with Data Structure, 6th European Workshop on Petri Net Application, Espoo, Finland.
Stankiewicz, A., Moulijn, J.A., 2000, Process intensification: Transforming chemical engineering, Chemical Engineering Progress, Vol. 1, 22-34.
Taylor, R., Krishna, R., 2000, Review: modelling reactive distillation, Chem. Eng. Sci., 55, 5183-5229.
Zaytoon, J., 2001, Systemes dynamiques hybrides, HERMES Sciences publications.
Using the HSS Technique for Improving the Efficiency of the Stochastic Decomposition Algorithm
Jose M. Ponce-Ortega¹, Vicente Rico-Ramirez¹ and Salvador Hernandez-Castro²
1 - Instituto Tecnologico de Celaya, Av. Tecnologico y Garcia Cubas S/N, Celaya, Gto., C.P. 38010, Mexico
2 - Universidad de Guanajuato, Facultad de Quimica, Col. Noria Alta S/N, Guanajuato, Gto., C.P. 36000, Mexico
Abstract
This work focuses on the Stochastic Decomposition (SD) algorithm of Higle and Sen (1996) for two-stage stochastic linear programming problems with complete recourse. Such an algorithm uses sampling when the random variables are represented by continuous distribution functions. Traditionally, this method has been applied by using the Monte Carlo sampling technique to generate the samples of the stochastic variables. However, Monte Carlo methods can result in large error bounds and variance. Hence, some other approaches use importance sampling to reduce the variance and achieve convergence faster than the method based on the Monte Carlo sampling technique. This work proposes to replace the use of the Monte Carlo sampling technique in the SD algorithm by the use of the Hammersley Sequence Sampling (HSS) technique. Recently, such a technique has been proved to provide better uniformity properties than other sampling techniques and, as a consequence, the variance and the number of samples required for the convergence of the SD algorithm are reduced. The approach has been implemented as a computational framework that integrates the GAMS modeling environment (Brooke et al., 1998), the HSS sampling code and a C++ program. The algorithm is tested with a chemical engineering case study and the results are discussed.
1. Introduction
There is a huge body of literature on stochastic linear programming, including books and numerous articles (e.g. Birge and Louveaux, 1997; Bouza, 1993). The main class of stochastic linear problems involves two stages and the concept of recourse (stochastic linear program with recourse, SLPwR). In the first stage, the choice of the decision variables, x, is made. In the second stage, following the observation of the values of the random variables, u, and the evaluation of the objective function, a corrective action (known as recourse), v, is suggested. The standard mathematical form of a SLPwR problem is given by Equations (1) and (2). Equation (1) is referred to as the first stage problem:

\min_x \; c^T x + Q(x) \quad \text{subject to} \quad Ax = b, \;\; x \ge 0   (1)
where A is a coefficient matrix, c is a coefficient vector and Q(x) is the recourse function defined by Q(x) = E[Q(x,u)]. E is the expectation operator and Q(x,u) is obtained from the second stage problem, Equation (2):

Q(x,u) = \min_v \; q^T v \quad \text{subject to} \quad W(u)\,v = h(u) - T(u)\,x, \;\; v \ge 0   (2)
In Equation (2), q is a coefficient vector and W, h and T are matrices whose elements in principle might depend on the random variables u. The matrix W is known as the recourse matrix. Fixed recourse means that the recourse matrix, W, is independent of u, whereas complete recourse means that any set of values that we choose for the first stage decisions, x, leaves us with a feasible second stage problem. One of the main algorithms for stochastic linear programming with fixed recourse is the Stochastic Decomposition (SD) algorithm (Higle and Sen, 1996). The SD algorithm uses sampling when the random variables are represented by continuous distribution functions. As a result, estimations of the lower bound of the recourse function are based on expectation. The SD algorithm is based on the addition of linear constraints (known as cuts) to the first stage problem. Hence, there are two types of cuts successively added during the solution procedure: feasibility cuts and optimality cuts. A feasibility cut is a linear constraint which ensures that a first stage decision is second stage feasible. Notice that a complete recourse problem does not need the addition of feasibility cuts. On the other hand, an optimality cut is a linear approximation of Q(x) on its domain of finiteness, and is determined based on the dual of the second stage problem. As such, each optimality cut provides a lower bound (linear support) on Q(x). Rico-Ramirez (2002) provides a detailed description of the algorithm and the implications and derivation of both types of cuts.
2. Stochastic Decomposition Algorithm
In the SD algorithm, it is necessary to sample at each iteration from a continuous probability distribution. For that reason, the estimation of the lower bound of Q(x) is based on expectation. The algorithm presented here corresponds to that provided by Higle and Sen (1996), who assume complete recourse (no feasibility cuts are needed) and provide a simplification involving a restricted solution set for the dual of the second stage problem in order to decrease the computational effort. The steps of the algorithm are given next.

• Step 0. Set \nu = 0 and V_0 = \{\emptyset\}; x^1 is assumed as given.
• Step 1. Set \nu = \nu + 1 and sample to generate an observation u^\nu independent of any previous observation.
• Step 2. Determine the coefficients of a piecewise linear approximation to Q(x):
a) Solve the dual program of the second stage problem,

\max_\pi \; \pi^T (h_\nu - T_\nu x^\nu) \quad \text{subject to} \quad \pi^T W \le q^T

to find the vector of second stage dual variables \pi_\nu^\nu, and set V_\nu = V_{\nu-1} \cup \{\pi_\nu^\nu\}.
b) Get the coefficients of the optimality cut,

\alpha_\nu^\nu = \frac{1}{\nu} \sum_{k=1}^{\nu} (\pi_k^\nu)^T h_k, \qquad \beta_\nu^\nu = \frac{1}{\nu} \sum_{k=1}^{\nu} (\pi_k^\nu)^T T_k

where \pi_k^\nu is the solution of the problem (for all k \ne \nu)

\max_\pi \; \pi^T (h_k - T_k x^\nu) \quad \text{subject to} \quad \pi \in V_\nu.

Observe that the solution vector of this problem can only be one of the vectors already included in the set of solutions, V_\nu.
c) Update the coefficients of the previous cuts:

\alpha_k^\nu = \frac{\nu-1}{\nu}\,\alpha_k^{\nu-1}, \qquad \beta_k^\nu = \frac{\nu-1}{\nu}\,\beta_k^{\nu-1}, \qquad k = 1, \ldots, \nu-1.

• Step 3. Solve the first stage problem after the addition of the optimality cuts,

\min_{x,\theta} \; c^T x + \theta \quad \text{subject to} \quad Ax = b, \quad \theta + (\beta_k^\nu)^T x \ge \alpha_k^\nu, \; k = 1, \ldots, \nu

to obtain x^{\nu+1}. Go to Step 1. The algorithm stops if: a) the change in the objective function is small, or b) no new dual vectors are added to the set of dual solutions, V.
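A compact skeleton of this loop is sketched below, using SciPy's linprog (HiGHS backend, recent SciPy assumed) both for the second-stage duals and for the cut-augmented master problem. It is a didactic rendering under simplifying assumptions (complete fixed recourse, randomness entering h only); all names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def sd_iteration(A, b, c, q, W, T, h_obs, cuts):
    """One SD-style iteration.  h_obs: observations h_1..h_nu so far;
    cuts: list of (alpha_k, beta_k) pairs, rescaled and extended in place."""
    nu = len(h_obs)
    n = len(c)
    # Master problem over (x, theta):
    #   min c^T x + theta  s.t.  A x = b,  theta + beta_k^T x >= alpha_k
    c_m = np.append(c, 1.0)
    A_eq = np.hstack([A, np.zeros((A.shape[0], 1))])
    A_ub = np.array([np.append(-beta, -1.0) for _, beta in cuts]) if cuts else None
    b_ub = np.array([-alpha for alpha, _ in cuts]) if cuts else None
    theta_bnd = (None, None) if cuts else (0.0, 0.0)   # no cut yet: pin theta
    res = linprog(c_m, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * n + [theta_bnd], method="highs")
    x = res.x[:n]
    # Second stage for every observation; the equality-constraint marginals
    # of  min q^T v  s.t.  W v = h - T x, v >= 0  are the dual vectors pi_k.
    pis = [linprog(q, A_eq=W, b_eq=h - T @ x,
                   bounds=[(0, None)] * len(q),
                   method="highs").eqlin.marginals
           for h in h_obs]
    # New cut (step 2b) and rescaling of the old ones (step 2c).
    alpha = sum(pi @ h for pi, h in zip(pis, h_obs)) / nu
    beta = sum(pi @ T for pi in pis) / nu
    cuts[:] = [(a * (nu - 1) / nu, bb * (nu - 1) / nu) for a, bb in cuts]
    cuts.append((alpha, beta))
    return x
```

Note that genuine SD prices the older observations with the stored dual vertices V_\nu instead of re-solving their subproblems; the sketch trades that computational saving for brevity.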
3. HSS Sampling Technique
One of the most widely used techniques for sampling from a probability distribution is the Monte Carlo sampling technique. Monte Carlo methods can result in large error bounds and variance. Variance reduction techniques are statistical procedures designed to reduce the variance of the Monte Carlo estimates; importance sampling and Latin Hypercube Sampling (LHS) are examples of variance reduction techniques. Recently, an efficient sampling technique (Hammersley Sequence Sampling, HSS) has been developed by Kalgnanam and Diwekar (1997). HSS uses an optimal design scheme for placing n points on a k-dimensional hypercube. This scheme ensures that the sample set is more representative of the population, showing uniformity properties in multiple dimensions, unlike the Monte Carlo and Latin Hypercube techniques (see Figure 1). It was found that, in most of the cases, the HSS technique is at least 3 to 100 times faster than the LHS and Monte Carlo techniques, and hence it is a preferred technique for uncertainty analysis as well as for optimization under uncertainty.
Figure 1. Sampling in a unit square by using (a) HSS and (b) Monte Carlo techniques.
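The Hammersley points themselves are straightforward to generate: coordinate j of point i is the radical-inverse of i in the j-th prime base, with the first coordinate taken as i/n. A minimal generator (our own sketch, not Diwekar's code) follows.

```python
def radical_inverse(i, base):
    """Van der Corput radical-inverse of the integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, dim, primes=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)):
    """n Hammersley points on the dim-dimensional unit hypercube."""
    pts = []
    for i in range(n):
        p = [i / n] + [radical_inverse(i, primes[j]) for j in range(dim - 1)]
        pts.append(p)
    return pts

# A sample u in [0, 1) is then mapped onto a uniform demand of Table 1,
# e.g. demand = 267000 + u * (274000 - 267000) for the 195 psig steam.
```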
4. The Approach and Its Computational Implementation
In this work we compare the performance of the SD algorithm when the HSS and the MC sampling techniques are used to sample the random variables. The computational implementation of the algorithm involves a framework that integrates the GAMS modeling environment (Brooke et al., 1998), the sampling code (FORTRAN) and a C++ program which generates the appropriate LP problems for each SD iteration. The implementation is shown in Figure 2.
Figure 2. Computational implementation of the SD algorithm: a C++ code drives (1) the generation of an approximation to Q(x), through sampling (FORTRAN) and the generation and solution of multiple LPs (GAMS-OSL), and (2) the addition of the optimality cut and the solution of the first stage problem (GAMS-OSL).
5. Chemical Engineering Case Study
Our case study corresponds to a stochastic version of the boiler/turbo generator system problem presented by Edgar et al. (2001). The system may be modeled as a set of linear constraints and a linear objective function. The demands on the resources are considered as uncertain variables in the problem. The distributions used for the demands are shown
in Table 1 and the plant is shown in Figure 3. To produce electric power, this system contains two turbo generators. Turbine 1 is a double extraction turbine and Turbine 2 is a single extraction turbine. To meet the electric power demand, electric power might be purchased. The resulting SLPwR was solved by using the SD algorithm with the MC and HSS sampling techniques. For simplicity, we will not show the constraints of the model (see Edgar et al., 2001). The results are described in the following section.
Table 1. Probability distributions of the uncertain demands.
Resource | Demand | Distribution
Medium-pressure steam (195 psig) | [267,000 : 274,000] lbm/h | Uniform
Low-pressure steam (62 psig) | [97,000 : 103,000] lbm/h | Uniform
Electric power | [22,000 : 26,000] kW | Uniform
Figure 3. Case study: boiler/turbo generator system (Edgar et al., 2001), comprising 635, 195 and 62 psig steam headers, two pressure-reducing valves, Turbine 1 (power P1), Turbine 2 (power P2), condensate and purchased power.
6. Results and Conclusions
The values obtained for the objective function with the MC and HSS sampling techniques are shown in Figure 4a. Figure 4b shows the error of those values when compared to the convergence value of the objective function. It can be observed that the error presented with the HSS sampling is lower than that obtained with the MC sampling. After the solution of several other SLPs, the reduction in the number of iterations and in the error seems to be a general advantage of HSS with respect to MC and other sampling techniques. Current research efforts focus on using a fractal approach to characterize the
error presented with each of the techniques; we are also working on an extension of the analysis to stochastic mixed-integer linear programs. It is expected that, since every node of a branch and bound algorithm can be individually seen as an SLP, the number of iterations and the computer time when using HSS should be dramatically reduced.

Figure 4. (a) Objective value for the case study using SD with the MC and HSS techniques. (b) Error of each iteration with respect to the convergence value of the objective.
7. References
Birge, J.R. and Louveaux, F., 1997, Introduction to Stochastic Programming, Springer-Verlag, New York.
Bouza, C., 1993, Stochastic programming: the state of the art, Revista Investigacion Operacional, 14(2).
Brooke, A., Kendrick, D., Meeraus, A. and Raman, R., 1998, GAMS - A User's Guide, GAMS Development Corporation, Washington, D.C., USA.
Edgar, T.F., Himmelblau, D.M. and Lasdon, L.S., 2001, Optimization of Chemical Processes, McGraw-Hill, New York.
Higle, J.L. and Sen, S., 1996, Stochastic Decomposition, Kluwer Academic Publishers.
Kalgnanam, J.R. and Diwekar, U.M., 1997, An efficient sampling technique for off-line quality control, Technometrics, 39(3), 308.
Rico-Ramirez, V., 2002, Two-Stage Stochastic Linear Programming: A Tutorial, SIAG/OPT Views and News, 13(1), 8-14.
The Effect of Algebraic Equations on the Stability of Process Systems Modelled by Differential Algebraic Equations*
B. Pongracz, G. Szederkenyi, K.M. Hangos
Systems and Control Laboratory, Computer and Automation Research Institute HAS, H-1518 Budapest, P.O. Box 63, Hungary; Dept. of Computer Science, University of Veszprem, Veszprem, Hungary; e-mail: [email protected]
Abstract The effect of the algebraic constitutive equations on local stability of lumped process models is investigated in this paper using local linearization and eigenvalue checking. Case studies are used to systematically show the influence of algebraic equations on the open loop local stability of process systems using illustrative examples of a continuous fermentation process model and a countercurrent heat exchanger.
1. Introduction
Lumped dynamic process systems are known to be modelled by differential and algebraic equations (DAEs). The differential equations originate from conservation balances for the extensive conserved quantities, while the algebraic constitutive equations describing physico-chemical properties, equations of state, reaction rates and intensive-extensive relationships complete the model (Hangos and Cameron 2001). The general form of DAE process models consists of an input-affine differential part, and the algebraic equations are given in an implicit form:

\frac{dx}{dt} = f(x,z) + \sum_{i=1}^{p} g_i(x,z)\,u_i   (1)

0 = h(x,z)   (2)
where x is the state vector, u = [u_1 \ldots u_p]^T is the vector of manipulable control inputs u_i, and z is the vector of algebraic variables. Note that the control inputs only occur in the differential part of the model. Dynamic nonlinear analysis techniques (Isidori 1995) are not directly applicable to DAE models; they should first be transformed into nonlinear input-affine state-space model form, possibly by substituting the algebraic equations into the differential ones. There are two possible approaches to nonlinear stability analysis: Lyapunov's direct method (using an appropriate Lyapunov function candidate) or local asymptotic stability analysis using the linearized system model. (* Extended material of the paper is available at http://daedalus.scl.sztaki.hu)
In this paper, only the latter will be considered, for the purpose of showing the influence of algebraic equations on the open loop stability of process systems using illustrative examples of a continuous fermentation process model and a countercurrent heat exchanger. Special emphasis is put on the effect on local stability of the different mechanisms, such as convection, transfer and reaction, occurring in lumped parameter process systems.
2. Local Stability Analysis of Lumped Process Models
This section contains the basic notions and techniques which are used for the local stability analysis of lumped process models.
2.1. The structure of nonlinear DAE process models
The structure of lumped process models depends both on the mechanisms taking place in the system and on the choice of input variables. Two practically important cases are considered.
1. Inlet intensive potential variables as inputs. If the control inputs are chosen to be the intensive potential variables at the inlets, then the differential equations (1) of the above general DAE process models take the following special form (Hangos et al. 2000):

\dot{x} = (A_{trans} + B_{outconv})\,x + Q_S(x,z) + B_{inconv}\,u   (3)

where A_{trans} is a constant matrix originating from the transfer, the coefficient matrices B_{outconv} and B_{inconv} are constant matrices originating from the convective terms, and Q_S is a smooth nonlinear function representing the source terms.
2. Flowrates as input variables. If the flowrates of the convective flows are chosen to be the input variables, then the differential (conservation) equations take the following special form:

\dot{x} = A_{trans}\,x + Q_S(x,z) + \sum_{i=1}^{p} Q_{conv,i}(x,z)\,u_i   (4)
where A_{trans} is a constant matrix term, while the nonlinear smooth functions Q_{conv,i} and Q_S originate from the convective terms and the source terms, respectively. Under the assumption that the physico-chemical properties are constant and the specifications result in an index 1 model, the algebraic equations are always substitutable into (1).
2.2. Open loop local stability analysis of DAE models
For the purpose of stability analysis, we need to linearize the DAE model around a steady state operating point [x^* \; z^*]^T, which in the case of the general model (1-2) gives the following form:

\frac{d\bar{x}}{dt} = \left.\frac{\partial f}{\partial x}\right|_{(x^*,z^*)} \bar{x} + \left.\frac{\partial f}{\partial z}\right|_{(x^*,z^*)} \bar{z} + \left[\,g_1(x^*,z^*) \;\; g_2(x^*,z^*) \;\ldots\; g_p(x^*,z^*)\,\right] \bar{u}   (5)

0 = \left.\frac{\partial h}{\partial x}\right|_{(x^*,z^*)} \bar{x} + \left.\frac{\partial h}{\partial z}\right|_{(x^*,z^*)} \bar{z}   (6)

for given operating point values of the input variables u_i^* (i = 1, \ldots, p), and with the centered variables \bar{x} = x - x^*, \bar{z} = z - z^* and \bar{u} = u - u^*.
If \left.\frac{\partial h}{\partial z}\right|_{(x^*,z^*)} is invertible (which is equivalent to the model having a differential index equal to one), the vector of centered algebraic variables \bar{z} can be explicitly expressed in terms of the state variables \bar{x}, yielding a purely differential representation:

\frac{d\bar{x}}{dt} = \left.\left[\frac{\partial f}{\partial x} - \frac{\partial f}{\partial z}\left(\frac{\partial h}{\partial z}\right)^{-1}\frac{\partial h}{\partial x}\right]\right|_{(x^*,z^*)} \bar{x} + \left[\,g_1(x^*,z^*) \;\ldots\; g_p(x^*,z^*)\,\right] \bar{u}   (7)
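Numerically, the reduction in the last expression is just a Schur complement of the DAE Jacobian; a small sketch (assuming the four Jacobian blocks are available as arrays):

```python
import numpy as np

def reduced_state_matrix(fx, fz, hx, hz):
    """State matrix of the purely differential representation:
    A = fx - fz * inv(hz) * hx, evaluated at (x*, z*).
    hz must be nonsingular, i.e. the DAE has differential index 1."""
    return fx - fz @ np.linalg.solve(hz, hx)

def is_locally_asymptotically_stable(A):
    """Local asymptotic stability <=> all eigenvalues of A lie in the
    open left half plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0.0))
```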
The operating point(s) [x^* \; z^*]^T can be determined for prescribed input values u^* by solving (1-2) with \dot{x} = 0, which means the solution of an algebraic system of equations. A necessary condition for the solvability of this system of equations is that the number of differential (algebraic) equations equals the number of differential (algebraic) variables (the degree of freedom equals zero), and that the original DAE system has differential index 1.
2.3. Mechanism-wide local stability analysis of DAE process models
We investigate the effect of the mechanisms (transfer, convection, reaction) on local stability, using the fact that both (3) and (4) are broken down into additive terms corresponding to these mechanisms. Earlier results show that transfer is a stabilizing term, because the eigenvalues of the matrix A_{trans} are in the open left half plane (Hangos and Perkins 1997), and in the case of constant mass holdups in each balance volume, Kirchhoff convection matrices ensure that convection may also be a stabilizing term. Further mechanism-wide stability considerations for the locally linearized models in the above two input variable cases are as follows.
1. Inlet intensive potential variables as inputs. The linearized model of (3) with the algebraic dependence (2) is of the following form:

\frac{d\bar{x}}{dt} = \left.\left[A_{trans} + B_{outconv} + \frac{\partial Q_S}{\partial x} - \frac{\partial Q_S}{\partial z}\left(\frac{\partial h}{\partial z}\right)^{-1}\frac{\partial h}{\partial x}\right]\right|_{(x^*,z^*)} \bar{x} + B_{inconv}\,\bar{u}   (8)
_
/.
4-
(gconvi{x*,Z*)
(dQ^ dx
m"
dQ^fdh\~'dh dz \dz J dx
. . . gconvp{x*,Z*)jU
(9)
The main difference is that convection is affected by the inputs therefore the state matrix of the linearized model contains the transfer and source terms only.
860
3. Case Study 1: A Continuous Fermentation Process A simple continuous fermentation process (for example in (Takamatsu et al. 1975)) is used as a case study with constant liquid volume V. The liquid feed (F), the temperature and all physico-chemical properties are assumed constant. The state variables are the concentration of biomass {X) and of that the substrate (5). The control input of the system is the substrate feed concentration Sp which is an intensive potential at the inlet as described in (3) and there is no transfer term. The reaction rate expression is given by an algebraic equation for the reaction rate r. X
=
-fx + r ^S
S 0 =
(10) ^ ^ Q
1
(11) (12)
v^ /z(X,5)-r
3.1. Stability of the simple fermenter We will show that the stability of the model depends on the reaction kinetics only. The linearized model of the fermenter is a special case of (8) with no transfer effect {Atrans = 0) in the following form: X
-i
-0
0
dr I 1 dr I
Ml 1)
Y dx\
Y ds\*
axU
+
X S
J/
0
z
SF
(13)
V
The state matrix A of the linearized model consists of the sum of the diagonal output convection term (Boutconv) and the reaction term (Asource), where only the source term depends on the steady state. Since ^ is a matrix polynomial of the source term (A = —y{Asource)^ + {^sourceY) and the linearized reaction term is singular because there is a single reaction term, the eigenvalues of A can be computed according to (Gantmacher 1959): dr F F F X{A)i = - —+0= - —, X{A)2 = -y-^trace{Asource)\^ = dX ^
^dr_ 'VdS
(14)
It leads to the stability condition dr 'dX
]_dr_
'Yds
V
(15)
3.2. Stability of the simple fermenter with different reaction kinetics With five different reaction kinetic expressions (// functions), the model exhibits different stability properties. Investigation is performed by eigenvalue checking of the linearized models at the operating point(s) in the following cases. 1. Constant characteristics fi = K results in a linear time invariant (LTI) model which is globally asymptotically stable. This case is the basis of all the following models, containing only the effect of the differential variables. 2. The linear reaction rate /x = Kx gives also an LTI model with the operating point of biomass wash-out, which is stable if K < — v 3. The simplest nonlinear, a bi-linear reaction rate /x = KSX causes two operating points: a wash-out point and an other one.
861 Table 1. The effect of reaction kinetics Model type linear time invariant linear time invariant nonlinear input affme with operating points (1),(2)
Reaction kinetics r= K r = KX r = KSX
Pmflg^' - X ki+S + k2S'^
with operating points (1),(2) nonlinear input affme with operating points (1),(2),(3)
Stable if unconditionally K<^
/IN ^ i p M m a x
nonlinear input affine
r=E^L^X r —
Eigenvalues* F F -yyF u- 17i" (- 1V) '- ^ ^ , -5 V ;,K-f (2)-|:, -Sl,K+^ (2)-^, (l)-f, (2)-f, (3)-f,
\M ALI \L2 AL3
F
( l ) - f c 7 + 5 | r < T7 (2) A M < 0 (1)ALI < 0
(2) \L2 < 0 (3) A L 3 < 0
* where 5 J, is the value of SF at the operating point. ^ = ,
2fc2F t^maxSl,
jr
.
^ - ^ ^ ' ^M = fc./.^axV^ ^ ' ( R - S * ) ( f e 2 F ( F - M m a x V)/?+|x2^„^ v 2 _2F2fci fc2-S^tmax V F + F ^ )
Kl+S^ +
fc2gB'
A^maxKlV''
4. With the monotonous nonlinear characteristic r = \mu_{max} S X/(k_1 + S) we similarly get two operating points.
5. A qualitatively different, nonlinear non-monotonous reaction rate function is r = \mu_{max} S X/(k_1 + S + k_2 S^2), which induces two real operating points (apart from the wash-out point). These three points have the usual stability pattern (two stable and the other unstable). This case indicates that it is the lack of monotonicity which drives the stability pattern and can result in multiple real operating points. The local stability properties of the models with the different reaction kinetics are summarized in Table 1. As an important conclusion, the convective term alone is stable independently of the steady state; moreover, it may stabilize the effect of the source term.
4. Case Study 2: A Cascade of Heat Exchangers
A countercurrent heat exchanger is considered in this section; it is modelled as a cascade of K simple heat exchanger cells, leading to a lumped parameter system. Constant volumes and physico-chemical properties are assumed in every balance volume. The volumetric flowrates of the hot and cold liquid streams are v_h and v_c. The dynamics of the system is described by the intensive form of the energy balance equations of both sides for every cell (T_{h_k} and T_{c_k}, k = 1 \ldots K), and the algebraic variables (Z_k, k = 1 \ldots K) describe the transfer effect:

\dot{T}_{c_k} = \frac{v_c}{V_c}\left(T_{c_{k+1}} - T_{c_k}\right) + \frac{Z_k}{c_{pc}\,\rho_c\,V_c}   (16)

\dot{T}_{h_k} = \frac{v_h}{V_h}\left(T_{h_{k-1}} - T_{h_k}\right) - \frac{Z_k}{c_{ph}\,\rho_h\,V_h}   (17)

0 = UA\left(T_{h_k} - T_{c_k}\right) - Z_k   (18)
The potential input variables of the system are the volumetric flowrates (v_h and v_c) and the inlet temperatures (T_{h_0} and T_{c_{K+1}}).
4.1. Stability of the heat exchanger model
It is well known that in the case of constant physico-chemical properties, pressure and mass holdups in each balance volume, the convection term is stable in the asymptotic sense, and the transfer is also stable in the Lyapunov sense (Hangos and Perkins 1997).
Two cases will be considered according to the input specifications.
1. If the input specification contains the flowrates, u = [v_c \; v_h]^T, and the inlet temperatures are constants, then the resulting model is bilinear in the input term. The state term is linear and contains the effect of the transfer only, therefore A = A_{trans}. This matrix is block diagonal, consisting of identical 2 x 2 diagonal blocks A_{Dk}. These blocks are singular with rank(A_{Dk}) = 1, which determines the eigenvalues of A:

\lambda(A_{Dk})_1 = 0, \qquad \lambda(A_{Dk})_2 = \mathrm{trace}(A_{Dk}) = -\left(\frac{UA}{c_{pc}\rho_c V_c} + \frac{UA}{c_{ph}\rho_h V_h}\right) < 0   (19)
therefore the system is globally on the boundary of stability.
2. With an input specification containing the inlet temperatures, u = [T_{c_{K+1}} \; T_{h_0}]^T, and constant flowrates, the state space model of the system is linear. The state matrix is of the form A = A_{conv} + A_{trans}, with A_{trans} being the same as in the first case. A_{conv} is a Kirchhoff convection matrix (Hangos and Perkins 1997) with its eigenvalues being negative, thus A_{conv} is a negative definite matrix. Since A_{trans} is negative semi-definite, A = A_{trans} + A_{conv} is negative definite, and therefore the system is globally asymptotically stable. In conclusion, convection has a globally stabilizing effect on the system.
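Both claims are easy to verify numerically by assembling the two matrices for a small cascade. The sketch below uses made-up parameter values; the indexing convention (cold temperatures at even positions, hot at odd) is ours.

```python
import numpy as np

K = 4                                   # number of cells
UA, cpr_c, cpr_h = 5.0, 20.0, 25.0      # UA and cp*rho*V per side (made up)
vc_Vc, vh_Vh = 0.3, 0.4                 # v/V ratios of the two streams

# Transfer part: block diagonal with singular 2x2 blocks whose
# eigenvalues are 0 and -(UA/cpr_c + UA/cpr_h), cf. Eq. (19).
D = np.array([[-UA / cpr_c,  UA / cpr_c],
              [ UA / cpr_h, -UA / cpr_h]])
A_trans = np.kron(np.eye(K), D)

# Convection part: a Kirchhoff-type matrix; the cold stream couples
# cell k to cell k+1 and the hot stream couples cell k+1 to cell k.
A_conv = np.zeros((2 * K, 2 * K))
for k in range(K):
    A_conv[2 * k, 2 * k] = -vc_Vc
    A_conv[2 * k + 1, 2 * k + 1] = -vh_Vh
    if k + 1 < K:
        A_conv[2 * k, 2 * (k + 1)] = vc_Vc           # cold flows K -> 1
        A_conv[2 * (k + 1) + 1, 2 * k + 1] = vh_Vh   # hot flows 1 -> K

print(np.linalg.eigvals(A_trans).real.max())           # 0: boundary case
print(np.linalg.eigvals(A_trans + A_conv).real.max())  # < 0: stable
```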
5. Conclusion
The local asymptotic stability of lumped process systems modelled by DAE models is investigated in this paper using local linearization and eigenvalue checking. The effect of the algebraic constitutive equations, which influence the source as well as the transfer term in the differential conservation balances, is considered. Case studies are used to systematically show the influence of algebraic equations on the open loop local stability of process systems, using illustrative examples of a continuous fermentation process model and a countercurrent heat exchanger.
Acknowledgement
This work has been supported by the Hungarian National Research Fund through grant no. T032479, which is gratefully acknowledged.
6. References
Gantmacher, F.R. (1959). The Theory of Matrices. Chelsea Pub. Co., New York.
Hangos, K.M. and Cameron, I.T. (2001). Process Modelling and Model Analysis. Academic Press.
Hangos, K.M. and Perkins, J.D. (1997). Structural stability of process plants. AIChE Journal, 43, 1511-1518.
Hangos, K.M., Bokor, J. and Szederkenyi, G. (2000). Analysis and control of nonlinear process systems. In: Proceedings of the 9th Nordic Process Control Workshop, pp. 89-105.
Isidori, A. (1995). Nonlinear Control Systems. Springer, Berlin.
Takamatsu, T., Hashimoto, I., Shioya, S., Mizuhara, K., Koike, T. and Ohno, H. (1975). Theory and practice of optimal control in continuous fermentation processes. Automatica, 11, 141-148.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
863
The CAPE-OPEN Interface Specification for Reactions Package Michel Pons ATOFINA Centre Technique de Lyon, Chemin de la Lone, BP32, F-69492 PIERRE-BENITE Cedex, France, [email protected]
Abstract Reaction models are necessary in the chemical process industries for a number of purposes which are most often related to the modeling, simulation and control of production processes: process synthesis, process simulation, plant optimization and production control are typically some of the domains concerned with the use of reaction models within unit operation models. To provide interoperability of reaction models within a number of software applications, a specific part of the CAPE-OPEN standard has been devoted to these simulation components called Reactions Packages. CAPEOPEN Reactions Packages are described in terms of the interfaces that they must support, their interaction with a process modelling environment and the functionality they are expected to support. The interfaces defined support both kinetic and electrolyte reactions.
1. Introduction In the process simulation domain considered here, software tools such as PRO/II®, Hysys.Process® or Aspen Plus®, are extremely valuable to the process engineer for a large set of activities: such tools have emerged as standard tools on any process engineer desk and they are largely deployed throughout the process industries. But however sophisticated they may be, each of these tools cannot cover the entire range of tasks a process engineer has to fulfil. So a process engineer may need to use several software tools supplied by different vendors, for example to carry on a single design project. From July 1999 till March 2002, the Global CAPE-OPEN (GCO) consortium (Pons et al., 2001) involved a large set of leading process industry companies, researchers, and software vendors in Europe, Asia, and North America. GCO has been partially funded by the European Commission in the Industrial and Materials Technologies research and technological development programme. The objective of the GCO project was to deliver the power of component software and open standard interfaces in computeraided process engineering. It followed another project partially funded by the European Commission, CAPE-OPEN, which ran from January 1997 till June 1999. The CAPEOPEN project proved the feasibility of the concept of such an interface standard while
864 Global CAPE-OPEN delivered commercial software components implementing the parts of the interface standard delivered at the end of the CAPE-OPEN project. Indeed Global CAPE-OPEN used CAPE-OPEN results and capitalized on further opportunities that can be gained from open standard interfaces for process simulation. A reaction model is a typical component of simulation systems, along with unit operations, thermodynamic servers, physical properties databanks, etc. The reaction model may provide information on how it is built, or can choose not to provide such information but just to provide computation mainly of reaction rates so that these terms may be readily used in mass balances within unit operation models. The same applies for energy terms. By implementing a conmion interface standard, a reaction model component may be deployed on its own, independently of the process simulator it is used in. That develops the reusability of reaction models throughout unit operations and process simulators. A reaction model is contained by a Reactions Package software component exhibiting the specific CAPE-OPEN interfaces discussed here.
2. Development Process of Reactions Package Interface SpeciHcation As for the other pieces of work pertaining to the CAPE-OPEN standard, the development process applied here is based on the UML methodology which has been adopted by a number of organizations involved in software development. Such a methodology, when properly applied, ensures that the resulting specification meets the needs for which it has been developed. A number of user requirements specific to the Reactions Package interface set have been developed. These user requirements are the basis of an analysis phase to ensure a proper design of the interface set. These requirements have been established in a textual format as well as through use cases defining situations in which the interface set is active. Then a list of necessary methods is created for each interface within the Reactions Package set. The sequence of calls of each method is described in sequence diagrams while each method is described in details especially through the arguments it supports. A Reactions Package contains the definition of a set of kinetic reactions for a defined set of compounds and phases. It allows a client to request the calculation of Reaction Properties, for example the reaction rate. A Reactions Package may allow a client to reconfigure the reactions that it defines, for example to change activation energies. Equilibrium Reactions in electrolyte mixtures are supported by combining the concepts of the Reactions Package and of the CAPE-OPEN Thermodynamic Property Package. The resulting Electrolyte Property Package supports both the Property Package and the Reactions Package interfaces. The combined set of interfaces allows a client to request the calculation of equilibrium reaction properties and thermodynamic property calculations from a single package. This is the mechanism used to support Electrolyte mixtures within the CAPE-OPEN standard. The development is the result of a collaborative work between a number of organisations such as Aspen Technology Ltd, Hyprotech Europe SEL and Denmark Technical University. The Reactions Package interface specification (Rodriguez et al., 2002) is available free of charge from the CO-LaN web site (CO-LaN, 2002).
865 2.1. Textual user requirements The standard must allow creating a Reactions Package software component that only contains information relevant for the reaction phenomena considered. Thus, an equilibrium reaction or a set of equilibrium reactions grouped in a Reactions Package does not necessarily need to know how to calculate the physical properties of the mixture upon which the reaction will take place. It follows that a Reactions Package has to be able to interact with a Property Package that will provide it with the necessary physical properties if required. That defines the scope of work. Once a Reactions Package component has been installed on a computer, it will be available to a CAPE-OPEN compliant Process Modelling Environment (PME) as any other CAPE-OPEN component. A Reactions Package has to be selectable among a list of possible Reactions Packages presented to the user by a PME. Upon selection of the Reactions Package, the PME will be responsible for creating the Reactions Package and making it available for the simulation. That defines the usage. Two scenarios are envisioned for the creation / selection of a Reactions Package. These are: • A Reactions Package belongs to a larger Reactions System, and it has been created as a specialisation of the Reactions System for a particular process. In this scenario the user will launch the Reactions System, and will configure a reaction set using the native Reactions System mechanism. Upon completion the Reactions System will offer the user the possibility of exporting the reaction set as a CAPE-OPEN component, i.e. a Reactions Package component. • A Reactions Package has been created for a specific process; i.e. does not belong to any Reactions System. In this case two sub cases can be distinguished: o The Reactions Package is pre-configured, i.e. it handles a specific set of reactions with a specific set of products and reactants. Thus, when it is selected, it is already ready to be used. o The Reactions Package allows re-configuration, i.e. it can have a default configuration or not, but in either case, the Reactions Package allows the user to specify components participating in the reaction, reactions taking place in the reaction set, and the various parameters involved in the reaction rate expressions. The Reactions Package behaves then as a small Reactions System. It is expected that when a PME loads a Reactions Package, or an Electrolyte Property Package, an end-user will be able to view the contents of the Package using either a User Interface supported by the Package or via a generic interface constructed by the PME. A PME is not expected to use the Reactions Package to fill in its own data structures, as if the user had entered the data directly. Filling in PME data structures would require standardisation of the forms of the reaction models supported by Reactions Packages. This level of standardisation is deliberately not included because the standard needs to be flexible enough to cover any formulation of the well-known reaction models as well as custom reaction models.
866 2.2. Use cases Use Cases describe the actions and relationships between a set of actors. Fourteen Use Cases have been defined ranging from the selection of Reactions Packages, to obtaining the list of Reactions Packages available, to saving or restoring a Reactions Package, to configuring a Reactions Package. For example a Reactions Package has to be allowed to expose its parameters (e.g. activation energy, pre-exponential factor, etc) in a way that its clients can edit and modify their values. This functionality can be necessary in particular for a regression package. Some Reactions Packages will not allow this, but it is important that the option to expose these parameters is supported. A Reactions Package has to be able to provide its clients with enough information to solve the mass and energy balances of the physical system in which the Reactions Package is operating. This information includes for the case of kinetic reactions: • Compounds participating in the reaction/reactions as reactants and products • Stoichiometry of the participating reactions (reactants stoichimetric coefficients will be indicated by negative number, products by positive numbers) • Base compound of each reaction • Reaction units (e.g. kgmole/h-kg cat) • Phase to which the reaction rate expressions will be applied • Reaction rates for the various participating reactions at a given set of conditions (e.g. T, P and z) • Heat of reaction rates for the various participating reactions at a given set of conditions (e.g. T, P and z) • Derivatives of the reaction rates with respect to temperature and composition (e.g. expressed as mole fractions, fugacities, partial pressures, etc). For the case of equilibrium reaction, the information includes: • Compounds participating in the reaction/reactions as reactants and products • Stoichiometry of the participating reactions (reactants stoichimetric coefficients will indicated by negative number, products by positive numbers) • Chemical equilibrium constant for the various participating reactions at a given set of conditions (e.g. T, P and z) • Heat of reaction rates for the various participating reactions at a given set of conditions (e.g. T, P and z). Sequence diagrams are used to explain the sequence of actions and interactions between actors and the software objects such as interfaces. It is another representation of the Use Cases, showing to any software developer how the different parties involved should react. A full set of sequence diagrams may be found in the Reactions Package interface specification stored on the CO-LaN web site (CO-LaN, 2002). 2.3. Interface diagrams The following interfaces have been defined to support the User Requirements and Use Cases described above. They are applied primarily to two new CAPE-OPEN components: • Reactions Package Manager - Similar in scope to a Thermo System component. A Reactions Package Manager component manages a set of
867
•
Reactions Packages. It may instantiate a Reactions Package and it may optionally allow new Reactions Packages to be created. Reactions Package - A Reactions Package component defines a set of reactions involving a specific set of compounds and phases. The reactions are defined primarily in terms of their stoichiometry. A Reactions Package may allow some parameters of its set of reactions to be configured, it may also allow the structure of the reactions to be changed, but both of these behaviors are optional. o
o
o
o
o
o
o
ICapeReactionsPackageManager - Similar in scope to the ICapeThermoSystem. These interfaces will be implemented by a Reactions Package Manager component, ICapeReactionsRoutine Similar in scope to ICapeThermoPropertyPackage. A software component or a PME that can calculate values of reaction (or reaction related) properties will implement this interface. It may also be implemented by a Physical Property package component that deals with electrolytes, ICapeReactionChemistry - A component or a PME that needs to describe a set of reactions will implement this interface. A set of reactions is described in terms of the compounds that take part in the reactions and the compounds that are produced. For example, in the case of electrolyte systems, salt complexes and ions. In the case of detailed reaction mechanisms, radicals, ICapeReactionProperties Similar in scope to ICapeThermoMaterialObject. A component or a PME that needs to provide access to the properties of a particular reaction will implement this interface, ICapeKineticReactionContext - This interface allows a reaction object to be passed to a component that needs access to the properties of a set of kinetic reactions, ICapeElectrolyteReactionContext - This interface allows a reaction object to be passed to a component that needs access to the properties of a set of equilibrium reactions, ICapeThermoContext - which allows a Material Object to be passed between a PME and the Reactions Package components it is using so that the Reactions Package components can make Physical Property calculation calls.
3. Implementation (Carey, 2002) and (Zitney and Syamlal, 2002) have reported the first commercial implementation of this part of the CAPE-OPEN standard. This implementation will allow the Fluent CFD package to obtain information on reaction sets embedded in a Reactions Package within Aspen Plus®. That will provide, together with the Unit Operation and Thermophysical CAPE-OPEN interface implementations, the interoperability between FLUENTTM and Aspen Plus®. Supported by the U.S. Department of Energy which plans to reduce the time and cost of development of new
868 power plants by using simulation, the project aims at seamlessly linking computer models that represent individual power plant modules such as boilers, gasifiers, fuel cells and turbines.
4. Conclusion Allowing process simulator users to choose from any CO compliant Reactions Package is a major milestone for Computer Aided Process Engineering (CAPE) users. Independent Reactions Package suppliers can now make their reaction models available to CAPE users in a plug and play environment, and companies can use their proprietary reaction models in the same fashion. A process model result is only as good as the individual models and the data it uses, so having the Reactions Package CO standard is a critical step forward to the successful implementation of CO principles in the CAPE marketplace.
5. References CAPE-OPEN Laboratories Network, 2002, CO-LaN web site - http://www.colan.org Carey, C , 15 March 2002, Integration of CFD and Process Simulation in the Vision 21 Progranmie, IChemE CAPE Subject Group meeting "CFD - Tool or Toy? The Role of CFD in Computer-Aided Process Engineering", Cranfield, UK. Pons, M., Braunschweig, B., Irons, K., Gani, R., Mauer, P., Banks, P., Roux, P., Mathisen, K.W., 2001, Global CAPE-OPEN project results to date, 6"^ World Congress of Chemical Engineering, Melbourne, Australia, (September 23-27, 2001). Rodriguez, J.C., Pinol, D., Forcadell, F., Sama, S., Gani, R., Halloran, M., March 2002, Work Package 2, Open Interface Specification for New Modules, T2.1+2 Reactions Interface Specification. Zitney, S., Syamlal, M., 2002, European Symposium on Computer Aided Process Engineering - 12, J. Griedvink and J. van Schijndel (Editors), Elsevier Science B.B., pp 397-402.
6. Acknowledgements Contributions from Juan Carlos Rodriguez, Daniel Pinol, F. Forcadell, Sergi Sama all from Hyprotech Europe SEL, Rafique Gani from Denmark Technical University and Michael Halloran from Aspen Technology Ltd are gratefully acknowledged throughout the development of the Reactions Package interface specification.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
869
Rigorous Optimization of Reactive Distillation in GAMS with the Use of External Functions N. Poth, D. Brusis, J. Stichlmair Lehrstuhl fur Fluidverfahrenstechnik, Technische Universitat Mtinchen, Boltzmannstr. 15, 85747 Garching; email: [email protected]^.tum.de
Abstract A rigorous simulation and optimization of reactive distillation processes usually is based on nonlinear functions for a realistic description of the reaction kinetics and the vapor-liquid-equilibrium. Within GAMS models, this description leads to very complex models that often face convergence problems. By using the new so-called external functions, the situation can be improved by transferring calculation procedures to an external module. In this work, the external functions, written in the programming language Delphi, are used for the rigorous MINLP cost-optimization of a reactive distillation column for the production of methyl acetate. In this example, realistic thermodynamics including the dimerization of the acetic acid in the vapor-phase as well as realistic reaction kinetics are considered. The thermodynamic accuracy of the result is comparable to standard simulation software.
1. General Outline In a reactive distillation column the chemical reaction and the separation by distillation take place simultaneously in a counter current column. This can offer advantages as for instance new product splits, reduced hardware requirement, increased conversion or selectivities, as well as energy savings which has been confirmed by many realized reactive distillation plants of industrial scale. Given these potential benefits, reactive distillation has been subject to a lot of research activities during the last years. However, a serious exploration of the advantages of reactive distillation requires optimization for a rigorous comparison of process alternatives. As demonstrated by former publications, the GAMS modeling system has been successfully used for the MINLP-optimization of single reactive distillation columns (Poth et al., 2001; Jackson and Grossmann 2001). The strong nonlinear functions required for a realistic description of the reaction kinetics and the vapor-liquidequilibrium in these cases lead to very complex GAMS models that may face convergence problems. The latest versions of the GAMS software offer a promising tool for overcoming these difficulties. According to Brusis et al., 2002, it is possible to transfer some calculation procedures to an external module. This external module can be written in another programming language as C, Fortran or Delphi. So, intermediate calculations can be arranged in the external module which delivers back the results for the function values
870 and the derivations to the GAMS program. The interface between the GAMS program and the external module is arranged by a special mapping of the relevant variables and equations that is described below.
2. The Synthesis of Methyl Acetate - Thermodynamic Background The production of methyl acetate is a classical example for the successful application of reactive distillation. The main reaction is: Methanol + Acetic acid <-> Methyl acetate + Water methyl acetate (a)
(1) methyl acetate
reactive azeotrope
reactive distillation lines reactive section water (c) methanol (b)
acetic acid (d)
Figure 1: distillation lines and chemical equilibrium for the methyl acetate system.
methanol
water
Figure 2: column configuration for the methyl acetate production.
The conversion is limited by the chemical equilibrium. The thermodynamic situation for the simultaneous reaction and distillation is qualitatively given in figure 1 showing a mole fraction tetrahedron for this system. Each of the corners represents one pure component. The solid black dot indicates a first minimum-azeotrope. The curved surface illustrates the chemical equilibrium of reaction (1). As the second azeotrope, marked with a circle, lies on that surface, it is a so-called reactive azeotrope. The distillation information is included in the figure as well: along the surface of the chemical equilibrium several reactive distillation lines are shown qualitatively. The arrows point towards the direction of decreasing temperatures. With that thermodynamic information, the reactive distillation process synthesis can be accomplished. According to Stichlmair and Frey, 1999, pure products and complete conversion can be obtained with a separate feeding of the reactants. The heavier boiling reactant, acetic acid, is fed to the top section of the column, whereas the methanol enters at the lower end of the reaction zone. The result is the methyl acetate process design that is shown in figure 2, as reported by Agreda et al., 1990. In comparison with a complex sequential process of reaction and distillation, this simple reactive distillation process design realizes great savings in capital and operational costs. A realistic thermodynamic modeling has to consider the fact that the vapors of carboxylic acids, such as acetic acid, do not fulfill the ideal gas law. A common
871 approach to account for this non-ideal behavior is the equiUbrium formation of an acetic acid dimer. Furthermore, it is assumed that all species can be treated as ideal gases. This case has been studied by Marek, Standart, 1954, and their results are applied here. So, the effect of the dimerization is included in a correction factor for the vapor-liquidequilibrium.
In equation (2) x- is the liquid mole fraction, p^^ the vapor pressure, Yi the liquid activity coefficient, F^ a correction factor for pure components, z, the true vapor mole fraction, and p is the system pressure. F. and the relation between z, and y., the vapor mole fraction without dimerization, are calculated according to Marek, Standart, 1954. The temperature and pressure dependent mole fraction based equilibrium constant for the dimerization can be found in Fredenslund et al. 1977. With a heat of reaction of about - 61 kJ/mole , the vapor phase dimerization affects the enthalpy calculations as well. For the realistic modeling of the reaction kinetics the approach of Popken et al. 2001 is included. They present an adsoprtion-based reaction kinetic model that can be applied to the heterogeneously catalyzed production of Methyl acetate in catalytic packings. In contrast to the authors, the Wilson VLE-model is used throughout the whole calculation.
3. External Functions In the present work, the straightforward calculations for the thermodynamics, for the column sizing, and for the reactions kinetics were transferred from the GAMS program into a dynamic link library generated by Delphi. The interaction of GAMS and the dynamic link library requires a special mapping of the equations and variables. In the following, this is demonstrated for an example calculation of the phase equilibrium. In the GAMS program the vapor-liquid-equilibrium of equation (2) may be defined by:
The coefficients
M„^ contain the thermodynamic information. The first index
n-L.N indicates the number of the theoretical stage, the second index k = l..K the component. The mapping between GAMS and the external module consists of a fixed list of equations and a vector for the exchange variables with a fixed order. For the example of the phase equilibrium, the first K • N equations of the equation list are reserved for the implementation of (3), starting with A^ equations for k = l, then N equations for k=2, etc. Also, the variable vector with the relevant variables jc, y , T , and M may be ordered as the following equation (4) illustrates:
872
^
~^l..N
^l..N,l ^l..N,2 --^L.N.K
y L.N .1 y L.N ,2 -yL.N.K
^ L.N ,1 ^ L.N ,2 " ^ L.N ,K
^ ^
KN
Then, the definition of the external function in the GAMS code is as follows:
n-T„^YX{N +
n^{k-l)-N)-xJ^fXN^K-N^n^{k-l)-N)-yJ
+ {N-\-K'N-\-K-N + n-^{k-l)-N)'M^,
=X=
(5)
n-h{k-l)'N
The right hand side of equation (5) declares the equation number in the list. The left hand side defines the variables' relations. A variable is addressed by the position number in the vector by "(position in the vector) • variable"; so, (N + n + {k -1)- N) is the position of x^ j^ in the vector. During the calculation procedure, GAMS calls the external module by means of the position of an equation in the equation list. Let leg, leg G 1..{K • N ) , be this position for an example GAMS call. From leg, a certain n and k can be determined. With these, the calculation for leg in the external module is realized by assigning a function value. In the actual assignment, the algebraic expression is re-arranged for a final result of zero. For the calculation of (3) with n and k , this could be for example: / = M„,, -ln{Zn.,
hn.,
)MFn.,
)Myn.,
)-ln{p/pl,s
)
(6)
In the same way, the external module has to provide derivations with respect to all variables in the variable vector. Therefore, the derivative of the function expression in (6) has to be calculated for given n and k . For example, only Zn(Y„ k)^^ (6) depends on jc^^ which has the position (N -\-n + {k -1)- N) in the variable vector. So, the derivative assignment for the equation with leg with respect to x^ ^ is d[N + n + {k-l)'N]=dln(}^,)/dx^,
(7)
With this mapping, a fixed relation between the GAMS program and the external module is established. Thus, the nonlinear thermodynamic functions still affect the GAMS program. However, the GAMS model size is reduced significantly because the intermediate variables resulting from a stepwise calculation of the complex properties models are omitted. Also, the derivations can be provided analytically which reduces numerical errors. For more complex external calculations, numerical derivations might work properly as well. Also, with the external equations, the handling of exceptions in case of critical variable values is much simpler.
873
4. Results and Conclusion In a first approach, the use of the external function in GAMS is compared to an Aspen Plus simulation based on the same thermodynamic data. In this simulation for a reactive distillation with fixed stage numbers and feed stages, the acetic acid dimerization is neglected and a simple reaction kinetic model based on the liquid mole fractions is applied for comparison purposes. The results show exactly the same condenser and reboiler heat duties for both calculations. The perfect agreement between these two calculations which is also confirmed by the concentration profiles clearly demonstrates that the use of the external functions in GAMS gives as thermodynamically accurate results as Aspen Plus. Also for the simplified reaction kinetics, figure 3 illustrates the influence of the vaporphase dimerization of acetic acid. In Aspen Plus, this effect is included by specifying the Hayden OConnell option in the vapor-liquid-equilibrium. However, this includes real gas effects as well, which are not included in the compared GAMS model. So, this different data basis is mainly responsible for the small deviations in the results. Neglecting the acetic acid dimerization, the reboiler loads for both calculations are Q = 1597 kW . Comparing this to the results displayed in figure 3 for the same simulations, the enthalpy effect of the dimerization becomes apparent. Finally, figure 4 displays the concentration profiles and the column configuration of a cost optimal column obtained by MINLP-optimization for number of reactive and nonreactive stages. As the MINLP formulation is still part of the GAMS programm, this aspect of the modeling is the same as in Poth et al., 2000. In this calculation, the acetic acid dimerization and the above mentioned realistic reaction kinetics are included. GAMS
• — c N ^ Aspen
Q = -1845kW ' f ^ \
Q = -1843kW
acetic acid,
p = 1 atm
Q = 1841 kW
Figure 3: concentration profiles and column configuration fi}r a comparison of Aspen Plus and GAMS with external fiinctions including acetic acid dimerization. This result clearly demonstrates that the external functions provide a powerful tool for the MINLP optimization with realistic models. Compared to pure GAMS models, the convergence behaviour, especially for the RMINLP, is improved, as is indicated by the drastic reduction of iteration steps and a more stable solution path. However, some
874 convergence problems in the mixed-integer part persist. On the other hand, in spite of less iterations, the external functions sometimes lead to longer overall solution time.
^ ^ . .
Q = -689 kW
stage
acetic acid 10 mole/s 20X p = 1 atm Total annualized costs 106 000 € metlianol 10 mole/s 20X
H
m
Distillate purity 0.99 mole/mole Conversion 99% = 751 kW
^ ^ ' Figure 4: concentration profile and column configuration of a cost optimal reactive distillation column for the production of methyl acetate.
5. Literature Agreda, V.H., Partin, L.R., Heise, W.H., High-Purity Methyl Acetate via Reactive Distillation, Chem. Eng. Prog. 86 (1990) 2,40-46. Brusis, D., Grossmann, I.E., Stichlmair, J., Optimization of a Distillation Column with Using External Functions in GAMS, submitted for publication in 2002. Fredenslund, A., Gmehling, J., Rasmussen, P., Vapor-liquid equilibria using UNIFAC, Elsevier, Amsterdam, 1977, chap. 2. Jackson, J., Grossmann, I.E., A Disjunctive Porgramming Approach for the Optimal Design of Reactive Distillation Columns, Comp. Chem. Eng. 25 (2001), 16611673. Marek, J., Standart, G., Vapor-liquid equlibria in mixtures conatining an associating susbstance - I. Equilibrium relationships for systems with an associating component. Coll. Czech. Chem. Commun., Engl. Edn., 19 (1954), 1074-1084. Poth, N., Frey, Th., Stichlmair, J., MINLP Optimization of Kinetically Controlled Reactive Distillation, 11^*^ Symposium on Computer Aided Process Engineering, Supplementary Proceedings Volume, 2002, 79-84. Popken, T., Steinigeweg, S., Gmehling, J., Synthesis and Hydrolysis of Methyl Acetate by Reactive Distillation Using Structured Catalytic Packings: Experiment and Simulation, Ind. Eng. Chem. Res.. 40 (2001), pp. 1566-1574. Stichlmair, J., Frey, Th., Reactive Distillation Processes, Chem. Eng. Technol. 22 (1999)2,95-103.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
875
Effect of Time-Scale Assumptions on Process Models and Their Reconciliation Heinz A Preisig & Mathieu R Westerweele Dept of Chemical Engineering Norwegian University of Science and Technology N-7268 Trondheim, Norway
Abstract Traditionally, models are generated building on macroscopic field theory substituting as much of the transformations as possible with the aim to generate models of sets of ordinary differential equations when ever possible. Today, computer programs are available solving Differential Algebraic Equations efficiently rendering most of the algebraic substitutions unnecessary. Our Modeller tool builds models on the fundamental principles, namely the conservation of extensive quantities and adds the flows, reactions, state variable transformations and properties as algebraic equations. Unknownflowsand unknown kinetics are substituted with additional information such as equilibria information. We show that the thereby introduced index problem can be resolved through local model reduction. The resulting model is always a DAE of index 1 and guaranteed structurally solvable.
1. Introduction MODELLER is a project in which a computer-aided modelling tool is being developed. The project is overall running for 15 years and is currently in its third generation. Similar project have been reported by Moe (1995), Perkins et al. (1996), Bieszczad (2(X)0), Krogh (1998), Marquardt (1994), all pointing in the same general direction. As part of the project we have developed a view on the various involved objects such that when assembling simulation models we obtain consistent representations that are index 1 DAE systems, which structurally are solvable. Models that are completely defined and have the structural solvability property we call proper (simulation) models and will be introduced first. Secondly we analyse the effect of unknown quantities and define on what alternative information can be added making time-scale assumptions about the parts that are not known from theory or experiments. The combination of the model with the unknown component and the assumption leads invariably to index problems, which is to be resolved in the last step. The analysis will demonstrate that the index problems are always local in the network and can be resolved through locally reduce the state space appropriately yielding structurally consistent and solvable index 1 problems. Examples demonstrate the mechanism of the approach.
876
2. Network Modelling The MODELLER project builds on a network modelling approach with the basic components of the network being a model for the capacity, called system, and models for connections. The capacities represent storage elements for extensive quantities, which, when conserved, are referred to as fundamental extensive quantities, i.e. mass and energy. The connections represent transfer of extensive quantities between capacities and have no capacity, what in thermodynamics is called a thermodynamic wall, i.e.: an idealised object representing the boundary surface between adjacent systems. Transport, modelled by connections, is driven by the difference in the states of the two coupled systems with driving forces being the respective potentials. For example for heat it is the temperature difference, whilst for mass it is either a pressure difference or a difference in the chemical potential (concentration) in the two, coupled phases. All objects are classified. For example, systems fall into the major classes of information processing systems, realizing control and other similar process components and physical systems, representing capacities of extensive quantities. Physical systems, in turn, are classified into dynamic systems being again split into lumped or distributed systems, source systems, sink systems and steady state systems. Source systems are infinitely large or ideally controlled infinitely small capacities for which the state is known and which have only unidirectional outflow connections to the plant. Sinks are similarly defined as infinite capacities but only knowing inflows, whilst steady-state systems are of negligible capacity. This classification is motivated by time-scale considerations. The infinite elements are operating infinitely slowly and are thus constant or infinitely fast when ideally controlled. Steady state systems add an infinite dynamic element to the model often being used to represent phase boundaries or other dynamic components that fall out at the top end of the window of the time-scale being modelled dynamically with the model. Connections are classified according to the type of transfer they represent, such as: mass, heat or work. Cutting these explanations in view of the limited space short and excluding distributed capacities, we define a simple, lumped system by the following components: conservation principle flow reaction — transposition secondary state properties
X = F z, -h R r,,
(1)
h-={gy^y)'^{y,,y,,pj'
ie{o,i},
(2)
t:=t(yj>E-,).
(3)
Is'~ h^h'^'^ys) '
(4)
P;.:=P,-.(y.'Pp.)
•J^{z,r,y,p}.
(5)
The first equation, equation (1) (note the use of the = sign) represents the differential conservation principle, with x being a fundamental extensive quantity, which we call fundamental extensive quantity because it is conserved. The z^, equation (2), represents the flows connections with the system's immediate neighbours, whilst the r^, equation (3), is the reaction or transposition vector describing the transposition of the modelled fundamental extensive quantities in the system s. The matrix F represent the connection
877 matrix. Its elements are taken from the set {-1,1} with -1 indicating a reference direction of an outflow, +1 a reference direction of an inflow. The matrix R represents in the case of reactions the stoichiometric matrix and in more general terms the transposition ratio matrix. Both matrices' rows refer to the fundamental extensive quantity, whilst the columns refer to theflowin thefirstcase and to the extensive quantity being transposed, in the second case. Equation (4) define the secondary states and the last equations, equation (5), define the properties of the respective parts. Notice that all algebraic equations are a function of the system state and properties only with the exception of the connections, which are also a function of the state of the systems they are connected to, here generically labelled with an e for environment (of the system s). The flow equations captures two representations, one for / := 0, which is a transport equations, whilst the case / := 1 is a transformation with S selecting a subset, usually a single element, from the vector y. The selected quantities are always intensive. Obviously, steady-state systems are readily derived from the above equations and represent de facto a steady-state version of the above (x^ := 0). A complete system is described in a condensed representation by stacking the individual parts up and padding of the flow matrix and the transposition matrices with zeros where no flows and no reactions are present: X
=
Ez-fSl,
z
:=
(iy,y)'z(y,P,);
I
:=
r(y,p^),
y
— y(y'^'P^)'
(6) /€{o,i},
(7) (8) (9)
Theflowand transposition matrix are now block matrices with the systemflowand transposition matrices as blocks in the appropriate places. All vectors are stacks of the respective system vector versions. To complete the representation, we add two more equations, namely X t
:=
x^+ / x^/, h := X-(/,£.).
(11) (12)
The first links the differential state to the state through integration and the second allows for defining the initial conditions using secondary state information. Defining the vector v := N y, ^ P. | 7 G {z, r, y, /?} > i and a vector of known constants 9 we are ready for the definition of a proper simulation model, where proper implies structurally solvable. Definition: A simulation model consisting of the equations {(6) (7) (8) (9) (10) (11) (12)} as being proper if the sets of equations v := v(x,9) is block-lower triagonal and with the block being solvable such that the equation system is only a function of the state x and and x^ := x^(y^, 0^) is also block-lower triagonal and solvable for x^ only being a function of the constant vector 9^.
878
3. Not Knowing It All The model being proper yields a structurally solvable index 1 DAE model. Though what if we do not know it all; for example a flow is not known, kinetics are not all known or some properties are missing? Some of it can be handled, but for a price: information must be added in the form of assumptions. There are simple assumptions, such as property is constant, thus not a function of the state. Those are easy to handle and do only remove algebraic complexity and reduce the fidelity of the model at obvious places. The more complex ones are if the lack of information makes it impossible to compute flows or reactions. At this point it is necessary to resort to more restrictive measure and make timescale assumptions. There are three commonly made assumptions, which are (i) Steady state assuming a system to exhibit a very fast dynamic relative to the modelled dynamic window, thus shifting this system out at the top end, the short time scale and assume eventdynamics, (ii) (Phase) equilibrium in which one assumes very fast communication of extensive quantity such that the two coupled systems are in equilibrium with respect to the affected extensive quantity. The most common case is thermal equilibrium and phase equilibria. (iii) (Reaction) equilibrium in which one assumes very fast reactions, such that the reactions are viewed as instantaneous. The first assumption leads to a relation between flows and reaction, whilst the second allows eliminating streams and combine systems with respect to the affected extensive quantity. The third assumption allows elimination of the fast reactions substituting instead a reaction equilibrium. The above-mentioned cases are the most common ones, but the conditions can be relaxed: The base condition is that the assumptions must substitute for the unknown information. Generally one may add equations of the type: 0
=dx —
(13)
s v(v,p,e)
with the S being selection matrices. Adding the assumptions will introduce algebraic loops, which can be detected using bi-partite graph analysis. In the loops there are states, which is the place the loop is resolved: in each loop adding two fundamental states and defining a new fundamental state variable, which is their sum, removes a fundamental state. This new equation is possible because the states are fundamental states for which the conservation and thus the superposition principle applies. The combination of two states is easy to achieve by a matrix operation on the set of differential equations. Let cof := [0,0,..., 1, ...,0] with the 1 at the /^position. The matrix I •= [^]yi would thus be an identity matrix. Defining ^ :=
(of
and
multiplying the set of differential equations with it removes the two time derivatives XQ and Xb and replaces them by their sum Xa-\.b • Adding the algebraic equation Xa-\.b '=Xa+Xb to the algebraic section completes the model reduction procedure. In the process of adding two states together, the streams between the two systems being combined are eliminated, which include the ones that are not known.
879 In procedure for removing a reaction is very similar only that here a null space is generated for the part of the transposition matrix that describes the fast reactions. An example has been discussed in Westerweele et al. (2000), Westerweele (2003).
4. Example: Distillation Column We describe a distillation as two series of lumped systems that are connected in a countercurrent fashion with streams connecting the two parts representing the two moving phases. The model is quite common. For example one can assume two lumps for each tray one for the gas phase and one for the liquid phase the two communicating mass through the phase boundary, represented as a system, and both being connect to their respective neighbours up and down the column of the same phase. We focus here on the mass, only because of the space limitations and simplify the interaction of the column network with the environment assuming that it is only one phase that communicates, say phase a, which would likely be the liquid phase. The network for the column is represented by: n
:= E„,„fi„|„+E„|,n„.|, + F . , p % + Ep|pnp|p+E^I„n^l„ + F^lpnoc,p.
(14)
For clarity we use here the notation . Next we assume that the capacity of the interface is negligible, thus assuming nj := 0. which yields that the flows in each interface are equated: 4x|/ •=fi/jp='-fia|p• The interface systems are removed by multiplying with a matrix W. that has no rows where the interface systems are located. But we do not know the flow anyhow, which asks for a phase equilibrium assumption: Xa •= Kxp • and the elimination of a set of states, say all the ones of phase p achieved by multiplying the ODEs with the respective matrix ^ g . Let g := ^ W. then fin
:= fiF^|„ax|a + fiEp|pnp|p +
fiE^I„n^|a+fiI„l,n„|p.
(15)
Introducing a variable transformation by defining a secondary state mole fraction, assuming in correspondence with the literature that the local hold-ups (ris ~ na) and the total internal flow (n^i^) do not change defining the inverse of the local time constants as V5 := ^5 n7^ then it is easy to derive that the model can be rewritten into: fiX£
=
VafiE„|„X„ + V/fiE^|„X^ + VpfiE„I^X„.
(16)
which is indeed the sought first-order model for the concentration change in the column (Skogestad (1997), Rademaker et al. (1975)).
5. Conclusions We model physical-chemical-biological plants using a network approach reflecting the basic principle of macroscopic thermodynamics. The models build on the fundamental quantities, the quantities that are conserved. All transformations, transfer laws and transposition kinetic laws are added as algebraic parts forming a DAE. The model is always describing a complete system in the sense of a relevant universe, thus including all sources and sinks of fundamental extensive quantities.
880 For solvability, we define a proper simulation model in that the state must be computable through integration, given a proper initialisation and a proper algebraic part, which is local to every system and pair of system in the case of flow descriptions. Assumptions are being added substituting for not known information, such as flows and reactions. Forflowsit will usually be an equilibrium relation or a steady state assumption that provides the required information. In the case of unknown transposition kinetics it is either steady state assumptions or reaction equilibria that must be introduced. Introducing equilibria information results index problems, which can be readily resolved by making use of the fact that the model builds on fundamental extensive quantities. The model is reduced eliminating the unknown flows and reactions by adding systems together in the case of eliminating flows and defining sums of the affected quantities and in the case of reactions it is a null space computation which eliminates the unknown reactions from the set. Substituting the phase and reaction equilibria, respectively completes the model reduction yielding a proper index 1 problem.
6. References Bieszczad, J. (2000), A Framework for the Language and Logic of Computer-Aided Phenomena-Based Process Modeling, PhD thesis, Massachusetts Institute of Technology, Massachusetts, USA. Krogh, J.A. (1998), Generation of problem specific simulation models with an integrated computer aided system, PhD thesis. Technical University of Denmark, Dept of Chem. Eng., Kastrup, Denmark. Marquardt, W. (1994), Towards a process modelling methodology. Technical report, Aachen University of Technology, Aachen, Germany. Moe, H.I. (1995), Dynamic process simulation: studies on modeling and index reduction, PhD thesis, Norwegian Institute of Technology, Trondheim, Norway. Perkins, J.D., Sargent, WH., VazqiEZ-Roman, J. & Cho, J.H. (1996), Computer generation of process models, Comp Chem Eng, 20, 635-639. Rademaker, O.J., Rijnsdorp, J.E. & Maarleveld, A. (1975), Dynamics and control of continuous distillation column^ Elsevier, Amsterdam. Skogestad, S. (1997), Dynamics and control of distillation columns a turorial introduction, Trans IchemE75. Westerweele, M.R. (2003), Five Steps for Building Consistent Dynamic Process Models and their Implementation in the Computer Tool Modeller, PhD thesis, Eindhoven, Institute of Technology, Eindhoven, Netherlands. Westerweele, M.R., Akhssay, M. & Preisig, H.A. (2000), Modelling of systems with equilibrium reactions, MACS Symp on Mathematical Modelling, Vienna, Austria, 697-700.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
881
A Nonequilibrium Model for Three-Phase Distillation in a Packed Column: Modelling and Experiments Jens-Uwe Repke, Olivier Villain, Gtinter Wozny TU Berlin, Institute of Plant and Technology TU Berlin, Sekr. KWT 9, Strasse des 17. Juni 135, 10623 Berlin, Germany email: [email protected]
Abstract A nonequilibrium model (Figure la) is developed for the simulation of a three-phase distillation process in a column equipped with a structured packing. The model is taking into account the mass transfer resistance between all three existing phases. Furthermore the convective and conductive part of the heat transfer rate is calculated, which is crucial for the precision of the model. The development of the model was strongly connected with experimental investigations of a packed column. When a heterogeneous azeotropic mixture is separated by distillation in a packed column, specific steady states can occur, which cannot be described by the use of an equilibrium model (Figure lb). The observed remarkable column behaviour is caused by the fluiddynamic situation of the two immiscible liquid phases on the structured packing surface. The simulation results are in good agreement with the experimental data of the packed column.
t Q
Ull
'.mm„^i
IT
IT t H n (a)
(b)
Figure 1. Structure of the three-phase distillation models: (a) a three-phase nonequilibrium section, (b) a three-phase equilibrium stage.
1. Introduction There are only a few experimental investigations for the analysis of three-phase distillation in packed columns. Most of them deal with random packing and only a very rare number treat structured packing. An overview of the investigations of three-phase
882 distillation in packed columns is given e.g. in (Repke and Wozny, 2002). Keeping this fact in mind, that experimental data for three-phase distillation in packed column is very limited, it is understandable, that no model for the description of distillation of heterogeneous mixtures in packed columns was developed. In the field of three-phase distillation in tray columns different models have been developed in the past and recent years. In general, the models were based on the assumption of equilibrium between all three existing phases, but there are also two proposals, which used a nonequilibium model (Lao and Taylor, 1994; Eckert and Vanek, 1999) to describe a three-phase operated tray. In these two papers (Lao and Taylor, 1994; Eckert and Vanek, 1999) the mass transfer between the vapour and one liquid phase is considered but between some other phases the transfer rates are neglected. The applicability of these models for the calculation of three-phase distillation in packed columns was not considered up to now. To our knowledge, the only description of one gauze packed segment by the rectification of the heterogeneous mixture ethanol-benzene-water, is given by (Saito et al., 1999).
2. Modelling The focus of the investigations is on the theoretical and experimental analysis of the three-phase distillation in a packed column. The project is aimed to the development of a model for the calculation of the three-phase distillation in a packed column. The conventional three-phase equilibrium stage model and a new nonequilibrium model have been developed. The explanation of the new nonequilibrium model and the validation of this nonequilibrium model represent the main topic in the lecture. The principal structure of a three-phase nonequilibrium section is shown in (Figure la.) The mass and energy transfer across all three phases is considered in the developed model. A comprehensive presentation of the complete model is given in (Repke, 2002) and also in (Repke and Wozny, 2000). The Maxwell-Stefan equations are used for the calculation of the mass transfer rates. Modelling the three-phase distillation based on nonequilibrium contains some specific features compared to the normal "two-phase distillation" or the equilibrium model. In the equilibrium model of three-phase distillation only two of the three equilibrium equations are independent. In the nonequilibrium model every phase is balanced separately. Therefore all three equilibrium equations are used in the model for the interfaces. A further characteristic is, although a three-phase problem is existing, that only the mass transfer between two phases has to be calculated at every interfacial area. Additionally, the convective and conductive part of the heat transfer have to be taken into consideration, as the own investigations presented. Often the conductive part is neglected due to the small difference of the temperatures of the phase interface and the bulk phase. For the modelling of the three-phase distillation this simplification is inadmissible. A special problem of nonequilibrium models is the calculation of the transport properties and the interfacial area. In a first step, for the three-phase nonequilibrium model customary methods of two-phase are applied in the simulation program. The interactions between simulation and experiment show the practicability of this
883 procedure. Both models were integrated in a rigorous simulation program which is written in fortran 77.
3. Experiments in a Three-Phase Operated Packed Column On the base of various heterogeneous azeotropic mixtures, as acetone/toluene/water or 1-propanol/l-butanol/water, the distillation behaviour and its modelling in a packed column is discussed. A detailed description of the experimental equipment and the results is presented in (Repke and Wozny, 2002) and in (Repke, 2002), but a brief overview is listed in (Table 1). l-PropanoI(97,2°C) A vapour fraction A liqiuid fraction • liquid fraction in the bottom © azeotropic point —equilibrium model —" nonequiiibrium model (liquid phase)
87,6 °C
1-Butanol (117,7 °C)
92,9 °C
Wasser (100,0 °C)
Figure 2. Composition triangle of three-phase distillation at total reflux for 1propanol/1-butanol/water; in the packed column: experiment vs. simulation. Rectification experiments under total reflux condition show the influence of the special fluiddynamics of heterogeneous mixtures on the separation efficiency of the packed column (Repke and Wozny, 2002; Siegert et al., 2000). On the basis of these experiments a first verification of the two models is carried out. Table 1. Basic information about the packed column. Packing Overall height Effective packed height Internal diameter
Sulzer Optiflow C.36 Approx. 7.5 m Nearly 2.5m 0.7 m
884 Table 2. Operating conditions of three-phase distillation for 1-propanol/lbutanol/water at total reflux. 0.565 P a " ' F-factor y i-propanoi = 0.023 mol/mol Vapour fraction - packed entry y water = 0.75 mol/mol y 1-butanoi = 0.227 mol/mol Pressure drop 0.62 mbar Pressure on the top 1013 mbar In (Figure 2) and (Figure 3) a comparison between the experimental data and the simulation results is shown. The operating conditions are mentioned in (Table 2). The equilibrium model as well as the nonequilibrium model described the experimental data with a good accuracy. As explained before, for the calculation of the transport values as mass transfer coefficients and interfacial areas usually customary "two-phase" methods are used. With this background, the results for the nonequilibrium model are remarkably. On the other hand it was elaborated, that in the case of three-phase distillation only a two-phase problem exists at every interface and the profiles illustrate the acceptability of this assumption. 23
V \*
23 2,0
A
experimental
•^—— equilibrium model
v. A
- - - nonequilibrium model (vapour)
1,8
— — nonequilibrium model (average liquid)
M \
1 1,4
\
•g iS 1,1 a 0,9
"A
An
0,7
A
03 0,2 85
90
95
100
temperature [°C]
Figure 3. Temperature profile of three-phase distillation for 1-propanol/1butanol/water at total reflux in the packed column: experiment vs. simulation.
In addition to the experiments at total reflux, further experiments at finite reflux show the necessity of a nonequilibrium model. (Figure 4) shows the temperature profile of the
885 three-phase distillation of acetone/toluene/water at finite reflux. The operating conditions of the experiment are given in (Table 3).
Figure 4. Temperature profile of three-phase distillation run for acetone/toluene/water at finite reflux in a packed column: experiment vs. simulation.
The observed behaviour of the packed column can only be described and explained by the nonequilibrium model; the equilibrium model fails in this case. The physical reason for the unusual behaviour lies in the volumetric ratio of the two liquid phases in the region of the miscibility gap, together with the fact that the liquid phases can flow separately over the packing surface instead of as an ideally mixed liquid, as on a tray. A very detailed description is given in Repke (2002), where additional heterogeneous mixtures are also considered.
Table 3. Operating conditions of three-phase distillation for acetone/toluene/water at finite reflux.
Reflux ratio: 8
Pressure drop: 0.69 mbar
Pressure at the top: 1018 mbar
Feed stream: 4.36*10-^ kmol/s
Distillate stream: 2.73*10-^ kmol/s
Liquid feed fraction: x(acetone) = 0.406 mol/mol; x(water) = 0.406 mol/mol; x(toluene) = 0.445 mol/mol
4. Conclusion
In this work a nonequilibrium model for the calculation of three-phase distillation in a packed column is presented. The model considers the mass transfer between all three existing phases. A remarkable point is that, by using customary "two-phase" methods, mass transfer coefficients and interfacial areas can be calculated in a first step. The study is strongly influenced by the interaction between the development of the model and the experiments. The validation was carried out on experiments at total and finite reflux in the packed column of the institute. Two different heterogeneous azeotropic mixtures have been separated by distillation. The distillation run at finite reflux showed an unusual behaviour, which originates from the heterogeneity of the liquid phase; in this case a nonequilibrium model is needed. A comparison of the simulation results with the experimental data shows the performance of the developed nonequilibrium model.
5. List of Symbols
a: interfacial area
E: heat transfer rate
F: feed flow rate
i, j: stage/section and component numbers
k: equilibrium ratio
L: liquid phase/flow rate
N: mass transfer rate
Q: heat exchange with the environment
V: vapour phase/flow rate
x: liquid mole fraction
y: vapour mole fraction
z: height of a section
6. References
Eckert, E., Vanek, T., 1999, Application of the Rate-Based Approach to Three-Phase Distillation Columns. Computers Chem. Engng. 23, pp. 331-334.
Lao, M., Taylor, R., 1994, Modelling Mass Transfer in Three-Phase Distillation. Ind. Eng. Chem. Res. 33, pp. 2637-2650.
Repke, J.-U., Wozny, G., 2000, A Nonequilibrium Model for Three-Phase Distillation in a Packed Column. ADCHEM 2000, Pisa, Italy, 14-16 June 2000, pp. 1031-1036.
Repke, J.-U., Wozny, G., 2002, Experimental Investigations of Three-Phase Distillation in a Packed Column. Chem. Eng. Technol. 25, No. 5, pp. 513-519.
Repke, J.-U., 2002, Experimentelle und theoretische Analyse der Dreiphasenrektifikation in Packungs- und Bodenkolonnen. Fortschritt-Bericht VDI, Reihe 3, Nr. 751, VDI-Verlag, Düsseldorf, ISBN 3-18-375103-8.
Saito, N., Abe, Y., Kosuge, H., Asano, K., 1999, Homogeneous and Heterogeneous Distillation of Ethanol-Benzene-Water System by Packed Column with Structured Packing. J. Chem. Eng. Japan 32, No. 5, pp. 670-677.
Siegert, M., Stichlmair, J., Repke, J.-U., Wozny, G., 2000, Heterogeneous Azeotropic Distillation in Packed Columns: Experimental Results. Chem. Eng. Technol. 23, No. 12, pp. 1047-1050.
Connecting Complex Simulations to the Internet: an Example from the Rolling Mill Industry
Dipl.-Ing. S. Roth, Technical University of Berlin
Dr. rer. nat. H.-U. Loffler, Siemens AG I&S IP PEP
Prof. Dr.-Ing. G. Wozny, Technical University of Berlin
Abstract
A simulation service for the steel industry, laid out as a web application, is presented here. The software architecture and the concepts behind the program may serve as an example for other industrial applications to be made accessible to a broader clientele. Making an application available through the internet moves the focus from in-house specialists to clients and also opens the door to a world of plenty: interconnectability, cooperation with other tools and even e-business, to name but a few. This article shows how a formerly stand-alone simulation tool was extended by an internet client and an interface without expensive middleware; the high cost of large implementation teams can also be avoided. The software Hybrex® WEB presented in this article shows how this can be done by using standards like Java servlets, Java beans and CORBA/RMI. Hybrex® WEB was developed by Siemens AG, I&S IP PEP, in close cooperation with the Technical University of Berlin, Institute for Process Dynamics and Control. The software design and the current implementation are shown below. The interoperability of the software is demonstrated by an intelligent agent system using Hybrex® WEB as a simulation service. A general way to evolve stand-alone applications into web solutions is presented. The conclusion gives a short outlook on future enhancements and current developments such as the XML interface.
1. Introduction
In today's industry a lot of isolated software tools are designed for different tasks. Most of them cannot be integrated into other applications or environments. These highly sophisticated solutions are often operable by specialists only. They might even be forgotten, get lost, or simply become unusable as environments and requirements change. Then there is the kind of application that uses so-called "standard solutions": these may be small Excel programs, simulations using algorithmic solution providers like Matlab®, or other business applications known as a standard for a particular branch of business. Taking the chemical industry as an example, there are the different applications of Aspen®, which have become a quasi-standard in recent years. Aspen Plus®, for example, brings along a variety of calculation models, equations, material data etc.
These models are largely agreed upon by experts worldwide. Customized models and equations may be entered through the Aspen Plus® interface, which can integrate Excel code as well as legacy Fortran algorithms.
1.1. Starting basis: a closer look at software tools of the steel industry
Calculation models in the steel industry look different. There is no widely agreed-upon software tool like Aspen®, and the technique of flowsheet simulation is not a standard in this branch either. Simulation tools are often stand-alone solutions intended for specific problems. The algorithms developed for process automation are also used for simulation; this makes simulation results more reliable and comparable to actual values of the mill. In the steel industry the most popular programming languages are C and C++, owing to the need for fast real-time algorithms in process control. Hybrex® (Peuker et al., 1999) is a tool that applies the technique of flowsheet simulation to rolling mills. It was developed by Siemens AG and the Technical University of Berlin in recent years and is suitable for layout, plant optimization and process analysis. It connects several real-plant calculation models such as roll force calculation, microstructure, temperature spreading etc. The software to be extended by a new, modern internet application was a Windows C++ program using Microsoft Foundation Classes (MFC), binding several calculation libraries via COM and addressing several Access databases.
1.2. The idea is born
As Hybrex® requires a certain operating system (Windows NT) and Microsoft's Office suite, it is difficult to supply a potential customer with a demo version running on an arbitrary client. The material database used by Hybrex® is also not intended to be handed out, because it holds a lot of Siemens know-how and research data of the last two decades: flow curves, material values and other valuable, strictly confidential data that Siemens would not want to give outside. Solutions like dongles for the software cause logistic problems. Encryption of the material database solved this issue, but is perhaps not safe for all time. Thus the idea of a web-based solution was born: leave confidential data in-house and offer client-role-based access to the simulation through a comfortable web interface. It was in the late nineties that companies discovered the internet as a potential business area of the future: e-solutions promised to be the key to future markets. An interface of Hybrex® accessible over the internet could be a basis for e-business.
2. The Application Hybrex® WEB
From 2000 until September 2002 the internet solution Hybrex® WEB was designed and implemented.
2.1. Components and design
The application comprises four parts:
1. The original C++ Hybrex® calculation core, extended by a CORBA interface.
2. The "simulation request broker": a sort of middleware server written in Java, whose main tasks are to synchronize concurrent client calls, transfer configuration data from the client to the Hybrex® core and give back result structures after calculation. It has a CORBA and an RMI interface to communicate data.
3. A set of small Java servlets handling login procedures and arranging the communication between client and broker. They offer a tunnel through the firewall.
4. The web client, a "rich client": a Java program offering a comfortable graphical user interface for parameterizing the mill. This program can be executed as a Java applet in a standard browser or as a Java Web Start application (see "Java Web Start", 2002).
[Figure 1 labels: Web-Client X, Web-Client Y, other applications and intelligent agent software, connected via HTTP.]
Figure 1: Architecture of Hybrex WEB.
Figure 1 shows how the different parts work together: the broker and the Hybrex® core run inside the Siemens network, and external applications like the web client connect to the broker through servlets. The communication between the broker, written in Java, and the Hybrex® C++ simulation is established via CORBA. This allows the use of several Hybrex® instances at the same time: the broker is able to spread multiple requests from the internet over several Hybrex® instances, which is important for the scalability of the system. The simulation request broker may also be contacted directly by other Java applications using the standard RMI interface shipped with all Java versions; in this way the web server's servlets communicate with the broker very easily. Applications outside the Siemens network have to pass a firewall. As neither RMI nor CORBA is allowed to pass a firewall directly, simple servlets were created to handle that problem. In this manner applications such as the web client communicate with Hybrex® simply using standard HTTP (or HTTPS in later versions) and cross firewalls without any problems.
2.2. Separate development of the data model
The whole design of Hybrex® WEB and its components depends heavily on the strictly separate development of the object model from the rest of the application. This is a widely appreciated design pattern found frequently in the object-oriented world
(Yourdon and Coad, 1991). It allows the separation of model and view in the web client and simplifies the communication between the different components of Hybrex® WEB. The whole object model of a rolling mill is implemented as a stand-alone Java bean. This bean is used by the web client for the parameterization of the plant and to read the calculated values for graphical presentation in tables and diagrams. The simulation request broker uses the same object model and extends it by CORBA functionality. Any other (Java) application can directly use Hybrex® for calculation by addressing one function that expects the object model and returns the model with the calculated result values. Thus, it only needs to describe a mill using the model bean and trigger a calculation on the server.
2.3. Extensibility: an example with an intelligent agent software
The focus during the development of Hybrex® WEB was always on building an extensible environment, not just on one application like the web client. The purpose of making Hybrex® accessible via the internet was to make its capabilities available to more applications: to make the step from a proprietary single solution towards a worldwide available simulation service. Another application making use of Hybrex® was developed at Siemens: a so-called "reflexive diagnostic unit" (RDU). It is designed as an intelligent agent in an intelligent agent environment written in Java (Heimke, 2001; Dirscherl, 2002). This particular RDU analyses the log files of the Siemens MicroStructureMonitor (Loffler and Doll, 2001). This monitor produces large amounts of log files online, recording microstructure values calculated from real plant data, and writes warnings into the log files if certain values exceed predefined thresholds. Multiple warnings may mean information overload for a human, but not for an RDU. This intelligent agent scans the log files, creates statistics and is able to react autonomously to certain events. The reaction can be a warning email to certain personnel, a recalculation of values with other software tools and, thanks to the web interface, a recalculation of some values using the Hybrex® calculation service. The agent software takes the same model bean, parameterizes a plant corresponding to the real plant where the MicroStructureMonitor is running and then feeds this model to the simulation request broker. Similar requests can be sent to the server at any time. An example test has been run at Siemens and successfully used Hybrex® WEB for the recalculation of different values.
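The interplay described above, in which a client fills the shared model bean and hands it to the broker while the broker synchronizes access to the calculation cores, can be pictured with a minimal sketch. The real system is written in Java and uses CORBA/RMI; the Python below, with all names invented, only illustrates the queueing and dispatch idea.

```python
import queue
import threading

class SimulationRequestBroker:
    """Toy broker: spreads concurrent simulation requests
    over a pool of calculation-core instances."""

    def __init__(self, cores):
        self.idle_cores = queue.Queue()
        for core in cores:                # pool of core instances
            self.idle_cores.put(core)

    def calculate(self, mill_model):
        core = self.idle_cores.get()      # blocks until a core is free
        try:
            return core(mill_model)       # run the simulation
        finally:
            self.idle_cores.put(core)     # return the core to the pool

def fake_core(mill_model):
    # Hypothetical stand-in for one calculation-core instance.
    return {"roll_force": 42.0, "stands": mill_model["stands"]}

broker = SimulationRequestBroker([fake_core, fake_core])

def client(name):
    result = broker.calculate({"stands": 5, "client": name})
    print(name, result["roll_force"])

threads = [threading.Thread(target=client, args=(f"client-{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```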
3. Conclusion
During two years of development an impressive simulation service has been produced: extensible, open to use, and applicable to a diversity of problems and tasks. A powerful web client enables users anywhere in the world to parameterize whole plants, calculate them with the Siemens simulation service and plot the results. At the moment this service is restricted to the Siemens intranet due to the company's privacy policy. Figure 2 shows the web client displaying roll force calculation results for a five-stand mill.
Figure 2: The Hybrex WEB client displaying roll force calculation results.
3.1. Open source versus "standard office components"
Every step of development was accompanied by a search for open standards, in order to evaluate their usability for future projects. Instead of developing proprietary protocols or using COM, CORBA was used for the communication between the C++ Hybrex® core and the broker. Besides several advantages, CORBA still lacked some desired features: it is laborious to implement and lacks object-oriented features like inheritance. On the Java side, quite a lot of useful open source tools have been created in recent years. One of them is the JFreeChart library (JFreeChart, 2002): a tool which is very easy to use for drawing all kinds of charts, curves and diagrams (see also Figure 2).
3.2. Current and future development: a focus on XML
The use of one object model for several different applications has already proved to be most valuable. Yet the solution as a Java bean is restricted to Java applications only. It would be useful to open the simulation service Hybrex® WEB to other potential clients as well. XML has gained more and more importance during the last years. This W3C standard (W3C, 2002) is easy to use and serves as the future language for web services. By describing only the contents of the data, several different approaches to visualizing it remain possible; this is equivalent to the separation of model and view discussed earlier. The same data can be converted into other proprietary formats for other tools, which will always be found in different industry sectors. It is a matter of fact that only very few programs implement standards like STEP (Grabowski et al., 1989); they rather follow their own rules. The creation of an XML interface for Hybrex® WEB offers a lot of advantages:
• The whole object model can be serialized into a human-readable, W3C-conform document, which can in turn be read and written by Hybrex® WEB.
• There are a lot of supporting tools for the XML standard, especially for Java. Comfortable parsers make data exchange easy.
• Not only Java applications can use Hybrex® WEB with XML as the exchange format. In turn, Hybrex® WEB could address further tools and applications by simply converting the XML data structure into other formats using transformation stylesheets (XSLT; Burke, 2002). XSLT provides uniform methods of conversion into other data formats without the need to code highly specific parsers; a sketch of such a conversion is given below.
In combination with XML, Hybrex® WEB will be an adequate web service ready for the future. It has been proven that a formerly proprietary solution can be turned into a widely usable application service by using standards and benefiting from modern object-oriented design, languages and implementation tools.
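As an illustration of the stylesheet-based conversion, the following sketch applies an XSLT transformation to a toy mill document with the third-party lxml library. The XML vocabulary is invented for the example and is not the Hybrex® WEB schema, whose implementation is in Java.

```python
from lxml import etree  # third-party XML/XSLT library

# A hypothetical, minimal mill description.
mill = etree.XML("""
<mill>
  <stand id="1" rollForce="12.5"/>
  <stand id="2" rollForce="11.8"/>
</mill>
""")

# Stylesheet converting the XML into a plain-text report.
stylesheet = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/mill">
    <xsl:for-each select="stand">stand <xsl:value-of select="@id"/>: <xsl:value-of select="@rollForce"/> MN
</xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
print(str(transform(mill)))
```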
4. References
Burke, E.M., 2002, Java und XSLT. O'Reilly, Köln.
Dirscherl, M., 2002, Die Kopplung von Produktionsanlage und Simulation über mobile Agentensysteme. Diploma thesis, Friedrich-Alexander-University Erlangen-Nürnberg.
Grabowski, H., Anderl, R., Schilli, B., Schmitt, M., 1989, STEP - Entwicklung einer Schnittstelle zum Produktdatenaustausch. VDI-Zeitung 131-12, pp. 84-96.
Heimke, T., 2001, Eine neue mobile Diagnose-Software ist hochdynamisch im Einsatz. Stahlmarkt 6, pp. 50-52.
Java Web Start, 2002, http://java.sun.com/products/javawebstart.
JFreeChart, 2002, http://www.object-refinery.com/jfreechart.
Loffler, H.U., Doll, R., 2001, Siemens Microstructure Monitor: Commercial Application of Microstructure Modelling in Hot Strip Mills. In: The First Joint International Conference on Recrystallization and Grain Growth (ReX & GG), Aachen, pp. 1001-1006.
Peuker, T., Sorgel, G., Muller, H., 1999, Flowsheet simulation - a new tool for rolling mill planning. Metallurgical Plant Technology, MPT, 3/1999, pp. 126-134.
Yourdon, E., Coad, P., 1991, Object Oriented Analysis. Yourdon Press Computing Series.
W3C, 2002, http://www.w3c.org.
Non Equilibrium Model and Experimental Validation for Reactive Distillation
D. Rouzineau, M. Meyer, M. Prevost
Laboratoire de Genie Chimique, ENSIACET, 118 Route de Narbonne, 31077 Toulouse Cedex 4, France
e-mail: [email protected]
Abstract
Firstly, a nonequilibrium model is developed in order to simulate non-ideal multicomponent reactive separation processes. This model is characterised by a mass and energy transfer description completed by hydrodynamic considerations based on the film theory. The Maxwell-Stefan approach is used for the description of mass transfer without restrictive hypotheses; moreover, there are no restrictive hypotheses about the type and localisation of the chemical reactions. Secondly, the numerical analysis of this model results in a robust and stable solution strategy, with special attention to the differentiation index and the coherence of the initialisation. Thirdly, an experimental apparatus is set up in order to validate the numerical results. It represents a section of a packed distillation column fed by two fully controlled flows. The experiments were performed for the homogeneously catalysed esterification of acetic acid with methanol to produce methyl acetate and water. Several runs were carried out, varying the flow rates and compositions of the feeds as well as the concentration of catalyst. For each one, the simulation results are in good agreement with the vapour composition and the liquid temperature profile, without any parameter adjustment. In addition, the need to take the reaction contribution in the diffusional layers into account is clearly demonstrated.
1. Introduction
Reactive distillation processes, which combine reaction and gas-liquid separation, are of increasing interest for scientific investigation and industrial application. Nowadays, simulation and design of multicomponent reactive distillation are carried out using the nonequilibrium stage model (NEQ model), owing to the limitations of conventional equilibrium-stage efficiency calculations in the equilibrium model (Lee & Dudukovic, 1998; Baur et al., 2000; Taylor & Krishna, 1993; Wesselingh, 1997). The NEQ model has therefore been developed by numerous authors, but there is a lack of experimental data with which to validate it. Some input/output measurements are available, but they provide little information about the behaviour inside the column. With this in mind, our paper focuses on NEQ models and their experimental validation. The first part deals with our nonequilibrium model, focused on the diffusional layer near the interface, where complete multicomponent reactive mass and heat transfer is described. The numerical resolution, which avoids the bootstrap problem, is discussed in the second part. Finally, our experimental pilot and the different experiments used to validate the model are presented in the last part.
2. Nonequilibrium Model Theory
A schematic representation of the nonequilibrium (NEQ) model is shown in Figure 1. This NEQ stage may represent a tray or a section of packing. It is assumed that the bulks of both the vapour and the liquid phase are perfectly mixed and that the resistances to mass and heat transfer are located in two thin films at the liquid/vapour interface (film theory; Krishna & Standart, 1976; Krishna, 1977).
Figure 1: The nonequilibrium model; stage j with thin-film resistances for mass and heat transfer at the vapour/liquid interface.
2.1. Bulk equations
The stage equations are the traditional equations based on mass and energy balances in the bulk phase of each stage (see Taylor & Krishna, 1993). These equations take the reactions into account, and there are no restrictive hypotheses as to the nature and the localisation of the chemical reactions. The bulk variables (composition, molar flux, temperature, energy flux) are distinct from the interface variables. The temperatures of the vapour and the liquid phase are not assumed to be equal. The entire column is represented as a sequence of such stages.
2.2. Interface equations
The interface equations link the two phases. Physical equilibrium is assumed at the vapour-liquid interface for each component. Moreover, the mass and energy transfer fluxes through the interface must be continuous.
2.3. Mass and heat transfer
A novel model is used to compute heat and mass transfer through the diffusion layer considered in the film theory. The fluid is treated as an n-component reactive non-ideal mixture, and the balance equations for simultaneous heat and mass transfer are written at steady state, taking the reactions into account. For mass transfer the Maxwell-Stefan diffusion law is used, in a novel formulation; neither the diffusion coefficients nor the molar flux due to the reaction are considered to be constant. The complete formulation for mass transfer for n non-ideal components is:
Mass transfer continuity (1):

$$\frac{dN_i}{dz} = \sum_{j=1}^{NRC} \nu_{ij}\,R_j + \sum_{j=1}^{NRE} \bar{\nu}_{ij}\,\varepsilon_j \qquad \text{for } i = 1,\dots,n$$

Maxwell-Stefan diffusion law (2):

$$c_t \sum_{j=1}^{n}\left(\delta_{ij} + x_i\,\frac{\partial \ln \gamma_i}{\partial x_j}\right)\frac{dx_j}{dz} = \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{x_i N_j - x_j N_i}{D_{ij}} \qquad \text{for } i = 1,\dots,n$$

Equilibrium equation (3):

$$K_j = \prod_{i=1}^{n} a_i^{\alpha_{ij}} \qquad \text{for } j = 1,\dots,NRE$$
In our model an n-component formulation is proposed without using the summation equation; this choice is discussed in a previous paper (Rouzineau et al., 2001). No assumption is made on the type or the number of reactions: they can be controlled by kinetics or instantaneously at equilibrium, and the mass transfer rate changes due to the chemical reactions. For the heat transfer, the Dufour and Soret effects are neglected and the conductive heat flux is evaluated by Fourier's law.
3. Numerical Resolution
The numerical resolution is achieved in two steps. First, the system arising from mass and heat transfer in the diffusion layer (a DAE system) is solved by a DAE integration based on the Gear method (Le Lann, 1998). This integration yields the molar and energy fluxes and the compositions in the diffusional layer. To use a DAE integrator efficiently, two main problems have to be overcome. First, a robust procedure leading to a coherent initial state (i.e. all algebraic equations must be satisfied) before starting the integration has to be used. Secondly, an automatic substitution procedure is used to reduce the number of mass balances in order to take the chemical equilibrium constraints into account and to reduce the differentiation index to 1. This resolution has been described in a previous paper (Rouzineau et al., 2001). The initial values are the interface values. In the second step, the general balances in both phases and at the interface lead to a system of differential and algebraic equations with boundary conditions at each end. A discretisation method is therefore used and the resulting algebraic system is solved by a traditional Newton method. These general balances use the values resulting from the integration of the equations in the diffusional layer.
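To make the inner step concrete, the sketch below integrates a toy two-component reactive film with a BDF (Gear-type) integrator from SciPy. The simplified Fick-like closure, the kinetics and all parameter values are placeholders, not those of the actual model.

```python
import numpy as np
from scipy.integrate import solve_ivp

D = 1.0e-9     # effective diffusion coefficient (m^2/s), placeholder
ct = 1.0e4     # total concentration (mol/m^3), placeholder
k = 1.0        # toy reaction rate coefficient, placeholder

def film_odes(z, y):
    N1, N2, x1, x2 = y
    r = k * x1                       # reaction consumes 1, produces 2
    dN1dz = -r                       # flux continuity with reaction, cf. eq. (1)
    dN2dz = +r
    # Fick-like simplification of the Maxwell-Stefan law, cf. eq. (2):
    dx1dz = -(N1 - x1 * (N1 + N2)) / (ct * D)
    dx2dz = -dx1dz                   # mole fractions sum to one
    return [dN1dz, dN2dz, dx1dz, dx2dz]

y0 = [1.0e-3, 0.0, 0.6, 0.4]         # interface fluxes and compositions
delta = 1.0e-5                       # film thickness (m), placeholder
sol = solve_ivp(film_odes, (0.0, delta), y0, method="BDF")
print(sol.y[:, -1])                  # values at the bulk side of the film
```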
4. Experimental Validation
An experimental pilot has been developed to allow comparison with simulation results.
4.1. Materials
The glass column consists of four packed sections with glass Raschig rings. The total packing height is about one meter and the column diameter is 8 cm. The column has no reboiler and no reflux: it represents the reactive section, where the top liquid flow and the bottom vapour flow are fully controlled. The top liquid flow is pre-heated and a dry evaporator generates the bottom vapour flow (see Figure 2). Vapour samples and liquid temperatures can be taken on each packing section. The operational variables such as feed flow rate, feed temperature and column temperature profile are controlled by a process control unit.
4.2. Experiments
The experiments were performed for the homogeneously catalysed esterification of acetic acid with methanol to produce methyl acetate and water. Sulphuric acid was chosen as the homogeneous catalyst; it is fed through the shower (see Figure 2). Five experiments were carried out by changing the flow rates (from 4.34 kg/h to 8.35 kg/h for the liquid feed and from 2.89 kg/h to 6.34 kg/h for the vapour feed) and the compositions of the feeds, as well as the concentration of catalyst, in order to modify the rate of reaction. For
each run, the partial and global mass balances are checked in order to validate the measurements.
4.3. Simulation
Calculations were made for the above system with our nonequilibrium model. The thermodynamic data are provided by DECHEMA and the UNIQUAC model is used. The reaction is treated as kinetically controlled and the rate constant is given by a study in our laboratory; it depends on the quantity of catalyst, and the expression is:

$$r = 333.3\,[\mathrm{H^+}]\,\exp\!\left(-\frac{E_a}{RT}\right)\left(C_{\mathrm{Acid}}\,C_{\mathrm{Methanol}} - \frac{C_{\mathrm{Acetate}}\,C_{\mathrm{Eau}}}{K_e}\right), \qquad [\mathrm{H^+}] \text{ in ml/l}_{\mathrm{solution}}$$
The estimation of the film thickness is obtained from the average values of the binary mass transfer and diffusion coefficients, estimated by traditional correlations (the Onda (1968) correlation for the mass transfer coefficients; the Fuller and Wilke-Chang correlations for the vapour and liquid diffusion coefficients).
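In film theory the film thickness follows as the ratio of the diffusion coefficient to the mass transfer coefficient, δ = D/k. A minimal sketch with placeholder magnitudes, not the actual correlation results:

```python
# Film theory: film thickness delta = D / k.
# Placeholder magnitudes, not values from the Onda/Fuller/Wilke-Chang correlations.
D_liq = 2.0e-9   # liquid diffusion coefficient (m^2/s)
k_liq = 1.0e-4   # liquid-side mass transfer coefficient (m/s)

delta_liq = D_liq / k_liq
print(f"liquid film thickness: {delta_liq:.1e} m")   # about 2e-05 m
```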
[Figure 2 legend: evaporator; condenser; random packing; electric preheater; flow control; feed pump; hydraulic-seal exchanger; catalyst shower; sample points; thermocouples.]
Figure 2: Experimental pilot plant.
4.4. Results and exploitation
For each run, the experimental and calculated values of the outputs (flow rates, concentrations), the vapour composition profile and the liquid temperature profile are compared; good agreement is systematically observed without any parameter adjustment. For illustration, one run is detailed. The liquid feed is a water/acetic acid (0.76 mass fraction) mixture with a flow of 4.34 kg/hr, and the vapour feed is pure methanol with a flow of
2.89 kg/hr. The flow of catalyst is about 32 g/hr. The conversion of acetic acid is about 26%, so the reaction is significant and the concentration gradients along the column height are important. The nonequilibrium model shows quite good agreement with the experiment, taking the measurement error into consideration (Figure 3 shows the composition profiles along the column for one run and for the simulation). Another simulation was performed without taking the reaction contribution in the diffusional layers into account; the resulting profile, compared with the experimental data and with the simulation including reaction in the film, is shown in Figure 4. The results are clearly different, which underlines the importance of taking the reaction in the diffusional film into account: the faster the reaction, the greater the gap between the two cases.
[Figure 3 legend: calculated profile vs. experimental values (methyl acetate); abscissa: vapour mass fraction, 0.3-0.7.]
Figure 3: Experimental and calculated composition profile.
[Figure 4 legend: calculated with reaction in the film; calculated without reaction in the film; experimental; abscissa: vapour mass fraction, 0.3-0.9.]
Figure 4: Composition profile with and without reaction in the film.
5. Conclusion
We have developed a nonequilibrium model for multicomponent reactive separation techniques. This model is solved numerically by a robust and stable strategy. The original features of this model are the Maxwell-Stefan formulation, which is solved in its complete form, and the absence of restrictive assumptions concerning the reactions. To validate the model, an experimental pilot has been developed: a column section where the inlet fluxes are controlled and accurate local temperature and composition profiles are measured. For each experiment, all concerning the production of methyl acetate, the results of the steady-state simulation are in good agreement with the experimental data and demonstrate the importance of taking the reaction in the diffusional layer into account. The nonequilibrium model thus appears to be a well-adapted tool for the simulation, design and optimisation of reactive distillation.
6. List of symbols
a_i: activity of component i
C_t: total concentration (mol/m³)
D_ij: Maxwell-Stefan binary diffusion coefficient i-j (m²/s)
E_a: activation energy = 10000 cal/mol
K_e: equilibrium constant for esterification = 5.2
K_j: equilibrium constant for equilibrium reaction j
N_i: molar flux of component i (mol/m²/s)
n: number of constituents
NRE: number of equilibrium reactions
NRC: number of kinetically controlled reactions
R_j: rate of reaction j (mol/m³/s)
R: perfect gas constant = 1.989 cal/mol/K
r: reaction rate of esterification (mol/m³/s)
x_i: molar fraction of component i
z: space coordinate (m)
γ_i: activity coefficient of component i
ν_ij: stoichiometric coefficient of component i in kinetically controlled reaction j
ν̄_ij: stoichiometric coefficient of component i in equilibrium reaction j
α_ij: order of component i in equilibrium reaction j
ε_j: enhancement of equilibrium reaction j (mol/m³/s)
7. References
Baur, R., Higler, A.P., Taylor, R., Krishna, R., 2000, Chem. Eng. J. 76, 33-47.
Krishna, R., 1977, Chem. Eng. Sci. 32, 659-667.
Krishna, R., Standart, G., 1976, AIChE J. 22, 383-389.
Le Lann, J.M., 1998, Habilitation à diriger les recherches, 80 pages.
Lee, Jin-Ho, Dudukovic, M.P., 1998, Computers and Chemical Engineering 23, 159-172.
Onda, K., Takeuchi, H., Okumoto, Y., 1968, J. Chem. Eng. Jpn. 1, 56-62.
Rouzineau, D., Prevost, M., Meyer, M., 2001, European Symposium on Computer Aided Process Engineering, ESCAPE 11, Computer-Aided Chemical Engineering, Vol. 9, pp. 267-272, Denmark.
Taylor, R., Krishna, R., 1993, Multicomponent Mass Transfer, Wiley Series in Chemical Engineering, New York.
Wesselingh, J.A., 1997, Distillation and Absorption, Vol. 1, 1-21.
Hierarchical Fuzzy Modelling by Rules Clustering. A Pilot Plant Reactor Application.
Paulo A. C. SALGADO (1); Paulo A. F. N. A. AFONSO (2)
(1) Universidade de Tras-os-Montes e Alto Douro, 5000-911 Vila Real, Portugal
(2) Escola Superior de Tecnologia e Gestao de Agueda, 3454-906 Agueda, Portugal
Abstract
This paper presents a Fuzzy Clustering of Fuzzy Rules Algorithm (FCFRA) that allows the automatic organisation of the set of fuzzy IF ... THEN rules of a fuzzy system into a Hierarchical Prioritised Structure. The proposed FCFRA algorithm has been successfully applied to the modelling of a nonlinear small-scale pilot plant reactor.
1. Introduction
Fuzzy modelling is one of the techniques currently being used for the modelling of nonlinear, uncertain and complex systems. An important characteristic of fuzzy models is the partitioning of the space of system variables into fuzzy regions using fuzzy sets. In each region the characteristics of the system can be simply described using a rule. A fuzzy model typically consists of a rule base with a rule for each particular region. Fuzzy transitions between these rules allow the modelling of complex nonlinear systems with a good global accuracy. One of the aspects that distinguish fuzzy modelling from black-box techniques like neural nets is that fuzzy models are, to a certain degree, transparent to interpretation and analysis. A system can be described with a few rules using distinct and interpretable fuzzy sets, but also with a large number of highly overlapping fuzzy sets that prevent any interpretation. When a fuzzy model is developed using expert knowledge, it usually remains interpretable. On the other hand, some degree of redundancy, and thus unnecessary complexity, cannot be avoided when automated techniques are applied to acquire fuzzy models from data. One of the main objectives of this article is to show that automated modelling techniques can be used to obtain not only accurate, but also transparent rule-based models from system measurements. Hence, the information in a fuzzy system f(x) will be organized as a set of n fuzzy sub-systems f_1(x), f_2(x), ..., f_n(x). Each of these systems may contain information related to particular aspects of the system f(x). This objective can be reached by using an algorithm that implements Fuzzy Clustering of Fuzzy Rules (FCFRA). The proposed algorithm allows grouping a set of rules into c subgroups (clusters) of similar rules, producing a representation of the fuzzy system in a hierarchical structure, e.g. the HPS (Hierarchical Prioritised Structure) and the PCS (Parallel Collaborative Structure) proposed by Salgado (2001).
2. Hierarchical Fuzzy Clustering of Fuzzy Rules
The implementation of the proposed FCFR algorithm is based on the concept of the relevance of a rule set of a fuzzy system (Salgado, 2002). This concept is a measure of the relative importance of the rule sets that describe a given region of the input/output space, where different metrics can be defined.
2.1. The relevance concept
A generic fuzzy model is presented as a collection of fuzzy rules of the following form:
R_l: IF x_1 is A_l1 and x_2 is A_l2 ... and x_n is A_ln THEN y = z_l(x)
where x = (x_1, x_2, ..., x_n) ∈ U and y ∈ V are linguistic variables, A_lj are fuzzy sets of the universes of discourse U_j ⊂ R, and z_l(x) is a function of the input variables. Typically, z can take one of the following three forms: singleton, fuzzy set or linear function. Fuzzy logic systems with centre-average defuzzification, product-inference rule and singleton fuzzification are of the following form:

$$f(x) = \sum_{l=1}^{M} \theta^{l}\, p^{l}(x) \qquad (1)$$

where $p^{l}(x) = \mu^{l}(x) \big/ \sum_{j=1}^{M} \mu^{j}(x)$ are the fuzzy basis functions (FBF), M represents the number of rules, θ^l is the point at which the output fuzzy set l achieves its maximum value, and μ^l is the membership of the antecedent of rule l. Consider ℑ a set of rules from U into V, covering the region S = U × V in the product space. P(ℑ) is the power set of ℑ. Any relevance function must be of the form

$$\Re_{S} : P(\Im) \rightarrow [0,1] \qquad (2)$$

With the appropriate number of rules, equation (1) can describe any nonlinear relationship because it is a universal function approximator (Wang, 1994). Under these circumstances the relevance of the fuzzy system at a point x_k ∈ S is the sum of the relevances of all the rules at x_k ∈ S and is equal to one:

$$\Re_{\Im}(x_k) = \sum_{l=1}^{M} \Re_{l}(x_k) = 1 \qquad (3)$$
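Equation (1) can be evaluated directly. The following sketch, with Gaussian memberships and arbitrarily chosen parameters, computes the rule memberships, the fuzzy basis functions and the model output.

```python
import numpy as np

# Two rules (M = 2) over two inputs; Gaussian antecedent memberships.
centers = np.array([[0.2, 0.3], [0.8, 0.7]])
widths = np.array([[0.2, 0.2], [0.3, 0.3]])
theta = np.array([1.0, -0.5])          # singleton consequents theta^l

def fuzzy_output(x):
    # Product inference over the antecedent memberships of each rule.
    mu = np.prod(np.exp(-((x - centers) / widths) ** 2), axis=1)
    p = mu / mu.sum()                  # fuzzy basis functions p^l(x)
    return float(p @ theta)            # centre-average defuzzification, eq. (1)

print(fuzzy_output(np.array([0.25, 0.35])))
```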
2.2. The hierarchical prioritised structure
The HPS structure, illustrated in Figure 1, allows the prioritisation of the rules by using a hierarchical representation, as stated by Yager (1998). If i < j, the rules in level i have a higher priority than those in level j. For a system with n levels, i = 1, ..., n-1, and each level with M_i rules:
I) If U is A_i^l and V_{i-1} is low, then F_i is B_i^l, and rule II is used;
II) V_i is F_{i-1}
The output of a generic level i is given by expression (4), where F_i^l = A_i^l(x_k) ∧ B_i^l, l = 1, ..., M_i, is the output membership function of rule l in level i and α_{i-1} is the relevance of the fuzzy system evaluated up to level i-1 in the S_i input-output region, as stated in Salgado (1999):

$$\alpha_i = S\!\left(\alpha_{i-1},\; T\!\left[(1-\alpha_{i-1}),\; S\!\left(\Re_{S_i}(F_i^l),\; l = 1,\dots,M_i\right)\right]\right) \qquad (5)$$

where S and T are, respectively, S-norm and T-norm operations. The $\Re_{S_i}(F_i^l)$ represent the relevance of rule l in level i.
Figure 1. Hierarchical Prioritised Structure (HPS) diagram (input u; levels 1 to n; output V_n).
Figure 1. Hierarchical Prioritised Structure (HPS) diagram.
2.3. Fuzzy clustering of fuzzy rules The purpose of clustering is to find groups of rules in a large rule base revealing the patterns that can be representative of fuzzy system behaviour. Such a task is achieved through the separation of the fuzzy rules Z={Ri, R2,-, RM} in c clusters, according to a defined criterion, finding the optimal clusters centre, V, and the partition matrix, U. Each value M.^ e [O, l], \
0<£w/^<«, V/G{1, 2,-.., 4
(6)
and c
M
(7) The equation (7) can be interpreted as the sum of the products between the relevance of the rules / in the x^ point with the degree of the rule / belonging to the cluster /., u„a..
902 This last item reflects the joint contribution of rule / to the i*^ hierarchical system, u.^, with the relevance of the hierarchical level /, a.. The requirements of equation (6) and the relevance condition of equation (3) are completely satisfied in equation (7). Therefore, and according to the FCFRA algorithm, the objective is to find a U= [t/,J and ^ ~K»^2>"''^c]» ^^^^ ^/ ^ ^^ where: n r c
M
"1
(8) yt=lL/=l /=1
J
is minimized with a weighting constant m>\, subjected to equation (7). The parameter rii is fixed for each cluster with membership distance of Vi. The results of the minimization can be expressed in the following steps: Possibilistic Fuzzy Clustering algorithms of fuzzy rules - P-FCAFR Step 1- For a set of points ^=={x7, X2,..., x„}, with XJES, and a set of rules 3={/?/, /?2v, RM), with relevance SK^ (jc^), ^ 1,... , M, keep c, 2 < c
(r)
/=1
K'r-£(9i,(-.)r-y-^, where [M*;' ] = t/*''', i=l, 2,..., c.
(9)
(«rr-i(3^,(-jr Step 3 - Compute the new partition matrix (/''^^^ using the expression: 1
.(--^1)
-, withl < / < c , 1 <1<M.
(10)
tm%Mrh-^k j=\ I jt=i
step 4- Compare L^'^ with if'^^^: If || C/'"'^^-(/''^|| < e the process ends. Otherwise let r=H-l and return to step 2. e is a small real positive constant. The applications of the P-FCAFR algorithm to fuzzy system rules (equation - 1) result in fuzzy system with HPS structure, i.e., in the form: M
/W=I«. 1I^'M'W",V /I«- IM'W«,V i=\
V /=l
(11)
903 3. A Pilot Plant Reactor Application In this section, the liquid level of the pilot plant reactor has been used to illustrate the proposed strategy for possibilistic clustering in the "fuzzy rules domain" (Afonso, 1998). This plant where the experimental tests were carried out is depicted in Figure 2a). An 80 litre capacity nonadiabatic stirred tank reactor, is fed by two liquid streams coming from two pressurised tanks (tank 1 with flowrate Qi and temperature Ti and tank 2 at temperature T2 and flowrate Q2, where flow rates are manipulated by control valves VCl and VC2, respectively). The reactor outflow is governed by gravity and by control valve VC5 draining through a pipe into atmospheric pressure. At the first stage, the system was modelled using the nearest neighbourhood algorithm, from a set of real data obtained from a set of 10 experimental tests. During the automatic learning process, a set of 187 fuzzy rules are generated. A graphical representation of the fuzzy model, fh, is shown in the figure 2b) for a constant reactor inlet flowrate 2i+2,it = Q\,k "*" Q2,k • ^^i^ process model can be represented in a discrete form, where k corresponds to the sampling time: K.x=K+fH[Qx.2,k^yc\,h,)
(13)
Figure 2. Simplified flowsheet of the pilot plant and the transfer function of valve VC5.
The second stage consists of clustering the fuzzy system into two clusters, each one representing a fuzzy system in an HPS structure, using the proposed FCAFR algorithm. Figures 3a) and 3b) show the individual output response of each hierarchical fuzzy model. The original fuzzy system can be described as the aggregation (11) of both cluster surfaces. Figures 3c) and 3d) show the relevance of each fuzzy subsystem, for the 1st and the 2nd cluster respectively. The space regions with high values of V_C5 and h are covered in Fig. 3c); the remaining region of the domain is represented in Fig. 3d). This methodology allows splitting the original model into two or more sub-models, each one describing one particular aspect of the fuzzy system.
Figure 3 - Cluster 1 is represented in a) and cluster 2 is indicated in b). The relevance of 1st and 2nd cluster is shown respectively in c) and d).
6. Conclusions
The mathematical foundations for possibilistic fuzzy clustering of fuzzy rules were presented. The P-FCAFR algorithm was used to organise the rules of the fuzzy model of the liquid level inside the pilot plant reactor into the HPS structure. The partition matrix can be interpreted as containing the values of the relevance of the sets of rules in each cluster. This approach is currently showing its potential for modelling and identification tasks, particularly in the field of fault detection and compensation.
Acknowledgment
Financial support from FCT under research projects is gratefully acknowledged.
7. References
Afonso, P.A., Castro, J., 1998, Improving Safety of a Pilot Plant Reactor using a Model Based Fault Detection and Identification Scheme, Comp. & Chem. Eng. 22.
Salgado, P., 2001, Fuzzy Rule Clustering, IEEE Conf. on Systems, Man and Cybernetics 2001, Tucson, AZ, USA, pp. 2421-2426.
Salgado, P., 2002, Relevance of the fuzzy sets and fuzzy systems. In: Systematic Organization of Information in Fuzzy Logic, NATO Advanced Studies, IOS Press.
Yager, R., 1998, On the Construction of Hierarchical Fuzzy Systems Models, IEEE Trans. on Systems, Man, and Cybernetics - Part C, 28, pp. 55-66.
Wang, Li-Xin, 1994, Adaptive Fuzzy Systems and Control: Design and Stability Analysis, Prentice Hall, Englewood Cliffs, NJ 07632.
Residence Time Distributions From CFD In Monolith Reactors - Combination of Avant-Garde and Classical Modelling
Tapio Salmi, Johan Warna, Jyri-Pekka Mikkola, Jeannette Aumo, Mats Ronnholm, Jyrki Kuusisto
Abo Akademi, Process Chemistry Group, Laboratory of Industrial Chemistry, FIN-20500 Turku/Abo, Finland
Abstract
Computational fluid dynamics (CFD) was used to investigate the flow pattern and flow distribution in a recirculating monolith reactor system designed for catalytic three-phase processes. The information from the CFD model was transferred to a simplified simulation model, where the monolith and the mixing system were described by parallel tubular reactors coupled to a mixing space. The model was based on the following principles: the mixing space and the monoliths were in fully dynamic states, but the concept of differential reactors was applied to the monolith channels. Thus the simplified model consists of ordinary differential equations for the gas and liquid phases. The modelling concept was successfully illustrated by a case study involving complex reaction kinetics: the hydrogenation of citral to citronellal, citronellol and 3,7-dimethyloctanol over cordierite-supported nickel on an alumina washcoat. A comparison of experimental results with the model predictions revealed that the proposed approach is reasonable for the description of three-phase monolith reactors.
1. Introduction
Residence time distribution (RTD) is a classical tool for predicting the behaviour of a chemical reactor: provided that the reaction kinetics and the mass transfer characteristics of the system are known, the reactor performance can be calculated by combining kinetic and mass transfer models with an appropriate residence time distribution model. RTDs can be determined experimentally, as described in classical textbooks of chemical reaction engineering (e.g. Levenspiel, 1999). RTD experiments are typically carried out as pulse or step-response experiments. The technique is elegant in principle, but it requires access to the real reactor system. In large-scale production, experimental RTD studies are not always possible or allowed; furthermore, a predictive tool is needed when the design of a new reactor is considered. The current progress of computational fluid dynamics (CFD) enables computational 'experiments' on reactor equipment to reveal the RTD. A lot of commercial software, such as CFX and Fluent, has recently been developed to carry out CFD calculations, particularly for homogeneous systems. Typically CFD is used for non-reactive fluid systems, but nowadays even reactive systems can be computed (Baldyga and Bourne, 1999). The ultimate goal of chemical reaction engineering is to predict the overall reactor performance in the presence of chemical transformations. The difficulties of CFD, however, grow considerably when multiphase systems with chemical reactions are considered. For this reason, a logical approach is to utilize CFD to catch the essential features of the flow pattern and to use this information in classical reactor models based on RTDs.
The approach is illustrated by a case study: a three-phase monolith reactor coupled to a recycling device, the Screw Impeller Stirred Reactor (SISR) developed at TU Delft (Kapteijn et al., 2001). Cylindrical monoliths are placed in a stator, and a foam of gas and liquid is forced through the monolith channels with the aid of a screw (Fig. 1). Monolith reactors combine the advantages of slurry reactors and fixed beds: minimized internal diffusion, low pressure drop and continuous operation (Nijhuis et al., 2001).
Figure 1. The monolith reactor schematically and in reality.
2. Flow Distribution from CFD Calculations
In monolith reactors the distribution of the fluid into the channels is typically at least somewhat uneven; thus it is very important to predict the flow distribution and include it in the quantitative modelling. We utilized CFD calculations to obtain the flow characteristics of the experimental system (Fig. 1). The CFD calculations were performed with the software CFX 4.4. The flow profiles in the gas and liquid phases were solved with the turbulent k-ε method (320000 calculation elements). To evaluate the distribution of gas bubbles, the Multiple Size Group method was applied. The results from the CFD calculations give the flow velocities for gas and liquid, the bubble sizes and the gas and liquid hold-ups in the channels (Fig. 2). This information can be utilized in the conventional reactor model. The predicted slug flow (Taylor flow) conditions in the monolith channels were also confirmed by visual investigation of the flow, replacing the autoclave with a glass vessel of equal size (Fig. 1). Schematically, the reactor can be regarded as a system of parallel tubes with varying residence times. The screw acts as a mixer, which implies that the outlet flows from the channels are merged together and the inlet flows to the monolith channels have a uniform chemical composition. The principal flowsheet is displayed in Fig. 3. Based on this flowsheet, the mass balance equations are derived as follows.
Figure 2. Flow distribution calculated in the monolith channels by CFD.
Figure 3. Simplified flowsheet of the monolith system described as parallel tube reactors and stirred mixing volume.
3. Simplified Model for Reactive Flow
The surroundings of the monolith were considered to be a perfectly backmixed system where no reactions take place. The monolith channels were approximated by the plug flow concept. The gas-liquid as well as the liquid-solid mass transfer resistances were included in the model. Since the catalyst layer was very thin (a few micrometers) and the reactions considered in the present case were slow, the internal mass transfer resistance in the catalyst layer was neglected. The gas-phase pressure in the reactor was maintained constant by controlled addition of hydrogen. The temperature fluctuations during the experiments were negligible; thus the energy balances were not needed. The conversions of the reactants were minimal during one cycle through the monolith, which implies that a constant gas hold-up could be assumed for each channel. The reactions were carried out in inert solvents, and previous considerations have shown that the liquid density did not change during the reaction. Based on this background information, the dynamic mass balance for the liquid phase in each channel can be written as follows:

$$n'_{Li,j,\mathrm{in}} + N_{Li,j}\,\Delta A_L = N_{Li,j,s}\,\Delta A_S + n'_{Li,j,\mathrm{out}} + \frac{dn_{Li,j}}{dt} \qquad (1)$$
Due to the assumption of constant density, the volumetric flow rate does not change and the model can be expressed in concentrations. Letting the basic volume element shrink, the hyperbolic partial differential equation (PDE) is obtained:

$$\frac{\partial c_{Li,j}}{\partial t} = N_{Li,j}\,a_L - N_{Li,j,s}\,a_S - \left(\tau_{L,j}\,\varepsilon_{L,j}\right)^{-1} \frac{\partial c_{Li,j}}{\partial z} \qquad (2)$$
This complete model is valid for all of the components but, actually, the gas-liquid mass transfer term (N_Li,j) is non-zero for hydrogen only. The PDE model can be further simplified by taking into account the fact that the conversion is minimal during one cycle through the channel, so that the concentration profile in the channel can be assumed to be almost linear. The entire model can then be expressed by the average concentration (c*) and the outlet concentration (c_0):

$$\frac{dc^*_{Li,j}}{dt} = N^*_{Li,j}\,a_L - N^*_{Li,j,s}\,a_S - 2\left(\tau_{L,j}\,\varepsilon_{L,j}\right)^{-1}\left(c^*_{Li,j} - c_{0Li}\right) \qquad (3)$$
The exact formulations of the fluxes (N*) depend on the particular mass transfer model being used; in principle the whole scope is feasible, from Fick's law to the complete set of Stefan-Maxwell equations (Fott and Schneider, 1984). Since the only component of importance for the gas-liquid mass transfer is hydrogen, which has a limited solubility in the liquid phase, the simple two-film model along with Fick's law was used, giving the flux expression

$$N^*_{Li,j} = k'_{Li,j}\left( c^*_{Gi,j}/K_i - c^*_{Li,j} \right) \qquad (4)$$
For the liquid-solid interface, a local quasi-steady-state mass balance takes the form

$$N^*_{Li,j}\,a_S + r^*_i\,\rho_B = 0 \qquad (5)$$
In case the liquid-solid mass transfer is rapid, the bulk and surface concentrations coincide, and the rate expression is inserted directly into the balance equation, which becomes

$$\frac{dc^*_{Li,j}}{dt} = r^*_{i,j}\,\rho_B - 2\left(\tau_{L,j}\,\varepsilon_{L,j}\right)^{-1}\left(c^*_{Li,j} - c_{0Li}\right) \qquad (6)$$
The surroundings of the monolith are described by the concept of complete backmixing, which leads to the following overall mass balance for the components in the surrounding liquid phase:

$$\frac{dc_{0Li}}{dt} = \tau_L^{-1}\left( \sum_j \left(2c^*_{Li,j} - c_{0Li}\right)\alpha_{L,j} - c_{0Li} \right) \qquad (7)$$
The treatment of the gas phase is analogous to that of the liquid phase. The flux describing the gas-liquid mass transfer is given by eq. (4). Consequently, the dynamic mass balance for the monolith channels can be written as

$$\frac{dc^*_{Gi,j}}{dt} = -N^*_{Li,j}\,a_L - 2\left(\tau_{G,j}\,\varepsilon_{G,j}\right)^{-1}\left(c^*_{Gi,j} - c_{0Gi}\right) \qquad (8)$$
For the monolith surroundings, the concept of complete backmixing is again applied, leading to

$$\frac{dc_{0Gi}}{dt} = \tau_G^{-1}\left( \sum_j \left(2c^*_{Gi,j} - c_{0Gi}\right)\alpha_{G,j} - c_{0Gi} \right) \qquad (9)$$
The model for the schematic system (Fig. 3) consists of the simple ODEs (3) (or (6)), (7), (8) and (9), which form an initial value problem (IVP). In the case that pure hydrogen is used, its pressure is kept constant and the liquid-phase components are non-volatile, the gas-phase balance equations (8)-(9) are discarded and the gas-phase concentration in eqs. (3) and (6) is obtained e.g. from the ideal gas law. The initial conditions, i.e. the concentrations at t = 0, are equal everywhere in the system, and the IVP can be solved numerically by any stiff ODE solver.
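As an illustration, the sketch below assembles equations (6) and (7) for a single liquid-phase species in a few parallel channel groups, with invented parameters and a toy first-order rate, and integrates the resulting IVP with a stiff BDF solver. It is a toy instance of the model structure, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

J = 5                                   # number of channel groups
rng = np.random.default_rng(0)
tau_eps = 10.0 + 5.0 * rng.random(J)    # tau_Lj * eps_Lj per channel (s)
alpha = rng.random(J)
alpha /= alpha.sum()                    # flow fractions alpha_Lj sum to one
tau_L = 50.0                            # residence time of the mixing volume (s)
k, rho_B = 1.0e-3, 1.0                  # toy rate constant and bulk density

def rhs(t, y):
    c_star, c0 = y[:J], y[J]            # channel averages and tank concentration
    r = -k * c_star                     # toy first-order kinetics
    dc_star = r * rho_B - 2.0 / tau_eps * (c_star - c0)       # eq. (6)
    dc0 = (np.dot(2.0 * c_star - c0, alpha) - c0) / tau_L     # eq. (7)
    return np.append(dc_star, dc0)

y0 = np.ones(J + 1)                     # equal concentrations at t = 0
sol = solve_ivp(rhs, (0.0, 5000.0), y0, method="BDF")
print(sol.y[J, -1])                     # tank concentration at the final time
```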
4. Application: Catalytic Three-Phase Hydrogenation of Citral in the Monolith Reactor
Hydrogenation of citral was selected as an example because it nicely illustrates a case with complex stoichiometry and kinetics, which is typical for fine chemicals. The stoichiometric scheme is displayed in Fig. 4. The reaction system is relevant for the manufacturing of fragrances, since some of the intermediates, namely citronellal and citronellol, have a pleasant smell; the optimization of the product yield is thus of crucial importance. Isothermal and isobaric experiments were carried out under hydrogen pressure in the monolith reactor system at various pressures and temperatures (293-373 K, 2-40 bar). The catalytic material was nickel on an alumina-washcoated cordierite support. Hexane was used as the solvent. Samples were withdrawn from the reactor and analyzed by gas chromatography (Aumo et al., 2002). The fit of the model to the experimental data is displayed in Fig. 5. The product distribution depends dramatically on the reaction conditions: at low temperatures and hydrogen pressures the system worked under kinetic control, and the desired intermediate products were obtained in high yields. As the temperature and hydrogen pressure were increased, the final product was favoured and the process was evidently shifted towards mass-transfer control. The individual mass-transfer coefficients were estimated by using the molecular diffusion coefficient of hydrogen in the liquid phase (Reid et al., 1988) along with the hydrodynamic film thickness (Irandoust and Andersson, 1989). Since the film thickness depends on the local velocity, the mass transfer coefficient was different in different channels. The rate equations describing the reaction scheme (Fig. 4) have been presented in a previous paper of our group (Tiainen, 1998). The weighted sum of squares between measured and estimated concentrations was minimized by a hybrid simplex-Levenberg-Marquardt algorithm implemented in the simulation software Modest (Haario, 1994). The model equations were solved in situ during the parameter estimation by the backward difference method. The estimated parameters were the kinetic and adsorption equilibrium constants of the system. The simulation results revealed that the model was able to describe the behaviour of the system. The parameter values were reasonable and comparable with values obtained in previous studies of citral hydrogenation in a slurry reactor (Tiainen, 1998).
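The estimation step can be sketched with SciPy's least-squares routine, which supplies the Levenberg-Marquardt part of such a hybrid scheme. The simplex stage, the real kinetic model and the data are omitted; everything below is invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_data = np.linspace(0.0, 10.0, 11)
c_data = np.exp(-0.3 * t_data) + 0.01 * np.random.default_rng(0).normal(size=11)

def model(theta, t):
    # Toy stand-in for the reactor model: first-order decay of citral.
    sol = solve_ivp(lambda s, c: -theta[0] * c, (0.0, t[-1]), [1.0],
                    t_eval=t, method="BDF")
    return sol.y[0]

def residuals(theta):
    return model(theta, t_data) - c_data

fit = least_squares(residuals, x0=[0.1], method="lm")
print("estimated rate constant:", fit.x[0])
```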
Figure 4. Stoichiometry of citral hydrogenation over Ni-alumina.
Figure 5. Fit of the model (-) to experimental data (o) for citral hydrogenation in the monolith reactor system; curves shown for citral, citronellal and citronellol.
5. Notation
A: area or cross-section
a: area-to-volume ratio
c: concentration
K: gas-liquid equilibrium ratio
k': overall mass transfer coefficient
L: monolith channel length
N: flux
n: amount of substance
n': flow of amount of substance
t: time
V: volume
v: volumetric flow rate
z: dimensionless length coordinate
α: fraction of volumetric flow rate through one channel
ε: hold-up
ρ_B: catalyst bulk density
τ: residence time
Subscripts and superscripts
G: gas; ch: channel; i: component index; j: monolith channel index; L: liquid; S: solid (catalyst) surface; T: mixing volume (tank); 0: inlet to the mixing volume; *: average value
Merged parameters
α_Gj = w_Gj/Σw_Gj; τ_Gj = L/w_Gj; α_Lj = w_Lj/Σw_Lj; τ_Lj = L/w_Lj; τ_G = V_GT/(A_ch Σw_Gj); τ_L = V_LT/(A_ch Σw_Lj)
6. References
Aumo, J., Lilja, J., Maki-Arvela, P., Salmi, T., Sundell, M., Vainio, H., Murzin, D., 2002, Catal. Letters (in press).
Baldyga, J., Bourne, J.R., 1999, Turbulent Mixing and Chemical Reactions, Wiley.
Fott, P., Schneider, P., 1984, Recent Advances in the Engineering Analysis of Chemically Reacting Systems (Ed. L.K. Doraiswamy), Wiley Eastern.
Haario, H., 1994, MODEST - User's Guide, Profmath Oy, Helsinki.
Irandoust, S., Andersson, B., 1989, Ind. Eng. Chem. Res. 28, 1685-1688.
Kapteijn, F., Nijhuis, T.A., Heiszwolf, J.J., Moulijn, J.A., 2001, Catalysis Today 66, 133-144.
Levenspiel, O., 1999, Chemical Reaction Engineering, Wiley (3rd Ed.).
Nijhuis, T.A., Kreutzer, M.T., Romijn, A.C.J., Kapteijn, F., Moulijn, J.A., 2001, Catal. Today 66, 157-165.
Reid, R.C., Prausnitz, J.M., Poling, B.E., 1988, The Properties of Gases and Liquids, McGraw-Hill.
Tiainen, L.P., 1998, Doctoral Thesis, Abo Akademi, Turku/Abo.
7. Acknowledgements
The work is part of the activities at the Abo Akademi Process Chemistry Group within the Finnish Centre of Excellence Programme (2000-2005) of the Academy of Finland. Financial support from the National Technology Agency (TEKES) and the Finnish Graduate School in Chemical Engineering (GSCE) is gratefully acknowledged. AEA Technology is gratefully acknowledged for the special license agreement for the CFD software.
Modelling the Dynamics of Solids Transport in Flighted Rotary Dryers
P.A. Schneider, M.E. Sheehan and S.T. Brown
James Cook University, School of Engineering, Townsville, Queensland 4811, Australia
Abstract This paper proposes a simple dynamic solids transport model for flighted rotary dryers, which results from discretising the dryer in the axial direction into a series of equivolume elements. Each resultant element is partitioned into two zones, one active and the other passive. Solids interchange between the active and passive zones is included, leading to a tanks-in-series/parallel approach traditionally used by reaction engineers. Modelling solids transport in this manner allows the residence time distribution (RTD) characteristics of the rotary dryer to be elucidated. In this work gPROMS is used to simulate the proposed rotary dryer model. Data from a 100 tonne per hour raw sugar dryer is reconciled against the dynamic solids transport model, by estimating overall solids transport coefficients.
1. Introduction/objectives
The Australian raw sugar industry faces increasing competition in a highly competitive world market. Now, more than ever, export quality standards must be ensured. As the last unit operation in the manufacture of raw sugar, the rotary dryer plays a key role in meeting increasingly stringent product quality specifications. Given the high capital cost of additional drying capacity (approximately AUS$2M), it is prudent to investigate how existing dryer capacity can be better utilised. A key step in any optimisation involves the development of a dynamic model of the process in question. Rotary sugar drying involves the simultaneous and coupled cooling and drying of a wet crystalline feed, using a counter current air stream as a heat and humidity sink. This is shown schematically in Figure 1.
Figure 1: Schematic view of a flighted rotary dryer, showing counter current airflow.
A key issue in modelling rotary dryers, which has largely been neglected to date, involves the incorporation of a dynamic solids transport model. Many previous workers (Douglas et al., 1993, Duchesne et al., 1996 and Sheehan and Schneider, 2000) developed mass and energy balance relations for rotary dryers, in which they made two key assumptions.
• The hold-up of solids in the dryer is uniform and always at steady state.
• All of the solids in the dryer participate in drying.
The first assumption is not valid due to feed variations, which are common to many rotary dryers employed within the Australian sugar industry. In fact, it could reasonably be argued that the dryer is never at steady state and therefore the hold-up can never be uniform along its length. The second assumption is also invalid, since visual inspection of any operating flighted rotary dryer reveals that, while some of the solids do contact the oncoming air stream, a significant portion of the crystals is held up in the flights or kilning along the dryer floor and, thus, does not interact with the oncoming air stream. A key objective of this work was to develop a dynamic model of solids transport through an industrial flighted rotary dryer that addressed the above two assumptions and would form the base upon which the mass and energy balances could be superimposed.
2. Methods
2.1. Solids transport modelling
Solids transport down a flighted rotary dryer is complex and can be attributed to solids rolling and cascading. These mechanisms are complex and, while descriptive, would find very little direct application in improving the control of a rotary dryer. When modelling solids transport in a flighted rotary dryer, it can be observed that the solids behave in one of two ways. Solids either actively curtain, thereby gaining exposure to the counter current air stream, or travel passively (in the flights or along the dryer floor) and therefore do not participate in drying. Thus the solid phase may be subdivided into two categories, active and passive. This is pictured in Figure 2 a), which shows a schematic cross section of a flighted rotary dryer. Figure 2 b) shows an idealised conception of the active and passive solid phases. This concept assumes that passive solids are contained within a well-mixed element, while active solids are held within another, parallel well-mixed element.
Figure 2: a) Cross sectional view of a flighted rotary dryer, featuring active and passive solids phases, b) Idealised element, showing active and passive solids interaction.
The flow of solids out of the i-th element, ṁ_i, is assumed to be proportional to the mass of solids within that element, m_i, giving

ṁ_i = k m_i    (1)

The coefficient k is a constant of proportionality, which describes the propensity of solids to depart the i-th element. Thus the dynamics of the solids mass hold-up in the passive and active elements are:

dm_{i,p}/dt = k₁ (m_{i-1,p} − m_{i,p}) − k₂ m_{i,p} + k₃ m_{i,a}    (2)

dm_{i,a}/dt = k₂ m_{i,p} − k₃ m_{i,a}    (3)

where the subscripts p and a refer to passive and active solids respectively and the transport coefficients are k₁ (passive-to-passive), k₂ (passive-to-active) and k₃ (active-to-passive). Furthermore, the dynamics of the concentration, w, of a trace component in the active and passive elements are determined as

dw_{i,p}/dt = [ k₁ (w_{i-1,p} m_{i-1,p} − w_{i,p} m_{i,p}) − (k₂ w_{i,p} m_{i,p} − k₃ w_{i,a} m_{i,a}) − w_{i,p} (dm_{i,p}/dt) ] / m_{i,p}    (4)

dw_{i,a}/dt = [ k₂ w_{i,p} m_{i,p} − k₃ w_{i,a} m_{i,a} − w_{i,a} (dm_{i,a}/dt) ] / m_{i,a}    (5)
The approach taken to model the dynamics of solids transport in a full-scale flighted rotary dryer combines approaches taken by Duchesne et al. (1996) and Matchett and Baker (1988). Consider N dryer elements in series, as shown in Figure 3, in which solids flow from one element to the next as passive solids. In the present model, active solids would interact with a counter current air stream. Modelling the entire dryer is simply a matter of repeating the above equations (2-5) N times and giving suitable inlet flows for solids and air.
Figure 3: Schematic representation of a flighted rotary dryer, featuring active and passive solids phases.
2.2. Solids transport dynamic simulation
The above equations for solids transport were implemented within gPROMS. This was done by creating separate gPROMS Models, which described the transfer of solids into, and out of, the passive and active elements. These gPROMS Models were then linked together by a third gPROMS Model into a variable number, N, of elements in series. The combined gPROMS Model was then formulated into a gPROMS Process, which executed the dynamics under varying conditions. A variety of steps were taken to verify the gPROMS code, such as mass balance closure on total solids in the dryer and a reconciliation of inlet and outlet tracer mass.
2.3. Industrial tracer experiments
A 100 t/h flighted rotary sugar dryer at CSR Sugar Limited's Invicta Sugar Mill, located in North Queensland, was used as a case study to evaluate the proposed model. Approximately 0.5 kg of elemental lithium, as saturated lithium chloride solution, was injected into the sugar inlet end of the rotary dryer over a 40 second time frame, once the dryer had reached (close to) steady state operation. Samples of raw sugar leaving the dryer were taken and later analysed for lithium by atomic absorption spectrometry. It should be noted that simulation results of the proposed solids transport model were used as a guide to determine when the dryer outlet stream should be sampled, in order to "catch" the peak of the residence time distribution (RTD) curve. In this way an information-rich signal was gained, which was invaluable for model validation and parameter estimation purposes, while at the same time reducing experimentation costs.
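For readers without gPROMS, the element-level balances (2)-(3) are easy to prototype in other tools. The sketch below is a minimal Python/SciPy illustration of the N-element series-parallel hold-up model; the feed rate, time span and initial state are assumed for illustration, while k1-k3 reuse the fitted values reported later in Figure 5 and N = 50 follows the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 50
k1, k2, k3 = 353.4084, 11.09308, 50.83923  # fitted values from Fig. 5
feed = 500.0  # solids feed rate into the first element (arbitrary units)

def rhs(t, y):
    m_p, m_a = y[:N], y[N:]          # passive and active hold-ups per element
    inflow = np.empty(N)
    inflow[0] = feed                 # fresh solids enter element 1
    inflow[1:] = k1 * m_p[:-1]       # passive flow arriving from upstream
    dm_p = inflow - (k1 + k2) * m_p + k3 * m_a   # Eq. (2)
    dm_a = k2 * m_p - k3 * m_a                   # Eq. (3)
    return np.concatenate([dm_p, dm_a])

y0 = np.zeros(2 * N)                 # start from an empty dryer
sol = solve_ivp(rhs, (0.0, 2.0), y0, method="LSODA")
m_p_end, m_a_end = sol.y[:N, -1], sol.y[N:, -1]
print("active-to-total mass ratio:", m_a_end.sum() / (m_p_end + m_a_end).sum())
```

At steady state the printed active-to-total ratio approaches k2/(k2 + k3), the relation given in Eq. (6) below.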
3. Results
The gPROMS simulation was tested under a variety of conditions, in order to evaluate the dynamic behaviour of the solids in the dryer. Figure 4 shows the effect of a series of step changes in the inlet feed flow rate on the total mass hold-up (i.e. active and passive) in the first, middle and last dryer elements. As expected, the first element behaves very much like a first order system, while the middle and last elements have a sigmoidal shape, characteristic of higher order systems.
Figure 4: Model dynamic response of total solids mass in selected dryer elements.
The results of the industrial tracer study are shown in Figure 5. Laboratory analyses of elemental lithium in the raw sugar samples taken from the dryer, expressed in parts per million, are shown by the data points. These data are of excellent quality, considering the conditions under which the experiment was carried out. It is interesting to observe the extended "tail" of the RTD curve, indicating that there is some back mixing of solids in the dryer unit, which justifies the choice of the series-parallel structure of the proposed model.
k1 = 353.4084, k2 = 11.09308, k3 = 50.83923
Figure 5: Full scale rotary dryer RTD data and gPROMS estimation of transport coefficients.
Parameter estimation was performed in gPROMS, which attempted to minimise the error between the predicted and actual lithium concentrations exiting the dryer. The optimal values for the transport coefficients are shown on Figure 5. Before this estimation was carried out, the plant data was normalised so that the area under the RTD curve was equal to unity, matching the conditions in the gPROMS simulation. The smooth curve in Figure 5 represents the optimised RTD from the gPROMS solids transport model, based on optimised parameter values for solids transport (k1, k2, k3). It is important to note that the transport coefficients were set globally for all elements along the dryer and did not change locally. It is clear that the proposed model structure describes well the steady state RTD of the flighted rotary raw sugar dryer. While the optimised gPROMS simulation agrees well with the plant data, there are a few shortcomings of the model. First, the number of elements, N, had to be chosen manually, since it was not possible to optimise this parameter within gPROMS' estimation routines. However, once a reliable method had been developed for the estimation of the transport coefficients, multiple optimisations were run across a range of N values. Using this manual method, the optimum number of elements was determined to be 50.
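The fitting step can likewise be reproduced outside gPROMS. The sketch below is an assumed Python/SciPy setup, not the authors' gPROMS estimation routine: it fits k1-k3 to a unit-area RTD curve by non-linear least squares, with synthetic data generated from the model standing in for the normalised plant measurements.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

N = 50  # number of elements, chosen manually as in the text

def rtd(k, t_eval):
    """Impulse response of the series-parallel model: a unit tracer mass is
    placed in the first passive element; the outlet flow k1*m_{N,p} is the RTD."""
    k1, k2, k3 = k
    def rhs(t, y):
        m_p, m_a = y[:N], y[N:]
        inflow = np.concatenate([[0.0], k1 * m_p[:-1]])
        return np.concatenate([inflow - (k1 + k2) * m_p + k3 * m_a,
                               k2 * m_p - k3 * m_a])
    y0 = np.zeros(2 * N)
    y0[0] = 1.0
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), y0, t_eval=t_eval, method="LSODA")
    return k1 * sol.y[N - 1]

t_data = np.linspace(0.01, 1.0, 40)         # dimensionless sampling times
e_data = rtd([353.4, 11.1, 50.8], t_data)   # synthetic stand-in for plant data

fit = least_squares(lambda k: rtd(k, t_data) - e_data,
                    x0=[300.0, 10.0, 40.0], bounds=(0.0, np.inf))
print("estimated k1, k2, k3:", fit.x)
```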
Another important shortcoming of the proposed model is that the transport coefficients are not physically meaningful. However, this shortfall is more than made up for in terms of model simplicity gains. At steady state, the mass ratio of active solids to total solids in the dryer, α, is related to the transport coefficients according to

α = m_a / (m_a + m_p) = k₂ / (k₂ + k₃)    (6)
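As a numerical check, substituting the coefficients estimated in Figure 5 gives α = 11.09308 / (11.09308 + 50.83923) ≈ 0.179, consistent with the 18% figure quoted below.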
The optimised parameters for k2 and k3 yield a mass ratio of active to total solids of 18%, which is comparable to results presented by Matchett and Baker (1988) in their experimental study of rotary dryers. This encouraging result is more a matter of serendipity, since there are no physical constraints in our model to guarantee this result for any given set of RTD data. As a result, α was fixed at a value of 20% and adjustment was made only to k1 and k3 in their estimation (since k2 is then fixed by α). A parameter estimation procedure was set up in gPROMS, but failed to deliver meaningful estimates for the transport coefficients. The reasons for this are unclear and are currently being investigated.
4. Conclusions and Outlook
This study proposes a simple approach to modelling the dynamics of solids transport within a flighted rotary dryer. The approach taken was to model the system in a series-parallel formulation of well-mixed tanks. The concept of active and passive solids is important, since it will lend itself well to the addition of mass and energy balance relations. This model formulation predicts the RTD of the system. Industrial RTD data was obtained from a 100 tonne per hour dryer and compared with the model predictions. gPROMS parameter estimation has delivered overall transport coefficients for this system. The transport coefficients are not independent, nor completely physically meaningful. However, they produce a very simple model formulation, which forms the basis for more detailed rotary dryer models incorporating mass and energy balances. Future work will see the development of a full dryer model based on the proposed solids transport model. Refinements will be made to the model to incorporate the effects of solids moisture and interaction with the counter current air stream.
5. References
Douglas, P.L., Kwade, A., Lee, P.L. and Mallick, S.K., 1993, Drying Tech., 11(1), 129-155.
Duchesne, C., Thibault, J. and Bazin, C., 1996, Ind. Eng. Chem. Res., 35, 2334-2341.
Matchett, A.J. and Baker, C.G.J., 1988, Particle Residence Times in Cascading Rotary Dryers Part 2 - Application of the Two-stream Model to Experimental and Industrial Data, J. Separ. Proc., Vol. 9, 5-13.
Sheehan, M.E. and Schneider, P.A., 2000, Modelling of Rotary Sugar Dryers: Steady State Results, Proceedings of Chemeca 2000, Perth.
On-Line Process Optimisation: Parameter Tuning for the Real Time Evolution (RTE) Approach
Sebastian Eloy Sequeira¹, Miguel Herrera², Moises Graells², Luis Puigjaner¹
Chemical Engineering Department, Universitat Politecnica de Catalunya
¹ETSEIB, Av. Diagonal 647, Pav. G, Barcelona (08028), Spain
²EUETIB, Comte d'Urgell 187, Barcelona (08036), Spain
Abstract This paper describes a methodology proposed for tuning RTE (Real Time Evolution) parameters. RTE has been introduced in a previous work (Sequeira et al., 2002) as a new approach to on-line model-based optimisation. Such a strategy differs from classical Real Time Optimisation in that waiting for steady state is not necessary, and also in the use of simple optimisation concepts in the solution procedure. Instead, current plant set points are periodically improved around the current operation neighbourhood, following an also periodically updated model. Thus, Real Time Evolution (RTE) is based on a continuous improvement of the plant operation rather than on the formal optimisation of a hypothetical future steady state operation. In spite of using a simpler scheme, the proposed strategy offers a faster response to disturbances, better adaptation to changing conditions and smoother plant operation regardless of the complexity of the control layer. However, a successful application of such a strategy requires an appropriate parameter tuning, that is: how often set points should be adjusted and what exactly neighbourhood means. Although the optimal values of these parameters strongly depend on the process dynamics and involve complex calculations, this work uses a simple benchmark to obtain general guidelines and illustrates a methodology for easy parameter tuning as a function of the process information typically available.
1. RTO and RTE Fundamentals The classical RTO loop (Marlin and Hrymak, 1997; Perkins, 1998) consists of subsystems for measurement validation, steady state detection, process model updating, model-based optimisation and command conditioning. Once the plant operation has reached steady state, plant data are collected and validated to avoid gross error in the process measurements, while the measurements may be reconciled using material and energy balances to ensure consistency of the data set used for model updating. After validation, the information is used to estimate the model parameters so that the model represents correctly the plant at the current operating point. Then, the optimum controller set points are calculated using the updated model, and they are sent to the control system after a check by the command conditioning subsystem. Real Time Evolution has been introduced as an alternative to current RTO systems. The key idea is to obtain a continuous adjustment of set point values, according to current operating conditions and disturbance measurements (those which affect the optimum location) using a steady state model. Table 1 summarises and compares the relevant features of both approaches. The steady state information is used by RTE only for data
reconciliation and model updating, while the core of the system is the recursive improvement, which does not need the process to be at steady state. Table 1 summarises and compares the relevant features of both approaches.

Table 1: Functional sequences for RTO and RTE.

RTO: Data acquisition / Data pre-processing. IF UNSTEADY: check steadiness and wait. IF STEADY: Data Validation; Model Updating; Optimisation (optimal set-point values); Implementation.

RTE: Data acquisition / Data pre-processing. IF STEADY: Data Validation; Model Updating. In either case (steady or unsteady): Improvement (best small set-point changes); Implementation.
The improvement algorithm consists in the following: given the current point and current information about the disturbances, simulate a few changes in the decision variables in a pre-defined small neighbourhood around the current point, using a steady state model. The output of this algorithm is the best point in terms of the steady state objective function, which also needs to satisfy the required constraints. Thus, the RTE approach can be seen as a variant of the EVOP strategy (Box, 1969), which relies on a model instead of plant experiments and avoids wasting resources in non-profitable trial moves. In addition, it does not require waiting for steady state, so that an adequate tuning of the RTE parameters allows the system to follow pseudo steady states, hence improving the economic performance even under continuous disturbances.
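A single RTE execution can be summarised in a few lines of code. The following Python sketch is a hedged illustration, not the authors' implementation: the ±χ probing pattern, the constraint interface and the steady-state model signature are all assumptions.

```python
import itertools
import numpy as np

def rte_improvement(x_current, disturbances, steady_model, chi, constraints):
    """One RTE execution: probe small set-point changes of size chi around the
    current point and return the best feasible candidate.
    steady_model(x, d) -> steady-state objective value (to be maximised)."""
    best_x = np.asarray(x_current, dtype=float)
    best_f = steady_model(best_x, disturbances)
    # Candidate moves: -chi, 0, +chi on each decision variable.
    for step in itertools.product((-1.0, 0.0, 1.0), repeat=len(best_x)):
        cand = np.asarray(x_current) + np.asarray(step) * chi
        if not all(g(cand) for g in constraints):
            continue  # discard infeasible trial points
        f = steady_model(cand, disturbances)
        if f > best_f:
            best_x, best_f = cand, f
    return best_x  # implemented as the new set points; repeat after DT
```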
2. The Parameter Tuning Problem
For a given process and a given set of disturbance patterns entering the system, the tuning problem consists in finding the values of the RTE parameters: the time between executions, DT, and the "small" neighbourhood of maximum allowed changes, represented by a vector χ. In the following, the influence of these parameters on the system performance is summarised, and some guidelines are extracted to properly adjust them for the process under consideration (the Williams-Otto reactor, Williams and Otto, 1960, as modelled in Sequeira et al., 2002, is used in this work).
2.1. Neighbourhood size χ
When the improvement procedure is repeated n times, the local optimum is expected to be found with an acceptable degree of accuracy. The greater the values in χ, the more inaccurate the result. The lower the values in χ, the higher the possibility of being trapped in a saddle point or of being affected by rounding errors (note that every point requires solving a non-linear equation system), and the lower the possibility of reaching the final value within the given number of iterations (n). Then, χ can be considered as a parameter of an optimisation algorithm (in this case, the recursive application of the improvement). Given that the optimisation procedure only determines the best steady state, its tuning can then be de-coupled from the process dynamics. In this way, the tuning problem can be stated as finding χ such that the distance between the true optimal point (f^opt) and that found using the recursive RTE algorithm (f^RTE) is minimised for all the expected conditions:
min_χ  z = [ (1/m) Σ_{i=1}^{m} ( (f^RTE(χ, ξ_i) − f^opt(ξ_i)) / f^opt(ξ_i) )^p ]^{1/p}    (1)
where ξ_i, i = 1 to m, is the discretisation of the range of possible values for the disturbances with economic influence. Obviously, only in a few cases is it possible to identify the "true" optimum, but it can be approximated by a reference optimisation method able to give the optimum with the desired degree of accuracy. Additionally, p will commonly be assigned the value two (Euclidean distance). Then, the procedure becomes:
- Identify the range of variability of the disturbances to evaluate the ξ_i values
- Select the reference method for estimating f^opt
- Solve the minimisation problem (eq. 1).
In addition, an appropriate scaling of f and the decision variables will likely allow using the same value for all the components of the vector χ, thus reducing substantially the computational effort. Figure 1 shows the value of z for increasing values of χ, using different and arbitrary values of n. Note that for a changing n there is an acceptable range rather than a punctual optimum (in this case χ = 5, 6 and 7). This fact is indeed desirable for the overall procedure, as will be explained in a subsequent section.
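In practice, eq. (1) lends itself to a simple grid search over candidate χ values. The Python sketch below is one possible reading of that procedure; the callables f_opt and f_rte, the disturbance grid and the scalar χ are all assumed placeholders.

```python
import numpy as np

def tune_chi(chi_grid, xis, f_opt, f_rte, p=2):
    """Grid search for Eq. (1): pick the chi that minimises the normalised
    p-norm distance between the reference optima f_opt(xi) and the optima
    reached by the recursive RTE improvement, f_rte(chi, xi)."""
    def z(chi):
        errs = [(f_rte(chi, xi) - f_opt(xi)) / f_opt(xi) for xi in xis]
        return np.mean(np.abs(errs) ** p) ** (1.0 / p)
    scores = [z(chi) for chi in chi_grid]
    return chi_grid[int(np.argmin(scores))]
```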
Figure 1: Influence of χ on z and its tuning.
2.2. Execution period DT
A given disturbance p triggers the RTE procedure that will periodically improve the set points until no further improvement is possible. By changing the RTE frequency (1/DT)
different set point profiles are obtained for the same disturbance pattern. Thus, the question is the determination of the best value of DT. In order to compare economic performance, the Mean Objective Function is used:
MOF(t) = ( ∫_{t₀}^{t} IOF(θ) dθ ) / (t − t₀)    (2)
where IOF denotes the hypothetical on-line measurement of the objective function (Instantaneous Objective Function), and t₀ is a reference time (in this work, the time at which the disturbance occurs). The effect of DT on the system performance has been studied by exciting the system with different ramp disturbances (with the same final values) and applying RTE with different DT values. Figure 2 summarises some of the MOF profiles obtained. Charts a and b correspond to disturbances that favour the steady state objective function, the opposite being the case for charts c and d.
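Eq. (2) amounts to a running time-average of the measured objective. A small Python helper, assuming sampled trajectories rather than any particular plant interface, could look as follows.

```python
import numpy as np

def mof(t, iof, t0):
    """Mean Objective Function, Eq. (2): running time-average of the
    instantaneous objective IOF from the disturbance time t0 onward.
    t and iof are sampled trajectories (hypothetical data)."""
    mask = t >= t0
    ts, ys = t[mask], iof[mask]
    integral = np.concatenate(
        [[0.0], np.cumsum(np.diff(ts) * 0.5 * (ys[1:] + ys[:-1]))])  # trapezoids
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(ts > t0, integral / (ts - t0), ys)
```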
Figure 2: Response of the RTE system for different ramp disturbances and the influence of DT.
It has been observed that when the disturbance makes the steady state objective function decrease (in this case when dp/dt < 0), the smaller the DT value, the better the performance in terms of MOF. There is a point (DTa) from which the benefit of reducing DT is negligible. Besides, as the slope of the disturbance increases, the DTa value decreases. On the other hand, when the disturbance makes the steady state objective function increase (in this case dp/dt > 0), the bigger the DT value, the better the performance in terms of MOF. There is also a point (DTa) from which the benefit of increasing DT is negligible. In addition, the DTa value increases with the slope of the disturbance.
Such an observation is summarized in Figure 3, where the dotted area indicates the region of "good" DT values according to the current value of dp/dt. This suggested a short term on-line tuning of DT (Adaptive RTE, ARTE), which follows for instance the straight dashed line in Figure 3 according to current values of dp/dt. This ARTE policy has then been applied over a long-term simulation for a sinusoidal disturbance. It has been compared with an RTE strategy using different fixed DT values and also with no action as a reference. The results in Figure 4 indicate, in opposition to the previous thought, that for a persistent disturbance using a fixed DT value works better than the short term ARTE approach, the latter being better only in the small region corresponding to the initiation of the disturbance. That can be explained considering that the MOF profiles, although showing relevant information about the performance, are hiding essential information about the capacity of reacting to the next value of the disturbance (the current decision variables' values). Therefore, although during an initial time interval bigger values of DT lead to better performance in terms of MOF (Figures 2a and 2b), the corresponding process state is not so well prepared for new disturbances as in the case of lower values of DT. This means that the peaks in Figures 2a and 2b in the no-action curves correspond just to an inertial effect, which disappears in the case of persistent disturbances.
Figure 3: Variation of DTa with dp/dt.
Figure 4: System performance for a sinusoidal disturbance.
However, as shown in Figure 4, it can be seen that there is again a DTa value below which further improvement by decreasing DT is not perceptible, that DTa thus being the desired value for DT. Obviously, such a DT value depends on the disturbance frequency rather than on its instantaneous derivative, and then an adaptive RTE tuning procedure is expected to produce better results when based on a mid-term and periodical characterisation of the disturbance in frequency terms (i.e. Fourier Transform). Unfortunately, given the non-linearity of the system, a simple linear identification (i.e. using the step response) is not sufficiently reliable for this specific case.
3. The proposed tuning procedure The proposed methodology for the tuning procedure consists in the following basic steps:
Estimate DTa
Do
    Make DT* = DTa
    Determine n as Tr/DT (Tr is the settling time of the process)
    Find the χ value that minimises z (Section 2.1)
    Characterise the disturbance in terms of amplitude and frequency
    Find DTa (Section 2.2)
Loop Until DTa = DT*
It should be noted that χ/DT must not exceed the capabilities of the control system. In such a case, the value of DT will have to be increased. Besides, the extension to several disturbances, although not studied here, is expected to be governed by the dominant frequency.
4. Conclusions
This work briefly shows some findings about the influence of the RTE parameters on the process economic performance. As a result, a methodology for an adequate tuning of these parameters is proposed. It is shown how the parameters related to the control variables can be tuned just by using the steady state model. On the other hand, the time parameter needs both the characterisation of the disturbance in terms of amplitude and frequency and further testing over a dynamic simulation of the process. In addition, a periodical characterisation of the disturbances allows an on-line adaptation of the parameters.
5. References
Box, G., Draper, N., 1969, Evolutionary Operation: A Statistical Method for Process Improvement, Wiley, New York.
Marlin, T.E., Hrymak, A.N., 1997, In ICCPC, AIChE Symp. Ser. (316), 156.
Perkins, J.D., 1998, In FOCAPO, J.F. Pekny and G.E. Blau, Eds., AIChE Symp. Ser., 94 (320), 15.
Sequeira, S.E., Graells, M., Puigjaner, L., 2002, Ind. Eng. Chem. Res., 41, 1815.
Williams, T.J., Otto, R.E., 1960, A.I.E.E. Trans., 79 (Nov), 458.
6. Acknowledgements
One of the authors (S.E.S.) wishes to thank the Spanish "Ministerio de Ciencia y Tecnologia" for financial support (grant FPI). Financial support from CICYT (MCyT, Spain) is gratefully acknowledged (project REALISSTICO, QUI-991091).
Multi-Objective Optimization System MOON² on the Internet
Yoshiaki Shimizu, Yasutsugu Tanaka and Atsuyuki Kawada
Department of Production Systems Engineering, Toyohashi University of Technology, Toyohashi 441-8580, Japan, email: [email protected]
Abstract Recently, multi-objective optimization (MOP) has been highly required to deal with the complex and global decision environment toward agile and flexible manufacturing. To facilitate its wide application, in this paper, we have implemented a novel method named MOON² (Multi-Objective optimization with value function modeled by Neural Network) as a Web-based application. Thereby, everyone can engage in MOP readily regardless of his/her depth of knowledge about MOP. Also, its client-server architecture requires us to prepare only a web browser, and realizes usage independent of the users' computer configuration and free from maintenance of the software. After outlining the solution procedure of MOON², the proposed system configuration will be explained with an illustration.
1. Introduction
To support agile and flexible manufacturing in a complex and global decision environment, multi-objective optimization (MOP) is of increasing interest for solving various problems in chemical engineering (Shimizu, 1999; Bhaskar, Gupta, and Ray, 2000). To avoid the stiffness and shortcomings encountered in the conventional methods, we proposed a new prior articulation method named MOON² (Multi-Objective optimization with value function modeled by Neural Network) (Shimizu and Kawada, 2002). To facilitate its wide application, in this paper, we have implemented its algorithm as a Web-based application. It is realized as a client-server architecture through the common gateway interface (CGI) so that everyone can use the system regardless of his/her own computation environment. After presenting the algorithm of MOON² briefly, the configuration and usage of the proposed system will be shown illustratively.
2. Solution Procedure through MOON²
The problem concerned here will be described generally as follows.

(p.1)  Min f(x) = {f1(x), f2(x), ..., fN(x)}  subject to x ∈ X,
where x denotes a decision variable vector, X a feasible region, and f an objective function vector, some elements of which conflict and are incommensurable with each other. Generally speaking, MOP methods can be classified into prior articulation methods and interactive ones. However, conventional methods of MOP have both advantages and disadvantages over each other. For example, since the former derives a value function
separately from the searching process, the decision maker (DM) will not be bothered by tedious interactions during the searching process, as he/she would be in the latter. On the other hand, though the latter can elaborately articulate the attainability among the conflicting objectives, the former pays little attention to that. Consequently, the derived solution may be far from the best compromise for the DM. In contrast, MOON² can not only resolve these problems but also handle any kind of problem, i.e., linear programs, non-linear programs, integer programs, and mixed-integer programs under multiple objectives, by incorporating proper optimization methods.
2.1. Identification of value function using neural networks
First we need to identify a value function that integrates each objective function into an overall one. For this purpose, we adopted a neural network (NN) due to its superior ability for nonlinear modeling. Its training data is gathered through pair comparisons regarding the relative preference of the DM among the trial solutions. That is, the DM is asked to reply which of every pair of trial solutions he/she prefers, and by how much. Just as in AHP (Analytic Hierarchy Process, Saaty, 1980), such responses are given using linguistic statements, and then transformed into scores as shown in Table 1. After doing such pair comparisons over k trial solutions¹, we can obtain a pair comparison matrix whose i-j element a_ij represents the degree of preference of f^i compared with f^j (refer to Fig. 3, which appears later).

Table 1. Conversion table.
Linguistic statement                                 Score a_ij
Equally                                              1
Moderately                                           3
Strongly                                             5
Demonstrably                                         7
Extremely                                            9
Intermediate judgment between the two adjacent       2, 4, 6, 8
After all, the pair comparison matrix provides in total k² training data for a NN with a feed-forward structure consisting of three layers. The objective values of every pair, say f^i and f^j, become the 2N inputs, and the i-j element a_ij the single output. Depending on the dimension of the inputs, an appropriate number of hidden nodes is to be used. Using some test problems, we ascertained that a few typical value functions can be modeled correctly by a reasonable number of pair comparisons as long as the number of objective functions is less than or equal to three (Shimizu, 1999). By viewing the thus trained NN as a function V_NN such that {f^i(x), f^j(x)} → a_ij ∈ R¹, it should be noticed that the following relation holds. Hence we can rank the preference of any trial solutions easily by the output from the NN, which is calculated by fixing one of the input vectors at an appropriate reference, say f^R:

V_NN(f(x); f^R) = a ∈ R    (2)

¹ Under mild conditions, the total number of comparisons is limited to k(k-1)/2.
Since the responses required from the DM are simple and relative, his/her load in the trade-off analysis is very small.
2.2. Incorporation with optimization methods
Now the problem to be solved can be described as follows.

(p.2)  Max V_NN(f(x), f^R)  subject to x ∈ X
Since we can evaluate any solution from V_NN under multiple objectives once x is prescribed, we can apply the optimization method most appropriate for the problem under concern, i.e., a nonlinear program, a direct search method, or even a metaheuristic method like genetic algorithms, simulated annealing, tabu search, etc. Also, we can verify that the optimal solution of (p.2) is located on the Pareto optimal solution set as long as Eq. (1) holds (Shimizu and Tanaka, to appear). If we use an algorithm that requires gradients of the objective function, like nonlinear programs, we can calculate them conveniently by the following relation:

dV_NN(f(x), f^R)/dx = [∂V_NN(f(x), f^R)/∂f(x)]ᵀ [∂f(x)/∂x]    (3)

We can complete the above calculation by applying numeric differentiation for the first term on the R.H.S. of Eq. (3) while deriving the analytic form for the second term.
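To make the procedure concrete, the following Python sketch mimics the MOON² workflow on a toy bi-objective problem: a small feed-forward network is trained on pair-comparison scores and then maximised by a gradient-based search, as in (p.2). Everything here is an illustrative stand-in (the synthetic DM, the toy objectives, the network size), not the MOON² system itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def f(x):  # toy bi-objective function (hypothetical)
    return np.array([x[0]**2 + x[1]**2, (x[0] - 2.0)**2 + x[1]**2])

def dm_score(fi, fj):
    # Synthetic pair comparison on the 1-9 AHP scale (a real DM replies here).
    return float(np.clip(5.0 + (fj.sum() - fi.sum()), 1.0, 9.0))

rng = np.random.default_rng(0)
trials = rng.uniform(-1.0, 3.0, size=(6, 2))      # k = 6 trial solutions
X, y = [], []
for i in range(len(trials)):
    for j in range(len(trials)):
        X.append(np.concatenate([f(trials[i]), f(trials[j])]))  # 2N inputs
        y.append(dm_score(f(trials[i]), f(trials[j])))          # a_ij output
v_nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                    random_state=0).fit(X, y)

f_ref = f(trials[0])                               # reference vector f^R
neg_v = lambda x: -v_nn.predict([np.concatenate([f(x), f_ref])])[0]
result = minimize(neg_v, x0=np.array([0.5, 0.5]))  # solves (p.2) by search
print("compromise solution:", result.x)
```

Here the gradients of V_NN are obtained by the optimizer's numeric differentiation, in the spirit of the first term of Eq. (3).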
3. Implementation as Web-Based Application
For various reasons, such as little knowledge about MOP, the computer environment, etc., it is not necessarily easy for everyone to engage in MOP. To deal with such circumstances, we implemented MOON² on the Internet as a client-server architecture that enables us to carry out MOP readily and effectively. The core of the system is divided into a few independent modules, each of which is realized using an appropriate implementation tool. The optimizer module solves a single objective optimization problem by incorporating the identified value function, specifying the problem in a Fortran programming format, and compiling it with a Fortran compiler. Though only sequential quadratic programming (SQP) is implemented presently, various methods are possibly available, as mentioned already (GA was applied elsewhere; Shimizu, 1999). The identifier module provides the modeling process of the value function based on the neural network, where a pair comparison is easily performed just by mouse click operation on the Web page. Moreover, the graphic module generates various graphical illustrations for easy understanding of the results. The user interface of the MOON² system is a set of Web pages created dynamically during the solution process. The pages, described in HTML (hypertext markup language), are viewed by the user's browser, which is a client of the server computer. The server computer is responsible for data management and computation whereas the client takes care of input and output. That is, users are required to request a certain service and to input some parameters. In turn, they receive the service through visual and/or sensible browser operation. In practice, the user interface is a program creating HTML pages and transferring information between the client and the server. The programs creating HTML pages are written in the CGI programming language Perl. In the role of CGI, every treatment is carried out on the server side, and no particular tasks are assigned to the browser side (see Fig. 1). Consequently, users are not only free from maintenance of the system, such as updates, issues and reinstalls, but are also independent of their computation environment, i.e. operating system, configuration, performance, etc.
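The request/response pattern described above is the standard CGI idiom. The minimal Python sketch below (a stand-in for the Perl scripts; the form field name is hypothetical) shows the shape of such a server-side handler: parse the client's query, compute on the server, and emit an HTML page.

```python
import sys
from urllib.parse import parse_qs

def handle_request(query_string):
    params = parse_qs(query_string)            # client input from the browser
    problem = params.get("problem", ["?"])[0]
    # ... server-side computation (identification, optimization) runs here ...
    return f"<html><body>Received problem: {problem}</body></html>"

if __name__ == "__main__":
    # A CGI program writes the HTTP header followed by the generated page.
    sys.stdout.write("Content-Type: text/html\r\n\r\n")
    sys.stdout.write(handle_request("problem=beam_design"))
```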
Figure 1. Task flow through CGI.
Though there are several sites serving (single-objective) optimization libraries (e.g., http://www-neos.mcs.anl.gov/), none is known regarding MOP except for NIMBUS (Miettinen and Makela, 2000, http://nimbus.math.jyu.fi/) so far. However, since NIMBUS belongs to the interactive methods, it has the disadvantages mentioned already. On the other hand, since the articulation process of MOON² is separated from the searching process, DMs can engage in the interaction at their own pace, and will not be bothered by the hurried/idle responses of the interactive methods. It should also be noted that the required responses are not only simple and relative, but also that DMs do not need any particular knowledge about the theory of MOP. Such easy usage, small load in the trade-off analysis, and maintenance-free features can be expected to facilitate decision making from the comprehensive point of view that is required for agile and flexible problem solving in chemical engineering. The URL of the system is http://www.sc.tutpse.tut.ac.jp/research/multi.html.
4. Illustrative Description of Usage
As a demonstration of the Web-based MOON², we provide a bi-objective design problem regarding a decision on the strength of material (Osyczka, 1984) and a three-objective problem. To grasp the whole idea and the solution procedure of MOON², these examples are most valuable. We also provide a third entry on the web page for the user's original problem. Below, taking the first example, we explain the demonstration of the example problem. Moving to the target Web page, we find a formulation of the problem.
Figure 2. Beam form design problem (l = 1000).
Figure 3. Pair comparison matrix.
(p.3)

f1(x) = (π/4) [ (D1² − x2²) x1 + (l − x1)(D2² − x2²) ]

f2(x) = (64 F)/(3 π E) [ x1³/(D1⁴ − x2⁴) + (l³ − x1³)/(D2⁴ − x2⁴) ]

subject to

g1(x) = 180 − 9.78×10⁶ x1/(4.096×10⁷ − x2⁴) ≥ 0    (4)
g2(x) = 75.2 − x2 ≥ 0    (5)
g3(x) = x2 − 40 ≥ 0    (6)
g4(x) = x1 ≥ 0    (7)
h1(x) = x1 − 5 x2 = 0    (8)
Figure 4. Page representing a final result.
where x1 and x2 denote the tip length of the beam and the interior diameter respectively, as shown in Fig. 2. Inequalities (4)-(7) and equality (8) represent appropriate design conditions. Moreover, objectives f1 and f2 represent the volume of the beam and the static compliance of the beam, respectively. An input page for the problem description is provided to input the equations of the objective functions and constraints in a format similar to the Fortran language. After the repeated processes of input and confirmation, a set of trial solutions for the pair comparisons is generated arbitrarily within the convex hull spanned by the utopia and the nadir solutions². Now, for every pair of the trial solutions, the DM is required to make a pair comparison through a mouse click on a radio button indicator. After showing the pair-comparison matrix thus obtained (see Fig. 3), and checking its inconsistency according to the AHP theory, the training process of the NN starts. Its training results are presented both numerically and schematically. The subsequent stages proceed as follows: select an appropriate optimization method (presently only SQP is available); input the initial guess of SQP for the optimization search; click the start button. The result of the multi-objective optimization is shown schematically, compared with the utopia and the nadir solutions (see Fig. 4). If the DM desires further articulation, an additional search may take place before a satisfying solution has been found. In this case, the same procedures are repeated within a narrower searching space around the earlier solution to improve it.
5. Conclusion
Introducing a novel and general approach for multi-objective optimization named MOON², in this paper we have implemented its algorithm as a Web-based application. It is unnecessary for users to have any particular knowledge about MOP or to prepare a particular computer environment. They need only a Web browser to submit their problem and to indicate their subjective preference between the pairs of trial solutions generated automatically by the system. Eventually, it can facilitate decision making from the comprehensive point of view that is required to pursue sustainable development in process systems. An illustrative description outlines the proposed system and its usage. Further studies should be devoted to adding various optimization methods, as applied elsewhere (Shimizu, 1999; Shimizu and Tanaka, to appear), besides SQP, and to improving certain user services that enable users to save and manage their private problems. The security routine for usage is also an important aspect left for future studies.
6. References
Bhaskar, V., Gupta, K.S. and Ray, K.A., 2000, Reviews in Chem. Engng., 16, 1.
Miettinen, K. and Makela, M.M., 2000, Comput. & Oper. Res., 27, 709.
Osyczka, A., 1984, Multicriterion Optimization in Engineering with Fortran Programs, John Wiley & Sons, New York.
Saaty, T.L., 1980, The Analytic Hierarchy Process, McGraw-Hill, New York.
Shimizu, Y., 1999, J. Chem. Engng. Japan, 32, 51.
Shimizu, Y. and Kawada, A., 2002, Trans. of Soc. Instrument Control Engnrs., 38, 974.
Shimizu, Y. and Tanaka, Y., "A Practical Method for Multi-Objective Scheduling through Soft Computing Approach," JSME Int. J., to appear.

² For example, a utopia is composed of f_i(x_i*) whereas a nadir is composed of min_j f_i(x_j*), (i = 1, ..., N), where x_i* is the optimal solution of the problem "max f_i(x) subject to x ∈ X."
Reduced Order Dynamic Models of Reactive Absorption Processes
S. Singare, C.S. Bildea and J. Grievink
Department of Chemical Technology, Delft University of Technology, Julianalaan 136, 2628 BL, Delft, The Netherlands
Abstract This work investigates the use of reduced order models of reactive absorption processes. Orthogonal collocation (OC), finite difference (FD) and orthogonal collocation on finite elements (OCFE) are compared. All three methods are able to accurately describe the steady state behaviour, but they predict different dynamics. In particular, the OC dynamic models show large unrealistic oscillations. Balanced truncation, residualization and optimal Hankel singular value approximation are applied to linearized models. Results show that a combination of OCFE, linearization and balanced residualization is efficient in terms of model size and accuracy.
1. Introduction
Many important chemical and petrochemical industrial processes, such as the manufacture of sulphuric acid, nitric acid and soda ash, or the purification of synthesis gases, are performed by the reactive absorption of gases in liquids in large scale processing units. For example, the whole fertilizer industry relies on absorption processes. Rising energy prices and more stringent requirements on pollution prevention impose a need to continuously update the processing conditions, design and control of industrial absorption processes. Traditionally, the design of Reactive Absorption Processes (RAP) relies on equilibrium models, whose accuracy has been extensively criticised by both academic and industrial practitioners. In contrast, the rate-based approach (Kenig et al., 1999), accounting for both diffusion and reaction kinetics, provides a very accurate description of RAP. Solving such models requires discretization of the spatial co-ordinates in the governing PDEs. This gives rise to a large set of non-linear ODEs that can be conveniently handled for the purpose of steady state simulation. However, the size of the model becomes critical in a series of applications. For example, real-time optimisation requires fast, easy-to-solve models, because of the repetitive use of the model by the iterative algorithm; in model based control applications, the simulation time should be 100 to 1000 times shorter than the time scale of the real event. This work investigates the use of reduced order RAP models for the purpose of dynamic simulation, controllability analysis and control system design. Three different discretization methods, namely orthogonal collocation (OC), finite difference (FD) and orthogonal collocation on finite elements (OCFE), are compared. All three methods are able to accurately describe the steady state behaviour. However, the predicted dynamic behaviour is very different. In particular, the OC dynamic models show large unrealistic oscillations. In view of control applications, different reduction techniques, including balanced truncation and residualization and optimal Hankel singular value approximation, were applied to linearized models. Results show that a combination of OCFE, linearization and balanced residualization is efficient in terms of model size and accuracy.
2. Model Description
The reactive absorption column is modelled using the well-known two-film model (Fig. 1). In this model, the resistance to mass transfer is concentrated in a thin film adjacent to the phase interface and the mass transfer occurs within this film by steady-state molecular diffusion. The axial co-ordinate z represents the length of the column. The gaseous component A diffuses through the film towards the liquid bulk and in the process reacts with the liquid component B. In the present model, assumptions of plug flow and constant temperature and pressure are made. The dynamic mass balance equations in non-dimensional form for the gas bulk and liquid bulk phases can be written as follows.
2.1. Bulk gas phase mass balance
It is assumed that no reaction occurs in the bulk gas phase and the gas film.

ε r ∂Y_j/∂τ = −∂Y_j/∂z − a_j (Y_j − h_j C_j)    (1)

B.C. at z = 0: Y_j = Y_j,in
where Da is the Damköhler number, a_j and h_j (j = A, B) are the dimensionless mass transfer coefficients and Henry constants, respectively, and r is the ratio between the gas and liquid residence times.
Fig. 1. Schematic of the reactive absorption column model.
2.2. Bulk liquid phase
The second order reaction occurs in the bulk of the liquid, as well as in the liquid film.

(1 − ε) ∂C_{j,L}/∂τ = ∂C_{j,L}/∂z − β_j ∂C_j/∂x|_{x=1} − Da C_{A,L} C_{B,L} (1 − ε)    (2)

B.C. at z = 1: C_{j,L} = C_{j,in};  I.C. at τ = 0: C_{j,L} = C⁰_{j,L}(z)
where β_j are the dimensionless diffusion coefficients.
2.3. Liquid film mass balance
Neglecting the fast dynamics, application of Fick's law of diffusion gives rise to the following set of second order differential equations:

∂²C_j/∂x² = Ha_j² C_A C_B    (3)

B.C. at x = 0: m_j (Y_j − h_j C_j) = ∂C_j/∂x;  at x = 1: C_j = C_{j,L}
Here, Ha is the Hatta number, which represents the ratio of the kinetic reaction rate to the mass transfer rate. As a test case, the data reported by Danckwerts and Sharma (1966) is chosen. The dimensionless parameters are Da = 1.87×10^, a_A = 37.92, β_A = 6.94, β_B = 3.82, Ha_A = 15.48, Ha_B = 20.88, m_A = 5.46, h_A = 1.62, r = 0.325.
3. Solution Method
3.1. Steady state
The complete model of the reactive absorption column is solved using three different discretization methods: (1) orthogonal collocation (OC), (2) finite difference (FD) and (3) orthogonal collocation on finite elements (OCFE). In the case of OC and OCFE, roots of Jacobi orthogonal polynomials are used as collocation points. The FD method requires two discretization schemes in the axial direction: a backward finite difference method (BFDM, 2nd order accuracy) in the up-axial direction for the bulk gas phase, and a forward finite difference method (FFDM, 2nd order accuracy) in the down-axial direction for the bulk liquid phase. In the FD scheme, the liquid film equations are discretized using a central finite difference method (CFDM) of 4th order accuracy. The whole set of equations is written in gPROMS, which solves the set of algebraic non-linear equations using the Newton-Raphson method. The different numerical methods are summarized in Table 1. Results of steady state calculations are shown in Figure 2, where gas and liquid concentration profiles along the column are depicted.
Table 1. Discretization method and number of variables and equations involved.

Discretization method    # discretization points (Axial / Film)    # of variables and equations
OC                       17 / 7                                     289
FD                       51 / 21                                    2295
OCFE                     21 / 13                                    609
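As a side illustration of why the film co-ordinate needs sufficient resolution, the sketch below (assumed Python code, not the authors' gPROMS model) solves a linearised film equation d²c/dx² = Ha²c by central finite differences with simplified Dirichlet boundary conditions; with Ha ≈ 15 the profile decays steeply near the interface, so too coarse a film grid misses it.

```python
import numpy as np

Ha, n = 15.48, 21                      # Hatta number (component A), grid size
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = 1.0; b[0] = 1.0              # c = 1 at the interface (simplified BC)
A[-1, -1] = 1.0; b[-1] = 0.0           # c = 0 at the liquid bulk (simplified BC)
for i in range(1, n - 1):              # (c[i-1] - 2c[i] + c[i+1])/h^2 = Ha^2 c[i]
    A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 - Ha**2

c = np.linalg.solve(A, b)
print("film profile near the interface:", c[:4])
```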
The FD scheme with 51 and 21 discretization points in the axial and film co-ordinates respectively is taken as a basis for comparison with the OC and OCFE methods. In the OC scheme, 15 and 5 internal collocation points (axial and film co-ordinate respectively), and in the OCFE scheme, 5 and 3 finite elements with 3 internal collocation points in each finite element, are needed. In steady state simulation, the OC scheme results in the lowest number of variables and provides accurate results. But it was found that it cannot be used beyond 22 discretization points due to ill-conditioning of the matrix calculations. In such a situation, OCFE provides improved stability with a slightly increased number of variables. As seen, FD requires the largest number of variables to get accurate results. It should be noted that the definition and use of the dimensionless variables allows a robust solution of the model equations, easy convergence being obtained for all three discretization methods and for very crude solution estimates (for example, all concentrations set at 0.5).
3.2. Dynamic
The dynamic simulation of the RAC model was carried out for the above three cases. The dynamic response of the gas and liquid outlet concentrations, Y_A,G,out and C_B,L,out, to changes in the inlet flow rates F_V,G, F_V,L and concentrations Y_A,G,in, C_B,L,in was investigated. Figure 3 presents results for a 0.05 step change in Y_A,G,in (similar results were obtained for the other inputs). The expected, realistic response is a gradual increase of the outlet concentration occurring after a certain dead time. The computed response showed oscillations, which is attributed to the numerical approximation of the convection term. This effect is discussed by Lefevre et al. (2000) in the context of tubular reactors.
Fig. 2. Steady state profiles for the different discretization methods.
Fig. 3. Dynamic response of gas-outlet concentration to a step change of gas-inlet concentration.
The OC scheme produces a large oscillatory response right from the start, without any dead time. Thus, it is not suitable for dynamic simulation purposes. In the case of OCFE and FD, the oscillatory behaviour starts after some dead time. The amplitude is much smaller compared to OC. As expected, oscillations are reduced by increasing the number of discretization points. Taking into account the size of the model and the shape of the dynamic response, OCFE seems the preferred scheme. Further, we used the "Linearize" routine of gPROMS to obtain a linear model. Starting with the OCFE discretization, the linear model has 48 states. This might be too much for the purposes of controllability analysis and control system design. Therefore, we applied different model-reduction techniques (Skogestad and Postlethwaite, 1996). Fig. 4 compares the Bode diagrams of the full-order model and the models reduced to n = 10 states by different techniques. For the frequency range of interest in industrial applications (10 rad / time unit, corresponding roughly to 5 rad/min), the balanced residualization offers the best approximation.
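The reduction step itself can be reproduced with standard tools. The sketch below assumes the python-control package (with its slycot backend) and uses a random stable 48-state surrogate in place of the actual linearised column model; balred's 'matchdc' option corresponds to the DC-matching balanced residualization discussed here, and 'truncate' to balanced truncation.

```python
import numpy as np
import control

# A random stable 48-state SISO system stands in for the linearised column.
rng = np.random.default_rng(1)
n = 48
A = rng.normal(size=(n, n)) - 60.0 * np.eye(n)   # eigenvalues well in the LHP
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))
full = control.ss(A, B, C, 0)

trunc = control.balred(full, 10, method="truncate")  # balanced truncation
resid = control.balred(full, 10, method="matchdc")   # balanced residualization

w = 10.0  # rad per dimensionless time, the frequency range of interest
for name, sys in [("full", full), ("truncated", trunc), ("residualized", resid)]:
    gain = np.abs(np.squeeze(control.evalfr(sys, 1j * w)))
    print(f"{name}: gain at w = 10 -> {float(gain):.4f}")
```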
Fig. 4. Comparison of reduced-order models obtained by different techniques.
Fig. 5. Comparison of linear models of different orders, a) Bode diagram, b) Step response. Figure 5 compares the effect of number of states that are retained in the reduced order model, using both the Bode diagram and step response. The linear model predicts well the dead time and the speed of response. For n= 15, the full and reduced-order models coincide. Reasonable results are obtained for n = 10. If the model is further reduced (n=5), it fails to predict the dynamic behaviour. From the time response presented in Figure 4, it seems that a second-order model including dead time should suffice. It is possible to identify such model, using for example, real plant data. However, we are interested to obtain the model starting with first-principles model. This is the subject of current research.
4. Conclusions - A dynamic model of reactive absorption column is developed in non-dimensional form. Three discretization methods; OC, FD, OCFE are used to solve the model equations. For steady state process synthesis and optimisation, orthogonal collocation based methods are found accurate and robust. - For dynamic simulation^ pure OC is unsuitable. OCFE is found to give realistic representation of column's behaviour, together with a small-size model. This presents a good option to FD scheme. Linear model reduction techniques are further applied to reduce the model for control design purpose. Balanced residualization with 15 states approximates satisfactorily the column dynamics. - In the future work, more complex reaction schemes, Maxwell-Stefan equations for diffusion, heat balance, axial dispersion term, hydrodynamics, thermodynamics, tray columns will be included in the model.
5. References Danckwerts and Sharma, 1966, Chem. Engrs. (London), CE 244. Kenig, E.Y., Schneider, R., Gorak, A., 1999, Chem. Eng. Sci. 54, 5195. Lefevre, L., Dochain, D., Magnus, A., 2000, Comp. & Chem. Engg. 24,2571. Skogestad, S. and Postlethwaite, L, 1996, Multivariable Feedback Control - Analysis and Design, John Wiley & Sons, Chichester.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
935
Separation of Azeotropic Mixtures in Closed Batch Distillation Arrangements S. Skouras and S. Skogestad Norwegian University of Science & Technology, Department of Chemical Engineering Sem Saelandsvei 4, 7491, Trondheim, Norway e-mail: [email protected], [email protected]
Abstract Batch time (energy) requirements are provided for the separation of ternary zeotropic and heteroazeotropic mixtures in three closed batch column configurations. Two multivessel column modifications (with and without vapor bypass) and a conventional batch column operated under the cyclic policy, were studied. The multivessel column performs always better than the conventional column and the time savings vary from 24% up to 54 %. Moreover, by eliminating the vapor bypass in the multivessel, additional time savings of 26% can be achieved for a zeotropic mixture. However, the multivessel with the vapor bypass should be used for the heteroazeotropic mixtures.
1. Introduction The multivessel batch column is a combination of a batch rectifier and a batch stripper. The column has both a rectifying and a stripping section and therefore it is possible to obtain a light and a heavy fraction simultaneouslyfi*omthe top and the bottom of the column, while an intermediate fraction may also be recovered in the middle vessel. Two modifications of the multivessel are studied here. First, the vapor bypass modification where the vapor from the stripping section bypasses the middle vessel and enters the rectifying section and second, a modification where both liquid and vapor streams enter the middle vessel. We refer to the first modification as conventional multivessel and to the second one as multivessel without vapor bypass. The third column configuration studied in this work is a conventional batch column (rectifier) operated with the cyclic policy. We refer to this column as cyclic column. The cyclic policy has been noted before in the literature by Sorensen and Skogestad (1994) and is easier to operate and control. All column configurations are shown in Fig. 1. Batch time comparisons are provided for the separation of one zeotropic and two heteroazeotropic systems. We consider batch time as a direct indication of energy consumption since the boilup is constant for all columns. The columns are operated as closed systems. In the multivessel a ternary mixture is separated simultaneously in one such close operation and the final products are accumulated in the vessels (Wittgens et al, 1996). In the cyclic column the products are separated one at each time and for a ternary mixture a sequence of two such closed operations is needed. An indirect level control strategy based on temperature feedback control is implemented as proposed by Skogestad etal (1997).
936 a)
b) Condenser
JS§ H-^
XL-.^
L^
-^
Nr r
c)
Rectifying section
Nr h
"^^
Middle Vessel
section -Middle Vessel
^/5 H—T —-^ Ns
Stripping section
HS^—1'^°''
" ^
N
-^
Stripping
Ns
Reb oiler
Figure 1. a, b) Multivessel batch column with and without vapor bypass, c) Cyclic batch column.
2. Simulations 2.1. Zeotropic systems The zeotropic system of methanol-ethanol-1-propanol was studied. Multivessel column: The zeotropic mixture is separated simultaneously in one closed operation. All three original components are accumulated in the three vessels at the end of the process, as shown in Figure 2a. Multivessel column without vapor bypass: The separation is performed as mentioned above. With this modification the light component is depleted faster from the middle vessel. This leads to improved composition dynamics in the middle vessel and it can be advantageous for some separations, as we will show later. Cvclic column: The separation is performed in two cycles that resembles to the direct split in continuous columns. During cycle 1 the light component (methanol) is accumulated in the top vessel (Fig 2b). Cycle 2 is almost a binary separation of the two components left in the still. The intermediate component (ethanol) is discharged from the top vessel, while the heaviest one (1-propanol) remains in the still (Fig 2c).
A*'
IIMIMWK^^^«M«
A \
j ^
'^ ^
.'^^...
• .liiili • 1"
''""""T^
Figure 2. a) Simultaneous separation of a zeotropic mixture in the multivessel column, b, c) Separation of a zeotropic mixture in two cycles in the cyclic column.
937 2.2, Azeotropic systems Two classes of heteroazeotropic systems were studied, namely classes 1.0-2 and 1.0-la. Skouras and Skogestad (2003) provided simulated results for the separation of different classes of heteroazeotropic systems in a closed multivessel-decanter hybrid. 2.2.7. Topological class 1.0-2 Water and 1-butanol form a heterogeneous azeotrope and an immiscibility gap over a limited region of ternary compositions exists. The stability of the stationary points of the system and the distillation line map modeled by UNIQUAC are shown on Figure 3a. One distillation boundary, running from methanol (unstable node) to the binary heteroazeotrope (saddle) divides the composition space in two regions. The system belongs to Serafimov's topological class 1.0-2 (Hilmen, 2002). Multivessel column: For separating a heteroazeotropic mixture of this topological class a decanter has to take the place of the middle vessel. The mixture is separated simultaneously in one closed operation with an initial built-up period. During this period the composition profile is built-up and the heteroazeotrope accumulates in the middle vessel (Fig. 4a). At the second (decanting) period the heteroazeorope is decanted and the organic phase is refluxed back in the column. The aqueous phase accumulates in the middle vessel, while methanol and 1-butanol are accumulated in the top and bottom vessel, respectively, as shown in Fig. 4b. Cyclic column: The separation is performed in two cycles with a built-up period in between. During Cycle 1, methanol is accumulated in the top vessel (Fig 5a). Then a built-up period is needed where the heteroazeotrope accumulates in the top. Cycle 2 is a heteroazeotropic distillation with a decanter taking the place of the top vessel. The aqueous phase is gradually accumulated in the top vessel (see Fig. 5b) and the organic phase is refluxed back in the column. The still is getting enriched in 1-butanol (Fig. 5b). Methanol (un) 64.6 "C
a) Serafimovs class 1J)-1a
Serafimov^dass1J)-2 /
' binodal curve at 2S*'C distillation boundary distillation linos
/ /
/
/
/'
j \
/'
hetaz (un) 70.8 "^C
/' \\
w liet.az (^ 92.1 °C
Water (sn) 100.0 °C
Figure 3. Azeotropic systems of a) topogical class 1.0-2 and b) topological class 1.0-la.
938
" - ^ binodal cunre at 25 "C
" ~ * binodal curve at 25 "C
-o-o- column liquid profile
-0-0- column liquid profile
composition evolution
composition evolution
1-Butanol 117.7 °C
Figure 4. Separation in the multivessel column, a) Build up period b) Decanting period. 2.2.2. Topological class 1.0-1 a Ethyl acetate and water form a heterogeneous azeotrope and an immiscibihty gap over a Umited region of ternary compositions exists. The corresponding distillation lines map is shown in figure 3b. The system belongs to Srafimov's topological class 1.0-la. Multivessel column: For this class of heteroazeotropic systems the decanter has to be placed at the top of the column. The mixture is separated simultaneously in one closed operation after an initial built-up period. During this built-up period the heteroazeotrope accumulates in the top vessel. At the second (decanting) period the heteroazeorope is decanted and the organic phase is refluxed back in the column. The aqueous phase accumulates in the top vessel, ethyl acetate in the middle vessel and acetic acid in the bottom. At the end of the process three pure products are accumulated in the vessels. Cyclic column: The separation is performed again in two cycles but with a built-up period before the cycles. During this built-up period the heteroazeotrope accumulates in the top vessel. During Cycle 1 this heteroazeotrope is decanted and the organic phase is refluxed back in the column. The aqueous phase is accumulated in the top vessel. Cycle 2 is almost a binary separation between ethyl acetate and acetic acid. The first one is recovered at the top vessel while the second remains in the still. Methanol (un) 64.6 "C
' binodal cuive at 2S **C
"^
I- column liquid profile
binodal curve at 25 * C
0-0- column liquid profile
,. composition evolution
composition evolution
het.az (^ 921°C
Water (sn) 100.0 °C
Figure 5. Separation in the cyclic column a) Cycle 1 b) Cycle 2.
,az (^
Water (sn)
rc
100 o°c
939
3. Results All simulations were terminated when the specifications in all vessels were fulfilled. Results are provided for two specification sets. i) Zeotropic system: x^pec = [0.99, 0.97, 0.99], x ^ c = [0.99, 0.98, 0.99] In the second set higher purity is required for the intermediate component. ii) Azeotropic mixture of class 1.0-2: x^pec= [0.99, 0.97, 0.99], x ^ c = [0.99, 0.98, 0.99] The heteroazeotrope is the intermediate 'component' (saddle). In the multivessel it is accumulated in the middle vessel/decanter. After decantation the aqueous phase is accumulated in the middle vessel. In the cyclic column the aqueous phase is the top product of Cycle 2. The specification for the aqueous phase (Xaq=0.98) in the second set is close to the equilibrium value (Xaq^''^=0.981) determined by the binodal curve at 25°C. iii) Azeotropic mixture of class 1.0-la: x^pec=[0.97, 0.97, 0.99], x\pec=[0.98, 0.97, 0.99] The heteroazeotrope is the light 'component' (unstable node). After decantation the aqueous phase is accumulated in the top vessel/decanter in the multivessel column. In the cyclic column the aqueous phase is the top product of Cycle 1. The specification for the aqueous phase (Xaq=0.98) is close to the experimental equilibrium value (Xaq"'P=0.985) determined by the binodal curve at 30°C. Charging of the column, preheating, product discharging and shutdown are not included in the time calculations. All these time periods would be the same for both the multivessel and the cyclic column. The only exception is the product discharging period, which is higher for the cyclic column, since the products are separated one at each time and they have to be discharged twice. All columns have sufficient number of trays for the given separarion. Same number of stages was used in both the multivessel and the cyclic column on order to be fair in our time comparisons. A modified multivesesel without a vapor bypass (Fig lb) was studied. The conventional multivessel (Fig la) with the vapor bypass has an inherent inability to 'boil away' the light component fi-om the middle vessel. The idea behind the modified multivessel is that the vapor stream entering the middle vessel will help the light component to be boiled off faster, thus, improving the composition dynamics in the middle vessel. The results in Table 1 prove that this is true. For the zeotropic mixture the modified multivessel is 26% faster. The improvement is more pronounced for the separation of the first heteroazeotropic mixture of class 1.0-2, where the time savings go up to 37%. This is because the accumulation of the aqueous phase takes place in the middle vessel (for this class of mixtures) and it is very time consuming. Therefore, the improved middle vessel dynamics have a greater effect on the separation of a heteroazeotropic mixture of class 1.0-2 than on a zeotropic mixture. A rather surprising result is the one observed for the separation of the heteroazeotropic system of class 1.0-la. The modified multivessel does not exhibit any significant advantage over the conventional one (7% time savings for specification set 1) and it can be even slower (6% more time consuming for specification set 2). The explanation is simple. For systems of class 1.0-la the heteroazeotrope is the unstable node and it is accumulated in the top vessel. Therefore the liquid-liquid split and the accumulation of the aqueous phase is taking place in a decanter in the top. The dynamics of the top
940 vessel dominates the separation. The improved dynamics of the middle vessel in the modified multivessel are not playing an important role anymore. Table 1. Batch time calculations and time savings (basis: conventional multivessel). Zeotropic system [0.99,0.97,0.99] [0.99,0.98,0.991 Heteroazeotropic systems Class 1.0-2 [0.99,0.97,0.99] [0.99,0.98,0.99] Class 1.0-la [0.97,0.97,0.99] [0.99,0.98,0.99]
Conventional multivessel (with vapor bypass) 3.6 hr 3.9 hr
Modified multivessel (w/o vapor bypass) -26% -26%
Cyclic column
3.1 hr 4.6 hr
-29% -37%
+28% +24%
2.6 hr 3.7 hr
+7% -6%
+54% +44%
+53% +46%
However, a modified multivessel for the separation of heteroazeotropic mixtures is problematic from the practical point of view. It is not practical to have a decanter where a vapor phase is bubbled through. Also the decanter is operated in a temperature lower than that of the column and a hot vapor stream entering the decanter is not very wise. A look to all the results presented in this work reveal that the multivessel column is in all cases preferable over the cyclic column in terms of batch time (energy) savings. For the separation of azeotropic mixture the modified multivessel without the vapor bypass seems to be the best choice, with time savings up to 52% compared to the cyclic column. For the separation of heteroazeotropes, time savings and practical considerations lead to the choice of the conventional muhivessel with the vapor bypass. Time savings vary fi'om 25% up to 50% depending on the mixture separated. Beside the time savings achievable by multivessel distillation one should also mention its much simpler operation compared to the cyclic column. The final products are accumulated in the vessels at the end of the process and there is no need for product change-overs.
4. Conclusions The multivessel column is superior to the cyclic column, in terms of batch time (energy) consumption, for all separations studied here. A modified multivessel column without vapor bypass is proposed for the separation of zeotropic systems. However, the conventional multivessel configuration with vapor bypass is proposed for the separation of heterogeneous azeotropic systems.
5. References Hilmen, E.K., Kiva, V.N., Skogestad, S., 2002, AIChE J., 48 (4), 752-759. Skogestad, S., Wittgens, B., Litto, R., Sorensen, E., 1997, AIChE J., 43 (4), 971-978. Skouras, S., Skogestad, S., 2003, Chem. Eng. and Proc, to be pubUshed. Sorensen, E., Skogestad, S., 1994, PSE '94, 5'^ Int. Symp. on Proc. Syst. Eng., 449-456. Wittgens, B., Litto, R., Sorensen, E., Skogestad, S., Comp. Chem. Engng., S20,1041-1046.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
941
Numerical Bubble Dynamics Anton Smolianski'*, Heikki Haario^ and Pasi Luukka^ "Institute of Mathematics Zurich University CH-8057 Zurich, Switzerland email:[email protected] ^Laboratory of Applied Mathematics Lappeenranta University of Technology P.O. Box 20, FIN-53851 LPR, Finland email:heikki.haario@ lut.fi, [email protected] Abstract A computational study of the dynamics of a gas bubble rising in a viscous liquid is presented. The proposed numerical method allows to simulate a wide range offlowregimes, accurately capturing the shape of the deforming interface of the bubble and the surface tension effect, while maintaining a good mass conservation. With the present numerical method, the high-Reynolds number wobbling bubble regime exhibiting unsymmetric vortex path in the wake has been successfully simulated. The computed time-evolution of bubble's interface area, position and rise velocity shows a good agreement with the available experimental data. Some results on bubble coalescence phenomena are demonstrated. Our studies reveal that plausible results can be obtained with two-dimensional numerical simulations, when a single buoyant bubble or a coalescence of a pair of bubbles is considered.
1. Introduction The rise of a gas bubble in a viscous liquid is a very complicated, non-linear and nonstationary hydrodynamical process. It is usually accompanied by a significant deformation of the bubble, indicating a complex interplay between fluid convection, viscosity and surface tension. The diverse shapes of the bubble resulting from this deformation cause a large variety of flow patterns around the bubble, and vice versa. A number of experimental studies have addressed this problem. Early studies include the rise of a bubble in an inviscid and a viscous liquid, see (Hartunian & Sears 1957), (Walters & Davidson 1962) (Walters & Davidson 1963), (Wegener & Parlange 1973) and (Bhaga & Weber 1981). Approximate theoretical solutions have been obtained for either low (Taylor & Acrivos 1964) or high (Moore 1959) Reynolds numbers under the assumption that the bubble remains nearly spherical. We employ the level-set (see (Sethian 1999)) method that permits to compute topological changes of the interface (like mergers or breakups). We use thefiniteelement method that relies on a global variational formulation and, thus, naturally incorporates the coefficient jumps and the singular interface-concentrated force. The combination of finite elements and the level-set technique allows us to localize the interface precisely, without an introduction of any artificial parameters like the interface thickness. As a whole, our computational method takes an advantage of combining the finite element spatial discretization, the operator-splitting temporal discretization and the level-set interface representation. In (Tomberg 2000) a combination of thefiniteelement and the level-set methods has been recently used to simulate a merger of two bubbles in a viscous liquid; however, the method is restricted to low Reynolds number flows only.
942 Using the presented computational method we provide a systematic study of diverse shape regimes for a single buoyant bubble, recovering all main regimes in a full agreement with available experimental data (for detailed analysis see (Smolianski et al. )). Next, we present results on the bubble coalescence phenomena.
2. Numerical Method As a simulation tool we employ a general computational strategy proposed in (Smolianski 2001) (see also (Smolianski et al. )) that is capable of modeling any kind of two-fluid interfacial flows. The dynamics of a gas bubble in a liquid can, thus, be considered as a particular application of this computational approach. We consider an unsteady laminar flow of two immiscible fluids. Both fluids are assumed to be viscous and Newtonian. Moreover, we suppose that the flow is isothermal, thus neglecting the viscosity and density variations due to changes of a temperature field. We assume also that the fluids are incompressible. Presuming, in addition, the fluids to be homogeneous, we may infer that the densities and viscosities are constant within each fluid. We utilize the sharp-interface (zero interfacial thickness) approach; the density and viscosity have, therefore, a jump discontinuity at the interface (see, e.g., (Batchelor 1967)). We assume that the interface has a surface tension. We also suppose that there is no mass transfer through the interface (i.e. the interface is impermeable), and there are no surfactants present in the fluids (hence, there is no species transport along the interface). The surface tension coefficient is, thus, assumed constant. Our computational approach for numerical modelling of interfacial flows can be summarized as follows: Step 0. Initialization of the level-set function and velocity. For each n-th time-step, n = 1,2,...: 1. Computation of interface normal and curvature. 2. Navier-Stokes convection step. 3. Viscous step. 4. Projection step. 5. Level-set convection step. 6. Reinitialization step. 7. Level-set correction step. The steps 1.-7. are performed successively, and each of the steps 2.-5. may use its own local time-increment size. On each step the last computed velocity is exploited; the viscous and projection steps use the interface position found on the previous global timestep. It is also noteworthy that the steps 5.-7. can be computed in a fully parallel manner with the step 2. The whole algorithm is veryflexible;it permits, for instance, to compute unsteady interfacial Stokesflowjust by omiting the Navier-Stokes convection step.
3. Bubbles in Different Shape Regimes Figure 1 shows the typical bubble shapes and velocity streamlines in the frame of reference of the bubble. Although all experimental results correspond to three-dimensional bubbles and our computations are only two-dimensional, a qualitative comparison is possible. The comparison enables us to conclude that our numerical bubble shapes are in a good agreement with the experimental predictions of (Bhaga & Weber 1981) and (Clift et al. 1978).
943
1^
TOI
w
(a)
(b)
(c)
(d)
(e)
(f)
Figure 1. Different computed shapes of bubbles: (a) spherical with Re=I, Eo=0.6, (b) ellipsoidal with Re-20 Eo=1.2, (c) dimpled ellipsoidal cap with Re=35, Eo=125, (d) skirted with Re=55, Eo=875, (e) spherical cap with Re=94, Eo-115 and (f) wobbling with Re=1100, Eo=3.0; pi/p2 = 10^* )Ui/)U2 = 10^. As it is seen from thefigure,all basic shapes are successfully recovered with the parameter values lying exactly within the limits given in (Clift et al. 1978). The interesting phenomena are observed in the case of a wobbling bubble. The wobbling typically appears with sufficiently high Reynolds numbers when the Eotvos number is, roughly, in the range between 1 and 100 (see (Clift et al. 1978)). Since the typical range for the Reynolds number corresponding to the wobbling motion is approximately the same as for the spherical cap regime, the wobbling bubble (see Figure 2) retains a nearly spherical cap shape. However, at later stage of the motion, a remarkable flattening of the bubble top can be observed (Figure 2). The bubble bottom undergoes permanent deformations resulting from the unstable and unsynmietric evolution of the bubble wake. In particular, the unsymmetric pairs of secondary vortices are clearly observed in the wake as the consequence of asynchronous separation of the boundary layer from different sides of the bubble surface. This flow pattern bears some resemblance to the von Karman vortex path typically formed behind a rigid body in a highly convectiveflow.We are unaware of any other successful numerical simulations in the wobbling bubble regime.
4. Results on Coalescence of Bubbles We consider the rectangular domain of the unit width with two initially circular bubbles inside; the radius of the upper bubble is equal to 0.25 and the radius of the lower one is 0.2. Bubbles have a common axis of symmetry. We prescribe zero velocity field at the initial moment. The dynamics of the bubbles, to a large extent, depends on the initial distance between them and on the magnitude of the surface tension. If the surface tension is high enough, no merger happens, the bubbles develop nearly ellipsoidal shapes and rise separately (see, e.g., (Unverdi & Tryggvason 1992)). Hence, in order to simulate a merger process, we take comparably small surface tension coefficient. Figures 3-4 illustrate the process of bubble merger in different shape regimes. During the rise of the bubble, two opposite signed vortices are created in the wake of the larger bubble. This produces a lower pressure region behind the large bubble and generates flow streaming into the symmetry line of theflow.As a result, the front portion of the small bubble becomes narrower and sharper. The head of the lower bubble almost catches up with the bottom of the upper one. In the next moment, the two bubbles merge
944
0
0.5
1
0
0.5
1
0
0.5
1
0.5
0
1
0
0.2 0.4 0.6 0,8
0
0.2 0,4 0 6
08
Figure 2. The rise of a wobbling bubble. Re=1100, Eo=3.0; pi/p2 = 10^, A*i//^2 = 10^, h = 1/80 into a single bubble. At this time, the interface conjunction forms a cusp singularity that is rapidly smoothed out by viscosity and surface tension.
(a)
(b)
(c)
(d)
(e)
(f)
Figure 3. Merger of two spherical bubbles; Re = 2, Eo — 1.2, pi/p2 = lO'^, 10, h = 1/40.
MI/A*2
Bubble coalescence in spherical shape regime is shown in Figure 3. Due to considerable rigidity of the bottom of the upper bubble, the liquid rather quickly becomes squeezed out of the space between the bubbles, and the bubbles merge. In ellipsoidal shape regime, the bottom of the upper bubble deforms under the influence of the lower bubble, thus, making it possible to preserve a thin liquid film between the bubbles. The upper bubble develops a dimpled-ellipsoidal rather than an ellipsoidal shape. When the bottom of the upper bubble cannot deform any more, the liquid film between the bubbles starts getting thinner, and, finally, the lower bubble merges with the upper one. The results agree with the computations by (Chang et al. 1996), by (Tomberg 2000), by (Unverdi & Tryggvason 1992) and compare favorably with the numerical predictions by (Delnoij el al. 1998) who found a qualitative agreement with available experimental data.
=
945
Figure 4. Merger of two ellipsoidal bubbles; Re = 20, Eo = 1.2, pi/p2 = 10^, /X1//X2 = 10, /i = 1/40.
5. Discussion We have presented the results of a computational study on two-dimensional bubble dynamics. Despite the seeming insufficiency of a two-dimensional model for the quantitative analysis of three-dimensional bubble evolution phenomena, we have been able to obtain a good qualitative agreement with the available experimental data. Since our numerical method captures the bubble interface as well as the surface tension effect and the mass conservation with the 2nd order accuracy, we managed to recover all basic shape regimes within the experimentally predicted ranges of problem parameters. In particular, we successfully simulated the wobbling bubble regime remarkable by its unsymmetric vortex path pattern and a highly convective nature. Since the wobbling and the sphericalcap regimes are characterized by very high Reynolds numbers, it was essential to have a numerical method capable of dealing with convection-dominated flows. On the other hand, the method should beflexibleenough to allow also a computation of a nearly Stokes flow (typical, e.g., for the case of a spherical bubble). We have demonstrated that such a flexibility can be maintained within thefinite-element/level-set/operator-splittingframework. In many cases, a good quantitative agreement has been observed (see (Smolianski et al.) for a thorough comparison of our computational results with available experimental data). This, probably, means that a two-dimensional modeling of bubble dynamics is not so far from being realistic. The preliminary study on the bubble coalescence phenomena also reveal that plausible results can be obtained already with two-dimensional simulations.
6. Acknowledgments This work was supported by the grant 70139/98 of Tekes, the National Technology Agency of Finland.
7. References Baker, G.R. and D.W. Moore, 1989, The rise and distortion of a two-dimensional gas bubble in an inviscid liquid. Phys. Fluids A 1,1451-1459. Batchelor, G.K., 1967, An Introduction to Fluid Dynamics. Cambridge University Press.
946 Bhaga, D. and M.E. Weber, 1981, Bubbles in viscous liquids: shapes, wakes and velocities. J. Fluid Mech. 105, 61-85. Chang, Y.C., T.Y. Hou, B. Merriman and S. Osher, 1996, A level set formulation of Eulerian interface capturing methods for incompressiblefluidflows.J. Comput. Phys. 124,449-464. Clift, R.C., J.R. Grace and M.E. Weber, 1978, Bubbles, Drops and Particles. Academic Press. Delnoij, E., J.A.M. Kuipers and W.P.M. van Swaaij, 1998, Computational fluid dynamics (CFD) applied to dispersed gas-liquid two-phase flows. In: Fourth European Computational Fluid Dynamics Conference ECCOMAS CFD'98, John Wiley & Sons, Chichester, pp. 314-318. Hartunian, R.A. and W. R. Sears, 1957, On the instability of small gas bubbles moving uniformly in various liquids. J. Fluid Mech. 3,27-47. Hnat, J.G. and J.D. Buckmaster, 1976, Spherical cap bubbles and skirt formation. Phys. Fluid 19,182-194. Moore, D.W., 1959, The rise of a gas bubble in a viscous liquid. J. Fluid Mech. 6,113-130. Sethian, A.J., 1999, Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science. Cambridge University Press. Smolianski, A., 2001, Numerical Modeling of Two-Fluid Interfacial Flows, PhD thesis, University of Jyvaskyla, ISBN 951-39-0929-8. Smolianski, A., H. Haario, P. Luukka, Computational Study of Bubble Dynamics. To appear in the Intemational Journal of Multiphase Flow. Sussman, M., P. Smereka and S. Osher, 1994, A level set approach for computing solutions to incompressible two-phaseflow.J. Comput. Phys. 114,146-159. Taylor, T.D. and A. Acrivos, 1964, On the deformation and drag of a falling viscous drop at low Reynolds number. J. Fluid Mech. 18,466-476. Tomberg, A.K., 2000, Interface Tracking Methods with Application to Multiphase Flows. PhD thesis. Royal Institute of Technology, Stockholm. Unverdi, S.O. and G. Tryggvason, 1992, A front-tracking method for viscous, incompressible, multi-fluid flows. J. Comput. Phys. 100,25-37. Walters, J.K. and J.F. Davidson, 1962, The initial motion of a gas bubble formed in an inviscid liquid. Part 1. The two-dimensional bubble. J. Fluid Mech. 12, 408-417. Walters, J.K. and J.F. Davidson, 1963, The initial motion of a gas bubble formed in an inviscid liquid. Part 2. The three-dimensional bubble and the toroidal bubble. J. Fluid Mech. 17, 321-336. Wegener, P.P. and J.Y. Parlange, 1973, Spherical-cap bubbles. Ann. Rev. Fluid Mech. 5, 79-100.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
947
EMSO: A New Environment for Modelling, Simulation and Optimisation R. de P. Scares and A.R. Secchi* Departamento de Engenharia Quimica - Universidade Federal do Rio Grande do Sul Rua Sarmento Leite 288/24 - CEP: 90050-170 - Porto Alegre, RS - Brasil * Author to whom correspondence should be addressed, {rafael, arge}@ enq.ufrgs.br
Abstract A new tool, named EMSO (Environment for Modelling, Simulation and Optimisation), for modelling, simulation and optimisation of general process dynamic systems is presented. In this tool the consistency of measurement units, system solvability and initial conditions consistency are automatically checked. The solvability test is carried out by an index reduction method which reduces the index of the resulting system of differential-algebraic equations (DAE) to zero by adding new variables and equations when necessary. The index reduction requires time derivatives of the original equations that are provided by a built-in symbolic differentiation system. The partial derivatives required during the initialisation and integration are generated by a built-in automatic differentiation system. For the description of processes a new object-oriented modelling language was developed. The extensive usage of the object-oriented paradigm in the proposed tool leads to a system naturally CAPE-OPEN which combined with the automatic and symbolic differentiation and index reduction forms a software with several enhancements, when compared with the popular ones.
1. Introduction Simulator is a valuable tool for applications ranging from project validation, plant control and operability to production increasing and costs reduction. This facts among others has made the industrial interest in softwares tools for modelling, simulation and optimisation to grow up, but this tools are still considered inadequate by the users (CheComp, 2002). The user dissatisfaction is mainly related with limited software flexibility, difficulty to use/learn and costly. Besides the lack of software compatibility and the slowness of inclusion of new methods and algorithms. Furthermore, the users have been pointed out some desired features for further development, like extensive standard features, intelligent interfaces among others (Hlupic, 1999). In this work a new tool for modelling, simulation and optimisation of general dynamic systems, named EMSO (Environment for Modelling, Simulation and Optimisation) is presented. This tool aims to give the users more flexibility to use their available resources. The successful features found in the most used tools were gathered and some new methods where developed to supply missing features. In addition, some well established approaches from other areas where used, like the object-oriented paradigm. The big picture of the EMSO structure is shown at Figure 1 which demonstrates the modular architecture of the software.
948 Mcxtels library
^
Typical flowsheet
^JS
Initial condition H Initialization: system; NLA / NLASolver
I
Dynamic system: \ Integration: DAE / DAESolver discontinuity
Reinitialization system: NLA
\Reinitialization: / NLASolver
I Dynamic simulation
f
Model: mathematical based language
\
/
Objective Function
Optimiser
V
Dynamic Optimisation
EQUATIONS diff(Ml*L.z) = Feed.F*Feed.z - V.F*V.z - L.F*L.z; diff{Ml*L.h) = q+Feed.F*Feed.h - V.F*V.h - L.F*L.h; sum(L.z) = sum(V.z) = 1; V.T = L . T ; V.P = L.P;
J
Flowsheet: component based language
DEVICES sep_101 str_101 PID_101, valve_101 valve_102
include "thermo"; Model F l a s h VARIABLES in Feed as MaterialStream; out L as MaterialStream; out V a s MaterialStream; in q as Real(Unit="kJ/h"); Ml a s P o s i t i v e (Unit="kniOl") ;
J
\
as Flash; as MassStream; PID_102 a s PID; as ControlValve; as ControlValve;
CONNECTIONS str_l01.Stream to sep_101.Feed; sep_101.V to valve_l01.Stream; sep_101.L.P to PID_101.y; sep_l01.level t o PID_102.y; I PID_101.u to valve_101.uy f
Model: mathematical based language
^
PARAMETERS
e x t Comp a s ChemicalComponent Tr as Temperature; VARIABLES SUBMODELS in T as Temperature; e q u i l i b r i u m a s A n t o i n e ( y _ i = V . 2/ in P as Pressure; T=L.T, P=L.P, x _ i = L . z ) ; in y_i as FractionMolar; h as LiquidHenthalpy(h=L.h, i n H as EnthalpyMolar; T=L.T, P=L.P, x _ i = L . z ) ; EQUATIONS H as VaporHenthalpy{H=V.h, j \ H = sum(y i * ( C o m p . c p . A * ( T - T r ) T=V.T, P=V.P, y _ i = V . z ) ; ^ \ +Comp.cp.B*{T'"2 - T r ' " 2 ) / 2 end Submoc / +Comp.cp.C*(T^3 - T r ^ 3 ) / 3
:7
+Comp.cp.D*(T^4 - T r ^ 4 ) / 4 ) | ^
Figure 1. General vision of the EMSO structure and its components.
949
2. Process Model Description In the proposed modelling language there are three major entities: models, devices, and flowsheets. Models are the mathematical description of some device; a device is an instance of a model; and a flowsheet represents the process to be analysed which is composed by a set of devices. At bottom of Figure 1 are given some pieces of code which exemplifies the usage of the language. EMSO makes intensive use of automatic code generators and the object-oriented paradigm whenever is possible, aiming to enhance analyst and productivity. 2.1. Model In the EMSO language, one model consists in the mathematical abstraction of some real equipment, process piece or even software. Examples of models are the mathematical description of a tank, pipe or a PID controller. Each model can have parameters, variables, equations, initial conditions, boundary conditions and submodels that can have submodels themselves. Models can be based in pre-existing ones, and extra-functionality (new parameters, variables, equations, etc.) can be added. So, composition (hierarchical modelling) and inheritance are supported. Every parameter and variable in a model is based in a predefined type and have a set of properties like a Z?n^/description, lower and upper bounds, unit of measurement among others. As models, types can have subtypes and the object-oriented paradigm is implemented. Some examples of types declarations can be seen in Figure 2. Fraction as Real(Lower=0/ Upper=l); Positive as Real{Lower=0, Upper=inf); EnergyHoldup as Positive(Unit="J"); ChemicalComponent as structure ( Mw as Real{Unit="g/mol"); Tc as Temperature(Brief="Critical Temperature" Pc as Pressure;
);
Figure 2. Examples of type declarations. 2.2. The flowsheet and its devices In the proposed language a device is an instance of a model and represents some real device of the process in analysis. So, a unique model can be used to represent several different devices which have the same structure but may have different conditions (different parameters values and specifications). Devices can be connected each other to form Siflowsheet(see Figure 1) which is an abstraction for the real process in analysis. Although the language for description of flowsheets is textual (bottom right in Figure 1), it is simple enough to be entirely manipulated by a graphical interface. In which flowsheets could be easily built by dragging model objects into it to create new devices that could be connected to other devices with the aid of some pointing unit (mouse).
3. Consistency Analysis In solving the resulting system of differential-algebraic equations (DAE) of SL flowsheet, prior analysis can reveal the major failure causes. There are several kinds of consistency analysis which can be applied in the DAE system coming from the mathematical description of a dynamic process. Some of them are: measurement units, structural solvability and initial conditions consistency.
950 3.1. Measurement units consistency In modelling physical processes the conversion of measurement units of parameters is a tiresome task and prone to error. Moreover, a ill-composed equation usually leads to a measurement unit inconsistency. For this reasons, in EMSO the measurement units consistency and units conversions are automatically made for all equations, parameter setting and connections between devices. Once all expressions are internally stored in a symbolical fashion and all variables and parameters holds its measurement units, the units consistency can be easily tested with the aid of the units measurement handling package RUnits (Soares, 2002). 3.2. DAE solvability Soares and Secchi (2002) have proposed a structural method for index reduction and solvability test of DAE systems. With this method, structural singularity can be tested and the structural differential index can be reduced to zero by adding new variables and equations. Such variables and equations are the derivatives of the original ones with respect to the independent variable. EMSO makes use of this method, allowing the solution of high-index DAE problems without user interaction. The required derivatives of the variables and equations are provided by a built-in symbolic differentiating system. 3.3. Initial conditions consistency Once a DAE system is reduced to index zero the dynamic freedom degree is determined. So, the initial condition consistency can be easily tested by an association problem as described by Soares and Secchi (2002). This approach is more robust when compared with the index-one reduction technique presented by Costa et al. (2001).
4. External Interfaces Usually each simulation software vendor has its proprietary interfacing system, this leads to heterogeneous systems. Recently, the CAPE-OPEN project (CO-LAN, 2002) has published open standards interfaces for computer-aided process engineering (CAPE) aiming to solve this problem. EMSO complies with this open pattern. The interfaces are implemented natively rather than wrapping some other proprietary interface mechanism, and CORBA (OMG, 1999) was used as the middleware. The extensive usage of the interfaces turns its efficiency a priority. For this reason some modifications for the numerical CAPE-OPEN package (CO-LAN, 2002) where proposed. This modifications consists in changing some function calling conventions, more details can be seen in Soares and Secchi (2002b).
5. Graphical User Interface The graphical user interface (GUI) of EMSO combines model development, flowsheet building, process simulation and results visualising and handling all in one. EMSO is entirely written in C++ and is designed to be very modular and portable. In running tasks there are no prior generation of intermediary files or compilation step, everything is made out at the memory. The software is multithread, allowing real-time simulations and even to run more than one flowsheet concurrently without blocking the GUI. Furthermore, calculations can be paused or stopped at any time. The Figure 3 shows the EMSO GUI, it implements a Multiple Document Interface (MDI).
951
Re
^dim
I«lb
^idm
"HI 125-i
L-:|S fla$h2 " " - f
flashlOl
DEVICES flashlOl as separator; J as Massstream; slOl
^ . p Vaables J | j Paameters
i
Icontrol Ivalve
i i ImbalCond 51 f
hSavet
i g f
HSefver
i f
$101
9 1
ConJd
pcontrol as Plcontroll{ waive a s CQj
• ^ bquid Flow
feb?2000
separatos
OtipU |^£M'f^'|S^^-^^^
n q COUIVllAOdH B;j BasK model ppj llashO model B;| HttWZmodel
i f BiJPIDniodel BiJ P»Bor model
OPTIONS
N
CONNECTIONS
Results Visualising
•quipinenl 'pConhoT based on model ./hiodelsy./conliol/R.model iqiipmert Waive' based on modet Vmodels/./control/controf/alve.model
Waive
%
5^
SubEquJpment liServef' based on model: Vmodels/abstracLlquidHL Equipment tonboT based on model: ./model$/./control/R.model Valve' based on model: ./modek/./cortroL'contiot/alve.model
Output Channels
P M flashss inodel
Liquid Holdup
50
"IT
m ructoR
]
—
75-1
Model and Flowsheet editing
^ Bi^jVafBbtes
g
as Picontrollf as controlvalv
eqSefvej
: S ^
100
- - Vapour Flow
• . ^ Equahorc
J^<[^T$enjof model
||Flows..i^£IKS'l
Numbe(ofVa^abies:28 Numbef of Equations: 28 ! Freedom degrees Ok! b The ^stem has index 2 and was redxed to index zeio. ' Extra Equdcns: 37 ExtraVariaUes: 14 iJnbalCondtons 5 U Diianic freedom degrees Ok!
Calculation Priority Control
3Sn
MffikmViewi
Figure 3. EMSO graphical user interface.
6. Applications Consider the separation process outlined at Figure 1. It consists in a flash vessel with level and pressure control and was modelled as: — {Ml.x,) = at
F.z,-L.x,-V.y,
d (Ml.h) = q + F.hp - L.h - V.H dt
(la) (lb)
E-.=i:^.=i
(2) K=f2(T,.P,,z,). V = f,(P),
h = f,(T^P,x,),
L = f,{Ml)
H=f,(T^P^y,)
(3) (4)
where F, V and L are the feed, vapour and liquid molar flow rates, hp, H and h are the respective molar enthalpies, T is the temperature and P pressure, z„ Xi and yi are the feed, liquid and vapour molar fractions and q is the heat duty. In EMSO this process can be modelled in a modular fashion: Eq.(l) as flash device; Eq.(2) as thermodynamic equilibrium device; Eq.(3) as enthalpy devices; Eq.(4) as control devices. The dynamic simulation of this system is trivial if q and the feed conditions are known along the time and T, Ml and x, are given as initial conditions. But if a temperature profile is specified instead of ^ a high-index system take place and cannot be directly solved by the popular simulation tools.
952 In solving this problem EMSO reports a index two system and automatically reduces the index to zero. In the Figure 4 the composition profiles of the outlet streams are showed for a start-up condition of this index two system.
^^— x(n-hexane) [(n-heptane) — • x(n-octane) y(n-hexane) y(n-heptane) y(n-octane)
ttuuuqginnuHBnnnnnnHlj*!
4)C)C)O0()t)<)0fVM)0()0tj(>o<)U0nooO()
000
2000
4000 Time[s]
6000
Figure 4. Solution of the index two separation process. The index-three batch distillation column proposed by Logsdson and Biegler (1993) was also successfuly solve by EMSO, but there was no room to show the results.
7. Conclusions An object-oriented language for modelling general dynamic process was successfully developed and its usage has proved efficiency in code reusability. The development of model libraries of models for thermodynamics, process engineering and other application areas is one of the future tasks. The DAE index reduction method allows EMSO to directly solve high-index DAE systems without user interaction. This fact combined with the symbolic and automatic differentiation systems and the CAPEOPEN interfaces leads to a software with several enhancements. Dynamic optimisation is at design stage, but the modular internal architecture of EMSO allows it to be added further without re-coding the other modules.
8. References Che-Comp, 2002, The State of Chemical Engineering Softwares, www.che-comp.org . CO-LAN, 2002, Conceptual Design Document for CAPE-Open Project, www.colan.org. Costa Jr., E.F., Vieira, R.C., Secchi, A.R. and Biscaia, E.C., 2001, Automatic Structural Characterization of DAE Systems, ESCAPE 11, Kolding, Denmark, 123-128. Hlupic, v., 1999, Simulation Software: User's Requirements, Comp. Ind. Engrg, 37, 185-188. Logsdon, J.S. and Biegler, L.T., 1993, Ind. Eng. Chem. Res., v.32, n.4, 692-700. Luca, L. De and Musmanno, R., 1997, A Parallel Automatic Differentiation Algorithm for Simulation Models, Simulation Practice and Theory, 5, 235-252. OMG, 1999, The Common Object Request Broker, version 2.3.1, www.omg.org. Soares, R. de P. and Secchi, A.R., 2002, Direct Initialization and Solution of HighIndex DAE Systems with Low-Index DAE solvers, Comp. Chem. Engrg (submitted 2002). Soares, R. de P. and Secchi, A.R., Efficiency of the CAPE-OPEN Numerical Open Interfaces, Technical Reporting, UFRGS, Porto Alegre, Brasil (2002b). Soares, R. de P, Runits: the Measurement Units Handling Tool - Reference Manual, [email protected] (2002).
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
953
New Concept of Cold Feed Injection in RFR J. Thullie and M. Kurpas Silesian University of Technology, Department of Chemical and Process Engineering, Strzody 7,44-100 Gliwice, Poland, tel/fax +48 32 237 1266, [email protected]
Abstract A model of a reverse flow reactor with an injection of cold feed was investigated. The injection caused a decrease in maximum temperature and conversion. However the decrease in maximum temperature of catalyst bed is lower than in minimum conversion especially for small cycling time.
1. Introduction Reverse flow operation is a conmion practice nowadays. Such operation involves existence of large differences in temperature along the catalytic bed and in time as well (Matros, 1989; Matros and Bunimovich, 1996; Eigenberger and Niken, 1994). This was caused by the fact that due to heat coupling between the catalytic bed and reacting gas a wandering temperature profile exists inside the catalytic bed. The movement of this profile is of crucial importance for the process. Sometimes the temperature gradients are so high that a damage to the catalytic bed may be done. It takes place when inlet conditions have been changed. When such a situation takes place will be one of the objectives of this work. The maximum of temperature gradient inside the reactor is related (Thullie and Kurpas, 2002) to the maximum temperature of the catalyst bed. This temperature may be kept within some limits when an injection of cold gas is used. There are two problems connected with such operation. First sintering or deactivating of the catalyst may be done because of high temperature. The second one the carrier may be cracked because of temperature gradient. A simple remedy to these problems is to divide the bed in two parts and inject a cold reacting gas between them. However this kind of action will decrease the final conversion. The problem is what part of the total stream of inlet gas should be directed as an cold injection and where it should be placed when a limit of maximum temperature is imposed. The reactor investigated is shown in Figure 1. The inlet mass stream G is divided into a main inlet stream Gj and an injection stream G2. The injection stream is placed in the middle of the catalytic bed. However when different position of injection stream is investigated, to fulfil symmetry requirements two injection points should be used. When the first is placed in the distance Ax from the entrance to the reactor, the second one should be placed in position L-Ax (where L is the length of the catalytic bed). RFR is working perpetually in nonstationary state forced by the reversion of the flow through the reactor. It was revealed by numerical simulation and was confirmed by the experiments (Matros, 1985) that after each reversion of the flow reaction zone is formed. The reaction zone has much higher temperature than the rest of the bed and
954 wanders through the catalytic bed in the direction of the flow. The velocity of this wandering is much slower than the velocity of reacting gas passing through the reactor (Thullie J. et al. 2001). Cold gas, injected between the catalyst bed is mixed together with the main stream of much higher temperature. When two injection points are present, a change in direction of the flow through the reactor demands immediate change of the injection position to have always the same distance of injection point from the feed point. Of course when central injection is used the position is always the same.
Figure 1. Scheme of a switch-flow reactor with inter-stage injection. Such operation of the reactor requires a proper selection of reactor parameters (Thullie and Burghardt, 1995). When the cycling time is properly determined according to mass flow rate of the main stream a special attention should be given to mass flow rate of the injection. When it is too high in comparison to the main stream the reactor may be cooled down and the reaction on the catalyst surface will be stopped.
2. Mathematical Background To specify a working point of the reactor one should determine main flow rate, cycling time, starting temperature of the catalyst bed, inlet gas temperature inlet concentration and a share of injection in the main stream. To investigate dependencies among these parameters some basic assumptions should be made. At first a simple first order reaction A—>B is assumed and first order rate expression r^ = /:Q introduced. All parameters are assumed constant. A homogeneous model according to mass transport and a heterogeneous model according to heat transport is considered. Under these assumptions the set of governing equations (Thullie J. et al. 2001) is: f dz
Da exp
\ 1 (l-.)
(1)
955
'l = s,(e,-e)
(2)
az
r 11 Y -y
dr 3^,
Initial conditions are:
— 1
(i-n)
(3)
And boundary conditions are: i9 (z,0) = ??o
The conditions of a change in flow direction after
i?(0,r) = ??o
cycling time
z
is:
i^i^iz.T^) = i9f^\^-z,T~), where: r^ and r~ denote right hand side limit and left hand side limit at the time moment r^. A new temperature after the mixing point is calculated according to equation: ^m = ( G ^ I ^ I + < ^ o ^ . )/G
"»
^11
(4)
2 2/
The assumption of ideal mixing at the injection point was done. A standard finite difference method was used to solve the set of equations (1-3).
3. Results The use of cold gas injections gives a significant decrease in maximum catalyst temperature and a moderate decrease in conversion. At the very beginning of the start up procedure the temperature and conversion profiles are similar for both processes regardless the injection, because the initial catalyst bed temperature dominates (Figures 2-3). When the injection point has a central position (the catalyst bed was divisible into two equal parts and the injection is situated between them) the significant decrease in maximum catalyst temperature was observed in comparison to standard operation. The simultaneous decrease in conversion was not so significant especially for short cycling times (Figure 2). A decrease of Tk^ of about 30% caused 23% decrease in conversion (Figures 2-3). The results of calculations suggest that in the case of the operation limit to maximum catalyst temperature, a cold gas injection is a useful solution. The injection point should be placed in the middle of catalytic bed, with exception of the case when the limit is so high that there is no possibility to reach cycling steady state (Figure 4). For this case the of injection should be placed near the entrance to the reactor (Figure 5). This results in higher rjmin in comparison to standard conditions. The points along the curves in Figure 5 are give for different positions of injection points with injection point equal to Vi L at the edge.
956
0.8
injection 1/2 L - cycle 2 injection 1/2 L - cycle 82 no injection - cycle 2 no injection - cycle 82 0.6
0,8
injection 1/2 L - cycle 2 • injection 1/2 L - cycle 82 no injection - cycle 2 no injectbn - cycle 82
" ^
1
Figure 2. Gas temperature profiles along the reactor when inter-stage injection is applied and in standard operation (St = 20, Da = 0.0496, O) = 0,1, ^ = 1, i&ko=2,eo = l, G2=107cGi).
y
-
0,2
0.4
0,6
0,8
Figure 3. Conversion profiles reactor when inter-stage is applied and in standard (St = 20, Da = 0.0496, co = 1^0 =2,1^0= 1, G2=10%Gi).
/
1
along the injection operation 0.1, J3 = 1,
It J"^^^^ ,
^^jc^^!^^^
— M — Injection 1/4 L ^
L
^
"*
^*"-
J H
- — E j e c t i o n 1/2 L 1
a
y
—A—injection 3/4 L X — • — no injection
1 ^
J
. . . X - - . injection 1/4 L ^
1 }
. . ^ . . . injection 1/2 L 1 -.-A-.-injection 3/4 L
[
. . . e . . - n o injection
J
10% Gi
- ^ T =4
c
1
Figure 4. Comparison of r]mm = / (^kmax) for Figure 5. Comparison of rimm = / ( ^ m J different location of injection points, and two for different cycling times (St = 20, injection flow rates (St = 20, Da=0.0496, Da=0.0496, CO = 0.1, /? = 7, ?%o= 2, (0 ^0.1, P=l, i^o= 2,1^0=1 G2 =5%Gi). 1^0=1 G2 =5%Gi).
When the cycling time is increasing, regardless the location of injection point, the number of cycles which gives pseudo-steady state operation decreases (Figure 6). This means that one should perform a lot of short cycles or not so many long cycles. The most profound decrease in maximum temperature with comparatively small decrease of minimum conversion is observed for small cycling times.
957 For small mass flow rate of injection gas no influence on to the start up time is observed.
ooo222fififi9fififlfiAA*A*f*****A^*^f
080000000
^i o •
3,5
•
o
AA^^^^^^^^T•
•
AA
ddddAAAAA^AAAAAAAAfii
••••••••
QpnaanDiiiDDDaannnaipaaaaaaaaaijiaaaaaaaaail] • injection -1/2 ' A injection - 3/4
^Tc=3
• no injection
2.5
D injecti(Mi -1/2 A injection - 3/4 f
1>- ^ = 6
o no injection
10
20
40
30
50 number of cycling
Figure 6. Comparison of catalyst maximum temperature for different cycling times for the case with cold gas injection and standard operation (with no injection) (St = 20, Da = 0.0744, co=0.l J3= 7, i^o= 2, i9o== I G2 = 10% d).
4. Conclusions 1. The results of calculations suggest that the use of cold gas injections gives the significant decrease in maximum catalyst temperature and moderate decrease in conversion. 2. The best position of the injection point is usually the middle of the catalyst bed. However when the temperature limit is so high that PSS is not achieved the flow rate of the injection should be lowered. If it is not possible the place of the injection should be moved in the direction of the reactor inlet. 3. When cold gas injection is used the most profound decrease in maximum temperature with comparatively small decrease of minimum conversion is observed for small cycling times.
5. Symbols j)a =
k^L-exp
5/ = ^ ^ : ^ ^ P-S*"o
(-y)
_ Damkohler number, - Stanton number,
k
- rate constant, 1/s
koo - frequency coefficient, 1/s
958 X
- dimensionless space variable,
L
- length of reactor, m
- dimensionless adiabatic temperature rise,
r^
- ration rate, kmol/m s
- dimensionless activation energy,
T
- temperature, K
L AO
fi = E y =
•
- dimensionless gas temperature, TQ
t? = -
'0
- dimensionless temperature of catalyst, • dimensionless time.
S'L
{^•^)pk-^pk
- ratio of heat capacities of gas to catalyst. ,2/^3
- inlet temperature, K - catalyst temperature, K
UQ
- lineary velocity, m/s
X
- space variable, m
Gy
- specific surface area, m /m"
a^
- heat transfer coefficient, J/m^Ks
^^'
- concentration of reference component, kmol/m^
e
- void fraction, m^/m^
Cp
- specific heat, J/kg K
p
- density, kg/m^
E
~ ration activation energy, kJ/kmol
T]
- conversion of component A
G
- flow rate of gas, kmol/s
R
- g a s constans, kJ/kmol K
AH
- heat of reaction, kJ/kmol
t
- time, s
6. References Eigenberger, G. and Niken, V., 1994, Internal. Chem. Eng. 34,4-16. Matros, Yu., 1989, Studies in surface science and catalysis, vol. 43. Utrecht The Netherlands. Matros, Yu., 1985, Elsevier, Amsterdam. Matros, Yu., and Bunimovich, G., 1996, Catal. Rev. Sci. Eng. 38, 1. ThuUie, J. and Burghardt, A., 1995, Chem. Eng. Sci. 50, 2299-2309. Thullie, J. and Kurpas, M., 2002, Inz. Chem. Proc. 23, 309-324 (in Polish). Thullie, J., Kurpas, M., Bodzek, M., 2001, Inz. Chem. Proc. 22, 3E, 1405 (in Polish).
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
959
Numerical Modeling of a OK Rotor-Stator Mixing Device J. Tiitinen Helsinki University of Technology, Mechanical Process Technology and Recycling, P.O.Box 6200, FIN-02015 HUT, Finland E-mail: [email protected]
Abstract The flov^ field induced in an Outokumpu TankCell and by a OK rotor-stator mixing device in a process size flotation cell v^as simulated by cfd using two main grid types. The effects of different grid types were investigated with structured and unstructured grids. The geometry used was axisynmietric and a sector of 60 degrees of the tank with periodic boundaries was modeled. Finally, validation measurements and calculations with a laboratory size flotation cell were done. A "hybrid" grid for a laboratory size OK flotation cell with unstructured cells in the rotor domain and structured cells in the stator and tank domain was generated. Preprocessing and computational mesh generation process of complicated geometries like the OK rotor-stator mixing device can take considerably long time with regular structure type grids. This type of geometry is where meshes with irregular structure can be used much easier and with less processing time. Preprosessing and grid generation was done with the commercial Fluent Gambit 1.3. The CFD code Fluent 5.5 was used in the simulation. Standard k-e turbulence model and standard wall functions were used. Multiple reference frame method was used in all simulations instead of the computationally slower sliding mesh method. Simulations were done in one phase (/). Calculated velocity fields on horizontal and vertical planes, pressure distributions on rotor and stator surfaces and turbulent magnitudes were compared with structured and unstructured grid types. No grid dependency was found. Comparisons between velocity and turbulence results measured using the LDV (Laser Doppler Velocimetry) technique and CFD modeling were done. The predicted velocity components agreed well with the values obtained from LDV. The standard k-e model underpredicts the k level in the flotation cell compared to measured values. It was shown that a CFD model with periodicity, hybrid grid and MRF approach can be used for detailed studies on the design and operation of the Outokumpu flotation cell.
1. Introduction Froth flotation is a complex three phase physico-chemical process which is used in mineral processing industry to separate selectively fine valuable minerals from gangue. The importance of the mineral froth flotation process to the economy of the whole industrial world is important. As costs has increased in mining industry and ore grades and metal prices decreased, role of mineral flotation has become even more important. The flotation process depends among many other factors on control of the pulp aeration, agitation intensity, residence time of bubbles in pulp, pulp density, bubble and particle size and interaction and pulp chemistry. Development of flotation machines has been carried out as long as there has been minerals flotation to achieve better flotation performance and techniques. This
development has mainly been built on advice from practice or rules of thumb. The development and research work has mainly focused on the control of the chemistry, not on the hydrodynamics of the flotation cell. This paper reviews the detailed hydrodynamics of Outokumpu flotation cells using Computational Fluid Dynamics (CFD) modelling. This includes determining the grid-type dependency of the CFD model and examining the flow pattern induced in the cell, as well as validating the model.
2. General
Outokumpu flotation cell development began in the late 1960s to satisfy the company's own needs for treating complex-sulfide ores at Outokumpu's mines in Finland. The main required properties of flotation machines can be defined as:
- keep mineral grains in suspension in the pulp, with near-ideal mixing
- disperse a sufficient amount of fine air bubbles into the pulp
- make the flotation environment advantageous for bubble-mineral interaction, with sufficiently turbulent conditions in the contact zone and high selectivity
- develop a quiescent upper zone in the flotation tank to avoid bubble-particle separation
- energy efficiency, low power and air consumption
- secure efficient discharge of concentrate and tailings.
The flotation cell mechanism by Outokumpu in general consists of:
- flotation tank
- rotor and stator
- air feed mechanism
- pulp feed and discharge mechanism
- concentrate ducts
- pulp level regulators.
The Outokumpu rotor profile was originally designed to balance the hydrodynamic and static pressures, allowing a uniform air dispersion over the surface of the blades. The blade design also provides separate zones for air distribution and slurry pumping. Figure 1 shows the Outokumpu cell design in general. Industrial cell sizes range from 5 m³ to 200 m³. Figure 2 is a close-up demonstration of air distribution and slurry pumping. Mineral flotation is a three-phase process where the solid phase consists of mineral grains, the liquid phase is mainly based on water, and air is the gaseous phase. In Outokumpu flotation cells air is conducted through the hollow shaft to the direct influence of the rotating rotor. Intense eddy currents throw bubbles and pulp against the stator blades and through the gaps between the vertical blades, and they are distributed evenly into the flotation cell. Hydrophobic minerals attached to bubbles are lifted up to the froth area and into the concentrate duct.
Figure 1. Flotation cell.
Figure 2. OK rotor-stator close up.
3. CFD Modelling
Mesh generation plays two main roles in CFD modelling. First, mesh generation consumes most of the total time used for the analysis. Second, the quality of the computed solution depends substantially on the structure and quality of the computational mesh. The attributes associated with mesh quality are node point distribution, smoothness and skewness. Building a valid computational mesh is a discipline of its own, which can be divided into structured and unstructured grid generation. Choosing an appropriate mesh type will mainly depend on the geometry of the flow problem. Figure 3 shows the general 3D grid cell types accepted by most CFD solvers. Figure 4 shows an example of a 3D multiblock structured grid and an unstructured tetrahedral grid.
Figure 3. 3D grid cell types.
Figure 4. Structured and unstructured grid.
When choosing an appropriate mesh type for a flow problem, there are several issues to consider:
- Many flow problems involve complex geometries. The creation of a structured mesh for such geometries can be substantially time-consuming, and for some geometries perhaps impossible. Preprocessing time is the main motivation for using an unstructured mesh in these geometries.
- Computational expense can be a determining factor when geometries are complex or the range of length scales of the flow is large. Hexahedral elements generally fill the computational volume more efficiently than tetrahedral elements.
- A dominant source of error in calculations is numerical diffusion. The amount of numerical diffusion is inversely related to the resolution of the mesh. Numerical diffusion is also minimized when the flow is aligned with the mesh; in unstructured meshes with tetrahedral elements the flow can never be aligned with the grid.
- Using and combining different types of elements in a hybrid mesh can be a good option and brings considerable flexibility to mesh generation.
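The element-count side of this trade-off can be made concrete with a small sketch. It is not from the paper and rests on one assumption: at the same node spacing, one hexahedron corresponds to roughly five to six tetrahedra, a standard decomposition used by mesh generators.

```python
# Rough cell-count comparison for the hexahedral/tetrahedral trade-off.
# Assumption (not from the paper): keeping the same node spacing, one
# hexahedron corresponds to roughly 5-6 tetrahedra.

def cell_counts(lx, ly, lz, h, tets_per_hex=6):
    """Cells needed to mesh an lx*ly*lz box at spacing h, for both types."""
    n_hex = round(lx / h) * round(ly / h) * round(lz / h)
    return n_hex, tets_per_hex * n_hex

n_hex, n_tet = cell_counts(1.0, 1.0, 1.0, 0.01)   # 1 m cube, 10 mm spacing
print(f"hexahedra: {n_hex:,}, tetrahedra at the same spacing: {n_tet:,}")
```

At the same spacing the tetrahedral mesh carries several times more cells, which is why a hybrid mesh that confines tetrahedra to the geometrically complex rotor region is attractive.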
4. Computational Model 1
The flow and mixing simulations for the grid-type dependency calculations were carried out for the flotation tank geometry shown in figure 5. The tank is cylindrical and has six symmetrically placed vertical baffles and one horizontal baffle on the cylindrical walls.
Table 1. Tank specifications.
Tank diameter (mm): 3600
Tank height (mm): 3600
Shaft diameter (mm): 160
Rotor diameter (mm): 825
Rotor bottom clearance (mm): 83
Rotor speed of rotation (rpm): 100/160
Figure 5. Flotation tank geometry for CFD.
The flotation tank is equipped with an Outokumpu rotor and stator mixing device. The geometrical details of the tank are given in Table 1. For symmetry reasons it was sufficient to model only a part of the domain. The smallest symmetric unit of the geometry is 60°, which contains one impeller blade and one vertical baffle. Two different types of grids were studied. Both grids were refined adaptively. After adaptation both grids had about 450 000 cells. The steady-state multiple reference frame approach was used in the Fluent 5.5 simulations for modelling the rotor rotation in the stationary tank. The standard k-ε turbulence model and standard wall functions were used. Simulations were done in the liquid (water) phase.
Computational model 1 results: The CFD simulation results, consisting of velocity vectors and distributions, pressures in the rotor and stator area and turbulence quantities, are similar for both mesh types. No significant grid dependency between the structured and unstructured grid types was found. Resultant velocities at two levels are shown in figure 6 and the turbulent kinetic energy at the rotor-stator area in figure 7.
Figure 6. Resultant velocities.
Figure 7. Turbulent kinetic energy at rotor-stator area.
5. Computational Model 2
As a result of computational model 1, a hybrid grid for a laboratory-size OK flotation cell, with unstructured cells in the rotor domain and structured cells in the stator and tank domain, was generated. The tank was a cylindrical, unbaffled tank with Outokumpu's rotor-stator mixing device. The computational geometry is shown in figure 8. The geometrical details of the tank are given in table 2.
Table 2. Tank specifications.
Tank diameter (mm): 1070
Tank height (mm): 900
Shaft diameter (mm): 57
Rotor diameter (mm): 270
Rotor bottom clearance (mm): 27
Rotor speed of rotation (rpm): 328
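For orientation, the impeller Reynolds numbers implied by the specifications of Tables 1 and 2 can be estimated with the standard stirred-tank definition Re = ρND²/μ. The sketch below is not from the paper; it assumes water properties, since both simulations are run in the water phase.

```python
# Impeller Reynolds number Re = rho*N*D^2/mu for the two simulated tanks.
# Water properties at ~20 C are assumed (both simulations use water).

RHO, MU = 998.0, 1.0e-3          # density, kg/m3, and viscosity, Pa*s

def impeller_reynolds(rpm, rotor_diameter_m):
    n = rpm / 60.0               # rotational speed in revolutions per second
    return RHO * n * rotor_diameter_m**2 / MU

for label, rpm, d in [("process tank, 100 rpm", 100.0, 0.825),
                      ("process tank, 160 rpm", 160.0, 0.825),
                      ("laboratory tank, 328 rpm", 328.0, 0.270)]:
    print(f"{label}: Re = {impeller_reynolds(rpm, d):.1e}")
```

Both tanks lie far into the fully turbulent regime (Re well above 10^4), which is consistent with using the standard k-ε model with wall functions in both cases.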
Figure 8. Computational geometry.
A 60° sector of the tank was modelled, with periodic boundaries on the sides of the sector and a symmetry boundary on the top of the tank to describe the free surface. Standard wall functions were employed on all wall boundaries of the computational domain. Fluent's steady-state multiple reference frame approach was used with the k-ε turbulence model. The calculation was done in one phase (water).
Computational model 2 results: Results from computational model 2 were compared to Laser Doppler Velocimetry (LDV) measurements made on a similar flotation cell at the CSIRO Thermal and Fluids Engineering laboratory. The velocity vectors predicted by CFD and the velocity vectors obtained from LDV are compared in figure 9. The LDV field is shown on the left and the CFD velocity field on the right.
Figure 9. Velocity vectors from LDV and CFD.
Reasonable agreement is obtained between the measured and calculated flow fields. The rotor creates a radial jet towards the cylindrical wall. Two main flows circulate back to the impeller, one over the top and the other from the lower side of the stator. The comparison of the measured and computed mean radial velocity components as a function of cell height is shown in figure 10.
Figure 10. Mean velocity components.
Generally, the agreement between the measured and computed velocities is good. The standard k-ε turbulence model suggests lower values than measured in the tank.
6. Conclusions
The complex flow field in the Outokumpu flotation cell has been studied by CFD. The flow pattern is dominated by a strong radial flow from the rotor towards the flotation cell's cylindrical wall and two vortices, one below the rotor and another above it. The predicted velocity components agreed well with the values obtained from LDV. The standard k-ε model does not accurately predict the k level in the flotation cell compared to measured values. It was shown that a CFD model with periodicity, a hybrid grid and the MRF approach can be used for detailed studies on the design and operation of the Outokumpu flotation cell.
7. References
Fluent Inc., 2002, Fluent manuals.
Lukkarinen, T., 1987, Mineral processing, part two, Insinööritieto Oy (in Finnish).
Matis, K.A. (Ed.), 1995, Flotation Science and Engineering, Marcel Dekker, Inc., New York, Basel, Hong Kong.
Zhu, Y., Wu, J., Shepherd, I., Nguyen, B., 2002, Modelling of Outokumpu flotation cell hydrodynamics, CSIRO Minerals Report DMR-1899.
Teaching Modelling of Chemical Processes in Higher Education using Multi-Media
L. Urbas, B. Gauss, Ch. Hausmanns* and G. Wozny*
Center of Human Machine Systems (ZMMS), Jebensstr. 1, 10623 Berlin, Germany
*Institute for Process & Plant Design (dbta), Straße des 17. Juni 135, 10623 Berlin, Germany
email: [email protected], [email protected], [email protected], [email protected]
Abstract
Together with partners from three other German universities we are developing a multimedia learning system for the instruction of process engineering students. This paper focuses on a single module of that system which supports students in learning the task of stationary process modelling. As a basic example to teach systematic modelling we have selected a single distillation tray. On the basis of this example, the students learn to apply systematic modelling techniques to complex problems; in particular they learn to draw proper system boundaries, to formulate mass and energy balances and to select appropriate equations to describe phase equilibrium and other relevant phenomena. To reach the intended level of competence, students are supported with a pool of so-called edu:snippets, i.e. multi-media material like schematic drawings, interactive flash animations and short "edutaining" films that allow a look inside a real distillation column. These edu:snippets are integrated in a structured script that is nonlinearly navigable by means of a semantic net. To foster self-paced learning, checkpoints are integrated where students can examine their understanding of the topic. Further support is given by small interactive programs (edu:trainlets) that can be fully integrated into the online script. These programmes may be as simple as an interactive visualization of some formula or concept, like the bubble- and dew-point lenses, or as complex as the simulation of a unit operation like a distillation tower or an adiabatic flash, up to the operation of whole simulated plants in a distributed environment.
1. Introduction
In the cross-university project [my:PAT.org], multimedia-based content is produced and new technologies are developed to be introduced into higher education in the subject matter of process systems engineering. In a joint effort, process system experts and media-didactic experts work on extant concepts to fit them to the drastically changed demands on the work force in the field of process systems engineering and plant design. We aspire to a modular e-learning environment that serves the following needs:
- support of our traditional lectures as well as the theoretical and practical tutorials
- improvement of the attractiveness of engineering studies, especially for female students (gender mainstreaming)
- promotion of teamwork in virtual and real workgroups
- support of self-paced learning styles
- teaching the basic knowledge about the engineering environment of process systems engineering.
2. First Principles Modelling of Chemical Processes
Our teaching goal is that the students understand the first-principles modelling of chemical processes. In the particular context of our course this means functionally describing devices or processes in terms of formulating conservation laws for state variables and connecting these variables by thermodynamic relations. To reduce the complexity for the students, the module concentrates on systems with spatially lumped parameters. This class of systems can be described by ordinary differential equations and some algebraic relations. As this module is taught in the main study period, students should have all of the mathematical and thermodynamic knowledge necessary to solve these modelling tasks.
2.1. Modelling as a design task
Top and Akkermans (1994) draw an analogy between the task of modelling physical systems and the design task known from systems engineering. As the central element common to both tasks they identify a Specify-Construct-Assess (SCA) problem-solving process. In particular, they describe modelling as a stepwise refinement procedure which uses different ontological views at different stages of the model evolution: the component, process, and math views. They put emphasis on the iterative nature of the modelling process and reject the reuse of models. This reuse aspect is one of the core elements of Marquardt's (1992) methodology for the systematic structuring of chemical processes, which was utilized in the modelling tool ModKit (Bogusch et al., 2001). ModKit offers support for the task of substantial abstraction, that is, decomposing the problem into a structure of hierarchically interconnected part models, down to a level of not further decomposable models which include the mathematical equations. As the mathematical description is basically derived from conservation laws, the modelling process in chemical engineering can be efficiently supported by library elements. Nevertheless, both views meet if the typical chemical engineering point of view serves as a common ontological commitment.
2.2. Basic structure of the modelling process
The aspect of structural invariance on the equation level is of high importance for teaching the modelling of chemical processes, because it allows one to identify a general structure for the task of modelling that seems simple at first glance (adapted from Wozny, 2002):
- construct a process flow diagram (component model) by placing particular modules and connecting these modules with streams. As stated before, modules represent processes, devices, and/or phases; this depends on the question the model shall answer
- describe the qualitative behaviour of the modules in terms of applicable conservation laws, connect process variables by physical state equations and specify known properties of the streams and modules (process model)
- formulate the balance equations and physical property relations and specify the given process and stream variables (math model)
- test that the number of unknowns equals the number of equations
- solve the equations to get an answer to your question.
2.3. User requirements analysis
Despite the structural simplicity of the process sketched above, our novice students have to overcome some barriers to become successful in modelling tasks. To identify the problems we conducted a user requirements analysis that included questionnaires, interviews and group discussions with lecturers and students of the department of process dynamics and operation at Technische Universität Berlin. Furthermore, we introduced a new (additional) course called Repetitorium with a new teaching form, switching from ex-cathedra teaching to a highly participative form. In the Repetitorium the students present their modelling solutions, which are worked out in small break-out teams, and discuss the process and results in the plenum. In this way, we were able to elicit the students' concepts (and misconceptions) of the modelling process. The results of the requirements analysis showed that the students' and teachers' views of the shortcomings in the existing curriculum matched quite well (Gauss et al., 2002). Both groups agree that the courses at the university should refer more closely to applied problems without neglecting the scientific approach. While most students consider learning with real-world problems more interesting and challenging, the teachers complain about the students' lack of ability to transfer their theoretical knowledge to practice. The survey showed that most of the students did not have the chance to develop modelling skills in other courses. Students have difficulties with a pragmatic cost/effect choice, i.e. deciding which granularity is necessary and appropriate to answer the given question with minimum modelling effort. In particular, the students have difficulties in reducing the complexity by stating certain assumptions, like neglecting particular phenomena, drawing system boundaries and lumping parameters.
3. Task Oriented Exploratory Learning
The modelling task is an iterative learning process in itself, which calls for stepwise refinement and a change of view at different stages of modelling (Top & Akkermans, 1994). To support this multi-perspective approach as well as the transfer and application aspects, we have decided to develop a learning module that links the theoretical knowledge to applied problems in a story-based scenario.
3.1. Task oriented teaching approach
We embedded new and extant edu:trainlets and edu:snippets in a "real world" story that offers common engineering problems which can be solved by modelling tasks. As the story background, the process of MeOH-H2O distillation was chosen. This approach implied the construction of a new framework and structure for the existing multimedia material. In addition to the semantic framework defined by the story, a real-world navigation
metaphor was implemented. The learners navigate through the story via a display which resembles a process control system, to actively visualize the hierarchical component view. This process control navigation display is placed in the left third of the screen, while the other two thirds remain for the display of the learning content (see figure 1).
Figure 1. Screenshot of the system with the process control navigation display and a text component.
3.2. Pre-structured task environments
To address the granularity and boundary transfer problems of our students, we replaced the former free modelling approaches by a pre-structured task environment. The reduction of the interaction and solution space still allows the students to explore the results and consequences of their modelling decisions with respect to modelling and specification effort (which is a prerequisite for competence development, see Gernert et al., 2000), but enables us to provide helpful feedback like solvability, a degree-of-freedom analysis and some measures of the complexity of the model. Further support is given by small interactive programs (edu:trainlets) that can be fully integrated into the task environment. These edu:trainlets may be as simple as an interactive visualization of some formula or concept, like the bubble- and dew-point lenses, or as complex as the simulation of a unit operation like a distillation tower or an adiabatic flash, up to the operation of whole simulated plants in a distributed environment (Urbas, 1999).
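The solvability and degree-of-freedom feedback mentioned above amounts to comparing the number of unspecified variables with the number of model equations. The sketch below is a minimal, hypothetical illustration of that idea for one binary equilibrium stage; it is not the actual [my:PAT.org] implementation.

```python
# Minimal degree-of-freedom check of the kind a pre-structured task
# environment can report back to the student (illustrative example only).

def degrees_of_freedom(n_equations, variables, specified):
    """DOF = unknowns - equations; 0 means the model is square."""
    unknowns = set(variables) - set(specified)
    return len(unknowns) - n_equations

# One binary equilibrium stage: total balance F = V + L, component balance
# F*z = V*y + L*x, and the equilibrium relation y = K*x (3 equations).
variables = ["F", "V", "L", "z", "y", "x", "K"]
specified = ["F", "z", "K"]            # what the student has entered so far

dof = degrees_of_freedom(3, variables, specified)
print(f"degrees of freedom remaining: {dof}")   # 1 -> one more specification
```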
4. Evaluation
To reduce the risk of wrong design decisions during the early stage of the software engineering process, repeated cycles of formative evaluation were carried out. Subject matter experts and experienced software engineers as well as students took part in heuristic usability tests with single elements, paper-and-pencil mock-ups and prototypes of the whole system. The results of these tests provided useful information for the improvement of the module in terms of the learning content and the interface design. Finally, a highly developed prototype of the module was subjected to an experimental evaluation study with 18 chemical engineering students during a lecture. The students' motivation and attitudes towards computer-based instruction were assessed with questionnaires. Before learning with the module for 45 min., the students completed a multiple-choice test with 15 questions about the topic of stationary modelling. To control the effectiveness of learning with the module, the students completed the same test again in the following lecture, three days after the interaction with the module, without prior notice. Immediately after the interaction with the system, the students were also asked to complete a detailed usability questionnaire. Results showed that the story-based content structure of the course and the process control navigation display were widely accepted by the students. The students judged the navigation in the system as easy to understand, and orientation in the course was not seen as a problem. Not only subjective measures like acceptance and motivation but also the learning performance was positively affected by the new module. The scores of the knowledge test increased significantly from the first to the second test. Thus the module has proved to be an excellent supplement to the existing curriculum. Nevertheless, the students emphasize that they do not want to miss the personal assistance of the teaching staff, as they judged contact with a tutor to be the most important missing feature of the module.
5. Conclusions
Introducing new media into the teaching process requires a systematic parallel-iterative approach, with formative and summative evaluation playing an important role. We expect our approach to be a proper process model for developing and enhancing multimedia-supported teaching. As we accompany the development of new software components with usability tests, the risk of wrong design decisions can be reduced. The results of user tests on prototypes with reduced functionality were fed back into the development process to ensure that the students' requirements are met with our task-oriented learning approach. First results of the experimental evaluation study with a highly developed prototype of the learning module are encouraging.
6. References
Bogusch, R., Lohmann, B. & Marquardt, W., 2001, Computer-aided process modelling with ModKit. Comp. & Chem. Engng 25 (2001), pp. 963-995.
Gauss, B., Hausmanns, Ch., Urbas, L. & Wozny, G., 2002, Multimedia in Learning Process Systems Engineering. In: Proceedings of the 6th Int. Sci. Conf. on Work with Display Units, pp. 141-143, Eds. H. Luczak, A.E. Çakır & G. Çakır. ERGONOMIC Institute, Berlin.
Gernert, R., Krüger, K. & Timpe, K.P., 2000, Kompetenzförderung als Bewertungskriterium für Unterstützungssysteme. In: Bewertung von Mensch-Maschine-Systemen, pp. 75-90, Eds. K.P. Timpe, H.-P. Willumeit, H. Kolrep. VDI-Verlag, Düsseldorf.
Marquardt, W., 1992, Eine Modellierungssystematik zur rechnergestützten Erstellung verfahrenstechnischer Prozeßmodelle. Chem.-Ing.-Tech. 64 (1992), pp. 25-40. English translation in Int. Chem. Engng, 34 (1994), 28-46.
Top, J. & Akkermans, H., 1994, Tasks and ontologies in engineering modelling. International Journal of Human-Computer Studies, 41 (4), 585-617.
Urbas, L., 1999, Entwicklung und Realisierung einer Trainings- und Ausbildungsumgebung zur Schulung der Prozeßdynamik und des Anlagenbetriebs im Internet. VDI Verlag, Düsseldorf (Fortschrittberichte; Reihe 10; Nr. 614).
Wozny, G., 2002, Skript Prozess- und Anlagendynamik. TU Berlin, Berlin.
7. Acknowledgement
We kindly acknowledge the support at dbta and ZMMS; in particular we want to mention R. Zerry's implementation support and J. Huss, Dr.-Ing. B. Goers and their students in the Repetitorium. The work presented in this paper is sponsored by the BMBF in the programme "New Media in Education". The virtual process control system and user-adaptive algorithms are jointly developed with the MoDyS Research Group, which is sponsored by the VolkswagenStiftung within the programme "Junior Research Groups at German Universities".
Modeling of a Batch Process Based upon Safety Constraints
Michiel E. van Wissen, Adam L. Turk, Costin S. Bildea and Zofia Verwater-Lukszo
Delft University of Technology, P.O. Box 5069, 2600 GA Delft, The Netherlands
Abstract
Simulation and optimization are key tools for improving the operation of a chemical process. Unfortunately, the results from these tools are only as good as the accuracy and appropriateness of the process model. This paper presents a model of a polymer batch process that has been modified to include safety constraints. These safety constraints are based upon the reactor being able to contain an explosion. Based upon this idea, runaway behavior and cooler limitations are incorporated in the process model, since they determine the possibility of an explosion. Neither modeling task was simple, since both relied upon conditional statements, the generation of multiple scenarios, and the computational complexity of nonlinear equations such as the log-mean temperature difference. The optimization of the process model with respect to runaway behavior required, for example, the analysis of multiple scenarios or parallel simulations. These multiple scenarios describe the effect of runaway reactions starting at different times from the basic process profile. In the end, the simulation of the process model ran satisfactorily, and work on the optimization of the model is in progress.
1. Introduction
A substantial part of the chemical, food, pharmaceutical and metallurgical industries relies on batch-wise production processes. The dynamic and complicated character of the recipe steps and the equipment can lead the process to reach unsafe conditions. The accurate and appropriate modeling of the batch process and the recipe is the focus of this research. In particular, the goal of this paper is to present the modeling for simulation and optimization of safety constraints related to a polymer batch process.
2. Description
The polymer batch process comprises a Continuous Stirred Tank Reactor (CSTR) and a countercurrent heat exchanger for cooling (Figure 1). The reaction can be represented as a simple two-component exothermic polymerization of the following form:
A + B → C
The polymerization takes place in the presence of a catalyst. The objective of the batch process is to maximize production of the product by controlling the amount and timing of the added reactants along with the reactor's temperature profile. Of course, this objective is conditional on satisfying certain safety constraints. In the initial recipe the feed rates for components A and B, the temperature and the catalyst charge, which are also the decision variables in the optimization, are set.
Figure 1: Process Scheme
3. Safety Constraints
The safety constraints primarily keep the temperature and pressure of the reactor from reaching critical values. These safety constraints are based upon process containment during runaway conditions and the limitations of the heat exchanger used for cooling. The batch process has to be operated in a manner that would allow the system to contain an explosion from a runaway reaction. A runaway reaction is defined by the failure of the cooling system. It should be noted that the batch process does have a control procedure that stops the process when a trip temperature or pressure is reached. Nevertheless, there is a period during which the recipe steps are still being performed but the cooling system has failed. In addition to the runaway conditions, the limitations of the cooling system need to be modeled. The limitations of the cooling system have an impact on the temperature profile of the reactor, which in turn affects the generation of product. One limitation of the cooling system is the energy flux between the polymer stream and a water stream, as well as the energy leaving in the water stream.
3.1. Runaway conditions
The problem facing the modeling of the runaway conditions is the need to switch between following a temperature profile and generating one. The conditional statement is simple to represent in current simulation packages. The modeling difficulty comes from the selection of the equations and variables that will be affected by the conditional statement. The temperature of the reactor can be used, since it is known during normal operation if one assumes perfect agreement between the given and actual profiles. Nonetheless, the temperature is unknown under runaway conditions. The solution to this problem is to select the energy flow or flux instead of the temperature. The necessary energy
flux leaving the system to maintain the given temperature profile can be calculated, while the same flux is zero during runaway conditions due to the failure of the cooling system. Therefore, the conditional statement for the runaway conditions switches the necessary energy flux leaving the system:

if Time > Time_runaway then
    Q = 0
else
    Q = M_total · Σ_i (x_i·Cp_L,i + Cp_V) · ΔT/Δt − Q_Rxn − Q_Feed    (1)

The calculation of the necessary energy flux to maintain the given temperature profile assumes that the rate of temperature change is known. In turn, the energy flux selected by this conditional statement can be used in the differential energy equation (Eqn. 2) to calculate the true temperature profile of the reactor.
M_total · Σ_i (x_i·Cp_L,i + Cp_V) · dT/dt = Q_Rxn + Q_Feed + Q    (2)

This differential equation is based on the enthalpy of the reaction, the enthalpy of the feed stream, and the energy flux leaving the system.
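A minimal forward-Euler sketch of Eqs. (1)-(2) follows; it also sweeps over possible runaway times, anticipating the scenario-integrated analysis described after Figure 2. All numerical values (heat-release law, heat capacities, feed duty, profile) are assumptions for illustration, since the paper gives neither the kinetics nor the physical properties of the polymerization.

```python
import math

# Forward-Euler sketch of the conditional energy balance, Eqs. (1)-(2).
# All numbers below are illustrative assumptions, not data from the paper.

M_CP   = 5.0e5      # M_total * sum(x_i*Cp_i), J/K (assumed)
Q_FEED = 2.0e4      # enthalpy flow of the feed stream, W (assumed)
DT     = 1.0        # integration step, s

def q_rxn(temp):    # assumed exothermic heat release, W
    return 5.0e4 * math.exp((temp - 350.0) / 60.0)

def profile(t):     # given temperature profile for normal operation, K
    return 350.0 + 5.0e-3 * t

def simulate(t_runaway, horizon=3600.0, t_limit=600.0):
    """Integrate the batch with the cooling system failing at t_runaway."""
    temp, t = profile(0.0), 0.0
    while t < horizon and temp < t_limit:
        if t > t_runaway:                 # Eq. (1): runaway branch, Q = 0
            q = 0.0
        else:                             # Eq. (1): flux that holds the profile
            q = M_CP * (profile(t + DT) - profile(t)) / DT - q_rxn(temp) - Q_FEED
        temp += DT * (q_rxn(temp) + Q_FEED + q) / M_CP   # Eq. (2)
        t += DT
    return temp, t

# One simulation per assumed runaway time gives the surface of Figure 2:
for t_rw in (600.0, 1200.0, 1800.0, 2400.0, 3600.0):
    temp_end, t_end = simulate(t_rw)
    print(f"cooling lost at {t_rw:6.0f} s -> T = {temp_end:4.0f} K at t = {t_end:5.0f} s")
```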
Figure 2. Solution of a scenario-integrated dynamic system.
The conditional statement in equation (1) switches the leaving energy flux based upon a selected time. Since the occurrence of a runaway reaction is indeterminate in reality, the use of the trip time allows the runaway conditions to be represented. In Abel and Marquardt (2000), a new methodology for capturing uncertainties such as runaway reactions in the model, so-called scenario-integrated modeling, is introduced. At
certain switching times the model exhibits 'new' behavior, and it switches between the nominal mode and the scenario mode as depicted in Figure 2. Here, we show all possible trajectories for one specific state variable (in this case the temperature), which form a surface in the 'Temperature-Time-TimeRunaway' space. Since the time instant of an event is assumed to be unknown, any instant on the time horizon of the nominal mode must be treated as a possible switching time, resulting in the simultaneous analysis of the system along multiple time axes. Since the runaway behavior is allowed to happen at any time, multiple reactors, each having a different runaway time, were created in one process model. In this way, we captured the scenario-integrated modeling proposed by Abel and Marquardt (2000).
3.2. Cooler limitations
As noted earlier, the cooler limitations are based upon the energy flux between the polymer stream and a water stream, as well as the energy leaving in the water stream. The energy flux between the polymer and water streams in a countercurrent heat exchanger is described by the following equation:
Q = U·A·LMTD,    LMTD = (θ₁ − θ₂) / (ln θ₁ − ln θ₂)    (3)
The variables U, A, and θ represent the overall heat transfer coefficient, the area of the heat exchanger, and the temperature difference between the polymer and water streams at different points. The logarithmic-mean-temperature difference (LMTD) is defined in the second part of equation (3). The primary modeling effort was to use this equation to calculate the temperature difference at point 1, since all other variables were known. Unfortunately, this equation has a discontinuity when the temperature differences at points 1 and 2 are equal. This discontinuity makes simulation difficult, since it separates the feasible space for the temperature difference into two regions. Equation (3) can be rewritten to avoid this discontinuity:

Q · (ln θ₁ − ln θ₂) = U·A·(θ₁ − θ₂)    (4)
However, this form of the equation creates an infeasible solution when the two temperature differences are equal, since the energy flux, the overall heat transfer coefficient and the heat exchanger area become inconsequential. In physical terms, the two temperature differences can be equal. Equation (4) has to be changed so that this possibility is included:

if θ₁ = θ₂ then
    Q = U·A·θ₁
else
    Q · (ln θ₁ − ln θ₂) = U·A·(θ₁ − θ₂)    (5)
The two possibilities were thus to create a conditional statement that switches this equation, or to replace the LMTD with a simplified form, such as that of Underwood:

LMTD^(1/3) = ½ · (θ₁^(1/3) + θ₂^(1/3))    (6)
Other approximations for the LMTD, proposed by Paterson (1984) and Chen (1987), were also tried. Neither of these methods solved the problem. The first possibility would cause the conditional statement to flip back and forth infinitely between the different equations as the solver tried to find a solution. The Underwood, Chen and Paterson approximations had too many potential roots or solutions for a solver. One potential way of avoiding the computational problems is to model the cooling without the logarithmic-mean-temperature difference, as is done in Akman et al. (2002). We avoided the computational complexities by using equation (4) to calculate the necessary area of the heat exchanger instead of one of the temperature differences. One can see that the heat exchanger's area has only one possible solution for a given set of values of the other variables. The only problem is that the two temperature differences can still be equal. The following conditional statement solved this problem:
if (θ₁ − θ₂) < ε then
    Q = U·A·θ₁
else
    Q · (ln θ₁ − ln θ₂) = U·A·(θ₁ − θ₂)    (7)

This conditional statement allows the energy equation to be replaced around the discontinuity. The ε > 0 is set to avoid computational problems with the error tolerances of the simulation or optimization solvers. The calculation of the heat exchanger's area is also beneficial in determining the limitation of the cooling system, since an optimal area has already been determined. For example, the process does not exceed this cooling limitation as long as the calculated area is less than the actual area. In optimization, this concept can easily be represented by constraining the area of the heat exchanger with a path inequality. The other cooling limitation is the amount of energy that the water stream can remove. This limitation is represented by the following equation:

−Q = F_water · Cp_water · (T_out − T_in)    (8)
The flowrate of the water stream is set to its maximum value. The outlet temperature of the water stream can then be calculated from this equation. The outlet temperature is also included in the optimization problem as a path inequality.
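The guarded evaluation of Eqs. (7)-(8) can be sketched as follows. U, the duty and the stream data are assumed illustrative values; ε is kept above typical solver tolerances, as the text requires.

```python
import math

# Guarded heat-exchanger relations of Eqs. (7)-(8): the logarithmic form is
# replaced near theta1 == theta2, and the water-side balance gives the
# coolant outlet temperature.  All numerical inputs below are assumed.

EPS = 1.0e-6    # epsilon > 0 of Eq. (7), larger than typical solver tolerances

def required_area(q_duty, u, theta1, theta2):
    """Heat-exchanger area needed to transfer q_duty (W) at the given end
    temperature differences, using the two branches of Eq. (7)."""
    if abs(theta1 - theta2) < EPS:
        return q_duty / (u * theta1)          # limiting case: LMTD -> theta1
    return q_duty * (math.log(theta1) - math.log(theta2)) / (u * (theta1 - theta2))

def water_outlet(q_leaving, f_water, cp_water, t_in):
    """Eq. (8): -Q = F_water*Cp_water*(T_out - T_in); Q < 0 when cooling."""
    return t_in - q_leaving / (f_water * cp_water)

q = -1.5e5                                    # flux leaving the reactor, W
area = required_area(abs(q), u=800.0, theta1=40.0, theta2=25.0)
t_out = water_outlet(q, f_water=2.0, cp_water=4180.0, t_in=293.15)
print(f"required area: {area:.1f} m2, water outlet: {t_out - 273.15:.1f} C")
```

In the optimization, the two cooling limitations then read as path inequalities of the form area(t) ≤ A_actual and T_out(t) ≤ T_out,max.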
4. Results
The process model with the addition of the safety constraints has been successfully simulated. If the process model is tuned to a real process, then the results obtained from the simulation match those observed in practice. Unfortunately, the runaway conditions could not be completely verified by physical data, but the results do match the results from other models. In the optimization, the objective function is to maximize the product generated versus the process time. The desired product was defined by endpoint equalities and inequalities, such as the amount of unreacted components. In addition, the safety conditions required certain path constraints on state variables such as the temperature. Unfortunately, we experienced optimization problems with the above formulation. The problems stem from getting stuck in infeasible regions due to the complexity of the process and the nonlinearity of the objective function. At the moment, we are working to overcome these problems so that we can test the runaway behavior and cooler limitations with respect to optimization.
5. Summary
In the present paper we modeled a polymer batch process including safety constraints and runaway conditions, with the aim of simulating and optimizing the model. It was shown which choices were made in the modeling and how the models were made suitable for scenario-integrated optimization. An optimization problem was formulated, but no satisfactory results have been obtained yet.
6. References
Abel, O. and Marquardt, W., (2000), 'Scenario-integrated modeling and optimization of dynamic systems', AIChE Journal, 46(4).
Akman, U., Uygun, K., Konukman, A.E. and Uzturk, D., (2002), 'HEN optimizations without logarithmic-mean-temperature difference', AIChE Journal, 48(3).
Chen, J.J.J., (1987), 'Comments on improvements on a replacement for the logarithmic mean', Chemical Engineering Science, 42.
Coulson, J.M., Richardson, J.F. and Sinnott, R.K., (1983), Chemical Engineering, Vol. 1, Pergamon Press.
Paterson, W.R., (1984), 'A replacement for the logarithmic mean', Chemical Engineering Science, 39(11).
Stoessel, P., (1995), 'Design thermally safe semibatch reactors', Chemical Engineering Progress.
Underwood, A.J.V., (1933), 'Graphical computation of logarithmic mean temperature difference', Industrial Chemist, 9(167).
Modelling at Different Stages of Process Life-Cycle
Terhi Virkki-Hatakka¹, Ben-Guang Rong¹, Krisztina Cziner², Markku Hurme², Andrzej Kraslawski¹, Ilkka Turunen¹
¹Lappeenranta University of Technology, P.O. Box 20, 53851 Lappeenranta, Finland
²Helsinki University of Technology, P.O. Box 6100, FIN-02015 HUT, Finland
Abstract
The role of models in process development is discussed in this paper. Scientists are producing more and more impressive results in computer-aided process engineering and thereby generating new possibilities. In spite of that, there exist, at least in most areas of the chemical and process industries, serious difficulties when models are applied in practice. Such difficulties are described with two industrial examples of process development projects. Requirements for models and modelling are discussed from the viewpoint of practical process development. An attempt is made to suggest further research areas in computer-aided process engineering that would reduce the gap between the theoretical developments and practical application in the field.
1. Introduction
In many traditional areas of the chemical industry, e.g. in oil refining and petrochemistry, modelling and simulation are well developed. Also, the design methodology has been established: one starts with conceptual design using approximate models and proceeds towards more detailed descriptions of processes and equipment. However, it is well known that reliable modelling of many chemical processes, e.g. those including solids or electrolytes, is not always so straightforward. This is even truer outside the actual chemical industry, e.g. in pulp and paper, as well as in the food industry. Also in the chemical industry, new technology, such as intensified units, multifunctional units, new chemical routes etc., offers challenging and complicated modelling targets which cannot be handled with existing commercial simulation packages. Product development is becoming more and more important, and in many processes the properties of the product are the main concern. In fact, the application area of chemical engineering is widening. In many cases the novelty of the process and the innovative approach require the development of tailor-made, detailed new models. Modern processes of the chemical industry often have only a few steps. Then the main target, determining the success of the process, might not be the process structure but the detailed analysis of the most critical process unit. For these reasons detailed modelling of processes is becoming more and more important. In these models one should also be able to include the specific features of the case at hand, as well as the new features originating from the creativity of the process developer. The different steps in the development of such a detailed process model can be presented in the following way:
• Identification of the main purpose of the model, i.e. the clear statement of the industrial objective. However, the goal should be that the same models, with minimum changes, could be used at all stages of the process life-cycle.
• Identification of the different phenomena of the process. Deep knowledge of the process is an advantage for many reasons, and therefore one should favour mechanistic models.
• Selection of the most important phenomena and planning of their experimental research. In this connection it is often necessary to divide the process into several subsystems to avoid excessive complexity and too large a number of parameters. An adequate combination of laboratory, mock-up and pilot-scale experiments has to be chosen from an extremely large number of possible ones.
• Selection of the theoretical basis from possibly several competing theories.
• Formulation of the equations.
• Solution of the model.
• Presentation and interpretation of the results.
• Parameter estimation and model validation based on experimental data. Experimental design methodologies should be adopted.
• Documentation of the model.
• Integration of the model into the total system, i.e. evaluation of the impact of the technology on the whole process.
• Further development of the model as the project proceeds.
This modelling sequence is iterative and the developer usually returns to earlier steps after checking the results. From the preceding list, proper selection and validation of the models are important conditions for useful practical applications.
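The parameter estimation and validation step listed above can be illustrated with a standard least-squares fit. The first-order rate model and the data points below are purely illustrative and do not come from either industrial project.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative parameter estimation for a mechanistic first-order model
# X(t) = 1 - exp(-k*t); the data points are invented for the example.

def conversion(t, k):
    return 1.0 - np.exp(-k * t)

t_data = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 90.0])   # time, min
x_data = np.array([0.0, 0.18, 0.33, 0.55, 0.69, 0.83])   # measured conversion

(k_hat,), k_cov = curve_fit(conversion, t_data, x_data, p0=[0.01])
print(f"k = {k_hat:.4f} 1/min (std {np.sqrt(k_cov[0, 0]):.4f})")
```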
2. Process Examples
Modelling experiences from two real industrial projects at Kemira, Finland, are presented. In both cases the process to be developed is novel, or at least has plenty of novel features. The first example, the hydrogen peroxide process, is an excellent example of the successful use of models through the whole process life-cycle, from R&D to design, operation and process improvement. In the second case, the melamine process, only the R&D and design stages are based on real experiences, and the description of the later stages, i.e. piloting and operation, is based just on a project plan.
2.1. Anthraquinone process for production of hydrogen peroxide
The process has been briefly described by Turunen (1997) as an example of process intensification activities. Most hydrogen peroxide production is nowadays based on the anthraquinone method. The differences between the technologies mainly comprise differences in solvents, catalysts and equipment types and details. The process has fewer than ten main unit operations, including two multiphase reactors, liquid-liquid extraction, gas desorption, distillation and filtration. The process conditions do not include high temperatures or pressures. The necessary properties are not readily available from the literature because of the large number of components in the process liquid. However, the measurement of most of the properties is relatively easy because of the mild conditions. The number of components which take part in the main production reactions and separation steps is small. Therefore it was possible to develop reliable models for most of the unit operations and to base the design on these models. However, the side reactions and by-products involve complicated chemistry and
unknown phenomena, which had to be left outside the models. These phenomena did not affect the sizing of the process much, but they were very important in the operation of the plant. Observations concerning the use of models at different steps are shown in Table 1.

Table 1. Observations concerning modelling activities during the hydrogen peroxide process life cycle.

Project step: R&D
Role of models: Mechanistic models were developed to study reaction rates, catalyst activity, mass transfer rates, hold-ups and physical properties. The models were validated by laboratory and bench-scale experiments. Parameter estimation, parameter sensitivity studies and experiment planning were very important computational activities in this work.
Problems in model development and use: Many conventional chemical engineering correlations proved inadequate because of the gradually changing properties of the fluids and certain specific physical phenomena. The large number of parameters frequently led to attempts to separate the phenomena and to study them one at a time.

Project step: Conceptual design
Role of models: Modelling of unit operations was more important than synthesizing the process structure; the latter was self-evident. Calculation of the mass and heat balances of the process was straightforward. Detailed models of the most important process units were needed already at this stage to study the most decisive technological questions.
Problems in model development and use: The modelling of process units was done at a detailed level and no commercial software was adequate for that. Therefore tailor-made, specific models were needed.

Project step: Piloting
Role of models: The models had an important role in the design of the equipment of the pilot plant. The parameters of the models were estimated using experimental results from pilot tests. This was done partly "on-line" in the automation system.
Problems in model development and use: Changes in the composition of the process fluids and the process conditions decreased the accuracy of parameter estimation.

Project step: Detailed design
Role of models: Improved models were used in the design of the main equipment. Because of the reliable models, it was possible to use a very high scale-up ratio. The models also contained very fine structural details of the equipment, such as the hole size of the sieve trays in the extraction, the geometrical arrangement of static mixers in the tubular reactors, etc.
Problems in model development and use: The phenomena which were not included in the models, mainly concerning side reactions, caused some risks in the design.

Project step: Operation and further development
Role of models: Optimal or practical conditions of the process were predicted by the models. This was useful especially in start-up and the early days of production. The models were improved by re-estimation of the parameters on the basis of process measurements. The models were continuously used for troubleshooting. They were used for further development as well, especially for process intensification.
Problems in model development and use: As part of the operators' support system, the models should have given accurate answers in all conditions. Reaching such a reliability level with models is very difficult.
2.2. Melamine process
This process has been described by Turunen and Oinas (1998). The process conditions are very challenging, including temperatures of 400-470 °C, a pressure of 100 bar and serious corrosion problems. Melamine has some unfavourable properties, such as the tendency to sublimate when heated and to solidify on solid surfaces. These cause problems in the operation of the pilot and full-scale plants. Typical problems include leakages, blockages in pipelines and failures of seals and instruments. Estimation of physical properties in these conditions is very difficult, and so is their measurement. Therefore one has to rely on rough, approximate values. The number of main process steps is even smaller than in the hydrogen peroxide process. They include a high-temperature gas-lift reactor, a scrubber, an evaporator, a crystallizer, a solid-liquid separator and solids handling. Heat transfer apparatus forms an important part of the equipment because of the high temperatures and the very endothermic reaction. Therefore heat integration is very important for this process.

Table 2. Observations concerning modelling in the development of the melamine process.

Project step: R&D
Role of models: Attempts to model certain physical properties and the main process units.
Difficulties in model development and use: Modelling was very difficult because the possibilities to make laboratory and bench-scale experiments were limited (proper conditions could be reached only in the pilot plant). The models had plenty of unknown parameters because of the lacking properties and limited experimentation possibilities.

Project step: Conceptual design
Role of models: Approximate models of the main unit operations for balance calculations.
Difficulties in model development and use: High number of parameters.

Project step: Piloting
Role of models: Almost the only way to get experimental results. Model parameters to be estimated include both property and equipment parameters.
Difficulties in model development and use: Sampling difficulties and non-redundancy of measurement points.

Project step: Detailed design
Role of models: High risk in scale-up, because a lot of empirical data has to be used instead of a mechanistic approach. Balance calculations useful for energy integration.
Difficulties in model development and use: Limited value of models.

Project step: Operation
Role of models: Models have a role in fault detection and troubleshooting.
Difficulties in model development and use: Purity of the product is the main criterion for successful operation (in addition to costs and reliability). However, only the main components, not the impurities, were included in the models.
The side reactions and by-products had to be left outside the models. Their inclusion would have required an extensive scientific study, which is not possible within an industrial R&D project. In general, in this melamine case the models were less useful than in the hydrogen peroxide process, and one had to rely more on experimental data. The main reasons were the lack of reliable physical properties and difficulties in model validation. The latter problem was caused by the fact that under these process conditions laboratory and bench-scale experimentation was very limited. A pilot plant was needed to reach the real process conditions, and therefore less flexibility was available in experimentation. Modelling experiences from the melamine project are shown in Table 2.
3. Model Selection and Validation
The project examples describe a very typical situation, indicating that commercial simulation programs can be used only to a limited extent in the practical development of novel processes. Usually the existing models are not detailed enough and are incapable of coping with the specific features of real development projects. Attempts have been made in universities to develop flexible tools which could reach the required specificity and level of detail. For one reason or another, these have stayed at the level of research results and are not much utilised in practice. Companies, on the other hand, develop their own specific and detailed models for their own purposes. These can be used only for simulation of those processes for which they have been developed. In model selection, one should of course favour a mechanistic approach, but often a compromise between mechanistic and empirical models is needed. All the decisions in model selection should be based on the main purpose of the model. The purpose of the model also determines the degree of detail in the model. Increasing the detail and theory in the models, while usually increasing accuracy and reliability, also often brings more parameters to be estimated and therefore more experimental activities. In practice the best way is often to use several levels of "granularity", so that detail can be provided in the critical areas but a simplified or empirical approach used in non-critical areas. Validation of the models on the basis of experimental results is extremely important. First of all, the complexity of the model has to be compatible with the quality of the experimental data available. Crude data with a high noise level may identify only rather crude models. A proper interplay between experimental and modelling work is crucial. A good fit between the model and the measurements is usually not sufficient; in addition, the proper values of the parameters have to be identified and the mutual correlations between them revealed. This is especially important when the models are used for extrapolation, e.g. scale-up.
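The point about parameter correlations can be made quantitative with standard linearized least-squares statistics: the covariance of the estimates is s²(JᵀJ)⁻¹, where J is the residual Jacobian at the optimum. The sketch below uses invented numbers to show how near-collinear sensitivities produce strongly correlated parameters.

```python
import numpy as np

# Linearized parameter statistics: cov = s^2 * inv(J^T J), where J holds the
# sensitivities of the residuals to the parameters (invented example values).

def parameter_statistics(jacobian, residuals):
    dof = jacobian.shape[0] - jacobian.shape[1]     # data points - parameters
    s2 = residuals @ residuals / dof                # residual variance
    cov = s2 * np.linalg.inv(jacobian.T @ jacobian)
    std = np.sqrt(np.diag(cov))
    return cov, cov / np.outer(std, std)            # covariance, correlation

J = np.array([[1.0, 0.9], [2.0, 1.7], [3.0, 2.9], [4.0, 3.6]])
r = np.array([0.05, -0.03, 0.04, -0.02])
cov, corr = parameter_statistics(J, r)
print("parameter correlation:\n", np.round(corr, 3))   # off-diagonal near -1
```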
4. Conclusions
Conventional simulation models do not support creativity but tend to lead to existing solutions. Moreover, the specific features of practical modelling problems force the users to develop their own models. These problems will increase in the future, when the expanding application field of chemical engineering brings new modelling cases where a detailed, specific and innovative approach is needed. To avoid this problem one could suggest a model library containing mechanistic models of different physical and chemical phenomena, as described in Fig. 1.
[Figure 1 content: a library of models of phenomena (reactions & catalysis; solid surfaces, particulates; heat and mass transfer; diffusion; mixing & phase separation) accessed by a creative user, guided by the specific requirements of the application, experimentation with modern measurement methods, and the requirements of the life-cycle architecture.]
Fig. 1. Library of phenomena models for innovative modelling of novel processes.
The system described in Fig. 1 could be used to develop new processes and process units by combining relevant phenomena according to specific goals and constraints. This would allow a creative approach, and the specific requirements of each case could also be taken into account. The modelling should be done together with simultaneous experimentation, and therefore mathematical software for parameter estimation, sensitivity analysis and experiment planning would also be needed. The system would help in the development of models with different degrees of detail for specific purposes: intensified units, multifunctional units, new chemical routes, properties of products etc. Such a tool would inevitably be a tool for experts; its use would require expertise.
5. References
Turunen, I., 1997, Intensification of the anthraquinone process for production of hydrogen peroxide. Proceedings of the 2nd International Conference on Process Intensification, BHR Group Conference Series No. 28, 99-107.
Turunen, I., Oinas, P., 1998, U.S. Patent 5,731,437.
6. Acknowledgements The authors would like to thank Kemira Chemicals Oy for permission to present the example processes.
The CFD Simulation of Temperature Control in a Batch Mixing Tank Guangyu Yang, Marjatta Louhi-Kultanen and Juha Kallas Department of Chemical Technology, Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta, Finland
Abstract
A CFD simulation method for a batch cooling or heating mixing tank, which follows a specific controlled program, was developed in this study. The momentum transfer and turbulent heat transfer in the whole tank were included in the computation while the wall temperature was reset according to time. The simulation results show that the average temperature in the tank adheres to a specified cooling curve, but the temperature distribution can be ignored in a 10-liter mixing tank. The simulation results were verified with batch cooling experiments performed on pure water. Furthermore, the temperature distribution in a large-scale 5-m³ tank and the application of the temperature distribution to other chemical processes in the mixing tank are briefly discussed.
1. Introduction
Mixing tanks are used for cooling or heating a solution through the jacket in many batch chemical processes, and it is well known that heat transfer in such processes is strongly affected by the hydrodynamics. Heat transfer therefore depends on the geometry and scale of the tank, in which the temperature distribution differs. Heat transfer in mixing tanks has been studied extensively, with the emphasis on obtaining empirical models that correlate heat transfer with the hydrodynamic conditions, the physical properties of the solution and the geometric parameters of the tank, for use in process design and control (Karcz and Kaminska-Borak, 1997; Strek and Karcz, 1997). The shortcoming of the experimental approach is that the results are geometry-dependent, and it is very time-consuming to investigate case by case. Computational fluid dynamics (CFD), being a numerical method, allows heat transfer to be investigated in a different way: by solving the basic transport equations, i.e. the continuity, momentum and energy transport equations, in the studied domain under specific boundary conditions, so that the simulation method is geometry-independent. In recent years some research has been done in this active area (Barrue et al., 1999; Boltersdorf et al., 2000). In industrial production, the temperature distribution in a batch mixing tank is sometimes a critical factor in process control, for instance in batch crystallization and in batch chemical reaction systems. In some processes the solution temperature should follow a certain cooling or heating program, while in other cases heat should be removed in such a way that the temperature in the tank is kept constant in order to prevent temperature runaway. Heat transfer control is therefore usually required during the batch and plays a crucial role in optimising the operation of the process in order to obtain the desired end-product and improve operational safety.
This study proposes a CFD simulation that can be used to model controlled batch cooling in a mixing tank. Using this model, the mean temperature in the tank can be controlled to follow a required profile, and the temperature distribution in the tank can be monitored during the batch time. According to a literature review carried out by the authors, no research on this kind of controlled turbulent heat transfer in mixing tanks has been reported in the field of CFD simulation. This study compares temperature distributions in tanks of various sizes in order to investigate the scale-up of heat transfer in a batch mixing tank. The principle presented in this work applies generally to jacketed mixing tanks operated batch-wise. Further, the application of the proposed model to other processes is discussed.
2. Simulation Theory and Simulation Technology
2.1. The simulation model for heat transfer between the wall and the fluid
In the simulation, the heat transfer between the coolant and the solution is simplified as heat transfer from the wall to the solution in the mixing tank. In order to keep the solution temperature on a specified cooling curve during the batch, the wall temperature has to be reset over time according to the process requirements. The variation of the wall temperature over time results in a variation of the solution temperature over time and with location, according to the heat transfer simulation. Based on the solution temperature distribution at the current time step, i, and at the previous time step, i-1, the rate of transferred heat as well as the average solution temperature at the current time step i can be calculated. Furthermore, the heat transfer coefficient at the current time step i can be calculated from the rate of transferred heat and the temperature difference between the solution and the wall. The main equation for the calculation of heat transfer can be expressed as:
Σ_j c_p · ρ · V(j) · [T_s(i−1, j) − T_s(i, j)] / Δt = h(i) · A · [T_s,a(i) − T_w(i)]    (1)
where i indicates the time step and j the location in the tank. In order to proceed with the simulation, the wall temperature has to be reset for time step i+1. The solution temperature at the end of time step i+1 can be estimated from the cooling profile F(t), which is the requirement of the batch operation process:

T_s,p(i+1) = F(t_{i+1})    (2)
The predicted rate of transferred heat for time step i+1 can then be calculated and, thereby, the wall temperature at that time step estimated using the heat transfer coefficient of time step i:

T_w(i+1) = T_s,p(i+1) − H_p / (A · h(i))    (3)
Based on this method, the simulation can be continued until the end of the batch run, and the temperature distribution T(i, j) in the tank obtained at the different operation times. For the initial step, the solution temperature is assumed to be uniform in the tank, and the initial wall temperature can be estimated from the solution temperature required at the end of the first time step according to the cooling profile.
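To make the stepping scheme of Eqs. (1)-(3) concrete, the following Python sketch performs one controller step; this is a minimal illustration assuming NumPy arrays for the per-cell fields, and the function name and the volume-weighted averaging are our own choices rather than part of the paper, where these sums are evaluated over the CFD cells inside the solver.

import numpy as np

def wall_temperature_step(Ts_prev, Ts_curr, Tw_curr, F, t_next, dt, rho, cp, V, A):
    # Eq. (1): heat released by the solution over the last time step
    Q = np.sum(cp * rho * V * (Ts_prev - Ts_curr)) / dt
    Ts_avg = np.average(Ts_curr, weights=V)     # average solution temperature
    h = Q / (A * (Ts_avg - Tw_curr))            # heat transfer coefficient h(i)
    # Eq. (2): solution temperature demanded by the cooling profile F(t)
    Ts_pred = F(t_next)
    # predicted duty Hp needed to reach Ts_pred during the next step
    Hp = np.sum(cp * rho * V * (Ts_avg - Ts_pred)) / dt
    # Eq. (3): wall temperature to impose for the next time step
    Tw_next = Ts_pred - Hp / (A * h)
    return h, Tw_next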
2.2. The simulation of turbulent heat transfer in the mixing tank
Sliding-grid technology, as presented by Luo et al. (1993), is used in the CFD simulation of the transient flow in the stirred tank. The mesh is divided into two domains, of which one is fixed in space and the other is fitted to the impeller blades and rotates with the impeller. The calculations are initiated from a state of rest and, after a few revolutions, a periodic state is reached in which the flow repeats itself from cycle to cycle, describing the mixing status in the tank. The fluid flow was assumed to be fully developed turbulence, the Reynolds number exceeding Re = 2×10⁴, as proposed by Ibrahim and Nienow (1995). The turbulent heat transport equation was used together with the continuity and momentum transport equations and the turbulent k-ε flow model to simulate the heat transfer process in the main turbulent flow area. In this simulation, the value of the turbulent Prandtl number was set to 0.9 (Anderson, 1984). The boundary conditions used in this CFD study are specified using wall functions based on the concept of the universal law of the wall, which assumes that the near-wall region is an area of constant shear stress and that the length scale of a typical turbulent eddy in this region is proportional to the distance from the wall. This assumption results in logarithmic velocity and temperature profiles near the wall. The values of the parameters required for the heat transfer boundary were determined based on the theory proposed by Jayatilleke (1969).
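As an illustration of this wall-function treatment, the sketch below evaluates the logarithmic law of the wall together with the commonly quoted form of the Jayatilleke sublayer resistance term; the constant values are assumed standard ones and are not taken from the paper.

import math

KAPPA, E_WALL = 0.41, 9.793          # assumed standard log-law constants

def u_plus(y_plus):
    # logarithmic law of the wall for the dimensionless velocity
    return math.log(E_WALL * y_plus) / KAPPA

def jayatilleke_P(Pr, Pr_t=0.9):
    # sublayer resistance term P(Pr/Pr_t) of Jayatilleke (1969)
    r = Pr / Pr_t
    return 9.24 * (r ** 0.75 - 1.0) * (1.0 + 0.28 * math.exp(-0.007 * r))

def T_plus(y_plus, Pr, Pr_t=0.9):
    # dimensionless near-wall temperature in the logarithmic region
    return Pr_t * (u_plus(y_plus) + jayatilleke_P(Pr, Pr_t))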
3. Simulation Results and Experimental Verification
The stirred 10-liter tank used in the simulation was a jacketed cylindrical vessel with a shaped bottom and four baffles. Mixing was performed using an impeller with six blades pitched at 45°. Cooling took place through the jackets of the vertical cylindrical part and the draft tube, whereas the shaped bottom of the tank was not jacketed. The temperature distribution in the tank was simulated using water as the fluid. In the simulation, the initial temperature was set at 50°C and was reduced to 30°C over a period of 40 minutes, i.e. at a constant cooling rate of 30°C/h. The rotation speed of the impeller was 250 rpm, corresponding to Re = 6×10⁴. The simulated results in Fig. 1 show that the mean temperature in the tank follows the cooling profile required in the batch cooling process, which indicates that the temperature control of a batch cooling process can be implemented using CFD simulation. Fig. 2 shows that the temperature distribution at the end of the run is almost even in most parts of the tank, which can be explained by the fully turbulent heat transfer. The heat transfer boundary can be clearly seen near the draft tube and the wall. It can also be observed that the temperature is relatively low on planes 2 and 3, which can be explained by the fluid dynamics and by the cooling conditions in the tank. The small differences between the above-mentioned locations clearly show how heat is removed from the tank: in the upper region the temperature is relatively high, because the heat transfer area there is smaller and the flow rate of water is relatively low, while the downward flow leads to more efficient heat transfer in the draft tube region and results in the lowest temperatures outside the draft tube.
Fig. 1. The simulated solution and wall temperature in a 10-liter tank.
Fig. 2. The temperature distribution in a 10-liter tank; values in K.
Fig. 3. Positions 1-7 used in the temperature simulations and measurements in a 10-liter tank; position 8 was used as the input for the temperature control.
To verify the simulated temperatures, an experimental study was carried out as described here. A tank with the same geometry as that used in the simulation was employed in the experiments, together with the same operating conditions and cooling profile as in the simulation. During the cooling process, the temperatures at the seven locations shown in Fig. 3 were monitored. At location 8, the local temperature was close to the mean temperature in the tank according to the simulation results; therefore, the temperature measured at location 8 was used to control the cooling program in the experiments. Locations 1 to 7 are points at which the local temperature was both measured and simulated during the batch cooling. Comparisons of the measured and simulated temperatures are shown in Figs. 4 and 5. A typical result, for location 4, in Fig. 4 shows that the simulated temperature fits the measured values well and that the time-dependent temperature in the tank can be modelled using CFD. Furthermore, the temperature is practically uniform in the 10-liter tank according to both the simulation and the experimental results shown in Fig. 5; the small difference between the two sets of results can be explained by measurement error.
[Figure residue: bar chart comparing experimental (exp) and simulated (sim) temperatures at measurement points P1-P7; x-axis: location of the measurement point.]
Fig. 4. Calculated and measured temperatures over the elapsed time.
Fig. 5. Calculated and measured temperatures at different locations.
4. Discussion
One benefit of CFD simulation is that a verified CFD model can be used to simulate equipment of different scales and geometries. Another advantage is that critical parameters can be studied using CFD modelling, and the derived CFD model can then be used directly to simulate an entire process. On the industrial scale, large jacketed mixing tanks are widely used and heat transfer control is usually required; the scale-up of heat transfer is often difficult due to variations in hydrodynamics. For certain processes, however, accurate temperature control is a critical factor in guaranteeing product quality and safe operation, and knowledge of the local temperature can be very important for process control. From this point of view, CFD simulation is an appropriate method for studying heat transfer in mixing tanks of different scales. An example of the scale-up of a jacketed mixing tank is shown here. Using the simulation model presented in this work, the heat transfer in a 5-m³ mixing tank was simulated; the impeller speed used was 4 rpm, based on the scale-up rule of constant Reynolds number. As shown in Fig. 6, a distinct temperature distribution exists in the larger tank: the maximum deviation from the lowest temperature is approximately 3 K, which is 30 times that obtained in the 10-liter tank. The proposed simulation method can also be applied to other chemical processes, for example in crystallizers and chemical reactors, where the distributions of other operating parameters are directly related to the temperature distribution. For example, the supersaturation distribution, usually defined as the difference in concentration between the supersaturated and the saturated solution, is an important parameter for controlling crystallization in a mixing tank. The concentration of the saturated solution can be taken from solubility data if the solution temperature is known, so that, based on the temperature and concentration distributions in the tank, the supersaturation distribution can be obtained. If the local supersaturation exceeds a certain limit, the crystallization becomes difficult to control. The simulated supersaturation, which is related to the temperature in a batch cooling crystallization, is shown in Fig. 7; a distinct supersaturation distribution exists in the 10-liter crystallizer.
Fig. 6. Temperature distribution in a jacketed 5-m³ tank, K.
Fig. 7. Supersaturation distribution in a jacketed 10-liter tank, kg/m³.
5. Conclusions
In this study, a new model was developed for simulating a controlled batch cooling process using CFD. The time- and space-dependent variable, i.e. the temperature, was predicted by solving the turbulent transport equations together with the defined wall conditions. The simulated results show that the temperature trend over the elapsed time and the temperature distribution at various locations can be clearly visualized during the batch cooling process, which was verified experimentally. The temperature distribution in a larger-scale tank and potential applications of the simulation approach to mixing tanks were also discussed.
6. List of Symbols
A       area of heat transfer, m²
c_p     specific heat, J/(kg·K)
H_p     predicted rate of transferred heat, W
h       heat transfer coefficient, W/(m²·K)
T_s     solution temperature, K
T_s,p   predicted solution temperature, K
T_s,a   average solution temperature, K
T_w     wall temperature, K
Δt      i-th time interval, s
t       time, s
V       volume of cell, m³
ρ       density of liquid, kg/m³
7. References
Anderson, D.A., Tannehill, J.C. and Pletcher, R.H., 1984, Computational Fluid Mechanics and Heat Transfer, Hemisphere Publishing Corporation, New York.
Barrue, H., Xuereb, C. and Bertrand, J., 1999, Recent Research Developments in Chemical Engineering, 107.
Boltersdorf, U., Deerberg, G. and Schluter, S., 2000, Recent Research Developments in Chemical Engineering, 15.
Ibrahim, S. and Nienow, A.W., 1995, Trans IChemE, 73, Part A, 485.
Jayatilleke, C.L.V., 1969, Prog. Heat Mass Transfer, 1, 193.
Karcz, J. and Kaminska-Borak, J., 1997, Recents Progres en Genie des Procedes, 265.
Luo, J.Y., Gosman, A.D., Issa, R.I., Middleton, J.C. and Fitzgerald, M.K., 1993, IChemE Research Event, Birmingham, 657.
Strek, F. and Karcz, J., 1997, Recents Progres en Genie des Procedes, 105.
On the Generalization of a Random Interval Method
J. Zilinskas (1) and I.D.L. Bogle (2)
1. Dept. of Computer Engineering, Kaunas University of Technology, Lithuania
2. Dept. of Chemical Engineering, University College London
Abstract
Balanced random interval arithmetic is proposed for improving efficiency in global optimisation, extending the ideas of random interval arithmetic, in which a random combination of standard and inner interval operations is used. The influence of the probability of the standard and inner interval operations on the ranges of functions is experimentally investigated on a manufacturing problem.
1. Introduction
In process engineering it is frequently necessary to solve global optimization problems (Floudas, 1999; Xu et al., 2002; Byrne and Bogle, 1999). When an objective function and a feasible region are defined by analytical formulae or by procedural code, methods based on interval arithmetic may be efficient. Interval methods for global optimization are currently effective for problems whose dimensionality is not too high (Byrne and Bogle, 1999). A disadvantage of interval methods is the dependency problem: when a given variable occurs more than once in an interval computation, it is treated as a different variable in each occurrence. Because of this, the estimated bounds of an objective function are not tight, especially when a problem is given by code developed without foreseeing the application of interval arithmetic. For some problems interval methods cannot produce acceptable sizes of the multidimensional solution "boxes"; such inefficiency is caused by the large ranges resulting from interval operations. Alt and Lamotte (2001) have proposed random interval arithmetic, in which standard interval operations are randomly replaced by newly defined inner interval operations producing comparatively small ranges (although it cannot be guaranteed that the ranges will be smaller). In this way the result of a computation becomes a rather small "box", and there is a large probability that it will contain a solution. Random interval arithmetic has been applied to compute ranges of some functions over small intervals. We extend the ideas of Alt and Lamotte (2001) to different probabilities of standard and inner interval operations. In this experimental investigation the approach is applied to the objective function of a difficult global optimization problem in order to explore the behaviour over large intervals.
2. Random Interval Arithmetic
One of the first proponents of interval arithmetic was Moore (1966). Interval arithmetic operates with real intervals X = [x₁, x₂] = {x ∈ ℝ | x₁ ≤ x ≤ x₂}, where x₁ and x₂ are real numbers. For any real arithmetic operation {x op y} the corresponding interval arithmetic operation {X op Y} is defined, whose result is an interval containing every possible number produced by {x op y}, x ∈ X, y ∈ Y. We will use the notations of Alt and Lamotte (2001), denoting [a ∨ b] = [min(a, b), max(a, b)], x_c = min(|x₁|, |x₂|) and x_d = max(|x₁|, |x₂|). Interval multiplication by a scalar is defined as y × X = [y·x₁ ∨ y·x₂].

The standard interval arithmetic operations are defined as:

X + Y = [(x₁ + y₁) ∨ (x₂ + y₂)]    (1)

X − Y = [(x₁ − y₂) ∨ (x₂ − y₁)]    (2)

X × Y = [(x_c·y_c) ∨ (x_d·y_d)]                          0 ∉ X, 0 ∉ Y
X × Y = y_d × X                                          0 ∈ X, 0 ∉ Y
X × Y = [min{x₁y₂, x₂y₁}, max{x₁y₁, x₂y₂}]               0 ∈ X, 0 ∈ Y    (3)

X / Y = [(x_c/y_d) ∨ (x_d/y_c)]                          0 ∉ X, 0 ∉ Y
X / Y = (1/y_c) × X                                      0 ∈ X, 0 ∉ Y    (4)

The inner interval operations are defined as:

X + Y = [(x₁ + y₂) ∨ (x₂ + y₁)]    (5)

X − Y = [(x₁ − y₁) ∨ (x₂ − y₂)]    (6)

X × Y = [(x_c·y_d) ∨ (x_d·y_c)]                          0 ∉ X, 0 ∉ Y
X × Y = [max{x₁y₂, x₂y₁} ∨ min{x₁y₁, x₂y₂}]              0 ∈ X, 0 ∈ Y    (7)

X / Y = [(x_c/y_c) ∨ (x_d/y_d)]                          0 ∉ X, 0 ∉ Y    (8)
The guaranteed lower and upper bounds of the function values can be estimated by applying the standard interval operations to the intervals, instead of the real operations, in the algorithm that calculates the function values. These bounds may be used to solve the global optimization problem. A disadvantage of interval methods is the dependency problem (Hansen, 1992), because of which the estimated bounds of an objective function are not tight, especially when a problem is given by code developed without foreseeing the application of interval arithmetic. If the interval is sufficiently small that the operators in all the operations are monotonic, the exact range of a function for given interval data can be obtained by correctly using the standard or inner operations, depending on whether or not the operands have the same monotonicity (Alt and Lamotte, 2001). The operations are summarized in Table 1.
Table 1. The interval operation on two monotonic operands.

Operation   Have the same monotonicity          Do not have the same monotonicity
+           Standard interval operation (1)     Inner interval operation (5)
−           Inner interval operation (6)        Standard interval operation (2)
×           Standard interval operation (3)     Inner interval operation (7)
/           Inner interval operation (8)        Standard interval operation (4)
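For illustration, the following Python sketch implements Eqs. (1)-(8) for strictly positive operands, so that the zero-containing branches of Eqs. (3), (4), (7) and (8) can be omitted; the function names and this restriction are assumptions made for brevity, and the probability argument p anticipates the balanced variant discussed below (p = 0.5 recovers the random interval arithmetic of Alt and Lamotte).

import random

def vee(a, b):
    # [a v b] = [min(a, b), max(a, b)]
    return (min(a, b), max(a, b))

def standard_op(op, X, Y):
    # standard (outer) operations, Eqs. (1)-(4), positive intervals only
    (x1, x2), (y1, y2) = X, Y
    if op == '+': return vee(x1 + y1, x2 + y2)
    if op == '-': return vee(x1 - y2, x2 - y1)
    if op == '*': return vee(x1 * y1, x2 * y2)
    if op == '/': return vee(x1 / y2, x2 / y1)

def inner_op(op, X, Y):
    # inner operations, Eqs. (5)-(8), positive intervals only
    (x1, x2), (y1, y2) = X, Y
    if op == '+': return vee(x1 + y2, x2 + y1)
    if op == '-': return vee(x1 - y1, x2 - y2)
    if op == '*': return vee(x1 * y2, x2 * y1)
    if op == '/': return vee(x1 / y1, x2 / y2)

def balanced_op(op, X, Y, p):
    # standard operation with probability p, inner operation otherwise
    return standard_op(op, X, Y) if random.random() < p else inner_op(op, X, Y)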
The difficulty is to know the monotonicity of the operands. This requires the computation of the derivatives of each subfunction involved in the expression of the function being studied, which requires a large amount of work. Alt and Lamotte (2001) have therefore proposed random interval arithmetic, which is obtained by choosing standard or inner interval operations randomly, with the same probability, at each step of the computation. It is assumed that the distributions of the centres and radii of the evaluated intervals are normal. The mean values and the standard deviations of the centres and radii of the intervals computed using random interval arithmetic are used to evaluate an approximate range of the function:
[μ_centres − μ_radii − α·σ_radii ,  μ_centres + μ_radii + α·σ_radii]    (9)
where μ_centres is the mean value of the centres, μ_radii is the mean value of the radii, σ_radii is the standard deviation of the radii, and α is between 1 and 3, depending on the number of samples and the desired probability that the exact range is included in the estimated range. Alt and Lamotte suggest that a compromise between efficiency and robustness can be obtained using α = 1.5 and 30 samples. The standard deviation of the centres was not used in their calculations because, in their experiments, it was always very small. Random interval arithmetic has been applied to compute ranges of some functions over small intervals. Alt and Lamotte showed that, for single-variable problems, random interval arithmetic provides ranges of functions which are much closer to the exact range than those of standard interval arithmetic. Random interval arithmetic assumes that the operators in all operations are monotonic. This may be the case when the intervals are small and there is only one interval variable. When the intervals are wide, as they can be in process engineering problems, the operators cannot be assumed to be monotonic, and independent variables cannot be assumed monotonic either. Therefore such random interval arithmetic uses the inner operations too often and provides results which are too narrow when the intervals are wide, so it cannot be applied to global optimization directly. Standard interval arithmetic provides guaranteed bounds, but they are often too pessimistic; it is used in global optimization to provide guaranteed solutions, but for some problems the optimization time is too long. Random interval arithmetic provides bounds closer to the exact range when the intervals are small, but bounds that are too narrow when the intervals are wide. We would like to have interval methods that are less pessimistic than standard interval arithmetic and less optimistic than random interval arithmetic. We expect that the random interval arithmetic will provide wider or narrower bounds depending on
the probability of standard and inner operations at each step of the computation. Balanced random interval arithmetic is obtained by choosing standard and inner interval operations at each step of the computation randomly, with a predefined probability: standard interval arithmetic corresponds to a probability of 1, and inner interval arithmetic to a probability of 0. The influence of this probability on the resulting ranges of functions should be investigated experimentally, and this is reported here.
3. Experimental Study of the Balanced Random Interval Arithmetic
Balanced random interval arithmetic with different probabilities of standard and inner interval operations was used to evaluate the ranges of several objective functions of difficult global optimization problems over random intervals. One case of typical results is illustrated using the objective function of a multidimensional scaling problem with data from soft drink testing (Mathar, 1996; Green et al., 1989), and these results are presented here. Ten different soft drinks were tested; each pair was judged on its dissimilarity, and the accumulated dissimilarities δ_ij are the data of the problem. The goal of this multidimensional scaling problem is to find the best configuration of 10 objects, one representing each drink, in two-dimensional space, which would help to interpret the data. The objective function of the problem is

f(X) = Σ_{i=1..10} Σ_{j=1..10} ( δ_ij − √((x_i1 − x_j1)² + (x_i2 − x_j2)²) )²    (10)
where x_i1, x_i2 are the coordinates of the i-th object (i = 1..10 and j = 1..10). Balanced random interval arithmetic with different probabilities was used to evaluate the ranges of (10) over random intervals. The histograms of the centres and radii of the 10000 intervals evaluated using balanced random interval arithmetic with probabilities 0.5, 0.6 and 0.7 over one random interval region are shown in panels a, b and c of Figure 1. The ranges of the horizontal axes of the histograms of the centres run from the centre of the inner interval to the centre of the standard interval; the mean value of the centres moves towards the centre of the standard interval as the probability of standard interval operations increases. The ranges of the horizontal axes of the histograms of the radii are between 0 and the radius of the standard interval; the mean value of the radii increases towards the radius of the standard interval as the probability of standard interval operations increases. Normal distributions with the evaluated means and standard deviations are also shown. The distributions of the centres are normal, but their standard deviations are not small, as they were in (Alt and Lamotte, 2001) where the intervals were small. Therefore, instead of using equation (9), the standard deviation of the centres should also be used when the range of a function is evaluated, as follows:

[μ_centres − α·σ_centres − μ_radii − α·σ_radii ,  μ_centres + α·σ_centres + μ_radii + α·σ_radii]    (11)
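A sketch of how Eq. (11) could be applied in practice: the objective is evaluated n times with balanced random interval arithmetic, and the mean interval is widened by α standard deviations of both the centres and the radii. The sampling function f_interval is assumed to be the user's randomized interval evaluation of the objective, for instance built from the operations sketched in Section 2.

import statistics

def estimated_range(f_interval, p, n_samples=30, alpha=3.0):
    centres, radii = [], []
    for _ in range(n_samples):
        lo, hi = f_interval(p)              # one balanced random evaluation
        centres.append((lo + hi) / 2.0)
        radii.append((hi - lo) / 2.0)
    mc, sc = statistics.mean(centres), statistics.stdev(centres)
    mr, sr = statistics.mean(radii), statistics.stdev(radii)
    # Eq. (11): the spread of the centres is included, unlike in Eq. (9)
    return (mc - alpha * sc - mr - alpha * sr,
            mc + alpha * sc + mr + alpha * sr)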
The evaluated interval ranges of function (10) with α = 3 are shown below the figures.
[Figure residue, histograms: a) probability 0.5: centres mean = 138.160723, std = 3.302589; radii mean = 11.236776, std = 8.465984; evaluated interval = [91.618227, 184.703220]. b) probability 0.6: centres mean = 142.870490, std = 3.490012; radii mean = 21.366027, std = 12.460792; evaluated interval = [73.652051, 212.088929]. c) probability 0.7: centres mean = 148.401892, std = 3.601010; radii mean = 37.854903, std = 15.354640; evaluated interval = [53.680039, 243.123745]. d) random sampling: range = [87.159300, 180.614000].]
Figure 1. The histograms of the centres and radii of the intervals of function (10) evaluated using balanced random interval arithmetic with probabilities 0.5, 0.6 and 0.7, and of the function values at uniformly distributed random points.
The histogram of the function values at 20000 uniformly distributed random points in the same subregion is shown in Figure 1d. Twice as many function evaluations were used because an interval function evaluation requires approximately twice as many calculations as a real function evaluation. The range of the horizontal axis is the standard interval; most function values lie within one quarter of the standard interval. The ranges of function (10) in the subregion evaluated using balanced random interval arithmetic with the probability of standard and inner interval operations equal to 0.5 (Figure 1a) do not include all the function values at random points in the same subregion, as a sizeable number of the radii are too small. The global minimum may be missed by the optimization algorithm if the evaluated ranges of the function are too narrow; however, the optimization is faster when the ranges are narrower. The choice of the probability of standard and inner interval operations therefore dictates the balance struck between the speed and the reliability of the algorithm. The ranges of function (10) in 10000 random subregions have been evaluated using balanced random interval arithmetic with different probabilities of standard and inner
interval operations. The ranges were evaluated using the means and standard deviations of the centres and radii of 30 balanced random intervals. The smallest probabilities for which the evaluated ranges include the function values at 20000 uniformly distributed random points have been determined: 0.65 is the smallest probability for which the evaluated ranges include the function values at random points in 99.49% of the subregions. For 93.42% of the subregions, the smallest such probability is less than 0.625, and for 87.74% it is less than 0.600.
4. Conclusions
The use of balanced random interval arithmetic is explored, and the influence of the probability of the standard and inner interval operations on the ranges of functions is experimentally investigated. The value used for the probability will depend on the balance required between efficiency and robustness, but on these results a value of 0.65 would give almost a 99.5% success rate. The preliminary test results seem promising for the construction of global optimization algorithms based on these ideas of probabilistic generalized interval methods.
5. References
Alt, R. and Lamotte, J.-L., 2001, Experiments on the evaluation of functional ranges using random interval arithmetic, Mathematics and Computers in Simulation, 56, 17-34.
Byrne, R. and Bogle, I.D.L., 1999, Global optimization of constrained non-convex programs using reformulation and interval analysis, Comput. Chem. Eng., 23, 1341-1350.
Floudas, C.A., 1999, Recent advances in global optimization for process synthesis, design and control: enclosure of all solutions, Comput. Chem. Eng., 23, S963-S973.
Green, P., Carmone, F. and Smith, S., 1989, Multidimensional Scaling: Concepts and Applications, Allyn and Bacon, Boston.
Hansen, E., 1992, Global Optimization using Interval Analysis, Marcel Dekker, New York.
Mathar, R., 1996, A Hybrid Global Optimization Algorithm for Multidimensional Scaling, in: Classification and Knowledge Organization, Proceedings of the 20th Annual Conference of the Gesellschaft fur Klassifikation e.V., University of Freiburg, 63-71.
Moore, R.E., 1966, Interval Analysis, Prentice-Hall.
Xu, G., Brennecke, J.F. and Stadtherr, M.A., 2002, Reliable computation of phase stability and equilibrium from the SAFT equation of state, Ind. Eng. Chem. Res., 41(5), 938-952.
Adaptive Control of Continuous Pulp Digesters based on Radial Basis Function Neural Network Models Alex Alexandridis, Haralambos Sarimveis and George Bafas National Technical University of Athens 9 Heroon Polytechniou str., 15780, Athens, Greece
Abstract
In this paper, an adaptive nonlinear Model Predictive Control (MPC) configuration is proposed in which the model used for predicting the future behavior of the process is a discrete dynamic Radial Basis Function (RBF) network. An innovative fuzzy clustering algorithm allows continuous and easy adaptation of the model, making it suitable for controlling time-varying processes such as the continuous pulp digester.
1. Introduction
This paper presents a new and efficient adaptive MPC (Prett and Garcia, 1988) scheme based on dynamic RBF network models (Moody and Darken, 1989). Though the idea of incorporating RBF models into an MPC configuration is not entirely new (Pottmann and Seborg, 1997; Bhartiya and Whiteley, 2001), the innovation of the proposed approach is the use of an adaptive method for training the RBF model. The methodology is generic in nature, but its advantages indicate that it would be suitable for the difficult task of controlling continuous pulp digesters (Wisnewski and Doyle, 1998), which are major units in a pulp and paper mill and very important for the quality of the produced paper. The proposed control methodology is tested on a simulated digester based on the extended Purdue model (Wisnewski et al., 1997).
2. Adaptive MPC Using RBF Dynamic Models
Many processes in the chemical industry are time-varying: they are subject to changes in the operating region or in the process dynamics (e.g. a decrease of a heat transfer coefficient). Developing a model that is accurate under all circumstances is a rather difficult task. On the other hand, MPC needs a model that adequately approximates the dynamics of the process in order to yield satisfactory performance. Obviously, when dealing with a time-varying process, a standard MPC may perform poorly, since the model it employs can become obsolete. In this paper we suggest a new algorithm which uses the standard MPC framework and employs an adaptive RBF neural network model for predicting the future behavior of the process.
2.1. The Adaptive Fuzzy Means (AFM) algorithm
The fuzzy means algorithm, an efficient method for developing RBF network models of dynamic systems, has already been proposed in a recent publication (Sarimveis et al., 2002).
[Figure 1 (flowchart): first data [x(1), y(1)] → determine the location of the first center → (on each new example) either add or delete a hidden node and use the standard least squares method to calculate the new connection weights, or leave the structure unchanged and use RLS to update the connection weights.]
Figure 1. Algorithm overview.
Though this algorithm presents some remarkable advantages (very fast training, better approximation), it still cannot take into account changes in the operating region or in the dynamics of the process that occur over time. To this end, we have developed an adaptive version of the fuzzy means algorithm which can cope with both situations. To achieve this, the algorithm performs two levels of adaptation: the connection weights between the hidden and output layers are adapted using the RLS algorithm, while the structure of the hidden layer is adapted by adding or deleting hidden nodes. An overview of the algorithm is shown in Figure 1; a short description follows. The key concept behind the algorithm is the fuzzy partition of the input space. Assuming a process with N_i input variables, the space of each input variable is evenly partitioned into a number of triangular fuzzy subsets. Fuzzy partitioning is then extended to the entire input space, so that a number of fuzzy subspaces are created, each defined as a combination of N_i particular fuzzy sets. The multidimensional membership function μ_A^l(x) of an input vector x in a fuzzy subspace A^l is defined as:

μ_A^l(x) = 1 − rd^l(x),   if rd^l(x) ≤ 1
μ_A^l(x) = 0,             otherwise    (1)
where rd^l(x) is the Euclidean relative distance (Nie, 1997) between A^l and the input data vector x. Obviously, the fuzzy subspace that assigns the greatest membership degree to the input vector is the subspace closest to this vector, since it corresponds to the smallest Euclidean relative distance. The notion of fuzzy subspaces is crucial for the
development of the algorithm, since each fuzzy subspace center is considered as a candidate for becoming a hidden node of the RBF network. In order to initialize the algorithm, some operational parameters must be defined:
• the number of consecutive time steps for which a center may remain unassigned to any input example before it is removed from the hidden layer of the network;
• the size of the moving time window used for storing past input-output examples;
• the forgetting factor of the RLS method.
Once initialized, the algorithm starts with the first input example and determines the fuzzy subspace closest to that data point; the center of this subspace becomes the center of the first hidden node. Each time a new input example becomes available, the algorithm checks whether it can be assigned to an already selected fuzzy subspace. If not, a new node must be added to the hidden layer, with its center located at the center of the fuzzy subspace closest to the new input vector. Otherwise, the algorithm checks whether there is any center which has not recently been assigned to an input vector (information about the history of node activations is stored in a vector called the Activation History Vector, AHV); if such a center exists, it is removed. Whenever the structure of the hidden layer is modified, either by adding or by deleting a node, the connection weights are recalculated. This calculation is based on a moving time window in which a number of past input-output data are stored; the new connection weights are obtained by regressing the outputs of the hidden layer on the real outputs of the system. If no node is added or deleted, the algorithm updates only the connection weights between the hidden layer and the output layer, using RLS with exponential forgetting.
2.2. Incorporation of the AFM algorithm in the MPC scheme
The AFM algorithm can easily be incorporated into an MPC scheme in which, at each time step k, a rigorous nonlinear optimization problem is formulated. The objective is to calculate the optimal values of the manipulated variables v over a control horizon M, so that the error between the RBF model predictions and the desired set-point over a prediction horizon N is minimized. As soon as the optimization problem is solved, the first control move v(k) is implemented, and the RBF model is then updated using the AFM algorithm. The procedure is shown in Figure 2. Assuming one controlled variable, the optimization problem can be described by the following set of equations:
min over v(k), v(k+1), ..., v(k+M) of
Σ_{i=1..N} || Q^{1/2} [ŷ(k+i) − y^sp] ||² + Σ_i || R^{1/2} Δv(k+i) ||²    (2)

subject to:

ŷ(k+i) = NN(k+i) + E(k)    (3)
[Figure 2 (block diagram): the set-point y^sp enters the MPC, which sends the control move v(k) to the process; the process output y(k) is fed back, and an RBF adaptation block updates the network parameters of the model used by the MPC.]
Figure 2. Control scheme description.

E(k) = y(k) − ŷ(k)    (4)

v_min ≤ v(k+i) ≤ v_max    (5)

Δv_min ≤ Δv(k+i) ≤ Δv_max    (6)

v(k+i) = v(k+M),   M+1 ≤ i ≤ N    (7)
In the above equations, v(k+i) is the vector of values of the manipulated variables i hours ahead, NN(k+i) is the RBF model prediction, E(k) is the current error between the actual output measurement and the model prediction, and Q, R are the error and move suppression weights.
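A schematic Python sketch of one controller decision for Eqs. (2)-(7) is given below; the paper's implementation uses MATLAB and fmincon, so the solver, the callables nn_predict and bias, and the handling of the horizons are illustrative assumptions rather than the authors' code.

import numpy as np
from scipy.optimize import minimize

def mpc_move(nn_predict, y_sp, bias, v_prev, M, R, v_bounds):
    # returns the first control move v(k) of the optimal sequence
    n_mv = v_prev.size
    def cost(v_flat):                                 # Eq. (2)
        v = v_flat.reshape(M, n_mv)
        y_hat = nn_predict(v) + bias                  # Eqs. (3)-(4): bias-corrected RBF predictions over N steps
        dv = np.diff(np.vstack([v_prev, v]), axis=0)  # move increments
        return np.sum((y_hat - y_sp) ** 2) + np.sum(R * dv ** 2)
    x0 = np.tile(v_prev, M)
    res = minimize(cost, x0, bounds=list(v_bounds) * M)   # Eq. (5)
    return res.x[:n_mv]                               # implement only v(k)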
3. Adaptive Control of the Kappa Number in a Continuous Pulp Digester
Pulp digesters are complex tubular reactors in which wood chips are cooked in a solution of sodium hydroxide and sodium sulfide known as white liquor. Combined with thermal effects, the presence of the alkaline solution causes fragmentation of the lignin molecules into smaller segments whose sodium salts are soluble in the cooking liquor. The result is the delignification of the wood chips, which gives the desired characteristics to the produced pulp. The main controlled variable in a continuous digester is the kappa number, which represents the amount of residual lignin in the pulp. The objective of the control system is to keep the kappa number at the desired set point, thus producing pulp of steady quality. This is not an easy task, due to a number of difficulties:
• the nonlinear dynamics, which make it difficult to develop an accurate dynamic model of the process;
• the large retention times;
• the numerous disturbances that affect the quality of the produced pulp, the most important being the quality of the wood chips, which varies during daily operation due to the different sizes and ages of the trees from which the chips originate;
• the frequent grade changes.
3.1. Description of the digester simulation
In this paper we propose the implementation of the adaptive MPC scheme described in the previous section for the kappa number control problem. The test cases presented here are based on simulating the digester behavior with a variation of the extended Purdue model, which approximates the plug flow by dividing the digester into a number of zones. Each zone is modeled separately as a CSTR containing three phases: the wood phase, the free liquor phase and the bound liquor phase. It should be mentioned that as the number of CSTRs increases, the model approximates the plug flow more accurately. A set of nonlinear ODEs is created by formulating the mass and energy balances for all three phases in each CSTR; the state variables are the concentrations of the various components and the temperatures in the three phases. The CSTRs are coupled, since the output of each CSTR is used as the input to the next one. The resulting system of ODEs can be solved numerically, providing a detailed dynamic simulation of the digester. The user can simulate the operation of the digester under various conditions by changing the input and output concentrations, temperatures and flows, and also the type of wood entering the digester. In contrast to the extended Purdue model, where the user must define whether the free liquor flows upwards or downwards in each zone, the direction of the free liquor flow is here calculated automatically by performing a mass balance on the free liquor throughout the digester. For the present test cases, we simulated a digester producing pulp from softwood. The digester was split into 50 zones, producing a set of 950 nonlinear ODEs. The simulation was performed using the ode23 solver of MATLAB; the simulation time corresponding to one hour of real operation was 30 s on a Pentium IV 1400 MHz processor which, given the complexity of the system, is rather short. A toy code sketch of this zone-wise structure is given below.
3.2. Implementation of the adaptive control scheme - Results
The proposed adaptive Model Predictive Controller was implemented using the MATLAB programming language, with the fmincon function for solving the constrained nonlinear optimization problem. In the closed-loop simulation, the kappa number of the produced pulp was the controlled variable, while the temperatures of two input flows along the digester served as manipulated variables. The controller took a decision every hour. For testing the adaptive scheme, a first test case was simulated in which the RBF network model was initially trained offline using the non-adaptive fuzzy means algorithm. The training data set consisted of input-output data generated by a random sequence of input temperatures producing kappa number values between 20 and 30. As soon as the simulation started, a step change in the kappa number set point from 29 to 38 was implemented, the new set-point value obviously lying outside the range covered by the training output data. Thereafter, the RBF network was allowed to adapt itself at each time step using the adaptive training algorithm. In order to assess the performance of the proposed scheme, we also tested an MPC configuration in which the initial RBF model, developed offline from the same input-output data, was not allowed to adapt itself. For both tests, the R coefficient matrix in equation 2 was set equal to [1 1].
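The toy sketch below illustrates the zone-wise CSTR-in-series structure of Section 3.1; the single lumped concentration per zone and the first-order rate are illustrative assumptions, far simpler than the three-phase balances of the extended Purdue model that give the 950 ODEs mentioned above.

import numpy as np
from scipy.integrate import solve_ivp

N_ZONES = 50

def digester_rhs(t, c, tau=0.5, k=0.8, c_in=1.0):
    # dc/dt for a chain of CSTRs: each zone is fed by the previous one
    # and consumes lignin with an assumed first-order rate
    c_prev = np.concatenate(([c_in], c[:-1]))   # wood feed enters zone 0
    return (c_prev - c) / tau - k * c

# RK23 is roughly analogous to MATLAB's ode23 used in the paper
sol = solve_ivp(digester_rhs, (0.0, 10.0), np.ones(N_ZONES), method='RK23')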
The kappa number responses for both control schemes are shown in figure 3a.
Figure 3. Kappa number response for (a) R = [1 1], (b) R = [3 3]; x-axes: time (h).
It is clear that only the adaptive scheme managed to drive the system to the desired set point, while the non-adaptive scheme became unstable, obviously due to the inability of the non-adaptive model to approximate the kappa number dynamics in the new operating region. A second test case was then examined, in which the R coefficient matrix was set equal to [3 3]. The responses of both closed-loop systems are shown in figure 3b. Though the performance of the non-adaptive MPC was substantially improved this time, the superiority of the adaptive scheme is still clear, since it reaches the desired set-point in a much shorter time. In both test cases, the adaptive RBF network started with 25 nodes, while at the end of the simulation the hidden layer consisted of 28 nodes, meaning that 3 new hidden nodes were added by the adaptive training algorithm in order to describe the new operating region more accurately.
4. Conclusions A new method for adaptive nonlinear control was proposed, with application to continuous pulp digesters. The method is based on an MPC framework that employs RBF network models, trained by an adaptive extension of the fuzzy means algorithm. Two test cases were implemented on a continuous digester, which was simulated using a variation of the extended Purdue model. The objective was to control the kappa number, when a change in the operating region occurred. Results showed that the proposed method is very effective in such situations and proved its superiority compared to the standard non-adaptive nonlinear MPC configuration.
5. References
Bhartiya, S. and Whiteley, J.R., 2001, AIChE J., 47, 358.
Moody, J. and Darken, C., 1989, Neural Computation, 1, 281.
Nie, J., 1997, IEEE Trans. Fuzzy Systems, 5, 304.
Pottmann, M. and Seborg, D.E., 1997, Comp. Chem. Engng., 21, 965.
Prett, D.M. and Garcia, C.E., 1988, Fundamental Process Control, Butterworths, Stoneham, MA.
Wisnewski, P.A. and Doyle, F.J., 1998, J. Proc. Contr., 8, 487.
Wisnewski, P.A., Doyle, F.J. and Kayihan, F., 1997, AIChE J., 43, 3175.
Application of Data Reconciliation to the Simulation of System Closure Options in a Paper Deinking Process
D. Brown (1), F. Marechal (2), G. Heyen (3) and J. Paris (1)
1. Ecole Polytechnique, Montreal, Canada
2. Swiss Federal Institute of Technology, Lausanne, Switzerland
3. LASSC, Universite de Liege, Belgium
Abstract
Equation-solver data reconciliation software has been used to build a validated model of a waste paper deinking plant by combining control room measurements and design specifications. An optimal sensor system configuration that allows the key performance indicators to be validated using only control room measurements has been determined by identifying, with genetic algorithm programming, the additional sampling points and corresponding sensors required for a minimum-cost sensor system configuration.
1. Introduction
Computer aided process simulation is an efficient design tool which can help pulp and paper industries, pressured by global competition and urged to comply with environmental regulation, to upgrade their facilities rapidly and at low engineering and operating cost (Jacob and Paris, in print). In particular, in the context of water system closure, accurate models are needed to predict the impact of retrofit modifications on a given process. Data reconciliation is essential for process performance follow-up and simulation model calibration. Based on measurement redundancy, it is recommended as a preliminary step to process simulation. The numerous benefits of real-time plant data reconciliation have been discussed in detail by Heyen (2000). In the pulp and paper industry, there are few examples of large-scale simulations that are actually based on reconciled data collected from a process in operation (Jacob and Paris, in print). Due to the number of pieces of equipment and streams involved, and in spite of the abundance of information that can be acquired using process sensors, a considerable number of additional measurements is often still required to reach satisfactory levels of redundancy. These levels can be achieved at an acceptable cost by combining data reconciliation with equipment design specifications and process diagrams. The problem with this approach is the validity of the assumptions and their impact on the measurement corrections and precision. To overcome this drawback, a method based on sensitivity matrix analysis has been proposed by Heyen et al. (2002) to identify the appropriate additional sampling points.
Address of correspondence: Dr. Francois Marechal, Laboratory of Industrial Energy Systems, Institute of Energy Sciences, Swiss Federal Institute of Technology, CH-1015 Lausanne. E-mail: [email protected]; Tel. +41 21 693 35 16; Fax +41 21 693 35 02.
2. Case Study
The aim of this study was to calibrate a model of an old newspaper and magazine deinking plant by applying data reconciliation, and to design a sensor system capable of identifying the key process performance indicators with satisfactory precision at minimal cost. The deinking plant, located in Quebec, uses a furnish of 80% old newspaper and 20% old magazines to produce deinked pulp. The plant stands next to a thermomechanical pulp newsprint mill, to which part of the deinked pulp is sent to produce paper with 30% recycled content. The recycling facility was built in the early nineties and was subsequently modernized at the end of the decade to increase its production capacity. During the upgrade, several modifications were made to the pulp treatment sequences and the process water circulation layout; it is estimated that the fresh water intake has been reduced from approximately 21 to 15 tons per ton of oven-dried pulp produced. Figure 1 shows a simplified layout of the present-day plant.
[Figure residue, process flow diagram: drum pulper → coarse cleaners & screens → flotation → fine screens & cleaners → washers → flotation → drainers, with a white water clarifier, a secondary clarifier and a second set of coarse cleaners & screens on the water loops. Legend: pulp; waste streams; recirculated process water; Unit A: processing stage; Unit B: processing stage with fresh water intake.]
Figure 1. Process layout and water circulation.
3. Methodology
A data reconciliation model has been built for the plant. This is an optimization model consisting of an objective function, corresponding to the minimization of the weighted errors on the measurements, and of a list of constraints representing the physics of the process operations. The constraints are mass and energy balances, separation rules and thermodynamic behaviors. The model has been developed using the equation-solver type data reconciliation software VALI III (Belsim s.a., 2001). A series of assumptions were made to build the model. Cellulose was added to the compound data bank, considering that only density properties were necessary since the deinking process is almost isothermal. It was assumed that waste paper enters the pulper at 15°C, that the temperature of fresh water and white water entering the system is set at 50°C, that there is no heat loss from the process piping and equipment, and that output
streams from units with multiple outlets are at equal temperature. Part of the pressure flow network was detailed with measurements from control room printouts; pressure drop was otherwise neglected. Structural analysis of the incidence matrix allows the degree of redundancy of the control room measurement set to be analyzed, in order to identify missing and non-validable measurements as well as those that can be corrected by measurement redundancy (Kalitventzeff and Joris, 1987). In the deinking plant, additional sensors are required to obtain a satisfactory level of redundancy. In the proposed approach, an initial point has been obtained by compensating for the lack of data with data from previous studies of the original plant (Walosik, 1999; Bonhivers et al., 1998; Savu et al., 2001), updated with specifications from process control diagrams and laboratory test benchmark specifications of the upgraded plant. The results were then used to identify optimal sampling points for missing sensors. This problem is solved using a genetic algorithm programming method (Heyen et al., 2002). The constrained data reconciliation problem is first transformed into an unconstrained one using the Lagrange formulation:

Min L = (Y − y)ᵀ · P · (Y − y) + λᵀ · F

with P_ii = 1/ε_i², where ε_i is the standard deviation of measurement i. In the new problem, the actual measurements keep their own accuracy, while the unmeasured variables are considered as measured variables with a standard deviation that represents the use or not of a new sensor for this measurement. This is obtained by assigning an integer variable y_ij that represents the use (1) or not (0) of a sensor of type j for measurement i, with

F = A·Y + B

and

ε_i = Σ_{j=1..nj} ε_sensor,j · y_ij + ε₀ ,   ∀ i ∈ {unmeasured variables}
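For illustration, a compact sketch of the resulting linearized reconciliation step: the weighted least-squares problem is solved through its stationarity system, which is the matrix M whose inverse appears in the sensitivity analysis below. The variable names follow the text, but the function itself is our own assumption, not the VALI implementation.

import numpy as np

def reconcile(y, eps, A, B):
    # minimize (Y - y)' P (Y - y) subject to A Y + B = 0
    P = np.diag(1.0 / eps ** 2)                  # P_ii = 1 / eps_i^2
    n, m = len(y), A.shape[0]
    M = np.block([[P, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([P @ y, -B])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]                      # reconciled Y and (scaled) multipliers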
The objective function is computed by summing the sensor costs and a measure P_i of the projected standard deviation σ_i of each key performance indicator i, the latter resulting from the sensitivity analysis matrix obtained by solving the linearized Lagrange formulation as defined by Heyen et al. (1996):
( P   Aᵀ ) · ( Y )  =  ( P·y )    i.e.    ( Y )  =  M⁻¹ · ( P·y )
( A   0  )   ( λ )     ( −B  )            ( λ )            ( −B  )
The method then operates in several steps, starting with a verification of problem feasibility assuming that all available sensors are implemented; this provides an upper limit for the cost of the system. The sensor system configuration (the values of the y_ij) is then determined using a genetic algorithm.
The objective minimized by the genetic algorithm is

min  Σ_{i=1..n_unmeasured} Σ_{j=1..nj} c_j · y_ij + Σ_{i=1..m} P_i

where

P_i = 0.01                        if σ_i ≤ σ_i,desired
P_i = 10 · σ_i / σ_i,desired      if σ_i > σ_i,desired

with n_unmeasured the number of possible additional sensors, n_j the number of sensor types, m the number of key performance indicators, and c_j the annualized cost of a sensor of type j.
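A hedged sketch of the fitness that the genetic algorithm would evaluate for one candidate sensor configuration is shown below; projected_std stands for the sensitivity-matrix computation of the σ_i and is assumed rather than shown.

import numpy as np

def sensor_fitness(y, sensor_cost, projected_std, sigma_desired):
    # y[i, j] = 1 if a sensor of type j is installed for sampling point i
    hardware = np.sum(y * sensor_cost)           # annualized sensor cost
    sigma = projected_std(y)                     # sigma_i of each key indicator
    ratio = sigma / sigma_desired
    penalty = np.where(ratio <= 1.0, 0.01, 10.0 * ratio)
    return hardware + penalty.sum()              # objective minimized by the GA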
4. Results and Discussion
The data reconciliation and sensor system configuration problems have been applied to the sections of the plant for which control room printouts were available: the pulper and the second pulp line. Table 1 presents an overview of the analysis of the incidence matrix of the system when all measurements and modelization assumptions have been integrated into the model of this section of the plant.

Table 1. Summary of the analysis of the incidence matrix.

Number of equations                                       405
Number of unmeasured variables                            469
Number of measured variables                              60
Number of constants                                       92
Total number of redundancies                              -64
Number of additional measurements required                79
Number of unmeasured variables which can be validated     20
Number of validable equations                             370
Number of unvalidable measurements                        21
Equations with no influence on the validation problem     20
The next step consists in adding design specifications to compensate for the missing measurements. Five different types of measurements and specifications related to mass balances were used to build the model: consistency; flow rate; the mass ratio of rejects or accepts from a separation unit (expressed as the ratio between inlet and outlet mass flows on an oven-dried basis); the thickening ratio of a separation unit (expressed as the ratio of the outlet consistency to the inlet consistency); and flow rate ratios for split streams. In practice, only volumetric flow and consistency can be measured by the plant's control system, whereas specifications expressed as ratios are related to equipment performance parameters indicated on process diagrams and cannot be directly measured during plant operation. These types of specifications are useful because they confer the flexibility necessary to account for flow or consistency variations. Generally, consistency and pressure variations are monitored using smaller time constants in control loops; hence, these parameters were preferred over flow when specifying dilution stream requirements from design specifications, except when units such as pressure screens must operate at steady discharge. Flow rate variations are tolerable due to the variability of the process; indeed, it is preferable to consider the process diagram flow specifications as indicative values rather than as set specifications. When compared with the design specifications, control room printouts indicate that the input flow rates are not set, and production reports show that the total pulp output may vary considerably from day to day. In this respect, the validation model has to be flexible enough to be reconciled with different sets of data sampled at different points in time; for this reason, measured flow rate variables were often specified with larger inaccuracies than other variables. Table 2 compares the global mass balances for the entire process computed with raw data to the
ones obtained by data reconciliation. Differences can be noted for the flow variables left unspecified in the validated model (all streams except the paper intake).

Table 2. Process inlet and outlet flow rates and temperatures.
                        SPECIFICATIONS     VALIDATION
                        Flow (kg/s)        Flow (kg/s)
INLETS
  WATER                 77.20              78.14
  PAPER                 6.74               6.74
  WHITEWATER            63.45              63.57
TOTAL IN                147.39             148.45
OUTLETS
  SLUDGE                2.20               2.28
  SOLIDS                0.47               0.45
  EFFLUENT WATER        55.86              56.90
  PULP (10% cs)         18.88              18.42
  PULP (4.4% cs)        70.18              70.40
TOTAL OUT               147.59             148.45
For the sensor system design, two different cases have been considered. In the first case, the ratio specifications are considered as constants, which implies that the units they refer to operate without any variability; this yields the minimum-cost configuration capable of monitoring the overall process variability (e.g. the generally non-steady-state behavior of the process due to breaks, start-ups, slow-downs, etc.). In the second case, the ratio specifications are considered as variables, so that the system can also assess the accuracy of the performance of each individual separation unit. Measurements and design specifications of consistency and flow rate are considered as variables in both cases. The original configuration includes 43 sensors; the constant ratio specification configuration would require 47 additional sensors, while the variable ratio specification configuration would require 97. The numbers and types of sensors in the optimal sensor systems are presented in Table 3. Annualized costs and accuracies of the sensors were obtained from component price guides and other industrial sources (Gerkens, 2002); they include pressure, temperature, flow and consistency measuring devices.
                                        Constant Ratio    Variable Ratio
                                        Specifications    Specifications
Number of possible sensors              361               385
Maximum cost (upper limit) (€)          209,100           223,600
Best solution after 1201 generations:
  Additional sensors required           47                97
  Annualized cost of
  additional sensors (€)                22,400            53,200

SENSOR                    ANNUALIZED     ACCURACY   NUMBER OF SENSORS
(# of different types)    COST (€)                  Present  Constant R.S.*  Variable R.S.*
Temperature (1)           520            1.5 °C     0        11              19
Pressure (1)              510            1.5 %      30       50              56
Volumetric flow (2)       570 & 1,000    1.5 %      8        16              30
Consistency (6)           500 to 1,000   1 %        5        13              36
Total                     -              -          43       90              141
* R.S.: Ratio specifications
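The genetic algorithm part of the design can be illustrated with a very simple sketch: binary chromosomes mark which candidate sensors are installed, and the fitness combines annualized cost with a penalty for infeasible configurations. The costs and the feasibility test below are placeholders; the actual method decides feasibility from the sensitivity analysis of the reconciled plant model.

```python
# Toy genetic algorithm for sensor selection: minimize annualized cost
# subject to a (stand-in) feasibility requirement. All numbers hypothetical.
import random

N_SENSORS = 20
COST = [random.uniform(500, 1000) for _ in range(N_SENSORS)]  # hypothetical

def feasible(mask):
    # Stand-in for the check that all key performance indicators are
    # computable with satisfactory accuracy from the selected sensors.
    return sum(mask) >= 8

def fitness(mask):
    cost = sum(c for c, m in zip(COST, mask) if m)
    return cost + (0 if feasible(mask) else 1e6)  # infeasibility penalty

def crossover(a, b):
    cut = random.randrange(1, N_SENSORS)
    return a[:cut] + b[cut:]

def mutate(mask, p=0.05):
    return [bit ^ (random.random() < p) for bit in mask]

pop = [[random.randint(0, 1) for _ in range(N_SENSORS)] for _ in range(50)]
for generation in range(1201):   # same generation count as reported above
    pop.sort(key=fitness)
    elite = pop[:10]             # keep the 10 cheapest feasible candidates
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(40)]
best = min(pop, key=fitness)
print("selected sensors:", best, "cost:", round(fitness(best), 1))
```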
5. Conclusions
Data reconciliation has been applied to build a validated model of a waste paper deinking plant. Controller measurements were combined with equipment and process design specifications to compensate for missing data. Despite the fact that the inputs to the process may vary greatly, the model remains generally consistent because the ratio design specifications allow accounting for measurement variability. A method based on sensitivity analysis and genetic algorithm programming has been applied to design the optimal sensor system capable of computing the plant's key performance indicators with satisfactory accuracy at minimum cost. This would allow building a validation model using only sensor measurements, but requires doubling or tripling the number of sampling points.
6. References
Belsim, VALI 3 User's Guide, Belsim s.a., St-Georges-sur-Meuse, Belgium, 2001.
Bonhivers, J.C., Belon-Gagnon, S. and Paris, J., Simulation dynamique de l'atelier de desencrage de l'usine de Kruger a Bromptonville, Quebec, Reference manual, Ecole Polytechnique, Montreal, Canada, 1998.
Gerkens, C., Conception rationnelle de systemes de mesure dans les procedes chimiques, Final project, ULg, Liege, Belgium, 2002.
Heyen, G., Marechal, E. and Kalitventzeff, B., Sensitivity Calculations and Variance Analysis in Plant Measurement Reconciliation, Computers and Chemical Engineering, vol. 20S, 539-544, 1996.
Heyen, G., Application of Data Reconciliation to Process Monitoring, Symposium ISCAPE 2000, Cartagena de Indias, Colombia, 2000.
Heyen, G., Dumont, M.N. and Kalitventzeff, B., Computer-Aided Design of Redundant Sensor Networks, ESCAPE 12, The Hague, Netherlands, 2002.
Jacob, J. and Paris, J., Data Sampling and Reconciliation, Application to Pulp and Paper Mills, Appita Journal, Canada, in print.
Kalitventzeff, B. and Joris, P., Process Measurements Analysis and Validation, CEF87, The Use of Computers in Chemical Engineering, Taormina, Italy, 1987.
Savu, E., Sarailh, S., Marechal, F. and Paris, J., Impact de la fermeture des circuits dans un procede de desencrage, P&P Can. (to be published), and in preprints, 6th Research Forum on Recycling, 155-158, Magog, Canada, 2001.
Walosik, S., Gestion de l'eau dans le procede de desencrage: problematique et etude de cas, Masters thesis, Ecole Polytechnique, Montreal, Canada, 1999.
7. Acknowledgements
The EU-Canada Cooperation Agreement on Higher Education and Training Academic Mobility Program from HRD Canada and MRN Quebec have contributed to funding this project. The authors would also like to thank the plant's staff for providing time and information towards the realization of the study.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Mathematical Description of the Kraft Recovery Boiler Furnace
A. O. S. Costa, E. C. Biscaia Jr. and E. L. Lima*
Programa de Engenharia Quimica - COPPE/UFRJ, Cidade Universitaria - CP: 68502, CEP 21945-970, Rio de Janeiro, Brasil.
*[email protected]
Abstract
A new (hybrid) approach is proposed for the mathematical description of the black liquor burning process in an industrial recovery boiler furnace. The system is divided into four different regions and the concentration of each chemical substance in each region is calculated by direct minimisation of the corresponding Gibbs free energy. The particulate formation is separately described through a neural network trained with industrial data. The resulting hybrid model satisfactorily reproduces black liquor burning data obtained from literature and industrial sources.
1. Introduction
A stationary mathematical model that describes Kraft recovery boilers is under development at the Programa de Engenharia Quimica of COPPE/UFRJ. This study has the technical support of one of the biggest Brazilian pulp and paper companies (Klabin Parana Papeis - KPP). As part of this research project, a mathematical model of the black liquor burning in the furnace of the boiler has been developed, and it constitutes the main purpose of the present contribution.
1.1. Black liquor burning
Grace (1992) gives a detailed description of the different stages involved in the black liquor burning process:
• Drying stage: the particle loses its residual humidity;
• Pyrolysis stage: the particle increases its volume due to gas generation (the generated gases are TRS (Total Reduced Sulphur: CH3SH, CH3SCH3, CH3S2CH3), SO2, CO2, CO, CH4 and H2O);
• Char burning stage: the particle volume decreases with a corresponding density increase, and the particle then arrives at the furnace bottom;
• Oxidation and reduction of the inorganic salts: Na2S reacts exothermically with oxygen, producing Na2SO4. Na2S is the inorganic salt that should be recovered in the furnace due to its importance as an active agent of the wood digestion process. The ratio between the recovered sulphur mass as Na2S and the total sulphur mass present in the smelt is called the reduction efficiency. Although smelt oxidation decreases the reduction efficiency, the energy produced by this reaction facilitates the fusion of the inorganic salts forming the smelt. Besides, this energy favours the endothermic reduction of Na2SO4, which reacts with carbon to form Na2S again.
A particulate material composed of unburned liquor particles and inorganic salts (chemical dust) is also formed during black liquor burning. Part of this material is carried over to other parts of the recovery boiler.
2. Methodology
The characteristic reactions of the black liquor burning, except the ones involved in the particulate formation, are described by a technique based on the minimisation of Gibbs free energy. For this purpose, the KPP furnace has been divided into four regions (Figure 1). In each region, different stages of the process are described:
• Drying region: drying of the particle residual humidity;
• Region 1: black liquor pyrolysis and generation of combustion gases;
• Region 2: combustion of the particle residual carbon and smelt formation;
• Region 3: combustion of reduced substances coming from the furnace bottom.
The burning reactions of the black liquor have been considered only in regions 1, 2 and 3. Thus, the concentration of the chemical species has been determined in each of these regions. The numbers of phases and chemical species in each region have been chosen based on information reported in the literature. A Sequential Quadratic Programming (SQP) method has then been used to solve the minimisation problem.
Figure 1: Regions considered in the mathematical description of the furnace.

Due to the complexity of the particle formation phenomenon (Jokiniemi et al., 1996), this process is described separately through an empirical model using industrial data supplied by KPP. Preliminary tests have shown that linear models cannot describe the relations between the variables involved in this process; at the same time, the chemical mechanism of particle formation is not completely known. A neural network has therefore been developed to describe the particle formation phenomenon.
3. Results
3.1. Minimisation of Gibbs free energy
The results presented in this contribution have been obtained adopting constant values for the black liquor feed (29.57 kg/s), the black liquor solids concentration (84%), the primary air feed mass flow (38.52 kg/s), the secondary air feed mass flow (34.85 kg/s) and the tertiary air feed mass flow (18.34 kg/s). Information supplied by KPP reports that in the same operating conditions the observed reduction efficiency is 96.88% and the TRS emission is 0.82 ppm.
The general considerations adopted during the Gibbs free energy minimisation of regions 1, 2 and 3 are: the black liquor is composed of C, H, O, Na and S; regions 1, 2 and 3 have, respectively, constant mean temperatures T1, T2 and T3; the furnace operates at 1 atm; all phases present are considered ideal; the black liquor particles arrive dried at region 2; the feed to region 3 is composed of the particle residual humidity, the tertiary air and the gaseous phases coming from regions 1 and 2; region 1 is formed by two phases (solid and gaseous); region 3 is formed by only one gaseous phase; nitrogen is inert in the furnace. Moreover, it has been considered that all the reduction and oxidation reactions in region 2 occur in the solid phase. Thus, after reaching chemical equilibrium, the inorganic species only melt to form the smelt. Consequently, during the minimisation of the Gibbs free energy in region 2, only two phases have been considered: one solid and one gaseous.
The preliminary results indicate that part of the primary and secondary air feed goes directly to region 1 without reacting in region 2. Thus, a parameter (Pd) is adopted to simulate this behaviour. Additional tests were made to obtain the correct value of Pd for the simulated operating condition. The obtained results show that 40% of the primary and secondary air feed reacts in region 2 (Pd = 40%). The Gibbs free energies of regions 1 and 2 were minimised and the corresponding results are presented in Tables 1 and 2. In this test, different values of T1 and the mean temperature supplied by KPP for region 2 (T2 = 1080 °C) were adopted.

Table 1: The chemical composition of region 1 (molar composition, %).

                         Test 1  Test 2  Test 3  Test 4  Test 5  Test 6
T1 (°C)                  200     300     400     500     600     700
C(s)                     80.06   78.98   76.51   74.84   71.02   58.72
Na2CO3(s)                13.15   13.89   15.55   16.59   19.08   27.13
Na2SO4(s)                6.79    7.04    1.06    0.01    0       0
Na2S(s)                  0       0.09    6.88    8.55    9.85    14.02
NaOH(s)                  0       0       0       0.01    0.05    0.13
O2(g)                    0       0       0       0       0       2.18
H2(g)                    0.29    0.66    2.00    4.59    8.14    11.25
CO2(g)                   13.89   15.08   15.10   15.81   14.83   9.99
CO(g)                    4.73    3.30    3.59    3.19    4.54    9.27
H2O(g)                   13.48   12.48   12.00   10.19   7.67    5.05
SO2(g)                   0       0       0       0       0       0
CH3SH(g)                 0       0       0.01    0       0       0
CH3SCH3(g)               0       0       0       0       0       0
CH3S2CH3(g)              0       0       0       0       0       0
CH4(g)                   2.11    2.49    1.97    1.50    0.88    0.35
N2(g)                    65.50   65.99   65.34   64.72   63.94   61.91
Gases TRS (ppm)          0.29    69.15   7.24    148.01  0.08    0.01
SO2 concentration (ppm)  2.07    5.38    1.37    3.27    11.39   0
Iisa (1997) affirms that TRS formation begins when the black liquor particle reaches 200 °C and ends at 600 °C. This information is reproduced by the results presented in Table 1.
The carbon consumption of the solid phase in region 1 increases with increasing T1. Consequently, particles leave this region with a low concentration of this element. For this reason, the oxygen is used to oxidise the inorganic salts in region 2, and therefore the reduction efficiency decreases with an increase of T1.

Table 2: The chemical composition of region 2 (molar composition, %).

                          Test 1  Test 2  Test 3  Test 4  Test 5  Test 6
T1 (°C)                   200     300     400     500     600     700
C(s)                      0.03    0.02    0.03    0.02    0       0
Na2CO3(s)                 65.25   65.34   65.40   65.18   65.12   64.88
Na2SO4(s)                 0.67    3.14    1.08    3.77    20.85   33.80
Na2S(s)                   33.19   30.62   32.54   30.07   12.97   0
NaOH(s)                   0.86    0.88    0.95    0.96    1.06    1.32
O2(g)                     0       0       0       0       0       4.76
H2(g)                     0.39    0.28    0.37    0.28    0.16    0
CO2(g)                    19.12   20.31   16.98   17.19   16.46   11.36
CO(g)                     8.45    5.95    6.62    4.82    2.43    0
H2O(g)                    1.86    2.01    1.99    2.13    2.35    2.59
SO2(g)                    0       0       0       0       0       0
CH4(g)                    0       0       0       0       0       0
N2(g)                     70.18   71.45   74.04   75.58   78.60   81.29
Reduction Efficiency (%)  98.03   90.71   96.80   88.86   38.35   0
The obtained smelt (the smelted solid phase of region 2) is usually composed of 33% Na2S and 66% Na2CO3, reproducing the smelt behaviour reported by Grace (2001) and Macek (1999). The real reduction efficiency is reproduced when T1 is 400 °C.
In another test, the Gibbs free energies of regions 1, 2 and 3 were minimised considering different values of T3. The chemical compositions of regions 1 and 2 are those presented in Table 1, test 3 and Table 2, test 3. Table 3 shows the results obtained for region 3.

Table 3: The chemical composition of region 3 (molar composition, %).

                          Test 1  Test 2  Test 3  Test 4  Test 5  Test 6
T3 (°C)                   200     300     400     500     600     700
O2(g)                     0       0       0       0       0       0
H2(g)                     1.72    2.82    2.65    2.47    2.15    1.66
CO2(g)                    15.51   15.03   14.84   14.66   14.34   13.85
CO(g)                     0.13    0.94    1.12    1.30    1.62    2.11
H2O(g)                    18.48   18.14   18.32   18.49   18.82   19.31
SO2(g)                    0       0.01    0.01    0.01    0.01    0.01
CH3SH(g)                  0.01    0       0       0       0       0
CH3SCH3(g)                0       0       0       0       0       0
CH3S2CH3(g)               0       0       0       0       0       0
CH4(g)                    0.48    0       0       0       0       0
N2(g)                     63.67   63.06   63.06   63.07   63.06   63.06
Gases TRS (ppm)           117.47  5.25    0.73    0.08    0       0
SO2 concentration (ppm)   0.17    149.80  155.83  156.67  156.78  156.77
Iisa (1997) affirms that the tertiary air oxidises the TRS gases coming from the furnace bottom. This behaviour is reproduced by the results presented in Table 3. During this study, it was observed that the reduction efficiency is not strongly affected by modifications in T1 or T3. Moreover, assuming values for Pd between 30 and 40%, it has been possible to predict the real reduction efficiency of KPP for different operational conditions.
However, small modifications in T1 or T3 significantly affect the calculated concentration of TRS gases. This behaviour is due to the precision of the optimisation problem resolution, since TRS gases are present in very small amounts (ppm) compared to other chemical species. Thus, the Gibbs free energy minimisation technique is not robust enough to predict the TRS gas emission.
3.2. Empirical description of particulate formation
Different feedforward neural networks with three layers have been tested to describe the particulate formation in the KPP furnace. A linear activation function has been used in the first layer and tan-sigmoid activation functions were used in the hidden and output layers. Training has been accomplished over 1000 epochs using a backpropagation algorithm. The data have been carefully pre-treated, eliminating all data that presented significant measurement errors. A total of 3705 data sets have been used for the training procedure, and validation has been based on another 676 data sets, chosen at random. The prediction efficiency of each neural network has been evaluated using the sum of the squared errors of the validation and training procedures and also by graphical analysis. The best neural network obtained has 9 inputs (temperature, pressure and rate of the furnace black liquor feed; pressure and rate of the primary air feed; pressure, temperature and rate of the secondary air feed; and tertiary air feed rate) and 12 neurons in the hidden layer. The neural network output (Pf) represents the number of particles per minute (apm) passing through a specific region of the furnace. Figures 2 and 3 show the training and validation results.
Figure 2a: Behaviour of the real and predicted data for some training tests. Figure 2b: Relation between the predicted and real values for all training tests.
Figure 2: Behaviour of the best neural network (training tests).

The prediction efficiency of the best neural network is not uniform. The results presented in Figure 3 show that for Pf values larger than 300 apm the neural network presents a smaller prediction efficiency. This behaviour can be associated with the small amount of training data for Pf > 300 apm. However, the neural network presented in Figures 2 and 3 can satisfactorily describe the data tendency. Thus, this model has been incorporated into the mathematical model described in Section 3.1.
Figure 3a: Behaviour of the real and predicted data for some validation tests. Figure 3b: Relation between the predicted and real values for all validation tests.
Figure 3: Behaviour of the best neural network (validation tests).
4. Conclusion
A methodology based on the minimisation of Gibbs free energy has been successfully adopted to describe the chemical composition of an industrial black liquor recovery boiler smelt. Moreover, the chemical compositions reported in the literature and from other industrial sources could also be satisfactorily reproduced using this technique. Industrial data supplied by KPP were used to build an empirical model that describes the particulate formation in the furnace. The feedforward neural network chosen to describe the phenomenon has three layers, 9 inputs and 12 neurons in the hidden layer. This model predicts the amount of particles formed in the furnace and carried to other parts of the recovery boiler. A low computational cost is required to solve the resulting hybrid model (thermodynamic model plus neural network). Thus, this model can be used to analyse the effect of different operating conditions on the black liquor burning.
5. References
Grace, T.M., 1992, Chemical Recovery Process Chemistry, in: Chemical Recovery in the Alkaline Pulping Processes, Eds. R.P. Green and G. Hough, TAPPI Press, Atlanta.
Grace, T.M., 2001, A Review of Char Bed Combustion, International Chemical Recovery Conference, 65, Canada.
Iisa, K., 1997, Recovery Boiler Air Emissions, chapter 8 in: Kraft Recovery Boilers, Ed. T.N. Adams, TAPPI Press, Atlanta.
Jokiniemi, J.K., Pyykonen, J., Mikkanen, P. and Kauppinen, E.I., 1996, TAPPI Journal, 79, 171.
Macek, A., 1999, Progress in Energy and Combustion Science, 25, 275.
6. Acknowledgements The authors acknowledge the financial support provided by CNPq - Conselho Nacional de Desenvolvimento Cientifico e Tecnologico - as well as the technical support from Klabin Parana Papeis (KPP) industry, particularly to O. Vieira, S. H. S. Martinelli and M. A. Betini.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Implementation of a Model Based Controller on a Batch Pulp Digester for Improved Control
P.L. de Vaal*, C. Sandrock
Department of Chemical Engineering, University of Pretoria, Pretoria 0001, South Africa
*: Author to whom correspondence should be sent ([email protected])
Abstract
Effective control of the sulphite pulp digestion process in the production of dissolved pulp in a batch digester is limited by three important restrictions: the degree of polymerisation (DP) of the cellulose in the wood pulp, which is the controlled variable, cannot be measured on-line; the flow rate of steam to an external heat exchanger, through which the cooking liquor circulates, is the only available manipulated variable; and, due to scheduling requirements, the cook time per digester is fixed. The traditional S-factor prediction of cook time to control DP is inadequate. Use of a simplified model-based inferential technique to estimate DP offers an improved methodology to control the process. A simplified fundamental model with adjustable parameters was developed, and its accuracy in predicting DP from given operating conditions was tested using available plant data. This model was built into a control structure for implementation on an operational batch digester. Use of this model enables adaptive response to changing conditions on the plant by optimal adjustment of the model parameters to fit the measured characteristics of the digester. Because the parameters form part of fundamental relationships in the model, realistic bounds can be placed on them during the optimisation process, enabling a better understanding of the behaviour of the model. By automating the optimisation process, the controller becomes independent, requiring no human intervention for day-to-day operation. The plant model is periodically adapted by adjusting the parameters in the model, based on historical data representing the performance of preceding cooks.
1. Introduction
In the production of dissolved pulp, special emphasis is placed on the quality of the pulp, especially with respect to fibre length. Unlike many other processes, where back-blending of product can ensure adherence to product quality requirements, this is not possible in the case of dissolved pulp production. Although modern pulp manufacturing plants make use of continuous pulping digesters, several large plants where batch digesters are used for the production of dissolved pulp are still operational. To meet production targets, the control challenges on such a plant require innovative approaches to ensure trouble-free operation on a continuous basis.
The control challenges to be addressed are:
(i) Each digester operates on a batch cycle over an operating range.
(ii) The controlled variable for quality, the degree of degradation of the pulp, is not continuously measurable.
(iii) Only one manipulated variable, steam supply rate, is available.
(iv) In order to ensure continuous supply of pulp to the downstream processes, strict requirements exist for each digester with respect to the available time to complete cooks.
(v) The controller has to operate within specific constraints, which include limits on available steam and continuously varying heat transfer area, in addition to unmeasured disturbances associated with liquor strength, wood chip composition, moisture content, etc.
The conventional approach towards control of pulp viscosity is to estimate the cook duration based on a prediction of required energy, which is in turn based on a simplified reaction rate for delignification of the wood chips. In the case of dissolved pulp production, this methodology is commonly known as the S-factor approach. In order to achieve tighter control of pulp viscosity under the stated specifications, a model-based approach was taken to address the problem. It is the intention of this paper to show that substantially improved control is possible by making use of available technology.
2. Traditional Control of Pulp Viscosity
The SAPPI SAICCOR plant in Umkomaas, KwaZulu-Natal, South Africa, produces dissolving pulp for the production of a wide range of products. Acid sulphite pulping in 23 batch digesters is used to produce the pulp, mainly from Eucalyptus wood. Since dissolving pulps are produced, the aim of the process is to produce pulp with a specified final degree of polymerisation. Temperature in the digester is used as a manipulated variable in the control of the cellulose degradation, while the steam flow to the external heat exchangers is used to control digester temperature, as shown in Figure 1. Control of temperature is accurate, indicating that the control problem is associated with the difficulty of determining the temperature setpoint profile for the temperature controller. A modified S-factor model, which is currently in use at SAICCOR, relates the degree of polymerisation to the temperature in the digester in order to calculate the temperature setpoints that will ensure obtaining the desired DP in the available cook time.
Figure 1: (a) Two-way control strategy per digester and (b) proposed control model.

The S-factor model was originally developed by Yorston and Liebergott (1965) and was based on the assumption that a correlation exists between the lignin content of the solid phase in the reaction and the pulp viscosity. The widely accepted delignification rate equation, in the form later reported by Hagberg and Schoon (1973), was used to model delignification and to relate the pulp viscosity to it. The Arrhenius temperature dependency of the reaction had already been confirmed, and this was used in equation 1. The rate equation was then integrated with the assumption that the order of delignification was unity with respect to the residual lignin percentage. This gave the "S-factor" model: (1)
Integration of the left-hand side of this equation yields:

[Lf] = k1 exp(-k0 SF)    (2)

From the initial assumption, the residual lignin concentration was taken to be proportional to the cuprammonium viscosity of the pulp, leading to the following:

Viscosity = k2 exp(-k0 SF)    (3)
The parameters k1 and k2 are constants. Equation 3 formed the basis of the "S-factor" model. Individual mills adapted the S-factor model to their own data and observations in order to improve the control of their respective plants, for example using the actual SO2 concentration of the liquor instead of the partial pressure of the SO2 (Marr and Bondy, 1986). It is apparent that the weakness of the model lies in the assumption that the residual lignin content is proportional to the degree of polymerisation of the cellulose.
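To illustrate how an S-factor target is used in practice, the sketch below accumulates an Arrhenius-type integrand over a cook temperature profile and maps the result to a viscosity through equation (3). All numerical constants are hypothetical placeholders, since the mill-specific values are not given here.

```python
# S-factor evaluation sketch: SF = integral of A*exp(-E/T) dt over the cook,
# then viscosity from equation (3). Constants A, E, k0, k2 are hypothetical.
import numpy as np

A, E = 1.0e16, 16000.0     # prefactor and activation temperature (K)
k2, k0 = 900.0, 0.8        # hypothetical constants of equation (3)

def s_factor(t_h, T_C):
    """Trapezoidal integral of A*exp(-E/T) over the cook profile."""
    T = np.asarray(T_C) + 273.15
    f = A * np.exp(-E / T)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t_h)))

# Illustrative cook: ramp from 80 C to 145 C over 3 h, then hold for 2 h.
t = np.linspace(0.0, 5.0, 501)
T = np.where(t < 3.0, 80.0 + (145.0 - 80.0) * t / 3.0, 145.0)

SF = s_factor(t, T)
print(f"S-factor = {SF:.3f}")
print(f"predicted viscosity = {k2 * np.exp(-k0 * SF):.1f}")  # equation (3)
```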
3. Improved Model-Based Approach
The derivation of the simplified mechanistic model (Kilian & de Vaal, 2000) used by the controller can be summarised by the following assumptions:
(i) The reactions in the digester can be accounted for by modelling the rates of cellulose degradation, lignin dissolution and the strong acid equilibria in the digester. The hemicellulose degradation rate is implicitly calculated in these equations.
(ii) The initial composition of the reaction mixture is known from a test of the liquor.
(iii) The temperature and pressure in the digester are known.
To model the reactions, the following reaction rates are used:
-d[L]/dt = kL [L]^a [HSO3-]^b [H+]^c      (Lignin)           (4)
d[A]/dt = kSA (...)                                          (5)
-d[C]/dt = kC (...)                       (Cellulose)        (6)
-d[HC]/dt = kHC [HC]^n (...)              (Hemicellulose)    (7)
The temperature dependence of all the k factors above, except for kSA, can be described by the Arrhenius relationship, k = k0 exp(-E/T), with E and k0 known for each of these reactions. The hydrogen and bisulphite ion concentrations are obtained via electroneutrality arguments combined with the assumption that equilibrium is attained between the SO2 in the vapour and liquid phases.
3.1. The model-based controller
The control objective of any controller can be seen as keeping the effect of disturbances small while allowing the controlled system to produce the specified outputs, as well as allowing the system to move between desired outputs in a satisfactory manner. This leads to a conceptual picture of the control process required for the digester. The controller is based on a general feedforward structure, with measured process variable vector p and unmeasured disturbance vector d entering the controlled system. Vector p is used in the feedforward structure to determine the value of the manipulated variable, Tmax, required to reach the desired DP value. This concept is shown in Figure 1(b). It should be clear that some form of inverse model is required, as with all feedforward control. This inverse is obtained iteratively by finding the value of Tmax that gives
DP = DPset. This is necessary because a highly non-linear time domain model has already been developed, and finding this inverse is computationally intensive.
3.2. Adaptive model
Upon examination of Figure 1(b), it is clear that the controller does not have any specific measurement of control error. When the conditions on the plant change, causing plant-model mismatch, control can be expected to degrade. In order to keep the model parameters in line with the measured results from the plant, it is necessary to provide a feedback mechanism. This is done using a periodic optimisation of the model parameters. At a predetermined frequency, the model parameters are adapted by optimising the fit of the model to the measured DP values for the cooks since the last optimisation. This ensures that the model is constantly in close accord with the conditions on the plant. This is illustrated in Figure 2 below.
3.3. Plant interaction
Interaction with the plant was achieved using direct queries to the plant database to obtain all the data required for operation. Both continuous and historical data can be obtained in this manner, so that operation is entirely automated.
Figure 2: Adaptive model.
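The two mechanisms of Figures 1(b) and 2 can be sketched as follows: the feedforward controller inverts the cook model iteratively (here by bisection on Tmax, exploiting monotonicity), and the model parameter is periodically re-fitted, within realistic bounds, to the DP measured on preceding cooks. The cook_model function below is a hypothetical stand-in, not the fundamental model of Kilian and de Vaal (2000).

```python
# Schematic controller sketch: iterative model inversion plus periodic,
# bounded re-optimisation of a model parameter against recent cooks.
import numpy as np
from scipy.optimize import minimize_scalar

def cook_model(T_max, p, k=0.05):
    # Hypothetical monotone relation: higher T_max -> lower final DP.
    return 1500.0 * np.exp(-k * p * (T_max - 120.0))

def invert_for_tmax(dp_set, p, lo=120.0, hi=160.0, tol=1e-3):
    # Bisection works because DP decreases monotonically with T_max.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cook_model(mid, p) > dp_set else (lo, mid)
    return 0.5 * (lo + hi)

def adapt_parameter(t_history, dp_history, bounds=(0.5, 2.0)):
    # Least-squares refit of p, within realistic bounds, to measured DP.
    sse = lambda p: sum((cook_model(t, p) - dp) ** 2
                        for t, dp in zip(t_history, dp_history))
    return minimize_scalar(sse, bounds=bounds, method="bounded").x

p = 1.0
t_max = invert_for_tmax(dp_set=600.0, p=p)
print("Tmax for DPset=600:", round(t_max, 2))
# After a few cooks, refit p to (hypothetical) measured DP values:
p = adapt_parameter([t_max, t_max + 1.0], [590.0, 570.0])
print("adapted p:", round(p, 3))
```

Because the adapted parameter retains its physical meaning, bounds like those above prevent the fit from drifting to unrealistic values.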
4. Results
The model used in the controller has been shown to be accurate using existing plant data (Kilian, 1999). Before implementation, simulation experiments also showed that the controller predictions for Tmax outperform the current S-factor controller. Table 1 shows the difference between actual and predicted temperatures, along with the resulting viscosities, for six cooks.

Table 1: Temperatures obtained and predicted by the program.

Cook number   % Difference between Tmax reached      Normalised
              using S-factor control and Tmax        Viscosity (%)
              predicted by the new model
1                                                    91
2             0.72                                   100
3             3.55                                   61
4             -0.71                                  77
5             3.55                                   56
6             2.17                                   53
5. Conclusion
A new model-based control algorithm that interacts with an industrial batch digester was developed and has been implemented. The control predictions made by the new controller led to improved control compared with the current controller, which is based on a modified S-factor model.
6. References
Hagberg, B. and Schoon, N.-H., 1973, Kinetical aspects of the acid sulfite cooking process. Part 1: Rates of dissolution of lignin and hemicellulose, Svensk Papperstidning, 76(15), 561-568.
Kilian, A., 1999, Control of an Acid Sulphite Batch Pulp Digester Based on a Fundamental Process Model, Master's Dissertation, University of Pretoria.
Kilian, A. and de Vaal, P.L., 2000, Potential for improved control of an acid sulfite batch digester using a fundamental model, TAPPI Journal, 83, November.
Marr, S.Y. and Bondy, W.B., 1986, Application of viscosity prediction model in sulphite pulping, CPPA/TAPPI International Sulphite Conference, Quebec.
Yorston, F.H. and Liebergott, N., 1965, Correlation of the rate of sulphite pulping with temperature and pressure, Pulp and Paper Magazine of Canada, 66 (May), T272.
European Symposium on Computer Aided Process Engineering - 13 A. Krasiawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Steady State and Dynamic Behaviour of Kraft Recovery Boiler
Sh. Ghaffari, J.A. Romagnoli
Centre for Process Systems Engineering, Department of Chemical Engineering, The University of Sydney, Sydney, NSW, 2006, Australia
Abstract
This work is part of the Smart Enterprise Division at Visy Pulp and Paper, Tumut, and addresses the advanced control and operation of the mill. In this context, a robust dynamic model is developed for the recovery boiler and validated over a wide range of operating conditions. In the steady state case, energy and mass balances were carried out over the different sections of the process. In the dynamic case, heat and mass transfer across the bed are coupled with moisture evaporation, black liquor pyrolysis, char combustion and gasification, and gas-phase combustion. The influence of model parameters, kinetic constants and operational variables on the process dynamics is studied by numerical simulation. The model was developed and implemented in a Visual C++ environment.
1. Introduction
The production of Kraft linerboard is an extremely competitive market in which mills must optimise production whilst operating within environmental limitations. Visy is a private company that offers customers a complete range of packaging materials including paper and cardboard. It consists of five separate divisions: recycling, paper production, board production, specialties, as well as pulp and paper. Visy is pioneering Kraft pulping technology with energy efficient qualities at its Tumut mill. The mill uses extensive industry experience and research in the paper industry to engineer advanced operation concepts with near-zero effluent levels. The mill comprises seven sections, each with a unique purpose: the wood yard and stock preparation, fibre line, paper machine, evaporation plant, recovery boiler, recausticizing area, and auxiliary systems. The mill is extremely integrated and contains three main cycles: the fibre line, chemical recovery, and cooling water and effluent treatment. The Smart Enterprise Division was established within Visy with the challenge of improving the performance of the mill and implementing advanced strategies that ensure optimal process operation. Four key strategies were identified for this purpose and are outlined below:
• Steady state and dynamic modelling
• Process optimisation and advanced control
• Data reconciliation and rectification
• Fault identification and diagnosis
The development of a dynamic model of the recovery boiler at the mill is addressed in this study. The model allows operators to gain deeper process knowledge, in particular of the effect of input variables on the flue gas temperature out of the furnace. Future studies will consider the benefits of using such a detailed model for formulating and implementing advanced control strategies.
2. Model Definition
A steady state and a dynamic model were successfully developed to predict the steady state and dynamic behaviour of a recovery boiler. Each model is briefly described in the following.
2.1. Steady state model
The steady state model is based on mass and energy balances carried out over the furnace, heat exchanger area, economising section, Dolezal, drum, generating section and superheaters. A set of nonlinear algebraic equations is used to predict changes in the physical state of each system. Fig. 1 shows a simplified PFD for the recovery boiler steam generation cycle. Several assumptions were made to simplify the model equations:
• Chemical reduction efficiency is 0.95.
• Excess air ratio is 1.2.
• 1% of the elemental sulfur feed is converted to sulfur dioxide.
• The fraction of potassium converted to potassium sulfide is 0.25.
• The temperature of smelt leaving the furnace is 800 °C.
Fig. 1 - A simplified PFD for the recovery boiler steam generation cycle.

The numerical solution of the steady state model equations was obtained using a globally convergent Newton method.
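The following minimal sketch shows the damped (globally convergent) Newton idea in question: full Newton steps are halved, via a backtracking line search, until the residual norm decreases. The two-equation residual below is a generic stand-in, not the actual boiler balance equations.

```python
# Damped Newton iteration sketch for a nonlinear algebraic system F(x) = 0.
import numpy as np

def F(x):
    # Placeholder nonlinear residual, not the recovery boiler balances.
    return np.array([x[0] ** 2 + x[1] - 3.0,
                     x[0] - x[1] + 1.0])

def J(x):
    # Analytic Jacobian of the placeholder residual.
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, -1.0]])

x = np.array([2.0, 2.0])
for it in range(50):
    r = F(x)
    if np.linalg.norm(r) < 1e-12:
        break
    dx = np.linalg.solve(J(x), -r)           # full Newton step
    lam = 1.0
    while (np.linalg.norm(F(x + lam * dx)) >= np.linalg.norm(r)
           and lam > 1e-8):
        lam *= 0.5                            # damp until residual drops
    x = x + lam * dx
print("solution:", x, "after", it + 1, "iterations")
```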
2.2. Dynamic model
The dynamic model is based on mass and energy balances for the solid phases and for the gas phases, which are coupled with moisture evaporation, black liquor devolatilization, char combustion and gas-phase combustion.
Combustion of black liquor is characterized in three stages: drying, devolatilization and char burning. Drying and devolatilization occur very rapidly. Char burning is the last stage of the burning process; it shows no visible flame and is characterized by a number of heterogeneous chemical reactions which simultaneously consume carbon and reduce sodium sulphate to form liquid smelt. Drying is modelled as heating-rate limited, and the bulk temperature of the liquor drop remains at the water saturation temperature of black liquor until drying is completed. A model for black liquor devolatilization was developed by Fredrick (1995), in which devolatilization is dependent upon the heating rate. However, the rate parameters were not well defined in the literature and the results are sensitive to the assumptions made. More recently, a kinetically-limited devolatilization model for black liquor was developed by Wessel (1998), which was subsequently adopted in this study. The rate of devolatilization is based on three competing reactions, each with an Arrhenius-type rate expression and first-order behaviour with respect to the unreacted black liquor solid.

Table 1. Reactions and rate expressions.

Drying:
H2O(l) → H2O(g)                     R1 = f(heating rate)                   (1)
Devolatilization:
black liquor solid → volatiles + char + inorganics
(three competing Arrhenius-rate reactions)                                 (2)
Heterogeneous reactions of char:
C(s) + H2O(g) → H2(g) + CO(g)       R3 = K3 [C] cH2O / (cH2O + 1.42 cH2)   (3)
C(s) + CO2(g) → 2CO(g)              R4 = K4 [C] cCO2 / (cCO2 + 3.4 cCO)    (4)
C(s) + O2(g) → CO2(g)               R5 = K5 Ap [C] pO2                     (5)
Na2S(l) + 2O2(g) → Na2SO4(l)        R6 = K6 Ap [Na2S] pO2                  (6)
2C(s) + Na2SO4 → Na2S + 2CO2        R7 = K7 [Na2SO4] f [C]                 (7)
Gas phase water gas shift:
CO(g) + H2O(g) ↔ CO2(g) + H2(g)     R8 = (...)                             (8)
Gas phase combustion:
CH4 + (3/2)O2 → 2H2O + CO           R9 = (K9/T) cCH4 cO2                   (9)
2H2 + O2 → 2H2O                     R10 = (...)                            (10)
* K = A exp(-E/RT); Ap = 160 m2/g
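For illustration, the sketch below evaluates a Table 1-style rate (reaction 3, char gasification by steam) with an Arrhenius constant and integrates the char consumption explicitly; the A and E values used are hypothetical placeholders, not the model's fitted constants.

```python
# Arrhenius rate evaluation and explicit integration of char consumption.
import numpy as np

R = 8.314  # J/(mol K)

def arrhenius(A, E, T):
    return A * np.exp(-E / (R * T))

def r3_char_steam(T, C_char, c_h2o, c_h2, A3=2.0e5, E3=1.3e5):
    # Reaction (3): R3 = K3 [C] cH2O / (cH2O + 1.42 cH2)
    K3 = arrhenius(A3, E3, T)
    return K3 * C_char * c_h2o / (c_h2o + 1.42 * c_h2)

# Explicit-Euler integration of char consumption by reaction (3) alone:
T, dt = 1100.0, 1e-3             # K, s
C, c_h2o, c_h2 = 10.0, 2.0, 0.1  # illustrative concentrations, mol/m3
for _ in range(5000):
    rate = r3_char_steam(T, C, c_h2o, c_h2)
    C = max(C - rate * dt, 0.0)
print(f"char remaining after 5 s: {C:.3f} mol/m3")
```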
The char burning process is assumed to occur after devolatilization is complete and carbon and inorganics have been released from the black liquor. Kraft char burns via a sulfate/sulfide cycle: the carbon in the char reacts with sulfate, reducing it to sulphide and forming CO2; the sulphide in turn reacts with oxygen from the combustion air, re-forming sulfate and completing the cycle. Other char burning reactions occur if O2, CO2 or H2O are present in the gas. Combustible gases are evolved from black liquor during devolatilization and char burning. Volatiles are represented by a mixture of gases (CH4, CO, CO2 and H2O) based on the elemental composition of the black liquor solids, the residual carbon and inorganic matter, and the volatile yield. Homogeneous gas phase combustion is simulated when O2 is present in the surrounding gases to react with the volatiles and char combustion products leaving the particle. The gas phase reactions are very fast in comparison with the time scale of turbulence. The reactions considered are given in Table 1.
Excess air is used in the furnace to control the combustion of the black liquor fuel and prevent the formation of environmental pollutants. The excess airflow was altered whilst all other process inputs and assumptions were held constant. The results in Fig. 4 illustrate that reducing the excess airflow increases the steam temperatures out of the tertiary and secondary superheaters. The steam temperature out of the primary superheater shows no apparent fluctuations; however, the economiser and Dolezal water temperatures increase. The only way for this to occur is to have hotter flue gas exiting the boilerbank. Figure 5 shows an increase in flue gas steady state temperature with decreasing excess airflow rate in all sections other than the boilerbank, as had been expected, and consequently an increasing flue gas steady state temperature out of the primary economiser. The model was also validated against plant data (Figs. 6 and 7). As shown, the model results are strongly compatible with the plant data.
Fig. 2 - The effect of the variation of feedwater flow rate upon water/steam temperature.
Fig. 3 - The effect of the variation of feedwater flow rate upon flue gas temperature.
Fig. 4 - The effect of altering the excess airflow upon steam/water temperature.
Fig. 5 - The effect of altering the excess airflow upon flue gas temperature.
Fig. 6 - Comparison of water/steam temperatures of the model against the process temperatures.
Fig. 7 - Comparison of flue gas temperatures of the model against the process temperatures.
Figures 8 and 9 show some preliminary results of the dynamic simulation. In this study the initial solid content was increased by 20%. As expected, increasing the initial solid content increases the flue gas temperature and decreases the flue gas flow rate.
Fig. 8 - The flue gas temperature vs. time for a 20% increase in solid content.
Fig. 9 - The flue gas flow rate vs. time for a 20% increase in solid content.
4. Conclusions
As part of the Smart Enterprise Division at the Visy pulp and paper mill, this work is related to the development of an advanced operation/control strategy for the plant's recovery boiler. The development and implementation of a steady state and a dynamic model was conducted first, and is reported here. Model validation results were found to be within 6% error with respect to actual plant data (for the steady-state case), and preliminary dynamic simulations show consistent trends. Work is underway to extend the capabilities of the dynamic model to the two/three-dimensional case for more realistic considerations, as well as to incorporate the upper furnace and steam generation sections. The developed model will then be used to investigate the implementation of advanced control strategies that could achieve different operation objectives and maintain the desired paper quality for the mill.
5. References
Jarvinen, M. and Zevenhoven, R., Black Liquor Devolatilization and Swelling - Detailed Droplet Model and Experimental Validation, Behaviour of Inorganic Material in Recovery Boilers Conference, Bar Harbor, Maine (USA).
Verrill, C.L. and Wessel, R.A., 1998, Detailed Black Liquor Drop Combustion Model for Predicting Fume in Kraft Recovery Boilers, Tappi J., 81(9), 139.
Wessel, R.A., Parker, K.L. and Verrill, C.L., 1997, Three-Dimensional Kraft Recovery Furnace Model: Implementation and Results of Improved Black-Liquor Combustion Models, Tappi J., 80(10), 207.
Wessel, R.A. and Verrill, C.L., 1998, Black Liquor Combustion Model for Predicting Ash Chemistry in Kraft Recovery, Proceedings of the AIChE Annual Meeting, Florida, USA.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Processing of Thermo-Mechanical Pulping Data to Enhance PCA and PLS
Robert P. Harrison* and Paul R. Stuart
Department of Chemical Engineering, Ecole Polytechnique de Montreal, Montreal, Quebec, Canada.
* Author to whom correspondence should be addressed: [email protected]
Abstract
The purpose of this paper is to examine what differences, if any, are obtained in multivariate analysis results using different time scales and averaging methods on compressed historical data. The data describe the operational performance of the refiner section of a modern thermo-mechanical pulp mill, a straightforward case whose feed and intermediate products have understandable physical and mechanical properties. Overall, it was found that medians give slightly better results than averages, and that the goodness of fit of the model is heavily influenced by the sampling frequency of key process parameters.
1. Introduction
Seasonal variations, changes in incoming chip quality and other external factors that can affect pulp quality are often beyond the control of the thermo-mechanical pulp (TMP) mill operator. Many internal factors are controllable, however, and could possibly be used to counteract these external forces. The ultimate goal is to model in real time parameters that cannot be measured continuously, in order to apply inferential control ("soft sensor") as reported in Strand et al. (2001) and elsewhere (Kooi, 1994; Kresta et al., 1994). Before proposing any such control strategy, however, it is necessary to understand the correlations and trends which are inherent to the refining operation at the heart of the pulp mill, using historical data.
The Canadian TMP newsprint mill under investigation has had a high-speed PI data historian in place for 34 months, into which virtually all process and operating data for the entire mill are fed. The mill has over 6 000 data tags, some of which are updated every 10 seconds, potentially representing millions of numbers per day. This data explosion has created a daunting mass of information, one for which the automated pattern-recognition techniques of multivariate analysis (MVA) are perfectly suited. The underlying principle of MVA is that useful patterns and relationships not intuitively obvious lie hidden inside enormous, unwieldy databases. Mill personnel have tried to establish relationships between the process variables by considering only a few at a time, an impossible task, hence their interest in co-operating with Ecole Polytechnique on a new approach.
Final paper quality is of paramount importance to the mill, and can be influenced by many factors (Wood, 2001). Principal Component Analysis (PCA) of pulp quality and newsprint quality variables at the mill has shown that two pulp parameters, Medium Fibre Fraction and R48 Fibre Length Fraction, are strongly correlated with important paper characteristics such as permeability, stretch, burst strength and tear strength (Harrison et al., 2003). These two related parameters were therefore selected for the present study. Previous papers on TMP operation used a variety of time scales, ranging from monthly averages to instantaneous readings (Lupien et al., 2001; Saltin et al., 1995; Shaw, 2001; Strand et al., 2001). The main purpose of this paper is to examine what differences, if any, are obtained in MVA results using different time scales and averaging methods.
2. Methodology
One of the mill's four pulp production lines was selected for study. Using the mill process and instrumentation diagrams, key data tags were identified on and around the primary and secondary refiners, including:
• Chip quality data (grab samples at the TMP feed conveyor, analysed in a laboratory): chip size distribution, bulk density, humidity.
• Refiner operating data, such as: throughput; specific energy imparted to the chips; energy split between the primary and secondary refiner; vertical and conical plate distances; dilution rates; levels, pressures and temperatures in various units immediately connected to the refiners; steam generation rate; voltage at chip screw conveyors; specific hydrosulphite consumption.
• Equipment data, such as: operational hours elapsed since refiner plates were last replaced or changed direction; number of refiners sending steam to heat recovery at any given moment; number of "feedguard" events, indicating refiner blockages; refiner body temperature.
• Pulp quality data (automated, on-line analysis of grab samples using the Pulp Expert system): fibre length distribution; freeness; consistency; brightness.
Daily averages were extracted for the full 34 months, viz. November 23rd, 1999, to October 1st, 2002. To investigate shorter time scales, mill personnel helped to identify a recent typical operating week in which the chips were all from a single supplier and no unusual production problems were encountered at the TMP mill: September 16th-22nd, 2002.
The first step for each run was to perform PCA on the entire dataset, to identify outliers and obtain an overall portrait. Periods of unusually low production (< 100 t/d) were excluded beforehand, as previous experience had shown these to produce major outliers systematically. Partial Least Squares (PLS) analysis was then performed using Medium Fibre Fraction (MFF) and R48 Fibre Length Fraction (R48) as the two Y's, and all other upstream data as the X's. Where applicable, a lag was introduced between the X's and Y's to account for the 45-minute residence time in the latency chest. Of the many MVA outputs that can be generated using the Simca-P software, three dissimilar ones were selected for comparison: the Variable Importance Plot, to rank the X's in terms of importance to modelling the Y's; R2 and Q2 values for each of the two Y's; and Observed vs. Predicted, to examine how well the PLS model can predict new Y's based on the X's.
3. Research Results
3.1. Establishing time scales
Process data from the mill are stored in compressed form, i.e., only those values deviating more than ±1% from the previous stored value are kept. Compressed data can be extracted as:
• the actual stored data points, which will include significant time gaps, or
• interpolated, in which all time gaps are filled.
There are various other possibilities, such as selecting the previous stored value. The shortest time increment used at the mill in question is 10 seconds, which may be considered the lower limit. The system has been on line for 34 months, so one year could be considered the upper limit. It is possible to select virtually any time scale in between for analysing data for diagnostic purposes. In the refining section of the mill, there are several frequencies which guided the choices:
• Instantaneous pulp quality readings for Line 1 are taken every two hours, on average.
• Chip grab samples are taken from the TMP incoming conveyor every eight hours.
• The mill operates on three daily shifts of eight hours.
There are also computer limitations; for instance, it is not feasible to extract 10-second interpolated data for hundreds of tags over 34 months. Figure 1 below illustrates the spectrum of available time scales in this case. Three time scales were selected:
• 24 hours, encompassing three workshifts and three chip samples;
• 8 hours, corresponding to one workshift and one chip sample; and
• 1 hour, which is intentionally less than both the chip sampling frequency and the pulp sampling frequency.
One must also establish the averaging method. In the PI-Datalink software, the "average" function is a time-weighted mean, corresponding to interpolated data, whereas the "mean" function only uses the actual data points, thereby giving greater weight to periods where there are significant changes in the process. Both these options were used in the 8-hour case, along with the median value.
Figure 1: Range of possible time scales for the system under study.
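The three aggregation choices discussed above can be reproduced, for example, with pandas on extracted tag data: a time-weighted average (interpolate to a regular grid, then resample), a plain mean of the stored points, and a median, all on an 8-hour shift basis. The tag values below are synthetic stand-ins for compressed historian data.

```python
# Comparing time-weighted average, raw-point mean, and median aggregation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Irregular, compression-style timestamps over one day:
t = pd.to_datetime("2002-09-16") + pd.to_timedelta(
    np.sort(rng.uniform(0, 24 * 3600, 300)), unit="s")
tag = pd.Series(50 + rng.normal(0, 2, 300).cumsum() * 0.1, index=t)

# (a) time-weighted average: fill to a regular 10-s grid, then resample
grid = tag.resample("10s").mean().interpolate("time")
avg_8h = grid.resample("8h").mean()
# (b) plain mean of the stored points only (weights periods of change more)
mean_8h = tag.resample("8h").mean()
# (c) median of the stored points (less sensitive to short perturbations)
median_8h = tag.resample("8h").median()

print(pd.DataFrame({"avg": avg_8h, "mean": mean_8h, "median": median_8h}))
```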
3.2. Overview of entire dataset
It has been shown that pulp refining data can be distilled into a small number of latent variables or components using MVA (Broderick et al., 1995). In a previous study at the same mill (Harrison et al., 2003), daily averages over several years yielded four major PCA components, the most important of which was Line 1 throughput. The next two components appeared to be strongly related to seasonal variations in chip quality, and the fourth to refiner plate gap. An important outcome is that even when the low production points are removed, throughput continues to dominate other variables when using multi-year data. This occurs even within a relatively narrow range of normal production rates, probably because of the large number of variables that change when the throughput changes, such as dilution flows, screw feeder motor voltages and so forth. In contrast, when the period is reduced to just one week, the production rate becomes less influential so long as the major valleys are removed. In performing PLS analysis with MFF and R48 as Y's, it was found that the PCA score plots for the remaining hundred-plus X variables strongly resembled those for the entire dataset, which will greatly facilitate the eventual physical interpretation.
3.3. PLS with different time scales and averaging techniques
No major problems were encountered in performing PLS at the various time scales. Linear interpolation was used for the chip and pulp quality data at the 1-hour time scale. The fit of all the models was reasonably good, with R2 values between 0.66 and 0.92, and Q2 values of 0.56 to 0.72. Of course, all the runs were performed on only one week of data, meaning that the models' ability to predict other weeks is probably much lower.
3.4. Shorter time scales
With modern computers, it is possible to extract one week's worth of data for hundreds of tags at 10-second increments. This requires linear (or other) interpolation between compressed data points. It would also require some form of interpolation for intermittent measurements, such as the 2-hour pulp quality grab samples, an extremely gross approximation. The use of the previous recorded value is also of no use in this case, since Simca looks for variables which tend to move at the same time, and is "fooled" into thinking that a major process shift is occurring every two hours or so. The same argument applies, to a lesser extent, to shorter averaging time scales such as 1 minute or 10 minutes. To perform a PLS at such a temporal resolution, a much more frequent pulp sampling campaign would be required. Residence times in the different unit operations would also have to be known to an equivalent precision, to ensure that all time lags are accounted for during the data pre-processing. In conclusion, shorter time scales can be used to model individual sections of the refining process, for which frequent data is available, but in this case they were not appropriate for modelling the overall process from chips to pulp.
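Although the study itself used Simca-P, the PLS workflow can be sketched with scikit-learn as below: a two-component PLS model of two Y's on the X block, with R2 from the fit and Q2 from cross-validated predictions. The data are random placeholders with an injected linear structure, not mill data.

```python
# PLS sketch: fit, R2 of the fit, and Q2 from cross-validated predictions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 30))                 # ~30 scaled process tags
B = rng.normal(size=(30, 2)) * (rng.random((30, 2)) < 0.2)
Y = X @ B + 0.5 * rng.normal(size=(120, 2))    # MFF and R48 stand-ins

pls = PLSRegression(n_components=2).fit(X, Y)
r2 = pls.score(X, Y)                           # R2 of the fit
Y_cv = cross_val_predict(pls, X, Y, cv=7)      # cross-validated predictions
q2 = 1 - ((Y - Y_cv) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()
print(f"R2 = {r2:.2f}, Q2 = {q2:.2f}")
```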
4. Comparison of Different Data Processing Approaches
4.1. Relative importance of X's to PLS model
Variable importance plots for 1-hour, 8-hour and 24-hour averages showed the same handful of X's dominating, regardless of time scale, though sometimes the relative order of the X's would switch. Among the rest of the X's, the main difference was that chip size distribution had virtually no impact on the 1-hour averages, supporting the notion that even shorter time scales would be of no use in this case. Overall, the chip data resolved best at the 24-hour time scale, probably because the inherent variability of the grab samples was partly compensated by the levelling effect of averaging over three readings.
4.2. Goodness of fit of PLS model
PLS Q2 values for the Medium Fibre Fraction variable were generated for each of the runs (Figure 2). While the 8-hour average was not much better than the 1-hour average, the 8-hour median did show a significant improvement. It appears that the median is a better representation of the 8-hour workshift than the average, which may be unduly influenced by minor, short-term perturbations. Note that the median MFF value was also used in this case. The "mean" averaging technique gives a poorer result, and is probably not appropriate for this type of application. One interesting outcome is that the 24-hour average fared the poorest. This means that this model predicts individual days less well than 8-hour shifts. This may simply be a case of too few data points. When the entire month of September was included, the Q2 improved, but the important X's changed radically, meaning that it was no longer the same model.
4.3. Observed vs. predicted
Another method of comparing the outputs is to plot the actual observations for each time increment against the value the model would have predicted if given the corresponding X data as an input. The degree of scatter in Figure 3 shows that the 8-hour model comes closest to fitting the ideal 45° line. The most likely explanation is that the 1-hour time increment is lower than that of the pulp quality readings, such that the Y values for some of the hours are really averages of purely interpolated data only. The 24-hour average shows significant and non-normally distributed scatter, probably due to the low number of data points.
Figure 2: PLS Q2 values for Medium Fibre Fraction.
Figure 3: Real observation (ordinate) vs. PLS prediction (abscissa) for different time scales (1-h, 8-h and 24-h averages).
5. Conclusions
Overall, it was found that low production points tend to dominate other variables, and so must be treated separately, and that medians give slightly better results than averages. Specific recommendations are to:
• Remove low production days using a percentile or threshold.
• Use medians instead of averages, if possible.
• Choose the time scale according to the intended application. Generally, the same X variables are prominent regardless of the time scale used, but the goodness of fit of the model is heavily influenced by the sampling frequency of key process parameters.
6. References
Broderick, G., Paris, J., Valade, J.L. and Wood, J., 1995, Applying Latent Vector Analysis to Pulp Characterization, Paperi ja Puu, 77(6-7), 410-419.
Harrison, R., Leroux, R. and Stuart, P., 2003, Multivariate Analysis of Refiner Operating Data from a TMP Newsprint Mill, PAPTAC 2003 Conference, Montreal, Canada.
Kooi, S., 1994, Adaptive Inferential Control of Wood Chip Refiner, Tappi Journal, 77(11), 185-194.
Kresta, J.V., Marlin, T.E. and MacGregor, J.F., 1994, Development of Inferential Process Models Using PLS, Computers and Chemical Engineering, 18(7), 597-611.
Lupien, B., Lauzon, E. and Desrochers, C., 2001, PLS Modelling of Strength and Optical Properties of Newsprint at Papier Masson Ltee, Pulp and Paper Canada, 102(5), 19-21.
Saltin, J.F. and Strand, B.C., 1995, Analysis and Control of Newsprint Quality and Paper Machine Operation Using Integrated Factor Networks, Pulp and Paper Canada, 96(7), 48-51.
Shaw, M., 2001, Optimization Method Improves Paper/Pulp Processes at Boise Cascade, Pulp and Paper, March, 43-51.
Strand, W.C., Fralic, G., Moreira, A., Mossaffari, S. and Flynn, G., 2001, Mill-Wide Advanced Quality Control for the Production of Newsprint, IMPC Conference, Helsinki, Finland.
Wood, J.R., 2001, Controlling Wood-Induced Variation in TMP Quality, Tappi Journal, 84(6), 32-34.
A Decomposition Strategy for Solving Multi-Product, Multi-Purpose Scheduling Problems in the Paper Converting Industry
Jernstrom P., Westerlund T. and Isaksson J.*
Abo Akademi University, Process Design Laboratory, Biskopsgatan 8, FIN-20500 Abo
* UPM-Kymmene, WalkiWisa, FIN-37630 Valkeakoski
Abstract
In the present paper, multi-product, multi-purpose, daily, short-term production planning in the paper converting industry is considered. In the actual production process finite intermediate storage is used and, in the discrete operations, successive operations may start before preceding stages have been completely finished. Such features enable flexible manufacturing as well as efficient production plans. The scheduling problem is thus challenging to solve, especially since the number of orders to process, even in the short-term production plan, typically exceeds 100. With the presented decomposition method even problems this large can be solved.
1 Introduction
Multi-product, multi-purpose facilities are common in today's industry, as they offer flexibility in the manufacturing process. The downside of these kinds of facilities is that they are more difficult to operate than dedicated ones, especially when it comes to production planning. In flexible manufacturing systems incorporating multi-purpose machines, there is seldom a single known bottleneck. Instead, the bottleneck moves according to what is produced and when the production of each product has started. Many heuristics have been developed to address the problem; Abadi (1995) and Franca et al. (1996) present some iterative methods. Different approaches may be used, but it turns out that the models become very large as the number of orders grows. The specific problem considered may be modeled using a continuous time formulation and then effectively solved using decomposition methods. Care must be taken when choosing the decomposition strategy. Two different decomposition approaches are used, one product based and one time based. Real world problems can be solved using both decomposition schemes. The solution time also depends on the choice of objective function.
2 Problem Definition
The paper converting process considered in the example is the process of printing and laminating. The machine park consists of 18 machines, divided into printing machines, laminators and cutters. There are six of each type of machine, but all machines have different characteristics. Four of the cutters are physically attached to a corresponding laminator, and these machine pairs can be modelled as just one machine. All set-up times are sequence dependent. Each product needs one to seven production steps; the average is three steps, one of each operation. For each product there is a predetermined production plan regarding which machines to use, limiting the problem to a strict scheduling problem without task assignment. A schematic overview of the factory is shown in Figure 1. Material flow is possible between any units, with the exception of the cutters and laminators 1 to 4, which are physically connected to each other.
Figure 1. Schematic layout of the factory considered.
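Before any mathematical formulation, the problem data can be held in a small structure: each product carries a fixed machine route of one to seven steps, and each machine a table of sequence-dependent set-ups. The sketch below is one possible representation, assuming Python dataclasses; all names and values are illustrative, not taken from the mill.

```python
# Illustrative data structures for the scheduling problem described above.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    route: list[str]              # machines in processing order, e.g. printer -> laminator+cutter
    proc_time: dict[str, float]   # processing time on each machine of the route
    due_date: float = 0.0
    weight: float = 1.0           # tardiness weight for this order

@dataclass
class Machine:
    name: str
    # sequence-dependent set-up time when job b directly follows job a
    setup: dict[tuple[str, str], float] = field(default_factory=dict)

job = Job("order-17", ["P2", "L3C3"], {"P2": 1.5, "L3C3": 2.0}, due_date=8.0)
print(job)
```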
3 Basic Formulations
3.1 Allocation constraints
The problem is formulated as a before-after problem with a continuous time representation: a job on a machine is done either before or after another one. Mathematically this can be expressed as the disjunction

t_{i,m} ≥ t_{j,m} + p_{j,m}  ∨  t_{j,m} ≥ t_{i,m} + p_{i,m}    (1)

where i and j are jobs, t is the time a job starts, p denotes the processing time for the job including set-up times, and m is the machine. The disjunctive formulation must be rewritten in order to be solvable with existing optimization tools. This process can also be automated, Bjorkqvist and Westerlund (1999). The formulation can be rewritten using a Big-M formulation:

t_{i,m} − t_{j,m} + M·y_{i,j} ≤ M − p_{i,m}
t_{j,m} − t_{i,m} − M·y_{i,j} ≤ −p_{j,m}      ∀ (i,j,m), i ≠ j    (2)

where y_{i,j} = 1 if job i precedes job j, and 0 otherwise. The relaxation parameter M must be chosen large enough; in the case of minimization of total makespan, the known upper bound can be used successfully. Apart from the machine-wise expressions, there must be expressions for how a product proceeds through its production steps:

t_{i,m+1} ≥ t_{i,m} + p_{i,m} ,  ∀ i    (3)
3.2 Objective function
One of the most frequent objective functions used in the literature is the total completion time. This objective function can, however, be of little interest in actual industrial applications. In order-driven industries, like the paper converting industry, nothing is manufactured before an order is placed. A suitable objective function for order-driven industries is therefore preferably based on the tardiness of the products ordered. Different customers may then have different weights on the tardiness for each product. The use of tardiness also has another advantage when it comes to solving times. Production plans created with total completion time as objective function have the benefit of being tight plans, with all excess capacity at the end of the schedule. In order-driven businesses these features cannot be fully exploited, and the use of total completion time alone as objective function is thus perhaps not the best solution. The total completion time is denoted by C_max, and the objective function is simply: minimize C_max. Mathematically, C_max is expressed as:

C_max ≥ t_i + p_i ,  ∀ i ∈ I    (4)

where I is the set of all products in the production plan. In the case of tardiness every product has its own due date, and this makes the products more distinguishable. The search space is more limited, which shows in the solution time. The objective is then:

minimize  Σ_{i∈I} w_i^T · T_i    (5)

where T_i is the tardiness for job i, and w_i^T is the weight on the tardiness for the same job. If the total tardiness is zero, then the solution is known to be optimal. Schedules whose optimal solutions have only a few late products are easier to solve than tight schedules with more products being late in the globally optimal solution.
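A minimal instance of formulations (2) and (5) can be assembled with any MILP modelling layer. The sketch below uses PuLP with its bundled CBC solver, which is an assumption of this illustration, on a three-job single-machine example with invented processing times, due dates and weights.

```python
# Big-M precedence constraints, Eq. (2), with weighted tardiness, Eq. (5),
# for one machine; job data are invented for illustration.
import pulp

p = {1: 3.0, 2: 2.0, 3: 4.0}     # processing times incl. set-ups
d = {1: 4.0, 2: 9.0, 3: 6.0}     # due dates
w = {1: 1.0, 2: 1.0, 3: 2.0}     # tardiness weights
jobs = list(p)
M = sum(p.values())              # a valid big-M: no job needs to start later than this

prob = pulp.LpProblem("weighted_tardiness", pulp.LpMinimize)
t = {i: pulp.LpVariable(f"t_{i}", lowBound=0) for i in jobs}
T = {i: pulp.LpVariable(f"T_{i}", lowBound=0) for i in jobs}
y = {(i, j): pulp.LpVariable(f"y_{i}_{j}", cat="Binary")
     for i in jobs for j in jobs if i < j}

for i, j in y:                   # Eq. (2): either i precedes j or vice versa
    prob += t[i] - t[j] + M * y[i, j] <= M - p[i]
    prob += t[j] - t[i] - M * y[i, j] <= -p[j]
for i in jobs:                   # tardiness definition: T_i >= C_i - d_i, T_i >= 0
    prob += T[i] >= t[i] + p[i] - d[i]

prob += pulp.lpSum(w[i] * T[i] for i in jobs)     # objective, Eq. (5)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: (t[i].value(), T[i].value()) for i in jobs})
```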
4 Decomposition Strategies
Two decomposition methods have been compared, one product based and one time based. Roslof et al. (2001) developed an algorithm for decomposing large problems for a single machine. The product based decomposition presented here is mainly based on what Roslof et al. presented. A feasible schedule for the whole set of products to be produced is built up starting with a few products, and then adding more until the whole set is scheduled. With a feasible solution containing the whole set of products, better solutions can be obtained by releasing and rescheduling a few products at a time. The complexity of
each updating procedure is greatly reduced compared to the complexity of scheduling the whole problem at once. The solution cannot be guaranteed to reach the global optimum, but a good sub-optimal solution is usually enough for practical industrial use. With increased computational power the number of jobs released in each iteration can be increased. This is a great advantage over heuristic methods, which usually do not benefit from increased computational power. The other decomposition strategy is based on time scope. In this case we observe that the order stock changes constantly as new orders arrive and old ones are completed.
4.1 Product based decomposition
The product based decomposition works in two stages, a build-up and a rescheduling stage. In the build-up stage there is an initial schedule containing only a few products. A few products at a time are added to the initial schedule, and the set is rescheduled with the earlier set preserving its internal order. The procedure is repeated until all products are scheduled.
Figure 2. Illustration of the build-up process: new products are inserted into the product sequence from the previous schedule.
The idea behind the build-up procedure is illustrated in Figure 2. As shown in Roslof et al. (2001), the order in which the products are inserted has an impact on the quality of the schedule obtained. It is therefore useful to apply the same reordering, or post-processing, strategy as proposed in the same paper. When all products are inserted, some are released and the system is rescheduled. This improves the quality of the solution, as the elements already scheduled cannot change place in the sequence during the build-up. The results from tests with actual production data are shown in Figure 3, for the total makespan, and in Figure 4, for total tardiness. In both cases two different strategies using the product-based decomposition were compared to solving a single problem, marked as "MILP" in the figures. In the decomposition runs, products were added two at a time in one run and four at a time in the other. It shows that adding fewer products at each stage improves computational effectiveness. In the case of the makespans there were no deviations in the objective function between the strategies. With total tardiness there was a trend of decreasing solution quality with increasing computational effectiveness. All tests were made using CPLEX 7.0 on a 650 MHz Athlon processor with 256 MB RAM on a Linux platform. Default settings were used in all cases, CPLEX (2000). The solutions obtained using direct MILP formulations look attractive, but the CPU time required to reach them is beyond practical daily use.
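The build-up stage can be illustrated on a toy single-machine case. The sketch below simplifies the actual procedure: instead of a MILP reschedule of each enlarged set, each new job is tried in every insertion position while the already scheduled jobs keep their internal order, which preserves the key idea at a fraction of the machinery; all data are invented.

```python
# Toy build-up: jobs are added batch-wise and inserted at the position that
# minimizes weighted tardiness, with earlier jobs keeping their relative order.
def weighted_tardiness(seq, p, d, w):
    t, cost = 0.0, 0.0
    for i in seq:
        t += p[i]
        cost += w[i] * max(0.0, t - d[i])
    return cost

def build_up(order, p, d, w, batch=2):
    seq = []
    for k in range(0, len(order), batch):
        for job in order[k:k + batch]:
            best = min(range(len(seq) + 1),
                       key=lambda pos: weighted_tardiness(
                           seq[:pos] + [job] + seq[pos:], p, d, w))
            seq.insert(best, job)
    return seq

p = {1: 3, 2: 2, 3: 4, 4: 1}
d = {1: 4, 2: 9, 3: 6, 4: 2}
w = {1: 1, 2: 1, 3: 2, 4: 3}
seq = build_up([1, 2, 3, 4], p, d, w)
print(seq, weighted_tardiness(seq, p, d, w))
```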
Figure 3. CPU-times for total makespan.
Figure 4. CPU-times for total tardiness.
Table 1. Objective value when using total tardiness as objective function.

           2-added            4-added            direct MILP
Products   CPU-s     Obj      CPU-s     Obj      CPU-s            Obj
6          0.58      0        0.58      0        0.5              0
10         0.88      3759     0.8       3759     41.14            3664
14         1.17      5250     2.21      5250     350              4997
18         2.34      7667     16.51     7576     (40% int gap)    7344
22         4.79      24045    35.09     23954    (61% int gap)    >9500
26         8.61      39248    55.16     39158    na               na
30         18.63     60880    436.85    60772    na               -
4.2 Time based decomposition
For production it is enough to have a good schedule that ranges a few days into the future. For the sales department it is good practice to have some sort of schedule of existing orders when taking new ones. The proposed time based decomposition strategy takes both production and sales aspects into account. This decomposition strategy preferably uses the product based decomposition as its base in a two-step iterative process with two different objective functions. When solving scheduling problems formulated as MILP problems, feasible solutions are usually found at an early stage. These solutions are in many cases good ones, but their quality is hard to determine due to large integrality gaps. The decomposition strategy is to build up a feasible schedule containing all known orders. This large set is solved only until a few feasible solutions have been found, due to time limitations. The second stage is to reschedule a smaller set of products taken from the beginning of the big schedule. The smaller set is solved to optimality, and the answer is then returned to the large problem in the form of additional constraints. The larger problem may then be resolved. In the smaller set there are also constraints from earlier runs in the form of the detailed schedule obtained earlier. It is worth noting that the objective function for the smaller set must contain a minimization of the makespan. If not, the system may perform worse than when using only the whole set of orders alone. In its simplest form the sequence may be to minimize tardiness for the whole set of products, and then to minimize makespan for the smaller set. Solving times are determined by the methods used for the different phases.
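The two-step iteration can be sketched with deliberately simple stand-ins for the two optimization phases: an earliest-due-date rule plays the role of the quick full-stock tardiness schedule, and brute-force enumeration plays the role of the exact makespan re-optimization of the small front set, whose result is then fixed. Job and set-up data are invented.

```python
# Skeleton of the time-based decomposition; the 'solvers' here are stand-ins,
# not the MILP models used in the paper.
from itertools import permutations

def span(seq, p, s, prev=None, t0=0.0):
    """Completion time of seq with sequence-dependent set-ups s[(a, b)]."""
    t = t0
    for i in seq:
        t += s.get((prev, i), 0.0) + p[i]
        prev = i
    return t

p = {1: 3, 2: 2, 3: 4, 4: 1, 5: 2}
d = {1: 4, 2: 9, 3: 6, 4: 2, 5: 7}
s = {(1, 3): 1.0, (3, 1): 0.2, (2, 5): 0.5}          # illustrative set-up times

remaining, fixed, t_end, last = set(p), [], 0.0, None
while remaining:
    plan = sorted(remaining, key=lambda i: d[i])      # step 1: quick whole-stock plan (EDD)
    front = plan[:3]                                  # step 2: small operational window
    best = min(permutations(front),
               key=lambda q: span(q, p, s, last, t_end))  # exact makespan re-optimization
    for i in best:                                    # fix the result as constraints
        t_end = span([i], p, s, last, t_end)
        last = i
        fixed.append(i)
        remaining.remove(i)
print(fixed, t_end)
```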
5 Conclusions
Both decomposition schemes are usable in practical applications. Both have been used successfully to schedule real life situations with more than one hundred orders. The time based model provides a good short term daily schedule, as well as a good overview of the order situation, with acceptable solving times. Both decomposition methods suffer from reduced performance with a growing number of orders, but there is a difference depending on the objective function used. When using total makespan as objective function the problem quickly becomes unsolvable without decomposition methods. When using tardiness as objective function the problem is no longer as size dependent as it is with total completion time. Instead there is a clear indication that the solution time is more related to the tardiness itself: creating a tight schedule takes longer than creating a loose one. In some instances, involving large numbers of orders, it may be necessary to use both methods together. If the smaller set of products for the operational plan in the time-based decomposition method cannot be solved in one run, the product based decomposition can be applied with success. The solutions provided may be suboptimal, but unlike other heuristic methods it is possible to improve quality with increased computational power.
6 References
Abadi, I., 1995, Flowshop scheduling problems with no-wait and blocking environments: A mathematical programming approach, Ph.D. thesis, University of Toronto.
Bjorkqvist, J. and Westerlund, T., 1999, Automated reformulation of disjunctive constraints in MINLP optimization, Computers and Chemical Engineering Supplement.
Franca, P., Gendreau, M., Laporte, G. and Muller, F., 1996, A tabu search heuristic for the multiprocessor scheduling problem with sequence dependent setup times, International Journal of Production Economics, 43, pp. 79-89.
Roslof, J., Harjunkoski, I., Bjorkqvist, J., Karlsson, S. and Westerlund, T., 2001, An MILP-based reordering algorithm for complex industrial scheduling and rescheduling, Computers and Chemical Engineering, 25, pp. 821-828.
CPLEX 7.0 Reference Manual, ILOG Inc., 2000.
7 Acknowledgements
Financial support from the European Union project Vip-Net (G1RD-CT2000-00318) is gratefully acknowledged.
Utilization of Dynamic Simulation at Tembec Specialty Cellulose Mill
Mohamad Masudy
Tembec Inc., CP 3000, Temiscaming, Quebec, Canada J0Z 3R0, email: [email protected]
Abstract
A high-fidelity dynamic simulator has been developed for the Tembec Specialty Cellulose mill in Temiscaming in order to facilitate evaluation of mill upgrade alternatives. Areas of interest were primarily the screen room and the oxygen delignification stage, where it was desired to acquire an understanding of how the proposed upgrades would affect the overall process. Utilizing simulation early in the process design stage has delivered substantial benefits, including tighter design and smoother startup, while shortening the design life cycle. Since the startup of the new mill, the simulator has evolved into a full mill dynamic simulator, which has been interfaced to the mill data historian PI. This approach has made it possible to evaluate several different process and operational modifications based on real process conditions. The model has been utilized for several applications including the mill energy balance, effluent flows and the mill water balance. Incorporating the grade specifications in the simulator has been instrumental in verifying the process dynamics during a grade transition. The existing control strategy along with the interlocks has also been implemented in order to facilitate the study of control and operational strategies and to identify process and equipment limitations.
1. Introduction
The Tembec Specialty Cellulose mill in Temiscaming is part of the company's sulfite pulp group operations in Canada. In 1998, to secure its long-term productivity and quality goals, the mill decided to evaluate several alternatives for modernizing the screen room and the bleachery. The project goals were to:
• Reduce manufacturing costs by employing cost effective and reliable operation
• Increase mill throughput to protect and increase Specialty market share
• Meet the environmental and quality requirements of the Specialty pulp market
Early in the process design, the mill decided to utilize simulation to outline and evaluate several mill upgrade alternatives. This approach made it possible to test hypotheses and assess different alternatives at small cost, and helped with de-bottlenecking the process before the actual systems were built. Steady state simulation was utilized to provide the required mass and energy balance, while dynamic simulation has later been used to verify design parameters such as equipment sizing.
Gradually the steady state model has been enhanced with dynamic capabilities and connected to the mill data historian PI. The model has then been validated against actual mill data, which has made it possible to resolve discrepancies in the model. The actual control strategy has also been modeled in order to facilitate analysis of grade and production rate changes. Some of the applications are minimizing overflows and water usage, and prediction of pulp quality parameters.
2. Process Description
The cooked chips from 11 batch digesters are blown into a holding vessel called the blow tank. The pulp is then processed in the coarse screening stage to remove the knots from the pulp stream. The accepts from this stage are stored in another vessel used to feed the fine screening stage, which removes the remaining oversize material. The rejects, if not removed, would cause problems in the downstream process and in the final product, requiring larger amounts of bleaching chemicals. In the next washing stage, the dissolved solids, i.e. lignin, resin, red liquor, acids, etc., are removed from the pulp via a counter-current washing strategy by addition of hot shower water and by utilizing vacuum-reinforced gravitational drainage. The washing equipment utilized is a Chemi-washer, which borrows its design and drainage principles from the Fourdrinier-type wet end of a paper machine. The concentrated filtrate is used for the dilution demand upstream and the excess is sent to the evaporators. Unlike the previous stages, the oxygen delignification stage after the Chemi-washer aims at removing the remaining solids by chemical additions rather than mechanical separation. In the oxygen tower, the remaining lignin is oxidized. Further downstream, in the extraction tower, the lignin extraction takes place. A press stage is also installed between these two stages. After the extraction tower, the pulp is washed in three-stage counter-current drum washers. The excess filtrate from this stage is cooled and sent to the effluent treatment plant. Further downstream, there are several bleaching stages, each followed by a washing stage. After the bleaching plant, the pulp is finally dried on two drying machines, which operate very much like other paper machines. Figures 1 and 2 illustrate the schematic overview of the screen room and the oxygen delignification bleaching areas.
3. The Steady-State Model
The goal for developing the steady state simulation was to create a process model with a sufficient degree of detail so that different process designs could be evaluated. The model also incorporated the equipment and process constraints. The data used for the existing equipment were based on actual mill data, while models for several new pieces of equipment were based on data from the suppliers. The design alternatives were then simulated and evaluated one by one. The model was further enhanced with some dynamic capabilities in order to see the impact of equipment sizing on overall performance. This concerns primarily the storage capacities in the counter-current flow of pulp and liquor in each stage. The model has also been extended to account for dissolved solids carryover and evaporator efficiency, which has served as a tool for specifying equipment efficiencies in negotiations with equipment suppliers. This
approach made the results quantifiable for management, enabling approval of the mill upgrade project in 2000. Since then the proposed design has been implemented without significant startup or operational problems. Other areas where this model has been used are process modifications to reduce water demand and improve the heat recovery system. This model was created using the WinGEMS simulation software by Pacific Simulation.
Figure 1: Screen room area.
Figure 2: Oxygen delignification bleaching area.
4. The Dynamic Model
Steady state process simulation has traditionally been used as a tool for preliminary process design assessment in the pulp and paper industry. But due to the complexity and interaction between different design parameters and equipment constraints, the traditional methodology often provides incomplete and sometimes misleading information. To be able to accurately assess the performance and suitability of alternative process designs, dynamic simulation is indispensable in dealing with complex engineering systems. Furthermore, by taking the equipment and process constraints into consideration, process bottlenecks can be identified early in the design stage. The results can then be used to estimate the capital costs and potential paybacks of different design and operational alternatives. In order to facilitate this task, the steady-state model was converted to a full dynamic model, which was further extended to a mill-wide model covering the digesters through to the drying machines and the steam plant. Much of the work was devoted to ensuring that all necessary and relevant process streams and equipment were accounted for and correct. The dynamic simulation software utilized here was CADSIM Plus by Aurel Systems.
5. Model Validation
The original steady state model was based on design data. After the mill upgrade was completed, the dynamic model was validated against actual mill data acquired from the mill data historian. In order to be able to run the model at other than normal operating conditions, the model had to be made robust enough to handle extreme conditions such as startups and shutdowns. Therefore the control strategy and DCS logic had to be implemented. Furthermore, a DDE link between the mill data historian PI and the dynamic model was established. This approach has made it possible to verify the simulated control strategy and tuning parameters against actual process conditions.
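An offline check of this kind might look as follows, assuming trends from the historian and from the simulator have been exported to CSV files; the file names and tag names are hypothetical placeholders, and the DDE transport itself is not shown.

```python
# Sketch: align historian and simulation trends on a common grid and report
# bias and RMSE per tag. Files and tag names are hypothetical.
import numpy as np
import pandas as pd

mill = pd.read_csv("pi_export.csv", parse_dates=["time"], index_col="time")
sim = pd.read_csv("sim_run.csv", parse_dates=["time"], index_col="time")

grid = mill.resample("1min").mean().interpolate()
simg = sim.resample("1min").mean().interpolate().reindex(grid.index)

for tag in ["o2_tower_temp", "blow_line_consistency"]:   # hypothetical tags
    err = simg[tag] - grid[tag]
    print(f"{tag}: bias={err.mean():+.3f}  rmse={np.sqrt((err ** 2).mean()):.3f}")
```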
6. Decision Support
Utilizing dynamic process simulation as a tool for decision support in the process industry has both short- and long-term advantages. Once the new upgrades were commissioned, the mill decided to utilize the model to solve specific problems and optimization issues. To achieve this, operational costs were added to the mill simulation model to enable some degree of manual optimization. A credible decision support system requires real process data, which in this case was achieved by connecting the simulation model to the PI data historian through a DDE link. This approach is intended primarily for process engineers to study operational or design modifications in the process based on offline process data. The approach is well suited to evaluating what-if scenarios in order to understand how changes in the operational parameters influence the process outputs. The long-term benefit is enhanced process understanding. Furthermore, it provides a tool with which actual process scenarios can be played back to evaluate past performance, quantify the cost of operation and predict future benefits. Figure 3 illustrates a snapshot of this model from the actual simulation.
Figure 3: Virtual bleaching area control room.
There is work underway to use this as an online tool, where the simulation can run along with the actual process to predict the future process behavior based on the operating conditions.
7. Stochastic and Deterministic Models
Tembec also employs multivariate statistical analysis and modeling tools in areas where first-principles models are not readily available. By combining these statistical models of the quality parameters, built from both laboratory and online measurements, with the simulation model, we have been able to predict pulp quality in the presence of variable retention time in the process. This has been used to predict finished product quality parameters such as DCM resin and S18. The first-principles dynamic model is primarily used to compensate for variable retention time, while statistical models are used to model scarce lab measurements with fixed lag times in each bleaching stage. SIMCA by Umetrics has been used to extract the statistical models. Figure 4 illustrates pulp quality prediction results for finished pulp viscosity.
Figure 4: Observed versus predicted pulp viscosity.
8. Future Enhancements
In some other similar projects we have been using steady-state optimizers along with the simulation to minimize e.g. dissolved solids carryover and energy demand in the evaporation plant. Although this approach does not constitute a real-time optimizer, in many cases it can be used under normal operating conditions. The analysis of more complex systems can be accomplished utilizing statistical methods. Another potential benefit of this approach is to utilize Design of Experiments methodology to run trials on the simulated process. This will facilitate the study of cause and effect and the investigation of dependencies in the process.
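A trial plan of this kind can be generated mechanically. The sketch below enumerates a two-level full-factorial design and feeds it to a stand-in response function where the actual simulator case would be run; factor names, levels and the response itself are invented for illustration.

```python
# Two-level full-factorial trial plan driven against a simulator stand-in.
from itertools import product

factors = {                                  # illustrative factors and levels
    "shower_flow": (50.0, 70.0),             # kg/s
    "o2_charge": (18.0, 22.0),               # kg/adt
    "extraction_temp": (85.0, 95.0),         # degC
}

def run_simulation(settings):
    # Placeholder for a real simulator run; returns a fabricated response.
    return (0.02 * settings["shower_flow"]
            - 0.10 * settings["o2_charge"]
            + 0.01 * settings["extraction_temp"])

for levels in product(*factors.values()):
    settings = dict(zip(factors, levels))
    print(settings, "->", round(run_simulation(settings), 3))
```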
9. Conclusions
Process simulation can provide significant value-in-use in process design. A detailed dynamic simulation can be used throughout the project lifecycle and later serve as a tool for process optimization. This can result in substantial benefits including tighter design, a reduced project lifecycle, smoother startups and optimized production. The close match of the simulation to the real process gives confidence that the effect of modifications in process design and operation can be accurately predicted. Utilizing actual mill data has been valuable in identifying the feasible operational space and actual disturbances. This has made it possible to study design and operational modifications based on actual process constraints and conditions. Statistical modeling techniques can also complement traditional modeling, particularly where first-principles models are not readily available. This has also reduced the degree of complexity of the simulation.
Synthesis of Heat Recovery Systems in Paper Machines with Varying Design Parameters
Frank Pettersson and Jarmo Soderman
Heat Engineering Laboratory, Abo Akademi University, Biskopsgatan 8, FIN-20500 Abo, Finland
Abstract
The heat recovery system (HRS) is a vital part of a paper machine when it comes to the overall energy economy of papermaking. For a typical newsprint machine more than 60% of the exhaust energy from the dryer section can be recovered, corresponding to a recovery of about 30 MW. The synthesis of a HRS is a decision process where the target is on the one hand to achieve maximal energy recovery and on the other hand to obtain this recovery with minimal investment costs. These goals are contradictory, and thus the problem is to find a solution minimizing the overall costs, considering both energy and investment costs simultaneously. This synthesis task can be performed with e.g. pinch analysis or optimization methods. One of the first tasks for the designer is to decide which design parameters, including process flow streams, temperatures and heat transfer coefficients, are to be applied. This task is in general not trivial and the result will have a great impact on the overall economy of the final HRS. One challenge is how to take into account uncertainties and known variations in some parameters. The desired design must be capable of handling all evolving situations, but it should also be the most economical one when considering the duration of the different operational situations. In this work the importance of taking the variations and uncertainties into account in the design stage is shown with a case study.
1. Introduction
Designing a HRS requires decisions about which matches between the streams should be implemented as well as the sizes of the heat exchangers. Formulating the design task as an optimization problem results in a mixed integer non-linear programming (MINLP) problem. The structure, i.e. the decision on how to connect the heat exchangers, is defined with binary variables, while the areas and temperatures are defined with real-valued variables. Most numerical methods for the solution of optimal heat exchanger networks (HEN) decompose the problem in order to simplify the solution procedure. The problem can e.g. be decomposed into three subproblems: a linear programming (LP) problem, a mixed integer linear (MILP) problem and a non-linear programming (NLP) problem, as presented in e.g. Floudas et al. (1986). This decomposition technique is based on the same principles as pinch analysis (Umeda et al. 1978, Linnhoff et al. 1983), where the energy costs are considered the most dominating and thus the minimum utility consumption is of first priority. It is, however, obvious that this is not always a correct assumption. A method solving the whole HEN
design problem simultaneously has been presented by e.g. Yee and Grossmann (1990). One major problem with the simultaneous methods is that they are often restricted when it comes to the number of streams and the complexity of the models, due to the computational work. The HRS design problem has been solved as an MINLP problem by Soderman et al. (1999) by discretization into temperature intervals. For all possible matches between these temperature intervals, areas and transferred heat can be evaluated in advance. The problem can thus be stated with linear constraints and a non-linear objective function, due to the economy of scale. Evolutionary programming methods are general stochastic techniques that can deal with large problems without being restricted by non-convexities and discontinuities in the models. For stochastic techniques the solution cannot be guaranteed to be the global optimum. On the other hand, the techniques work with a large set of possible solution candidates and are thus assumed to have good properties in screening the most promising regions of the feasible search space for multimodal problems. The design of HEN has been addressed with different hybrid methods in e.g. Athier et al. (1997), where simulated annealing together with NLP was used, and Lewin (1998), who solved the problem with a genetic algorithm and an NLP subproblem. Heat exchanger networks are usually designed assuming fixed design parameters representing nominal operating conditions. However, the operating conditions may change considerably over time. Changes may be imposed by varying production loads, raw materials used or products produced. The changes can also be a result of normal degradation of the equipment, e.g. fouling of the heat transfer surfaces. Some of the changes are known while others can only be estimated. Considering these changes in the design stage may be of utmost importance so that designs with poor overall economy can be avoided. Modeling design uncertainties in a mathematical framework was initiated by Halemane and Grossmann (1983). An approach to deal with uncertainties, described as probability distributions, is to use multi-period models. With a large number of uncertain parameters the problem size will, however, become extensive. This is especially true when high accuracy is desired and many periods have to be used. A hybrid method based on evolutionary programming and non-linear programming for HRS design problems under uncertainties has been presented by Pettersson and Soderman (2002).
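In such a discretized formulation, the duty and area of a single candidate match follow directly from the interval temperatures, so they can indeed be tabulated in advance. The sketch below shows the standard counter-current calculation, A = Q/(U·LMTD); the stream data and the U value are illustrative, not taken from the case study.

```python
# Pre-computing duty and area for one hot/cold temperature-interval match.
import math

def match_area(Th_in, Th_out, Tc_in, Tc_out, mcp_hot, U):
    """Counter-current duty Q (kW) and area A = Q / (U * LMTD) (m2).
    Assumes a feasible temperature approach at both ends."""
    Q = mcp_hot * (Th_in - Th_out)          # heat released by the hot interval
    dT1, dT2 = Th_in - Tc_out, Th_out - Tc_in
    lmtd = (dT1 - dT2) / math.log(dT1 / dT2) if dT1 != dT2 else dT1
    return Q, Q / (U * lmtd)

Q, A = match_area(80.0, 60.0, 20.0, 45.0, mcp_hot=120.0, U=0.05)   # U in kW/m2K
print(f"Q = {Q:.0f} kW, A = {A:.0f} m2")
```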
2. Paper Machine
A modern paper machine may have a yearly production of 350,000 tons of paper and a machine speed of up to 2000 m/min. The papermaking process is highly energy consuming: manufacturing one ton of newsprint requires about 600 kWh of electricity and 1500 kWh of heat energy. Almost all the heat energy is used in the drying section. Drying of the paper web from about 35-50% dry content to about 90% dry content is performed in the dryer section of the paper machine, where cylinders are heated with low pressure steam. For removal of moisture, supply air at 95°C is blown into the paper
hood, resulting in an exhaust air from the hood with a moisture content of about 150 g H2O/kg dry air at a temperature of 80°C. Almost all heat energy used in the process can be found in the exhaust air, which is thus very suitable for heat recovery. The main target of heat recovery is naturally considerable cost savings. The recovered heat can be used to heat a variety of streams: it is transferred to e.g. the circulation water heating the machine room ventilation air, to process water, and to the heating of supply air to the dryer section. This can be seen from Fig. 1, where the main streams are depicted. In the figure no decisions about the matches in the heat recovery section are yet indicated. The moist exhaust air from the hood is removed at three different locations.
Fig. 1. The HRS design problem with no matches indicated in the HRS section.
To decide the configuration of the HRS we thus have to consider three hot streams and three cold streams, which is quite a small number of streams compared to general HEN synthesis problems. The condensation taking place in the heat exchangers makes the task more complicated, since both the heat capacities and the overall heat transfer coefficients (Fig. 2) vary strongly.
Fig. 2. Heat transfer coefficients (kW/m²K) for moist air (135 g H2O/kg dry air) to process water (left) and supply air (right) for different combinations of temperatures.
3. Case Study
In this study two HRS networks are obtained with the hybrid method presented in Pettersson and Soderman (2002). The first one is obtained when nominal operational parameters are considered, and the second when estimated variations in some parameters are also observed. The design task is the one depicted in Fig. 1, with the following objective function for evaluation of the quality of a possible solution:

min I = K·(k1·n_aa + k2·A_aa^k3 + k4·n_aw + k5·A_aw^k6) + k7·Q_HU    (1)

K is the annualizing factor and A_aa is the total heat transfer area for the air-air heat transfers, given in 1000 m². n_aa indicates the number of different air-air matches; thus k1 is a fixed cost for each new match, k2 an area cost factor and k3 a constant with values typically between 0.6 and 1. In a similar manner the investment costs for the air-water matches are obtained using the constants k4, k5 and k6. Q_HU is the amount of hot utility needed for all cold streams to be heated to the desired temperature levels after the heat recovery section, and k7 is an annual hot utility cost factor. All heat exchangers are considered to be of counter-current type. Costs for cooling are not included because the exhaust air is simply released outdoors in a paper machine. Investment costs for heat exchangers for external heating are also excluded from the problem because they are assumed to be needed in startup situations and are therefore included in any final design. The investment costs for these are also proportionally low, due to the greater temperature differences. The factors used in the objective function are: k1=0.02, k2=0.24, k3=0.6, k4=0.04, k5=0.22 and k6=0.8, when the unit of investment costs is M€. The factor K=0.16 has been used to annualize the investment costs, corresponding to a 10% interest rate and a 10 year depreciation time. The hot utility cost is 16 €/MWh. When searching for the point-optimal design, the moisture content of the exhaust air is fixed at the most probable operational condition, x=135 g H2O/kg dry air. The flows of process water, supply air and circulation water are fixed at 70, 70 and 200 kg/s respectively, and for each of the three locations for removal of moist air the flow is 30 kg/s. With these parameters, the obtained point-optimal structure and the heat exchanger areas are indicated in Fig. 3.
Fig. 3. Point optimal design.
Fig. 4. Flexible design obtained considering variations.
Several parameters are, however, known to vary during operation, e.g. the moisture content of exhaust air from the dryer section, the mass flows of process water and circulation water, most initial temperatures, and the heat transfer coefficients. The variations do not occur due to unsuccessful process control, but due to changing manufactured paper qualities or changing outdoor or fresh water temperatures. In this work, the variations in moisture content of exhaust air and in the flow of process water have been considered important and expected to have an impact on the solution. The variations in moisture content and process water flow are estimated using normal distribution functions. The objective function is thus reformulated so that a weighted mean over the different possible operational points can be evaluated by
E[I] = ∫∫ (1/(√(2π)·s_x))·exp(−(x−μ_x)²/(2s_x²)) · (1/(√(2π)·s_F))·exp(−(F−μ_F)²/(2s_F²)) · (a + b·x + c·x² + d·F + e·F² + f·x·F) dx dF    (2)

incorporating the expected mean values, μ_i, and variances, s_i. The parameters a,...,f in the approximating function are obtained by using e.g. a linear least squares formulation on a number of evaluations at different operational points. The assumed variations in this case are thus described by a mean moisture content of 135 g H2O/kg dry air with a variance of 20 g H2O/kg dry air. Similarly for the flow of process water, a mean flow of 70 kg/s and a variance of 5 kg/s have been estimated to describe the evolving operational points best. Considering these variations, the resulting design can be seen in Fig. 4. Comparing the two obtained designs at the basic operation point (x=135 and F=70), the annual costs are 0.634 M€ for the point-optimal design and 0.697 M€ for the flexible one. Thus, the point-optimal design is 9% cheaper. On the other hand, when taking into account the expected variations, the annual overall costs are 0.806 M€ for the point-optimal design and 0.761 M€ for the flexible design. Now, it can be observed that the point-optimal solution is 6% more expensive than the flexible one. The differences between the costs for the point-optimal design and the flexible design are illustrated in Fig. 5. The negative values in Fig. 5 indicate that the point-optimal solution is cheaper at these operational conditions.
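Because the approximating surface in Eq. (2) is quadratic and the two parameters are treated as independent normal variables, the double integral collapses to a closed form, verified below by Monte Carlo sampling. The coefficients a,...,f are invented for illustration, since the fitted values are not reported here.

```python
# Expected cost under Eq. (2): closed form for a quadratic surface with
# independent normal x and F, checked by Monte Carlo. Coefficients are assumed.
import numpy as np

a, b, c, d, e, f = 1.2, -4e-3, 2e-5, -6e-3, 8e-5, -1e-6
mx, sx = 135.0, 20.0         # moisture content, g H2O/kg dry air
mF, sF = 70.0, 5.0           # process water flow, kg/s

# E[x] = mu and E[x^2] = mu^2 + s^2; the cross term factorizes by independence.
expected = (a + b * mx + c * (mx ** 2 + sx ** 2)
            + d * mF + e * (mF ** 2 + sF ** 2) + f * mx * mF)

rng = np.random.default_rng(1)
x = rng.normal(mx, sx, 200_000)
F = rng.normal(mF, sF, 200_000)
mc = np.mean(a + b * x + c * x ** 2 + d * F + e * F ** 2 + f * x * F)
print(f"closed form: {expected:.4f}   Monte Carlo: {mc:.4f}")
```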
Fig. 5. Differences between costs (M€/a) for the point-optimal and the flexible design.
One may argue that selecting the most commonly occurring operational condition is not completely fair, and that a more demanding operational point should be selected. With parameter values fixed at x=120 and F=75, a new point-optimal design, shown in Fig. 6, can be obtained, resulting in an annual cost of 0.792 M€ when the whole range of operational points is considered. It can be noted that this design is still 4% more expensive than the flexible one. The differences between the costs for the point-optimal design and the flexible design are illustrated in Fig. 7.
Fig. 6. The new point optimal design.
Fig. 7. Differences between costs (M€/a) for the new point-optimal and the flexible design.
4. Conclusions
The importance of considering varying and uncertain design parameters in the design of HRS in paper machines has been shown with a case study. The differences in annual costs indicate that efforts to obtain flexible solutions are justified. It is clear that further work is needed both on formulating flexible design tasks as optimization problems and on developing efficient solution methods for them. A hybrid method based on evolutionary programming and NLP has been successfully used in this case study, although no guarantees of global optimality for the obtained designs can be given.
5. References
Athier, G., Floquet, P., Pibouleau, L. and Domenech, S., 1997, AIChE J. 43, 11, 3007.
Floudas, C.A., Ciric, A.R. and Grossmann, I.E., 1986, AIChE J. 32, 2, 276.
Halemane, K.P. and Grossmann, I.E., 1983, AIChE J. 29, 3, 425-433.
Lewin, D.R., 1998, Comp. Chem. Engng., 22, 10, 1387.
Linnhoff, B. and Hindmarsh, E., 1983, Chem. Eng. Sci. 38, 745-763.
Pettersson, F. and Soderman, J., 2002, 15th International Congress of Chemical and Process Engineering, CHISA 2002, Prague, I 3.5.
Soderman, J., Westerlund, T. and Pettersson, F., 1999, 2nd Conference on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction, Budapest, Hungary, Proceedings, 607-612.
Umeda, T., Harada, T. and Shiroko, K., 1979, Comp. Chem. Engng. 3, 273-282.
Yee, T.F. and Grossmann, I.E., 1990, Comp. Chem. Engng., 14, 1165.
Smart Enterprise for Pulp and Paper: Digester Modeling and Validation
P. A. Rolandi*, J. A. Romagnoli
Centre for Process Systems Engineering, Department of Chemical Engineering, The University of Sydney, Sydney, NSW, 2006, Australia
Abstract
This paper discusses the mathematical modeling, dynamic simulation and validation of the entire digester area at the Tumut Mill: the Lo-Level™ feed line, the Lo-Level™ heat exchange and recovery system, and the digester itself - a Lo-Solids™ EMCC™ single-vessel hydraulically-full heterogeneous reactor. The mathematical description of the digester is a modification of the extended Purdue model, a rigorous framework based on the fundamental principles of mass and energy conservation. The heat exchange units - shell-and-tube heat exchangers, condensers and a kettle reboiler - are introduced considering an interlinked-cell model approach. The underlying mathematical formulation consists of a large number of differential-algebraic equations (DAEs) which result from the lumped-parameter approximation used to describe the several process units. An open architecture solution is developed, and the set of equations is codified using the gPROMS and C++ programming languages. The agreement between the model predictions and the experimental information from plant data is reasonable, thus providing a model-based framework for future research.
1. Introduction
The Smart Enterprise Project within Visy Pulp and Paper Industries supervises the development of an advanced set of tools for the Tumut Mill. These routines aim to extract meaningful information from on-line plant data, gain understanding of the operational status of the process, and provide a sound basis for making faster and more informed decisions. The tools conceived for the Smart Enterprise Project range from process modeling and simulation software, to parameter estimation, fault diagnosis and data reconciliation routines, as well as process optimization and advanced process control modules. Although the objectives of the Smart Enterprise division could be tackled from a semi-empirical approach, a rigorous theoretical framework could only be formulated from a model-based perspective of the physical and chemical transformations in the system. Therefore, process modeling and simulation are of paramount importance, since any excessive simplification or assumption at this critical stage would have a direct impact on the results of any further development.
* Corresponding author. Tel.: +61 2 9351 4337; Fax: +61 2 9351 2854. Email: [email protected]
Since the introduction of the H-factor concept by Vroom in 1957, several authors - Christensen et al. (1982), Gustafson et al. (1983), Harkonen (1987), Michelsen (1995) and Wisnewski et al. (1997) - have made important contributions to the field. These theoretical advances in the understanding of chip delignification encouraged improvement in the design of pulping equipment. Nowadays, the need for cost-saving simplification of process equipment and simultaneous enhancement of process performance has made the cooking of wood chips a highly integrated process. Consequently, the performance of an industrial digester under normal and abnormal operating conditions is directly influenced by auxiliary process equipment, and the state of this key unit can no longer be predicted without considering vital information resulting from other units in the area. On the other hand, while some strategies such as parameter estimation and data reconciliation benefit from the redundancy of information that occurs when mass and heat balances are applied to a series of interconnected units, other strategies such as process optimization have practical application when environmental or cost objective functions are considered over an entire area of unit operations. In light of these facts, a better understanding of current technologies and a greater chance to make the most of their potential can only be achieved via a thorough description of the whole cooking process. This paper addresses the mathematical modeling, dynamic simulation and validation of the continuous digester and associated auxiliary units at the Tumut Mill in Australia.
2. Model Definition and Implementation
2.1. Digester
The Andritz-Ahlstrom continuous digester at the Tumut Mill is a Lo-Level™ single-vessel hydraulically-full tubular reactor that produces 610 air-dry tons of pulp per day at design operating conditions. In this huge reaction vessel, wood chips and cooking liquor entrained in the void space of the solid matrix travel downwards through the column. Additionally, the liquor present in the void space between chips, the free liquor, moves either downwards or upwards, in co-current or counter-current flow to the chips and entrained liquor respectively. The flowrate of bound liquor depends on the extent of pulping, whereas the flowrate of free liquor is determined by the addition and extraction flowrates in each zone of the digester. A Lo-Level™ Hi-Heat™ vertical reactor consists of three co-current zones - impregnation I (DI001), impregnation II (DI002), lower cooking (DI003) - and three counter-current zones - upper cooking (DI101), wash (DI102) and blow (DI103) - as shown in Figure 1. However, during unusual operating circumstances the co-current or counter-current flow configuration in the above zones may no longer be valid. One of these unusual scenarios is pressure upset conditions, where the upper cooking screen extraction flow is smaller than the actual downward flow of free liquor just above the screens, transforming the upper cooking zone temporarily into a co-current one. Another typical situation occurs when a change in the digester pressure control strategy - from an upper-extraction flow control to a cold-blow-addition flow control - is needed to facilitate the wiping of the extraction screens. In the original extended Purdue model, mass and heat balances for co-current and counter-current zones were derived from the fundamental principles of conservation, considering a lumped-parameter approximation.
Figure 1. Feed line, digester and recovery system of the Tumut Mill digester area.
Since under upset conditions the flow of free liquor cannot be defined a priori, any mathematical formulation unable to consider a discontinuity in the flow pattern loses its capacity of prediction. A model suitable for parameter estimation, data reconciliation and model-predictive control strategies must introduce mechanisms to handle the highlighted discontinuities. Fortunately, one elegant solution can be devised by reformulating the mass balance for the free liquor phase as follows:
dm_f/dt = F^d·ρ_f^d + F^u·ρ_f^u + F^x·ρ_f^x − F·ρ_f − V_x·ρ_f + ψ·(ρ_c − ρ_f)    (1)

F^d + F^u = F ,  F^u = R·F    (2)
In Eq. 1, the superscript d, u or x denotes downstream, upstream and injection/extraction convective flows respectively. In Eq. 2, R = 0 characterizes a co-current zone, while R = 1 describes a counter-current one; hence Eq. 1 is able to describe both flow configurations. Furthermore, the coefficient R can be interpreted as a factor that characterizes the degree of mixing in the zone, and could theoretically take any value in the interval [0,1]. The introduction of the coefficient R in Eq. 1 by means of Eq. 2 makes it possible to use it as a tuning factor that characterizes the free liquor flow pattern in the column.
2.2. Heat exchange and recovery system
The heat recovery system is composed of a kettle-type reboiler (EE425), a reboiler preheater (EE466) and a steam economizer (EE419), as shown in Figure 1. As a result of the nature of this heat recovery network, the performance of the heat exchange units exhibits strong interactions, which become more critical during transient operation or in situations where fouling resistances significantly deviate the operating conditions from the original design point. An interlinked-cell approach was considered to model the full range of units of the heat exchange and recovery system.
2.3. Feed line
The Lo-Level™ feed line consists of a series of fluid storage and transportation devices as well as pipes and pipe nodes that transfer pre-steamed chips and cooking liquor from atmospheric pressure to the digester operating pressure, as shown in Figure 1.
2.4. Implementation aspects
The resulting mathematical formulation of the entire area - the Lo-Level™ feed line, the Lo-Level™ heat recovery system and the Lo-Solids™ digester and associated heat exchange units - consists of a large number of differential-algebraic equations (DAEs) which result from the lumped-parameter approximation used to describe the several process units. These equations were implemented in gPROMS, a novel equation-oriented declarative language. The gPROMS environment combines two distinctive entities, MODELs and TASKs, into a higher-level entity, the PROCESS, which describes the state of a simulation. To ease the burden on gPROMS' numerical methods, all physical and transport properties for water, steam and cooking liquor were written in C++ and compiled as a Dynamic Link Library (DLL).
3. Methods and Experimental Results
The abundance of experimental data from the Tumut Mill can only be systematically and consistently utilized by automating several data pre-processing routines. These tools consist of implementations of public-domain numerical methods and statistical analysis recipes, together with a Graphical User Interface (GUI) developed in Visual Basic for Applications (VBA) that allows the user to have full control of their execution.
3.1. Initialization simulation - initial conditions
The solution of any set of DAEs depends on its initial conditions and, with the exception of a few cases, the initial values for the state (differential) variables of the model cannot be determined from plant data. In this study, flow rates, concentrations and enthalpies (temperatures) were averaged using a composite trapezoidal integration rule. Then, a simulation for initialization purposes was run for 420 min of undisturbed integration based on mill-data averaged values. It is interesting to note that in case of lack or inconsistency of plant data, a simulation could still be run with default values for the missing control and input variables. If severe discrepancies arise between the experimental data and the initialization-simulation predictions, a new TASK with a different averaging time window should be considered.
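The composite trapezoidal averaging amounts to a time-weighted mean over the chosen window, which also handles unevenly sampled historian tags; the sketch below shows this on synthetic data.

```python
# Time-weighted window average via the composite trapezoidal rule.
import numpy as np

t = np.array([0.0, 7.0, 15.0, 30.0, 52.0, 60.0])    # min, uneven sampling
y = np.array([41.2, 41.8, 40.9, 42.3, 41.5, 41.1])  # e.g. a flow or temperature tag

avg = np.trapz(y, t) / (t[-1] - t[0])
print(f"window average: {avg:.2f}")
```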
1053 of view is the lignin content of the blow line pulp, traditionally known as kappa number. In the Tumut Mill, an on-line kappa number measurement is available from the suction of the medium consistency pump (MCP) at the end of the fiberline, by means of a device named Smart Pulp Platform (SPP). It should be stressed that the calibration of such apparatus is far from being a trivial task, and discrepancies up to 6 #K -kappa units- can occur between the SPP results and the laboratory analysis. In other words, even though the state-of-the-art technology in kappa measurement is far from being an accurate, precise and reUable one, it provides a rough on-line indication of this number that could help to visualize average values and general trends of the blow line kappa number. From Figure 2a and 2b, it is evident that the variability of the SPP kappa number is greater than that predicted by the model under the same sequentially time-averaged operating conditions. However, the average kappa number and the dynamic trends are in very good agreement considering that the industrial limit for kappa control in the digester under study is 10 #K -this is shown in Figure 2a by the acronyms UCL and LCL which denote the upper and lower control limits, respectively. Discrepancies in the results might arise from different scenarios. Circumstances such chip size distribution and composition fluctuations, residence time variations, symptomatic movement of the chip column, and disagreement in operating procedures are the most Hkely to occur. Even though some of the outlined points could be accounted for by modifying the governing equations in the primitive models -for example, by introducing chip size distribution, population balances and chip-thicknessdependant diffusion terms-, research is still being conducted in many of these areas because of the complexity of the described phenomena. Moreover, although the consequences of these trends are observable, their quantification from an empirical or theoretical approach is a difficult task due to the interdependence between these phenomena. When considering the influence of feedstock operations, upstream processes such as chipping, screening and chip-pile storage are responsible for the chip size distribution fed to the digester. Chip size distribution is rarely quantified in the Tumut Mill and is difficult to predict without the abundance of consistent experimental data. On the other hand, problems in debarking operations introduce chip composition fluctuations which cannot be captured in the existing equations for reaction and diffusion phenomena in the proposed model. As a consequence of a large variability in chip sizes, fines are continuously fed to the B L O W LINE K A P P A N U M B E R ( A i 3 7 2 A ) - 1 1 / 0 7 / 2 0 0 2
B L O W LINE K A P P A N U M B E R ( A I 3 7 2 A ) - 0 6 / 0 7 / 2 0 0 2 9&00 94.00 9a 00 9200
87.00
^ 91.00
J 9aoo J 8900
saoo 87.00 8&00
I " 86.00
••
1
:**'*:***
1 'i*****.*.*..,^
•
1 1 "'•
•
,
J 85.00 1-84.00
82.00
•
81.00
8&00
80.00 200
400
eOO
800
1000
Time [min]
Figure 2a. Blow line kappa number (AI372A) trends, 06/07/2002.
Figure 2b. Blow line kappa number (AI372A) trends, 11/07/2002.
Fines become softer because of the delignification process and cause plugging of the extraction screens. The resulting higher differential pressure across the screens leads to a non-uniform distribution of the chemicals added in the circulation loops. Operators often tackle this problem by carrying out a wiping procedure that comprises altering the screen switching sequence or closing an entire row of screens. However, column movement is inevitably impacted, which causes chip and free-liquor channeling and a consequent modification of the residence time of the chips, as well as a disturbance of the optimal temperature and concentration gradients. In addition, operating procedures depend on the experience and background of the digester and fiberline operators, and there is disagreement in chip-level management strategy, column compaction policy, and reactivity-proactivity behavior. From the ideas explained above, even if the chemistry of the cooking process were perfectly understood, it would still be very difficult to account for hydraulic aspects of the digester that cannot be introduced within the framework of fundamental mass and energy balances such as the proposed modification to the original extended Purdue model described in this work.
4. Conclusions
The model calculates not only all the measured process variables of interest, but also predicts the kappa number and yield of the pulp, which are two important variables for process control, quality control and overall operation assessment. The abundance of experimental data from the mill provided a unique chance to validate the enhancements introduced to the original extended Purdue model. The agreement between the model predictions and the experimental information is reasonable considering the current understanding of chip size distribution and composition fluctuations, residence time variations, and the hydraulics of the reaction vessel. Within the Smart Enterprise Project the proposed model has a number of applications. Because of its ability to predict a broad spectrum of plant scenarios in a fraction of the actual process characteristic time, the model will be fully customized as an on-line simulator to be used both as an operator training tool and as a process engineer's analysis tool. Finally, the model presented in this paper will be used to investigate on-line parameter estimation and data reconciliation strategies, steady-state and dynamic optimization routines, and advanced model-based control systems for the entire digester area.
5. References
Christensen, T., Albright, L.F. and Williams, T.J., 1982, "A Mathematical Model of the Kraft Pulping Process", Tech. Rep. 129, Purdue University, PLAIC, West Lafayette, IN.
Gustafson, R.R., Sleicher, C.A., McKean, W.T. and Finlayson, B.A., 1983, "Theoretical Model of the Kraft Pulping Process", Ind. Eng. Chem. Process Des. Dev., 22, 87-96.
Harkonen, E.J., 1987, "A Mathematical Model for Two-Phase Flow in a Continuous Digester", TAPPI J., 70(12), 122-126.
Michelsen, F., 1995, "A Dynamic Mechanistic Model and Model-Based Analysis of a Continuous Kamyr Digester", PhD Thesis, University of Trondheim, Trondheim.
Wisnewski, P.A., Doyle, F.J. and Kayihan, F., 1997, "Fundamental continuous-pulp-digester model for simulation and control", AIChE J., 43(12), 3175-3192.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Multiobjective Optimization of a Continuous Pulp Digester
C.M. Silva and E.C. Biscaia Jr.
PEQ/COPPE/UFRJ - Federal University of Rio de Janeiro - Brazil
[email protected]
Abstract
A rigorous dynamic model of the Kamyr continuous pulp digester has been developed as a distributed-parameter system. The model was adjusted based on data from an industrial unit. Optimal operating strategies are generated by a multiobjective optimization method based on Genetic Algorithms, aiming to control the kappa number and the pulp yield. The manipulated variables include the temperature of the liquor and chips fed into the digester and the inlet white liquor flow rate. The simulated results have confirmed the efficiency of the multiobjective technique in finding the Pareto optimal set, offering a viable strategy for solving such complex dynamic optimization problems.
1. Introduction
Pulp production is the most important process for the chemical conversion of wood. It accounts for more than one third of the total wood processed annually. Worldwide consumption of paper products already exceeds 310 million tons per year, and the paper industry continues to expand its production. The major process for producing chemical pulps is the alkaline sulphate or Kraft process. It takes place in a complex plug-flow reactor known as a digester, where wood chips are cooked with an aqueous solution of sodium hydroxide and sodium sulphide. The lignin that binds the cellulose fibers together is dissolved through a combination of chemical and thermal effects. A continuous digester consists basically of three zones: the impregnation, cooking and wash zones. In the impregnation vessel, the white liquor penetrates and diffuses into the wood pores. The mixture is fed into the top of the cooking zone, where both chips and liquor are heated to the desired reaction temperature. The delignification process is carried out during the continuous transport through the cooking zone. Controlled conditions are maintained in order to avoid overcooking, which results in excessive reduction of the pulp viscosity and loss of pulp strength quality. In the wash zone, a countercurrent flow of free liquor extracts the inorganic solids from the entrapped liquor inside the pores, while mild delignification continues simultaneously. The residual lignin present in the pulp is expressed in terms of the "kappa number", which is determined by the oxidation of lignin by potassium permanganate under acidic conditions. The kappa number is the major parameter in the evaluation of pulp quality. Additional relevant parameters are the yield, defined as the ratio of dry pulp to raw material fed, and the H factor, which represents a measure of the pulping level. The Kraft process is not only the dominant chemical pulping process, but also the most important among the various production methods overall. Several studies have been reported on the modeling of continuous digesters at different levels of complexity. Michelsen and Foss (1996) have developed a dynamic model that describes the transfer
of mass, momentum and energy in the continuous digester. The model is able to predict pulp quality and to explain the interaction between the reaction kinetics and the residence time in the vessel. Wisnewski et al. (1997) have presented an extension of the Purdue model, with a more detailed description of the mass and energy transport within the digester. A lumped-parameter approximation has been used to describe the axial transport mechanism, employing a series of continuous stirred-tank reactors. Funkquist (1997) has proposed a gray-box identification of a continuous digester, incorporating empirical data into a physically based model to describe the process as a distributed-parameter system. Moczydlower (2002) has elaborated a digester model combining the description of the number of phases and compounds proposed by Wisnewski et al. (1997), as well as the same kinetic and stoichiometric parameters used by these authors, with the structure of partial differential equations employed by Funkquist (1997). Several modifications have been devised attempting to overcome some of the conflicting targets of the Kraft process, such as maximizing the pulp yield while minimizing the environmental impact. In general, the greater the requirement for fiber purity, the lower the yield of fiber and the greater the cost of the process. Increased delignification rates together with reduced alkali charges and improved pulp properties are also required. A considerable number of works have been developed concerning the control of the digester (Wisnewski and Doyle III, 1996; Wisnewski and Doyle III, 1998; Amirthalingam and Lee, 1999; Doyle III and Kayihan, 1999). Not many studies, however, are available in the literature on the optimization of the digester (Sidrak, 1995). In this contribution, the pulping process in a continuous digester is analyzed by a multiobjective optimization technique in order to seek the trade-off surface regions between the conflicting objectives.
2. Mathematical Model
The continuous digester model employed in this work constitutes an improved version of the model proposed by Moczydlower (2002). The model describes a reaction vessel consisting of a co-current cooking zone and a counter-current wash zone. Free liquor is fed into the top and bottom of the digester, and is extracted between the cooking and wash zones. The pulp is discharged at the bottom of the digester. The process is considered to be composed of three homogeneous phases: wood chips, free liquor and entrapped liquor. The wood is assumed to consist of five components: high-reactivity lignin, low-reactivity lignin, cellulose, araboxylan and galactoglucomannan. The free liquor and the entrapped liquor phases comprise six components: active effective alkali, passive effective alkali, active hydrosulfide, passive hydrosulfide, dissolved lignin and dissolved carbohydrates. The model was adjusted using data from the industrial unit of Klabin Parana Papeis (Brazil). The assumptions made in the model development include: (a) the space dependence is one-dimensional and axial along the vessel; (b) the temperature and the alkali concentration within the chips are assumed to be uniform; (c) the chips and the entrapped liquor phase are in local thermal equilibrium; (d) the delignification reaction only occurs in the cooking zone.
A dimensionless model consisting of 19 partial differential equations (PDEs) is derived for the state variables. The equations include convection, dispersion, diffusion and reaction, as follows:

Solid-phase component balance (i = 1, ..., 5):

\frac{\partial Cc_i}{\partial t} = -v_c \frac{\partial Cc_i}{\partial z} + \alpha \frac{\partial^2 Cc_i}{\partial z^2} - R_i \qquad (1)

Entrapped-liquor-phase component balance (i = 1, ..., 6):

\frac{\partial Ce_i}{\partial t} = -v_c \frac{\partial Ce_i}{\partial z} + \alpha \frac{\partial^2 Ce_i}{\partial z^2} + \Lambda_m \sqrt{Tc}\, \exp\!\left(-\frac{E}{Tc}\right) \left(\varepsilon\, Cl_i - Ce_i\right) - \sum_j \gamma_{ij} R_j \qquad (2)

Free-liquor-phase component balance (i = 1, ..., 6):

\frac{\partial Cl_i}{\partial t} = -v_l \frac{\partial Cl_i}{\partial z} + \alpha \frac{\partial^2 Cl_i}{\partial z^2} + \frac{\eta}{1-\eta}\, \Lambda_m \sqrt{Tc}\, \exp\!\left(-\frac{E}{Tc}\right) \left(Ce_i - \varepsilon\, Cl_i\right) \qquad (3)

Wood-chip energy balance:

\frac{\partial Tc}{\partial t} = -v_c \frac{\partial Tc}{\partial z} + \alpha \frac{\partial^2 Tc}{\partial z^2} + \Lambda\,(Tl - Tc) - \sum_j \beta_j R_j \qquad (4)

Free-liquor energy balance:

\frac{\partial Tl}{\partial t} = -v_l \frac{\partial Tl}{\partial z} + \alpha \frac{\partial^2 Tl}{\partial z^2} + \frac{\eta}{1-\eta}\, \Lambda\,(Tc - Tl) \qquad (5)

where

R_i = \left[ Da_{1i} \exp\!\left(-\frac{E_{1i}}{Tc}\right) Ce_1 + Da_{2i} \exp\!\left(-\frac{E_{2i}}{Tc}\right) \sqrt{Ce_1\, Ce_3} \right] \left(Cc_i - Cc_i^{r}\right)

Cc_i, Ce_i and Cl_i are the concentrations in the chips, the entrapped liquor and the free liquor; Tc and Tl are the chip and liquor temperatures; v_c and v_l are the chip and liquor velocities; \alpha is the dispersion coefficient (Peclet number); \Lambda and \Lambda_m are the heat and mass transfer coefficients; \beta is the heat of reaction coefficient; \gamma is the stoichiometric coefficient matrix; \eta is the volume fraction of the chips; \varepsilon is the chip porosity; E is the activation energy; Da is the Damkohler number; and Cc_i^{r} is the concentration at which the component no longer reacts. The definitions of the dimensionless groups and the parameter values are presented in Wisnewski et al. (1997) and Funkquist (1997). In order to improve the original model, Danckwerts boundary conditions were assumed for the inlet and outlet flows in the mass and energy balances. Moczydlower (2002) had adopted the feed concentrations of the components and the feed temperature as inlet boundary conditions. Since classical mathematical modeling was employed to describe the diffusion, dispersion and convection phenomena, Danckwerts boundary conditions are required in order to satisfy the mass and energy balances. Besides providing a more realistic description of the process, these assumptions made the optimization of the model possible by reducing the stiffness of the original model. Distinct boundary conditions were proposed for the cooking and wash zones, as free liquor is fed at both the top and bottom of the digester. Initial conditions were set by supposing the vessel initially filled with wood chips with null component content and liquor at a certain initial concentration. Both liquor and chip flows are considered to be at the same temperature. The set of PDEs was spatially discretized into 20 points along the digester by means of the Spline Collocation method. The mass and energy balance equations for the free liquor phase involve different dimensionless groups for the cooking and wash zones; therefore, the system was solved separately for each of these zones. The resulting system, consisting of 380 ODEs, was integrated using the Runge-Kutta method. Figure 1 shows the steady-state profiles of the kappa number and temperature in the digester. The other important process variables were omitted for the sake of brevity. The behaviors of all profiles obtained with the proposed model are consistent with those presented in the literature. In Table 1, the simulated results are compared with those obtained with the original model and in the industrial plant, using the same chip flow rate, liquor concentration and flow rate, feed temperature and equipment geometry. The simulated results are in accordance with the available measured data from the industrial unit. The difference in the pulp yield is probably due to the unavailability of wood composition data. The wood component concentrations were estimated based on the results of analyses in the industrial plant and values from the literature. Another reason could be the dispersion coefficient, whose inference may have led to higher effective alkali consumption in the simulated process, increasing the degradation of the hemicelluloses.
[Figure 1 shows two panels: the simulated kappa number and the simulated temperature (about 135-165 degrees C) as functions of the distance from the top of the digester (0-45 m).]
Figure 1. Simulated Kappa number and temperature profiles.

Table 1. Parameters for evaluation of the process.
Variable | Proposed model | Original model | Industrial unit
Kappa number (-) | 74.4 | 80 | 75
Yield (%) | 59.5 | 61 | 63
Temperature at extraction (°C) | 162.7 | 162.5 | 162
Effective alkali at extraction (g/l) | 4.3 | 5 | 5
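The solution procedure just described (spatial discretization into 20 points, then Runge-Kutta integration of the resulting ODEs) can be pictured with a short method-of-lines sketch. The Python code below is illustrative only: it solves a single dimensionless advection-dispersion-reaction balance with Danckwerts boundary conditions, uses finite differences in place of the spline collocation actually employed, and all parameter values are placeholders rather than the fitted digester parameters.

import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch for ONE advection-dispersion-reaction balance,
# standing in for the 19-PDE digester model.  Placeholder parameters.
n = 20                                # 20 axial points, as in the paper
dz = 1.0 / (n - 1)
v, alpha, Da = 1.0, 0.05, 2.0         # velocity, dispersion, Damkohler (assumed)
c_in = 1.0                            # dimensionless feed concentration

def rhs(t, c):
    # Ghost nodes enforce Danckwerts conditions:
    # inlet, v*(c - c_in) = alpha*dc/dz; outlet, dc/dz = 0
    left = c[0] - dz * v * (c[0] - c_in) / alpha
    ce = np.concatenate(([left], c, [c[-1]]))
    conv = -v * (ce[2:] - ce[:-2]) / (2.0 * dz)
    disp = alpha * (ce[2:] - 2.0 * ce[1:-1] + ce[:-2]) / dz**2
    return conv + disp - Da * c       # first-order consumption as a stand-in

sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(n), method="RK45")
print(np.round(sol.y[:, -1], 3))      # profile approaching steady state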
3. Multiobjective Optimization Method
A multiobjective optimization method has been proposed to generate optimal operating strategies for the continuous digester. An optimization algorithm based on Genetic Algorithms (GA) has been developed. The GA optimization procedure consists of a search for non-dominated solutions. The concept of non-dominance refers to solutions for which no objective can be improved without worsening at least one of the other objectives. The progress strategy is guided by the fitness evaluation, and consists of transforming the population with genetic operators to generate the next population. Different adaptations of the original GA are presented in the literature (Cheng and Li, 1998; Toshinsky et al., 1999; Wang et al., 1998). A detailed background on GA theory is reported in Busacca et al. (2001). The multiobjective optimization algorithm developed constitutes an improved version of the Pareto genetic algorithm proposed by Cheng and Li (1998). The standard ranking procedure, based on the concept of non-dominance, has been extended to treat multidimensional problems. A new class of operators is introduced to enhance the algorithm performance: (a) a niche operator, which prevents genetic drift and maintains a uniformly distributed population along the optimal set; (b) a Pareto-set filter, which avoids missing optimal points during the evolutionary process; and (c) an elitism operator, which ensures the propagation of the best result of each individual objective function. These operators reduce the necessary number of generations and are computationally feasible even for complicated problems (Silva and Biscaia, 2002). A fitness function based on the ranking procedure is provided to determine the reproduction ratio. A penalty function method based on fuzzy logic theory is adopted to incorporate the constraints into the fitness function. The algorithm operates in a continuous variable space, which is computationally fast and stable in converging to global optima.
3.1. The formulated problem
The multiobjective optimization problem involves the minimization of the kappa number, f1, and the maximization of the pulp yield, f2. The decision variables are the temperature of the liquor and chips fed into the digester, T, and the inlet white liquor flow rate, Ql. A process constraint limits the value of the kappa number to a certain range, in order to meet the product specification requirements:
minimize \quad f_1 = \frac{\left. (Cc_1 + Cc_2) \right|_{exiting}}{0.00153 \left. \sum_{i=1}^{5} Cc_i \right|_{exiting}}

maximize \quad f_2 = \frac{\left. \sum_{i=1}^{5} Cc_i \right|_{exiting}}{\left. \sum_{i=1}^{5} Cc_i \right|_{entering}}

subject to \quad 70 \le kappa \le 80

where the numerator of f_1 collects the two lignin components (i = 1, 2) and 0.00153 is the kappa-number conversion factor.
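The non-dominance test that drives the ranking procedure is compact enough to illustrate directly. The Python sketch below is a generic Pareto filter for the pair (f1 to be minimised, f2 to be maximised); it is not the authors' GA, which adds the niche, Pareto-set filter and elitism operators. Three of the sample points are taken from Table 3, while the dominated point (75.0, 59.0) is made up.

# Generic non-dominance test for (kappa, yield): kappa is minimised,
# yield is maximised.
def dominates(a, b):
    # a dominates b: no worse in both objectives and strictly better in one
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    better = a[0] < b[0] or a[1] > b[1]
    return no_worse and better

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pop = [(70.8, 59.2), (79.7, 60.5), (75.0, 59.0), (74.1, 59.6)]
print(pareto_front(pop))  # (75.0, 59.0) is dominated, e.g. by (74.1, 59.6)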
The optimization of the process has been carried out using the GA parameters shown in Table 2, established in a previous sensitivity study (Silva and Biscaia, 2001).

Table 2. Genetic algorithm parameters.
Population size | 20
Crossover probability | 75 %
Mutation probability | 5 %
Number of children per crossover | 1
[Figure 2 plots the Pareto optimal set as kappa number versus yield (%, roughly 59 to 60.6).]
Figure 2. Pareto optimal set.

Table 3. Optimal results.
Kappa (-) | Yield (%) | T (°C) | Ql (g/l)
70.8 | 59.2 | 159.7 | 1.61
71.4 | 59.3 | 159.6 | 1.61
72.2 | 59.5 | 159.7 | 1.59
73.4 | 59.6 | 159.0 | 1.61
74.1 | 59.6 | 158.7 | 1.61
74.8 | 59.8 | 158.7 | 1.61
75.1 | 59.8 | 158.8 | 1.60
75.7 | 60.0 | 158.9 | 1.58
76.4 | 60.0 | 158.4 | 1.60
77.4 | 60.1 | 158.0 | 1.60
78.7 | 60.4 | 157.7 | 1.60
79.2 | 60.4 | 157.6 | 1.60
79.7 | 60.5 | 157.4 | 1.60
Figure 2 shows the Pareto optimal set obtained in the optimization, and Table 3 lists some of the optimal results. It can be observed that the objective functions chosen are affected in opposing ways by changes in the decision variables, which characterizes the multiobjective nature of the problem. The optimal strategies for the feed temperature of chips and liquor and the inlet liquor flow rate permit a reduction in the kappa number of up to 9 points with a decrease in the process pulp yield of just 1.3%. The optimal results for the kappa number do not violate the limits required in the production process.
4. Conclusions
An improved continuous digester model, with more realistic boundary conditions, has been developed for the simulation and optimization of a pulping process. A Pareto genetic algorithm has been used to conduct the multiobjective dynamic optimization of the system. The optimal set of operating conditions resolves the conflicting optimization goals and shows that each kappa number has an associated maximum yield, and vice versa. Under such conditions, maximization of the plant yield is obtained while minimizing the kappa number, according to the product specifications.
5. References
Amirthalingam, R. and Lee, J.H., 1999, J. Proc. Cont. 9, 397.
Busacca, P.G., Marseguerra, M. and Zio, E., 2001, Reliab. Engng. Syst. Saf. 72, 59.
Cheng, F.Y. and Li, D., 1998, AIAA J. 36, 1105.
Doyle, F.J. III and Kayihan, F., 1999, Chem. Engng. Sci. 54, 2679.
Funkquist, J., 1997, Control Engng. Practice 5, 919.
Michelsen, F.A. and Foss, B.A., 1996, Appl. Math. Modeling 20, 523.
Moczydlower, D., 2002, Modeling and Control of a Continuous Pulp Digester, PhD Thesis, Federal University of Rio de Janeiro, Brazil (in Portuguese).
Sidrak, Y., 1995, TAPPI J. 78, 93.
Silva, C.M. and Biscaia, E.C., 2001, to be published in Comp. Chem. Engng.
Silva, C.M. and Biscaia, E.C., 2002, CAPE v. 10: Multiobjective Dynamic Optimization of Semi-Continuous Processes, Eds. J. Grievink and J. van Schijndel, Elsevier, Amsterdam.
Wisnewski, P.A. and Doyle, F.J. III, 1996, Comp. Chem. Engng. 20, S1053.
Wisnewski, P.A., Doyle, F.J. III and Kayihan, F., 1997, AIChE J. 43, 3175.
Wisnewski, P.A. and Doyle, F.J. III, 1998, J. Proc. Cont. 8, 487.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Searching for Enhanced Energy Systems with Process Integration in Pulp and Paper Industries
Jarmo Soderman and Frank Pettersson
Abo Akademi University, Heat Engineering Laboratory
Biskopsgatan 8, 20500 Abo, Finland
e-mail: [email protected]
Abstract
The scope of this paper is to discuss some process integration options for enhancing the energy systems in pulp and paper industries. The focus is on the recovery of heat from the exhaust air streams of the paper machine dryer section and on the utilisation of possible excess heat from thermo-mechanical pulping (TMP). With optimal heat recovery systems, substantial savings can be obtained. New papermaking technologies, such as improvements in the press section and impingement drying, have opened up new possibilities for the utilisation of secondary heat energy. Production of electrical power with an organic Rankine cycle (ORC) can be an option in that respect.
1. Introduction
Production of paper requires large amounts of both electrical and heat energy. For the production of one ton of paper, approx. 600 to 900 kWh of electrical energy and 1400 to 2000 kWh of heat energy are needed, depending on the paper grade and mill. The electrical energy demand for the production of one ton of TMP pulp is typically from 2000 to 3000 kWh. Approx. 70 % of that energy can be recovered as low-pressure steam, which in an integrated mill is used to a large extent at the paper machine dryer section. Because of the high energy demands, the enhancement of energy systems with process integration is studied intensively.
2. Development of process integration methods for heat exchanger network design
Pinch technology has been the most utilised process integration tool in heat exchanger network (HEN) synthesis. An early step in the method development was the introduction of heuristic rules for HEN synthesis by Masso and Rudd (1969). Hohmann (1971) introduced the idea of establishing the minimum utility target ahead of design. Huang and Elshout (1976) defined a bottleneck in the temperature vs. energy diagram of hot and cold composite curves, and Umeda et al. (1978) called the touching point of the curves the pinch point. Linnhoff and Flower (1978) presented a systematic procedure for the generation of energy-optimal HENs. These ideas were developed into pinch technology, presented for instance in Linnhoff and Hindmarsh (1983).
Mathematical programming for HEN synthesis was developed from the early attempts of, e.g., Kesler and Parker (1969) to solve the HEN synthesis problem as an assignment problem, through the sequential transportation and transshipment models of Cerda and Westerberg (1983) and Papoulias and Grossmann (1983), respectively, and the three-step model of Floudas et al. (1986), to the simultaneous HEN synthesis model of Yee and Grossmann (1990).
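Hohmann's idea of targeting the minimum utilities ahead of design can be made concrete with a small "problem table" cascade. The Python sketch below is purely illustrative; the two streams and the DTmin value are made-up numbers, not data from any mill.

# Problem-table cascade for minimum-utility targeting (illustrative).
dTmin = 10.0
hot = [(180.0, 60.0, 3.0)]     # (supply T, target T, CP in kW/K)
cold = [(30.0, 190.0, 2.0)]

# Shift hot streams down and cold streams up by dTmin/2
shifted = ([(ts - dTmin / 2, tt - dTmin / 2, cp, 'H') for ts, tt, cp in hot]
           + [(ts + dTmin / 2, tt + dTmin / 2, cp, 'C') for ts, tt, cp in cold])
bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)

heat_flows = []
for hi, lo in zip(bounds, bounds[1:]):
    surplus = 0.0
    for ts, tt, cp, kind in shifted:
        if min(ts, tt) <= lo and max(ts, tt) >= hi:  # stream spans the interval
            surplus += cp * (hi - lo) * (1.0 if kind == 'H' else -1.0)
    heat_flows.append(surplus)

cascade = [0.0]
for q in heat_flows:
    cascade.append(cascade[-1] + q)
qh_min = -min(cascade)           # minimum hot utility, kW
qc_min = cascade[-1] + qh_min    # minimum cold utility, kW
print(qh_min, qc_min)            # -> 40.0 80.0 for this example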
3. Paper machine dryer section heat recovery system
In a paper machine heat recovery system (HRS) the exhaust air streams of the paper machine dryer section are utilised for heating different cold streams. Condensation of the air moisture causes strong non-linearities in the system, notably in the heat flow rates per °C and the heat transfer coefficients in the exchangers. A mixed integer non-linear programming (MINLP) model has been developed that takes these non-linearities into account (Soderman et al., 1999). Additionally, the heat transfer area prices can be given as concave price curves in the model, and the climate of the mill location can be taken into account with a multiple-period formulation. The model is based on partitioning the overall temperature range into a number of temperature intervals. Heat from a hot stream temperature interval can be transferred to the cold stream temperature intervals with lower temperatures. The heat can also be removed with a cold utility or it can be discharged. For a cold stream interval the options are heat transfer from the hot stream intervals with higher temperatures or from a hot utility. Maximum savings are obtained by minimising the sum of running and investment costs of the HRS. A solution comprises the heat exchange matches to be included, as well as the process parameters and the heat transfer areas. For a paper machine with impingement drying, an example of input data for optimisation is shown in Table 1.

Table 1. A set of hot and cold stream data for a paper machine with impingement drying.

HOT STREAMS | H1 | H2 | H3
stream type | exhaust air | exhaust air | exhaust air
air flow, kg d.a./s | 30.5 | 17.2 | 30.5
temperature, °C | 85 | 236 | 82
moist. cont., kg H2O/kg d.a. | 0.17 | 0.26 | 0.15

COLD STREAMS | C1 | C2 | C3 | C4 | C5
stream type | imp. supply air | hood supply air | wire pit water | process water | circ. water
air, kg d.a./s; water, kg/s | 17.2 | 48.9 | 190 | 100 | 150
temperature in, °C | 28 | 28 | 51 | 32 | 28
temperature out, °C | 350 | 95 | 60 | 60 | 45
moist. cont., kg H2O/kg d.a. | 0.02 | 0.02 | - | - | -
The obtained optimal solution is shown in fig. 1. The heat transfer coefficients are calculated prior to the optimisation for each optional interval pair, taking into account the influence of condensation.
[Figure 1 shows the flow diagram of the optimal HRS: the three exhaust air streams (H1-H3) matched against the five cold streams (C1-C5), with the duties (kW), heat transfer areas (m2) and intermediate temperatures of the selected exchangers.]
Fig. 1. Flow diagram of the optimal HRS in the given example.
4. Upgrading secondary heat streams with a heat pump
Different types of heat pumps can be applied to upgrade the secondary heat streams. The conventional compressor heat pump cycle (CHP) and mechanical vapour recompression (MVR) are widely utilised. With an absorption heat pump (AHP) or an absorption heat transformer (AHT) the compressor can be omitted, but the process becomes more complex. In fig. 2 an absorption heat transformer is applied to a paper machine dryer section. A LiBr-H2O mixture can be used as the working fluid. Around 50 % of the sum of the heat input to the evaporator and the desorber is obtained from the absorber at an elevated temperature.
Fig. 2. Flow diagram and p,T-diagram of an AHT that heats the supply air, glycol water for ventilation air heaters and process water with dryer section exhaust air.
5. Electrical power production from excess heat
In an integrated mill there may be a problem finding good use for all the available secondary heat. The situation is accentuated when impingement drying is applied in the dryer section. A part of the steam is replaced by hot air, heated by gas burners. The steam consumption in the dryer section is also decreasing thanks to the improvements in the press section. The option to utilise possible excess heat for the production of electrical power with an organic Rankine cycle (ORC) is discussed here. ORC technology is applied in several geothermal power plants.
Fig. 3. Flow diagram of the basic ORC.
Fig. 4. ORC with a recuperator.
The basic ORC, fig. 3, is a conventional power cycle with a one-component organic working fluid. The fluid is evaporated at an elevated pressure by a heat source, expanded in a turbine, condensed with a suitable cold stream and pumped back to the higher pressure. The cycle efficiency can be improved with a recuperator, fig. 4. An ORC with high-speed technology has been developed, in which the turbine, the generator and the circulation pump are built on the same shaft in a hermetic unit. The rotational speed of the turbine is much higher than in a conventional ORC and consequently the equipment sizes are drastically reduced (Larjola, 1995). If the exhaust air from cylinder drying is used as a heat source, the working fluid can be, for instance, ammonia. The condensation of the exhaust air moisture starts at around 60 °C. The cold inlet water can be used as a heat sink. Due to the relatively small temperature difference, the process efficiency is low. In the winter period the outdoor air can be used instead of process water. A somewhat better efficiency could be obtained, but more heat transfer area has to be built. Exhaust air from impingement drying could also be used as a heat source. The condensation starts at around 70 °C. With a larger temperature difference a slightly better efficiency can be obtained than with the cylinder drying exhaust air. A much improved ORC process can be obtained with heat recovery steam (HR-steam) from the TMP-steam reboilers. An integrated mill with a TMP plant combined with a paper machine is taken here as an example. The paper grade is newsprint with a basis weight of 40 g absolute dry web/m2. With a machine speed of 1700 m/min and a sheet width of 10 m, the production of paper is 12.3 kg/s air-dry paper with 92 % dry solids. The paper
is dried in the dryer section from 48 % to 92 % dry solids. The specific energy consumption of the TMP refiners is 2000 kWh/t of 90 % dry solids pulp, the pulp production capacity is 14 kg/s and the total installed refiner power is 100 MW. In the reboilers the HR-steam production is 1 ton steam/MWh and the pressure of the clean steam is 3 bar(e). The paper machine dryer section is built with 30 % impingement drying and 70 % cylinder drying. It is assumed that, due to the impingement drying, 6 kg/s of HR-steam becomes available for the ORC. With HR-steam as the heat source and ammonia as the working fluid, the pressure of the NH3 gas before the turbine could be approx. 90 bar(a) and the temperature 120 °C, which is quite near the critical point. The condensation pressure of the NH3 would be approx. 12 bar(a). The relatively high pressures lead to elevated plant costs. Other working fluids, such as isobutane or isopentane, can be used to get lower pressure levels. The critical point of isobutane is approx. 36 bar(a) and 135 °C, and that of isopentane approx. 34 bar(a) and 187 °C. With isopentane as the working fluid, the pressure in the evaporator could be approx. 13 bar(a) at 130 °C and in the condenser approx. 1.1 bar(a) at 30 °C. With an overall efficiency of 14 % for the electricity production, the generated power would be 1.8 MW. With a specific plant cost of 2000 euros/kW, the investment would be 3.6 million euros and the cost of electricity 0.04 euros/kWh with an annuity factor of 0.16. A binary working fluid NH3/H2O with, for instance, 80 % NH3 and 20 % H2O is used in Kalina cycles. A flow diagram of one type of Kalina cycle (from Leibowitz and Mlcak, 1999) is shown in fig. 5. The cycle is applied here with HR-steam as the heat source and process water as the cooling medium. In a Kalina cycle the concentration of the NH3/H2O mixture varies. After the evaporation the fluid is partitioned in the separator into an NH3-rich gas stream, to be led to the turbine, and an H2O-rich stream, to be cooled in a preheater and mixed back into the NH3-rich stream after the turbine. The mixture is then cooled in a recuperator, condensed and pumped back to the evaporator.
Fig. 5. A Kalina Cycle.
Kalina cycles have been credited with higher efficiencies than the basic ORC (e.g. DiPippo, 1999). If the steam from the TMP refiners is used directly in the ORC, instead of the HR-steam, the overall investment cost could be reduced. The steam reboiler for that part of the TMP-steam is replaced by the ORC evaporator. The construction of the ORC evaporator is close to the reboiler construction. The ORC process should be placed in a separate building, where both the fire risk and the exposure limits can be taken properly into account. Process water heating in an ORC plant condenser opens up new possibilities to utilise the heat from the paper machine dryer section. The heat can be used, for example, for reduced-pressure evaporation of wastewater to recover clean water. Water recovery evaporation plants are in operation in several mills in Finland and Sweden.
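The power and cost figures quoted in this section can be cross-checked with a back-of-envelope calculation. In the Python sketch below, the annual operating time and the latent heat of the HR-steam are our own assumptions; the 14 % efficiency, the 2000 euros/kW specific plant cost and the 0.16 annuity factor are the values given above.

m_steam = 6.0      # kg/s of HR-steam available for the ORC (from the text)
h_latent = 2130.0  # kJ/kg, approx. latent heat of ~3 bar(e) steam (assumed)
eta = 0.14         # overall electrical efficiency (from the text)

heat_in = m_steam * h_latent          # ~12.8 MW thermal
power_kw = eta * heat_in              # ~1.8 MW electrical, as quoted

invest = 2000.0 * power_kw            # euros, at 2000 euros per kW installed
hours = 8000.0                        # assumed annual operating hours
cost = 0.16 * invest / (power_kw * hours)   # euros/kWh
print(round(power_kw / 1000.0, 2), "MW,",
      round(invest / 1e6, 2), "Meuros,", round(cost, 3), "euros/kWh")
# -> roughly 1.8 MW, 3.6 Meuros and 0.04 euros/kWh, matching the text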
6. Conclusions
Conventional heat recovery from paper machine exhaust air streams can be designed optimally with respect to the climate at the mill location and the type of drying process involved. Different heat pump types, for example absorption heat transformers, can be applied to elevate the stream temperatures above the temperature of the excess heat. In an integrated mill with impingement drying, the possible excess low-pressure steam from the TMP heat recovery, before or after the reboilers, can be utilised in an organic Rankine cycle for the production of electrical power. The next step is to study the design and economic aspects of different types of ORC plants.
7. References
Cerda, J. and Westerberg, A.W., 1983, Chem. Eng. Sci., vol 38, 1723-1740.
DiPippo, R., 1999, Geo-Heat Center Quarterly Bulletin, June 1999, 20, 1-8.
Floudas, C.A., Ciric, A.R. and Grossmann, I.E., 1986, AIChE Journal, vol 32, 276-290.
Hohmann, E.C., 1971, Optimal Networks for Heat Exchange, Ph.D. Thesis, University of Southern California, Los Angeles.
Huang, F. and Elshout, R., 1976, Chem. Eng. Prog., vol 72, Nr. 7, 68-74.
Kesler, M.G. and Parker, R.O., 1969, Chem. Eng. Progr. Symp. Series, vol 65, 111-120.
Larjola, J., 1995, Int. J. Production Economics, vol 45, 227-235.
Leibowitz, H.M. and Mlcak, H.A., 1999, GRC Trans., vol 23, 75-80.
Linnhoff, B. and Flower, J.R., 1978, AIChE Journal, vol 24, 633-642.
Linnhoff, B. and Hindmarsh, E., 1983, Chem. Eng. Sci., vol 38, 745-763.
Masso, A.H. and Rudd, D.F., 1969, AIChE Journal, vol 15, 10-17.
Papoulias, S.A. and Grossmann, I.E., 1983, Comput. Chem. Eng., vol 7, 707-721.
Soderman, J., Westerlund, T. and Pettersson, F., 1999, 2nd Conf. on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction, Budapest, Hungary, Proceedings, 607-612.
Umeda, T., Itoh, J. and Shiroko, K., 1978, Chem. Eng. Prog., vol 74, Nr. 7, 70-76.
Yee, T.F. and Grossmann, I.E., 1990, Comput. Chem. Eng., vol 14, 1165-1184.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
The Performance Optimisation and Control for the Wet End System of a Fluting and Liner Board Mill
M.T. Virta
Mondi Paper UK, Mytham Road, Little Lever, Bolton, BL3 1AU, UK, email: [email protected]
H. Wang and J.C. Roberts
The University of Manchester Institute of Science and Technology (UMIST), Department of Electrical Engineering and Electronics and Department of Paper Science, PO Box 88, Manchester, M60 1QD, UK, email: hong.wang@..., [email protected]
Abstract
This paper describes the development of a novel dynamic predictive and optimal control method for the wet end of a papermaking system. This part of the system plays an important role in the process in terms of its controllability and potential for optimisation. The wet end process is complicated and the control systems are always multivariable and dynamic in nature. Due to the severe interactions between the variables, general physics- and chemistry-based modelling techniques cannot be established. As such, feed-forward neural networks are selected as a modelling tool so as to build up a number of non-linear models that link all the variables to the quality outputs and process efficiency of concern. A software package has been established for this application and tested on a paper machine. The results of this trial were encouraging, showing clear potential to achieve the long term aim of optimising all the key commercial terms by controlling the wet end with a dynamic MISO/MIMO model.
1. Introduction
Over the past 30 years, process automation has dramatically changed the operation of pulp and paper mills. Effective control of critical processes has reduced costs while producing important productivity and quality gains. While process automation has produced a wide range of benefits for the forest products industry, many improvement opportunities remain (Brown, 2001). One of these opportunities is the predictive and optimised control of the wet end systems of paper machines. This is now achievable due to the availability of reliable on-line and in-line chemistry measurements and sensors for the wet end of paper machines. However, paper makers are reluctant to invest in measurements and sensors because a quick payback is not strictly guaranteed. This is the case especially when it is not commonly known for which control systems these instruments can be effectively used; most of the control strategies are based on SISO control. This cannot provide successful optimal control of the wet end due to the non-linearity and complex interactions in that part of the process. In this context, Mondi Paper UK has been trying to develop a new optimisation method for the whole paper
making process. This has included the solution to wet end optimisation. In this paper the construction of the new software is introduced, which is based on multivariable non-linear modelling and system optimisation techniques.
2. Dynamic Multivariable Optimisation Software
The dynamic multivariable optimisation software, which has been developed for the wet end of a papermaking system, is named DesicionMaker1. The data reduction, neural network modelling, optimisation and prediction techniques are combined in an organised way in this software so that it can be used effectively together with the existing DCS systems. The final result of this program is the predicted optimal adjustable inputs with respect to the key commercial terms of the paper mill. In the current version of the software the adjustable input is starch, and the key terms are retention and strength. These terms were chosen as a group of starting points for this optimisation project. In the mill-wide optimisation the program can be extended, and key terms such as the speed of the paper machine, breaks, other chemical flows, energy transfer, and effluent quantity and quality will also be included. When the calculation is finished, the program output (i.e., the adjustable input) is applied manually. The ultimate aim is to complete the closed loop control automatically.
3. Techniques
The data reduction techniques are necessary because the number of variables can be too large to handle in practice. This is especially true in the neural network modelling phase. By using these data reduction techniques it is possible to reduce the data dimension or, alternatively, to allow more inputs to be included in the model. If the data dimension is reduced, the program also runs faster. The available data reduction techniques can be divided into two kinds, linear and non-linear (Kramer, 1991). During this work a novel data reduction technique has been developed, in which linear and non-linear techniques are combined. This technique is called the hybrid principal component analysis. The non-linear part of this technique is based on neural networks. Neural networks have also been used for modelling the key commercial terms of the paper mill as a function of all the influencing variables. This information is used in the performance function and optimisation, for which the gradient method has been chosen due to its simplicity and reliability. In the following, an example is described with the performance function composed of two models, namely the strength model and the retention model. They will be used to illustrate the construction of the performance function. The strength target is included in the formula, so that the strength values can be made as close as possible to the target.

T_i = \frac{a}{r_i} + b\,(S_i - S_{target})^2 + c\,z_i^2 \qquad (1)

where T_i is the performance index at sample time i (the value of the optimised paper mill performance), r_i is the bottom wire retention (%), S_i is the final product strength (N m), S_{target} is the aimed strength value (N m), z_i is the starch addition (kg/h), a, b, c are the priority constants, and i is the current sample time instant.
Using equation (1), it is possible to minimise all three terms. By minimising the first term, the retention of the system can be maximised, leading to better use of the raw material and improved efficiency of the paper machine. By minimising the second term, the actual strength can be made as close as possible to its target. Finally, by minimising the third term, the starch consumption can be reduced. Since the model was designed for real-time optimisation, there is a need for a prediction technique. A new type of prediction method was introduced for predicting the optimised output so as to determine the maximised key terms in this model. In a predictive model the neural network unit is trained with the inputs and outputs sampled at a certain time difference that represents the prediction steps. In the following, an example of the predictive retention equation is given. The retention output vector contains the data that is L steps ahead of the input data in time.

r_{i+L} = f(r_i, z_i, u_i) \qquad (2)

where r_{i+L} is the predicted retention with control step L (%), f is a non-linear function trained by a neural network model, r_i is the retention at sample time i (%), z_i is the starch flow at sample time i (kg/h), u_i is the vector consisting of all the measured variables, i is the current sample time instant, and L is the predictive control step.
In equation (2) the final result is r_{i+L}, which is the predicted retention with the predictive control step L. The control step can be chosen within the limits of the data length. The r_i, z_i and u_i are all vectors at sample time i, and this function includes the historical data of the retention output r_i. Assuming that the current sample time is i, then by maximising equation (2), the future retention is maximised when an optimal adjustable input is found.
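To show how equations (1) and (2) fit together, the Python sketch below searches a range of starch flows for the minimum of the performance index. It is not the DesicionMaker1 implementation: the paper uses a gradient method with trained neural network models, whereas here f_retention, f_strength, the priority constants and the starch range are all made-up placeholders.

import numpy as np

a, b, c = 50.0, 0.01, 1e-8     # priority constants (placeholder values)
S_target = 200.0               # strength target, as in the trial

def f_retention(r_i, z, u):    # placeholder for the trained NN of eq. (2)
    return 0.98 * r_i + 0.002 * (z - 2900.0)      # u ignored in this toy

def f_strength(z):             # placeholder strength model
    return 195.0 + 0.004 * z

def performance(z, r_i=92.0, u=None):             # eq. (1)
    return (a / f_retention(r_i, z, u)
            + b * (f_strength(z) - S_target) ** 2
            + c * z ** 2)

grid = np.linspace(2800.0, 3100.0, 301)           # admissible starch flows, kg/h
z_opt = grid[int(np.argmin([performance(z) for z in grid]))]
print(z_opt)                   # starch flow minimising the performance index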
4. Testing DesicionMaker1 in a Machine Trial
The aim of the machine trial was to test the developed control software, DesicionMaker1, in which the optimal primary starch flow was predicted with a dynamic model over a longer run. The predicted and optimised starch flows and retention levels are compared in figure 1.
Figure 1. Comparison of the optimised and used starch flows and retention levels in a machine trial.
The results show that the mill was running with the optimal level of starch with respect to both retention and strength, and later in the trial the starch usage was reduced. The maximised retention results seemed reasonable and there was no remarkable difference between the maximised and the achieved retention results. The predicted and achieved retention levels dropped slightly with lower starch usage. Figure 2 shows the results of the predicted and optimised starch flows and strength levels.
Figure 2. Comparison of the optimised and used starch flows and strength levels in a machine trial.
The strength results are not from on-line measurements; they were measured off-line in the laboratory. The target value of the CMT strength during the trial was 200, and at every test point the predicted value was above the target, while also getting closer to the target value during the trial.
5. Conclusions
This paper has presented the development of a dynamic, predictive and optimal control method for the wet end of a papermaking system. The control of this part of the papermaking process is difficult because of its complex, multivariable and non-linear nature with long time delays. The main objective of this work has been to develop a closed loop control strategy suitable for the control of the wet end processes of a paper machine. This includes industrial implementation directed towards achieving optimal control of the wet end of a paper making system. It is necessary to establish dynamic models that can effectively characterise the dynamics and non-linearity of the wet end systems. A software package has been developed, called DesicionMaker1. It performs tasks such as data reduction, neural network modelling and on-line optimisation in a logical sequence. Prediction techniques are also used in this package so as to realise the in-time tuning of the starch input for the process. A machine trial has been performed to test DesicionMaker1. The aim of this trial was to try a new type of control method, where the optimal primary starch flow is predicted and adjusted. During the test, the paper machine performed at an optimal level of starch flow with respect to the required retention and strength values, and during the trial the starch addition level was reduced, which resulted in savings in starch usage. The optimised strength and maximised retention results seemed reasonable, even if the optimised strength values were not very near the target and the maximised retention levels were close to the achieved retention levels. The results of this trial were encouraging, showing a clear potential to achieve the long term aim of optimising all the key commercial terms by controlling the wet end with a dynamic MISO/MIMO model. A patent application covering this technique has been filed.
6. References
Brown, G.R., 2001, Solutions!, pp. 25-28, December.
Kramer, M.A., 1991, AIChE Journal, pp. 233-243, Vol. 37/2.
Scott, W.E., 2001, Uptimes, pp. 2-5, Vol. 8.
Virta, M.T., 2002, The performance optimisation and control for the wet end systems of a fluting and liner board mill, A thesis for the degree of Doctor of Philosophy, UMIST, Manchester.
Wang, H., Wang, A.P. and Duncan, S., 1997, Advanced process control for paper and board making, PIRA International Press.
7. Acknowledgements The authors would like to thank Mondi Paper UK, in particular Mr. Ron Simpson, Mr. Carl Cole and Mr. Colin Dainty for their valuable help and inputs in the project.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
On-line Estimation of Bed Water Content and Temperature in a SSC Bioreactor Using a Modular Neural Network Model
Gonzalo Acuna (a), Francisco Cubillos (a), Paul Molin (b), Eric Ferret (b) and Ricardo Perez-Correa (c)
(a) Facultad de Ingenieria, Universidad de Santiago de Chile, Av. Ecuador 3659, Santiago, Chile. E-mail: [email protected]
(b) Laboratoire de Genie des Procedes Biotechnologiques Alimentaires, ENSBANA, Dijon, France.
(c) Departamento de Ingenieria Quimica y Bioprocesos, Pontificia Universidad Catolica de Chile.
Abstract
A modular neural network model is developed in order to give on-line estimations of bed water content and temperature in a SSC bioreactor. This grey-box predictive model gives accurate results for previous experiments carried out on a Gibberella fujikuroi SSC, and could be used for on-line monitoring of bioreactors.
1. Introduction
Solid-substrate cultivation processes (SSC) offer a number of advantages over submerged fermentations, including higher productivity, lower operating costs and less downstream processing. However, large scale SSC bioreactors require thorough control of environmental conditions such as temperature, nutrient concentration, oxygen availability and bed water content. In fact, the main difficulties arise from temperature and bed water content regulation, due to the significant heat generation during the growth period and the high dependence of optimal biomass growth on the water activity level. Automatic control of large scale SSC bioreactors is still a very difficult task due to the complexity of the process (distributed, non-linear and time-variant), thus preventing the development of adequate models which are necessary for employing model-based control strategies. In addition, some of the most relevant state variables for control purposes, like bed water content, cannot be measured on-line and in real-time because of the lack of appropriate sensors at the pilot or industrial scale. Previous efforts on developing a mass and energy balance phenomenological model for a SSC have been published elsewhere (Peña y Lillo et al., 2001). This model accounts for the evolution of bed temperature and bed water content in a packed-bed aseptic pilot SSC bioreactor (200 kg capacity) for cultivations of the fungus Gibberella fujikuroi on wheat bran. Despite the good results reported, the model is only able to partially predict humidity evolution for some fermentations, while failing to achieve good bed temperature predictions.
In this work an alternative modular neural network model for the same process is proposed. It is well known that neural networks can approximate complex non-linear functions with arbitrary precision.
2. Phenomenological SSC Model
Solid-substrate fermentation processes (SSF) are four-phase systems, where air flows through a solid bed containing a water-insoluble support and a nutrient-rich aqueous solution. This technology is used to produce many high-value metabolites, such as antibiotics, biopesticides, aromas and enzymes. The production of gibberellic acid by the filamentous fungus Gibberella fujikuroi has been used by our group to tackle and solve the technical difficulties revealed in the automation and scaling up of SSF technology. Gibberellic acid is a biologically active compound that regulates many plant growth processes. An aseptic packed bed bioreactor with a nominal capacity of 200 kg, periodic agitation and forced aeration was built to scale up the production of gibberellic acid. The main model equations that describe the evolution of the average bed water content, the average bed temperature and the dry mass were developed by Peña y Lillo et al. (2001). Although this model exhibits good predictions of the bed water evolution, it fails to adequately predict the bed temperature.
3. Modular Dynamic Neural Network Model
Feed forward neural networks have been extensively used as black box models of various chemical and biochemical processes (Vlassides et al., 2001; Gontarski et al., 2000). Grey-box models have also been suggested and tested successfully (Aguiar and Filho, 2001; Zorzetto et al., 2000) in order to combine prior knowledge included in the phenomenological model with neural networks. Similarly, a priori knowledge contained in the above mentioned model can be incorporated into a modular neural network model, each module taking into account a different aspect of the process. In this work the modular neural network includes five hidden layers, two input vectors, and three outputs, only two of them acting as targets: L1 and L4 have non-linear transfer functions, while L2, L3 and L5 have linear transfer functions (see Figure 1). Four modules take account of the different process phenomena. Module 1 (L1) stands for the energy balance. Inputs to this module are the following state variables at time k-1: pressure drop, P, outlet air temperature, Tgo, inlet air temperature, Tgi, water external addition, Fw, CO2 and bed temperature, T. It is worth noticing that P, which accounts for bed compactness, is an additional variable not included in the original phenomenological model. The reason to include it is that P represents the level of bed compactness, which in turn is related to the heat transfer. The outputs of this module are decoupled in order to obtain, with module 2 (L2), the bed temperature at time k and, with module 3 (L3), the metabolic water production rate R at time k. The latter is also an additional output of this modular neural model and represents an advantage over a merely black box model, because it allows monitoring of this important kinetic parameter. Finally, module 4 (L4 and L5) takes into account the mass balance and has as inputs the metabolic water production rate R at time k, the water external addition, Fw, and the bed water content, X, at time k-1. The output of this module is X at time k.
Figure 1: Modular neural network scheme.
P1 = first input vector [P, Tgo, Tgi, Fw, CO2, T] at time k-1
P2 = second input vector [Fw, X] at time k-1
Tk = bed temperature at time k
Xk = bed water content at time k
Rk = water production rate at time k
Data coming from 6 experimental runs were used for training and testing the neural model, which includes around 80 parameters. Data were split into two sets (learning and test set), each one including 300 points. The Neural Network Matlab® Toolbox was used with a Levenberg-Marquardt optimization algorithm for training.
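The original work trained the network with the MATLAB Neural Network Toolbox. Purely to make the wiring of Figure 1 concrete, the following Python/numpy sketch reproduces the modular structure; the random weights, the tanh non-linearity of L1 and L4, and the layer sizes are all illustrative assumptions, not the trained model.

import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):        # random weights, purely illustrative
    return rng.normal(size=(n_out, n_in)), rng.normal(size=n_out)

W1, b1 = layer(6, 4)   # L1: energy-balance module (tanh), inputs P1
W2, b2 = layer(4, 1)   # L2: linear readout -> bed temperature T_k
W3, b3 = layer(4, 1)   # L3: linear readout -> water production rate R_k
W4, b4 = layer(3, 3)   # L4: mass-balance module (tanh), inputs [R_k, Fw, X_{k-1}]
W5, b5 = layer(3, 1)   # L5: linear readout -> bed water content X_k

def model(p1, fw, x_prev):
    h = np.tanh(W1 @ p1 + b1)
    t_k = (W2 @ h + b2)[0]
    r_k = (W3 @ h + b3)[0]
    h2 = np.tanh(W4 @ np.array([r_k, fw, x_prev]) + b4)
    x_k = (W5 @ h2 + b5)[0]
    return t_k, x_k, r_k

p1 = np.array([1.0, 30.0, 28.0, 0.1, 0.04, 32.0])  # [P, Tgo, Tgi, Fw, CO2, T] at k-1
print(model(p1, fw=0.1, x_prev=1.5))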
4. Results and Discussion
Two estimation schemes were used. The first one consists of one-step-ahead predictions from the input variables (see Figure 1). In this case the modular neural network model was able to correctly predict bed water content and bed temperature for the training and test set data (Figures 2 and 3).
Figure 2: Experimental versus one-step-ahead predictions of bed water content for all experimental data (600 points), MSE = 0.016.
Figure 3: One-step-ahead predicted bed temperature versus experimental data for the 6 experimental runs. C1, C2 and C3 correspond to the training sets, while C4, C5 and C6 are the validation set data.
The second scheme consists of multiple-step-ahead predictions from initial values of bed water content and bed temperature. In this case the already mentioned variables were recursively introduced as inputs of the modular neural model at each time step. The results were not as good as for the one-step-ahead predictions, the main problem being a lack of generalization of the modular neural model. However, very good results were obtained when dealing with each experimental run separately. Figures 4, 5 and 6 show the bed water content, water production rate and bed temperature, respectively.
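The difference between the two schemes lies only in what is fed back to the network. A sketch of the multiple-step-ahead recursion is given below, reusing the model function from the previous sketch; the input sequence is a placeholder for the measured exogenous variables.

import numpy as np

def multi_step(model, t0, x0, inputs):
    """Multiple-step-ahead scheme: predictions are recycled as inputs.
    model  : callable (p1, fw, x_prev) -> (T_k, X_k, R_k)
    inputs : sequence of (p1, fw) pairs of measured exogenous data"""
    t_k, x_k, traj = t0, x0, []
    for p1, fw in inputs:
        p1 = np.asarray(p1, dtype=float).copy()
        p1[-1] = t_k                 # feed back predicted T instead of measured
        t_k, x_k, r_k = model(p1, fw, x_k)   # x_k is likewise recycled
        traj.append((t_k, x_k, r_k))
    return traj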
1077 315-
(0
X 1^-
•ryyjff;-
1.0-
• Q5-
1
1
1
1
1
1
1
1
1
.
1
1
20
Xe>p
Figure 4: Experimental versus multiple-step-ahead predictions of bed water content for all experimental data (600 points), MSE = 0.0887.
Figure 5: Multiple-step-ahead predicted values of the water production rate on a dry basis and experimental values for culture 3.
Figure 6: Multiple-step-ahead predicted values of bed temperature and experimental values for culture 1.
5. Conclusions
The results show very good estimation capacities on validation data when using the first proposed scheme, while only the bed water content estimations remain good when working under the second scheme. The results confirm the capacity of this kind of neural model to track complex dynamic systems when a priori knowledge is conveniently introduced. Hence, the developed model can be used on-line, for example in a non-linear model predictive control scheme.
6. References
Aguiar, H.C. and Filho, R.M., 2001, Neural network and hybrid model: a discussion about different modeling techniques to predict pulping degree with industrial data, Chem. Eng. Sci., 56:565-570.
Gontarski, C.A., Rodrigues, P.R., Mori, M. and Prenem, L.F., 2000, Simulation of an industrial wastewater treatment plant using artificial neural networks, Comp. Chem. Eng., 24:1719-1723.
Peña y Lillo, M., Perez-Correa, R., Agosin, E. and Latrille, E., 2001, Indirect measurement of water content in an aseptic solid substrate cultivation pilot-scale bioreactor, Biotech. Bioeng., 76(1):44-51.
Vlassides, S., Ferrier, J.G. and Block, D.E., 2001, Using historical data for bioprocess optimization: modeling wine characteristics using artificial neural networks and archived process information, Biotech. Bioeng., 73(1):55-68.
Zorzetto, L.F.M., Filho, R.M. and Wolf-Maciel, M.R., 2000, Process modelling development through artificial neural networks and hybrid models, Comp. Chem. Eng., 24:1355-1360.
7. Acknowledgements
Fondecyt Grants 1010179 and 1020041 (Chilean Government) and Ecos-Conicyt Grant C99-B01 (French cooperation).
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
On-Line Monitoring and Control of a Biological Denitrification Process for Drinking-Water Treatment
M.F.J. Eusebio, A.M. Barreiros, R. Fortunato, M.A.M. Reis, J.G. Crespo, J.P.B. Mota*
Departamento de Quimica, Centro de Quimica Fina e Biotecnologia, Faculdade de Ciencias e Tecnologia, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal
Abstract
On-line monitoring and control of the biological denitrification process in a cell recycle membrane reactor has been developed and implemented at laboratory scale. The system has been tested with a real groundwater contaminated with nitrate. It is shown that a simple feedforward control strategy, which adjusts the feed rate of the carbon source to maintain the optimum inlet carbon/nitrate ratio, is effective at keeping both nitrate and nitrite concentrations in the treated water below the maximum admissible values.
1. Introduction In many areas of the world groundwater is the primary source of drinking water. Unfortunately, groundwater supplies are increasingly contaminated with nitrate, which exceeds the maximum admissible value of 50 mg NO3-/L set by the World Health Organisation and European Community (ENV/91/24, March 18, 1992). Water contamination by nitrate is caused by the intensive use of chemical fertilisers and untreated industrial and domestic wastewaters (Bouchard et al., 1992). The biological denitrification process eliminates nitrate by completely reducing it to gaseous nitrogen. This is in contrast to physico-chemical remediation processes, such as ion exchange, reverse osmosis and electrodialysis, in which the pollutant is merely transferred and/or concentrated. The major disadvantages of the conventional biological denitrification process are the microbial and secondary contamination of treated water (Bouwer and Crowe, 1988; Liessens et al., 1993a,b). Microbial contamination is mainly caused by the presence of the microorganisms used in the biological process and can be eliminated by using ultra/microfiltration membrane bioreactors. The membrane effectively retains the microbial culture inside the reactor so that it may be operated under low hydraulic residence time. It has been previously demonstrated that the membrane bioreactor ensures a high nitrate removal rate (up to 7.7 kg NO3-/m3 reactor·day) and residual concentrations of nitrate and nitrite in the treated water below the maximum admissible values (Barreiros et al., 1998). The secondary contamination of drinking water is due to the presence of soluble organic materials, which are produced during the biological treatment process (metabolic by-products) and/or are added in excess as electron donors for the biological nitrate reduction. In order to avoid contamination of the treated water by residual carbon, the amount of electron donor added must be set according to the nitrate concentration in the polluted water. Ideally, this amount should be equal to the quantity required for the dissimilative nitrate reduction plus the amount required for cell growth (assimilation) and maintenance (Blaszczyk et al., 1981; Her and Huang, 1995; Constantin and Fick, 1997). If nitrate is not fully reduced to gaseous nitrogen, intermediary accumulation, mostly of nitrite, is
likely to occur. In fact, the toxicity of nitrite is higher than that of nitrate; the maximum admissible value for nitrite has been set at 0.1 mg NO2-/L (ENV/91/24, March 18, 1992). The concentration of nitrate in groundwater has seasonal fluctuations due to climatic and environmental factors. In order to have an efficient denitrification process, the amount of carbon source must be regulated according to the fluctuations of nitrate concentration. This objective can be ensured by using an adequate control strategy. The aims of the present study are: • To develop and implement an on-line monitoring strategy for the biological denitrification process in a cell recycle membrane bioreactor; • To develop a simple, yet effective, control scheme to maintain the nitrate and nitrite concentrations below the maximum admissible values for drinking water by adjusting the feed rate of the carbon source.
2. Experimental Setup The denitrifying mixed culture was obtained from sludge taken from a wastewater treatment plant, enriched in a synthetic medium (Barreiros et al., 1998), and grown under anoxic conditions at 28°C and pH 7.0. The groundwater employed (Estarreja, Portugal), which was contaminated with nitrate concentrations in the range 140-190 mg NO3-/L, was supplemented with phosphate before each run. The experimental setup is shown in Figure 1. It comprises a cell recycle membrane bioreactor, measuring equipment and sensors, and an online monitoring and control system. The membrane reactor consists of a stirred vessel with an effective volume of 0.45 L, coupled to a membrane module. The contaminated water is pumped tangentially along the membrane surface with a cross-flow velocity of 1 m/s, generating two streams: a permeate stream free of cells (treated water), and a retentate stream (with cells) which is
Figure 1: Schematic diagram of the membrane bioreactor and online monitoring system. Thick lines represent streams, whereas thin lines represent transmission signals.
recirculated to the reactor. The system is operated continuously by feeding it with contaminated water to be treated and removing part of the permeate, free of nitrate and nitrite. The permeate is partially recycled to the system in order to guarantee the desired hydraulic residence time for each experiment. A hollow-fiber polysulfone membrane with an effective area of 0.42 m² was used throughout this study. The internal diameter of the fibers is 0.5 mm. The membrane molecular weight cut-off is 500 kDa, to completely retain suspended solids, supracolloidal material, and micro-organisms. The hydraulic permeability of the membrane at 28°C is 875 L/(m²·h·bar). The online monitoring and control system measures the nitrate, nitrite and dissolved organic carbon (DOC) concentrations, using an adjustable sampling rate, and controls the flow rate of the carbon source added to the bioreactor. To check the accuracy of the manipulated variable, this flow rate is also measured by recording the weight change of the carbon source. The permeation conditions of the membrane are inferred by measuring the transmembrane pressure at the inlet, outlet, and permeate of the ultrafiltration system. A snapshot of the console window of the monitoring and control interface is reproduced in Figure 2. The software interface was implemented in LabVIEW. The cell concentration was determined by optical density (OD) measurement at 600 nm and compared with an OD versus dry weight calibration curve. Nitrate and nitrite concentrations were measured using a segmented flow analyzer (Skalar). Nitrite detection was based on the colorimetric reaction with N-(1-naphthyl)-ethylenediamine; nitrate was detected as nitrite by the same method after reduction by hydrazine. DOC was also measured using the segmented flow analyzer. Carbon compounds were detected as
Figure 2: Snapshot of the console window of the monitoring and control interface developed in LabVIEW. The peaks represent online calibrations of the nitrate and nitrite measurements against a standard sample (100 ppm or 10 ppm).
CO2 in a refractive index detector after digestion with persulfate by UV radiation. Acetate was measured by high-pressure liquid chromatography (HPLC) using a reverse-phase column (Hamilton PRP-X300). Because of the inherent characteristics of the instrumentation used to monitor the nitrate, nitrite and DOC concentrations, the online measured values of these variables are delayed by 5, 10, and 20 minutes, respectively. Note, however, that these delays do not reduce the effectiveness of the monitoring and process control system since they are in general much smaller than the characteristic time of the disturbances in a real influent stream.
3. Results and Discussion As stated in the introduction, the ratio of carbon consumed to nitrate reduced (C/N) is the key variable to effectively control the denitrification process. Using the results presented here, we shall show that when the carbon source is added according to an optimum inlet C/N ratio value, both nitrate and nitrite concentrations in the treated water are kept below the maximum admissible values. Figure 3 shows the measured concentrations of nitrate, nitrite, and DOC during the denitrification process subjected to different inlet C/N ratios. The system was first operated to steady state using an inlet C/N ratio of 1.55. During this initial transient period (approximately 30 hours) both nitrate and nitrite accumulate, after which the concentrations of both pollutants in the treated water drop to values below the maximum admissible ones. The purpose of this preliminary experiment was to simulate the startup of the water treatment plant.
Figure 3: Measured nitrate, nitrite and DOC concentration histories in the outlet stream during the denitrification process subjected to different inlet C/N ratios (whole run).
Figure 4: Measured nitrate, nitrite and DOC concentration histories in the outlet stream during the denitrification process subjected to a continuous cycling of the inlet C/N ratio value between 1.29 and 1.39. Then, several tunings of the inlet C/N ratio value were performed to determine the optimum operating value and assess the responsiveness of the system. The C/N ratio was first decreased to 1.29; the system responded quickly to the imposed step change in the inlet C/N ratio (Figure 4), and both nitrate and nitrite concentrations increased. Under these new operating conditions, the treated water did not meet the quality requirements of drinking water. The nitrite concentration was above 0.1 mg NO2-/L, although the nitrate concentration was below the maximum admissible value. The nitrite accumulation was caused by the limitation of carbon due to the low C/N ratio used. The inlet C/N ratio was then increased from 1.29 to 1.39. Again, the system responded quickly and reduced the nitrite concentration below 0.1 mg NO2-/L. Finally, the system was subjected to a continuous cycling of the inlet C/N ratio value between 1.29 and 1.39 to test its responsiveness. The results confirm that the optimum inlet C/N ratio value that avoids nitrate and nitrite accumulation is in the range 1.3 < C/N < 1.4. This C/N value is consistent with the values obtained in continuous tests using a pure denitrifying culture and synthetic medium (Barreiros et al., 1998). It is roughly 30% larger than the value calculated according to the stoichiometry of the dissimilative reduction reaction of nitrate with acetate as carbon source, and is very close to the value of 1.4 predicted by the empirical equation proposed by Mateju et al. (1992), which also takes into account the amount of carbon used for cell synthesis.
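The feedforward law itself reduces to a simple ratio calculation. The sketch below illustrates the idea (it is not the authors' LabVIEW implementation): the carbon-source feed rate is set so that the inlet carbon-to-nitrate ratio equals the chosen set-point; the variable names, units and default dosing concentration are hypothetical.

```python
def carbon_feed_rate(q_water, c_nitrate, cn_setpoint=1.39, c_source=10_000.0):
    """Feedforward dosing of the carbon source.

    q_water     -- contaminated-water feed rate (L/h)
    c_nitrate   -- measured inlet nitrate concentration (mg NO3-/L)
    cn_setpoint -- target inlet C/N ratio (mg C per mg NO3-)
    c_source    -- carbon concentration of the dosing solution (mg C/L)
    Returns the required carbon-solution feed rate (L/h).
    """
    carbon_demand = cn_setpoint * c_nitrate * q_water  # mg C/h
    return carbon_demand / c_source
```

Because the nitrate measurement lags by a few minutes, such a law relies on the influent disturbances being slow relative to the measurement delay, as discussed above.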
4. Conclusions Online monitoring and control of the biological denitrification process in a cell recycle membrane reactor has been developed and implemented at laboratory scale. The system has been tested with real groundwater contaminated with nitrate. The results presented in this study show that the C/N ratio is the key parameter to guarantee an efficient denitrification process. A simple feedforward control strategy that adjusts the feed rate of the carbon source to maintain an inlet C/N ratio value of 1.39 is effective at reducing both nitrate and nitrite concentrations in the treated water below the maximum admissible values. Moreover, this control strategy based on the C/N ratio is easy to implement in a water treatment plant and does not increase the complexity of its operation at industrial scale. Acknowledgement. Financial support for this work has been provided by Fundação para a Ciência e Tecnologia under contract Praxis XXI 3/3.1/CEG/2600/95.
5. References
Barreiros, A.M., C.M. Rodrigues, J.G. Crespo, M.A.M. Reis, 1998, Bioprocess Eng. 18, 297.
Blaszczyk, M., M. Przytocka-Jusiak, U. Kruszewska, R. Mycielski, 1981, Acta Microbiol. Polon. 30, 49.
Bouchard, D.C., M.K. Williams, R.Y. Surampalli, 1992, J. AWWA 84, 85.
Bouwer, E.J., P.B. Crowe, 1988, J. AWWA 80, 82.
Constantin, H., M. Fick, 1997, Water Res. 31, 583.
Her, J.J., J.S. Huang, 1995, Biores. Tech. 54, 45.
Liessens, J., R. Germonpre, S. Beernaert, W. Verstraete, 1993a, J. AWWA 85, 144.
Liessens, J., R. Germonpre, I. Kersters, S. Beernaert, W. Verstraete, 1993b, J. AWWA 85, 155.
Mateju, V., S. Cizinska, J. Krejci, T. Janoch, 1992, Enzyme Microb. Tech. 14, 170.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
The Role of CAPE in the Development of Pharmaceutical Products Daniel J. Horner, PhD, BEng, AIChemE and Parminder S. Bansal, BEng, CEng, MIChemE AstraZeneca R&D Charnwood, Bakewell Road, Loughborough, Leicestershire, LE11 5RH, E-mail: [email protected], [email protected]
Abstract One of the key challenges facing pharmaceutical companies is to reduce the time to market and cost of goods of their products whilst continuing to comply with, and exceed, stringent regulatory requirements. With the ever-increasing need for shorter drug development periods, more efficient tools and methods of working are required. The role of the Process Engineer in the development of a candidate drug is to actively seek a robust and scalable process through the application of experimental and theoretical process/chemical engineering science. They are expected to bring a long-term view to the process development strategy that ensures SHE issues are raised and resolved, bulk drug capacity requirements are achieved and the most appropriate innovative technologies exploited. In this paper, a variety of CAPE techniques employed at AstraZeneca in order to generate a better understanding of the chemistry and scale-up challenges for our products will be discussed. The use of these tools across the various functions represented on development projects allows for close collaboration and consistent methods of working.
1. Introduction Batch process modelling techniques are being utilised during the drug development process, allowing route selection, equipment requirements, manufacturability, siting and SHE issues to be identified and resolved. The models are highly flexible and can be used to simulate scale-up from laboratory through pilot plant and into full-scale manufacture. The process can be optimised during development through the use of these tools, providing minimal risk to the product along with significant time and cost benefits. During the life cycle of a project a number of different campaigns are undertaken. Scale-up and scale-down issues are arguably the most important areas of work for process engineers. An understanding of the scientific fundamentals that affect scale-up (mass and heat transfer phenomena, heterogeneous reactions, crystallisation, isolation and drying, mixing/agitation, reaction kinetics and safety) is critical to successful production. Traditionally, these have been the remit of the Process Engineer, resulting in an exclusively engineering-focused solution. The use of CAPE tools in conjunction with experimental work promotes a collaborative approach, improving the interface between science and engineering to find the best technical solution to a problem.
The development of powerful dynamic simulation packages has greatly increased the understanding of processes and allows for improved manufacturability and equipment specification. With candidate drugs becoming increasingly complex and in limited supply during development, a general lack of information exists. The use of property prediction tools is important to ensure the dynamic models are as accurate as possible. The use of process control software to control scale-down reactors within our process engineering laboratory provides an important link between laboratory preparations, the pilot plant and, ultimately, full-scale manufacture. It ensures consistent production methods and highlights potential manufacturing issues, while large amounts of data can be collated quickly, providing invaluable scale-up information for later accommodations. The combination of laboratory experimentation and CAPE technology is providing AstraZeneca with the means to reduce costs and development time, whilst producing optimised and robust processes for our products. A number of real-life case studies are detailed below which demonstrate the effectiveness of CAPE tools in the development of pharmaceutical products.
2. Case Study 1 - Chlorination / Oxidation of Compound X
(Reaction scheme: compound X, a sulphide R-S-R, is converted to the sulphonyl chloride X-SulphonylCl with chlorine gas in acetic acid/water.)
This is a step in a process that was recently developed at AstraZeneca. The original process description specified 10 mole equivalents of chlorine to be used. A reaction mechanism was postulated suggesting that only 3 mole equivalents were required; the rest was effectively wasted. Attention was therefore focused on improving the mass transfer of the chlorine. Laboratory experiments showed that the reaction kinetics are extremely fast and the reaction highly exothermic. Scale-up mixing utilities, provided with DynoChem, were used to derive the agitation rates required in the laboratory to ensure the gas was fully dispersed. A model was developed using DynoChem to enable accurate predictions of scale-up to be made. To ensure its validity, experimental data was also fed into the model, from which the necessary scale-up parameters could be derived. RC1 reaction calorimetry data was used to measure the exotherm during the reaction, and experiments were performed to assess the saturation concentration of chlorine in the solvent mixture. Laboratory and plant-scale temperature trials were carried out and the data used to derive actual heat transfer coefficients. The use of DynoChem allowed the data to be processed efficiently. The model predicted that an excess of chlorine was not required in the reaction if the addition rate was controlled. This finding was confirmed in the laboratory, thereby significantly reducing the raw material and plant scrubber requirements. The batch temperature is limited to less than 15°C, and upon scale-up the model showed the
reaction to be heat transfer limited, as opposed to mass transfer limited, and so further work was carried out to investigate the effect of different jacket temperatures. From a knowledge of the plant vessel heat transfer characteristics and extrapolation of laboratory data, a jacket temperature was defined that maximised heat transfer without risk of freezing the batch contents at the wall of the vessel. Further scenarios can be modelled with improved mass and heat transfer, enabling the process engineers to confidently define the equipment requirements for future campaigns.
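The competition between gas-liquid mass transfer, reaction and jacket heat removal described here can be illustrated with a generic semi-batch model. The sketch below is not the DynoChem model; it is a minimal gas-liquid absorption/reaction/heat balance, and every parameter value is a hypothetical placeholder rather than plant data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative semi-batch gas-liquid reaction sketch (not the DynoChem model):
# chlorine is absorbed at rate kLa*(c_sat - c_Cl2) and consumed by a fast
# liquid-phase reaction; the jacket removes the reaction heat.
kLa, c_sat = 0.01, 6.0            # 1/s, mol/m^3 (gas-liquid transfer, placeholder)
k_rxn = 1.0                       # m^3/(mol*s), fast reaction (placeholder)
UA, T_jacket = 2000.0, 278.0      # W/K, K (placeholder)
V, rho_cp = 1.0, 4.0e6            # m^3, J/(m^3*K)
dH = -150e3                       # J/mol, exothermic (placeholder)

def rhs(t, y):
    c_cl2, c_sub, T = y
    absorption = kLa * (c_sat - c_cl2)        # mol/(m^3*s), mass-transfer limited
    reaction = k_rxn * c_cl2 * c_sub          # mol/(m^3*s)
    dT = (-dH * reaction * V - UA * (T - T_jacket)) / (rho_cp * V)
    return [absorption - reaction, -reaction, dT]

sol = solve_ivp(rhs, (0.0, 3 * 3600), [0.0, 300.0, 288.0], max_step=10.0)
print(f"final substrate: {sol.y[1, -1]:.1f} mol/m^3, peak T: {sol.y[2].max():.1f} K")
```

Increasing kLa or UA in such a model immediately shows whether the batch time is set by gas dispersion or by heat removal, which is the kind of question answered by Figures 1 and 2.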
Figure 1. Typical model output from DynoChem plotted in Microsoft Excel, showing HCl, batch temperature, starting material, intermediates and product versus time (min). Although this graph is fairly complicated, it serves to show the vast amount of information that can be gleaned from a single model. It is possible to make crude predictions for the depletion of reactants and formation of products. The reaction at this scale shows a peak temperature of 14.7°C and a chlorine addition time of three hours, which is in very close agreement with that experienced on plant. Figure 2 shows the effect of improving the heat and mass transfer in this system. This allows the process engineer to quickly focus attention on the important parameters of the system. The figures used here are arbitrary, but show that by improving mass and heat transfer, significant reductions in reaction time can result. Thus, the process engineer can focus on the key parameters to improve the process.
Figure 2. Chlorination model with improved heat/mass transfer.
3. Case Study 2 - Modelling distillation in work-up of Compound X Distillation is a widely used unit operation in pharmaceutical manufacture to remove components from a system to an acceptable level. The process engineer is able to assist in the selection of the optimum solvent system, providing vapour-liquid equilibrium information and predictions of the efficiency of the separation. A recent example highlighted the effectiveness of CAPE tools in the design and prediction of distillation performance. The solvent used for the chlorination of compound X is acetic acid. However, cooling crystallisation from acetic acid resulted in poor physical form, leading to problematic isolation and drying. Alternative solvents were investigated and crystallisations from toluene were found to provide excellent physical form. Due to the temperature sensitivity of the product to degradation, a reduced-pressure distillation was required to perform the solvent swap. The toluene-acetic acid system is reasonably well understood and a plethora of data has been published. The DETHERM database (www.dechema.de) is a valuable process engineering tool and a source of credible physical property data. However, if published data is not available, the principles described below can be applied to almost any system. Vapour-liquid equilibrium (VLE) data was modelled using SMSWin and Aspen Properties and, in this case, validated against published data. Property prediction software (ProPred) was used to model the properties of compound X using group contribution methods, allowing the effect of the compound upon the vapour-liquid equilibrium to be investigated.
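As a minimal illustration of the kind of VLE calculation behind such models, the sketch below computes an ideal-solution (Raoult's law) bubble pressure for a toluene-acetic acid mixture. It deliberately ignores the strong non-ideality of this system (acetic acid dimerises in the vapour phase, which is why activity-coefficient models and validated data are used in practice), and the Antoine constants are illustrative placeholders rather than validated values.

```python
# Illustrative Antoine constants, log10(P/mmHg) = A - B/(C + T/degC);
# placeholder values for demonstration only, not validated data.
ANTOINE = {"toluene": (6.95, 1344.8, 219.5), "acetic_acid": (7.38, 1533.3, 222.3)}

def psat(component, t_c):
    """Pure-component vapour pressure (mmHg) from the Antoine equation."""
    a, b, c = ANTOINE[component]
    return 10 ** (a - b / (c + t_c))

def bubble_pressure(x_toluene, t_c):
    """Raoult's-law bubble pressure and vapour composition (ideal sketch)."""
    p = x_toluene * psat("toluene", t_c) + (1 - x_toluene) * psat("acetic_acid", t_c)
    y_toluene = x_toluene * psat("toluene", t_c) / p
    return p, y_toluene

p, y = bubble_pressure(0.5, 60.0)
print(f"P = {p:.0f} mmHg, y_toluene = {y:.2f}")
```

A reduced operating pressure simply means searching for the temperature at which the bubble pressure equals the vacuum set-point, keeping the pot below the degradation temperature.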
The VLE data was then fed into the batch distillation modellers available at AstraZeneca (SMSWin and Aspen Batch Frac) to predict the composition of the distillate over time.
Figure 3. Schematic Representation of Batch Distillation Model (Aspen Plus). This particular model allows up to three charges to be made to the still, which in this case comprise a charge to define the initial pot composition and two intermediate toluene charges. The model is set up to distil to a pre-defined pot volume in between charges. For the compound X system it was found that the concentration of acetic acid in the pot had fallen to an acceptable level following the third distillation. Although this is a relatively simple model, it allows the process engineer to screen for suitable solvents for all solvent swaps, as well as provide the process chemists with optimum operating conditions. Thus, laboratory development time can be focused on other issues.
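The put-and-take logic of such a solvent swap can be expressed as a simple mass balance. The sketch below is a hypothetical simplification, not the Aspen Batch Frac model: each distillation removes a distillate whose acid enrichment is an assumed constant, whereas a rigorous model would take the distillate composition from the VLE data at every step.

```python
def solvent_swap(pot_acid, pot_total, charge, keep_frac, n_cycles, enrichment=2.0):
    """Track acetic acid through repeated distil/charge cycles (kg basis).

    keep_frac  -- fraction of the pot mass left after each distillation
    enrichment -- assumed ratio of acid fraction in distillate to that in the
                  pot; a rigorous model would take this from VLE data instead
    """
    for cycle in range(1, n_cycles + 1):
        removed = pot_total * (1.0 - keep_frac)          # distillate taken off
        x_pot = pot_acid / pot_total
        x_dist = min(1.0, enrichment * x_pot)            # crude distillate model
        pot_acid = max(pot_acid - x_dist * removed, 0.0)
        pot_total = pot_total * keep_frac + charge       # toluene put-and-take
        print(f"after distillation {cycle}: acid mass fraction = "
              f"{pot_acid / pot_total:.3f}")

solvent_swap(pot_acid=80.0, pot_total=100.0, charge=50.0,
             keep_frac=0.5, n_cycles=3)
```

Running the loop for three cycles mirrors the three-distillation scheme of Figure 3, showing how quickly the pot acid fraction falls for a given charge size and pot volume target.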
4. Case Study 3 - Batch Simulation Aspen Batch Plus is a tool used predominantly by Process Engineers to store process data from the various projects currently being developed. Aspen Batch Plus enables the Process Engineer to compile all the data pertinent to the process in a central location, thereby generating a model/simulation of the process. At AstraZeneca, simulations are developed early in the project life-cycle with a view that they will grow as more information becomes available. This tool aids the process of technical transfer, that is, the transfer of all process information from one facility/site to another. The tool allows simple "scale-up" or capacity calculations to be performed and generates a complete mass balance of the chemistry under consideration. Aspen Batch Plus is also used as a scheduling tool, in order to identify potential bottlenecks. An Aspen Batch Plus model was developed for a product that was recently transferred to full-scale manufacture. The process was to be manufactured in a new facility for which the design was copied from an existing plant. The cycle time predictions from the model showed that this design of plant was not capable of producing the required amount of product, and further isolation and drying equipment was necessary. The model was also used to predict VOC emission data used for initial abatement design. Outputs from the model were also used to describe the process flow, assist generation of batch records and estimate effluent stream composition.
Other Aspen Batch Plus models developed earlier in the project life-cycle have been used to identify potential throughput issues in the technology transfer plant, to provide an estimate of the amount of iodo-contaminated effluent from a process, and to compare manufacturing routes so that potential scale-up problems can be considered in decision making. The models are also used to evaluate potential manufacturing facilities, not only for full-scale but also for development campaigns.
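A toy illustration of the kind of cycle-time and capacity arithmetic that such a simulation automates is shown below; the stage data are made up for illustration and are not an Aspen Batch Plus output.

```python
# Hypothetical stage occupation times (h) per batch; made-up data for
# illustration, not an Aspen Batch Plus output.
stages = {"reactor": 18.0, "crystalliser": 12.0, "filter/dryer": 30.0}
batch_size_kg = 250.0
hours_per_year = 8000.0

# With one unit per stage running overlapped campaigns, the slowest stage
# sets the cycle time and hence identifies the plant bottleneck.
bottleneck, cycle_time = max(stages.items(), key=lambda kv: kv[1])
annual_capacity = batch_size_kg * hours_per_year / cycle_time
print(f"bottleneck: {bottleneck}, cycle time {cycle_time} h, "
      f"capacity ~{annual_capacity / 1000:.0f} t/yr")
```

In the case study above, exactly this kind of comparison between predicted cycle time and required output revealed that additional isolation and drying equipment was needed.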
5. Conclusions This paper sets out to show, within the vagaries of the pharmaceutical industry, the CAPE tools that can be used in process development. There are a number of issues to be resolved when developing a process: not just process-specific ones, but business (economic) drivers, safety issues, environmental concerns, moral and ethical issues and regulatory requirements. In light of these varied challenges, we have found that no one CAPE package covers every aspect of development and that the combination of a number of specific tools suits the requirements of the pharmaceutical industry. The main theme of this paper is that CAPE tools need to be used in conjunction with more "primitive", but no less valuable, tools such as laboratory work. It is impossible to fully understand and appreciate the issues and challenges of a process by computational modelling alone. The model provides a valuable insight into the process and identifies the parameters that require more detailed study. This increased understanding means improved process development and more robust processes, which will help to deliver products to the market as quickly and cost-effectively as possible. The development of a pharmaceutical process can be key to its success, especially during the early stages of a project. When processes are not fully developed and understood, problems are encountered in technology transfer, often resulting in "fire-fighting" as issues arise. Another important benefit is the potential cost saving during process development, thereby allowing more products to be developed for less resource. The identification of issues before campaign manufacture also reduces lost time on plant. CAPE tools have enabled the process engineers at AstraZeneca to improve the understanding of the processes being developed. They have also aided cross-functional collaboration between process engineers and process chemists, in particular when considering transfer of a process from the laboratory to the Pilot Plant. Ultimately this enables us to develop cost-effective, robust processes with minimal SHE impact.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Developing Phenomena Models from Experimental Data Niels Rode Kristensen^a, Henrik Madsen^b and Sten Bay Jørgensen^a, ^a Department of Chemical Engineering, ^b Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark
Abstract A systematic approach for developing phenomena models from experimental data is presented. The approach is based on integrated application of stochastic differential equation (SDE) modelling and multivariate nonparametric regression, and it is shown how these techniques can be used to uncover unknown functionality behind various phenomena in first engineering principles models using experimental data. The proposed modelling approach has significant application potential, e.g. for determining unknown reaction kinetics in both chemical and biological processes. To illustrate the performance of the approach, a case study is presented, which shows how an appropriate phenomena model for the growth rate of biomass in a fed-batch bioreactor can be inferred from data.
1. Introduction For most chemical and biological processes first principles engineering methods can be applied to formulate balance equations that essentially provide the skeleton of an ordinary differential equation (ODE) model for such a process. What often remains to be determined, however, are functional relations in the constitutive equations for phenomena such as reaction rates and heat and mass transfer rates. These phenomena models are often difficult to determine due to the fact that finding a parametric expression with an appropriate structure to match the available experimental data is essentially a trial-and-error procedure with limited guidance and therefore potentially time-consuming. In the present paper a more systematic procedure is proposed. The key idea of this procedure is to exploit the close connection between ODE models and SDE models to develop a methodology for determining the proper structure of the functional relations directly from the experimental data. The new procedure more specifically allows important trends and dependencies to be visually determined without making any prior assumptions and in turn allows appropriate parametric expressions to be inferred. The proposed procedure is a tailored application of the grey-box modelling approach to process model development proposed by Kristensen et al. (2002b), within which specific model deficiencies can be pinpointed and their structural origin uncovered to improve the model. The remainder of the paper is organized as follows: In Section 2 the details of the proposed procedure are outlined; in Section 3 a case study illustrating the performance of the procedure is presented and in Section 4 the conclusions of the paper are given.
(Flow diagram: first engineering principles feed an ODE model formulation, which is translated into an SDE model; state estimation and nonparametric regression, both driven by experimental data, then yield an estimate of the functional relation.)
Figure 1. The proposed procedure for developing phenomena models. The boxes in grey illustrate tasks and the boxes in white illustrate inputs and outputs.
2. Methodology The proposed procedure is shown in Figure 1 and consists of five basic steps. First a standard ODE model is derived from first engineering principles and the constitutive equations containing unknown functional relations are identified. The ODE model is then translated into a stochastic state space model consisting of a set of SDE's describing the dynamics of the system in continuous time and a set of discrete time measurement equations signifying how the available experimental data was obtained. A major difference between ODE's and SDE's is the inclusion of a stochastic term in the latter, which allows uncertainty to be accommodated, and which, if the constitutive equations of interest are reformulated as additional state equations, allows estimates of the corresponding state variables to be computed from the experimental data. The specific approach used for this purpose involves parameter estimation and subsequent state estimation by means of methods based on the extended Kalman filter (EKF). By subsequently applying methods for multivariate nonparametric regression to appropriate subsets of the state estimates, visual determination of important trends and dependencies is facilitated, in turn allowing appropriate parametric expressions for the unknown functional relations in the constitutive equations to be inferred. More details on the individual steps of the proposed procedure are given in the following.
2.1. ODE model formulation In the first step of the procedure, a standard ODE model is derived and the constitutive equations containing unknown functional relations are identified. Deriving an ODE model from first engineering principles is a standard discipline for most chemical and process systems engineers and in the general case gives rise to a model of the following type:

dx_t/dt = f(x_t, u_t, r_t, t, θ)   (1)

where t ∈ ℝ is time, x_t ∈ ℝⁿ is a vector of balanced quantities or state variables, u_t ∈ ℝᵐ is a vector of input variables and θ ∈ ℝᵖ is a vector of possibly unknown parameters, and where f(·) ∈ ℝⁿ is a nonlinear function. In addition to (1) a number of constitutive equations for various phenomena are often needed, i.e. equations of the following type:

r_t = φ(x_t, u_t, θ)   (2)
where r_t is a given phenomenon and φ(·) ∈ ℝ is the nonlinear function of the state and input variables needed to describe it. This function is, however, often unknown and must therefore somehow be determined from experimental data. In the context of the systematic procedure proposed in the present paper, the first step towards determining the proper structure of φ(·) is to assume that this function, and hence r_t, is constant.
2.2. SDE model formulation In the second step of the procedure the ODE model is translated into a stochastic state space model with r_t as an additional state variable. This is straightforward, as it can simply be done by replacing the ODE's with SDE's and adding a set of discrete time measurement equations, which yields a model of the following type:

dx*_t = f*(x*_t, u_t, t, θ) dt + σ*(u_t, t, θ) dω*_t   (3)
y_k = h(x*_k, u_k, t_k, θ) + e_k   (4)

where t ∈ ℝ is time, x*_t = [x_t^T r_t]^T ∈ ℝⁿ⁺¹ is a vector of state variables, u_t ∈ ℝᵐ is a vector of input variables, y_k ∈ ℝˡ is a vector of output variables, θ ∈ ℝᵖ is a vector of possibly unknown parameters, f*(·) ∈ ℝⁿ⁺¹, σ*(·) ∈ ℝ⁽ⁿ⁺¹⁾ˣ⁽ⁿ⁺¹⁾ and h(·) ∈ ℝˡ are nonlinear functions, {ω*_t} is an (n+1)-dimensional standard Wiener process and {e_k} is an l-dimensional white noise process with e_k ∈ N(0, S(u_k, t_k, θ)). The first term on the right-hand side of the SDE's in (3) is called the drift term and is a deterministic term, which can be derived from the term on the right-hand side of (1) as follows:

f*(x*_t, u_t, t, θ) = [f(x_t, u_t, r_t, t, θ); 0]   (5)

where the zero is due to the assumption of constant r_t. The second term on the right-hand side of the SDE's in (3) is called the diffusion term. This is a stochastic term included to accommodate uncertainty due to e.g. approximation errors or unmodelled phenomena and is therefore the key to subsequently determining the proper structure of φ(·). A more detailed account of the theory and application of SDE's is given by Øksendal (1998).
2.3. Parameter estimation In the third step of the proposed procedure the unknown parameters of the model in (3)-(4) are estimated from available experimental data, i.e. data in the form of a sequence of measurements y_0, y_1, ..., y_k, ..., y_N. The solution to (3) is a Markov process, and an estimation scheme based on probabilistic methods, e.g. maximum likelihood (ML) or maximum a posteriori (MAP), can therefore be applied. A detailed account of one such scheme, which is based on the EKF, is given by Kristensen et al. (2002a).
2.4. State estimation In the fourth step of the procedure, state estimates are computed to facilitate determination of the proper structure of φ(·) by means of subsequent multivariate nonparametric regression. Using the model in (3)-(4) and the parameter estimates obtained in the previous step, state estimates x̂*_{k|k}, k = 0, ..., N, can be obtained by applying the EKF once again using the same experimental data. In particular, since r_t is included as an additional
state variable in this model, estimates r̂_{k|k}, k = 0, ..., N, can be obtained, which in turn facilitates application of multivariate nonparametric regression to provide estimates of possible functional relations between r_t and the original state and input variables.
2.5. Nonparametric modelling In the fifth step of the procedure the state estimates computed in the previous step are used to determine the proper structure of φ(·) by means of multivariate nonparametric regression. Several such techniques are available, but in the context of the proposed procedure, additive models (Hastie and Tibshirani, 1990) are preferred, because fitting such models circumvents the curse of dimensionality, which tends to render nonparametric regression infeasible in higher dimensions, and because results obtained with such models are particularly easy to visualize, which is important. Additive models are nonparametric extensions of linear regression models and are fitted by using a training data set of observations of several predictor variables X_1, ..., X_n and a single response variable Y to compute a smoothed estimate of the response variable for a given set of values of the predictor variables. This is done by assuming that the contributions from each of the predictor variables are additive and can be fitted nonparametrically using the backfitting algorithm (Hastie and Tibshirani, 1990). The assumption of additive contributions does not necessarily limit the ability of additive models to reveal non-additive functional relations involving more than one predictor variable, since, by proper processing of the training data set, functions of more than one predictor variable, e.g. X_1 X_2, can be included as predictor variables as well (Hastie and Tibshirani, 1990). Using additive models, the variation in r̂_{k|k}, k = 0, ..., N, can be decomposed into the variation that can be attributed to each of the original state and input variables, and the result can be visualized by means of partial dependence plots with associated bootstrap confidence intervals (Hastie et al., 2001). In this manner, it may be possible to reveal the true structure of φ(·) and subsequently determine an appropriate parametric expression for the revealed functional relation. Remark. Once an appropriate parametric expression for the unknown functional relation has been determined, the parameters of this expression should be estimated from the experimental data and the quality of the resulting model should subsequently be evaluated by means of cross-validation. A discussion of methods for evaluating the quality of a model with respect to its intended application is given by Kristensen et al. (2002b). Remark. A key advantage of the proposed procedure is that functional relations involving unmeasured variables can easily be determined as well if certain observability conditions are fulfilled, e.g. functional relations between reaction rates, which can seldom be measured directly, and concentrations of various species, which may also be unmeasurable.
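A minimal sketch of the backfitting idea follows (not the authors' implementation): a crude Gaussian-kernel running mean stands in for the spline or local-regression smoothers normally used, but the iteration structure is the standard one.

```python
import numpy as np

def smooth(x, y, width=0.1):
    """Kernel-weighted running mean of y against x (a crude smoother)."""
    h = width * (x.max() - x.min())
    w = np.exp(-0.5 * ((x[None, :] - x[:, None]) / h) ** 2)
    return w @ y / w.sum(axis=1)

def backfit(X, y, n_iter=20):
    """Fit an additive model y ~ alpha + sum_j f_j(X[:, j]) by backfitting."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove everything except the j-th component.
            partial = y - alpha - f.sum(axis=1) + f[:, j]
            f[:, j] = smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()   # centre each f_j for identifiability
    return alpha, f   # f[:, j] is the partial dependence on predictor j
```

Applied with X holding the state estimates and y = r̂_{k|k}, the columns of f are exactly the curves shown in partial dependence plots such as Figure 2 below.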
3. Case Study: Determining the Growth Rate in a Fed-batch Bioreactor To illustrate the performance of the proposed procedure, a simple simulation example is considered in the following. The process considered is a fed-batch bioreactor, where the true model used to simulate the process is given as follows:
dX/dt = μ(S)X − FX/V,   dS/dt = −μ(S)X/Y + F(S_F − S)/V,   dV/dt = F   (6)
where X is the biomass concentration, S is the substrate concentration, V is the volume of the reactor, F is the feed flow rate, Y is the yield coefficient of biomass, S_F is the feed concentration of substrate, and μ(S) is the biomass growth rate, which is characterized by Monod kinetics and substrate inhibition, i.e.:

μ(S) = μ_max S / (K₂S² + S + K₁)   (7)
where μ_max, K₁ and K₂ are kinetic parameters. Simulated data sets from two batch runs are generated by perturbing the feed flow rate along a pre-determined trajectory and subsequently adding Gaussian measurement noise to the appropriate variables (see below). Using these data sets and starting from preliminary balance equations, where the biomass growth rate is assumed to be unknown, it is illustrated how the proposed procedure can be used to visually determine the proper structure of μ(S). In the context of the first step of the proposed procedure, an ODE model corresponding to (6) has thus been formulated and the constitutive equation for μ(S) has been identified as containing an unknown functional relation. The first step towards determining this relation is then to assume that μ(S) is constant, i.e. μ(S) = μ, and translate the ODE model into a stochastic state space model with μ as an additional state variable, i.e.:
d[X_t; S_t; V_t; μ_t] = [μ_t X_t − F_t X_t/V_t; −μ_t X_t/Y + F_t(S_F − S_t)/V_t; F_t; 0] dt + diag(σ₁₁, σ₂₂, σ₃₃, σ₄₄) dω_t   (8)

y_k = [X_k; S_k; V_k] + e_k,   e_k ∈ N(0, S),   S = diag(S₁₁, S₂₂, S₃₃)   (9)
As the next step, the unknown parameters of this model are estimated using the EKF-based estimation scheme presented by Kristensen et al. (2002a) and the data sets mentioned above. Using the resulting model and the same data sets, state estimates X̂_{k|k}, Ŝ_{k|k}, V̂_{k|k}, μ̂_{k|k}, k = 0, ..., N, are then obtained by applying the EKF once again, and an additive model is fitted to reveal the true structure of the function describing μ by means of estimates of functional relations between μ and the state and input variables. It is reasonable to assume that μ does not depend on V and F, so only functional relations between μ̂_{k|k} and X̂_{k|k} and Ŝ_{k|k} are estimated, giving the results shown in Figure 2. These plots indicate that μ̂_{k|k} does not depend on X̂_{k|k}, but is highly dependent on Ŝ_{k|k}, which in turn suggests replacing the assumption of constant μ with an assumption of μ being a function of S that complies with the revealed functional relation. To a skilled biotechnologist this relation is a clear indication of Monod kinetics and substrate inhibition, and immediately suggests replacing μ with the true function μ(S) in (7). Remark. The case study presented here illustrates how the proposed procedure can be used to determine the proper structure of an unknown functional relation directly from experimental data by providing information about which variables contribute to the observed variation and how these contributions may be characterized parametrically. Once a parametric expression has been determined, the parameters can be estimated and the
Figure 2. Partial dependence plots of μ̂_{k|k} vs. X̂_{k|k} (a) and Ŝ_{k|k} (b) (solid lines: estimates; dotted lines: 95% bootstrap confidence intervals).
quality of the resulting model evaluated by means of cross-validation. The proposed procedure thus facilitates systematic development of phenomena models from data.
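The mechanics of the case study can be mimicked in a few lines. In the sketch below, all parameter values are illustrative (not those of the paper), and a crude finite-difference inversion of the biomass balance stands in for the EKF-based state estimation; plotting the recovered μ estimates against S reproduces the Monod-with-inhibition shape of (7).

```python
import numpy as np

mu_max, K1, K2, Y, S_F = 1.0, 0.03, 0.5, 0.5, 10.0   # illustrative values

def mu(S):                       # Monod kinetics with substrate inhibition, eq. (7)
    return mu_max * S / (K2 * S**2 + S + K1)

dt, N = 0.01, 2000
X, S, V = 1.0, 0.5, 1.0
F = 0.1                          # constant feed for simplicity
traj = []
for _ in range(N):               # Euler integration of the true model (6)
    dX = mu(S) * X - F * X / V
    dS = -mu(S) * X / Y + F * (S_F - S) / V
    X, S, V = X + dt * dX, S + dt * dS, V + dt * F
    traj.append((X, S, V))
traj = np.array(traj)

# Crude stand-in for EKF state estimation: invert the biomass balance
# mu = (dX/dt)/X + F/V using finite differences on noisy X measurements.
Xm = traj[:, 0] * (1 + 0.01 * np.random.randn(N))
mu_est = np.gradient(Xm, dt) / Xm + F / traj[:, 2]
# Plotting mu_est against traj[:, 1] (S) reveals the shape of eq. (7).
```

Unlike this naive inversion, the EKF-based approach of the paper handles measurement noise statistically and also works when some states are unmeasured, which is precisely its advantage.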
4. Conclusion A systematic approach for developing phenomena models for chemical and biological processes from experimental data has been presented. The approach is based on integrated application of stochastic differential equation (SDE) modelling and multivariate nonparametric regression and can be used to uncover unknown functionality behind various phenomena in first engineering principles models using data. The proposed modelling approach has significant application potential, e.g. for determining unknown reaction kinetics in both chemical and biological processes, and a key advantage in this regard is that functional relations involving unmeasured variables can easily be determined as well if certain observability conditions are fulfilled.
5. References
Hastie, T.J. and Tibshirani, R.J. (1990). Generalized Additive Models. Chapman & Hall, London, England.
Hastie, T.J.; Tibshirani, R.J. and Friedman, J. (2001). The Elements of Statistical Learning - Data Mining, Inference and Prediction. Springer-Verlag, New York, USA.
Kristensen, N.R.; Madsen, H. and Jørgensen, S.B. (2002a). Parameter Estimation in Stochastic Grey-Box Models. Submitted for publication.
Kristensen, N.R.; Madsen, H. and Jørgensen, S.B. (2002b). A Method for Systematic Improvement of Stochastic Grey-Box Models. Submitted for publication.
Øksendal, B. (1998). Stochastic Differential Equations - An Introduction with Applications. Springer-Verlag, Berlin, Germany, fifth edition.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
Multi-Site Capacity Planning for the Pharmaceutical Industry Using Mathematical Programming Aaron A. Levis and Lazaros G. Papageorgiou* Centre for Process Systems Engineering, Department of Chemical Engineering, UCL (University College London), Torrington Place, London WC1E 7JE, U.K.
Abstract This paper presents a systematic mathematical programming approach for long-term, multi-site capacity planning under uncertainty in the pharmaceutical industry. The proposed mathematical model extends the previous work of Papageorgiou et al. (2001) by determining both the product portfolio and the multi-site capacity plan in the face of uncertain clinical trials outcomes. Two distinct decision-making levels are identified, namely the strategic level (here-and-now decisions) and the operational level (wait-and-see decisions). The overall problem is formulated as a two-stage, multi-scenario, mixed-integer linear programming (MILP) model. A hierarchical algorithm is then proposed in order to reduce the computational effort needed for the solution of the resulting large-scale MILP problem. The applicability of the proposed methodology is demonstrated by two illustrative examples.
1. Introduction Every year the pharmaceutical industry spends a large amount of research funds developing new chemical entities (NCEs). This process involves producing and screening vast libraries of chemical compounds before a limited number of promising NCEs enters the clinical trials phase. Despite the costly R&D effort, only a few of the initial chemical compounds actually become marketed drugs, depending on the clinical trials outcomes, and it is estimated that it may take eight years to develop a new product (Papageorgiou et al., 2001). Pharmaceutical companies are constantly faced with the question of how best to use the limited resources available to obtain the highest possible profit, and the decisions involved are usually taken in the presence of significant uncertainty. In order to address the combinatorial nature of the R&D product pipeline problem, Subramanian et al. (2000) developed a computing architecture based on mathematical programming and discrete-event system simulation so as to facilitate decision-making for new product development. Integrating the problem of capacity planning and new product development under uncertainty, Rotstein et al. (1999) presented a stochastic capacity planning model incorporating clinical trials uncertainty, using a scenario-based
To whom correspondence should be addressed. Fax: +44 20 7383 2348, Phone: +44 20 7679 2563, E-mail: l.papageorgiou@ucl.ac.uk
approach. Their model, though, is limited to the case of single-site capacity planning. Gatica et al. (2001) developed a stochastic single-site capacity planning model, focusing on the different stages of product development. Maravelias and Grossmann (2001) proposed a multi-period model able to accommodate simultaneously new product development and capacity planning of manufacturing facilities, without, however, considering the company's trading structure. Finally, Papageorgiou et al. (2001) have developed a multi-site, multi-period capacity planning model incorporating the internal trading structure of the company. However, their model assumes a deterministic demand profile for the potential products, without considering the uncertainty of clinical trials outcomes. In this paper, we present a systematic approach for simultaneous optimisation of the product portfolio and multi-site capacity planning in the face of clinical trials uncertainty, while considering the trading structure of the company. A hierarchical algorithm is also proposed for the solution of the resulting large-scale MILP model.
2. Problem Statement There are four main issues that need to be addressed in our problem: product management, capacity management, trading structure, and clinical trials uncertainty. Two different outcomes (Success/Failure) are considered in the clinical trials phase for each potential product. Overall, the multi-site capacity planning problem under uncertainty can be stated as follows. Given: (1) a set of potential products, (2) the probability of success in clinical trials for each product, (3) production rates, fixed and operating costs for each product at each production site, (4) forecasted nominal demand and selling price for each product, (5) a set of potential production sites and products involved, (6) construction lead-times and capital investment costs for each production site, (7) taxation, interest and inflation rates for each location, (8) the trading structure of the company. Determine: the product portfolio (which products from the candidate portfolio to manufacture), the manufacturing network (where to manufacture the selected products), a multi-site investment strategy (what capacity and when to invest in each production site), detailed production plans (how much product to manufacture in each suite at each production site per year), and sales and inventory planning profiles (how much product to sell and how much inventory to maintain), so as to maximise the expected net present value (eNPV).
3. Mathematical Formulation In formulating the detailed mathematical model, we follow the notation of Papageorgiou et al. (2001) while adding a stochastic dimension to the problem in order to account for the uncertain clinical trials outcomes. The time horizon used in our model is discretised into time intervals of equal duration. Startup and shutdown periods are considered to be negligible compared to the duration of each time interval.
The decision variables involved in our problem can be partitioned into two different sets, namely the strategic and the operational decisions. The strategic decisions reflect the decisions that must be made immediately (here-and-now) in the face of significant uncertainty and they include: product selection (binary variables), allocation of products to production sites (binary variables), and capacity investment decisions for the selected production sites (binary variables). Generating all scenarios for p potential products, each one with two outcomes, results in 2^p scenarios. Each individual scenario is a fairly small deterministic problem. The demand and its associated probability for the different outcomes of each product are assumed to be known. If a product fails in the clinical trials, the demand is consequently zero over all remaining time periods. The multi-site investment strategy is common to all possible scenarios present in the second stage. However, due to the different product demand patterns, every scenario has its own characteristic production, inventory and sales profile. The operational decisions reflect the scenario-dependent decisions made upon completion of the clinical trials and resolution of the uncertainty (wait-and-see) and they include: timings of scale-up and qualification runs (binary variables), allocation of products to manufacturing suites (binary variables), detailed production plans at each production site (continuous variables), inventory profiles (continuous variables), and sales profiles at each sales region (continuous variables). Based on the given probabilities of success for each potential product, the problem is then to find the optimal product portfolio and investment decisions together with detailed production and sales plans so as to maximise the eNPV. The eNPV is simply the summation of all scenario NPVs, weighted by their associated probabilities. The derivation of the objective function is similar to the one in Papageorgiou et al. (2001). The overall problem is formulated as a two-stage, multi-scenario mixed integer linear programming (MILP) model.
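For p products with independent Success/Failure outcomes, the 2^p scenarios and their probabilities can be enumerated directly, as in the following sketch (the success probabilities shown are hypothetical):

```python
from itertools import product

def enumerate_scenarios(p_success):
    """Yield (outcome tuple, probability) for all 2^p clinical-trial scenarios."""
    for outcome in product((True, False), repeat=len(p_success)):
        prob = 1.0
        for ok, p in zip(outcome, p_success):
            prob *= p if ok else 1.0 - p
        yield outcome, prob

# Hypothetical success probabilities for three candidate products.
for outcome, prob in enumerate_scenarios([0.7, 0.5, 0.4]):
    print(outcome, round(prob, 3))
```

The probabilities sum to one by construction, and each outcome tuple determines which product demands are set to zero in the corresponding scenario.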
4. Solution Methodology The solution of the above mathematical model proves to be a very demanding task in terms of computational effort due to the inherent complexity and the combinatorial nature of the problem. In this work, we propose a hierarchical algorithm that decouples the different decision-making levels (strategic vs. operational) from each other by employing a suite-aggregate mathematical model formulation. The suite-aggregate formulation derives from the detailed model by simply dropping the manufacturing suite index. Consequently, the capacity investment decisions are now treated as integer variables instead of binary ones. Furthermore, the suite-aggregate model does not account for the scale-up and qualification runs. For example, the suite investment and production constraints in the detailed model are modelled as follows (suite i, product p, production site l, time period t, scenario k):
A_ilt = A_il,t−1 + E_il,t−δ   ∀ i, l, t   (1)
B_ipltk = r_pl · T_ipltk   ∀ i, p, l, t, k   (2)
When an investment decision for any suite is taken (E_ilt = 1), a construction lead-time (δ) is required before that suite becomes available for production (A_ilt = 1). The amount of each product (B_ipltk) produced within each available suite at each production site is given by multiplying the characteristic production rate (r_pl) by the suite production time (T_ipltk). The same constraints in the aggregate model take the following form:

A_lt = A_l,t−1 + E_l,t−δ   ∀ l, t   (3)
B_pltk = r_pl · T_pltk   ∀ p, l, t, k   (4)
In this case, the model considers an integer number of invested suites that become available for production later on. For every product, we calculate the overall amount produced at each production site without considering every suite individually. The main advantage of the aforementioned formulation is that it leads to a much smaller problem size, both in terms of constraints and variables, which requires considerably less computational effort to solve. On the other hand, the reduced problem size comes at the expense of less detailed production plans. However, for the purposes of determining the strategic here-and-now decisions, aggregate production plans can still capture the various trade-offs among the candidate production sites. The suite-aggregate model sufficiently approximates the detailed model by focusing on the strategic decisions, while it adopts a myopic behaviour towards the second-stage operational decision variables, thus providing a valid upper bound by overestimating the objective function of the original problem. The proposed hierarchical algorithm comprises two steps. In the first step, the aforementioned suite-aggregate model is solved in order to determine the strategic here-and-now decisions. In the second step, the derived strategic decision variables are fixed and the original detailed model is solved in the reduced variable space in order to determine the optimal levels of the operational wait-and-see decision variables. It should be added that the second step of the proposed algorithm can be further decoupled by solving each scenario as a separate MILP model with fixed product portfolio and multi-site investment strategy, since these are both scenario-independent decision variables, already determined in the previous step of the algorithm.
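A heavily stripped-down sketch of the suite-aggregate idea in an algebraic-modelling layer is given below. PuLP and the CBC solver are used purely for illustration (the authors used GAMS with XPRESS-MP), the model is reduced to one site, one product and two scenarios, and all data are hypothetical.

```python
import pulp

T, LEAD = range(6), 2                       # years; construction lead time
K = [0, 1]                                  # two clinical-trial scenarios
prob = {0: 0.6, 1: 0.4}
demand = {0: 40.0, 1: 10.0}                 # per-year demand in each scenario
rate, margin, capex = 20.0, 1.0, 15.0       # per-suite rate, unit margin, cost

m = pulp.LpProblem("suite_aggregate", pulp.LpMaximize)
E = pulp.LpVariable.dicts("invest", T, lowBound=0, cat="Integer")   # 1st stage
A = pulp.LpVariable.dicts("avail", T, lowBound=0)                   # suites
B = {(t, k): pulp.LpVariable(f"prod_{t}_{k}", lowBound=0) for t in T for k in K}

for t in T:
    # Suite availability with construction lead time, eq. (3) style.
    m += A[t] == (A[t - 1] if t > 0 else 0) + (E[t - LEAD] if t >= LEAD else 0)
    for k in K:
        m += B[t, k] <= rate * A[t]          # aggregate capacity, eq. (4) style
        m += B[t, k] <= demand[k]            # sales bounded by scenario demand

# Expected margin over all scenarios minus first-stage investment cost.
m += pulp.lpSum(prob[k] * margin * B[t, k] for t in T for k in K) \
     - pulp.lpSum(capex * E[t] for t in T)
m.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: int(E[t].value()) for t in T})
```

The key structural feature carries over to the full model: E is scenario-independent (here-and-now), whereas the production variables B carry the scenario index k (wait-and-see).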
5. Illustrative Examples Two instances of a stochastic, multi-site, multi-period capacity planning problem are solved in order to validate the applicability of the proposed mathematical model and its corresponding solution strategy. Consider four alternative locations (A-D), where A and B are the sales regions, A is the intellectual property owner (IP-owner), while B, C and D are the candidate production sites. Two examples, namely 3PROD and 6PROD, consider the manufacturing of three (P1-P3) and six potential products (P1-P6), respectively. The entire time horizon of interest is thirteen years. In the first three years, no production takes place and the outcomes of the clinical trials are not yet known. Initially, there are two suites already in place at production site B. Further decisions for
investing in new manufacturing suites are to be determined by the optimisation algorithm. We assume that the trading structure is given together with the internal pricing policies, as shown in Figure 1.
(Diagram: production sites B, C and D supply the IP owner at cost + 20%; site B manufactures P1, P4, P5; site C manufactures P2, P3, P6; site D manufactures P1, P3, P6. The IP owner resells to sales regions A (P1, P4, P6) and B (P2, P3, P5), which supply the market.)
Figure 1: Trading structure of the company.
Both problems were implemented in GAMS (Brooke et al., 1998) using the XPRESS-MP MILP solver with a 5% margin of optimality. All runs were performed on an IBM RS/6000 workstation. In example 3PROD, products P1 and P3 are selected for manufacturing, while P1, P3, P4 and P5 are selected in example 6PROD. Furthermore, the solution determined by the optimisation algorithm suggests that it is more profitable to invest in production sites B and D, while no suite investment decisions are taken for production site C (Figure 2).
Figure 2: Investment Decisions Calendar (Site B: black, Site D: grey).
The proposed hierarchical algorithm was tested against the single-level detailed MILP model solved in the full variable space, and the results for the two illustrative examples can be found in Table 1. Notice that the reported CPU time for the hierarchical algorithm corresponds to the combined CPU time of the aggregate MILP model plus the summation of the CPU times over all scenario MILPs. The hierarchical algorithm clearly outperforms the single-level MILP in terms of computational effort, since the CPU time is reduced by orders of magnitude, while the quality of the derived solution compares favourably with the corresponding solution of the single-level detailed MILP.
Table 1: Computational results. 3PR0D Problem Sol. Approach Single-level Hierarchical Obj. Function 223 223 291s 4s+2s CPU
6PR0D Single-level Hierarchical 130 integer solution 283 found after 10800s 62s+88s
6. Concluding Remarks In this paper, a two-stage, multi-scenario, mixed integer linear programming (MILP) mathematical model was developed to support a holistic approach to product portfolio and multi-site capacity planning under uncertainty, while considering the trading structure of the pharmaceutical company. A hierarchical algorithm was then proposed for the solution of the resulting large-scale MILP problem based on the decoupling of the decision-making levels (strategic and operational). Without compromising on the solution quality, significant savings in computational effort were achieved by employing the proposed algorithm in two illustrative examples. Current work focuses on testing the mathematical framework to larger examples and investigating alternative solution strategies.
7. References Brooke, A., Kendrick, D., Meeraus, A. and Raman, R., 1998, GAMS: A user's guide, GAMS Development Corporation, Washington. Gatica, G., Shah, N. and Papageorgiou, L.G., 2001, In Proc. ESCAPE-11, Kolding, 865. Maravelias, C.T. and Grossmann, I.E., 2001, Ind. Eng. Chem. Res. 40, 6147. Papageorgiou, L.G., Rotstein, G.E. and Shah, N., 2001, Ind. Eng. Chem. Res. 40, 275. Rotstein, G.E., Papageorgiou, L.G., Shah, N., Murphy, D.C. and Mustafa, R., 1999, Comput. Chem. Eng. S23, S883. Subramanian, D., Pekny, J. and Reklaitis, G.V., 2000, Comput. Chem. Eng. 24, 1005.
8. Acknowledgments The authors gratefully acknowledge financial support from the European Union Programme GROWTH under Contract No. GlRD-CT-2000-00318, "VIP-NET: Virtual Plant-Wide Management and Optimisation of Responsive Manufacturing Networks".
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
1103
A Multiagent-based System Model of Supply Chain Management for Traditional Chinese Medicine Industry Qian Li and Ben Hua The Key Lab of Enhanced Heat Transfer and Energy Conservation, Ministry of Education, China, South China University of Technology, Guangzhou 510640, China, email: lq001@37Lnet
Abstract This paper describes an ongoing effort on utilizing agent oriented modelling technique in the modelling and optimisation of supply chain for the Traditional Chinese Medicine (TCM) industry. The paper first analyses the specific nature of TCM supply chain and based on which configures a number of agents that cooperate with each other to provide supports to the TCM supply chain decision makers to improve performance. Some scenarios are considered such as demand pattern and market changes which are supported by the agents in the prototype system under development. Some implementation issues are also discussed.
1. Introduction During the last a few decades. Traditional Chinese Medicine (TCM) with a history of more than 5,000 years has gained more attention because it is natural and safe, with fewer side effects. The TCM industry is undergoing rapid growth both domestic and overseas. To compete in the changing market, TCM enterprises attempt to improve both their production processes and supply chain management. Significant amount of effort has been made developing techniques and software systems to help improve the performance of supply chains. For example. Fox et al. (1999) have proposed an agent oriented software architecture to manage supply chain. Garcia-Flores et al. (2000, 2002) have built an agent based system to simulate supply chain of a coating plant. This paper describes an ongoing effort on utilizing agent oriented modeling technique in the modeling and optimization of supply chain for TCM industry. The special characteristics of TCM industry are considered in designing the system in which several
1104 business activities/processes including planning, procurement, inventory management, etc. are performed by a number of software agents, each performs unique functions. These agents cooperate with each other through information sharing, collaborative decision making so as to improve the performance of the TCM supply chain.
2. Supply Chain of TCM Enterprise TCM industries' supply chains have some distinctive characteristics in comparison with that of in other industries. For example, some TCM plants, like the one taken as our background, produce massive types of products, up to several hundred, which caused the difficulty in planning and managing the production and inventory. Raw materials of TCM plant are mostly natural which caused the seasonal and locational variation of quality and cost, and the difficulty in storage. Moreover, the production process is often long and time consuming, not designed to be flexible enough to meet the dynamic demand of the supply chain. All these must be considered when managing the TCM supply chains. Among the difficulties, TCM enterprises found that the most difficult one is how to properly plan production and manage materials in their supply chain under demand and supply variation. It is also a key to lower the inventory and shorten the delivery time, which will reduce the supply chain cost. At present, the departments in the TCM plant such as inventory, procurement and production process are lack of cooperation and information sharing, each makes their own decisions based on local information available. Poor decision results in poor performance. To improve this, an integrated supply chain system of TCM industry is needed which allows business processes to share information and knowledge. The system should also facilitate the coordination and collaboration between components to achieve the optimal planning and material managing for the TCM supply chain.
3. System Design The development of agent technology in recent years has proven an effective approach to solving supply chain management problems. In this work we proposed the development of a multiagent-based system, in which business processes are performed by autonomous software agents. The following subsections identifies these business processes and configured a number of software agents to support them.
1105 Table 1 Agents in the proposed system. Agent Type Planning agent
Production management agent Inventory management agent
Procurement agent
Order management agent Supplier agent
Customer agent
Function Perform demand forecasting Use Linear Programming to perform aggregate planning to decide the production and material requirements et al. Convert raw material into products according to production plan. Receive raw material from supplier agent. Deliver products to customer agent according to accepted order. Supervise raw material inventory, inform the procurement agent when replenishment is needed. Decide the procurement time and quantity. Publish the requirements of raw material to supplier agents. Receive quotation from supplier agents. Select appropriate supplier and place the order. Collect orders from customer agent. Query and decide if the order will be accepted. Receiving inquiries from procurement agent and send the quotation back. Fulfill contracts. Place orders to order management agent. Receiving delivery.
3.1. Identification of agents Based on a business process modelling, a number of software agents have been identified as listed in Table 1. Each of these agents provides a kind of functions that can conduct the business processes. 3.2. Multiagent system of TCM supply chain Figure 1 shows how these software agents work together to manage a TCM supply chain. In this figure, software agents are represented by rounded rectangle. The links between rectangles represent information flow (represented by thiner lines) and material flow (represented by thicker lines) transfer between agents. The arrows indicate the direction of information flow or material flow. There are three main process identified in our system. Order fulHllment process The order fulfillment process begins when the customer agents place an order to the order management agent (OMA), the order often contains
1106 Information flow Planning Agent Supplier Agent 1,2...
Procurement Agent
"^
Material flow
Order Management Agent
Production Agent
Customer Agent 1,2...
IT Inventory Agent
Fig. 1. A general model of multiagent based supply chain system.
information such as product type, quality, quantity and desired delivery time. OMA then queries planning agent about available capability to decide if the order can be accepted. If the order is accepted, the production agent will arrange the production and the inventory agent will deliver products to customers before due time. Planning Planning agent performs aggregate planning with the information it collected from other agents. It determines the capacity, production and inventory decisions for each period over a period of time, (one week and 2 months respectively in this work) Based on the plan, the production agent will calculate the forecasted raw material consumption in the next period and send the information to procurement agent. The latter will decide if a replenishment is needed and take action. The planning agent also dynamically updates the level of safety inventory according to forecast results. The plan needs to be renewed if the situation significantly deviates the current plan. Replenishment process The inventory agent manages the raw material according to predefined inventory policy (continuous review policy in this case). When the inventory of raw materials drops below the safety inventory level, it will inform the procurement agent to procure new materials. The procurement agent then inquires suppliers to select the appropriate one according to their quotation with certain select rule. After the order is placed, the replenishment will be performed by the supplier agents.
1107
4. A Prototype System Development A TCM plant has been chosen as a case study. E-fang Traditional Chinese Medicine Company is a TCM plant located in Guangdong province of China. Its main products are refined powders of TCM made through advanced extraction technique. The production processes use raw materials provided by suppliers distributed in different parts of China. The products are sold both domestic and overseas. There are over 400 products which makes it complicated and difficult to plan and manage the production and inventory of both raw materials and finished products. It is also difficult to forecast product demand as the products have different demand patterns. The demand of some products are highly stochastic while others are steady. Some products have seasonal demand patterns. And the demand of some products are related to the demand pattern of other products. To solve this problem, we have to design different forecast models for each products. Since the demands for over 400 products must be met, the plant must carefully plan and manage its production and raw materials supply. The charactristics of raw materials used in TCM also result in the difficulty in planning raw material procurement. Most of the raw materials of TCM production are natural, so the price often varies in different seasons. This also offers the procurement department opportunities to gain extra profit from properly planning its procurement and inventory. A computer aided system had been designed to solve these problems. This prototype system contains an order management agent, a planning agent, a production management agent, a procurement agent and an inventory management agent. It also contains 5 customer agents and 4 supplier agents to simulate the real suppliers and distributors. In comparison with other agentbased supply chain systems, special attention has been paid on planning agent and procurement agent. In the planning agent, special module had been set for data forecast which manages historical demand data of each products and provides the demand forecast methods . Aggregate planning is made based on forecast results to determine capacity, raw material supply and other issues. The procurement agent also maintains price pattern versus season of each products to determine best procurement schedule. 40 typical products with different demand pattern have been selected to be used in the system. We have collected the selling and procurement data of these products from year 2000 to 2002. In the pretreatment process, the demand data of each products from year
1108 2000 to 2001 are used as historical data . In the pretreatment process, the demand data of each products are regressed, the best forecast method and relevant parameters are automatically determined through the comparison of forecast accuracy. The price variation of relevant raw materials are also modeled by data regression. The data collected from year 2002 is used to test the feasibility of the proposed system. Earliy test runs indicate that this system can generate alternatives for optimal decision making. More practical results will be published in subsequent phases of the project.
5. Summary In this paper, we reported our first step toward developing a multiagent system to simulate the supply chain of TCM industry. The specific charactristics of the TCM industry has been analyzed, based on which both main business processes of TCM enterprise have been identified and and a software agent system has been configured with agents cooperating with each other to imporve the decision making process in the TCM supply chain management. . Efforts are made to keep the prototype system model generic and agile so as to be extended to other cases. Some details on the prototype system development are discussed .
6. Reference Fox, M.S., Barbuceanu, M., Teigen, R., 2000, International Journal of Flexible Manufacturing Systems, 12,165. Covington, M.A., 1998, Decision Support Systems, 22, 203. Garcia-Flores, R., Wang, X.Z., Goltz, G.E., 2000, Computers and Chemical Engineering, 24, 1135. Garcia-Flores, R., Wang, X.Z., 2002, OR Spectrum, 24,343.
7. Acknowldgement The authors would like to acknowledge the financial support from National Science Foundation of China (Highlight Project, No. 79931000), and the Major State Basic Research Development Program (973 Project, No: G2000026308), and the Significant Project of Guangdong Science & Technilogy Bureu: "Modernization of Chinese Medicine". Authors also acknowledge Dr. M.L.Lu and David Hui for their precious help in preparing this paper.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
1109
A Tool for Modelling the Impact of Regulatory Compliance Activities on the Biomanufacturing Industry Ai Chye Lim, Suzy Farid, John Washbrook*, Nigel John Titchener-Hooker The Advanced Centre for Biochemical Engineering Department of Biochemical Engineering, University College London, Torrington Place, London WCIE 7JE, UK. *Department of Computer Science, University College London, Gower Street, London WCIE 6BT, UK.
Abstract Quality control (QC)/quality assurance (QA) and batch documentation form key parts of the in-process validation procedure for any biopharmaceutical. The activities of QC/QA and batch documentation rather than the manufacturing steps themselves are usually rate limiting in a plant (Ramsay, 2001). Thus, managing QC/QA and batch documentation becomes a challenge. This paper presents the configuration of a prototype tool for modelling the impact of in-process testing and batch documentation in the biopharmaceutical-manufacturing environment. A hierarchical task-oriented approach was employed to convey maximum user flexibility. The impact of employing a range of manufacturing options on financial and technical performance was used in a case study to evaluate the functionalities of the tool. The study aims to investigate the effect of regulatory compliance activities on operational costs and demands on resources. The study demonstrates the use of such a software tool for the facilitation of early planning of process development and the appropriate allocation of resources to each stage of manufacturing including in-process testing and documentation. The modelling tool provides realistic indications as to the feasibility of different manufacturing options and ultimately is an aid in the decision-making process.
1. Introduction The biopharmaceutical manufacturing industry is highly regulated, having to comply with stringent rules and procedures. Process controls are crucial to ensure that a specific process will consistently produce a product which meets its predetermined specifications and quality attributes. Adequate in-process testing, thorough process validation and analytical methodologies are required to demonstrate lot-to-lot consistency. The production of new drugs, using progressively more complex production technologies, the wide range of biopharmaceuticals potency and stability, targeted with sociological factors has triggered the development of stringent regulations in the biopharmaceutical industry. With an increasing number of international and national regulations, companies face additional burdens of ensuring and improving quality, each being expensive and time-consuming activities.
1110 A major challenge faced by biotechnology companies is the need for efficient inprocess testing and documentation systems in order to deliver new drugs to the market quickly. An examination of existing process modelling tools for the simulation of biomanufacturing shows that the pertinent issue of in-process testing/batch documentation are usually not included in any analysis. Farid (2001) reported the implications of these support activities on the manufacturing environment but no implementation was attempted. This deficiency could vitiate the accuracy of any model to provide an actual reflection of the entire manufacturing process and distort the accuracy of resource management and manufacturing cost estimates incurred. Furthermore including these support activities, which run in parallel with production, is of critical importance in the biopharmaceutical industry in order to identify lead times and bottlenecks in the manufacturing process. As the need for speed rises, the application of simulation modelling tools to address these problems is becoming increasingly important. These factors have driven the need for the addition of these regulatory compliance activities in a bioprocess simulator to reflect the impact of current good manufacturing practices (cGMP) in biopharmaceutical plants.
2. Design Methodology 2.1. Modelling approach A highly structured bioprocess simulation support tool is proposed in order to achieve rapid process modelling for the manufacture of biological products. The conceptual framework seeks to integrate various aspects, including resource management, mass balance analysis, in-process testing and costing, that each relate to strategic bioprocess decision-making. The tool structure was arranged in a hierarchical manner to represent the key tasks and resources in a manufacturing operation (Farid et al., 2000) but was extended further to incorporate QC/QA activities and batch documentation (Figure 1). As depicted in Figure 1, the ancillary steps (e.g. equipment-preparation, in-process testing and batch documentation) were modelled separately from the productmanufacture steps.
Manufacturing tasks
4^ Productmanufacture tasks
Equipmentpreparation tasks
4-
*
Regulatory compliance tasks
4r
Seed fermentation
Cleaning-in-place
Quality control
Production femnentation
Sterilising-in-place
Quality assurance
Chromatography
Equilibration
Batch documentation
etc.
etc.
etc.
Figure 1. Hierarchical tree representation of manufacturing tasks.
nil 2.2. Implementation The design, implementation and application of the tool were based upon the production of monoclonal antibodies (Mabs) expressed in mammalian cell culture. The software tool was developed using a task-oriented approach on the platform of the visual simulation package Extend Industry Suite v5 (Imagine That Inc., San Jose). The software tool comprises the operational tasks (e.g. fermentation, cleaning-in-place), resources required to carry out each task (e.g. equipment, raw materials, utilities, labour) and resultant process streams from each task. Specific blocks to describe the bioprocess steps were coded in Extend and linked to represent the whole process within the manufacturing environment. Each of the unit operations was simulated as an activity requiring resources. The same approach was applied to the modelling of equipment-preparation, in-process testing and batch documentation operations. To model the process stream, each was represented as a batched item, comprising of several components (e.g. media, cells, IgG, buffer etc.). The stream carried the attributes (e.g. mass, volume, density, concentration) of each component. Such information was then carried over to the subsequent operating task for mass balance. 2.3. Key parameters The simulation of the model requires the specification of the maximum availability of all the resources within the plant. These conditions cause constraints to be placed on the simulation flow in the model based on the availability of resources. User specified the purchase cost/cost per use of the resources or built-in cost model based on costestimating factors for biopharmaceutical process equipment were used (Remer & Idrovo, 1991). The series of process steps and ancillary operations to manufacture the product, prepare the equipment, test the sample and document the batch were then defined in their respective hierarchical workspaces. Finally, the factors for mass balance calculations were input to determine the characteristics (i.e. mass, volume) of the process streams from each process step. The feasibility of a given manufacturing option could be determined from an economic point of view, i.e. the cost of goods (COG) performance metric. The COG model was simulated with cost equations developed for conventional chemical engineering facilities (Sinnott, 1993). In addition, it is important to consider supplementary costs associated with the compliance to cGMP in the biopharmaceutical manufacturing industry. Based on bioprocessing plant data (Francis, Protherics Pic, London, UK), it was possible to predict the cost contributions arising from in-process testing and documentation steps (Table 1). The miscellaneous materials (e.g. equipment and raw materials) associated with the QC/QA and documentation tasks were related to the utilisation of labour. The costs included taking samples, transferring to the laboratory, testing, reviewing data and reporting results. Table 1. COG model for QC/QA and documentation activities. Cost category Quality control/Quality assurance labour Batch documentation labour Miscellaneous materials
Equation f(utilisation) f(utilisation) 10-20% of operating labour time
1112
3. Case Study 3.1. Set-up To evaluate the functionalities of the software tool, the production of monoclonal antibodies (Mabs) using perfusion culture was considered. The example was based on a biopharmaceutical company wishing to investigate how often to pool broth after fermentation, i.e. the pooling interval, given a fixed downstream process scale. The company considered whether to pool at 1, 2 10, 15 or 30 day intervals. Such decisions require an appropriate balance between running costs and annual throughput to be made. The software tool was used to model the different scenarios and to compare the results. An overview of the process production of a therapeutic Mab using mammalian cell culture is illustrated in Figure 2. The entire manufacturing process consisted of basic unit operations including fermentation, affinity chromatography etc and equipment-preparation steps of cleaning-in-place (CIP) and sterilising-in-place (SIP). In addition to these product-manufacture and equipment-preparation steps, the model also considered the issue of quality control and assurance as well as batch documentation running in parallel with the main production run. The performance metrics used to compare the production strategies were the cost of goods per gram (COG/g) and the demand on resources. Several assumptions were made for the case study (Table 2). These assumptions were validated through discussion with industrial experts. Harvesting of the broth was included as part of the perfusion process typically using microfiltration to remove the cells and recycle them back into the bioreactor. After harvesting, the broth was loaded onto the affinity column, purified and stored. The eluate was then pooled and passed through other purification steps. The QC/QA and documentation activities were carried out for each unit operation. At the end of the batch, there was a lot review to determine product acceptance/rejection.
Inoculum Seed grow-up fermentation
Final filtration
Viral clearance
Gel filtration chromatography
Ion exchange chromatography
Concentration/ Diafiltration
Figure 2. Process diagram of the case study: Production of a therapeutic Mab from mammalian cell culture using perfusion culture.
1113 Table 2. Key process case study assumptions. Assumption Plant operating hours Bioreactor size Length of 1 perfusion run PooUng intervals Productivity Perfusion rate Capacity of QC/QA & documentation unit Manual time to test a batch Manual time to document a batch
Input value 48 weeks a year, 7 days a week 1,000L 30 days 1-30 days 200 mg/L/day 1 reactor volume/day 40% of total plant operating force 0.5-1 day 15% - 20% of operating time
3.2. Simulation results and discussion The annual cost outputs on a task category basis for pooling intervals of 1, 2, 10, 15 and 30 days are plotted as shown in Figure 3. The costs are relative to the case of a pooling interval of 1 day. Examining the base case indicates that the COG/g is dominated by the equipment-preparation and product-manufacturing tasks. Each contributes about the same proportion to the COG/g. As the pooling interval increases, the COG/g drops as fewer equipment-preparation steps are required and becomes dominated by the productmanufacture tasks. The costs of the product-manufacture tasks are almost invariant with pooling interval. The results provide the capability of viewing where the bulk of manufacturing costs are concentrated for different production strategies. Further examination illustrates that QC/QA and documentation activities contribute about 14.3% of the COG/g in the base case and falls sharply so that at an interval of 15 days, it is about 4%. It is interesting to note that the COG/g increases again at an interval of 30 days. Hence, there is a limit to the duration of the interval that the company can operate under. The tool also generates the current utilisation of operator resources highlighting the daily peak levels in demand and when they occur. Examples of the demand on the QC/QA staff for 3 successive batches are illustrated in Figure 4a and b for pooling intervals of 1 and 15 days respectively. Figure 4a indicates that typically, a maximum of 4 operators are employed at any time in the process. At a longer pooling interval. Figure 4b shows that a maximum of 2 operators are used during the majority of the production time. The average utilisation of staff in both cases was also probed. This value corresponded to 2.6 and 1.2 in the case of 1 and 15 day intervals respectively. Thus, different manufacturing options can affect the demand on resources. The current utilisation performance metrics would prompt a company to allocate the appropriate number of staff to carry out the task efficiently depending on the manufacturing option. In this particular case study, the lowest COG/g occurred at a pooling interval of 15 days. Hence, one may conclude that based on this performance metric, the company has to pool the broth every 15 days given the particular DSP equipment scale and operating conditions. The number of QC/QA staff could be reduced since the utilisation curve shows that the staff resource pool is not fully utilised most of the time.
1114
0 R egUctory oonrpi i cnoe t a ks D Prcxjuct-mcnufcctu'eta ks • E qu pment-pr epacti on t a ks
2
10 15 Pooling intervd (Dcys)
Figure 3. Annual cost of goods per gram (COG/g) on a task category basis for pooling intervals ofl, 2, 10, 15 and 30 days. Values are relative to an interval of 1 day. (b)
(a)
«* •• §3
-
O
•s
i2 T& M ^1
°0
1
1 50
100
Time(Days)
150
-
n 50
1
1 ni •
100 Time (Days)
Figure 4. Utilisation of QC/QA staff for (a) a daily pooling strategy (b) a 15 day pooling strategy.
4. Conclusions The case study demonstrated the functionalities of the tool to output the cost of goods on a task basis, view the demands on resources and compare the effect of QC/QA and batch documentation activities using different production strategies. Such a tool could be employed in early planning, hence contributing to transparent planning and project management decisions.
5. References Farid, S., 2001, A Decision-Support Tool for Simulating the Process and Business Perspectives of Biopharmaceutical Manufacture, PhD thesis. Farid, S., Novais, J.L., Karri, S., Washbrook, J. and Titchener-Hooker, N.J., 2000, A Tool for Modelling Strategic Decisions in Cell Culture Manufacturing, Biotechnology Progress, Vol. 16, pp. 829-836. Ramsay, B., Lean Compliance in Manufacturing, 2001, Pharmaceutical Tech. Europe. Remer, D.S. and Idrovo, J.H., 1991, Cost-Estimating Factors for Biopharmaceutical Process Equipment, Pharmaceutical Tech. International. Sinnott, R.K., 1993, Coulson & Richardson's Chemical Engineering, Chapter 6.
European Symposium on Computer Aided Process Engineering - 13 A. Kraslawski and I. Turunen (Editors) © 2003 Elsevier Science B. V. All rights reserved.
1115
Modeling the Polymer Coating in Microencapsulated Active Principles D. Manca^ M. Rovaglio^, I. Colombo^ CMIC Department, Politecnico di Milano, Italy^ Eurand International S.p.A., Italy'' e-mail: [email protected]
Abstract Starting from the detailed theory of Kamide (1990) about the nucleation and deposition of a polymer on a particle surface, the paper describes and models the progressive formation of the polymer coating in terms of juxtaposed unit spheres. The deposition of such polymer spheres on the particle is modeled by means of a Montecarlo technique that tries to numerically quantify the spatial vacancies that are responsible for the pores formation. The dimension and distribution of the pores are the two main quantities responsible for the possible evaluation of membrane diffusivity. The numerical model, developed to describe in a detailed and phenomenological way the polymer coating attributes, puts together two main points of the microencapsulation process: polymer nucleation and deposition onto the active principle. The research target is to quantitatively describe the surface morphology of the polymer coating in terms of both micro and macro pores, in order to use the diameter and distribution values of the holes as input data for the evaluation of the effective surface diffusivity. By understanding the direct and indirect effect of those operating parameters over the pores formation and distribution, it is possible to fine-tune such variables so to reach an optimal coating quality and consequently an optimal release curve.
1. Introduction Microencapsulation of active principles is a well-known process of the pharmaceutical industry. A polymer coating covers a particle of drug. The resulting product has some peculiar and interesting properties as far as the release of the drug is concerned. We speak of controlled release of the active principle within the human body. Actually, respect to a conventional tablet, a microencapsulated one has the capability to avoid any sudden release while distributing and smoothing the concentration profile of the drug throughout a longer time period. Often 8-10 hours of sustained release can be achieved bypassing the stomach membrane and releasing the active principle directly into the intestine duct. Dynamic models about controlled release have been proposed by several authors and are available in the literature (Deasy et a.l., 1980; Nixon and Wong, 1990; Singh, et al., 1994). Most of them are quite simpHfied and based on macroscopic properties. The most advanced model proposes a combined effect of penetration theory and drug diffusivity through the coating, thus explaining the typical dependence of release from the square root of time. In such a model an adaptive semiempirical parameter makes the numerical model matching the experimental data. Such a
1116 parameter is the effective diffusivity that tries to summarize the real coating structure of the polymer layer. Actually, the coating diffusivity depends on pores that can be clearly observed as reported in figures 1 and 2. The polymer layer has a quite complicate morphology that comprises micro and macro pores, craters, chinks and holes. These features are responsible for the delayed release of drug into the external solution. On the contrary, if a perfectly continuous polymer layer is produced by film casting, no release at all is observed also after 96 hours. This paper wants to increase the understanding about the main features that affect and regulate the drug release, in order to identify the process parameters that can be tuned to reach the requested properties.
Figure 1. Pores and chinks in the polymer Figure 2. Passing holes and craters in the coating. polymer coating.
2. Model of the Polymer Coating The theory of Kamide (1990) allows determining the properties of the polymeric membrane covering a particle both in terms of number of pores, their conformation and apparent diffusivity. Basically, it is necessary to know the volumes ratio i? between solvent and polymer. These two phases are respectively identified as poor and rich phases. In our formulation and experimental activity we worked with ethylcellulose polymer and cyclohexane solvent. Consequently, the specific values defined in the following refer to such pairing. The membrane is formed by the separation of the polymer from the poor phase and the consequent coagulation of the polymer particles. It is possible to outline three main steps for the membrane formation: • Nucleation: the formation of polymer nuclei and the growth of primary particles; • Coagulation of primary particles and formation of a layer of secondary particles; • Pores formation matching the voids existing among the secondary particles. y Given R as: R = -^ (1) the ratio between the poor and the rich polymer phases, the radius S^ of primary particles can be determined once are known: the coagulation free energy variation for unit volume: A/^ = AG(JCO) - AG^XQ) (2) where
XQ
is the initial concentration of polymer,
for the coexistence of poor and rich phases:
AG(XQ)
is the mean free Gibbs energy
1117 (x -X )
AG(xo) = (AG'(x2)-AG'(xi)))-^
^ + AG'(xi)
(3)
(X2-X1)
and Xj, X2 are the volume fractions of poor and rich polymer phases. The free mixing Gibbs energy for unit volume is then defined as: AG'(XQ) =
^—^RT
log(l-xo) +
Xo+Xo{^ +
PlXo+P2^o)^i
pJ XQRT
VX„
(4)
log(xo)-(X^-l)(l-Xo) + X^;roa-^o)'a + Y a + 2 x o ) + ^ ( l + 2xo+3xo')
where X = 1607 is the mean number of monomer units present in the polymer, Oy/Q
1+
k'
(5) pJ y/Q = 2.2 is the entropy parameter, 0 = 364.21 K is the Flory temperature, which is an
;iro=(0.5 + ^o) +
intrinsic property of the polymer, T is the system temperature, whilst:
t'=)fcii-^' T
(6)
k = -328.39 is a parameter that does not depend neither on the temperature nor on the polymerization degree, pi = -6.34 and pj = 47.49 are two coefficients from the Taylor series expansion of the interaction parameter. As before described, the coagulation of primary particles of radius S^ produces a distribution of secondary particle radii that is simulated by a Montecarlo method. Actually, primary particles move across the space with constant velocity modulus and random direction. Collisions between two or more particles are statistically accounted for. The resulting Brownian motion is characterized by the following velocity modulus: v = k6juS{
(7)
where k is the Boltzmann constant and // is the solution viscosity. Equation (7) is obtained by equating the following two equations: V
juSf Knit=^^^-=kT
(8) (9)
where A^^^^^-^ is the sampling time adopted to simulate the dynamic evolution of the solution. As a matter of facts, lSX^^^^^ is the time taken by a primary particle to walk a length equivalent to its diameter. Once the dynamic formation of secondary particles has been simulated, it is possible to focus the attention on the pores pattern. The pores are originated by the vacancies existing among the secondary particles. A straight classification of pores allows defining:
1118 •
Isolated pores: made up of a reduced number of vacancies that are internal to the membrane; • Semi-open pores: connecting the active principle surface to the coating or the coating to the external solution; • Passing pores: they cross the whole polymer coating connecting the active principle to the external solution. The latex structure, constituting the juxtaposed secondary particles, is assumed as hexagonal-compact. Consequently, every particle is surrounded by twelve neighboring particles (or voids). This assumption conditions the pores formation. The membrane porosity can then be defined in terms of probability factors for the three categories outlined before: Pr = Pr,^+Pr^^+Prp^ (10) where the terms in equation (10) represent respectively: overall, isolated, semi-open and passing probabilities for the membrane pores. By assuming that the membrane porosity does not change during the exsiccation process, Kamide's theory proposes the following formulation for the overall porosity: Pr-
1-
S'.^' yS2j
( d \ {l-Pr') + \l-X2-^\{l-Pr') + Pr'
(11)
^.y
where: dp^ is the polymer density, d is the density of the polymer particles, ^'2 is the radius of the exsiccated secondary particles that can be determined as follows: dpi »>2 — »>2
. V
(12)
d , P J ^
\hlla
whilst Pr' is defined as follows: Pr' =
(13)
where /Q is the layer thickness and /^ is the exsiccated coating thickness. The pores formation is simulated by combinatorial calculus and through the concept of probability. The system is assumed to consist of spherical particles of .^2 radius. The hexagonal and compact latex contains either polymer-rich aggregates or voids filled with solvent (polymer-poor vacancies). The polymer-rich units represent the structural frame of the coating. A pore is formed whenever one or more vacancies are juxtaposed. Focusing the attention on a given sphere / , a probability Pr^ is given that such an element is a void. The overall probability of the coating is obtained by multiplying Pr^ and A^o» which is the total number of spheres in the unit volume of coating. To better understand the line of reasoning, let us consider the simplest case where the pores are made of a single vacancy. Evidently, this situation may occur only when all the surrounding spheres are polymer-rich particles. Due to the hexagonal and compact hypothesis, there must be twelve polymer spheres around the single void. Consequently, the associated probability of this event becomes: {l-Pr)
. Finally, the number of
pores A^i made of a single vacancy is obtained by combining the two independent
1119 events: E^ "the selected sphere is a vacancy" and E2 "the surrounding spheres are all polymer-rich particles": Nj = NgPr^l-Pr)
(14)
The following step consists of extending the same procedure to the case where two adjacent vacancies are involved. The independent events that should be considered are: • E^ both the adjacent spheres are two vacancies, p(Ej) = Pr^; •
E2
the
remaining
spheres
in the
layer
are polymer-rich
particles,
p(E,) = {l-Pr)"; •
E^ the spheres of the following layer, which are adjacent to the vacancy of the previous layer, are polymer-rich particles, p(E^) = {^1-Pr^ ^^ .
The spheres m^ . belonging to layer / that are adjacent to a given number /^ of Mspheres of layer / - 1 are given by the following equation: m/ ^ = 3/, —^^
(15)
Mi
where M, is the number of spheres belonging to layer / and is equal to: M; = hit 16 | ( / - l ) ^ 2 ^ ( i - l ) + l
(16)
V
For example, if we want to determine the number of particles in the second layer that are adjacent to a vacancy in the first layer: Wj j =3-1
M(l)
=3
(17)
On the contrary, when considering the number of possible vacancies in the first layer, we discover, by combinatorial calculus, that there are twelve different selections. It is then straightforward to write the relation existing between the number of pores made up of two vacancies and their probability: (18) N,= ^ ^ ^ V , P r ^ ( 7 - P r / ' ( 7 - P r p vly When the number of vacancies increases, the formulation gets more complex since the number of layers to be considered increases as well. Considering the case of three vacancies, there are two distinct alternatives. Given the intermediate vacancy it is possible either to find the remaining two vacancies both in the first layer or one in the first and the other in the second layer. Such alternatives are concurrent and exclusive. Consequently, the number of pores for one configuration gets summed to the other one: N',= N'y.
12
(19)
N,Pr'{l-Pr)"{l-Prf"
\ '•' Vl-Prf''
v2y The same line of reasoning can be extended to the four vacancies case. By extrapolating the method it is possible to determine some rules for the deduction of the polymer structure. By reporting the fulfill layers as a function of the possible vacancies distribution within the layers it is possible to obtain Table 1.
1120 Table 1: Vacancies as a function of layers. Layer 1
Layer 2
Layer 3
1
1
1
Vacancies in a layer
Table 2 reports the number of times a layer is involved in the redistribution of vacancies as a function of the number of voids. Table 2: Distribution of vacancies as a function of the number of voids. Number of particles 2 3 4
Layer 1 1 1 1
Layer 2
Layer 3
I 2
1
This allows to emphasize how for an assigned number of vacancies, the total number of configurations can be determined by summing row-wise the elements of Table 2, which are distributed according to the Poisson's law. The determination of the passing pores, i.e. the ones that go from the inner active principle towards the external solution, is done by following a very similar combinatorial and probabilistic approach. Instead of speaking of layers, the concept of sheets is introduced. Due to the tetrahedral packing, once a certain number of vacancies has been evaluated, it is possible to determine how many spheres of the following sheet are vacancies. In a structure comprising two layers, a void in the former sheet may be followed by one, two or three vacancies in the latter. The sum of the probabilities associated to each alternative gives the overall probability of the passing pore Pr^^. Finally, once determined Pr^^ and Pr^^, it is possible to evaluate the probability of semi-open pores Pr^^ by simple difference through equation (10). Although the theory of Kamide does not consider any influence exerted by exsiccation, it is realistic that a too high temperature or a too fast procedure can dramatically influence the pores number and pores structure. Some macroscopic irregularities on the coating surface can be originated by the evaporation of solvent that leaves the polymer during exsiccation. At the moment, an extended experimental activity is being carried on to understand and validate the interaction of polymer deposition and exsiccation on coating diffusivity.
3. References Deasy, P.B., Brophy, M.R., Ecanow, B. and Joy, M.M. 1980, J. Pharm. Pharmacol, 32, 588. Kamide K., 1990, Thermodynamics of polymer solutions. Polymer Science Library, Elsevier, Amsterdam. Nixon, J.R. and Wong, K.T. 1990, Int. J. Pharm., 58,1421. Singh, M., Lumpik, J.A. and Rosenblatt, J. 1994, J. Control. Release, 32, 931.
European Symposium on Computer Aided Process Engineering - 13 A. Krasiawski and I. Turunen (Editors) © 2003 Elsevier Science B.V. All rights reserved.
1121
Extractant Design for Enhanced Biofuel Production through Fermentation of Cellulosic Wastes Eftychia C. Marcoulaki* and Fragiskos A. Batzias Laboratory of Simulation of Industrial Processes, Dept. Industrial Management University of Piraeus, Karaoli & Dimitriou 80, Piraeus 185 34, Greece
Abstract This work suggests novel materials to enhance the production of bioethanol from pulp and paper industry wastes. The extractant design problem is formulated mathematically to include process data and solvent properties tailored to the use of cellulosic waste substrates. The simulator is interfaced with available molecular design synthesis algorithms to search and select chemicals of desired and/or optimal performance. The materials designed herein can initiate more detailed analyses and laboratory experiments to validate and support the screening results.
1. Introduction The industrial and agricultural wastes are rich in cellulosic substrates, so they can be fermented to give renewable fuels within a "zero emission" framework (Ulgiati, 2001). This waste treatment option produces clean alternatives to fossil fuels, while using wastes that would otherwise be landfilled. The environmental benefits of this scheme are significant, but reduction of the production cost is considered essential to make the biofuel products competitive in the open market (Wyman, 2001). In typical fermentation processes, ethanol is produced from the fermentation of saccharites (substrate) by microorganisms in an aqueous environment. Ethanol concentrations higher than 12% inhibit the reaction and yield an effluent mixture that is poor in ethanol (Minier and Goma, 1982). The inhibition effect in the conventional scheme increases the energy and capital costs, due to increased separation effort (Boukouvalas et al., 1995) and low substrate conversion (Tangnu, 1982). In extractive fermentation the ethanol is simultaneously recovered in a solvent phase, from where it can easily be obtained via simple distillation. This work considers the mathematical modeling of an in situ extractive fermentation process tailored to cellulosic waste treatment, and discusses the solvent requirements. The model and the requirements are used here to propose new extraction solvents using available molecular design synthesis technology. The work makes employs recently developed tools (Marcoulaki and Kokossis, 2000a), which have been tested to solvent design problems, and reported novel molecular structures and significant improvements over other techniques (Marcoulaki and Kokossis, 2000b; Marcoulaki et al., 2000, 2001).
Corresponding author. E-mail: [email protected] (E. Marcoulaki)
1122
2. Advantages of Cellulosic Wastes The use of waste instead of primary raw materials should bring significant decrease to the operational cost for the production of bioethanol fuels (Tangnu, 1982). Additionally, there are several differences in processing cellulosic materials wasted or obtained as byproducts from the pulp and paper industry, compared to straw or agricultural waste. The main difference starts from the necessity to carry out the end of pipe treatment, since the pulp waste contains hazardous materials that need to be removed before it can be safely landfiUed. Post-processing to derive a marketable product provides an opportunity for profit, while reducing the landfilling fees and the transportation cost. The advantages of processing the pulp waste spring from the scale economies of massive production in specific locations, and the nature of the waste itself. Economical advantages include the following •
The pulp waste appears in large quantity and predetermined rate
•
The waste can be processed within the pulp mill site, in reduced cost due to industrial intensification
•
Integrated plants provide investment incentives associated with cash grants, loan interest rate subsidies, leasing subsidies, tax allowances, etc. These incentives may contribute decisively in increasing the competence of the pulping itself.
•
The financial benefits of in situ processing. These, however, can be counterbalanced by scale economies, when the waste from various locations is collected and co-processed in a distant facility.
Processing advantages include that the pulp waste •
is enriched in cellulose during the pulping process
•
obeys quality standards (not very strict in the case of repulping)
•
is more suitable for biological treatment and more susceptible to the required biotransformations, due to its reduced content in lignin and hemicellulose
•
is hydrolyzed in an aqueous slurry, avoiding expensive dewatering.
Post-processing of cellulosic wastes to biofuels aims to produce useful environmentally benign products out of a potentially hazardous material. To ensure environmental consistency, however, life cycle assessment is essential, to evaluate the environmental impact of post-processing and its end products. A significant amount of research has been conducted in the fields of experimental extractant selection and mathematical modeling for the typical glucose fermentation. This was based on the typical scheme involving sugar beet, where the xylose amount is very low, and used Saccharomyces cerevisiae that treats mainly the glucose substrate. These works could not address the problem of cellulose fermentation, where significant amounts of xylose are also produced. The present paper uses kinetic data based on yeast which is specially modified to treat both substrates (Krishnan et al., 1999). The kinetic data are used to develop an extended simulation model, to enable the screening of appropriate extractants formulated as a solvent optimization problem.
1123
3. Extractive Fermentation Simulation Model A process simulation model tailored to the case of cellulosic wastes is hereby developed to assist the search for solvents with improved performance. The new model stems from the work of Kollerup and Daugulis (1985), and modifies it to include both glucose and xylose according to the kinetic models of Krishnan et al. (1999). The final equations are S Y p , , (Do -S" - D S , ) = P [ D + Mp / ( l - M p P/pp)]
(1)
k
where, Yp/k is the ethanol yield coefficient based on substrate k; D is the effluent aqueous dilution rate in h"^ S^ is the effluent aqueous concentration of substrate k in g/L; P is the effluent ethanol concentration in the aqueous phase in g/L; k is G for glucose or X for xylose; the subscript/superscript 0 denotes the influent variables; pp is the ethanol density in g/L; Mp is the ethanol mass distribution coefficient between extract and aqueous phases. Do p" - D p ^ = P M p D" / ( l - M p • P / P p ) + Z Yc/, (Do -S" - D S J
(2)
k
where, Yc/k is the CO2 yield coefficient based on substrate k; kG {G, X}, G: glucose and X: xylose; PA is the effluent aqueous phase density in g/L; DE is the influent extract dilution rate in h"^ The dilution rate D is equal to the overall reaction rate. The mixing rule and the specific growth rates are expressed by Eq. 3 and 4, respectively D = (SG-^G+SX-|^XMSG+SX)
(3)
^^k =l^.,k -Sk /(Ks,k + S , +S^ / K , J . [ l - P / P , / ^ J
(4)
The aqueous phase densities (Eq. 3) are given by the linear expression PA = P W + « G SG -t-a^ Sx H-ttp P
(5)
The equation system (Eq. 1-5) is solved for given values of Mp, DE, S^and Sk, to get the values of P, D and DQ. The productivity in the extract (PDE) phase, the extraction efficiency (EE) and the conversion of substrate k (Ck) are PDE = P . M p D ^ [ l + M p P / ( p p - M p P ) ]
(6)
EE = P . D / ( P D + PDE)
(7)
and
C^ = l - D S ^ / ( D Q S^)
The coefficients and parameters for the above equations are found in Table 1. Krishnan et al. (1999) do not give values for the CO2 yield coefficient Yc/s,k based on substrate k. Since Yp/s,G and Yp/s,G have similar values in the papers of Kollerup and Daugulis (1985) and Krishnan et al. (1999), YC/S,G is taken from the former, and for Yc/s,x we maintain the analogy between the ethanol values. Also, for Pmc and bo (Eq. 4) the
1124 present work uses the set of values of Krishnan et al. (1999) where the differences in the reaction rate values between the two models are small. The coefficients used in the density expression (Eq. 5) are taken from linear regression on experimental values for aqueous glucose, xylose (Wooley and Putsche, 1996) and ethanol (Perry and Green, 1997) binary solutions at 20°C (in the range of the expected aqueous ethanol concentrations). Table 1: Coefficients and parameters for the model equations. Parameter Yp/s
Yos Pm
Ks ^m
Kl
P
Glucose value 0.470 0.44 129.9 0.565 0.662 283.7 0.25
Xylose value 0.400 0.3745 59.04 3.400 0.190 18.10 1.036
Units g/L g/L h-' g/L -
Parameter pp
Pw OG
OCX
ap
Value 789.34 998.23 0.3697 0.4093 0.1641
Units g/L g/L -
4. Desirable Extractant Properties A typical extractive fermentation process EFP includes a fermentor, the solvent recovery unit, and the product purification unit. The biotransformation reaction is carried out in the aqueous (nutrient) phase, which contains the substrate and the microorganisms. The recovery and purification stages are preferably done via simple distillation, and can be carried out in the same equipment according to the extract mixture. An appropriate solvent to extract the toxic product should preferably have the following properties, with particular emphasis on the low toxicity, the low solubility in the aqueous phase and the high selectivity for the product (Fournier, 1986; Karakatsoulis et al., 1995). •
Low extractant microbial toxicity (t), to guarantee biocompatibility to the microorganisms that perform the fermentation.
•
Low extractant losses to the nutrient phase (SI), to reduce the solvent makeup, and the cost of downstream treatment of the aqueous stream.
•
High distribution coefficient of the product to be removed (Mp), to remove the toxic bioproduct from the nutrient phase and reduce the inhibition.
•
Low distribution coefficient for essential nutrient(s) (Mk), to reduce the loss of substrate(s) in the extract stream. Similar constraints may apply to reaction byproducts, e.g. acetic acid.
•
High extractant selectivity (SE) (high Mp is usually accompanied by increased SE).
•
Easy separation of solute from extractant, preferably by simple distillation. A substantial difference in the boiling point temperatures of the two components can safely guarantee the absence of azeotropic formations and the ease of separation.
1125 Table 2: Operational parameters and optimization constraints for extractant design. Process conditions: Temperature = 298K, D^ = 2 h-^ S^ = 300g/L; SG = 30 g/L; S^ = 150g/L; Sx = 30 gyTL Solvent properties: selectivity to solute > 2.0 (wt); solute distribution coeffiecient > 1.3 (wt), losses to raffinate < 0.01 (wt); melting point temperature (Tm)> 288K; boiling point temperature (Tb): 300 PA / 0.85 or p < pA • 0.85 Additional property constraints to guide the design of promising extractants include that the material should be liquid, thermally stable, and chemically inert at the process conditions. Large density difference between the nutrient and extract phases enhances immiscibility. Also, the solvent cost is extremely important when selecting materials for large-scale industrial use. Requirements that cannot be formulated should be considered in the experiments and analyses following the screening stage.
5. Extractant Design Application The available solvent media for extractive fermentation are not efficient in addressing all the problem requirements. The system also includes significant amounts of xylose, which needs to be efficiently fermented to ethanol. The design objective is to maximize the ethanol productivity in the extract phase, while minimizing the molecular complexity. Details on the screening method and the stochastic optimization parameter choices can be found in Marcoulaki and Kokossis (2000a). The tool is interfaced to the proposed simulation model and the search. The mathematical model presented above can be used to select materials suitable for cellulosic wastes. Additionally, the new solvents should obey the set of constraints given on Table 2, according to the requirements set above. Mixture properties are predicted using UNIFAC at infinite dilution conditions. The consideration of xylose, glucose and acetic acid distributions would shrink the feasible solution space to the materials that their interactions with the carbonyl and carboxyl groups are tabulated. The search generates a variety of designs that satisfy the property constraints and serve the synthesis objectives. The obtained results indicate that the main group to increase the productivity (and the distribution coefficient) is CH2CN, and the presence of double bounds (e.g. CH2=CH, CH=CH, etc) is also found beneficial. Table 3 gives typical properties of the compounds designed here. Most of these materials, though simple in their molecular structure, are not found in common access property databases. This gives incentives for further studying the results of this work, to validate the choices using molecular simulation and laboratory experiments.
Table 3: Typical predicted properties of the designed extractants.

PDE    Mp     Sl%     Tb      -log(LC50)   ρE        EE%    Ck%    Tm
32.0   2.46   0.655   442 K   2.48         646 g/L   94.3   88.3   205 K
6. Conclusions
This work employs the merits of solvent design synthesis on a case study of industrial and environmental interest involving the production of bioethanol from pulp and paper industry wastes. The scope here is to suggest novel materials that appear promising in enhancing the conversion to ethanol and decreasing the subsequent treatment effort. The extractant design problem is formulated mathematically to include process data and the desirable/optimal properties of a suitably designed material. This work proposes a new process simulation model that includes the fermentation of xylose using improved yeast. The xylose concentration is significant when the substrate comes from cellulosic material. The new simulation model is interfaced with available molecular design synthesis algorithms to search for and select chemicals of optimal performance in the given process. Apart from increasing the process performance, the designed solvents should have a number of physical, thermodynamic and environmental properties within desirable ranges. The materials designed using the models and the procedure presented here can provide excellent starting points for more detailed analyses and laboratory experiments to validate and support the screening results. Future work could consider simultaneous saccharification, fermentation and extraction (Moritz and Duff, 1996), and the selection of multiple extractants suitable to enhance both bioprocesses.
7. References
Boukouvalas, C., Markoulaki, E., Magoulas, K. & Tassios, D., 1995, Separation Sci. Technol., 30, 2315.
Fournier, R.L., 1986, Biotechnol. Bioeng., 18, 1206.
Karakatsoulis, C.G., Dervakos, G.A. & Newsham, D.M.T., 1995, Proc. 1995 IChemE Research Event, Edinburgh, Vol. 2, paper 1121.
Kollerup, F. & Daugulis, A.J., 1985, Biotechnol. Bioeng., 17, 1335.
Krishnan, M.S., Ho, N.W.Y. & Tsao, G.T., 1999, Appl. Biochem. Biotech., 77-79, 373.
Marcoulaki, E.C. & Kokossis, A.C., 2000a, Chem. Eng. Sci., 55, 2529.
Marcoulaki, E.C. & Kokossis, A.C., 2000b, Chem. Eng. Sci., 55, 2547.
Marcoulaki, E.C., Kokossis, A.C. & Batzias, F.A., 2000, Comput. Chem. Eng., 24, 705.
Marcoulaki, E.C., Kokossis, A.C. & Batzias, F.A., 2001, in Gani, R. & Jorgensen, S.B., eds., Computer-Aided Chemical Engineering, 9, Elsevier, 451.
Minier, M. & Goma, G., 1982, Biotechnol. Bioeng., 14, 1565.
Moritz, J.W. & Duff, S.J.B., 1996, Biotechnol. Bioeng., 49, 504.
Perry, R.H. & Green, D.W., 1997, Perry's Chemical Engineers' Handbook, 7th ed., McGraw-Hill, 2-112.
Tangnu, S.K., 1982, Process Biochem., 17(3), 36.
Ulgiati, S., 2001, Crit. Rev. Plant Sci., 20(1), 71.
Wooley, R.J. & Putsche, V., 1996, Development of an ASPEN PLUS physical property database for biofuels components, NREL/MP-425-20685.
Wyman, C.E., 2001, Appl. Biochem. Biotech., 91-93, 5.
8. Acknowledgements
The authors acknowledge the financial support from the Research Center, Univ. Piraeus.
Optimisation of Fed-Batch Bioreactors Using Genetic Algorithms: Two Control Variables
Debasis Sarkar and Jayant M. Modak
Department of Chemical Engineering, Indian Institute of Science, Bangalore 560012, INDIA
Email: [email protected], [email protected]
Abstract
The determination of optimal feed rate profiles for fed-batch bioreactors with two feed rates is a numerically difficult problem involving two singular control variables. A solution strategy based on a genetic algorithm approach is proposed for the determination of optimal substrate feeding policies for fed-batch bioreactors with two control variables. The efficiency of the algorithm is demonstrated for a fed-batch bioreactor for the production of foreign protein by recombinant bacteria. The control policies obtained retain the characteristics of the profiles generated through the application of control theory.
1. Introduction
The usual objective in optimal control of a fed-batch bioreactor is to maximize the biomass and/or the metabolite production, and the optimization has traditionally been sought with respect to the substrate feed rate. Determination of the optimal substrate feed rate is a problem in singular control, so called because the control variable appears linearly in the dynamic equations describing the process and/or in the performance index to be optimized. Several optimization studies of fed-batch processes have been reported in the literature, and many have focused on manipulating only one control variable, namely the feed rate of a single growth-limiting substrate, for example, a carbon source. However, many fermentation processes involve microbial cells that require more than one substrate for their growth (Modak and Lim, 1989). It has long been realized that the production of antibiotics and enzymes requires precise control of the nitrogen source in addition to the carbon source. The production of a desired chemical from recombinant cell cultures often involves the addition of either an inducer or a repressor along with the primary growth-limiting nutrient. The optimization problem for such processes involves the determination of the optimal feed rates of two nutrients: either two growth-limiting substrates such as carbon and nitrogen, or one growth-limiting substrate and an inducer or a repressor. The optimization of bioreactor performance by manipulating two control variables simultaneously using optimal control theory has been reported in the literature (Modak and Lim, 1989; Lee and Ramirez, 1994; Lee et al., 1998). There are also some reports of the use of direct methods for the solution of such problems (Roubos et al., 1999; Mekarapiruk and Luus, 2000). We present here an optimization technique based on a Genetic Algorithm to determine the optimal substrate feeding policy for fed-batch bioreactors with two control variables. A customized genetic algorithm with a problem-specific representation for the decision
variables has been used, and the algorithm incorporates, to some extent, the information available from the use of optimal control theory. The efficiency of the proposed method is illustrated by means of an optimal control problem taken from the literature.
2. Optimization Problem
For a typical fed-batch operation with two control variables, the mass balance equations for cells, substrates, product, and total mass (assuming constant density) can be written as follows:

\frac{d(XV)}{dt} = \mu(S_1, S_2)\,XV \qquad (1)

\frac{d(S_1 V)}{dt} = F_1 S_{1F} - \sigma_1(S_1, S_2)\,XV \qquad (2)

\frac{d(S_2 V)}{dt} = F_2 S_{2F} - \sigma_2(S_1, S_2)\,XV \qquad (3)

\frac{d(PV)}{dt} = \pi(S_1, S_2)\,XV \qquad (4)

\frac{dV}{dt} = F_1 + F_2 \qquad (5)
where X, S1, S2 and P are the concentrations of cells, substrate-1, substrate-2 and product, respectively; μ, σ1, σ2 and π are the specific rates of growth, substrate-1 consumption, substrate-2 consumption and metabolite production, respectively. The optimization problem is to determine the feed rate policy for both substrates (F1(t), F2(t)) during the entire period of operation (0 ≤ t ≤ tf) that maximizes an objective function defined in terms of the status of the bioreactor at the end of the operation, for example, to maximize the amount of metabolite at the end of the operation:

\text{Maximize} \quad PI = (PV)_{t_f} \qquad (6)
subject to:

V(t) \le V_{\max}; \quad F_{i,\min} \le F_i(t) \le F_{i,\max}, \quad i = 1, 2; \quad 0 \le t \le t_f \qquad (7)
where the final time tf may be free or fixed a priori. The optimization problem posed by Eq. 6, along with Eqs. 1 through 5 and 7, is a singular control problem due to the linear appearance of the control variables F1 and F2 in these equations. Optimal control theory states that the optimal feeding pattern consists of maximum (Fmax), minimum (Fmin) and singular (Fs) feed rates (Bryson and Ho, 1975). However, the exact sequence of these feed patterns, such as Fmax-Fmin-Fs-Fmin, is not known a priori. Although Fmax and Fmin (= 0) are usually specified, the singular-interval feed rate (Fs) is also not known a priori. The proposed GA-based approach determines the correct feeding sequences as well as the feed rates in the singular intervals for fed-batch processes with two control variables. By application of optimal control theory, the singular feed rate (Fs) can be expressed as a nonlinear feedback expression involving state and adjoint variables (Modak, Lim and Tayeb, 1986). Depending on the process kinetics, this feedback law maintains the substrate concentration constant or allows its variation in a predetermined manner. If the
singular feed rate maintains the substrate concentration constant during the singular interval, the singular feed rate can be expressed, for a scalar optimization problem, as:

F_s = \frac{\sigma XV}{S_F - S} \qquad (8)
To account for the variation of the substrate concentration during the singular interval, we include a correction term as follows:

F_s = \frac{(\sigma + \Delta F_c)\,XV}{S_F - S} \qquad (9)

where ΔFc is taken as a nonlinear function of the state variables involving a few unknown coefficients, ΔFc = a S^b P^c. The optimal evaluation of the unknown coefficients a, b and c allows us to express the singular feed rate as a nonlinear feedback control law. For optimization problems with two control variables there can be three different possibilities at any given point of time: (i) only F1 is singular, (ii) only F2 is singular, and (iii) both F1 and F2 are singular. If only F1 is singular and F2 is either at its maximum or minimum value, F1s can be computed simply by setting dS1/dt = 0 in Eq. 2. If we include the correction term, this results in:

F_{1s} = \frac{1}{S_{1F} - S_1}\left(F_2 S_1 + \sigma_1 XV + \Delta F_{c1}\,XV\right) \qquad (10)

Similarly, if only F2 is singular and F1 is either at its maximum or minimum value, then F2s can be computed as:

F_{2s} = \frac{1}{S_{2F} - S_2}\left(F_1 S_2 + \sigma_2 XV + \Delta F_{c2}\,XV\right) \qquad (11)

If both F1 and F2 are singular, we can find F1s and F2s by solving Eqs. 10 and 11 simultaneously. This results in:

F_{1s} = \frac{\left[(\sigma_1 + \Delta F_{c1})(S_{2F} - S_2) + (\sigma_2 + \Delta F_{c2})\,S_1\right]XV}{(S_{1F} - S_1)(S_{2F} - S_2) - S_1 S_2} \qquad (12)

F_{2s} = \frac{\left[(\sigma_2 + \Delta F_{c2})(S_{1F} - S_1) + (\sigma_1 + \Delta F_{c1})\,S_2\right]XV}{(S_{1F} - S_1)(S_{2F} - S_2) - S_1 S_2} \qquad (13)
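For illustration, a minimal Python sketch of the feedback laws above is given below; the specific consumption rates and correction terms are assumed to be supplied by the kinetic model, and the symbols follow Eqs. 10-13.

```python
# A minimal sketch of the singular feed-rate feedback laws of Eqs. 10-13. The
# specific consumption rates (sigma1, sigma2) and the correction terms
# (dFc1, dFc2) are assumed to come from the kinetic model.

def singular_feed_1(X, V, S1, S1F, F2, sigma1, dFc1):
    """Eq. 10: F1 is singular while F2 sits at a bound (dS1/dt = 0)."""
    return (F2 * S1 + sigma1 * X * V + dFc1 * X * V) / (S1F - S1)

def singular_feeds_both(X, V, S1, S2, S1F, S2F, sigma1, sigma2, dFc1, dFc2):
    """Eqs. 12-13: both controls singular; Eqs. 10 and 11 solved simultaneously."""
    a1 = (sigma1 + dFc1) * X * V
    a2 = (sigma2 + dFc2) * X * V
    det = (S1F - S1) * (S2F - S2) - S1 * S2   # common denominator of Eqs. 12-13
    F1s = (a1 * (S2F - S2) + a2 * S1) / det
    F2s = (a2 * (S1F - S1) + a1 * S2) / det
    return F1s, F2s
```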
3. Genetic Algorithm for Optimal Feed Rate Determination
An important aspect of our algorithm is the representation of the decision variables that encode a substrate feeding profile to the bioreactor. According to optimal control theory, the optimal feeding pattern consists of intervals of maximum, minimum and singular feed. Since we do not know such a pattern a priori, the decision variables for our optimization procedure are the feeding pattern, the switching times, and also the
coefficients involved in the correction term ΔFc for the calculation of the feed rate in the singular intervals, for both control variables. A chromosome here thus has three different parts: the pattern (feeding sequence), the coefficients for the correction term, and the switching times. Indicating the maximum feed by the integer 1, the singular feed by 2, and the minimum feed by 3, the first part of the chromosome (the feeding sequence for each control variable) can be constituted in the initial population by randomly generating its genes from the integer set {1, 2, 3}. The unknown coefficients (a, b, c, etc.) of the correction term and the switching times are coded as real numbers, and they can be randomly generated in the initial population between suitable upper and lower bounds. Care should be taken that the switching times are always in ascending order for a meaningful representation of the feed rate policy. The following represents a typical chromosome consisting of 4 intervals, 3 coefficients, and 3 switching times for each of the control variables.
pattern 1: 1 3 2 2                       pattern 2: 3 2 3 1
coefficients 1: 0.25, -0.27, 0.56        coefficients 2: 0.32, 0.68, 0.21
switching times 1: 2.25, 3.55, 7.50      switching times 2: 1.45, 5.24, 6.32

The chromosome given above represents the following feed profiles:

Control variable 1: F1,max for 0 ≤ t < 2.25; F1,min for 2.25 ≤ t < 3.55; F1,s for 3.55 ≤ t ≤ tf (the two consecutive singular intervals merge into a single singular arc).
Control variable 2: F2,min for 0 ≤ t < 1.45; F2,s for 1.45 ≤ t < 5.24; F2,min for 5.24 ≤ t < 6.32; F2,max for 6.32 ≤ t ≤ tf.

To evaluate the fitness of a chromosome, the differential equations describing the process (Eqs. 1 through 5) are integrated with the feed rate profile represented by that chromosome, and the performance index given by Eq. 6 is evaluated. The well-known stochastic remainder roulette wheel is used as the reproduction operator. The integer part of the chromosome undergoes a simple crossover similar to binary crossover, and the real part of the chromosome undergoes arithmetic crossover. As with crossover, we use different types of mutation in different parts of the chromosome. In the pattern part, we use two different types of mutation: uniform mutation and shift mutation. The real part of the chromosome undergoes the non-uniform mutation proposed by Janikow and Michalewicz (1991). We also include an elitism reservation strategy so that the best chromosome of each generation always survives. In the present study, the two best chromosomes of each generation are explicitly transferred to the next generation. Since the switching times must be in ascending order for a meaningful representation of a feasible trial solution, we also include an order-maintaining procedure.
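As a minimal illustration of this encoding, the sketch below decodes one control variable's genes into a feed-rate profile; the final time used is a hypothetical value, and the correction coefficients are omitted for brevity.

```python
# A minimal sketch of decoding one control variable's genes (pattern and
# switching times) into a list of feed intervals. The only repair shown is
# the ordering of switching times.

FEED = {1: "Fmax", 2: "Fsingular", 3: "Fmin"}

def decode(pattern, switch_times, t_final):
    """Map e.g. pattern [1, 3, 2, 2] and times [2.25, 3.55, 7.50] to a list
    of (feed type, t_start, t_end) intervals."""
    bounds = [0.0] + sorted(switch_times) + [t_final]   # order-maintaining step
    return [(FEED[p], bounds[i], bounds[i + 1]) for i, p in enumerate(pattern)]

# Genes of control variable 1 from the example chromosome above (t_final assumed):
for interval in decode([1, 3, 2, 2], [2.25, 3.55, 7.50], t_final=10.0):
    print(interval)
```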
4. Results and Discussion
Lee and Ramirez (1994) developed the optimal nutrient and inducer feeding strategy for the fed-batch production of induced foreign protein using recombinant bacteria. They used optimal control theory and showed the existence of singular control arcs for this system.

Table 1. GA parameters used for optimization.
Population size: 100
Crossover probability: Pc = 1.0 - 0.5 g/gmax
Non-uniform mutation probability: Pm = 0.1 + 0.9 g/gmax
Uniform mutation probability: 0.1
Shift mutation probability: 0.05
Number of elitist individuals: 2
Maximum number of generations (gmax): 1000

Thereafter, this fed-batch system with two control variables has been widely used for optimal control studies by various algorithms (Carrasco and Banga, 1997; Roubos et al., 1999; Mekarapiruk and Luus, 2000; Jayaraman et al., 2001). The details of the model can be found in Lee and Ramirez (1994). The objective is to maximize the profitability of the process for a specified final time of fed-batch operation. This can be described mathematically by the following performance functional:
\text{Maximize} \quad PI = x_4(t_f)\,x_1(t_f) - Q\int_0^{t_f} u_2(t)\,dt \qquad (14)
where x4 is the foreign protein concentration, x1 is the reactor volume, and Q is the ratio of the cost of the inducer to the value of the protein product. The fermentation time is fixed at 10 h. The nutrient (glucose) feed rate (u1) and the inducer feed rate (u2) are constrained as 0 ≤ ui ≤ 1.0, i = 1, 2. Two different cases have been considered for this two-control-variable problem: (i) the cost of the inducer can be neglected, Q = 0; and (ii) the cost of the inducer cannot be neglected, Q = 5. For each feed rate, the chromosome consists of six intervals to represent the feeding sequence, three correction coefficients and five switching times. Thus, the number of decision variables for the two feed rates is 28. The GA parameters used for the optimization study are summarized in Table 1. Fig. 1A-B shows the optimal control profiles obtained for the case Q = 0. The feeding sequence for the nutrient feed rate is singular-maximum-minimum (213), while that for the inducer is minimum-singular-maximum (321). This resulted in a performance index of 1.009666. Fig. 1C-D shows the optimal control profiles obtained for the case Q = 5; the performance index obtained is 0.816605. The feeding sequence for the nutrient feed rate is minimum throughout (3), while that for the inducer is minimum-singular-minimum (323). It can be seen that when the cost of the inducer can be neglected (Q = 0), both feed rates have to be varied for maximum performance, whereas when the cost of the inducer cannot be neglected (Q = 5), only the inducer feed rate has to be varied, as the nutrient feed rate is maintained at zero for the entire period of operation. The feed rate profiles and the performance indices obtained by the proposed method compare favorably with the results reported in the literature.
Figure 1. Optimal control policies for Q = 0 (A, B) and Q = 5 (C, D).
5. References
Carrasco, E.F. and Banga, J.R., 1997, Ind. Eng. Chem. Res., 36(6), 2252.
Janikow, C.Z. and Michalewicz, Z., 1991, Proc. 4th Int. Conf. on Genetic Algorithms, San Mateo, CA, 31.
Jayaraman, V.K., Kulkarni, B.D., Gupta, K., Rajesh, J. and Kusumaker, H.S., 2001, Biotechnol. Prog., 17, 81.
Lee, J. and Ramirez, W.F., 1994, AIChE J., 40, 899.
Lee, J.H., Hong, J. and Lim, H.C., 1998, Chem. Eng. Comm., 164, 61.
Mekarapiruk, W. and Luus, R., 2000, Ind. Eng. Chem. Res., 39, 84.
Modak, J.M., Lim, H.C. and Tayeb, Y.J., 1986, Biotech. Bioeng., 28, 1396.
Modak, J.M. and Lim, H.C., 1989, Chem. Eng. J., 42, B15.
Roubos, J.A., van Straten, G. and van Boxtel, A.J.B., 1999, J. Biotechnol., 67(2/3), 173.
Efficient Modeling of ¹³C-Labeling Distributions in Microorganisms
Wouter A. van Winden¹,², Peter J.T. Verheijen¹, Joseph J. Heijnen²
¹ Process Systems Engineering, ² Kluyver Laboratory for Biotechnology, Delft University of Technology, The Netherlands, phone: +31-15-275307, fax: +31-15-2782355, e-mail: [email protected]
Abstract
Metabolic networks can be analysed using 2D [¹³C,¹H] COSY (NMR) measurements of ¹³C-labeled metabolites. A framework is presented whereby the steady-state reaction rates are deduced from conventional isotopomer balances. This model is reduced by removing redundant nodes and lumping equilibrium pools. Conversion of the balances to the recently introduced bondomer notation further reduces the complexity. When the reduction approaches are applied to the glycolysis and pentose phosphate pathway, the number of equations is reduced by a factor of three without loss of information.
1. Introduction
A central issue in metabolic engineering is the determination of the steady-state reaction rates (or 'fluxes') in the metabolic reaction network in cells. During the past decade a ¹³C-labeling based tracer technique has been developed for this purpose. ¹³C-labeling experiments consist of cultivating a microorganism in a continuous bioreactor system fed with ¹³C-enriched carbon feedstock (e.g. glucose), sampling ¹³C-labeled cells from the reactor, and measuring the ¹³C-distribution of intracellular compounds by means of mass spectrometry or nuclear magnetic resonance spectroscopy. Metabolic reaction rates are subsequently determined by fitting the labeling measurements with a model of the metabolic reaction network wherein the reaction rates are the free variables.
Figure 1. I: a small network showing carbon atom (circles) transitions. II: isotopomers of the compounds (white and black circles represent ¹²C- and ¹³C-isotopes, respectively).
A mathematical model for this system traditionally consists of a set of balances for all isotopomers of each metabolite in the reaction network (Schmidt et al., 1997). Isotopomers are chemically identical molecules that only vary with respect to their ¹²C- and ¹³C-distribution. The isotopomer concept is illustrated in Fig. 1. The isotopomer balances of the eight isotopomers of compound D in Fig. 1-I are given in Eq. 1, where subscripts 0 and 1 represent ¹²C- and ¹³C-isotopes, respectively, and '⊗' denotes an elementwise multiplication of two vectors.
v_1\left(\mathrm{IMM}_{A\to D}\cdot \mathbf{a}\right) \otimes \left(\mathrm{IMM}_{B\to D}\cdot \mathbf{b}\right) + v_2\,\mathrm{IMM}_{C\to D}\cdot \mathbf{c} = v_3\,\mathbf{d} \qquad (1)
The matrices here are called isotopomer mapping matrices (IMM), and the vectors are isotopomer distribution vectors (idv). A compound that contains n carbon atoms has 2^n isotopomers. By consequence, the total set of all isotopomer balances for a metabolic network with tens of compounds that contain up to 7 carbon atoms each consists of many hundreds of non-linear equations that have to be solved in each step of the iterative fitting procedure. Here we present three model reduction approaches that lead to a substantial reduction of the size of ¹³C-labeling distribution models without any loss of information. The model reduction is exemplified by the glycolysis and pentose phosphate pathway, which are two central reaction pathways in the carbon metabolism of most cells (see Fig. 2). The full isotopomer model of this network consists of 618 isotopomer balances (64 for glucose 6-phosphate (g6p), fructose 6-phosphate (f6p), fructose 1,6-bisphosphate (fbp), 6-phosphoglucono-δ-lactone (6pgl) and 6-phosphogluconate (6pg); 32 for ribulose 5-phosphate (ru5p), xylulose 5-phosphate (x5p) and ribose 5-phosphate (r5p); 16 for erythrose 4-phosphate (e4p); 128 for sedoheptulose 7-phosphate (s7p); 8 for dihydroxyacetonephosphate (dhap), glyceraldehyde 3-phosphate (g3p), 1,3-bisphosphoglycerate (bpg), 3-phosphoglycerate (3pg), 2-phosphoglycerate (2pg), phosphoenolpyruvate (pep) and pyruvate (pyr); 2 for carbondioxide (cd)). Note that no isotopomer balances are made for glucose (glc), as its ¹³C-labeling is chosen by the experimenter.
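As an illustration of how such balances are evaluated in practice, the following minimal Python sketch implements an Eq. 1-style isotopomer balance for the toy network of Fig. 1. The atom-transition pattern encoded in the mapping matrices is an assumption for illustration, not read from the figure, and all labeling fractions are hypothetical.

```python
import numpy as np

# Toy condensation A (2 C) + B (1 C) -> D (3 C), plus C -> D with identical
# carbon skeleton. The mapping matrices below assume D's carbons 1-2 come
# from A and carbon 3 comes from B.

IMM_AD = np.zeros((8, 4))   # maps the 4 isotopomers of A into the 8 of D
IMM_BD = np.zeros((8, 2))   # maps the 2 isotopomers of B into the 8 of D
for i in range(8):
    IMM_AD[i, i >> 1] = 1.0   # high two bits of D's label come from A
    IMM_BD[i, i & 1] = 1.0    # low bit of D's label comes from B

a = np.array([0.25, 0.25, 0.25, 0.25])   # idv of A (hypothetical)
b = np.array([0.5, 0.5])                  # idv of B (hypothetical)
c = np.ones(8) / 8.0                      # idv of C (hypothetical)
v1, v2 = 1.0, 0.5
v3 = v1 + v2                              # flux balance around D

# Eq. 1: elementwise product of the mapped A and B labelings, plus the C inflow
d = (v1 * (IMM_AD @ a) * (IMM_BD @ b) + v2 * c) / v3
print(d.sum())  # idv fractions sum to 1
```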
2. Model Reduction Approaches
2.1. Removal of linear and divergent nodes from the network
A first reduction of ¹³C-labeling distribution models is based on the number of reactions entering and leaving the metabolite pools (Van Winden et al., 2001). Three basic structures exist in metabolic networks: linear, divergent and convergent nodes.
Figure 2: The glycolysis and pentose phosphate pathway. Double-headed arrows denote bidirectional reactions; 'f' and 'b' indicate the corresponding forward and backward reactions.
In their article, Wiechert et al. (1999) argue that pools with only one influx yield only so-called 'labeling redundancies' between isotopomer fractions and do not give information about fluxes. It can be shown that only the labeling distributions of metabolites at convergent nodes yield flux ratios. This is demonstrated for the metabolic network shown in Fig. 2 by analyzing the three types of nodes. The flux balance and labeling balance of the linear node around metabolite 6pgl in Fig. 2, and the combination of both balances, are shown in Eq. 2:
v_{11}\,\mathrm{IMM}_{g6p\to 6pgl}\cdot \mathbf{g6p} = v_{12}\,\mathbf{6pgl}; \qquad \mathrm{IMM}_{g6p\to 6pgl}\cdot \mathbf{g6p} = \mathbf{6pgl} \qquad (2)
IMM_{g6p→6pgl} represents the isotopomer mapping matrix that describes which isotopomer of the reaction product 6pgl is formed from each isotopomer of the substrate g6p. The vectors g6p and 6pgl contain the labeling information of the metabolites concerned. The combined balance in Eq. 2 enables the calculation of the labeling distribution of 6pgl from that of g6p, but does not give information about the fluxes. Eq. 3 shows that a similar labeling redundancy follows from a divergent node (e.g. pep in Fig. 2):
v_{9f}\,\mathrm{IMM}_{2pg\to pep}\cdot \mathbf{2pg} = (v_{9b} + v_{10})\,\mathbf{pep}; \qquad \mathrm{IMM}_{2pg\to pep}\cdot \mathbf{2pg} = \mathbf{pep} \qquad (3)
A different set of relations results from combining the flux balance and the labeling balance of a convergent node (e.g. g6p in Fig. 2):

v_1\left(\mathrm{IMM}_{glc\to g6p}\cdot \mathbf{glc}\right) + v_{2b}\left(\mathrm{IMM}_{f6p\to g6p}\cdot \mathbf{f6p}\right) = (v_1 + v_{2b})\,\mathbf{g6p}

v_1\left(\mathrm{IMM}_{glc\to g6p}\cdot \mathbf{glc} - \mathbf{g6p}\right) + v_{2b}\left(\mathrm{IMM}_{f6p\to g6p}\cdot \mathbf{f6p} - \mathbf{g6p}\right) = \mathbf{0} \qquad (4)

By means of linear algebra, Eq. 4 can be rewritten as one single equation in which the ratio of the fluxes entering the g6p pool appears. The remaining equations are labeling redundancies. Eq. 4 shows that only the ratio v1/v2b can be determined from this labeling balance, and that a single element of vector g6p (or: one ¹³C-labeling measurement) suffices to do so, provided that the labeling distributions of glc and f6p are known. The above shows that all linear and divergent nodes can be removed from the network in Fig. 2 without losing any ¹³C-labeling information. Thus, the metabolite pools fbp, pep, 6pgl and 6pg are removed. When pool pep is removed, 2pg becomes a single-influx pool and is removed as well. The same procedure applies to 3pg and bpg.

2.2. Lumping equilibrium pools
A second possible reduction of metabolic networks is the lumping of metabolite pools whose ¹³C-labeling information may be considered instantaneously equilibrated by large exchange fluxes. This reduction is often applied to the hexose 6-phosphate pools (g6p and f6p) in the glycolysis and the pentose 5-phosphate pools (ru5p, x5p, r5p) in the pentose phosphate pathway (e.g. Schmidt et al., 1997; Follstad and Stephanopoulos, 1998). Lumping g6p and f6p, and lumping ru5p, x5p and r5p, further simplifies the metabolic reaction network of Fig. 2 to the reduced version shown in Fig. 3.
Figure 3: The reduced metabolic network.
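To make the convergent-node result of Section 2.1 concrete, the following minimal sketch recovers the flux ratio v1/v2b from a single measured labeling fraction, using a toy one-carbon pool so that the labeling vectors have length two; all labeling values are hypothetical.

```python
import numpy as np

# Eq. 4 for a toy one-carbon convergent pool: the measured pool labeling is a
# flux-weighted mixture of the two (already mapped) influx labelings.

glc_in = np.array([0.9, 0.1])   # labeling of the first influx (hypothetical)
f6p_in = np.array([0.4, 0.6])   # labeling of the second influx (hypothetical)
g6p    = np.array([0.6, 0.4])   # measured labeling of the pool (hypothetical)

# v1*(glc_in - g6p) + v2b*(f6p_in - g6p) = 0, so one element fixes the ratio:
ratio = -(f6p_in[1] - g6p[1]) / (glc_in[1] - g6p[1])   # v1 / v2b
print(ratio)  # 0.666...: a single measured fraction determines v1/v2b
```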
2.3. Bondomer balancing
A measurement method that is often used to determine the ¹³C-labeling of metabolites is 2D [¹³C,¹H] correlation nuclear magnetic resonance spectroscopy (COSY). This method yields relative intensities of multiplets in the NMR spectra that correspond to the relative amounts of groups of isotopomers in which the observed atom is ¹³C-labeled and the adjacent carbon atoms in the carbon backbone are either ¹³C-labeled or not (Szyperski, 1995). If the compound whose labeling pattern is measured was synthesized by a microorganism growing on a mixture of uniformly ¹³C-labeled and naturally labeled carbon substrate, then the relative intensities of the NMR fine structures can be calculated from the fractions of molecules that stem from one or more substrate molecule(s) in the feed medium. This is done using so-called 'probability equations' (Szyperski, 1995) that require as input the fraction of uniformly ¹³C-labeled medium substrate and the fraction of naturally ¹³C-labeled carbon. Molecules that stem from one or more medium substrate molecules and that are calculated using the probability equations are both chemically and physically identical. They only vary in the numbers and positions of C-C bonds that have remained intact since the medium substrate molecule entered the metabolism. These entities are hereby defined as 'bondomers' (Van Winden et al., 2002). Bondomers of a given compound are denoted by an abbreviation of the compound and a binary subscript. Whereas in isotopomer notation the binary subscript '0' denotes a ¹²C-atom and a '1' a ¹³C-atom (Schmidt et al., 1997), in bondomer notation '0' denotes a C-C bond that has been formed in one of the metabolic reactions and '1' denotes a C-C bond that was already present in the medium substrate molecule. A linear or branched molecule that has a backbone of n carbon atoms has n-1 C-C bonds. Such a molecule has 2^n isotopomers and 2^(n-1) bondomers. The bondomer distribution of a molecule can be simulated in a way that is completely analogous to the simulation of isotopomers. Bondomer balancing is based on bondomer mapping matrices (BMMs) that indicate which bondomers of the reaction products are formed from the bondomers of the reaction substrates. Each column of a BMM corresponds to a bondomer of (one of) the reaction substrate(s), each row to a bondomer of (one of) the product(s). The bondomer distributions are given as bondomer distribution vectors (bdv). The bondomer balances of the four bondomers of compound D in Fig. 1-I are given in Eq. 5:

v_1\,\mathrm{BMM}_{A\to D}\cdot \mathbf{a} + v_2\,\mathrm{BMM}_{C\to D}\cdot \mathbf{c} = v_3\,\mathbf{d} \qquad (5)
In a bondomer balance, the inflows and outflows of all 2^(n-1) bondomers of an n-carbon compound are accounted for. The inflow terms in the balance are the products of the fluxes, BMMs and substrate bdvs of the reactions that lead to the balanced metabolite. The outflow term is obtained by multiplying the bdv of the balanced metabolite by the sum of the fluxes of the reactions in which the metabolite itself serves as a substrate. In the inflow term corresponding to reaction v1 in Eq. 5, the bondomer distribution of B
does not appear, for the simple reason that a one-carbon compound has no C-C bonds and therefore has only one (= 2^0) bondomer fraction, equalling 1. Eq. 5 shows that bondomer modeling leads not only to fewer, but also to simpler, balances than isotopomer modeling (Eq. 1). As shown before, the isotopomer model of Fig. 2 consists of 618 isotopomer balances. Due to the removal and lumping of metabolite pools, the reduced model of Fig. 3 counts only 258 isotopomer balances. As 2D [¹³C,¹H] COSY data can be calculated from bondomers, one need not simulate all isotopomers to fit these measured data with simulated data. Instead, one can apply a bondomer model, which contains half as many balances: only 129 (32 for hexose 6-phosphate; 16 for pentose 5-phosphate; 8 for erythrose 4-phosphate; 64 for sedoheptulose 7-phosphate; 4 for glyceraldehyde 3-phosphate and pyruvate; 1 for carbondioxide). An even further reduction is possible by applying the 'cumomer' concept (Wiechert et al., 1999) to the bondomer balances, but this is outside the scope of this paper. See Van Winden et al. (2002) for details.
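The bookkeeping behind these counts is easy to verify; the following minimal sketch recomputes the 258 isotopomer and 129 bondomer balances of the reduced network from the carbon counts of the retained pools.

```python
# A pool with n carbons has 2**n isotopomers but only 2**(n-1) bondomers.

pools = {  # carbon counts of the pools retained in the reduced network (Fig. 3)
    "hexose 6-phosphate": 6, "pentose 5-phosphate": 5, "erythrose 4-phosphate": 4,
    "sedoheptulose 7-phosphate": 7, "glyceraldehyde 3-phosphate": 3,
    "pyruvate": 3, "carbondioxide": 1,
}
iso = sum(2 ** n for n in pools.values())          # isotopomer balances
bond = sum(2 ** (n - 1) for n in pools.values())   # bondomer balances
print(iso, bond)  # 258 129, matching the counts quoted in the text
```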
3. Conclusions
In this paper we discussed three methods that can be applied to reduce the size of mathematical models that give the ¹³C-labeling distribution in the metabolites of a biochemical reaction network as a function of the reaction rates in that network. We illustrated the methods by applying them to the metabolic network consisting of the glycolysis and pentose phosphate pathway. For this case study, the model was reduced from the initial 618 balances to a mere 129 balances, without loss of information. This reduction shows that models can sometimes be trimmed by carefully checking that only the needed output is simulated and that all redundancies are removed from the model. This saves computational time, which is especially relevant in problems that require many repeated simulations, such as iterative parameter fitting procedures and optimization. Additionally, the achieved model reduction has the advantage that it makes the model amenable to a priori identifiability analysis. This allows a better design of ¹³C-labeling experiments.
4. References
Follstad, B.D. and Stephanopoulos, G., 1998, Eur. J. Biochem., 252.
Schmidt, K., Carlsen, M., Nielsen, J. and Villadsen, J., 1997, Biotechnol. Bioeng., 55, 6.
Szyperski, T., 1995, Eur. J. Biochem., 232.
Van Winden, W.A., Heijnen, J.J., Verheijen, P.J.T. and Grievink, J., 2001, Biotechnol. Bioeng., 74, 6.
Van Winden, W.A., Heijnen, J.J. and Verheijen, P.J.T., 2002, Biotechnol. Bioeng., 80, 7.
Wiechert, W., Möllney, M., Isermann, N., Wurzel, M. and De Graaf, A.A., 1999, Biotechnol. Bioeng., 66, 2.
Fuzzy Goal Attainment Problem of a Beer Fermentation Process using Hybrid Differential Evolution
Feng-Sheng Wang
Department of Chemical Engineering, National Chung Cheng University, Chia-Yi 621-02, Taiwan
E-mail: [email protected]
Abstract
In this study, we applied a fuzzy decision-making method to simultaneously determine the optimal cooling rate and supplied sugar concentrations toward maximizing the ethanol production rate at the minimum fermentation time. For real applications, the problem includes three additional fuzzy constraints. The problem is then converted into a fuzzy goal attainment problem by eliciting membership functions for each of the objective functions and constraints. Hybrid differential evolution is applied to solve the goal attainment problem and obtain a unique solution. For comparison, we also applied the weighted sum method to solve the crisp optimization problem. The results show that the proposed approach is more practicable than the weighted sum method.
1. Introduction
In real process optimization, process engineers have to deal with different kinds of uncertainty, and the nature of the uncertainty may differ. Imprecision resulting from the stochastic character of a process is described by probabilistic methods. Uncertainty of other types, such as credibility, preference, possibility and necessity, is treated in other ways. The most popular methods to handle such uncertainty are interval arithmetic, sensitivity analysis and fuzzy set theory. The fuzzy set method, because of its generality and flexibility, is the most widely applied. Bellman and Zadeh (1970) introduced linear membership functions to convert fuzzy linear optimization problems into linear programming problems. For a fuzzy nonlinear optimization problem, the fuzzy objective function and fuzzy constraints can be transformed into a conventional nonlinear programming problem, which is, in general, a nonconvex optimization problem. To obtain a global solution, a global optimization method should be applied to solve the nonconvex problem. In this study, we introduce a fuzzy decision-making approach to design a fuzzy multiobjective optimization problem for batch beer fermentation. Hybrid differential evolution (Chiou and Wang, 1999) is then applied to solve the converted nonconvex problem and obtain a Pareto solution.
2. Process Model
The mathematical model of batch beer fermentation used here is based on the work of Gee and Ramirez (1988). In the mathematical modeling of ethanol production by yeast, biomass growth and ethanol formation are assumed to be limited by three sugars: glucose, maltose and maltotriose. The rates of uptake of the three sugars are given by the equations:
\frac{ds_1}{dt} = -\Re_1(s_1)\,x, \quad s_1(0) = \text{unspecified} \qquad (1)

\frac{ds_2}{dt} = -\Re_2(s_1, s_2)\,x, \quad s_2(0) = \text{unspecified} \qquad (2)

\frac{ds_3}{dt} = -\Re_3(s_1, s_2, s_3)\,x, \quad s_3(0) = \text{unspecified} \qquad (3)

The reaction rates ℜ1, ℜ2 and ℜ3 are expressed in terms of the mole concentrations of glucose, maltose and maltotriose, s1, s2 and s3. The maximum reaction velocities, Michaelis constants and inhibition constants in the reaction rates follow the Arrhenius dependency. The definitions of the symbols and their corresponding data can be obtained from the literature (Gee and Ramirez, 1988; Wang and Jing, 2000). The rates of biomass and ethanol production are proportionally related to the uptakes of the individual sugars by constant yield coefficients, as in the following equations:
\frac{dx}{dt} = -\left(Y_{x1}\frac{ds_1}{dt} + Y_{x2}\frac{ds_2}{dt} + Y_{x3}\frac{ds_3}{dt}\right) \qquad (4)

\frac{dp_e}{dt} = -\left(Y_{e1}\frac{ds_1}{dt} + Y_{e2}\frac{ds_2}{dt} + Y_{e3}\frac{ds_3}{dt}\right) \qquad (5)
where the stoichiometric yield factors can be obtained from the literature (Gee and Ramirez, 1988; Wang and Jing, 2000). The rate of temperature change in the fermenting medium is given by

\frac{dT}{dt} = \frac{1}{\rho c_p}\left(\Delta H_1\frac{ds_1}{dt} + \Delta H_2\frac{ds_2}{dt} + \Delta H_3\frac{ds_3}{dt}\right) - u\,(T - T_c) \qquad (6)

where the cooling rate is denoted as u = \rho_c c_c q_c \left[1 - e^{-UA/(\rho_c c_c q_c)}\right]/(V \rho c_p), T_c is the coolant temperature, and the heats of reaction ΔH_i can be obtained from the literature (Gee and Ramirez, 1988; Wang and Jing, 2000).
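A minimal sketch of these state equations as an ODE right-hand side is given below; the uptake-rate functions and every numerical value are placeholders, and the actual Arrhenius-type kinetics are given by Gee and Ramirez (1988).

```python
# Minimal sketch of Eqs. (1)-(6) as a right-hand-side function; all rate
# expressions and parameter values below are illustrative placeholders.

def rhs(state, u, p):
    s1, s2, s3, x, pe, T = state
    R = [p["R1"](s1, T), p["R2"](s1, s2, T), p["R3"](s1, s2, s3, T)]
    ds = [-r * x for r in R]                                   # Eqs. (1)-(3)
    dx = -sum(Y * d for Y, d in zip(p["Yx"], ds))              # Eq. (4)
    dpe = -sum(Y * d for Y, d in zip(p["Ye"], ds))             # Eq. (5)
    dT = sum(H * d for H, d in zip(p["dH"], ds)) / p["rho_cp"] \
         - u * (T - p["Tc"])                                   # Eq. (6)
    return ds + [dx, dpe, dT]

# Placeholder kinetics and one explicit Euler step, for illustration only:
p = {"R1": lambda s1, T: 0.05 * s1, "R2": lambda s1, s2, T: 0.02 * s2,
     "R3": lambda s1, s2, s3, T: 0.01 * s3, "Yx": (0.1, 0.1, 0.1),
     "Ye": (1.9, 1.9, 1.9), "dH": (-90.0, -90.0, -90.0),
     "rho_cp": 4.0e3, "Tc": 285.0}
state = [70.0, 220.0, 40.0, 10.0, 0.0, 300.0]
state = [s + 0.01 * d for s, d in zip(state, rhs(state, u=0.5, p=p))]
```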
3. Fuzzy Goal Attainment Problem
The aim of the problem is to simultaneously determine the cooling rate that maximizes the production rate and minimizes the fermentation time. The problem is thus a multiobjective optimization problem. Many multiobjective optimization problems in the real world take place in an environment in which the decision-maker (DM) has preference goals set in advance. Such a preference design problem is referred to as a decision-making problem. Two requirements must be satisfied in a decision-making problem. The first requirement is to solve the multiobjective optimization problem to obtain the optimal decision variables and the corresponding objective function values. The second requirement is to check whether each optimal objective function value meets the pre-assigned goal and each constraint satisfies its threshold level. If any optimal objective function value does not satisfy its goal, or any constraint is violated, the DM has to trade off some goals and threshold levels, and the effort must be repeated to find another optimal solution. In this study, we introduce a fuzzy goal attainment approach, an
efficient trade-off technique, to design the beer fermentation process. By considering the imprecision or fuzziness of the DM's judgement, the objective functions can be softened into the following fuzzy versions:

\text{fuzzy} \;\max_{u(t),\,s_1(0),\,s_2(0),\,s_3(0),\,t_f} \; f_1 = p(t_f)/t_f \qquad (7)

\text{fuzzy} \;\min \; f_2 = t_f \qquad (8)
The fuzzy multiobjective optimization means that the production rate should be maximized and the fermentation time minimized, each to a reasonable extent. In real optimization problems, the total amount of supplied sugars must be restricted to some quantity. Such a restriction can be expressed as

M_{w1}\,s_1(0) + M_{w2}\,s_2(0) + M_{w3}\,s_3(0) \;\lesssim\; S_f \in [S_f^L,\, S_f^U] \qquad (9)
where M_{wk}, k = 1, 2, 3, denotes the molecular weight of glucose, maltose and maltotriose, respectively. The symbol '≲' in (9) denotes a fuzzy version of the ordinary inequality '≤', and [S_f^L, S_f^U] is the interval boundary. The fuzzy inequality constraint means that the DM finds the design completely acceptable if the total supplied sugar is less than S_f^L, and completely unacceptable if the total supplied sugar is greater than S_f^U. When the supplied sugar lies within [S_f^L, S_f^U], the DM has some degree of satisfaction. The preference of customers is not included in the process model equations above. In this study, two simple constraints are used to maintain the desired product quality at the end of the beer fermentation process. These desired product qualities are described by keeping the final ethanol content and the total residual sugars at some level. Both qualities can be expressed as two fuzzy quality constraints of the form

M_{we}\,p(t_f)/\rho \;\cong\; P_d \in [P_d^L,\, P_d^M,\, P_d^U] \qquad (10)

\left(M_{w1}\,s_1(t_f) + M_{w2}\,s_2(t_f) + M_{w3}\,s_3(t_f)\right)/\rho \;\cong\; S_d \in [S_d^L,\, S_d^M,\, S_d^U] \qquad (11)
where M_{we} denotes the molecular weight of ethanol and ρ is the density of the broth. The symbol '≅' in (10) and (11) denotes a fuzzy version of '='. The fuzzy threshold levels P_d and S_d denote the desired weight percentages of ethanol and residual sugars, respectively. These desired threshold levels depend on the preference of customers; the preference values are 4% to 6% for ethanol and 3% to 5% for sugar. In the optimization problem, the decision variables and the temperature are restricted within physically realistic boundaries as

u_{\min} \le u(t) \le u_{\max} \qquad (12)

t_{f,\min} \le t_f \le t_{f,\max} \qquad (13)

T_{\min} \le T(t) \le T_{\max} \qquad (14)

The state variables must be positive for all time, as restricted in the form

s_k(t) \ge 0, \quad k = 1, 2, 3 \qquad (15)
Having elicited membership functions from the DM for each of the fuzzy objective functions and each of the constraints (Sakawa, 1993), the fuzzy multiobjective optimization problem can be converted into the fuzzy goal attainment problem of the form

\min_{u(t),\,s_1(0),\,s_2(0),\,s_3(0),\,t_f \,\in\, \Omega} \;\left\{\max_{k}\left[\bar{\mu}_k - \mu_k\right] + \xi\sum_{k}\left(\bar{\mu}_k - \mu_k\right)\right\} \qquad (16)

where k runs over the fuzzy objective functions and constraints, μ_k are the attained membership grades, μ̄_k are the reference membership levels, the search domain Ω consists of the hard constraints of the problem, and ξ is a sufficiently small positive scalar. The second term in (16) is used to avoid the need for a separate test of Pareto optimality. In the minimax sense, the Pareto solution obtained is the one nearest to the requirements, or better, if the reference membership levels are attainable.
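For illustration, the sketch below evaluates an Eq. 16-style augmented minimax objective from membership grades. The exponential membership shape and all numerical values are illustrative assumptions; in the paper, the membership functions are elicited from the DM following Sakawa (1993).

```python
import math

def mu_exp(f, f_worst, f_best, shape=1.0):
    """Exponential membership: 0 at the worst acceptable value, 1 at the best."""
    z = (f - f_worst) / (f_best - f_worst)   # normalized degree of attainment
    z = min(max(z, 0.0), 1.0)
    return (1.0 - math.exp(-shape * z)) / (1.0 - math.exp(-shape))

def goal_attainment(mu, mu_ref, xi=1e-4):
    """Minimax deviation from the reference levels plus the small augmentation
    term of Eq. 16 that ensures Pareto optimality of the minimizer."""
    dev = [r - m for r, m in zip(mu_ref, mu)]
    return max(dev) + xi * sum(dev)

# Two hypothetical memberships (production rate and fermentation time):
mu = [mu_exp(6.57, 4.5, 7.0), mu_exp(155.6, 180.0, 120.0)]
print(goal_attainment(mu, mu_ref=[1.0, 1.0]))
```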
4. Results and Discussion
To solve the fuzzy dynamic optimization problem efficiently, a control parameterization method is employed to represent the cooling rate as a finite set of control actions. A violent control action is unacceptable in a practical fermentation process. To avoid such violent actions, the variation of the decision parameters for the cooling rate is restricted within a bounded region in this study. The additional constraint is therefore expressed by:

\left|u(j+1) - u(j)\right| \le 0.25\,(u_{\max} - u_{\min}), \quad j = 1, \dots, N_u - 1 \qquad (17)
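A minimal sketch of checking the smoothness constraint (17) on a piecewise-constant cooling-rate profile is given below; the profile values are hypothetical, and the 25%-of-range limit follows the reconstruction above.

```python
# Total violation of |u[j+1] - u[j]| <= 0.25*(u_max - u_min) over a profile.

def violates_eq17(u, u_min, u_max):
    limit = 0.25 * (u_max - u_min)
    return sum(max(abs(b - a) - limit, 0.0) for a, b in zip(u, u[1:]))

u_profile = [0.10, 0.25, 0.55, 0.60]   # hypothetical 4-piece profile
print(violates_eq17(u_profile, u_min=0.0, u_max=1.0))  # 0.05: one jump of 0.30
```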
Hybrid differential evolution (HDE) is now applied to solve the fuzzy goal attainment problem (16). The cooling rate was approximated by 15 time partitions. The decision parameters included 15 control parameters, the fermentation time and three supplied sugar concentrations. In this work, the setting parameters used in HDE were as follows: a crossover factor of 0.5, and two tolerances for population diversity and gene diversity, ε1 = ε2 = 0.01. The problem involves some hard inequality constraints, as expressed in (12) to (15) and (17). To inspect the feasibility of a solution, we define the sum of constraint violations as SCV = Σ_j max{g_j, 0}, where the g_j ≤ 0 are the hard constraints; an optimal solution with a smaller SCV is a more feasible point for the problem. To solve the fuzzy goal attainment problem, we use exponential membership functions to express the fuzzy preference for each of the objective functions and constraints. We need to assign two boundary values for each fuzzy objective function and inequality constraint, and three values for each fuzzy equality constraint. Table 1 shows the best solutions to the fuzzy goal attainment problem with various boundary values. In the first run, we supplied a total amount of sugars of 128.6 kg/m³ and retained the sugars at 3.704% and the ethanol at 4.704%, obtaining the maximum production rate, 6.573 mol/m³·h, and the minimum time, 155.56 h. The production rate was 89.17% satisfactory; however, the other goals were about 52.9% satisfactory, so the best decision grade μ_D was 0.4708. In the second run, we set broader lower and upper boundary values for the equality constraints, so that the fermentation time, supplied sugars and final ethanol were achieved at 64.47% satisfaction. In the third and fourth runs, the initial concentrations of glucose, maltose and maltotriose were specified, so that the total supplied sugars were fixed at 125.1 kg/m³ when solving the fuzzy goal attainment problem. The computational results are shown in Table 1 for comparison. In the first and second runs, the initial concentrations of glucose, maltose and maltotriose were considered as decision
variables in the fuzzy optimization problem, so that higher satisfaction could be obtained than in the third and fourth runs.

Table 1. Fuzzy goal attainment problem using various threshold levels.

Item                        Run 1            Run 2            Run 3            Run 4
[f1L, f1U], mol/m³·h        [4.5, 7.0]       [4.5, 7.0]       [4.5, 7.0]       [4.5, 7.0]
[f2L, f2U], h               [120.0, 180.0]   [120.0, 180.0]   [120.0, 180.0]   [120.0, 180.0]
[SfL, SfU], kg/m³           [106.2, 144.0]   [106.2, 144.0]   [125.1, 125.1]   [125.1, 125.1]
[PdL, PdM, PdU], %          [4.5, 5.0, 5.5]  [4.0, 5.0, 6.0]  [4.5, 5.0, 5.5]  [4.0, 5.0, 6.0]
[SdL, SdM, SdU], %          [3.5, 4.0, 4.5]  [3.0, 4.0, 5.0]  [3.5, 4.0, 4.5]  [3.0, 4.0, 5.0]
f1*, mol/m³·h               6.573            6.618            6.165            6.607
f2*, h                      155.56           148.6            161.68           148.75
f3*, kg/m³                  128.6            124.22           125.1*           125.1*
f4*, %                      4.704            4.523            4.585            4.521
f5*, %                      3.704            3.597            3.585            3.710
μ1*(f1*)                    0.8917           0.9038           0.7693           0.9009
μ2*(f2*)                    0.5293           0.6447           0.4162           0.6422
μ3*(f3*)                    0.5292           0.6447           1*               1*
μ4*(f4*)                    0.5293           0.6447           0.2485           0.6422
μ5*(f5*)                    0.5293           0.7112           0.2485           0.8044
μD*                         0.4708           0.3553           0.7515           0.3578
SCV                         7.355E-11        7.132E-7         2.606E-10        6.076E-9
* The initial concentrations of glucose, maltose and maltotriose are specified in the fuzzy goal attainment problem.

To illustrate the advantage of the proposed fuzzy approach, we also applied the weighted sum method to solve the crisp optimization problem. The problem was considered as a decision-making problem in which the DM would like to utilize a limited total amount of supplied sugars to simultaneously maximize the production rate f1 and minimize the fermentation time f2. The units of the objective functions in this problem differ, so it is difficult to assign a suitable and physically meaningful weighting factor to each of them. In this computation, we disregarded the unit problem and assigned a weight of 50% to each objective function in the weighted sum problem. In addition, we were unable to obtain a feasible solution with the crisp equality constraints as written in (10) and (11); both constraints were therefore expressed as simple inequality constraints. Table 2 shows the various boundary values used for the crisp optimization problem. In the first and second runs, we supplied the upper boundary value for the total amount of sugars and obtained the optimal final ethanol at its lower boundary value for each run, as observed from Table 2. In the third and fourth runs, for which we set a broader upper boundary value for the supplied sugars, we obtained the optimal final
ethanol at its lower boundary value and the residual sugar at its upper boundary value. A fuzzy optimization problem, in general, requires trade-off computations to be performed; the weighted sum method, however, falls short in performing such procedures.

Table 2. The crisp multiobjective optimization problem using the weighted sum method.

Item                     Run 1             Run 2             Run 3             Run 4
SfL ≤ Sf ≤ SfU, kg/m³    0 ≤ Sf ≤ 125.1    0 ≤ Sf ≤ 125.1    0 ≤ Sf ≤ 144.0    0 ≤ Sf ≤ 144.0
PdL, %                   4.5               4.0               4.5               4.0
SdL ≤ Sd ≤ SdU, %        3.5 ≤ Sd ≤ 4.5    3.0 ≤ Sd ≤ 5.0    3.5 ≤ Sd ≤ 4.5    3.0 ≤ Sd ≤ 5.0
f1*, mol/m³·h            6.672             7.054             7.304             7.392
f2*, h                   146.63            123.26            133.94            120.0
f3*, kg/m³               125.1             125.1             132.8             129.7
f4*, %                   4.5               4.0               4.5               4.0
f5*, %                   3.75              4.716             4.5               5.0
SCV                      4.049E-4          1.142E-3          2.644E-4          9.06E-5

5. Conclusion
In this study, we applied the fuzzy goal attainment method to design a fuzzy multiobjective optimization problem for beer fermentation. The designer can use trade-off procedures to obtain a satisfactory solution. For comparison, the weighted sum method was also applied to solve the crisp optimization problem. The weighted sum method was ill-suited to trade-off computations, because it is hard to assign a suitable and physically meaningful weighting factor to each of the objective functions, and it is unable to manage fuzzy constraints.
6. References
Bellman, R.E. and Zadeh, L.A., 1970, Decision-making in a fuzzy environment, Management Science, 17, B-141.
Chiou, J.P. and Wang, F.S., 1999, Hybrid method of evolutionary algorithms for static and dynamic optimization problems with application to a fed-batch fermentation process, Computers and Chemical Engineering, 23, 1277.
Gee, D.A. and Ramirez, W.F., 1988, Optimal temperature control for batch beer fermentation, Biotechnology and Bioengineering, 31, 224.
Sakawa, M., 1993, Fuzzy Sets and Interactive Multiobjective Optimization, Plenum Press, New York, USA.
Wang, F.S. and Jing, C.H., 2000, Application of hybrid differential evolution to fuzzy dynamic optimization of a batch beer fermentation, J. Chin. Inst. Chem. Engrs, 31, 443.
7. Acknowledgement
This work was supported by the National Science Council of the Republic of China under Grant NSC90-2214-E194-001.
Application of Multiobjective Optimization in the Design of Chiral Drug Separators based on SMB Technology
Faldy Wongso, K. Hidajat and Ajay K. Ray*
Department of Chemical and Environmental Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260
Abstract
The multiobjective optimization of the design of continuous countercurrent separation units based on simulated moving bed (SMB) technology is considered. In addition, the Varicol system, which is based on a non-synchronous shift of the inlet and outlet ports instead of the synchronous one used in SMB technology, is also considered. The optimization of an SMB unit is quite complex, as it includes a relatively large number of decision variables, both continuous variables, such as flow rates and lengths, and discontinuous ones, such as column number and configuration. Moreover, it is important to formulate the optimization problem as multiobjective, since the factors affecting the economy of a chiral drug separation process are multiple and often in conflict with each other. A typical example is the simultaneous maximization of the productivity of both the raffinate and extract streams at a specified purity. An adaptation of a genetic algorithm has been used to optimize the SMB and Varicol systems using a literature-available model validated with experimental results for a chiral separation. This paper compares the optimal separation performance that can be achieved with the SMB and Varicol technologies.
1. Introduction
Process industries aim at maximizing their production capacities while simultaneously maintaining or improving product quality. This is particularly true in chiral drug separation using simulated moving bed (SMB) systems, where the purities of the products are crucial and have to satisfy relatively narrow specifications. The annual sales of the chiral drug industry reached US$150 billion in 2001. These compounds now represent close to one-third of all drug sales worldwide. Unfortunately, it is not unusual to find that one isomeric form of a chiral drug compound has a therapeutic effect on the human body, and is therefore an effective medication, while its enantiomer is inactive or even harmful. For example, the S-isomer of Penicillamine is an effective drug for arthritis, while the R-isomer is highly toxic. When one isomer of a chiral compound is 'good' and the other 'bad', there is an obvious benefit in separating the two enantiomers to enhance safety and tolerability. Simulated Moving Bed (SMB) systems (Broughton and Gerhold, 1961) are used for separations that are either impossible or difficult using traditional separation techniques. SMB has become one of the most popular techniques, finding application in the petrochemical and sugar industries, and of late there has been a drastically increased interest in SMB in the pharmaceutical industry for enantio-separations. Recently, a modification of SMB technology, named the Varicol process (Ludemann-Hombourger et al., 2000), was reported, which is based on a non-synchronous shift of the inlet and outlet ports. During one global switching period, there are different column configurations for
* Corresponding author: Fax: +65 6779 1936, Email: [email protected]
(sub)-time intervals due to local switching. As a result, the Varicol process can have several column configurations, which endow it with more flexibility compared to the SMB process. The SMB process can be regarded as the most rigid, special case of the more flexible Varicol process, which adds no additional fixed cost. In this article, multiobjective optimization (Bhaskar et al., 2000) is used in the design of SMB and Varicol processes for chiral drug separation. The optimization problem is complicated by the relatively large number of decision variables, including continuous variables, such as flow rates and lengths, as well as discontinuous ones, such as column number and configuration. In addition, it is important to formulate the optimization problem as multiobjective, since the factors affecting the economy of a given separation process are multiple and often in conflict with each other. A typical example is the maximization of the productivity of both the raffinate and extract streams with the purity of both streams greater than a specified value. Multiobjective optimization problems were solved for both existing set-ups and the design stage of SMB and Varicol systems. Pareto optimal solutions were obtained in all cases; moreover, it was found that the performance of SMB (and Varicol) could be improved significantly under optimal operating conditions. A new optimization procedure based on a genetic algorithm is used to solve the optimization problems. An experimentally verified chiral separation model available in the literature has been used to optimize the SMB and Varicol systems. This also offered a unique opportunity to compare the optimal separation performance that can be achieved with the SMB and Varicol technologies.
2. Optimization of SMB and Varicol Processes
Recently, Ludemann-Hombourger et al. (2002) reported experimental results for both SMB and Varicol systems for the separation of the optical isomers of the SB-553261 racemic mixture. We have used their model without any modifications.

Table 1. Formulation of the optimization problems solved in this study.

Case 1 (SMB, Varicol). Objective: Max QF. Constraints: PurR > X ± d; PurE > X ± d; X = 99%, d = 0.4%. Decision variables: 0.3 < QF < 0.7; 1 < QR < 10; 0.4 < ts < …; ω. Fixed variables: QI = 17.49; QD = 9.78; Lcol = 8.1; Ncol = 5.

Case 2 (SMB, Varicol). Objectives: Max PrR; Max PrE. Constraints: PurR > X ± d; PurE > X ± d; X = 90%, d = 0.4%. Decision variables: QD; QR; ts; Lcol; ω. Fixed variables: QI = 17.49; QF = 0.3; Ncol = 4 or 5.

Case 3 (SMB, Varicol). Objectives: Max PrR; Max PrE; Min Lcol. Constraints: PurR > 90%; PurE > 90%. Decision variables: QD; QR; ts; Lcol; ω. Fixed variables: QI = 17.49; QF = 0.3; Ncol = 4 or 5.

2.1. Single-objective optimization of the SMB and Varicol processes
First, a single-objective optimization (case 1 in Table 1) is carried out to determine whether improvement over the experimentally reported (reference) results is possible. The objective function chosen is the maximization of the feed flow rate (QF), subject to target purities of both the extract (PurE) and raffinate (PurR) streams greater than 99%, using four decision variables, namely the raffinate flow rate (QR), feed flow rate (QF), switching time
(ts) and column configuration (ω), keeping all other variables fixed at the reference values. In an SMB system there is only one column configuration, which is fixed for the entire duration of operation. However, in a 4-subinterval 4- or 5-column Varicol process there exist many possible configurations, some of which are shown in Table 2.

Table 2. Optimum column configurations.

Ncol = 4              Ncol = 5
ω   Configuration     ω   Configuration
A   1/1/1/1           D   1/1/2/1
B   1/2/1/0           E   1/2/1/1
C   2/1/1/0           F   2/1/1/1
Table 3 compares the optimum results obtained with the Simple Genetic Algorithm (SGA), when the feed flow rate was maximized, with the results of Ludemann-Hombourger et al. (2002) obtained by simulation-optimization (reference run). It is seen that the optimization leads to a larger feed flow rate, QF = 0.462 and 0.509 ml/min for a 5-column SMB and a 5-column Varicol, respectively, compared to the experimental value of 0.3 ml/min. Moreover, the product purity and recovery of both the raffinate and extract streams, the productivity and the desorbent consumption per unit product are all improved (Table 3). The notation ω = E for an SMB process indicates the column configuration 1/2/1/1, whereas for a 4-subinterval Varicol process ω = D/E/D/F indicates that the sequence of column configurations D-E-D-F was used within the 4-subinterval global switching period. In terms of time-averaged column lengths, this corresponds to the configuration 1.25/1.25/1.5/1.0. These comparisons for single-objective optimization problems show the reliability and efficiency of the SGA in finding optimal operating conditions, which compare well with previous literature results and actually lead to improved values of the objective functions. The unique capabilities and superiority of the SGA appear clearly later, when multiobjective optimization problems are considered. It is also confirmed that the Varicol process shows improvements over SMB operation.

Table 3. Single-objective optimization results as described in case 1.

Process parameters                    Reference (5 Varicol)   This work (5 SMB)   This work (5 Varicol)
Fixed parameters
QI (ml/min)                           17.49                   17.49               17.49
QD (ml/min)                           9.78                    9.78                9.78
Lcol (cm)                             8.1                     8.1                 8.1
Optimized parameters
QF (ml/min)                           0.3                     0.462               0.509
QR (ml/min)                           2.49                    2.048               4.957
ts (min)                              0.925                   0.914               0.739
Column configuration, ω               0.95/1.85/1.5/0.7       E                   D/E/D/F
Calculated parameters
QE (ml/min)                           7.59                    8.194               5.333
Raffinate purity (%)                  99.70                   99.97               99.94
Extract purity (%)                    96.80                   99.17               99.28
Productivity (kg-prod/kg-CSP/day)     0.725                   1.149               1.271
Raffinate recovery (%)                96.8                    100                 100
Extract recovery (%)                  99.9                    98.65               98.71
Desorbent consumption (m³/kg-prod)    1.050                   0.664               0.6…
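The time-averaged configuration quoted above (1.25/1.25/1.5/1.0 for ω = D/E/D/F) follows from simple averaging over the four equal subintervals, as the minimal sketch below verifies using the zone configurations of Table 2.

```python
# Average the number of columns per zone over the equal subintervals of a
# Varicol switching period, using the configurations of Table 2.

CONFIG = {"D": (1, 1, 2, 1), "E": (1, 2, 1, 1), "F": (2, 1, 1, 1)}

def average_configuration(sequence):
    zones = zip(*(CONFIG[c] for c in sequence))
    return [sum(z) / len(sequence) for z in zones]

print(average_configuration(["D", "E", "D", "F"]))  # [1.25, 1.25, 1.5, 1.0]
```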
2.2. Multiobjective optimization of the SMB and Varicol processes
Quite a few studies have been reported on the design and optimization of SMB, but they involve only single-objective optimization. Multiobjective optimization problems are, however, more realistic. In multi-criterion optimization, instead of trying to find the best (unique, global) design solution, the goal is to obtain a set of equally good non-dominated solutions, known as Pareto optimal solutions. In a set of Pareto solutions, no solution can be considered better than any other with respect to all objective functions. Numerous multiobjective optimization problems can be formulated; Table 1 describes the two-objective (case 2) and three-objective (case 3) optimization problems considered in this work.
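The non-domination test underlying such a Pareto set can be stated compactly; the sketch below filters a few hypothetical (PrR, PrE) points, with both objectives maximized.

```python
# Keep only the non-dominated points of a two-objective maximization problem.

def pareto_front(points):
    """Return the non-dominated subset of (PrR, PrE) pairs (both maximized)."""
    def dominated(p, q):
        # q dominates p if it is at least as good in both objectives and not equal
        return all(qi >= pi for pi, qi in zip(p, q)) and q != p
    return [p for p in points if not any(dominated(p, q) for q in points)]

pts = [(7.00, 6.65), (7.05, 6.50), (6.95, 6.55), (7.02, 6.60)]  # hypothetical
print(pareto_front(pts))  # keeps three points; (6.95, 6.55) is dominated
```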
Figure 2. Pareto optimal solutions and plots of decision variables (Case 2) for the SMB and Varicol systems. [Six panels, (a)-(f); the recoverable axis labels are PrE, Lcol, QD, ts and PurR, each plotted against PrR; series: 5-column Varicol, 5-column SMB, 4-column Varicol and the experimental point.]

In the two-objective optimization, the productivities of both the raffinate and extract streams are maximized at the design stage (Lcol is treated as a decision variable), in contrast to the existing set-up considered in Case 1.
QI (related to the maximum pressure drop in the system) and QF (the throughput) were kept fixed at the reference experimental values, while the desorbent flow rate (QD) and the raffinate flow rate (QR) were chosen as decision variables along with the switching time (ts), the length of each column (Lcol) and the column configuration (ω). Optimal results were obtained for the 4-column Varicol and for the 5-column SMB and Varicol, and were compared with the experimental results for the 5-column Varicol.

Figure 3. Pareto optimal solutions and plots of decision variables (Case 3) for the SMB and Varicol systems. [Six panels, (a)-(f); the recoverable axis labels are PrE, Lcol, QD, ts and PurE, each plotted against PrR; series: 5-column Varicol, 5-column SMB (E), 5-column SMB (F), 4-column Varicol and the experimental point.]

Figure 2a compares the Pareto-optimal solutions obtained when the productivities of both streams are maximized, for the 4-column Varicol and the 5-column SMB and Varicol, with the reported experimental results for the 5-column Varicol. The figure clearly shows the benefit of multi-objective optimization, as it provides a wide range of operating points to choose from. It also reveals that the optimal solutions are better than the experimental point and that the
5-column Varicol offers the most room for improvement, as indicated by the size of its Pareto set, followed by the 5-column SMB and the 4-column Varicol. Each point on the Pareto set corresponds to a set of decision variables, shown in Figures 2b-e. The optimum configuration for the 4-column Varicol in the Pareto set is ω = A/A/B/C (see Table 2). The results show that the experiment was performed close to the optimum range suggested in this study for the 4-column Varicol. The optimum column configuration for the 5-column SMB is ω = E, similar to the experimental column configuration. A significant improvement in going from the 4-column Varicol to the 5-column SMB is observed, especially in the raffinate productivity. Improved productivity of both streams is achieved with the 5-column Varicol with ω = D/E/F/F, indicating that an additional column is needed in the feed section during the early sub-switching periods, while one extra column is needed in the purging section during the later stage. In the separation of chiral drugs in SMB units, the chiral stationary phase (CSP) used is very expensive, so it is also important to minimize the total volume of CSP required. Hence, in Case 3 the minimization of the column length was used as the third objective. The Pareto-optimal sets are shown in Figures 3a and 3b. Figure 3a shows a trend similar to that observed before, but Figure 3b clearly shows that the length of each column can be reduced significantly, thereby reducing the total volume of CSP. Figure 3a also reveals that the 5-column SMB can outperform the 4-column Varicol in raffinate productivity, though not in extract productivity, which was not obvious in Case 2. The decision variables show trends similar to those of Case 2, and the purity of the raffinate stream is always greater than 99%.
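Handling the mixed senses of Case 3 (PrR and PrE maximized, Lcol minimized) requires no new machinery: negating the minimized objective reduces it to the maximization form used in the dominance sketch of Section 2.2. The values below are invented for illustration only:

def dominates(a, b):
    """Dominance test with all objectives expressed as maximized."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def to_max_form(prr, pre, lcol):
    """Map a Case-3 objective vector to pure maximization form (negate Lcol)."""
    return (prr, pre, -lcol)

a = to_max_form(7.00, 6.50, 6.0)   # shorter columns, same productivities
b = to_max_form(7.00, 6.50, 8.1)   # reference-length columns
print(dominates(a, b))             # True: a matches b's productivities
                                   # while using less CSP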
3. Conclusions
In this work, a systematic study of the optimal operation of the SMB and Varicol processes is presented for the separation of a mixture of optical isomers. Multi-objective optimization problems were solved using NSGA, and Pareto-optimal curves were obtained for both the SMB and Varicol systems. The optimization results show that significant improvements can be made. These results help in enhancing the performance of the existing set-up and also serve as an important tool in designing a new set-up.
4. Nomenclature
Lcol   Length of column, cm
Ncol   Number of columns
PrE    Daily extract productivity, g product/day
PrR    Daily raffinate productivity, g product/day
PurE   Purity of extract stream, %
PurR   Purity of raffinate stream, %
Q      Flow rate, ml/min
ts     Switching time, min
ω      Column configuration
5. References
Bhaskar, V., Gupta, S.K. and Ray, A.K., 2000, Reviews in Chem. Engg., 16, 1.
Broughton, D.B. and Gerhold, C.O., 1961, US Patent 2,985,589.
Charton, F. and Nicoud, R.M., 1995, J. Chromatography A, 702, 97.
Ludemann-Hombourger, O., Nicoud, R.M. and Bailly, M., 2000, Sep. Sci. Technol., 35, 1829.
Ludemann-Hombourger, O., Pigorini, G., Nicoud, R.M., Ross, D.S. and Terfloth, G., 2002, J. Chromatography A, 947, 59.